Two lawmakers have reintroduced a bill that would make the nonconsensual sharing of digitally altered pornographic images a federal crime.
The legislators’ renewed push on Tuesday to pass the “Preventing Deepfakes of Intimate Images Act” was led by Rep. Joseph Morelle, according to The Wall Street Journal.
The Democratic New York congressman first authored the act in May 2023, saying in a press release at the time that it was created “to protect the right to privacy online amid a rise of artificial intelligence and digitally-manipulated content.”
Morelle’s renewed push for the “Preventing Deepfakes of Intimate Images Act” now includes a co-sponsor, Rep. Tom Kean, a Republican from New Jersey, The Journal reported.
The congressmen’s bill comes in the wake of an incident at Westfield High School in New Jersey, where AI-generated pornographic images of female students were circulated by male classmates without their consent.
On Oct. 20, Westfield High School Principal Mary Asfendis confirmed the troubling event to the parents of each of the school’s roughly 1,900 students after girls reported the photos to school administrators.
“This is a very serious incident,” Asfendis wrote. “New technologies have made it possible to falsify images and students need to know the impact and damage those actions can cause to others.”
The reintroduced legislation was referred to the House Committee on the Judiciary, which has yet to decide whether to advance the bill.
Kean has also been outspoken about putting guardrails around AI, introducing the AI Labeling Act of 2023 in November, which would require creators to disclose whether images or written content were created using AI.
Aside from making the sharing of digitally altered intimate images — also known as “deepfakes” — a criminal offense, Morelle and Kean’s proposed legislation would also allow victims to sue offenders in civil court.
Representatives for Morelle and Kean did not immediately respond to The Post’s request for comment.
More than 30 female students at Westfield High School allegedly fell victim to the deepfake scheme, though it wasn’t immediately clear how many students were involved in creating the fake nude images, or if any disciplinary action had been taken.
At a high school in Miami, however, two boys were suspended for creating and spreading images so disturbing that several victims did not want to return to class.
The perpetrators obtained headshots of the students — both male and female — from the school’s social media account and used an AI deepfake app to create the nude images that were then shared to social media.
According to visual threat intelligence company Sensity, more than 90% of deepfake images are pornographic.
Another concerning report, from University College London, found that humans are unable to detect over a quarter of deepfake speech samples generated by AI.
The UCL researchers warned that deepfake technology is only getting stronger, as the latest pre-trained algorithms “can recreate a person’s voice using just a 3-second clip of them speaking.”
In an example of how convincing the technology can be, several Taylor Swift fans were reportedly scammed out of hundreds of dollars earlier this month after tricksters released advertisements employing AI-generated video of the Grammy winner peddling Le Creuset cookware in an attempt to steal money and data.
The ads — which can be found across all social media platforms — show Swift, 34, standing next to a Le Creuset Dutch oven, which, according to the company’s website, runs anywhere from $180 to $750 depending on size and style.
Earlier this year, other deepfake images of Pope Francis in a Balenciaga puffer jacket and Donald Trump resisting arrest also took the internet by storm.