New Jersey legislators are looking to put stronger laws around the creation and use of video and audio deepfakes, an issue that state lawmakers have been aggressively pursuing for the past couple of years as national efforts by Congress to rein in the AI-based threats have stalled.
A bill that would criminalize the creation and distribution of deepfakes – lifelike hoax photos, videos, and audio created using machine learning algorithms – unanimously passed New Jersey’s Senate Budget and Appropriations Committee late last month and is now on its way to the full state Senate.
The bipartisan bill, sponsored by Democratic Senator Paul Moriarty and a Republican counterpart, Kristin Corrado, would make it a crime to produce or share audio or video deepfakes used for illegal purposes. If convicted under the law, a person would face three to five years in prison and a fine of up to $30,000.
The bill – whose designation was changed from S-2544 to A-3540 – makes some free speech exceptions for such entities as media companies, technology firms, and advertisers, though it leans heavily on ensuring disclosure when possible.
Corrado in a statement after the vote said that New Jersey lawmakers have an “ethical responsibility to uphold transparency in the digital age. This bipartisan legislation would take a crucial step forward to protect individuals from the damaging effects of deceptive AI generated media by holding bad actors accountable.”
Deepfakes can be used for a host of nefarious purposes, from spreading disinformation and running scams to nonconsensual pornography, election manipulation, and identity theft.
It continues to be an issue in Washington, D.C. U.S. Senator Jeanne Shaheen, D-NH, in December urged the Justice Department (DOJ) and the Cybersecurity and Infrastructure Security Agency (CISA) to address AI-generated content and its threat to democratic processes, saying that "future uses of deepfake technology will be harder to detect and fend off as AI technology progresses."
U.S. agencies have also taken action. The National Institute of Standards and Technology (NIST) last year launched a program addressing generative AI that included plans to help create systems and software for detecting AI-created fake visual or audio content. Other agencies, including CISA, the FBI, the Defense Department (DoD), and the National Security Agency (NSA), have issued warnings about the threats that deepfakes pose.
There is also action at the state level in places like California, where one law requires online outlets to take down deceptive AI-created content and use watermarks or other techniques to enable people to detect deepfakes. Another makes it illegal to create and distribute sexually explicit deepfakes of a real person with the aim of doing harm.
New Jersey lawmakers have been addressing deepfakes on multiple fronts, including introducing a bill that would make it illegal to knowingly create and distribute deepfakes in the 90 days before an election. With the latest bill, they also want to address nonconsensual pornography. Early last year, a New Jersey teen filed a lawsuit against a high school classmate for creating and distributing AI-generated pornographic images of her and others.
“The public sharing of unlawfully generated ‘deepfakes’ can be just as devastating for a victim as having a real form of media disseminated without their consent,” New Jersey Senator Corrado said. “This is especially true in cases where artificial intelligence has been used to create pornography.”
The problems are only growing. In their latest Identity Fraud Report, Entrust researchers said a deepfake attempt happens every five minutes. In addition, law enforcement solutions provider CPI OpenFox said the number of online deepfakes is doubling every six months. Access management firm Teleport said in a report in October 2024 that 52% of U.S. and UK decision-makers identified AI-generated impersonation as the most difficult attack to defend against.
Evolving generative AI tools are fueling this growth, according to Arik Atar, senior threat researcher at Radware.
“With the help of today’s technology, virtually anyone can create a passable deepfake,” Atar wrote in an email from the vendor about rising threats this year. “All that is required is a consumer-grade computer or smartphone and an internet connection. Without question, we are fast approaching an era where audiovisual content is no longer inherently trustworthy. As synthetic media becomes more widespread in 2025, video and audio will lose their inherent credibility.”
For businesses, this means relying less on audio-visual content and more on cryptographically secure channels, and establishing explicit policies about permissible and prohibited uses of the technology, he said.
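The "cryptographically secure channels" recommendation boils down to authenticating the request itself rather than trusting a voice or face. A minimal sketch of that idea in Python, using HMAC message authentication from the standard library (the key, message, and function names here are illustrative assumptions, not part of any vendor's product mentioned in this article):

```python
import hashlib
import hmac

# Illustrative shared secret; in practice it would be provisioned out of band,
# never embedded in source code.
SECRET_KEY = b"example-shared-secret"

def sign(message: bytes, key: bytes = SECRET_KEY) -> str:
    """Return a hex HMAC-SHA256 tag authenticating the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message, key), tag)

# A sensitive instruction (e.g., a wire transfer request) carries a tag;
# a deepfaked voice call carries no valid tag and cannot produce one.
request = b"Transfer $10,000 to account 1234"
tag = sign(request)

assert verify(request, tag)            # authentic request passes
assert not verify(b"Transfer $99,000 to account 9999", tag)  # tampered request fails
```

The point of the sketch is the workflow, not the specific primitive: an attacker who can clone an executive's voice still cannot forge the tag without the key, so approval flows anchored to such a channel are robust to synthetic media in a way that audio or video alone is not.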