California Law to Test Crackdown on AI Election Deepfakes

California has taken a significant step in combating election-related deepfakes, becoming one of the first states to implement stringent laws targeting the use of artificial intelligence in political advertisements as the nation gears up for the 2024 elections. Governor Gavin Newsom recently signed three groundbreaking proposals during an artificial intelligence conference in San Francisco.

The new legislation comes amid growing concern over the risk of election disinformation fueled by artificial intelligence. California’s laws are notable for their expansive scope: they address not only false content about political candidates but also misinformation about election workers, voting machines, and overall election integrity.

Of the three laws Newsom signed, only one takes effect immediately, aimed at curbing the spread of deepfakes before the upcoming election. It prohibits the creation and publication of deceptive election-related materials during a window beginning 120 days before Election Day and extending 60 days after it. The law also empowers courts to halt the distribution of such materials and imposes civil penalties on violators, while carving out exemptions for parody and satire.

Newsom and state lawmakers have underscored the importance of this legislation in safeguarding public trust in U.S. elections against the backdrop of a particularly charged political atmosphere. However, the new laws have drawn sharp criticism from advocates for free speech and operators of social media platforms.

Elon Musk, the owner of the platform X, has called the law unconstitutional, asserting that it violates the First Amendment. Shortly after the laws were enacted, Musk shared an AI-altered video of Vice President Kamala Harris, which sparked renewed discussions on the implications of the new legislation.

Musk’s post, which described the video as a parody, highlighted the contentious nature of the law. He emphasized that this type of content could now be considered illegal under California’s new regulations, further igniting debates over the balance between combating misinformation and protecting freedom of expression.

However, experts question the practical effectiveness of the legislation in stopping deepfakes. Ilana Beller from Public Citizen, a nonprofit that tracks election deepfake legislation, pointed out that these laws have not yet been challenged in court. There is concern about the courts’ ability to quickly take action against rapidly disseminating fake content.

As Beller noted, courts may not act swiftly enough to prevent harm once a fake image or video has been shared, allowing significant damage to candidates or the electoral process. Even if a court ordered a halt to the spread of misleading content, resolution could take considerable time, by which point the misinformation may have already shaped public perception.

“In an ideal world, we’d be able to take the content down the second it goes up,” Beller said, emphasizing the need for rapid response to effectively mitigate the spread of harmful disinformation.

Despite these concerns, proponents of the legislation believe that having such laws in place could deter potential offenders who might otherwise engage in the spread of deceptive political materials. This deterrent effect could be crucial in the fight against election-related misinformation.

As for Musk’s provocative post, Newsom’s office has yet to respond to inquiries regarding whether it violates the newly enacted law. Assemblymember Gail Pellerin, who spearheaded the legislation, was not available for immediate comment following the announcement.

The package also includes two other laws that build on California’s 2019 efforts to tackle election deepfakes. They will require political campaigns to disclose the use of AI-generated materials and mandate that online platforms such as X remove deceptive content, though both measures take effect next year, after the 2024 election.

Source: AP