Regulation of Deepfakes in Elections: Proceed With Caution
- Jerry Wang
- Oct 10, 2025
- 12 min read
I. Introduction
In recent years, the advancement of artificial intelligence (AI) technology has increased productivity, driven stock markets to historic highs, and transformed the day-to-day lives of millions of people globally. However, the rise of one particular area of AI technology—deepfakes—has presented unprecedented challenges to the democratic process in the United States, raising concerns about the integrity of elections and the potential for mass misinformation (GAO Report). The Federal Election Commission's (FEC) September 2024 decision to proceed by "interpretive rulemaking" sparked further controversy on this subject as political campaigns increasingly grapple with AI-generated content (FEC Meeting Document 24-39-A).
At the heart of the debate lies a critical question: should the government regulate deepfakes right now? Despite the perceived pressing and immediate need for AI regulation, this essay argues that the government, and the FEC in particular, should not regulate or ban deepfakes in elections, because it cannot do so without imposing rules that reach protected speech. The task is better left to the legislature, the courts, and other entities more fit to address the problem. Regulation may still appropriately target narrow categories of unprotected speech, such as defamation.
II. The AI Campaign Problem: First Amendment Principles & Deepfakes
A. General First Amendment Principles, Defamation, and Fraudulent Misrepresentation as Exemptions to Free Speech
General Principles
The First Amendment of the U.S. Constitution has protected freedom of speech against government restriction for over 200 years. Commonly known as the "Free Speech Clause," the amendment protects private speech against restrictive government actions such as laws, policies, prosecutions, or compulsory requirements on speech. Speech here includes written and spoken words, mediums such as photographs and videos, editorial functions, and certain conduct that is expressive in nature. There is no requirement that speech contain a narrow, succinct, and articulable message, but messages with these traits are more likely to be found "sufficiently communicative" and protected by the First Amendment in court.
Nonetheless, there are limits to this protection. Not all forms of expressive private speech are protected. Two notable exceptions are defamation and fraud, both of which involve deceptive and untrue statements about a person or fact. These forms of speech are historically unprotected and can be penalized under civil common law or statutory law in every state. For example, under the California Civil Code, defamation is categorized as either libel or slander, written or orally expressed false statements that damage one's reputation. Both categories are treated as civil torts and allow the victim to sue for monetary damages ("Defamation").
As shown, the Free Speech Clause is limited in its effect. It is not absolute, and some speech restrictions are allowed. Even so, lawmakers need to exercise caution when regulating unprotected speech, because such regulations often draw constitutional challenges, especially when a law regulates based on content. Content-based distinctions rarely survive in court, since they often render a regulation overbroad. Once a court decides that a law or governmental action reaches protected speech, it proceeds to apply a particular level of scrutiny, or legal standard, to the case.
Strict Scrutiny
Strict scrutiny typically applies to restrictions based on the content of speech. Under strict scrutiny, the government action generally needs to be narrowly tailored to advance a compelling public interest and be the “least restrictive means” of satisfying the compelling interest. This is a very high standard of scrutiny, and the government is rarely able to meet it.
For example, in United States v. Stevens, the Supreme Court found a federal law banning depictions of animal cruelty unconstitutional in an overwhelming 8–1 decision. The Court reasoned that the law was overbroad in its definition of "animal cruelty" and in the applications flowing from it (United States v. Stevens). The same strict standard of scrutiny would apply to government regulations involving AI.
B. What Are Deepfakes?
A deepfake is fabricated content, such as an image or video, generated by deep learning AI that is extremely difficult to distinguish from real content (GAO Report). It can incorporate the likenesses, features, and voices of real public figures. At first, the technology was predominantly used against women and public figures (The Guardian). It has since found its way into the political sphere through misinformation and malicious impersonation of campaign candidates (Barclay).
Such incidents have already occurred in recent memory. Two days before Slovakia's 2023 parliamentary election, a deepfake audio clip of a politician purportedly discussing how to rig the vote was released. It went viral, and the affected candidate narrowly lost the election (Microsoft). Deepfakes were also deployed in the 2024 U.S. election against both Republicans and Democrats: a robocall imitating President Biden's voice urged voters not to vote in the primary, while a viral AI-edited photo falsely showed Secret Service agents smiling at the scene of the assassination attempt on Donald Trump, implying the attempt was staged (Barclay). These uses have had substantial consequences for elections and show the extent to which AI technology can threaten our democracy.
C. FEC and Deepfakes
Given these dangerous applications of AI, many individuals and organizations felt a dire need for action from the FEC. Some entities, including the NGO Public Citizen, petitioned the FEC for a legislative rulemaking to restrict deceptive uses of AI in the then-upcoming 2024 U.S. election.
The FEC is an independent federal agency deriving its authority from Section 309 of the Federal Election Campaign Act of 1971. It primarily regulates the "acquisition and expenditure" of campaign funds, as well as participants' compliance. Section 30124 of the Act bars the fraudulent misrepresentation of campaign authority: it makes it illegal for candidates or their agents to speak, write, or otherwise act on behalf of other candidates in a way that damages their reputation, or to fraudulently solicit funds (Federal Register).
Acting on those petitions, the FEC opted to proceed with an "interpretive rulemaking" on September 10, 2024, shortly before the U.S. presidential election. In a 5–1 decision, the FEC clarified that the statute's bar on fraudulent misrepresentation of campaign authority is technology-neutral and applies equally to uses of AI. It did not issue any outright ban on AI or provide guidelines on how misrepresentation using AI would be judged (FEC Meeting Document 24-39-A).
FEC Vice Chair Ellen Weintraub voted for the interpretive rulemaking but expressed that she would have preferred a full rulemaking. She argued that the interpretive rulemaking was vague and did not address key issues surrounding misrepresentation, such as the role of disclaimers, the factors that constitute misrepresentation, and the applicable legal tests. In her view, a full rulemaking addressing more aspects of the issue, along with Public Citizen's interpretation of AI misrepresentation, would have better protected the election process (Weintraub Statement).
Commissioner Sean Cooksey voted against the interpretive rulemaking, strongly opposing FEC intervention in this area. First, he believes the FEC simply lacks statutory authority to regulate the matter. He also believes the FEC lacks expertise on AI, and that any improper rulemaking might limit the benefits of AI in elections. He noted that other bodies, such as Congress and private industry, are already taking measures to control malicious uses of AI, and that the FEC would do better to defer to them (Cooksey Statement).
III. AI and Deepfakes Should Remain Unregulated to Maintain Freedom of Speech
A. Deepfake Regulation Can Chill Political Discourse
The FEC was correct in its decision to proceed with interpretive rulemaking. The FEC specializes in campaign finance and compliance; it simply lacks the technical expertise in AI to put forward regulations that are narrowly tailored. As Commissioner Cooksey stated, the FEC lacks "both the expertise and legal authority" to regulate such matters (Cooksey Statement). The constitutional implications make the task of regulating even more challenging for the FEC and even for some state legislatures. Attempts by these bodies risk producing an overbroad regulation that limits the benefits of AI, infringes constitutionally protected speech, and would collapse under constitutional challenge.
AI and deepfakes offer many benefits to society that regulation risks taking away. Like any technology, AI brings both harms and benefits. While it can certainly be used to spread misinformation in the political context, it can also serve legitimate purposes like campaign marketing, campaign planning, connecting with voters, education, and outreach. There have already been examples of AI being used for good: a deepfake video of David Beckham, made with his consent, raised awareness of malaria in nine different languages, and during the 2022 Brazilian elections, an initiative used deepfake technology to create videos of political candidates delivering their speeches in Brazilian Sign Language (GAO Report). The technology has vast potential to help political candidates disseminate information and raise awareness, including in underserved communities.
However, regulation of speech can jeopardize these positive applications of deepfake technology. The Supreme Court has recognized that a law that is vague, overbroad, or arbitrary can place a significant burden on protected speech, causing individuals to curtail their expression. In the case of deepfakes, such regulations can create a "chilling effect" on even legitimate uses of AI. Overbroad regulations can also be weaponized by candidates to penalize legitimate uses of AI by their opponents. This would not only stifle AI innovation in the political sphere but also limit how candidates can engage with voters. In short, too much is put at risk when the FEC or other bodies draft regulations that are often unconstitutional.
Along with impeding technological applications and innovation, overbroad regulations can infringe on constitutionally protected speech such as satire and parody. Political satire and parody are historically powerful tools for criticizing authority and exposing society's problems, and they can be created more effectively with AI technology. They are strongly protected by the First Amendment, with many Supreme Court cases upholding their legal status.
B. State Deepfake Regulation and Constitutional Challenges
Despite this, as many as twenty-six states have passed or are passing legislation that is ambiguous on the use of deepfakes for satire. For example, a Minnesota law criminalizes the dissemination of a "deep fake" prior to an election if done "with the intent to injure a candidate or influence the result of an election" (FIRE). This definition is problematic, since most political ads already serve to antagonize and injure the reputations of opposing candidates, and satire does so even more. In Hustler v. Falwell, the Supreme Court held that public figures must prove actual malice to recover damages for intentional infliction of emotional distress caused by a parody (Hustler v. Falwell). While the Minnesota law applies only to content that is "so realistic that a reasonable person would believe" it is real, it places far less burden of proof on the public figure than the Supreme Court does. Nor does it make exceptions for satire and parody, as certain other state laws do, leaving enforcement against satire a real possibility. Similar state deepfake laws pose a threat to protected forms of expression like satire and parody, and they illustrate the dangers of the overbroad regulations the FEC might issue if it regulated deepfakes in federal elections.
Lastly, a more practical problem with overbroad regulations is their unconstitutionality and consequent inability to withstand a constitutional challenge. As it stands, deepfake regulations mainly target the content of political material, and content-based restrictions will inevitably draw strict scrutiny if challenged in court. As explained previously, this standard places a burden on the government that it is rarely able to meet. Although deepfake regulations advance a compelling governmental interest, they will most likely fail the narrow-tailoring requirement. The standard is so high that even regulations on unprotected speech can be struck down. California's Defending Democracy from Deepfake Deception Act of 2024 is a case in point: it allowed any viewer of "materially deceptive" AI content to sue the creator, with no exceptions for satire or parody. A federal judge has already issued a preliminary injunction against most of the law, citing its likely unconstitutionality (EFF).
Many current state laws require only disclaimers on AI content rather than outright restrictions. Such laws, categorized as compelled speech, are still viewed skeptically by the courts, and the same problem of overbreadth is present here. States such as Florida require disclaimers on all AI-generated campaign ads and materials, even those with legitimate purposes. The scope of these laws is excessively broad and hence unconstitutional (FIRE). In short, it would take a substantial effort from the legislative branch to craft a law that respects the Constitution while effectively policing malicious AI use.
C. Counterargument
However, those who want immediate regulation point to deepfakes' ability to spread misinformation on a massive scale and to the field's rapid advancement, which they argue requires government action. On top of the incidents mentioned above, AI and deepfake technologies are evolving quickly. Deepfakes rely on several underlying AI technologies, most notably Generative Adversarial Networks (GANs). A GAN is trained on a large dataset of images, typically faces, to identify and reconstruct patterns. It pits two artificial neural networks against each other, one trying to produce a fake and the other trying to spot it (GAO Report). As more powerful computing resources have become available, artificial neural network technologies have advanced rapidly: models once trained on thousands of images can now be trained on hundreds of thousands or potentially millions. This not only makes realistic deepfakes more accessible to the general public but also makes fakes increasingly hard to spot, even for a trained eye. Proponents of regulation are understandably alarmed by this growing potential. Robert Weissman, co-president of Public Citizen, which petitioned the FEC to specifically regulate AI misrepresentation, stated that the speed at which deepfakes are developing calls for a deliberate rulemaking (The Hill).
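The adversarial dynamic described above can be caricatured in a few lines of code. This is a deliberately simplified sketch, not a real neural-network GAN: the "generator" holds a single number, the "discriminator" learns a single estimate of what "real" data looks like, and all names and constants are illustrative. The point is only to show why the two-network arms race drives fakes toward indistinguishability.

```python
REAL_VALUE = 5.0      # stand-in for the "real data" distribution (here, one point)
LEARNING_RATE = 0.1   # how far each side moves per round (illustrative constant)

disc_belief = 0.0     # discriminator's running estimate of what real data looks like
gen_output = 0.0      # generator's current fake sample

for _ in range(200):
    # Discriminator step: refine its notion of "real" toward the actual real data.
    disc_belief += LEARNING_RATE * (REAL_VALUE - disc_belief)
    # Generator step: move the fake toward whatever the discriminator accepts as real.
    gen_output += LEARNING_RATE * (disc_belief - gen_output)

# As the discriminator gets better at recognizing "real," the generator's
# output is pulled toward the same target: the fake converges on the real.
print(round(gen_output, 2))
```

In a real GAN, the same feedback loop runs over millions of image pixels with deep networks on both sides, which is why each improvement in detection tends to produce a corresponding improvement in forgery.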
While these concerns are certainly valid, proposals for bodies like the FEC to regulate now would only lead to ineffective, overbroad, and unconstitutional rules. Instead, this essay suggests that the problem be delegated to Congress, industry, and the courts.
Before passing a bill, Congress should deliberate extensively with stakeholders and consider non-legal ways to address the problem. Private entities like Microsoft have already taken the initiative by leveraging their technical expertise. Microsoft co-founded the Coalition for Content Provenance and Authenticity (C2PA) to develop a technical standard for establishing the source and history of digital content (Microsoft). The coalition aims to cryptographically attach content credentials to photos and videos spread online, adding a layer of transparency and helping people make informed decisions about the content they share or view. Meanwhile, OpenAI introduced a deepfake detector that identifies images made with its DALL·E 3 generative AI model five months before the 2024 election (TechTarget). While the detector remains available only to a private group of testers as of May 2025 and does not cover deepfakes made with all models, it is an important step in the right direction.
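The provenance idea behind C2PA can be illustrated with a simplified sketch. This is not the actual C2PA format (which uses certificate-based signatures over structured manifests); the secret key and function names here are hypothetical, and a keyed HMAC stands in for a real digital signature. The sketch only shows the core property: any edit to published content after it is credentialed breaks verification.

```python
import hashlib
import hmac

# Hypothetical publisher key; real C2PA uses certificate-based signing, not a shared secret.
SECRET_KEY = b"publisher-signing-key"

def attach_credential(content: bytes) -> dict:
    """Bundle content with a hash of itself and a keyed signature over that hash."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content": content, "digest": digest, "signature": signature}

def verify_credential(bundle: dict) -> bool:
    """Recompute the hash and signature; any tampering with the content breaks both."""
    digest = hashlib.sha256(bundle["content"]).hexdigest()
    expected = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == bundle["digest"] and hmac.compare_digest(expected, bundle["signature"])

bundle = attach_credential(b"original campaign video bytes")
print(verify_credential(bundle))   # untouched content verifies

bundle["content"] = b"doctored video bytes"  # simulate post-publication tampering
print(verify_credential(bundle))   # the edit invalidates the credential
```

A forger without the signing key cannot produce a valid credential for altered content, which is what lets viewers and platforms treat uncredentialed or broken-credential media with suspicion.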
It is imperative that Congress consider these and other ways of preserving free speech while protecting the public from deepfakes. Educational campaigns, encouraging platforms to adopt industry standards, supporting initiatives like the C2PA, and funding AI detection tools are all non-legal tools that can mitigate the harm of deepfakes without raising First Amendment issues.
If Congress decides to proceed with formal legislation, it could do so through false personation laws. Chapter 43 of Title 18 of the U.S. Code protects public trust in the government from misrepresentation. Section 912 criminalizes the impersonation of government officials, delineating two distinct offenses: impersonation coupled with acting as such, and impersonation coupled with demanding something of value (U.S. Code). To address deepfakes, Congress could amend Section 912 or add a parallel statute that explicitly covers using deepfakes to impersonate government officials. For example, Section 912 could be amended to mention impersonation through artificial intelligence and to add a third offense covering impersonation intended to influence public opinion, with exemptions for satire and parody. Since these are criminal laws, the government would have to prove beyond a reasonable doubt that the defendant falsely pretended to be a government official, a burden that makes conviction for satire or parody highly unlikely.
IV. Conclusion
In conclusion, while it is undeniable that deepfakes pose a real threat to the democratic process, regulating them is a highly nuanced and complicated task. Imposing regulations through government bodies like the FEC would likely result in overbroad regulations that are unconstitutional and hinder legitimate uses of AI. As this technology advances even further, lawmakers need to exercise extensive caution to ensure that their goals are met without infringing on First Amendment rights. Rather than rushing to impose ineffective regulations, the responsibility should fall to Congress and the courts to develop nuanced, targeted solutions that address the misuse of AI without stifling its positive applications.
Works Cited
Barclay, Aadam. "Artificial Intelligence in Political Campaigns." The Regulatory Review, 27 Nov. 2024, www.theregreview.org/2024/11/27/barclay-artificial-intelligence-in-political-campaigns/.
Cooksey, Sean. Statement on REG-2023-02. FEC, www.fec.gov/resources/cms-content/documents/Statement-re-REG-2023-02-NOD-Cooksey.pdf.
Defamation. California Civil Code §§ 45–46.
FEC Meeting Document 24-39-A. 10 Sept. 2024, www.fec.gov/resources/cms-content/documents/mtgdoc-24-39-A.pdf.
Federal Register. "Federal Election Commission." www.federalregister.gov/agencies/federal-election-commission.
FIRE. "Deepfakes, Democracy, and the Perils of Regulating New Communications Technologies." www.thefire.org/research-learn/deepfakes-democracy-and-perils-regulating-new-communications-technologies.
GAO Report. "Deepfakes: Technology, Detection, and Policy Challenges." www.gao.gov/assets/gao-20-379sp.pdf.
Hustler v. Falwell, 485 U.S. 46 (1988).
Microsoft. "Fighting Deepfakes with More Transparency About AI." news.microsoft.com/source/features/ai/fighting-deepfakes-with-more-transparency-about-ai/.
TechTarget. "OpenAI Deepfake Detector Belated but Welcome." www.techtarget.com/searchenterpriseai/news/366583843/OpenAI-deepfake-detector-belated-but-welcome.
The Guardian. "What Are Deepfakes and How Can You Spot Them?" 13 Jan. 2020, www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
The Hill. "FEC Avoids AI Rulemaking." thehill.com/policy/technology/4888687-fec-avoids-ai-rulemaking/.
U.S. Code, Title 18, Chapter 43.
United States v. Stevens, 559 U.S. 460 (2010).
Weintraub, Ellen. Statement on REG-2023-02. FEC, www.fec.gov/resources/cms-content/documents/REG-2023-02-A-in-Campaign-Ads-Vice-Chair-Statement.pdf.