My name is Jennifer Huddleston, and I am a technology policy research fellow at the Cato Institute. My research focuses primarily on the intersection of law and technology, including issues related to speech and the governance of emerging technologies, such as artificial intelligence (AI). Therefore, I welcome the opportunity to submit a statement regarding the potential need for and effect of a government intervention into requirements around the use of AI in election-related speech.

In this statement for the record, I will focus on two key points:

  • AI is a general-purpose technology, and overly broad regulatory actions based on fear of potential misuse by bad actors are likely to have unintended consequences. Such actions could prevent numerous beneficial and benign uses of the technology, including in elections and campaigns, as well as hinder its overall development.
  • Before presuming new laws or rules are necessary, policymakers should carefully define the perceived harm they are trying to address and consider whether existing regulations already address that harm. They should also examine whether there are cases where outdated interpretations or regulations are preventing a better technological solution. This is true for the use of AI in election campaigns.

The most basic definition of AI is a computer or robot that can perform tasks associated with human intelligence and discernment. Most of us have been encountering AI for much longer than we realize in tools like autocorrect, autocomplete, chatbots, and translation software. While generative AI has garnered recent attention, the definitions of AI typically found in most policy proposals would impact far more than the use of ChatGPT.

The last few election cycles in the United States have seen rising fears about misinformation, disinformation, and deepfakes. For all the fears around technology and elections, however, we shouldn’t presume only the worst-case scenarios. Fears about the potential manipulation of new forms of media and technology have emerged before, and society has had to evolve its awareness of potential deception or manipulation of that media and redefine what makes information “real” or “true.”1 Some may bemoan the decreasing trust in media more generally, but this distrust existed before AI, and even before the creation of social media.2 This skepticism and awareness may actually become a positive when it comes to concerns about the use of AI, as it may render AI deepfakes more akin to the annoyance of spam emails and prompt greater scrutiny of certain types of content more generally.3

While the internet has increased the popularity and speed with which an individual piece of content can be shared, it has also developed its own norms around assessing the veracity of certain claims. These norms have developed without government dictates and will likely continue to develop in the face of new technologies like AI. As Jeffrey Westling, Director of Technology and Innovation Policy at the American Action Forum, writes, “These societal norms can and will continue to drive trust in video as the viewer will understand that these institutions investigated the claims beyond just what appears on screen. And to the extent that videos become more consistently faked, society will shift back towards looking at the context behind the video.”4

With all this considered, we should be cautiously optimistic: history shows a societal ability to adapt to new challenges in judging the veracity of the information put before us, and we should avoid overly broad rushes to regulate everything but the kitchen sink for fear of what could happen. While the saying that “a lie may travel halfway round the world before the truth puts on its shoes” captures a legitimate concern about how quickly a fraudulent or manipulated image may spread, a variety of non-government responses often come into play. For example, many online platforms now provide further context around certain types of media, including both election-related speech and manipulated media. The precise rules vary, allowing different platforms to come to different decisions about the same piece of material.5

It should be emphasized that not all uses of AI in election advertisements should be presumed to be manipulative or fraudulent. In fact, even when it comes to election advertising, there are beneficial and non-manipulative uses of technologies like AI. For example, AI could be used to translate an existing English-language ad into the native language of a group of voters who might not otherwise be reached, or to add subtitles to reach communities of individuals with disabilities. It could also be used to lower the costs of production and post-production, such as by removing a disruption in a shot. Even these examples involve more direct and visible interactions with AI than countless other uses, such as spell-checking a script or relying on a search engine’s algorithm to conduct research or promote an ad. These actions are not manipulative or deceptive, nor do they give rise to concerns about mis- or disinformation.

However, under many definitions, some or all of these actions would trigger requirements to label an advertisement as having used AI. Given the broad use of AI, such a “warning label” could become meaningless, as it would apply to both benign and manipulative uses. Existing law does not get tossed out the window just because a new technology appears, and actions by bad actors can already be addressed under existing FEC rules. New technologies should not change the underlying rules of the road. Similarly, potential regulations must also consider the impact on issues such as speech.

It is important to note that “truth in advertising” laws do not currently apply to political advertising.6 While it may be distasteful to some, current law does not require political ads to be truthful or factual.7 Political speech, including that contained in election advertising, is protected by the First Amendment. As a result, there are significant limits on when and how the government can intervene: any intervention must be narrowly tailored and respond to a compelling government interest. This does not mean that there is no redress if harm occurs, as existing laws, such as those governing defamation, can still apply.

As generative AI remains rather new, many societal norms around its use are still evolving; however, existing standards around online election advertising are already incorporating the use of AI into their policies. For example, Google tweaked its policy earlier this month to mandate AI disclosures in political ads.8 As with other election-advertising concerns, different platforms may settle on different specifics for these disclosures. Gradually, general norms and best practices will evolve and adapt with changing understandings, much as they have for other technologies, and far more quickly than top-down law could.

As mentioned above, it is important to remember that existing laws did not disappear with the emergence of AI. FEC rules can still apply to actions involving AI, and the recourse for violations remains the same even when new technology is involved. Many of the expressed concerns about potential AI-generated disinformation or misinformation are not unique to AI but are rather new manifestations of existing concerns. Instead of rushing to create problematic licensure regimes or regulations that are likely to become quickly outdated, agencies and Congress should clearly articulate the harm they are trying to solve and explain why it is not addressed by existing regulations. Any such regulations should be narrowly tailored to specific applications, even within an area such as election law, and not broadly applied to all uses of the technology. Policymakers should recognize that AI, like other technologies, is ultimately a tool. Like any tool, it can be used for both productive and disruptive purposes; however, many of the potential disruptive concerns are likely already addressed. Rather than rush to create new regulations, policymakers should examine whether their concerns are covered by existing regulations or are truly novel. Additionally, particularly around issues such as election advertising, policymakers must also consider the impact regulation could have on forms of AI already in use, as well as on important values such as free speech.