Chair Mace, Ranking Member Connolly, and distinguished members of the Committee on Oversight and Accountability Subcommittee on Cybersecurity, Information Technology, and Government Innovation:

My name is Jennifer Huddleston, and I am a technology policy research fellow at the Cato Institute. My research focuses primarily on the intersection of law and technology, including issues related to the governance of emerging technologies, such as artificial intelligence (AI). Therefore, I welcome the opportunity to testify regarding the recent AI executive order issued by the Biden Administration.

In this testimony, I will focus on two key points:

  • The AI EO represents a significant shift in the U.S. approach to AI and technology policy in general;
  • The AI EO raises concerns about appropriate separation of powers at a time when Congress is debating the most sensible policy framework for AI.

The AI EO Represents a Shift Towards a Precautionary and Permissioned Approach to Technology Policy

The United States has typically taken a light-touch approach to technology regulation, particularly for general purpose technologies like AI and the internet before it. This approach has allowed, and continues to allow, the U.S. to flourish in this sector in both innovation and economic impact, as entrepreneurs and consumers, not government bureaucrats, choose the trajectory of the future. Unfortunately, the Biden Administration’s executive order on artificial intelligence does not follow this pattern.

While much of the public attention on AI emerged with the rapid popularity and adoption of consumer-facing generative AI products like ChatGPT and DALL-E in late 2022, artificial intelligence and machine learning have been around for quite some time at both the enterprise and consumer levels. Prior to the popularity of generative AI products, the average consumer was already interacting with AI, and experiencing its benefits, in everyday life: tools that detect potential credit card fraud, chatbots for customer service, predictive auto-complete when composing emails or texts, and any number of elements of content moderation and search all use artificial intelligence. Perhaps even more importantly, AI is being used in life-changing and life-saving ways across a wide range of industries. AI is helping fight wildfires1 and enabling stroke victims to speak again.2 It is estimated that AI could increase productivity by 1.5 percent per year and global GDP by $7 trillion over the next decade.3

With significant benefits of this next technological revolution emerging, it is important that a rush to respond to worst-case scenarios does not wrongly malign or regulate away the benign and beneficial applications of a technology. Not all uses can be predicted, and asking the government to do so would stifle any applications that innovators may dream up to solve some of our most complex problems.

The Biden Administration’s EO, however, signals a desire for a more regulatory approach to AI, both through the scale at which it directs administrative resources to actions that lay the groundwork for likely regulation and through its invocation of the Defense Production Act (discussed later in this statement). This is not only a departure from the approach the United States has traditionally taken to general purpose technologies, and from its traditional attitude toward entrepreneurship and innovation; it also marks a significant shift from the previous administration’s executive orders on AI. Those earlier actions focused on advancing innovation and entrepreneurship in this space and on “reduc[ing] barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.”4 Similarly, the Obama administration in a 2016 report noted that “AI will continue to contribute to economic growth and will be a valuable tool for improving the world, as long as industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits,” and focused on ways to support AI’s development while responding to appropriate safety and civil liberties concerns where necessary.5

The Biden Administration’s executive order looks less favorably on this important technology and risks sending a negative signal to innovators and consumers about its development and potential applications. Even where the EO leaves agencies discretion on how to proceed, it suggests that there is likely a case for action and nudges agencies in a “do something” direction more than prior administrations did.

The AI EO Raises Concerns About Appropriate Separation of Powers

Beyond this general shift away from the light-touch regulatory approach that has supported entrepreneurs and innovators and made the United States a leader in information technology, the AI EO also represents a significant overreach of executive power. Through its delegations to agencies, the EO claims significant regulatory power over AI at a time when Congress continues to debate the potential legislative policy framework for this issue. Even those who applaud the underlying policy of the AI EO should be concerned about the executive overreach and breach of the separation of powers. It is typically difficult to regain control over an issue once the executive has claimed authority in such a manner. Similarly, it should not be presumed that such authority can or will be used only by those whose policy views coincide with one’s preferences.

The most notable example of this overreach in the AI EO is its use of the Defense Production Act (DPA) to justify its provisions. The DPA was originally designed to give the executive authority to meet a national security crisis; the AI EO invokes it not to respond to such a crisis, but to require that developers of AI products deemed high risk both notify the government and submit to government-run “red teaming” of the potential risks.6 While it is possible Congress might develop a similar regime or requirement, with the accompanying consequences of such a permissioned approach, to do so via executive order falls well beyond the typical uses of the DPA. As we have seen in the past, once such authority is given to the executive, it is often hard to restrain it or return the appropriate authority to Congress.

This executive overreach cannot be excused by a need for swift action or a lack of attention to the issue on Congress’s part. Numerous bills considering potential approaches to AI have been introduced in both the House and the Senate, and, at times, it seems almost every committee has held a hearing on the impact AI might have on matters within its jurisdiction. Additionally, both chambers are undertaking informational processes to help determine where, or if, regulation is necessary. That is not to say these proposals do not raise their own concerns about negative impacts on innovation, speech, or other issues, but rather to illustrate that the executive order rushes to regulate a complicated, multi-faceted issue and directs agencies to act without a clear delegation from Congress.

Conclusion

The United States’ light-touch approach to the internet helped enable its global leadership in the sector and realize the technology’s economic potential in recent years. This has benefited both consumers and entrepreneurs. As we enter our next disruptive technological era with AI, we must consider not only the risks but also the benefits of the technology, many of which we cannot yet predict. The United States has a chance to distinguish itself from more regulatory approaches once again and to embrace an approach that allows entrepreneurs and consumers to use technology to find creative solutions to problems and needs. The recent AI executive order, however, signals a more regulatory approach to this new technology. Regardless of how one feels about the underlying content of the order, the decision to develop a substantial regulatory framework for such an important issue via executive order should raise concerns about the potential usurpation of Congress’s role.