My name is Jennifer Huddleston, and I am a technology policy research fellow at the Cato Institute. My research focuses primarily on the intersection of law and technology, including issues related to the governance of emerging technologies such as artificial intelligence (AI). I welcome the opportunity to submit a statement regarding the potential effects of government transparency requirements for artificial intelligence.
In this statement for the record, I will focus on two key points:
- AI is a general-purpose technology that is already frequently in use by the average consumer and a wide array of industries. Overly broad regulations are likely to have unintended consequences and impact numerous existing products.
- US policy toward AI should remain flexible, respond to specific harms or risks, and continue the light touch approach that has encouraged innovative activity and its accompanying benefits. Consumers and innovators, not government, are ultimately best positioned to decide which applications of a new technology are most beneficial.
The most basic definition of AI is a computer or robot that can perform tasks associated with human intelligence and discernment. Most of us have been encountering AI for much longer than we realize, in tools like autocorrect, autocomplete, chatbots, and translation software. While generative AI has garnered recent attention, the definitions of AI typically found in most policy proposals would impact far more than the use of ChatGPT.
As a result, a proposal to require disclosure that a product uses AI would mandate labels on many already common products, such as search engines, spellcheck, and voice assistants on smartphones, without any underlying change to those products. Not only would this impose compliance costs on existing products, it would also diminish the value of the disclosure itself. If nearly every product requires a warning that it uses AI at some level in its processes, such a transparency requirement becomes nearly meaningless to a consumer fatigued by seeing the same constant disclosure. Furthermore, such general labels fail to provide meaningful information that consumers can understand and act upon if desired. The government should not be in the business of user interface design, as the best ways to communicate such information will vary from product to product and use case to use case.
Because AI is a broad general-purpose technology, one-size-fits-all regulations are likely a poor fit. Building on the success of past light touch approaches that avoided precautionary regulation of technologies such as the internet, policymakers should resist the urge to engage in top-down rulemaking that attempts to predict every best- and worst-case scenario and inadvertently limits the use of the technology. Industrial policy that seeks to direct technological development in only one way may miss the creative uses that entrepreneurs responding to consumer demands would naturally find. For this reason, policymakers should ensure regulations are carefully targeted at specific harms or applications that are certain or highly likely to be catastrophic or irreversible, rather than imposing broad general-purpose regulations.
Instead of a top-down approach, government should also consider the ways it can work with innovators and consumers to resolve concerns that may require a degree of certainty but for which static regulation is likely to be problematic. This should include both removing potential barriers and supporting the development of industry best practices as appropriate. If necessary, these best practices could be formalized to address concerns around specific harms or to create legal certainty. As I discussed in comments to the NTIA in June 2023:
“Soft law tools—such as multi-stakeholder working groups to develop best practices— may help identify appropriate limits on certain applications while also providing continued flexibility during periods of rapid development. These soft law tools also support the interactions of various interest groups and do not presume that a regulatory outcome is needed. In other cases, they may identify areas where deregulation is needed to remove outdated law or where hard law is needed to respond to harms or create legal certainty around practices. This also provides opportunities for innovators to learn of regulators and society’s concerns and provide solutions that may alleviate the sense of unease while still encouraging beneficial and flexible uses of a technology.
Soft law tools and best practices can be formalized in a way that provides opportunities for transparency and information sharing. In creating voluntary standards, best practices and other soft law tools can also address areas where direct harm is less clear, but a variety of concerns exist.”1
Innovation is often disruptive and can bring with it a sense of unease and even fear. While such uncertainty and fear are currently seen with regard to AI, previous advances in information technologies and media raised similar concerns around many issues, including trust in information.2 Societal norms often develop around the appropriate uses of a technology and allow for a more flexible and responsive approach to concerns than a static law or regulation. Policymakers should build on the success of past light touch regulatory approaches that have made the US a leader in technological innovation and narrowly tailor necessary interventions to respond to otherwise unaddressed harms and to specific uses or applications of the technology.