Since the launch of OpenAI’s ChatGPT in late 2022, federal policymakers have debated how to regulate artificial intelligence (AI) transparency and safety. But so far, Congress has not enacted any significant legislation. That could soon change, as the Senate Committee on Commerce, Science, and Transportation could report out as many as eight AI-related bills before the end of the 118th Congress.

State efforts / Though public and media attention focuses on policymaking in Washington, DC, the real action on AI has so far largely happened at the state level. According to the National Conference of State Legislatures (NCSL), in the 2024 legislative session (as of June 3), at least 40 states, Puerto Rico, the Virgin Islands, and the District of Columbia introduced AI bills, and six states, Puerto Rico, and the Virgin Islands adopted resolutions or enacted legislation. Among those developments:

  • Illinois enacted legislation to regulate the use of AI in certain employment settings.
  • Indiana created a task force to consider AI regulation.
  • Maryland adopted policies and procedures concerning state government development, procurement, deployment, use, and assessment of AI systems.
  • Utah enacted the Artificial Intelligence Policy Act, providing several consumer protections.
  • West Virginia created a select committee to consider AI regulation.

Colorado / When it comes to the passage of significant legislation addressing AI transparency and safety, those states all take a back seat to Colorado. There, the legislature passed the Colorado Artificial Intelligence Act (S.B. 205) in May, though it will not take effect until February 1, 2026, at the earliest.

The new law defines “high-risk AI systems” as those that make consequential decisions affecting consumers’ lives, including decisions about educational opportunity, government services, insurance, financial and legal services, employment, healthcare services, and housing. The law describes duties for both developers and deployers (that is, users) of high-risk AI systems, emphasizing the use of reasonable care to mitigate risks of algorithmic discrimination.

Required duties of AI developers include providing documentation and disclosures regarding the intended uses and potential limitations of high-risk systems. Developers must promptly report any instances of algorithmic discrimination to the Colorado attorney general, who has exclusive enforcement authority (precluding any private right of action). AI deployers are charged with implementing risk management programs, conducting impact assessments, notifying consumers of the use of high-risk AI systems, and offering consumers the opportunity to appeal adverse decisions made by AI systems. Developers and deployers must also provide a public statement summarizing the types of high-risk AI systems they develop or use and how they manage the risks of algorithmic discrimination.

Colorado may not be done legislating on AI, but future efforts seem intended to ensure that regulation does not suppress this promising technology. Gov. Jared Polis (D) has called on the legislature to amend the law before it takes effect “to ensure the final regulatory framework will protect consumers and support Colorado’s leadership in the AI sector.” Proposed revisions would limit the law’s scope to only the most high-risk systems, follow a more traditional enforcement framework without mandatory proactive disclosures, and focus regulation on developers of high-risk AI systems rather than on the smaller companies that deploy them.

What is next? / S.B. 205 had its genesis in a bipartisan, multistate AI working group, involving lawmakers from nearly 30 states, convened under the auspices of the NCSL. The group aims to coordinate approaches to regulating AI systems and facilitate informed legislative action, emphasizing the need to balance AI regulation and innovation. The Colorado legislation reflects this dual objective, pairing provisions that promote responsible AI development with measures to mitigate risks of algorithmic discrimination. Colorado’s willingness to adapt its approach can be seen in the law’s delayed effective date as well as in Polis’s suggested revisions.

Colorado’s efforts are influencing other states. In late August, the California Legislature passed S.B. 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have required developers of large, advanced AI models to adopt safety measures to prevent the technology from being misused. In late September, Gov. Gavin Newsom (D) vetoed the legislation, arguing that it should not have been limited to the largest and most expensive AI models (those costing at least $100 million to train) and that it did not consider whether the models would be deployed in high-risk situations. In his veto message, Newsom announced that he would work with prominent AI researchers, including Stanford’s Fei-Fei Li, to develop new AI safety legislation that he would be willing to support.

In addition, the NCSL working group hopes to see comprehensive AI system legislation introduced in a dozen or more states in 2025. It would be useful for the working group to offer model state legislation that those and other lawmakers could consider.

AI is a new frontier in both technology and public policy, and concerns about it often seem rooted more in science fiction than in sound understanding. The laboratories of the states will hopefully help distinguish good policy from unjustifiably costly, obstructive, or simply unnecessary government intervention.