Colorado is pumping the brakes on first-of-its-kind AI regulation to find a practical path forward
Colorado was first to pass comprehensive AI legislation in the U.S.
wildpixel/Getty Images
When the Colorado Artificial Intelligence Act passed in May 2024, it made national headlines. The law was the first of its kind in the U.S. It was a comprehensive attempt to govern “high-risk” artificial intelligence systems across various industries before they could cause real-world harm.
Gov. Jared Polis signed it reluctantly – but now, less than a year later, the governor is supporting a federal pause on state-level AI laws. Colorado lawmakers have delayed the law’s enactment to June 2026 and are seeking to repeal and replace portions of it.
Lawmakers face pressure from the tech industry and lobbyists, as well as the practical realities of the cost of implementation.
What Colorado does next will shape whether its early move becomes a model for other states or a lesson in the challenges of regulating emerging technologies.
I study how AI and data science are reshaping policymaking and democratic accountability. I’m interested in what Colorado’s pioneering efforts to regulate AI can teach other state and federal legislators.
The first state to act
In 2024, Colorado legislators decided not to wait for the U.S. Congress to act on nationwide AI policy. As Congress passes fewer laws due to polarization stalling the legislative process, states have increasingly taken the lead on shaping AI governance.
The Colorado AI Act defined “high-risk” AI systems as those influencing consequential decisions in employment, housing, health care and other areas of daily life. The law’s goal was straightforward but ambitious: Create preventive protections for consumers from algorithmic discrimination while encouraging innovation.
Colorado’s leadership on this is not surprising. The state has a climate that embraces technological innovation and a rapidly growing AI sector. The state positioned itself at the frontier of AI governance, drawing from international models such as the EU AI Act and from privacy frameworks such as the 2018 California Consumer Privacy Act. With an initial effective date of Feb. 1, 2026, lawmakers gave themselves ample time to refine definitions, establish oversight mechanisms and build capacity for compliance.
When the law passed in May 2024, policy analysts and advocacy groups hailed it as a breakthrough. Other states, including Georgia and Illinois, introduced bills closely modeled after Colorado’s AI bill, though those proposals did not advance to final enactment. The law was described by the Future of Privacy Forum as the “first comprehensive and risk-based approach” to AI accountability. The forum is a nonprofit research and advocacy organization that develops guidance and policy analysis on data privacy and emerging technologies.
Legal commentators, including attorneys general across the nation, noted that Colorado created robust AI legislation that other states could emulate in the absence of federal legislation.
Politics meets process, stalling progress
Praise aside, passing a bill is one thing; putting it into action is another.
Immediately after the bill was signed, tech companies and trade associations warned that the act could create heavy administrative burdens for startups and deter innovation. Polis, in his signing statement, cautioned that “a complex compliance regime” might slow economic growth. He urged legislators to revisit portions of the bill.
CBS News Colorado reports on state lawmakers racing to replace the state’s artificial intelligence law before February 2026.
Polis convened a special legislative session to reconsider portions of the law. Multiple bills were introduced to amend or delay its implementation. Industry advocates pressed for narrower definitions and longer timelines. All the while, consumer groups fought to preserve the act’s protections.
Meanwhile, other states watched closely and changed course on sweeping AI policy. Gov. Gavin Newsom slowed California’s own ambitious AI bill after facing similar concerns, and Connecticut failed to pass its AI legislation amid a veto threat from Gov. Ned Lamont.
Colorado’s early lead turned precarious. The same boldness that made it first also made the law vulnerable – particularly because, as seen in other states, governors can veto, delay or narrow AI legislation as political dynamics shift.
From big swing to small ball
In my opinion, Colorado can remain a leader in AI policy by pivoting toward “small ball,” or incremental, policymaking, characterized by gradual improvements, monitoring and iteration.
This means focusing not just on lofty goals but on the practical architecture of implementation. That would include defining what counts as a high-risk application and clarifying compliance duties. It could also include launching pilot programs to test regulatory mechanisms before full enforcement and building impact assessments to measure the effects on innovation and equity. And finally, it could mean engaging developers and community stakeholders in shaping norms and standards.
This incrementalism is not a retreat from the initial goal but rather realism. Most durable policy emerges from gradual refinement, not sweeping reform. For example, the EU’s AI Act is actually being implemented in stages rather than all at once, according to legal scholar Nita Farahany.
A video from EU Made Simple explains the EU’s AI regulation, which was the first in the world.
Effective governance of complex technologies requires iteration and adjustment. The same was true for data privacy, environmental regulation and social media oversight.
In the early 2010s, social media platforms grew unchecked, generating public benefits but also new harms. Only after extensive research and public pressure did governments begin regulating content and data practices.
Colorado’s AI law may represent the start of a similar trajectory: an early, imperfect step that prompts learning, revision and eventual standardization across states.
The core challenge is striking a workable balance. Regulations need to protect people from unfair or unclear AI decisions without creating such heavy burdens that businesses hesitate to build or deploy new tools. With its thriving tech sector and pragmatic policy culture, Colorado is well positioned to model that balance by embracing incremental, accountable policymaking. In doing so, the state can turn a stalled start into a blueprint for how states nationwide might govern AI responsibly.
Stefani Langehennig receives funding from the American Political Science Association’s (APSA) Centennial Center Research Center.