Colorado is pumping the brakes on first-of-its-kind AI regulation to find a practical path forward
Colorado was first to pass comprehensive AI legislation in the U.S.
wildpixel/Getty Images
When the Colorado Artificial Intelligence Act passed in May 2024, it made national headlines. The law was the first of its kind in the U.S.: a comprehensive attempt to govern “high-risk” artificial intelligence systems across various industries before they could cause real-world harm.
Gov. Jared Polis signed it reluctantly – but now, less than a year later, the governor is supporting a federal pause on state-level AI laws. Colorado lawmakers have delayed the law’s enactment to June 2026 and are seeking to repeal and replace portions of it.
Lawmakers face pressure from the tech industry and its lobbyists, as well as the practical costs of implementation.
What Colorado does next will shape whether its early move becomes a model for other states or a lesson in the challenges of regulating emerging technologies.
I study how AI and data science are reshaping policymaking and democratic accountability. I’m interested in what Colorado’s pioneering efforts to regulate AI can teach other state and federal legislators.
The first state to act
In 2024, Colorado legislators decided not to wait for the U.S. Congress to act on nationwide AI policy. As Congress passes fewer laws due to polarization stalling the legislative process, states have increasingly taken the lead on shaping AI governance.
The Colorado AI Act defined “high-risk” AI systems as those influencing consequential decisions in employment, housing, health care and other areas of daily life. The law’s goal was straightforward but ambitious: create preventive protections for consumers from algorithmic discrimination while encouraging innovation.
Colorado’s leadership on this is not surprising. The state has a climate that embraces technological innovation and a rapidly growing AI sector. It positioned itself at the frontier of AI governance, drawing from international models such as the EU AI Act and from privacy frameworks such as the 2018 California Consumer Privacy Act. With an initial effective date of Feb. 1, 2026, lawmakers gave themselves ample time to refine definitions, establish oversight mechanisms and build capacity for compliance.
When the law passed in May 2024, policy analysts and advocacy groups hailed it as a breakthrough. Other states, including Georgia and Illinois, introduced bills closely modeled after Colorado’s AI bill, though those proposals did not advance to final enactment. The Future of Privacy Forum, a nonprofit research and advocacy organization that develops guidance and policy analysis on data privacy and emerging technologies, described the law as the “first comprehensive and risk-based approach” to AI accountability.
Legal commentators, including attorneys general across the nation, noted that Colorado created robust AI legislation that other states could emulate in the absence of federal legislation.
Politics meets process, stalling progress
Praise aside, passing a bill is one thing; putting it into action is another.
Immediately after the bill was signed, tech companies and trade associations warned that the act could create heavy administrative burdens for startups and deter innovation. Polis, in his signing statement, cautioned that “a complex compliance regime” might slow economic growth. He urged legislators to revisit portions of the bill.
CBS News Colorado reports on state lawmakers racing to replace the state’s artificial intelligence law before February 2026.
Polis convened a special legislative session to reconsider portions of the law. Multiple bills were introduced to amend or delay its implementation. Industry advocates pressed for narrower definitions and longer timelines. All the while, consumer groups fought to preserve the act’s protections.
Meanwhile, other states watched closely and changed course on sweeping AI policy. Gov. Gavin Newsom slowed California’s own ambitious AI bill after facing similar concerns, and Connecticut failed to pass its AI legislation amid a veto threat from Gov. Ned Lamont.
Colorado’s early lead turned precarious. The same boldness that made it first also made the law vulnerable – particularly because, as seen in other states, governors can veto, delay or narrow AI legislation as political dynamics shift.
From big swing to small ball
In my opinion, Colorado can remain a leader in AI policy by pivoting toward “small ball,” or incremental, policymaking, characterized by gradual improvements, monitoring and iteration.
This means focusing not just on lofty goals but on the practical architecture of implementation. That would include defining what counts as high-risk applications and clarifying compliance duties. It could also include launching pilot programs to test regulatory mechanisms before full enforcement and building impact assessments to measure the effects on innovation and equity. And finally, it could engage developers and community stakeholders in shaping norms and standards.
This incrementalism is not a retreat from the initial goal but rather realism. Most durable policy emerges from gradual refinement, not sweeping reform. For example, the EU’s AI Act is being implemented in stages rather than all at once, according to legal scholar Nita Farahany.
A video from EU Made Simple explains the EU’s AI regulation, which was the first in the world.
Effective governance of complex technologies requires iteration and adjustment. The same was true for data privacy, environmental regulation and social media oversight.
In the early 2010s, social media platforms grew unchecked, generating public benefits but also new harms. Only after extensive research and public pressure did governments begin regulating content and data practices.
Colorado’s AI law may represent the start of a similar trajectory: an early, imperfect step that prompts learning, revision and eventual standardization across states.
The core challenge is striking a workable balance. Regulations need to protect people from unfair or unclear AI decisions without creating such heavy burdens that businesses hesitate to build or deploy new tools. With its thriving tech sector and pragmatic policy culture, Colorado is well positioned to model that balance by embracing incremental, accountable policymaking. In doing so, the state can turn a stalled start into a blueprint for how states nationwide might govern AI responsibly.
Stefani Langehennig receives funding from the American Political Science Association’s (APSA) Centennial Center Research Center.