Colorado is pumping the brakes on first-of-its-kind AI regulation to find a practical path forward
Colorado was first to pass comprehensive AI legislation in the U.S. wildpixel/Getty Images
When the Colorado Artificial Intelligence Act passed in May 2024, it made national headlines. The law was the first of its kind in the U.S.: a comprehensive attempt to govern “high-risk” artificial intelligence systems across various industries before they could cause real-world harm.
Gov. Jared Polis signed it reluctantly – but now, less than a year later, the governor is supporting a federal pause on state-level AI laws. Colorado lawmakers have delayed the law’s enactment to June 2026 and are seeking to repeal and replace portions of it. They face pressure from the tech industry, from lobbyists and from the practical costs of implementation.
What Colorado does next will shape whether its early move becomes a model for other states or a lesson in the challenges of regulating emerging technologies.
I study how AI and data science are reshaping policymaking and democratic accountability. I’m interested in what Colorado’s pioneering efforts to regulate AI can teach other state and federal legislators.
The first state to act
In 2024, Colorado legislators decided not to wait for the U.S. Congress to act on nationwide AI policy. As Congress passes fewer laws due to polarization stalling the legislative process, states have increasingly taken the lead on shaping AI governance.
The Colorado AI Act defined “high-risk” AI systems as those influencing consequential decisions in employment, housing, health care and other areas of daily life. The law’s goal was straightforward but ambitious: Create preventive protections for consumers from algorithmic discrimination while encouraging innovation.
Colorado’s leadership on this is not surprising. The state has a climate that embraces technological innovation and a rapidly growing AI sector. It positioned itself at the frontier of AI governance, drawing from international models such as the EU AI Act and from privacy frameworks such as the 2018 California Consumer Privacy Act. With an initial effective date of Feb. 1, 2026, lawmakers gave themselves ample time to refine definitions, establish oversight mechanisms and build capacity for compliance.
When the law passed in May 2024, policy analysts and advocacy groups hailed it as a breakthrough. Other states, including Georgia and Illinois, introduced bills closely modeled after Colorado’s AI bill, though those proposals did not advance to final enactment. The law was described by the Future of Privacy Forum as the “first comprehensive and risk-based approach” to AI accountability. The forum is a nonprofit research and advocacy organization that develops guidance and policy analysis on data privacy and emerging technologies.
Legal commentators, including attorneys general across the nation, noted that Colorado created robust AI legislation that other states could emulate in the absence of federal legislation.
Politics meets process, stalling progress
Praise aside, passing a bill is one thing; putting it into action is another.
Immediately after the bill was signed, tech companies and trade associations warned that the act could create heavy administrative burdens for startups and deter innovation. Polis, in his signing statement, cautioned that “a complex compliance regime” might slow economic growth. He urged legislators to revisit portions of the bill.
CBS News Colorado reports on state lawmakers racing to replace the state’s artificial intelligence law before February 2026.
Polis convened a special legislative session to reconsider portions of the law. Multiple bills were introduced to amend or delay its implementation. Industry advocates pressed for narrower definitions and longer timelines. All the while, consumer groups fought to preserve the act’s protections.
Meanwhile, other states watched closely and changed course on sweeping AI policy. Gov. Gavin Newsom slowed California’s own ambitious AI bill after facing similar concerns, and Connecticut failed to pass its AI legislation amid a veto threat from Gov. Ned Lamont.
Colorado’s early lead turned precarious. The same boldness that made it first also made the law vulnerable – particularly because, as seen in other states, governors can veto, delay or narrow AI legislation as political dynamics shift.
From big swing to small ball
In my opinion, Colorado can remain a leader in AI policy by pivoting toward “small ball,” or incremental, policymaking, characterized by gradual improvements, monitoring and iteration.
This means focusing not just on lofty goals but on the practical architecture of implementation. That would include defining what counts as high-risk applications and clarifying compliance duties. It could also include launching pilot programs to test regulatory mechanisms before full enforcement and building impact assessments to measure the effects on innovation and equity. And finally, it could mean engaging developers and community stakeholders in shaping norms and standards.
This incrementalism is not a retreat from the initial goal but rather realism. Most durable policy emerges from gradual refinement, not sweeping reform. For example, the EU’s AI Act is being implemented in stages rather than all at once, according to legal scholar Nita Farahany.
A video from EU Made Simple explains the EU’s AI regulation, which was the first in the world.
Effective governance of complex technologies requires iteration and adjustment. The same was true for data privacy, environmental regulation and social media oversight.
In the early 2010s, social media platforms grew unchecked, generating public benefits but also new harms. Only after extensive research and public pressure did governments begin regulating content and data practices.
Colorado’s AI law may represent the start of a similar trajectory: an early, imperfect step that prompts learning, revision and eventual standardization across states.
The core challenge is striking a workable balance. Regulations need to protect people from unfair or unclear AI decisions without creating such heavy burdens that businesses hesitate to build or deploy new tools. With its thriving tech sector and pragmatic policy culture, Colorado is well positioned to model that balance by embracing incremental, accountable policymaking. In doing so, the state can turn a stalled start into a blueprint for how states nationwide might govern AI responsibly.
Stefani Langehennig receives funding from the American Political Science Association’s (APSA) Centennial Center.