Colorado Governor Jared Polis convened an AI Policy Work Group that has formally proposed repealing and replacing the Colorado AI Act (CAIA), the state's landmark artificial intelligence regulation, which had been nearly two years in the making. The proposal would scrap the current law's design-focused compliance requirements and replace them with a narrower framework centered on consumer-facing rights. The shift arrives at a politically charged moment: federal pressure on states with stringent AI laws is intensifying, and Colorado's legislative session runs only through May 13.

What the Colorado AI Act Actually Does and Why It Became Controversial

To understand why the state is proposing to rewrite its own law before it even takes full effect, it helps to understand what the CAIA was built to do.

Signed into law in 2024, CAIA was loosely modeled on the European Union's AI Act, the sweeping regulation that categorizes AI systems by risk level and imposes compliance obligations accordingly. Colorado adopted the same core logic: identify "high-risk AI systems," then require the developers who build them and the deployers who operate them to meet a prescribed set of safety and accountability standards.

Think of it like a building code for software. A company constructing a skyscraper faces far more regulatory scrutiny than someone installing a garden shed. CAIA applied the same tiered logic to AI: a system that makes consequential decisions about people's healthcare, housing, or employment faced much stricter oversight than one that recommends a playlist. Developers had a statutory "duty of care," a legal obligation to design their systems responsibly. Deployers who put those systems into use had explicit operational obligations tied to that duty.

The law's original effective date was delayed to June 2026, giving legislators more time to address industry concerns about the compliance burden. Those concerns, it turns out, proved significant enough to prompt a wholesale rethink. The full legislative text is available on the Colorado General Assembly website.

The New Proposal: From Regulating Systems to Notifying Consumers

The Work Group's proposal does not simply trim CAIA at the edges. It inverts the law's fundamental logic.

Where CAIA focused on how AI systems are designed and deployed, placing obligations on developers to exercise a duty of care and on deployers to meet explicit compliance requirements, the replacement framework focuses almost entirely on what consumers are told after an AI system affects them.

The scope of covered technology also narrows significantly. CAIA's "high-risk AI systems" become "Covered ADMTs." An ADMT qualifies as covered under the proposed framework only when its output "materially influences" a consequential decision, and the proposal defines "materially influences" as the output being a "non-de minimis factor" in the outcome. In plain language: if an AI recommendation is one of several inputs a human considers in deciding whether to approve a loan application, it likely qualifies. If it merely nudges a shopper toward a product in an online store, it almost certainly does not.

That distinction matters enormously for scope, because the proposal explicitly carves out a long list of AI applications: advertising, marketing, product recommendations, search results, content moderation, cybersecurity tools, fraud prevention, and spam filtering are all scoped out of the new framework. Entire categories of AI deployment that CAIA would have covered are removed from regulatory reach in a single stroke.
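The two-part coverage test described above, a materiality threshold plus a list of carved-out uses, can be sketched in a few lines of code. This is an illustrative model only: the category names, the `ADMT` type, and the boolean materiality flag are assumptions for the sketch, not terms drawn from the proposal's text.

```python
from dataclasses import dataclass

# Use categories the proposal scopes out of the new framework
# (names here are illustrative shorthand, not statutory language).
CARVED_OUT_USES = {
    "advertising", "marketing", "product_recommendations", "search_results",
    "content_moderation", "cybersecurity", "fraud_prevention", "spam_filtering",
}


@dataclass
class ADMT:
    use_case: str
    # True when the output is a "non-de minimis factor" in a consequential decision
    materially_influences_decision: bool


def is_covered_admt(system: ADMT) -> bool:
    """Rough model of the proposal's coverage test: a system is a
    'Covered ADMT' only if its use is not carved out AND its output
    materially influences a consequential decision."""
    if system.use_case in CARVED_OUT_USES:
        return False
    return system.materially_influences_decision


# A loan-screening score that feeds an underwriter's decision would be covered;
# a storefront recommender would not, even if it shapes what a consumer buys.
assert is_covered_admt(ADMT("loan_underwriting", True))
assert not is_covered_admt(ADMT("product_recommendations", True))
```

Note how the carve-out check runs first: under this reading, a system in an excluded category escapes coverage regardless of how much influence it exerts.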

What Changes for Developers and Deployers

The practical compliance picture shifts substantially for both groups that CAIA targeted.

For AI developers, the companies that build and train AI models, the proposed framework replaces the duty of care with documentation obligations. Instead of being legally required to design responsibly (a standard that invites litigation over its meaning), developers would be required to produce and maintain records covering intended uses, categories of training data, known limitations, and instructions for monitoring the system. Records must be kept for three years. The shift is from an affirmative legal duty to a paper trail.

For AI deployers, businesses that integrate AI tools into their products and services, the change is more dramatic. Under CAIA, deployers carried a set of explicit operational obligations: assessments, disclosures, and corrective mechanisms. Under the proposed replacement, most of those explicit requirements disappear. What remains is a "point-of-interaction" notice, telling consumers when an AI system is being used in a decision that affects them, and an obligation to notify consumers within 30 days if an AI-influenced decision produces an "adverse outcome."

The notification requirement is the most consequential consumer protection remaining in the new framework. If an AI system contributes to a denial of credit, a rejection of a housing application, or an unfavorable employment decision, the consumer must be told within a month. But the framework does not require the deployer to explain how the AI reached its output, nor does it mandate any particular corrective process.
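The 30-day clock is the framework's sharpest operational requirement, so a deployer's compliance tooling would need to track it per decision. A minimal sketch, assuming a simple calendar-day reading of the window (the proposal's exact day-counting rules are not quoted here):

```python
from datetime import date, timedelta

# Assumed reading: 30 calendar days from the adverse decision.
NOTICE_WINDOW = timedelta(days=30)


def notice_deadline(decision_date: date) -> date:
    """Latest date to notify a consumer that an AI-influenced decision
    produced an adverse outcome, under a calendar-day interpretation."""
    return decision_date + NOTICE_WINDOW


# A credit denial issued June 1, 2026 would require notice by July 1, 2026.
assert notice_deadline(date(2026, 6, 1)) == date(2026, 7, 1)
```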

No Private Right of Action and a Bar on Joint Liability

Two provisions in the proposed framework address how legal disputes get resolved, and both tilt in favor of industry.

Like the original CAIA, the replacement proposal contains no private right of action. Consumers cannot sue companies directly for violations of the law. Enforcement runs through the state attorney general's office, not through individual lawsuits. This is a meaningful limitation on accountability; even if a company fails to issue a 30-day adverse outcome notice, the affected consumer has no direct legal recourse under the AI statute itself.

The proposal also bars joint and several liability between AI developers and deployers, except where such liability already exists under existing law. Joint and several liability is a legal doctrine that allows a plaintiff to recover full damages from any one defendant in a multi-party case, regardless of how fault is distributed. By restricting it here, the framework prevents a scenario where a deployer faces full liability for a harm caused primarily by a developer's design choices, or vice versa. Critics argue this weakens the overall deterrent effect of the law; proponents counter that it prevents liability from cascading unpredictably through AI supply chains.

The Federal Pressure Behind the Timing

The Work Group's proposal did not emerge in a political vacuum. Two federal developments in the weeks surrounding its release are directly relevant.

Three days after the Colorado proposal was released, a National AI Legislative Framework was published at the federal level, signaling an intent to harmonize AI regulation across states. That followed President Donald Trump's Executive Order on artificial intelligence, which raised concerns about federal preemption: the constitutional principle that federal law can supersede state law in areas where Congress has established a national standard.

The preemption concern is not merely theoretical. The Trump administration's AI policy has been explicit about viewing state-level AI regulation as a potential impediment to domestic AI development and competitiveness. More concretely, the federal government tied BEAD broadband funding, a significant infrastructure grant program, to AI policy posture, with guidance suggesting states with "onerous" AI laws could lose access to that funding.

For Colorado, which has ambitions around rural broadband expansion, that funding risk is real. The timing of the Work Group's proposal, arriving less than a week before the National AI Legislative Framework, reads less like coincidence than coordination. State legislators are watching a narrowing window before the May 13 session deadline to pass any replacement legislation that could defuse the federal standoff. It is the same national tension we covered in our reporting on big tech backing Anthropic against Trump AI policy, now playing out at the state level.

CAIA vs. The Proposed Framework: A Direct Comparison

Setting the two frameworks side by side clarifies how substantially the proposal rewrites Colorado's AI policy.

  • Covered technology: CAIA covers "high-risk AI systems." The proposal covers "Covered ADMTs" with a materially-influences threshold, and excludes advertising, marketing, product recommendations, search, content moderation, cybersecurity, fraud prevention, and spam filtering.
  • Developer obligations: CAIA imposes a duty of care. The proposal replaces it with documentation obligations covering intended uses, training data categories, limitations, and monitoring instructions.
  • Deployer obligations: CAIA requires explicit operational compliance. The proposal reduces this to a point-of-interaction notice and a 30-day adverse outcome notification.
  • Records retention: Both frameworks require three years of record retention for developers and deployers.
  • Private right of action: Neither framework grants consumers the right to sue directly. Enforcement remains with the attorney general under both versions.
  • Joint liability: CAIA does not address this explicitly. The proposal bars joint and several liability between developers and deployers except under existing law.

The net effect is a framework that is substantially less prescriptive for industry, substantially narrower in scope, and substantially more reliant on disclosure as its primary consumer protection mechanism. Whether disclosure without enforcement teeth constitutes meaningful protection is the central argument critics of the proposal are making.

Mixed Reactions in the Legislature

Colorado legislators have offered what observers describe as mixed reactions to the Work Group's proposal. The state's AI Act was itself the product of contentious debate between industry groups, consumer advocates, and civil rights organizations, and the replacement proposal reactivates many of those same fault lines.

Supporters of the rewrite argue that CAIA's prescriptive compliance framework would have been difficult for smaller Colorado businesses to navigate, and that the proposed replacement preserves meaningful consumer rights, particularly the adverse outcome notification, while removing compliance burdens that would have pushed AI development activity toward states with less regulation. They also point to the federal preemption risk as a practical reason to align more closely with the emerging national framework.

Opponents contend that stripping the duty of care removes the most substantive accountability mechanism in the original law, and that a 30-day notification requirement without a corresponding right to challenge the decision or sue for harm provides little practical protection. The exclusion of entire categories of AI deployment, particularly advertising and content moderation, areas with well-documented potential for consumer harm, has drawn pointed criticism from digital rights advocates.

The analysis from law firm Hogan Lovells, authored by attorneys Mark Brennan, James Denvil, and Sophie Baum, notes that the shift "significantly reduces prescriptive compliance" for both developers and deployers, while characterizing the change in focus as moving from regulating AI system design and deployment to consumer-facing rights and transparency.

What Happens Next

Colorado's legislative session closes on May 13. For any replacement framework to take effect before CAIA's delayed June 2026 enforcement date, the state legislature would need to pass legislation within that window, a compressed timeline given the political complexity of the issue.

Three scenarios are plausible. First, the legislature passes a version of the Work Group's proposal, Colorado avoids the federal preemption conflict, and the replacement framework takes effect in place of CAIA. Second, the legislature fails to act in time, CAIA takes effect in June as scheduled, and the federal standoff over "onerous" state AI laws escalates. Third, a negotiated middle path emerges, retaining some of CAIA's structural obligations while accommodating enough of the Work Group's revisions to satisfy both industry concerns and consumer advocates.

The federal dimension makes the calculus particularly volatile. If Congress moves toward a national AI framework that explicitly preempts state law, Colorado's entire regulatory effort, whether CAIA or its replacement, could be superseded regardless of what the state legislature does before May 13. The National AI Legislative Framework does not have the force of law, but it signals a direction that state legislators ignore at some risk. How major AI companies are navigating the federal landscape is explored further in reporting on big tech AI spending scrutiny in 2026.

What the Colorado episode clarifies, regardless of how the legislative session ends, is that AI regulation in the United States is not settling into a stable equilibrium. States that moved early to establish comprehensive frameworks are now navigating a federal government that views those frameworks as obstacles, funding leverage as a policy tool, and national harmonization as an urgent priority. Colorado is unlikely to be the last state to face this pressure: it may simply be the first to respond to it in writing.

Sources

  1. Colorado AI Act Replacement Analysis - Hogan Lovells
  2. Colorado AI Act Legislative Text - Colorado General Assembly
  3. Executive Order on Artificial Intelligence - White House
  4. Trump National AI Legislative Framework - White House