Watch Out California, Colorado May Be the New Trendsetter
by: Dhara Shah and Afam Okeke
While we await final buzz around the end of California’s legislative session, we can turn to the 2024 legislative session in Colorado, which brought us much to consider. This includes both a new AI law and amendments to the existing Colorado Privacy Act.
Colorado’s governor signed into law this season’s most notable state AI legislation (following the narrower Utah AI Policy Act – read our coverage on that law here). The Colorado Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (“CO AI Act”) takes effect February 1, 2026, and sets forth various obligations for both developers and deployers of high-risk AI systems – we dive into this below.
Alongside the new CO AI Act, we saw a notable amendment, HB 1130, to the existing Colorado Privacy Act. This amendment broadens the scope of the Colorado Privacy Act when it comes to biometric data – requiring disclosure and consent prior to the collection of biometric data of Colorado residents (including consumers and employees).
Colorado AI Act: The Key Terms
To best understand the scope of the Colorado AI Act, we will want to dive into some key terms, including developers, deployers, and high-risk AI systems.
The CO AI Act applies to both developers and deployers of high-risk AI systems (with certain exemptions). It defines a “developer” as a person who develops – or intentionally and substantially modifies – a high-risk AI system. A “deployer,” on the other hand, is a person doing business in Colorado that deploys a high-risk AI system. Deployers that have fewer than 50 full-time employees, do not use their own data to train the high-risk AI system, and use the system only for its disclosed purposes are exempt from certain of the Act’s obligations.
Now, it is important to note that the law applies to “high-risk AI systems”: any AI system that makes a consequential decision relating to the provision, denial, cost, or terms of (1) education enrollment/opportunity; (2) employment; (3) a financial/lending service; (4) an essential government service; (5) healthcare services; (6) housing; (7) insurance; or (8) legal services.
Colorado AI Act: The Key Requirements
Now that we understand who the Colorado AI Act applies to, let’s look at the obligations it places on developers and deployers of high-risk AI systems. Some key requirements are highlighted below:
Let Users Know That “Hey, You Are Using AI”. Where it is not already obvious to a reasonable person, a deployer or developer must disclose to a user that they are interacting with an AI system.
Establish Transparency. Developers must disclose to deployers certain documentation about their AI systems, including summaries of the data used to train the AI system, the known limitations and risks of algorithmic discrimination from the AI system, and other information the deployer needs to comply with the law. Deployers, in turn, must provide consumers with notice and certain website disclosures.
Create a Risk Management Policy. Deployers must create a risk management policy to identify, document, and mitigate risks of algorithmic discrimination.
Complete an Impact Assessment. Deployers must complete an impact assessment at least annually, and no later than 90 days after any intentional and substantial modification is made to the high-risk AI system. Deployers must maintain impact assessment records for at least three years following the final deployment of the high-risk AI system.
Review and Report. Developers must disclose to the Attorney General and to known deployers any known or reasonably foreseeable risk of algorithmic discrimination arising from the intended use of the system. Deployers must review, at least annually, the deployment of each high-risk AI system to ensure that the system is not causing algorithmic discrimination.
Notify End Users of Consequential Decisions and Ability to Appeal. Deployers must notify a consumer when a high-risk AI system makes a consequential decision concerning the consumer; provide the consumer with an opportunity to correct any incorrect personal data that the system processed in making the decision; and provide the consumer with an opportunity to appeal an adverse consequential decision, with human review if technically feasible.
The Colorado AI Act: Enforcement
While the Colorado AI Act does not provide a private right of action, it does give the Colorado Attorney General authority to enforce the law and to promulgate new rules as necessary to enforce the CO AI Act.
The law also provides developers and deployers with an affirmative defense if they discover and cure a violation as a result of feedback, adversarial testing, or an internal review process, and are otherwise in compliance with the latest version of NIST’s “Artificial Intelligence Risk Management Framework” or another nationally or internationally recognized risk management framework for AI systems that is substantially equivalent to, or more stringent than, the CO AI Act, including frameworks designated by the Attorney General in its discretion.
So, What Does This Mean For My Business?
Colorado’s recent focus on emerging technologies like artificial intelligence and biometrics signals a much larger shift in state legislatures’ focus on ensuring proper protections are in place as these technologies become increasingly common. Businesses should take an “AI by Design” approach when implementing and integrating these technologies into their products and services by preparing the necessary disclosures, conducting risk assessments, and establishing policies for internal use.
Originally published by InfoLawGroup LLP. If you would like to receive regular emails from us, in which we share updates and our take on current legal news, please subscribe to InfoLawGroup’s Insights HERE.