On May 17, 2024, Colorado became the first U.S. state to pass a law aimed at protecting consumers from harm arising out of the use of artificial intelligence (“AI”) systems. Senate Bill 24-205, the Colorado Artificial Intelligence Act (“CAIA”), is designed to regulate the private-sector use of AI systems and will impose obligations on Colorado employers, including affirmative reporting requirements. The CAIA, which takes effect on February 1, 2026, applies to Colorado businesses that use AI systems to make, or as a substantial factor in making, employment decisions.

While President Biden has issued an executive order on the development and use of AI, there is no comprehensive federal legislation regulating the use of AI systems. Although he signed the bill into law, Colorado Governor Jared Polis released a signing statement expressing reservations about the CAIA and encouraging the legislature to improve upon the law before it takes effect. Colorado employers should monitor for guidance on, and amendments to, the CAIA while preparing for compliance.

What Employers Need to Know

The CAIA imposes a duty of reasonable care on developers (i.e., creators) and deployers (i.e., users) of high-risk AI systems to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. Although the law does not exclusively regulate employers, high-risk AI systems include AI systems that make, or are a substantial factor in making, employment-related decisions. The law provides a narrow exemption for businesses with fewer than fifty employees that do not use their own data to train the AI system.

Under the CAIA, “algorithmic discrimination” means any condition in which the use of an AI system results in differential treatment or impact that disfavors a consumer or group of consumers on the basis of characteristics protected under federal law or Colorado law, including age, color, ethnicity, disability, national origin, race, religion, veteran status, and sex.

The law creates a rebuttable presumption of reasonable care if a deployer takes certain compliance steps, including:

  1. Risk-management policy and program. Deployers must adopt a risk-management policy and program meeting certain defined criteria. The policy and program must be regularly reviewed and updated and must be reasonable in light of various factors listed in the statute.
  2. Impact assessment. Deployers must also complete annual impact assessments for high-risk AI systems. An impact assessment must include, at a minimum: a statement of the purpose, intended use, and benefits of the system; an analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination and a description of how the deployer mitigates those risks; a summary of the data processed as inputs and outputs of the system; an overview of the categories of data, if any, the deployer used to customize the system; any metrics used to evaluate the performance and known limitations of the system; a description of transparency measures taken, including any measures taken to disclose the use of the system to consumers; and a description of the post-deployment monitoring and user safeguards provided concerning the system.
  3. Notices. The CAIA also requires deployers to provide various notices to consumers (i.e., Colorado residents). Before using an AI system to make employment-related decisions, employers must inform applicants that an AI system will be used and disclose the purpose of the system, the nature of the decision(s) the system may make, and a plain-language description of the system. Additionally, for an applicant adversely affected by the decision of an AI system, the employer must provide the principal reason(s) for the adverse decision, an opportunity to correct any incorrect personal data used by the AI system, and an opportunity to appeal the adverse decision. A covered employer must also post on its website, in a “clear and readily available” manner, a notice of the types of AI systems it currently deploys, the known or reasonably foreseeable risks of algorithmic discrimination, and the data collected and used by the deployer. Finally, deployers must disclose any discovery of algorithmic discrimination within their AI systems to Colorado’s attorney general within 90 days of the discovery.

The law provides an affirmative defense in an enforcement action by the attorney general if a deployer (i) discovers and cures a violation as a result of feedback, adversarial testing or red teaming (as those terms are defined by the National Institute of Standards and Technology (NIST)), or an internal review process, and (ii) is otherwise in compliance with NIST’s Artificial Intelligence Risk Management Framework or another internationally recognized framework for artificial intelligence management. Colorado’s attorney general has exclusive authority to enforce the CAIA.

Although the CAIA is the first law of its kind in the U.S., it shares structural similarities with the Artificial Intelligence Act recently adopted by the European Union. Acknowledging industry opposition, Governor Polis expressed in his signing statement the hope that the CAIA will be significantly improved before it takes effect and emphasized that “Colorado remains home to innovative technologies.” In the meantime, Colorado employers should continue to monitor for guidance on, and amendments to, the CAIA while preparing for compliance.