
The US state AI laws tracker: the working playbook for April 2026.

Where every comprehensive state AI statute now stands: Colorado’s SB 24-205 deferred to June 2026, California’s ADMT regulations live with a phased compliance cascade through 2030, Texas TRAIGA in force, New York’s RAISE Act finalised, and the Trump administration’s December 2025 preemption executive order looming over all of it. With the citations attorneys general expect.

By Joe Braidwood, CEO GLACIS·30 min read·Updated 24 April 2026

Jan 2026
CA AB 2013, SB 53, ADMT (risk assessments), TX TRAIGA all in force
Jun 30 2026
Colorado AI Act effective (delayed from Feb 1)
Jan 1 2027
NY RAISE Act, WA HB 2225, OR SB 1546 effective
Apr 1 2027
CA ADMT pre-use notices begin for significant decisions

Executive summary

The United States still lacks comprehensive federal AI legislation; what it has is a growing patchwork of state laws operating against the backdrop of an active federal preemption push. As of April 2026, Texas HB 149 (TRAIGA) is live (effective January 1, 2026), California’s CPPA ADMT regulations are in force with significant-decision obligations phasing in April 1, 2027, California’s SB 53 Frontier AI Transparency Act applies to large frontier developers, and the Colorado AI Act (SB 24-205) becomes enforceable on June 30, 2026 after its August 2025 delay. New York’s RAISE Act is signed (chapter amendment March 27, 2026) and effective January 1, 2027.

State AI laws generally address three categories: (1) algorithmic discrimination in high-stakes decisions, (2) automated decision-making transparency and consumer rights, and (3) sector-specific AI use in employment, healthcare, insurance, and financial services. Many laws leverage existing consumer protection or privacy frameworks rather than creating entirely new regulatory structures.

The federal counter-current. President Trump’s December 11, 2025 executive order “Eliminating State Law Obstruction of National AI Policy” stood up a DOJ AI Litigation Task Force on January 10, 2026 specifically to challenge laws like Colorado’s. A bipartisan coalition of 36 state attorneys general has opposed broad preemption. Until the courts and Congress sort this out, organisations operating nationally should still adopt a “highest common denominator” posture aligned to NIST AI RMF and ISO/IEC 42001 — both of which create rebuttable presumptions of reasonable care under the Colorado AI Act.

Q1 → Q2 2026 update brief

Trump executive order on state AI law preemption signed December 11, 2025; DOJ AI Litigation Task Force operative January 10, 2026, with Colorado SB 24-205 named as a priority challenge target.[F1]

California CPPA ADMT regulations approved by OAL on September 22, 2025 with the phasing finalised: risk-assessment compliance from January 1, 2026, ADMT pre-use notices for significant decisions from April 1, 2027, first attestation due April 1, 2028, cybersecurity audit certifications cascading 2028–2030.[F2]

California SB 53 (Frontier AI Transparency Act) in force January 1, 2026. ~5–8 frontier developers in scope; transparency reports, NIST AI RMF or ISO/IEC 42001 alignment, 15-day critical-incident reporting, $1M civil penalty per violation.[F3]

New York RAISE Act chapter amendment signed March 27, 2026; effective January 1, 2027 with DFS oversight and AG enforcement to $1M/$3M.[F4]

Washington enacted three AI laws in March 2026: HB 1170 (AI content disclosure, eff. February 1, 2027), HB 2225 (companion chatbots, eff. January 1, 2027), and SSB 5886 (digital-likeness rights, eff. June 10, 2026). Oregon followed with SB 1546 (companion chatbots).[F5]

NYC Local Law 144 enforcement reform: NY State Comptroller’s December 2, 2025 audit found DCWP enforcement “ineffective”; DCWP shifted to proactive investigations in 2026.[F6]

2
Comprehensive State AI Laws Enacted (Colorado, Texas)
30+
States with AI Bills
Jun 2026
Colorado AI Act Effective
$20K
Max Penalty per Violation (Colorado)


The US AI Regulatory Landscape

Unlike the European Union, which enacted a comprehensive AI Act covering all member states, the United States has taken a fragmented approach to AI regulation. In the absence of federal legislation, individual states have begun enacting their own AI laws—creating a complex compliance landscape for organizations operating across state lines.

Why States Are Acting

Several factors are driving state-level AI regulation: the absence of comprehensive federal legislation, high-profile harms from AI in hiring, credit, and housing decisions, and state attorneys general applying existing consumer-protection authority to new technology.

Types of State AI Laws

State AI legislation generally falls into several categories:

Categories of State AI Legislation

Category Focus Example States
Comprehensive AI Laws Broad regulation of high-risk AI systems across multiple domains Colorado (enacted), Texas (enacted, HB 149), California (pending)
Employment AI AI in hiring, promotion, termination decisions Illinois (AIPLA), New York (Local Law 144), Maryland
Biometric AI Facial recognition, voice recognition, biometric data Illinois (BIPA), Texas, Washington
Privacy + AI Automated decision-making provisions in privacy laws California (CCPA/CPRA), Virginia (VCDPA), Connecticut (CTDPA)
Healthcare AI AI in clinical decisions, insurance, care management California (pending), New York (proposed)
Government AI AI use by state and local government agencies California, Washington, multiple states

Colorado: The First Comprehensive State AI Law

Colorado’s Artificial Intelligence Act (SB 24-205), signed May 17, 2024 and effective June 30, 2026, is the first comprehensive US state law regulating AI systems. It establishes obligations for both "developers" (those who build AI) and "deployers" (those who use AI in consequential decisions).

Enacted · Effective June 30, 2026

Colorado AI Act Key Points

  • Scope: High-risk AI in employment, housing, credit, healthcare, education, insurance, government services, legal services
  • Standard: "Reasonable care" to prevent algorithmic discrimination
  • Safe Harbor: NIST AI RMF or ISO 42001 compliance creates rebuttable presumption
  • Penalties: Up to $20,000 per violation (Consumer Protection Act)
  • Enforcement: Attorney General only (no private right of action)

Developer Requirements

Developers of high-risk AI systems must use reasonable care to protect consumers from algorithmic discrimination. In practice that means providing deployers with documentation of intended uses, known limitations, and training-data summaries; publishing a statement describing the high-risk systems they offer; and disclosing known or discovered risks of algorithmic discrimination to the Attorney General and to deployers within 90 days of discovery.

Deployer Requirements

Deployers of high-risk AI systems must implement a risk management policy and program; complete impact assessments before deployment, annually, and within 90 days of any substantial modification; notify consumers when a high-risk system is a factor in a consequential decision; offer an opportunity to correct inaccurate data and to appeal adverse decisions with human review where feasible; and report discovered algorithmic discrimination to the Attorney General within 90 days.

For comprehensive coverage, see our Colorado AI Act Complete Compliance Guide.

California: The Privacy Leader Expands to AI

California leads US states in data privacy regulation, and its frameworks increasingly address AI. While California hasn’t enacted a comprehensive AI law equivalent to Colorado’s, multiple overlapping regulations affect AI deployment:

California Consumer Privacy Act (CCPA/CPRA)

Enacted · Effective January 1, 2023 (CPRA amendments)

CCPA/CPRA AI Provisions

  • Profiling opt-out: Consumers can opt out of automated decision-making
  • Access rights: Consumers can access information about automated decisions
  • Risk assessments: Required for processing posing significant risk (including profiling)
  • Penalties: $2,500-$7,500 per intentional violation

California Automated Decision-Making Technology (ADMT) Regulations

The California Privacy Protection Agency (CPPA) finalized its ADMT, risk assessment, and cybersecurity audit regulations, which took effect January 1, 2026 and are now in force. The obligations phase in from there: risk-assessment compliance from January 1, 2026, ADMT pre-use notices for AI used in "significant decisions" from April 1, 2027, the first risk-assessment attestation due April 1, 2028, and cybersecurity audit certifications cascading from 2028 through 2030 depending on business size.

California Pending AI Legislation

California’s legislature has considered multiple additional AI bills, including proposals modeled on the EU AI Act.

Illinois: Biometrics and Employment AI Pioneer

Illinois has been at the forefront of regulating specific AI applications, particularly biometric data and employment decisions:

Illinois Biometric Information Privacy Act (BIPA)

Enacted · Effective 2008

BIPA Requirements

  • Scope: Fingerprints, face geometry, iris scans, voice prints, hand geometry
  • Notice & consent: Written consent required before collection
  • Private right of action: Individuals can sue directly
  • Penalties: $1,000 per negligent violation; $5,000 per intentional violation

BIPA has generated significant litigation against AI companies using facial recognition technology, with settlements reaching hundreds of millions of dollars. The law effectively prohibits most commercial facial recognition uses without explicit consent.

Illinois Artificial Intelligence Video Interview Act (AIVIA)

Enacted · Effective January 1, 2020

AIVIA Requirements

Employers using AI to analyze video interviews must: (1) notify applicants that AI will be used; (2) explain how the AI works and what characteristics it evaluates; (3) obtain applicant consent before the interview; (4) limit who can view the video; (5) delete videos upon applicant request.

Illinois Employment AI Legislation

Illinois continues to expand employment AI regulation: HB 3773, effective January 1, 2026, amends the Illinois Human Rights Act to prohibit employers from using AI that has a discriminatory effect in recruitment, hiring, promotion, discipline, or discharge, and requires notice to workers when AI is used for those decisions.

Texas: TRAIGA Now in Force

Texas, with its large technology sector and business-friendly reputation, has moved from a measured stance to enacting one of the most consequential state AI laws. Texas HB 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), took effect January 1, 2026 and is now live. It prohibits certain AI uses (including intentional discrimination and unlawful manipulation), establishes disclosure obligations for consumer-facing AI, and creates Attorney General enforcement with civil penalties up to $200,000 per violation.

Texas HB 149 (TRAIGA)

Enacted · Effective January 1, 2026

Texas Responsible AI Governance Act

  • Scope: Developers and deployers of AI systems doing business in Texas, producing AI products or services used by Texas residents, or whose AI affects Texas residents
  • Prohibited uses: AI intentionally developed or deployed for unlawful discrimination, unlawful behavioral manipulation, social scoring by government, or generation of unlawful visual content
  • Government disclosure: State agencies interacting with consumers via AI must disclose the interaction
  • Enforcement: Attorney General exclusive; civil penalties up to $200,000 per prohibited use and $40,000 per day for continuing violations; 60-day cure period
  • Regulatory sandbox: Establishes a sandbox program administered by the Texas Department of Information Resources for testing innovative AI systems
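The penalty mechanics are simple arithmetic. A back-of-envelope sketch of maximum TRAIGA exposure using the statutory figures above (the violation counts are hypothetical, and the 60-day cure period and AG discretion would shape any real outcome):

```python
# Back-of-envelope TRAIGA exposure: up to $200,000 per prohibited use,
# plus up to $40,000 per day for continuing violations.
PER_PROHIBITED_USE = 200_000
PER_DAY_CONTINUING = 40_000

def max_exposure(prohibited_uses: int, continuing_days: int) -> int:
    """Statutory maximum, before cure periods or AG discretion."""
    return (prohibited_uses * PER_PROHIBITED_USE
            + continuing_days * PER_DAY_CONTINUING)

# e.g. two prohibited uses that continued 30 days past the cure window
print(max_exposure(2, 30))  # 1600000
```

Even a single uncured prohibited use compounds quickly once the daily continuing-violation figure attaches, which is why the 60-day cure period matters operationally.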

Texas Capture or Use of Biometric Identifier Act (CUBI)

Enacted · Effective 2009

Texas CUBI

Requires notice and consent before capturing biometric identifiers for commercial purposes. Unlike Illinois BIPA, Texas does not provide a private right of action—enforcement is through the Attorney General. Penalties up to $25,000 per violation.

Texas Data Privacy and Security Act (TDPSA)

Effective July 1, 2024, the TDPSA includes provisions affecting AI: consumers may opt out of profiling in furtherance of decisions that produce legal or similarly significant effects, and controllers must conduct data protection assessments for such profiling.

Texas AI Advisory Council

Texas established an AI Advisory Council to study AI use across state government and recommend future legislation.

New York: Local and State AI Regulation

New York presents a complex regulatory landscape with both city-level and state-level AI requirements:

New York City Local Law 144 (Automated Employment Decision Tools)

Enacted · Effective July 5, 2023

NYC Local Law 144

  • Scope: Automated employment decision tools (AEDTs) used in NYC hiring/promotion
  • Bias audit: Annual independent audit for disparate impact by race, ethnicity, sex
  • Publication: Audit summary must be publicly posted
  • Notice: Candidates must be notified at least 10 days before AEDT use
  • Penalties: $500 first violation; $500-$1,500 subsequent violations per day

April 2026 enforcement update. The New York State Comptroller’s December 2, 2025 audit found NYC DCWP enforcement of Local Law 144 “ineffective” — 75% of 311 calls about AEDTs were misrouted. DCWP committed to proactive investigations starting in 2026, increasing the likelihood of enforcement actions for employers and AEDT vendors operating in NYC.
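The bias-audit math behind LL144 is concrete: DCWP's rules require impact ratios comparing each demographic category's selection (or scoring) rate against the most-favored category's rate. A minimal sketch of the selection-rate variant, with hypothetical counts (real audits use at least a year of AEDT data and the intersectional categories the rules specify):

```python
# Impact-ratio arithmetic used in NYC LL144 bias audits (selection-rate
# variant). Counts below are hypothetical illustrations, not audit data.
def impact_ratios(selected: dict, total: dict) -> dict:
    """Each category's selection rate divided by the highest selection rate."""
    rates = {g: selected[g] / total[g] for g in selected}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

selected = {"group_a": 180, "group_b": 120}  # candidates the AEDT advanced
total    = {"group_a": 400, "group_b": 350}  # candidates assessed per category
print(impact_ratios(selected, total))  # {'group_a': 1.0, 'group_b': 0.762}
```

An impact ratio well below 1.0 for a category is what an audit summary has to surface publicly; LL144 itself sets no numeric pass/fail threshold, though practitioners often reference the four-fifths rule as a screening heuristic.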

New York RAISE Act (frontier AI)

The Responsible AI Safety and Education Act (S6953B / A6453B) was originally signed by Governor Hochul on December 19, 2025; the final chapter amendment was signed on March 27, 2026. It is the second US state frontier-model law (after California SB 53) and takes effect January 1, 2027.


Other State AI Laws and Pending Legislation

Beyond the major states covered above, AI regulation is advancing across the country:

States with Privacy Laws Including AI Provisions

Virginia (VCDPA)

Enacted · Effective January 1, 2023

  • Profiling opt-out rights
  • Data protection assessments for profiling
  • No private right of action

Connecticut (CTDPA)

Enacted · Effective July 1, 2023

  • Profiling opt-out for legal/significant decisions
  • Data protection assessments required
  • 60-day cure period

Utah (UCPA)

Enacted · Effective December 31, 2023

  • Consumer access to profiling information
  • More limited than other state laws
  • AG enforcement only

Montana (MCDPA)

Enacted · Effective October 1, 2024

  • Profiling opt-out rights
  • Data protection assessments
  • 60-day cure period

Oregon (OCPA)

Enacted · Effective July 1, 2024

  • Profiling opt-out for automated decisions
  • Data protection assessments
  • Cure period through 2026

Delaware (DPDPA)

Enacted · Effective January 1, 2025

  • Profiling opt-out rights
  • No revenue threshold
  • Broad applicability

States with Biometric/Facial Recognition Laws

State Law Private Action Key Requirements
Illinois BIPA Yes Most stringent; written consent required
Texas CUBI No Notice and consent; AG enforcement
Washington HB 1493 No Notice required; enrollment consent
Arkansas PIPA No Notice and consent requirements
Maryland SB 169 No Facial recognition restrictions in employment

States with Government AI Restrictions

Several states have enacted or proposed restrictions on government use of AI.

State AI Law Comparison Matrix

This matrix provides a high-level comparison of key AI regulatory requirements across major states:

Requirement | Colorado | California | Illinois | New York | Texas
Comprehensive AI Law | ✓ Enacted | Partial | Partial | Frontier only (RAISE) | ✓ HB 149
Employment AI | – | CPRA | ✓ AIVIA | ✓ LL144 | –
Biometric AI | – | CCPA | ✓ BIPA | Pending | ✓ CUBI
Impact Assessments | ✓ Required | ✓ CPRA | – | ✓ LL144 | ✓ TDPSA
Consumer Opt-Out | Limited | ✓ CPRA/ADMT | – | – | ✓ TDPSA
Private Right of Action | No | Limited | Yes (BIPA) | No | No
Safe Harbor (Frameworks) | ✓ NIST/ISO | – | – | – | –

Federal context and the preemption push

The federal landscape changed materially in late 2025 and now actively shapes how state AI law plays out.

The December 2025 executive order

On December 11, 2025, President Trump signed “Eliminating State Law Obstruction of National Artificial Intelligence Policy.” The order directed the DOJ to establish an AI Litigation Task Force to challenge state AI laws on preemption and constitutional grounds, instructed the Commerce Department to identify state AI laws that conflict with federal policy and to weigh that assessment in discretionary funding decisions, and called for a recommended federal legislative framework to replace the state patchwork.

A bipartisan coalition of 36 state attorneys general publicly opposed broad federal preemption in March 2026; the Senate previously voted 99–1 to strip a similar preemption provision from the One Big Beautiful Bill Act. The political contest is unresolved as of April 2026.

Senate AI Working Group and proposed federal bills

Senator Marsha Blackburn’s TRUMP AMERICA AI Act would codify the executive order into statute and create comprehensive federal AI governance, but remains in committee. The Bipartisan Senate AI Working Group’s reports remain a roadmap rather than enacted law.

NAAG state AG AI Task Force

In early 2026 Utah Attorney General Derek Brown (R) and North Carolina Attorney General Jeff Jackson (D) launched a bipartisan AI Task Force in partnership with OpenAI, Microsoft, and the Attorneys General Alliance. The task force coordinates state AG investigations and monitors emerging AI risks — especially child-safety and chatbot harms.

Why no comprehensive federal AI law yet

Despite bipartisan interest, federal AI legislation has stalled over familiar sticking points: the scope of preemption, whether to include a private right of action, and which agency should take the lead on enforcement.

Existing Federal AI-Related Laws

While no comprehensive AI law exists, several federal laws already reach AI deployment: the FCRA’s adverse-action requirements for algorithmic credit decisions, Title VII and the ADA for discriminatory hiring tools, ECOA for credit discrimination, and Section 5 of the FTC Act for unfair or deceptive AI practices.

Federal Agency Guidance

Federal agencies have issued AI guidance within their regulatory domains: the EEOC on algorithmic hiring under the ADA and Title VII, the CFPB on adverse-action notices for credit models, the FTC on AI marketing claims, and the FDA on AI/ML-enabled medical devices.

NIST AI Risk Management Framework

The NIST AI RMF, released January 2023, provides a voluntary framework that multiple state laws reference. Colorado explicitly grants a rebuttable presumption of reasonable care to organizations following NIST AI RMF or ISO/IEC 42001. AI RMF version 1.1 has not yet been released; through 2026 NIST is publishing addenda and profiles, including the Generative AI Profile (NIST AI 600-1, July 2024) and an AI RMF Profile on Trustworthy AI in Critical Infrastructure (concept note released April 7, 2026).

Multi-State Compliance Strategy

Organizations operating across multiple states need a strategic approach to managing divergent requirements:

Highest Common Denominator Approach

Rather than maintaining separate compliance programs for each state, implement controls satisfying the strictest applicable requirements:

GLACIS Framework

Multi-State AI Compliance

1. Adopt NIST AI RMF. Implement the NIST AI Risk Management Framework as your baseline. It provides the Colorado safe harbor and maps to most state requirements. Document implementation across all four functions: Govern, Map, Measure, Manage.

2. Implement comprehensive impact assessments. Create impact assessment templates that satisfy Colorado, California CPRA, NYC LL144, and pending state requirements. Include bias testing, discrimination risk analysis, and consumer rights documentation.

3. Build unified consumer rights infrastructure. Implement consumer-facing capabilities: opt-out mechanisms, explanation rights, data correction, and appeal processes with human review. Design once, deploy across all states.

4. Document for multiple regulators. Maintain documentation that can be adapted for any state regulator: risk management policies, bias testing results, training records, incident response procedures. Use standardized formats (model cards, dataset cards).

5. Monitor regulatory evolution. Establish processes to track new state legislation, regulatory guidance, and enforcement actions. Update compliance programs proactively rather than reactively. Subscribe to AG office updates and industry associations.

Sector-Specific Considerations

Certain industries face additional state-specific requirements:

Healthcare AI

California’s AB 3030 (disclosure when generative AI is used in patient communications) and SB 1120 (physician oversight of AI in utilization review) took effect January 1, 2025, and several other states are moving on AI in insurance utilization review and care management.

Employment AI

Illinois (AIVIA and BIPA), NYC Local Law 144, and Maryland’s facial-recognition interview restrictions layer notice, consent, and audit obligations onto hiring AI, with Illinois adding Human Rights Act coverage from 2026.

Financial Services AI

Colorado’s SB 21-169 restricts insurers’ use of external consumer data and algorithms, state insurance regulators are adopting the NAIC model bulletin on insurers’ use of AI, and FCRA and ECOA obligations follow algorithmic credit decisions in every state.

Frequently asked questions

Which state has the strictest AI law?

Colorado’s AI Act and Texas HB 149 are the broadest enacted state AI laws, covering high-risk or prohibited AI uses across multiple sectors. Illinois BIPA remains the strictest for biometric AI due to its private right of action and significant damages. For employment AI, NYC Local Law 144 sets rigorous bias audit requirements. California’s ADMT regulations, now in force since January 1, 2026, add a substantial automated decision-making layer on top of CCPA/CPRA.

Do state AI laws apply to companies headquartered elsewhere?

Yes. State AI laws typically apply based on where consumers are located, not where companies are headquartered. If you serve Colorado residents, make decisions affecting Illinois employees, or deploy AI impacting NYC job candidates, you must comply with those jurisdictions’ laws regardless of your company’s location.

Will federal AI law preempt state laws?

Uncertain. If comprehensive federal AI legislation passes, it may or may not preempt state laws depending on the law’s language. Historically, federal privacy laws (like HIPAA and FCRA) have included limited preemption, allowing states to enact more protective requirements. Current state AI laws generally don’t conflict with federal requirements—they fill gaps in federal coverage.

How do I know which state laws apply to my AI system?

Consider: (1) Where are the people affected by your AI decisions located? (2) What type of AI application is it (employment, healthcare, credit, etc.)? (3) What data does it process (biometric, personal information)? (4) Who deploys the system (government, private sector)? Most organizations operating nationally should assume the strictest applicable requirements apply.

What’s the difference between a developer and deployer under state AI laws?

Developers create or substantially modify AI systems (model providers, algorithm developers). Deployers use AI systems to make decisions affecting consumers (employers using hiring AI, lenders using credit scoring). An organization can be both if they build and use their own AI. Each role has distinct compliance obligations under laws like the Colorado AI Act.

Do small businesses need to comply with state AI laws?

It depends on the law. Some states (like California and Virginia) have revenue or data volume thresholds. Colorado AI Act applies based on high-risk AI use, not company size. NYC LL144 applies to any employer using AEDTs in NYC hiring. Illinois BIPA has no size exemption. Check specific law thresholds, but assume requirements apply if you’re using high-risk AI.

Key takeaways

  • Colorado leads: First comprehensive state AI law, effective June 2026 with reasonable care standard
  • Patchwork is growing: 30+ states have AI bills; major states have enacted targeted laws
  • NIST AI RMF provides safe harbor: Colorado explicitly recognizes framework compliance
  • Illinois BIPA is highest risk: Private right of action creates significant litigation exposure
  • National companies need unified approach: Implement highest common denominator controls
  • More regulation coming: California ADMT significant-decision obligations phase in April 2027, healthcare AI bills, and new state laws through 2026 and 2027

References

  1. [F1] White House, “Eliminating State Law Obstruction of National AI Policy” (Dec 11, 2025) — whitehouse.gov; Paul Hastings client alert (Dec 2025); Gibson Dunn analysis (Jan 2026).
  2. [F2] California Privacy Protection Agency, “California Finalizes Regulations to Strengthen Consumers’ Privacy” (Sept 23, 2025) — cppa.ca.gov; Skadden, Wiley, White & Case briefs (Sept–Oct 2025).
  3. [F3] Office of Governor Newsom, SB 53 signing statement (Sept 29, 2025) — gov.ca.gov; Future of Privacy Forum “California’s SB 53: The First Frontier AI Law, Explained”; Brookings (2025).
  4. [F4] Office of Governor Hochul, RAISE Act chapter amendment release (Mar 27, 2026) — governor.ny.gov; Wiley “New York Finalizes RAISE Act” (Mar 2026).
  5. [F5] Washington House Democrats, HB 1170 release (Feb 16, 2026); Cooley client alert (Apr 6, 2026); Mayer Brown “Oregon and Washington Join California in Enacting Companion Chatbot Laws” (Apr 2026).
  6. [F6] NY State Comptroller, “Enforcement of Local Law 144 — Automated Employment Decision Tools” (Dec 2, 2025) — osc.ny.gov; DLA Piper GENIE (Jan 2026).

Multi-state AI compliance

Ready to make the receipts? See what GLACIS can attest in 5 minutes.

Our evidence pack demonstrates compliance across multiple state frameworks — Colorado AI Act, California ADMT, NIST AI RMF, ISO/IEC 42001 — with the cryptographic logs an AG would expect. One investment, multi-jurisdictional coverage.

Build the evidence pack
