
Runtime evidence for the Colorado AI Act

SB 24-205 is the first comprehensive US state AI law — $20,000 per violation per consumer or transaction, effective June 30, 2026. GLACIS gives developers and deployers of consumer-facing high-risk AI a runtime evidence layer for algorithmic-discrimination controls and impact assessments — the trail that supports your rebuttable presumption of reasonable care.

What the law says

Who it covers

Developers and deployers of high-risk AI systems making consequential decisions in employment, education, housing, healthcare, financial services, insurance, legal, and government sectors.

Key requirements

  • Duty of care to prevent algorithmic discrimination
  • Impact assessments (initial + annual)
  • Risk management aligned with NIST AI RMF / ISO 42001
  • Consumer notification before consequential decisions
  • Public disclosure on website
  • Report discrimination to AG within 90 days

When it takes effect

June 30, 2026

The Attorney General's exclusive enforcement begins on day one. A 60-day cure period exists as an affirmative defense, but only if you discover the issue through your own testing. Without monitoring, there's nothing to cure.

What happens if you don't

$20,000
per violation, per consumer or transaction

If your AI system processes 100 transactions per day, that's $2,000,000 per day in potential penalty exposure. The Attorney General has exclusive enforcement authority.

The rebuttable presumption — and how to earn it

Colorado offers a rebuttable presumption of reasonable care for organizations that demonstrate NIST AI RMF or ISO 42001 compliance — the strongest AI-specific reasonable-care defense in US law.

Policies alone don't qualify

Having a PDF that describes your NIST mapping isn’t enough. The rebuttable presumption requires evidence that you actually followed the framework — not just that you documented it.

GLACIS strengthens the defense

GLACIS runs inside your infrastructure. Local runtime controls emit signed evidence receipts for every consequential decision — continuously assembled into evidence packs that show your NIST AI RMF-mapped controls actually executed. Independently verifiable, with zero sensitive-data egress.

How GLACIS gets you there

1. Scope the workflow

Pick one consumer-facing high-risk AI workflow. We map it to Colorado’s algorithmic-discrimination duty of care and the NIST AI RMF subcategories that anchor the rebuttable presumption.

2. Stand up runtime evidence

Local runtime controls deploy inside your infrastructure. Every consequential decision emits a signed evidence receipt — independently verifiable, with zero sensitive-data egress.
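To make the receipt idea concrete, here is a minimal sketch of what a signed decision receipt could look like. The field names, the HMAC-SHA256 scheme, and the shared key are illustrative assumptions, not GLACIS's actual format (a production system would use asymmetric signatures so third parties can verify without the signing secret):

```python
import hashlib
import hmac
import json

# Hypothetical per-deployment signing secret; GLACIS's real key handling differs.
SIGNING_KEY = b"local-deployment-key"

def emit_receipt(decision: dict, key: bytes = SIGNING_KEY) -> dict:
    """Wrap a consequential-decision record in a signed receipt."""
    body = {
        "decision": decision,   # e.g. model ID, input hash, outcome
        "ts": 1719700000,       # fixed for determinism; use a real timestamp in practice
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_receipt(receipt: dict, key: bytes = SIGNING_KEY) -> bool:
    """Recompute the signature; any tampering with the body invalidates it."""
    payload = json.dumps(receipt["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```

Because only the signature and a hash of the inputs leave the decision path, the record can be checked later without exporting the underlying consumer data.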

3. Ship a signed evidence pack

Receipts assemble into an evidence pack your General Counsel can hand to the AG, your customers, or your board — impact assessments and control execution, in one verifiable artifact.
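One common way to make such a pack tamper-evident is to hash-chain the receipts, so deleting or reordering any entry breaks verification. This is a sketch of that general technique under assumed structure, not GLACIS's documented pack format:

```python
import hashlib
import json

GENESIS = "0" * 64  # chain starts from a fixed all-zero hash

def build_pack(receipts: list) -> dict:
    """Chain each receipt's hash to the previous one; the final hash is the root."""
    prev = GENESIS
    entries = []
    for r in receipts:
        h = hashlib.sha256((prev + json.dumps(r, sort_keys=True)).encode()).hexdigest()
        entries.append({"receipt": r, "hash": h})
        prev = h
    return {"entries": entries, "root": prev}

def verify_pack(pack: dict) -> bool:
    """Replay the chain; any removed, reordered, or edited receipt changes the root."""
    prev = GENESIS
    for e in pack["entries"]:
        h = hashlib.sha256((prev + json.dumps(e["receipt"], sort_keys=True)).encode()).hexdigest()
        if h != e["hash"]:
            return False
        prev = h
    return prev == pack["root"]
```

An auditor who trusts only the root hash can replay the chain and confirm every receipt in the pack, which is what lets one artifact cover both the impact assessment and proof of control execution.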

Scope a Colorado AI Act evidence sprint

$48k · ten business days · one named workflow. Signed evidence pack ready to share with your GC.

Get started

Start with one high-risk AI workflow.

Book a focused Agent Runtime Security & Evidence Sprint, then deploy runtime assurance where the risk is real.

From assessment to platform deployment. See pricing →