
California AI laws: the consolidated playbook for April 2026.

Eighteen statutes and rules now govern AI in California. The most-asked questions in 2026: AB 2013 training-data transparency (live), SB 53 frontier transparency (live), CPPA ADMT (phased), the FEHA employment-AI rules (live), the SB 942 watermarking law (delayed to August 2026), and where AB 1018 actually landed (held in inactive file).

By Joe Braidwood, CEO GLACIS·18 min read·Updated 24 April 2026

Oct 1 2025
FEHA employment-AI / ADS rules in force
Jan 1 2026
AB 2013 (training-data) + SB 53 (frontier) effective
Aug 2 2026
SB 942 (CAITA) operative date (delayed by AB 853)
Apr 1 2027
CPPA ADMT pre-use notices begin

SB 53 (Frontier AI Transparency Act) effective January 1, 2026. Roughly 5–8 frontier developers (OpenAI, Anthropic, Google DeepMind, Meta, Microsoft) in scope. Transparency reports required before deploying or substantially modifying a frontier model; annual publication of a safety framework aligned with NIST AI RMF or ISO/IEC 42001; 15-day critical-incident reporting (24 hours for imminent threats); whistleblower protections; up to $1M civil penalty per violation, enforced by the AG.[CA1]
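The tiered reporting windows can be sketched as a small deadline helper. This is a hypothetical illustration of the 15-day / 24-hour split described above; the function and field names are not from the statute.

```python
from datetime import datetime, timedelta

# Hypothetical helper illustrating SB 53's tiered reporting windows:
# 15 days for a critical safety incident, 24 hours when the incident
# poses an imminent threat. Names are illustrative, not statutory.
def report_deadline(discovered_at: datetime, imminent_threat: bool) -> datetime:
    window = timedelta(hours=24) if imminent_threat else timedelta(days=15)
    return discovered_at + window

found = datetime(2026, 3, 1, 9, 0)
# Standard incident: 15 calendar days out.
assert report_deadline(found, imminent_threat=False) == datetime(2026, 3, 16, 9, 0)
# Imminent threat: 24 hours out.
assert report_deadline(found, imminent_threat=True) == datetime(2026, 3, 2, 9, 0)
```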

SB 942 (California AI Transparency Act / CAITA) operative date moved. Originally Jan 1, 2026; AB 853 (signed Oct 13, 2025) extended the operative date to August 2, 2026. Watermarking, provenance metadata, and free public AI-detection tool obligations apply to covered providers.[CA2]

AB 1018 (Automated Decisions Safety Act) – ordered to inactive file September 13, 2025 at request of Sen. Wiener. Did not become law; expect re-introduction in 2026 session.[CA3]

AB 2013 (training-data transparency) live since January 1, 2026. Public disclosure for any generative AI system available to Californians since January 1, 2022. Enforced via the Unfair Competition Law — both AG and private-action exposure.[CA4]

FEHA employment-AI rules in force since October 1, 2025. Disparate-impact framework, 4-year record retention, anti-bias testing as defense.

Executive summary

California regulates AI across training data, employment, consumer disclosure, and political advertising—more domains than any other US state. AB 2013 (effective January 2026) requires generative AI developers to publicly disclose training data information. The Civil Rights Council’s employment AI rules (effective October 2025) make employers liable for AI-driven discrimination even without intent.

SB 1001 (2019) requires bots to disclose their artificial identity, while AB 2355 (2025) mandates AI disclosure in political advertising. The ambitious SB 1047 frontier AI safety bill was vetoed by Governor Newsom in September 2024, but its concepts continue to influence national AI policy discussions.

California’s deepfake laws targeting platforms (AB 2655) and distribution (AB 2839) have faced legal challenges—blocked and struck down respectively on First Amendment and Section 230 grounds—highlighting the constitutional complexities of AI content regulation.

AB 2013: Training Data Transparency

Signed September 28, 2024 and effective January 1, 2026, AB 2013 is the first US law requiring generative AI developers to publicly disclose training data information. It applies retroactively to systems released or substantially modified since January 1, 2022.

Required Disclosures

Developers must publicly disclose:

  • Description of datasets: How they further the AI system’s purpose
  • Number of data points: Scale of training data
  • IP content: Whether datasets include copyrighted, trademarked, or patented data
  • Data acquisition: Whether datasets were purchased or licensed
  • Personal information: Whether datasets contain personal or aggregate consumer information
  • Processing: Any cleaning, processing, or modification to datasets
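One way a compliance team might track the six disclosure categories internally is a simple structured record. This is a hypothetical sketch; AB 2013 does not prescribe a schema, and every field name here is illustrative.

```python
from dataclasses import dataclass, asdict

# Hypothetical internal record covering AB 2013's disclosure categories.
# Field names are illustrative; the statute does not prescribe a format.
@dataclass
class TrainingDataDisclosure:
    dataset_description: str      # how the dataset furthers the system's purpose
    num_data_points: int          # scale of training data
    contains_ip: bool             # copyrighted, trademarked, or patented material
    purchased_or_licensed: bool   # how the dataset was acquired
    contains_personal_info: bool  # personal or aggregate consumer information
    processing_notes: str         # cleaning, processing, or modification applied

record = TrainingDataDisclosure(
    dataset_description="Web text corpus supporting general language modeling",
    num_data_points=1_000_000_000,
    contains_ip=True,
    purchased_or_licensed=False,
    contains_personal_info=True,
    processing_notes="Deduplicated; PII filtering applied",
)
# asdict() yields a plain dict, ready to serialize for public posting.
assert len(asdict(record)) == 6
```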

Who’s Covered

  • Developers of generative AI systems
  • Systems available to California residents
  • Retroactive to January 1, 2022

Trade Secret Challenge

Companies must balance transparency with proprietary information protection. While trade secrets are not explicitly exempted, the disclosure requirements focus on categories and characteristics rather than specific dataset contents.

Employment AI Discrimination Rules

The California Civil Rights Council approved AI employment regulations on June 27, 2025, effective October 1, 2025. These rules apply the Fair Employment and Housing Act (FEHA) to Automated-Decision Systems (ADS) used in employment decisions.

Key Requirements

Liability Standard

  • Unlawful to use ADS resulting in discrimination
  • Liability even without discriminatory intent
  • Disparate impact creates liability

Defense & Evidence

  • Anti-bias testing can be used as defense
  • Absence of testing can be evidence against
  • Retain ADS records for 4 years
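Anti-bias testing of the kind the rules credit as a defense often starts with a selection-rate comparison such as the EEOC's four-fifths rule. The sketch below is illustrative of that standard heuristic; the FEHA ADS rules do not mandate this specific test, and the group labels are placeholders.

```python
# Illustrative four-fifths-rule check: compare each group's selection
# rate to the highest group's rate. A ratio below 0.8 is commonly
# treated as evidence of adverse impact. This is a standard EEOC
# heuristic, not a test spelled out in the FEHA ADS rules.
def adverse_impact_ratios(selected: dict, applicants: dict) -> dict:
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top = max(rates.values())
    return {g: round(rate / top, 3) for g, rate in rates.items()}

ratios = adverse_impact_ratios(
    selected={"group_a": 48, "group_b": 24},
    applicants={"group_a": 80, "group_b": 60},
)
# group_a selects at 0.60 (highest); group_b at 0.40, a 0.667 ratio,
# which falls below the 0.8 threshold.
assert ratios["group_a"] == 1.0
assert ratios["group_b"] < 0.8
```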

Record Retention (4 Years)

  • Selection criteria: How the ADS evaluates candidates
  • Outputs: Decisions and recommendations
  • Audit findings: Bias testing results

Political Ads & Bot Disclosure

SB 1001

Bot Disclosure • Effective July 2019

  • Platforms with 10M+ monthly US users
  • Unlawful to deceive about artificial identity
  • Covers commercial transactions & elections
  • "Clear, conspicuous" disclosure required

AB 2355

Political AI Ads • Effective Jan 2025

  • Committees with $2,000+ contributions
  • Required disclaimer on AI-generated ads
  • "Generated or substantially altered using AI"
  • FPPC enforcement

Deepfake Laws: Legal Challenges

AB 2655 (Blocked)

Required platforms to remove deceptive election content. Blocked by a federal court from January 3 through June 28, 2025, on Section 230 preemption grounds.

AB 2839 (Struck Down)

Prohibited distribution of deceptive AI content near elections. Struck down in October 2024 as a First Amendment violation; the judge ruled it "hinders humorous expression."

SB 1047: The Vetoed Frontier AI Bill

Vetoed September 29, 2024

Governor Newsom vetoed SB 1047, stating it "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data." Despite the veto, the bill’s concepts continue to influence AI policy discussions nationally.

What SB 1047 Would Have Required

Coverage

  • Models costing $100M+ to train
  • Models with 10²⁶+ FLOPs
  • "Frontier" AI models only

Requirements

  • Safety and security protocols
  • Shutdown capabilities
  • Third-party annual audits (from 2026)
  • 72-hour incident reporting
  • Whistleblower protections

Critical harms defined: WMD creation, cyberattacks on critical infrastructure ($500M+ damage), autonomous crimes causing mass casualties. Penalties would have been up to 10% of training computing costs.

Notable Industry Support

Despite industry opposition, SB 1047 had surprising support from within AI companies:

  • xAI CEO Elon Musk publicly supported the bill
  • 113+ employees of OpenAI, DeepMind, Anthropic, Meta, and xAI signed letters of support

References

  1. [CA1] Office of Governor Newsom, SB 53 signing statement (Sept 29, 2025) — gov.ca.gov; Future of Privacy Forum, “California’s SB 53: The First Frontier AI Law, Explained”; Brookings, “What is California’s AI safety law?”.
  2. [CA2] Troutman Pepper Locke, “California AI Transparency Act Amendments Signed Into Law” (Oct 2025); California legislative information for AB 853.
  3. [CA3] California legislative information, AB 1018 status (inactive file Sep 13, 2025); EPIC summary; Foley & Lardner (Mar 2026).
  4. [CA4] Crowell & Moring, “California’s AB 2013 Requires Generative AI Data Disclosure by January 1, 2026”; Goodwin, “California’s AB 2013 Takes Effect” (Jan 2026); Davis+Gilbert.

Operating AI in California?

Make the receipts. Book the Sprint.

The Glacis Agent Runtime Security & Evidence Sprint produces signed evidence receipts of training-data disclosure (AB 2013), frontier transparency reporting (SB 53), ADMT pre-use notices, and FEHA disparate-impact testing — the kind of evidence a CPPA inspector would expect. Runtime controls run inside your infrastructure with zero sensitive-data egress.

Book the Agent Runtime Security Sprint · See a sample evidence pack →