GLACIS·EU AI Act series·General Counsel·Updated April 2026

EU AI Act for General Counsel.

A walkthrough of the Regulation for legal teams: Article 2 extraterritorial scope, vendor and deployer liability allocation, post-market obligations under Articles 26 and 72, the April 2026 enforcement gap, and what the Digital Omnibus on AI delay does to procurement clauses already drafted around the original 2 August 2026 baseline.

Book the Agent Runtime Security Sprint · Series hub →
General Counsel · Deputy GC · Privacy counsel · Regulatory affairs
Article 2 · Extraterritoriality — output used in the EU
Articles 25, 26 · Provider/deployer liability allocation, vendor reps and warranties
Articles 72, 73, 99 · Post-market obligations and the penalty structure (€35M / 7%)
Apr 2026 · No public enforcement actions to date — supervisory authorities still finalising set-up
What changed in April 2026

The Digital Omnibus on AI is in trilogue. Both Parliament (IMCO/LIBE, joint position adopted 18 March 2026) and Council favour fixed deadlines: 2 December 2027 for stand-alone high-risk systems and 2 August 2028 for systems embedded in regulated products. Procurement and vendor clauses written around the original 2 August 2026 baseline now need a "pending Omnibus adoption" carve-out. Until adopted, the original date remains the operative working baseline.

No public enforcement actions for prohibited practices have been confirmed. Several member states are still finalising their market surveillance authorities — Germany has confirmed BNetzA, Belgium has designated BIPT, Italy's Law 132/2025 is in force with AgID and ACN named, and the Netherlands ran a public consultation on its Implementation Act from 20 April to 1 June 2026. The recognised "enforcement gap" is real, but it does not negate the obligations.

The GPAI Code of Practice has been signed by roughly two dozen providers; Meta did not sign and xAI signed only the Safety and Security chapter. For procurement, this changes the standard vendor representations on Article 55 GPAI obligations — deployers using a non-signatory model are operating without an industry-recognised compliance baseline.

By Joe Braidwood·14 min read·Updated April 24, 2026

Executive summary

The Regulation creates obligations on system behaviour, not just on data handling. Article 2 extends the scope to non-EU providers and deployers whose AI outputs are used in the Union. Article 3 substantially expands the definition of "provider" — entities that materially modify a third-party AI system, or place it on the market under their own name, can inherit provider-level obligations and liability. The proposed AI Liability Directive, if adopted, would introduce a rebuttable presumption of causation when a defendant cannot demonstrate AI-Act compliance.

Penalty ceilings under Article 99 are €35M or 7% of global turnover for prohibited practices, €15M or 3% for other non-compliance, and €7.5M or 1% for incorrect information to authorities. As of April 2026 there are no public enforcement actions, but several member states have completed their authority designations and the Article 73 15-day serious-incident reporting requirement is already operative for high-risk systems on the market.
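The tiered ceilings are "whichever is higher" tests, so exposure scales with worldwide turnover. A quick arithmetic sketch in Python (the tier labels and function name are ours; the figures are from Article 99):

```python
def penalty_ceiling(annual_turnover_eur: int, tier: str) -> int:
    """Article 99 ceilings: the higher of a fixed amount or a percentage of
    total worldwide annual turnover (integer euros avoid float rounding)."""
    tiers = {
        "prohibited_practices": (35_000_000, 7),   # Art. 5 violations: EUR 35M or 7%
        "other_noncompliance": (15_000_000, 3),    # e.g. Arts. 9-15, 26: EUR 15M or 3%
        "incorrect_information": (7_500_000, 1),   # misleading authorities: EUR 7.5M or 1%
    }
    fixed, pct = tiers[tier]
    return max(fixed, annual_turnover_eur * pct // 100)

# A group with EUR 2bn worldwide turnover: 7% = EUR 140M exceeds the EUR 35M floor.
print(penalty_ceiling(2_000_000_000, "prohibited_practices"))  # 140000000
```

For the top tier the percentage prong governs above roughly €500M in turnover; below that, the fixed floor does.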

The legal-defence posture rests on contemporaneous evidence: Article 9 risk-management records, Article 11 technical documentation, Article 12 logs, Article 14 oversight traces, Article 73 incident files. This guide maps the Act onto General Counsel responsibilities, addresses the dual framing required while the Omnibus is in trilogue, and shows how GLACIS produces the evidence trail with chain-of-custody preserved.

Why the Act creates new legal exposure

The EU AI Act represents a fundamental shift in how organizations must approach AI deployment. For General Counsel, three aspects create particularly significant legal exposure:

Behavioural obligations, not just data rules

GDPR focused on how organizations handle data. The AI Act focuses on how AI systems behave and make decisions. This means legal liability now extends to algorithmic outputs, model accuracy, bias in automated decisions, and the effectiveness of human oversight mechanisms. These are areas where legal teams historically had limited visibility.

Expanded definition of "provider"

Under Article 3, organizations that substantially modify AI systems or put their name on AI products may become "providers"—assuming full compliance obligations including conformity assessment. A company that integrates a third-party AI model into a high-risk use case (employment screening, credit decisions) may inherit provider-level liability regardless of who built the underlying model.

The AI Liability Directive

The proposed EU AI Liability Directive would create a rebuttable presumption of causation when AI systems cause harm and the defendant cannot demonstrate compliance. (The Commission signalled withdrawal of the 2022 proposal in its 2025 work programme, so its final form and timing remain uncertain.) Burden-shifting of this kind effectively moves the proof obligation to defendants: organizations unable to produce evidence of proper risk management, testing, and oversight will face significant disadvantages in litigation.

Key General Counsel responsibilities under the Act

Liability assessment and risk allocation

General Counsel must map AI systems across the organization and classify them according to the Act’s risk taxonomy. For each high-risk system, liability must be clearly allocated between internal teams, vendors, and partners. Key questions include:

  • Who bears liability for model performance and accuracy?
  • How is responsibility allocated when multiple parties contribute to a system?
  • What insurance coverage exists for AI-specific liability?
  • Are indemnification provisions adequate for regulatory penalties?

Contractual obligations

Vendor agreements require immediate review. Contracts with AI providers must address:

  • Clear allocation of provider vs. deployer obligations
  • Representations regarding risk classification and conformity status
  • Audit rights for compliance verification
  • Incident notification aligned with Article 73 (15-day serious incident reporting)
  • Indemnification for regulatory penalties from vendor non-compliance
  • Data governance warranties per Article 10
  • Documentation delivery for downstream compliance

Customer terms must be updated to include appropriate AI disclosures, particularly for systems requiring transparency under Article 50 (chatbots, emotion recognition, deepfakes).

Regulatory-engagement strategy

The AI Act establishes national competent authorities in each member state, coordinated by the EU AI Office. General Counsel should develop relationships with relevant authorities before enforcement actions arise. Consider:

  • Identifying which member state authorities have jurisdiction
  • Monitoring regulatory guidance and codes of practice
  • Participating in regulatory sandboxes where available
  • Preparing for potential market surveillance activities

Evidence-preservation requirements

Article 12 requires high-risk AI systems to generate logs that enable monitoring and investigation. General Counsel must ensure:

  • Logging systems capture decision-relevant data
  • Retention periods meet regulatory requirements
  • Legal hold procedures extend to AI system logs
  • Chain of custody protocols exist for algorithmic evidence
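One way to make the logging and chain-of-custody points concrete: an append-only log in which each entry commits to its predecessor's hash, so any after-the-fact edit breaks the chain. A minimal Python sketch (illustrative only, not GLACIS's mechanism; production systems would add signing keys and external anchoring):

```python
import hashlib
import json
from datetime import datetime, timezone

class EvidenceLog:
    """Append-only log: each entry embeds the previous entry's hash,
    making later alteration of any record detectable on verification."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value for the first entry

    def append(self, event: dict) -> str:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._prev,
        }
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._prev = digest
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for rec in self.entries:
            body = {k: rec[k] for k in ("ts", "event", "prev_hash")}
            if rec["prev_hash"] != prev:
                return False  # chain linkage broken
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
                return False  # entry content was modified
            prev = rec["hash"]
        return True

log = EvidenceLog()
log.append({"system": "credit-scoring-v2", "action": "human_override", "user": "analyst-17"})
assert log.verify()
log.entries[0]["event"]["user"] = "someone-else"  # tampering...
assert not log.verify()                           # ...is detectable
```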

Documentation and disclosure obligations

Article 11 mandates comprehensive technical documentation before high-risk systems enter the market. Article 13 requires transparency for users. General Counsel oversight ensures documentation is legally sound and disclosures don’t create unintended liability exposure.

Questions General Counsel should be asking the organisation

AI Inventory and Classification

"Do we have a complete inventory of AI systems, and has each been classified under the EU AI Act risk categories? Who made those classification decisions, and is the rationale documented?"

Vendor Compliance

"For AI systems we procure, have we verified our vendors’ conformity status? Do our contracts clearly allocate EU AI Act obligations, and do we have audit rights?"

Evidence Generation

"If a regulator requested evidence that our risk management system operates effectively, what would we produce? Is that evidence timestamped and tamper-evident, or would we be reconstructing from scattered logs?"

Human Oversight

"Can we demonstrate that humans actually review and can override AI decisions? Is there an audit trail of human interventions, or just a policy saying oversight exists?"

Incident Response

"Do we have a protocol for AI-related incidents that meets the 15-day serious incident reporting requirement? Has legal been involved in defining what constitutes a reportable incident?"
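The clocks themselves are simple date arithmetic, and Article 73 is tiered: 15 days for serious incidents generally, 10 days where a death is involved, and 2 days for widespread infringement or critical-infrastructure incidents. A sketch (the category labels are ours; verify classification against the Article's text):

```python
from datetime import date, timedelta

# Article 73 notification clocks, counted from the provider's awareness.
ARTICLE_73_DAYS = {
    "serious_incident": 15,         # general rule (Art. 73(2))
    "death": 10,                    # incident involving the death of a person
    "widespread_infringement": 2,   # widespread / critical-infrastructure cases
}

def reporting_deadline(awareness_date: date, incident_type: str) -> date:
    """Latest notification date for a given incident category."""
    return awareness_date + timedelta(days=ARTICLE_73_DAYS[incident_type])

print(reporting_deadline(date(2026, 4, 24), "serious_incident"))  # 2026-05-09
```

Note that the Article requires reporting "immediately" with these as outer limits, so a protocol should treat the deadline as a backstop, not a target.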

Board Awareness

"Has the board been briefed on AI-related legal exposure? Are AI risks included in enterprise risk management, and is the board receiving regular updates?"

Red flags indicating legal and compliance gaps

No AI System Inventory

If the organization cannot produce a comprehensive list of AI systems in use, classification and compliance are impossible.

"We’re Just Using Vendor Tools"

Belief that vendor-provided AI absolves organizational liability. Deployers have independent obligations; integration into high-risk use cases may trigger provider-level duties.

Documentation Exists Only as Policies

Policies describing what should happen without evidence of what actually happens. Regulators will demand operational proof.

No AI-Specific Contract Language

Vendor and customer contracts that don’t address AI-specific obligations, liability allocation, or compliance representations.

Human Oversight is Theoretical

Claims of human-in-the-loop processes without audit trails showing humans actually review decisions or documentation of override capabilities.

IT Owns AI Governance Alone

AI governance treated as a technical function without legal, compliance, and business unit involvement. This siloed approach misses liability implications.

Personal-liability considerations

While the EU AI Act primarily imposes organizational penalties, General Counsel should be aware of pathways to personal liability:

Member State Implementation

Individual member states may implement the AI Act in ways that create personal liability for directors or officers. Monitor transposition legislation in key jurisdictions where the organisation operates.

Civil Litigation

When AI systems cause harm, affected parties may pursue civil claims against executives for negligent oversight. The AI Liability Directive's burden-shifting, if enacted, would make such claims easier to sustain.

Fiduciary Duties

Directors have fiduciary obligations that include overseeing material risks. AI presents board-level risks; failure to ensure adequate governance may breach fiduciary duties.

Regulatory Action Against Individuals

In egregious cases—particularly involving prohibited AI practices or willful non-compliance—regulators may pursue action against responsible individuals, especially where they can demonstrate knowledge of violations.

Affirmative-defence requirements (Colorado AI Act intersection)

The Colorado AI Act (SB 24-205), effective June 30, 2026, provides a notable affirmative defense framework relevant to US organizations also subject to EU AI Act obligations.

The Colorado "Cure" Defense

Colorado provides developers and deployers an affirmative defense if they:

  1. Discover the violation through reasonable monitoring
  2. Cure the violation within a reasonable timeframe
  3. Notify affected consumers where required
  4. Document the discovery and remediation

This defense is only available to organizations with functioning compliance programs. Continuous monitoring that generates contemporaneous evidence is essential—you cannot invoke a "cure" defense if you lack systems to discover violations in the first place.

Implications for EU AI Act Compliance

Organizations operating under both regimes should align their compliance infrastructure. The EU AI Act’s logging requirements (Article 12) and post-market monitoring obligations (Article 72) create the operational foundation needed to invoke Colorado’s affirmative defense.

Evidence standards for regulatory defence

When regulators investigate or litigation arises, evidence quality determines outcomes. General Counsel must understand what constitutes defensible evidence under AI regulations:

Contemporaneous Documentation

Evidence generated in real-time carries far more weight than after-the-fact reconstruction. Timestamped logs showing controls executed at specific moments defeat arguments that compliance was merely aspirational.

Tamper-Evident Records

Regulators are sophisticated enough to question whether logs have been modified. Cryptographic attestation—evidence that hasn’t been and cannot be altered—provides the strongest foundation for defense.
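A minimal illustration of the property: when a keyed MAC covers both the payload and its timestamp, the record cannot be edited or backdated without the check failing. This sketch uses a shared secret for brevity; real attestation systems would use asymmetric signatures and an independent timestamp source:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"hsm-held-secret"  # illustrative; keep real keys in an HSM/KMS

def attest(control_id: str, outcome: dict) -> dict:
    """Produce a timestamped attestation whose MAC covers both payload
    and timestamp, so neither can change without detection."""
    payload = {
        "control": control_id,   # e.g. "art14-human-oversight" (our label)
        "outcome": outcome,
        "attested_at": datetime.now(timezone.utc).isoformat(),
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["mac"] = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify(att: dict) -> bool:
    body = {k: v for k, v in att.items() if k != "mac"}
    msg = json.dumps(body, sort_keys=True).encode()
    return hmac.compare_digest(att["mac"], hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest())

a = attest("art14-human-oversight", {"reviewer": "ops-3", "decision": "override"})
assert verify(a)
a["attested_at"] = "2025-01-01T00:00:00+00:00"  # backdating attempt
assert not verify(a)                             # signature no longer matches
```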

Mapping to Regulatory Requirements

Evidence must clearly correspond to specific regulatory obligations. General documentation about "AI governance" is less valuable than evidence specifically demonstrating Article 9 risk management, Article 10 data governance, or Article 14 human oversight.

The "Proof Gap" Problem

Most organizations suffer from a "proof gap"—the difference between controls that exist on paper and verifiable evidence that controls actually operate. Closing this gap is essential for regulatory defense. Policy documents prove intent; operational evidence proves execution.

Working with other stakeholders

Chief Information Security Officer (CISO)

Coordinate on: logging infrastructure, data security for AI systems, cybersecurity requirements under Article 15, incident detection and response, vulnerability management for AI-specific threats.

Chief Compliance Officer (CCO)

Coordinate on: compliance program design, regulatory mapping, training and awareness, audit schedules, remediation tracking, policy development.

Business Unit Leaders

Coordinate on: AI use case identification, risk classification input, operational implementation of controls, human oversight execution, incident escalation protocols.

Data Protection Officer (DPO)

Coordinate on: GDPR/AI Act intersection, data governance under Article 10, privacy impact assessments, cross-border data considerations, subject access requests involving AI.

Board reporting on AI risk

General Counsel should ensure the board receives regular, substantive reporting on AI-related legal exposure:

Recommended Board Reporting Elements

  • AI System Inventory: Number and classification of AI systems, changes since last report
  • Compliance Status: Progress against regulatory deadlines, gap analysis, remediation timelines
  • Incident Summary: AI-related incidents, near-misses, regulatory inquiries
  • Regulatory Developments: New guidance, enforcement actions in the industry, legislative updates
  • Risk Quantification: Estimated exposure, insurance coverage, liability reserves
  • Resource Needs: Budget, personnel, and technology requirements for compliance

Litigation-readiness checklist


How GLACIS provides defensible evidence

GLACIS addresses the core challenge General Counsel face: producing evidence that AI controls actually operate, not just documentation that they should.

Cryptographic Attestation

GLACIS generates tamper-evident proof that controls executed at specific moments. Attestations cannot be backdated or modified—providing the evidentiary foundation regulators and courts require.

Regulatory Mapping

Evidence is automatically mapped to EU AI Act articles, NIST AI RMF functions, and ISO 42001 controls—enabling instant demonstration of compliance against specific requirements.
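Conceptually this is a crosswalk from evidence types to framework references, which can be pictured as a lookup table. The entries below are illustrative placeholders, not GLACIS's actual schema:

```python
# Illustrative crosswalk: one evidence type can satisfy several frameworks
# at once. Article/clause references are examples, not an exhaustive mapping.
CROSSWALK = {
    "risk_assessment_record": {
        "eu_ai_act": ["Art. 9"],
        "nist_ai_rmf": ["MAP", "MEASURE"],
        "iso_42001": ["6.1"],
    },
    "runtime_decision_log": {
        "eu_ai_act": ["Art. 12"],
        "nist_ai_rmf": ["MEASURE"],
        "iso_42001": ["9.1"],
    },
    "human_override_trace": {
        "eu_ai_act": ["Art. 14"],
        "nist_ai_rmf": ["MANAGE"],
        "iso_42001": ["8.1"],
    },
}

def frameworks_covered(evidence_types: list[str]) -> dict[str, set[str]]:
    """Aggregate which framework requirements a given evidence bundle speaks to."""
    covered: dict[str, set[str]] = {}
    for ev in evidence_types:
        for framework, refs in CROSSWALK.get(ev, {}).items():
            covered.setdefault(framework, set()).update(refs)
    return covered

print(frameworks_covered(["runtime_decision_log", "human_override_trace"]))
```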

Continuous Monitoring

Rather than point-in-time audits, GLACIS provides ongoing verification that controls remain effective—essential for Colorado’s affirmative defense and EU post-market monitoring.

Audit-Ready Reports

Board reports, regulatory submissions, and litigation support packages generated on demand—reducing the burden on legal teams during high-pressure situations.

Frequently asked questions

What are the key dates General Counsel should track?

February 2, 2025: prohibited AI practices banned. August 2, 2025: GPAI model obligations apply. August 2, 2026: high-risk AI system requirements in full effect (the pending Digital Omnibus would move this to 2 December 2027, and to 2 August 2028 for embedded systems). August 2, 2027: original extended deadline for high-risk AI embedded in regulated products, including medical devices. The Colorado AI Act takes effect June 30, 2026.

How should we handle AI systems from US-based vendors?

Vendor location doesn't determine compliance obligations — deployment location and where outputs are used do. If you deploy a US vendor's AI system in the EU, or its output is used in the Union (Article 2), the EU AI Act applies. Vendor contracts must address EU-specific obligations, and you should verify vendors can support your compliance needs (documentation, conformity evidence, incident notification).

What’s the relationship between GDPR and AI Act enforcement?

The regulations are complementary and enforced by overlapping (but not identical) authorities. AI systems processing personal data must comply with both. National competent authorities for the AI Act will coordinate with data protection authorities. Non-compliance can trigger penalties under both frameworks—potentially doubling exposure for a single system.

Should we engage with regulatory sandboxes?

Regulatory sandboxes (Articles 57–62) offer valuable benefits: regulatory guidance during development, potential for modified obligations, and relationship-building with authorities. For organizations developing novel AI applications, sandbox participation can reduce compliance uncertainty. However, sandbox benefits don't exempt you from core obligations, and sandbox interactions create records that may be discoverable.

How do we handle existing AI systems that may not comply?

Conduct an immediate gap analysis. For systems that cannot achieve compliance by applicable deadlines, options include: (1) modification to meet requirements, (2) re-classification to a lower risk category if legitimately appropriate, (3) geographic restriction to exclude EU markets, or (4) decommissioning. Document the analysis and decision rationale—regulators will scrutinize "re-classification" decisions carefully.

What privilege considerations apply to AI compliance work?

Structure AI audits and assessments carefully to preserve privilege where appropriate. Legal-directed compliance assessments may qualify for attorney-client privilege or work product protection. However, operational compliance documentation (logs, attestations, routine monitoring) generally won’t be privileged. Consult with outside counsel on privilege strategies before commencing major AI compliance initiatives.

References

  1. European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
  2. European Commission. "Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive)." September 28, 2022. europa.eu
  3. Colorado General Assembly. "SB24-205: Consumer Protections for Artificial Intelligence." 2024. leg.colorado.gov
  4. European Commission. "Questions and Answers: Artificial Intelligence Act." March 13, 2024. europa.eu
  5. ISO/IEC. "ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System." December 2023. iso.org

Defensible evidence

Close the proof gap before a competent authority asks.

The Glacis Agent Runtime Security & Evidence Sprint produces the contemporaneous record the Regulation contemplates: Article 9 risk telemetry, Article 11 technical documentation pulled from runtime, and Article 12 logs as signed evidence receipts with chain-of-custody. Runtime controls run inside your infrastructure with zero sensitive-data egress. The record is assembled before the inquiry, not during it.

Book the Agent Runtime Security Sprint · See a sample evidence pack →

Related guides

EU AI Act series hub · Articles, penalty structure, GLACIS coverage map.
Full compliance guide · Risk categories, Articles 9–15 in detail, GPAI, conformity assessment, Omnibus status.
EU AI Act vs HIPAA · Side-by-side crosswalk for healthcare and life-sciences operators with US obligations.
Colorado AI Act · $20K-per-violation US analogue with affirmative-defence framework.
For CCOs · Articles 9, 11, 17, 26 framed for the compliance lead.
The proof gap (whitepaper) · Why compliance documentation alone is insufficient.