
EU AI Act for the chief compliance officer.

A walkthrough of Articles 9, 11, 17 and 26 framed for the compliance lead. The Digital Omnibus on AI is in trilogue with proposed delays to 2 December 2027 (stand-alone) and 2 August 2028 (embedded), but the working baseline is still 2 August 2026 — and the technical documentation, quality management system, and Article 12 evidence trail need to be ready either way.

  • Article 9: Risk-management system across the AI lifecycle
  • Article 11: Technical documentation per Annex IV
  • Article 17: Quality management system covering compliance, post-market monitoring, incident reporting
  • Article 26: Deployer obligations (instructions for use, human oversight, monitoring)
What changed in April 2026

The Digital Omnibus on AI moved into trilogue on 23 March 2026 with both Parliament and Council favouring fixed deadlines: 2 December 2027 for stand-alone high-risk systems and 2 August 2028 for systems embedded in regulated products. The delay buys preparation time, not relief — Article 17 quality management still has to exist on the original Aug 2026 baseline if the Omnibus stalls.

The GPAI Code of Practice was finalised in July 2025 and is now signed by roughly two dozen providers (Anthropic, Google, IBM, Microsoft, Mistral, OpenAI, Cohere, Aleph Alpha, Almawave and others). Meta did not sign; xAI signed only the Safety and Security chapter. Vendor due-diligence under Article 25 should now ask each upstream model provider for the chapter-by-chapter signature record. CEN-CENELEC accelerated the harmonised-standards programme (target Q4 2026), so prEN 18286 (quality management) becomes the reference Article 17 anchor.

By Joe Braidwood·12 min read·Updated April 24, 2026

Executive summary

As CCO you own the AI inventory, the quality management system under Article 17, the conformity-assessment file, and the post-market monitoring loop that feeds Article 72 incident reporting. The €35M / 7% turnover ceiling under Article 99 sits with the legal entity, but the audit trail that decides whether the regulator gets there is built — or not — by the compliance function.

The Digital Omnibus on AI proposes to push the high-risk application date from 2 August 2026 to 2 December 2027 (stand-alone) and 2 August 2028 (embedded). Most law firms now publish dual framing: continue planning to the original Aug 2026 deadline as the prudent baseline; treat the delayed dates as planning room, not relief.

This guide maps the CCO obligations onto specific Articles, gives an audit-readiness checklist that assumes a notified-body or competent-authority review, and shows how GLACIS produces the Article 12 evidence trail without manual collection.

Why the Act demands new compliance infrastructure

Traditional compliance frameworks assume human decision-making with clear audit trails. The Regulation recognises that AI systems operate differently — making thousands of decisions per second, drifting between releases, and producing outputs that traditional controls can’t inspect. The result is three structural challenges for the compliance function.

1. Scale of decision-making

A single AI system can process millions of transactions a day. Traditional sampling-based audits don’t produce meaningful assurance at that scale. The Article 12 design assumes automated, continuous logging captured in real time, not after-the-fact reconstruction.

2. Technical complexity

Assessing whether an AI system complies draws on machine-learning, security, and data-governance expertise simultaneously. Most compliance teams now run a hybrid model: a CCO-owned programme office plus engineering and security partners who carry the technical assertions inside the QMS.

3. Evidence requirements

The Regulation calls for specific artefacts: Annex IV technical documentation, Article 9 risk-management records, Article 12 logs, and a conformity declaration. Policies and procedures alone do not satisfy a notified body; Article 12 requires automatically generated runtime logs, and tamper-evident records are what make the claim that controls actually executed hold up under scrutiny.

GDPR compliance infrastructure does not stretch to cover AI; the Regulation introduces obligations (Article 10 data governance, Article 14 human oversight, Article 15 robustness) that have no GDPR analogue. The CCO programme has to be purpose-built.

Key CCO responsibilities under the Act

The Regulation assigns obligations to "providers" and "deployers" of AI systems. The CCO carries the programmatic responsibility for ensuring those obligations are met across the inventory. The six anchors:

Risk classification and inventory management

Knowing exactly which AI systems are in use and how each is classified under the Regulation is the foundation of every other obligation. The inventory should cover the following (a minimal record sketch follows the list):

  • Complete AI inventory: every AI system in development, testing, and production — including third-party AI embedded in vendor products and procurement-acquired SaaS.
  • Risk classification: map each system to prohibited (Article 5), high-risk (Annex III or Annex I products), limited-risk (Article 50 transparency), or minimal-risk categories.
  • Role determination: document whether the entity is the provider, the deployer, or both for each system — this drives which obligations apply.
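
One lightweight way to make that inventory queryable is to hold each system as a structured record. A minimal sketch in Python; the field names and example values are illustrative, not a prescribed schema:

    from dataclasses import dataclass, field
    from enum import Enum

    class RiskClass(Enum):
        PROHIBITED = "prohibited"        # Article 5
        HIGH_RISK = "high_risk"          # Annex III, or Annex I products
        LIMITED_RISK = "limited_risk"    # Article 50 transparency
        MINIMAL_RISK = "minimal_risk"

    @dataclass
    class AISystemRecord:
        system_id: str                   # unique identifier that survives renames
        name: str
        risk_class: RiskClass
        classification_rationale: str    # written rationale, signed off
        roles: list[str]                 # "provider", "deployer", or both
        third_party_components: list[str] = field(default_factory=list)

    # Example: a vendor CV-screening tool the organisation deploys
    record = AISystemRecord(
        system_id="ai-0042",
        name="CV screening assistant",
        risk_class=RiskClass.HIGH_RISK,  # Annex III point 4 (employment)
        classification_rationale="Filters and ranks job applications",
        roles=["deployer"],
        third_party_components=["vendor-llm-api"],
    )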

Quality management system oversight (Article 17)

High-risk AI providers must establish a quality management system (QMS) covering the entire AI lifecycle. Article 17 specifies that the QMS must include the elements below; a queryable coverage-map sketch follows the list:

  • Strategy for regulatory compliance
  • Techniques and procedures for design, development, and testing
  • Examination, test, and validation procedures
  • Technical specifications including standards applied
  • Systems and procedures for data management
  • Risk management system per Article 9
  • Post-market monitoring system per Article 72
  • Incident reporting procedures per Article 73
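
Because a reviewer will probe each Article 17 element for an owner and an evidence artefact, it helps to hold the coverage map as data so gaps are queryable rather than buried in prose. A sketch; the owners and artefact paths are hypothetical:

    # Article 17 QMS elements mapped to owners and evidence artefacts.
    QMS_COVERAGE = {
        "regulatory_compliance_strategy": {"owner": "CCO", "artefact": "qms/strategy.md"},
        "design_dev_test_procedures":     {"owner": "Engineering", "artefact": "qms/sdlc.md"},
        "examination_validation":         {"owner": "QA", "artefact": "qms/validation.md"},
        "technical_specifications":       {"owner": "Engineering", "artefact": "qms/specs.md"},
        "data_management":                {"owner": "Data team", "artefact": "qms/data-gov.md"},
        "risk_management_art9":           {"owner": "CCO", "artefact": "qms/risk-register.xlsx"},
        "post_market_monitoring_art72":   {"owner": "CCO", "artefact": None},  # gap
        "incident_reporting_art73":       {"owner": "CCO", "artefact": "qms/incidents.md"},
    }

    gaps = [element for element, v in QMS_COVERAGE.items() if v["artefact"] is None]
    print(f"QMS elements without evidence artefacts: {gaps}")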

Documentation and record-keeping (Articles 11, 12, 18)

Article 11 requires technical documentation demonstrating compliance with all requirements. This includes system architecture, development methodology, training data descriptions, testing procedures, and performance metrics.

Article 12 mandates automatic logging capabilities. Logs must capture events relevant to identifying risks and facilitating post-market monitoring; under Article 19, providers retain them for a period appropriate to the system’s intended purpose, with a floor of six months unless other Union or national law sets a longer period.

Article 18 requires keeping documentation available for national competent authorities for 10 years after the AI system is placed on the market or put into service.
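
The two retention clocks are easy to conflate: Article 18 runs ten years on the documentation, while the Article 12 logs follow the Article 19 floor. A small illustration; the dates and the internal policy figure are hypothetical:

    from datetime import date, timedelta

    placed_on_market = date(2026, 8, 2)              # illustrative date

    # Article 18: documentation available to authorities for 10 years
    # after the system is placed on the market or put into service.
    doc_retention_until = placed_on_market.replace(year=placed_on_market.year + 10)

    # Article 19: providers keep Article 12 logs for a period appropriate
    # to the intended purpose, with a six-month statutory floor.
    STATUTORY_FLOOR = timedelta(days=183)            # roughly six months
    internal_policy = timedelta(days=5 * 365)        # a longer internal choice

    log_retention = max(STATUTORY_FLOOR, internal_policy)
    print(doc_retention_until, log_retention.days)   # 2036-08-02 1825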

Conformity assessment coordination (Articles 43–44)

Before a high-risk AI system is placed on the market, it must complete a conformity assessment. Most Annex III systems follow the internal-control procedure (self-assessment under Article 43 and Annex VI); remote biometric identification systems require notified-body involvement where harmonised standards have not been applied in full, and AI embedded in Annex I regulated products, such as medical devices, follows the sectoral third-party procedure. The CCO function coordinates the assessment process, ensures documentation completeness, and maintains the EU declaration of conformity; the routing sketch below simplifies the Annex III logic.
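
A simplified sketch of that Annex III routing; Annex I products follow their own sectoral procedure and are omitted here:

    def assessment_route(annex_iii_point: int, standards_fully_applied: bool) -> str:
        """Simplified Article 43 routing for Annex III high-risk systems."""
        # Point 1 (biometrics) needs a notified body unless harmonised
        # standards or common specifications were applied in full.
        if annex_iii_point == 1 and not standards_fully_applied:
            return "notified body (Annex VII)"
        return "internal control (Annex VI)"

    print(assessment_route(4, True))    # employment system -> internal control
    print(assessment_route(1, False))   # biometrics, no standards -> notified body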

Incident reporting (Article 73)

Providers must report serious incidents to the relevant market surveillance authorities within 15 days of becoming aware of them; the deadline shortens to 10 days where a person has died and to two days for widespread infringements or serious disruption of critical infrastructure (Article 73(2)-(4)). A "serious incident" includes events causing death, serious health damage, serious disruption of critical infrastructure, serious property damage, or serious harm to fundamental rights. Established procedures for incident detection, severity classification, authority notification, and corrective-action documentation should be in the QMS before a high-risk system goes live; a deadline sketch follows.
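
Those deadline tiers translate directly into an escalation clock. A sketch keyed to the Article 73(2)-(4) day counts; the detection timestamp is illustrative:

    from datetime import datetime, timedelta
    from enum import Enum

    class Severity(Enum):
        # Days to notify, counted from the provider becoming aware.
        WIDESPREAD_OR_CRITICAL_INFRA = 2    # Article 73(3)
        DEATH = 10                          # Article 73(4)
        OTHER_SERIOUS = 15                  # Article 73(2)

    def reporting_deadline(aware_at: datetime, severity: Severity) -> datetime:
        """Latest permissible notification to the market surveillance authority."""
        return aware_at + timedelta(days=severity.value)

    print(reporting_deadline(datetime(2026, 9, 1, 9, 0), Severity.OTHER_SERIOUS))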

Post-market monitoring (Article 72)

Providers must establish post-market monitoring systems proportionate to the AI system’s nature and risks. This means collecting and analysing data on system performance, log completeness, incident patterns, and user feedback, then feeding the results back into risk assessments and technical documentation. Article 72 closes the loop: the monitoring output is part of the audit file the regulator can request. A minimal sketch of that loop follows.
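
In code, the smallest honest version of that loop takes log-derived events in and flags risk-register entries for review. A sketch; the threshold, event types, and register keys are invented for illustration:

    def post_market_review(events: list[dict], risk_register: dict) -> list[str]:
        """Fold monitoring data back into the Article 9 risk register."""
        flagged: list[str] = []
        total = max(len(events), 1)
        override_rate = sum(e.get("type") == "human_override" for e in events) / total
        if override_rate > 0.05:             # illustrative QMS threshold
            entry = risk_register.setdefault("oversight_burden", {})
            entry["status"] = "review"       # triggers a risk re-assessment
            flagged.append("oversight_burden")
        return flagged

    events = [{"type": "decision"}] * 90 + [{"type": "human_override"}] * 10
    print(post_market_review(events, {}))    # ['oversight_burden']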

Building the AI compliance programme

A robust AI compliance programme rests on four foundational elements:

1. Governance Structure

  • AI governance committee with cross-functional representation
  • Clear roles and responsibilities (RACI matrix)
  • Escalation pathways for risk decisions
  • Board-level accountability

2. Policy Framework

  • AI acceptable use policy
  • Risk classification standards
  • Vendor AI assessment requirements
  • Incident response procedures

3. Technical Infrastructure

  • AI system inventory platform
  • Automated logging infrastructure
  • Risk assessment tooling
  • Documentation management system

4. Monitoring and Assurance

  • Continuous control monitoring
  • Internal audit program
  • Performance metrics and KPIs
  • Third-party attestation strategy

Questions CCOs should be asking their organisations

The questions below are useful inside the AI governance committee to identify gaps before a competent-authority review:

Inventory and Classification

  • Q1 Do we have a complete inventory of all AI systems, including AI embedded in third-party software?
  • Q2 Has each system been classified against EU AI Act risk categories?
  • Q3 Are we the provider, deployer, or both for each system?

Technical Compliance

  • Q4 Do our high-risk AI systems generate automatic logs per Article 12?
  • Q5 Where are logs stored, for how long, and who has access?
  • Q6 Can we demonstrate human oversight mechanisms exist and function?

Documentation and Process

  • Q7 Do we have technical documentation meeting Annex IV requirements?
  • Q8 What’s our process for reporting serious incidents within 15 days?
  • Q9 When was our risk management documentation last updated?

Red flags indicating compliance gaps

The patterns below are the recurring gaps spotted in CCO programmes during pre-audit reviews:

Critical red flags
  • No central AI inventory. If a complete list of AI systems can’t be produced within an hour — including third-party AI inside SaaS — the inventory artefact for Article 11 doesn’t yet exist.
  • Manual logging only. Spreadsheets and PDFs don’t satisfy Article 12; the Regulation contemplates automatic, tamper-evident logging produced by the system itself.
  • No documented risk classification. Every AI system needs a written classification (prohibited / high-risk / limited-risk / minimal-risk) with rationale signed off by the compliance lead.
  • Shadow AI. Business units deploying tools like ChatGPT, Copilot or in-house copilots without compliance review — these still count toward the inventory.
  • No incident playbook. The 15-day Article 73 clock starts when the provider becomes aware of the incident; programmes without a working escalation procedure miss the deadline by default.
  • Third-party AI blind spots. Using AI features in SaaS products without an Article 25 vendor disclosure or a GPAI Code signature record.

Audit-readiness checklist

Use this checklist to prepare for regulatory inspection or third-party audit:

EU AI Act Audit Readiness Checklist

Inventory and Classification

Complete AI system inventory with unique identifiers
Risk classification documentation for each system
Provider/deployer role determination
Third-party AI components identified and assessed

Technical Documentation (Article 11, Annex IV)

System description and intended purpose
Development methodology documentation
Training data descriptions and data governance
Testing and validation procedures
Performance metrics and accuracy measures

Logging Infrastructure (Article 12)

Automated logging capability demonstrated
Log retention policy (at least six months per Article 19; longer where sectoral law or internal policy requires)
Tamper-evidence mechanisms
Access controls and audit trail

Risk Management (Article 9)

Risk management system documentation
Identified risks and mitigation measures
Residual risk assessment and acceptance
Regular review and update evidence

Quality Management System (Article 17)

QMS documentation and procedures
Roles and responsibilities defined
Change management procedures
Internal audit records

Conformity and Declarations

EU Declaration of Conformity for each high-risk system
CE marking applied where required
Notified body certificates (if applicable)
Registration in EU database (when available)

Working with other stakeholders

EU AI Act compliance requires cross-functional collaboration. Here’s how to work effectively with key stakeholders:

CISO
  • Collaboration: logging infrastructure, access controls, incident response, cybersecurity requirements
  • Compliance needs from them: technical logging architecture, security assessments, incident detection capabilities

General Counsel
  • Collaboration: regulatory interpretation, liability analysis, contract requirements, enforcement monitoring
  • Compliance needs from them: legal opinions on classification, contract language for AI vendors, regulatory updates

CTO/Engineering
  • Collaboration: technical documentation, system architecture, logging implementation, human oversight
  • Compliance needs from them: AI system inventory, technical specifications, Article 12 logging implementation

Data/AI Team
  • Collaboration: model documentation, training data governance, performance monitoring, bias testing
  • Compliance needs from them: training data descriptions, model cards, validation results, fairness assessments

Business Units
  • Collaboration: use case identification, risk assessment input, user requirements, incident reporting
  • Compliance needs from them: AI usage disclosure, intended purpose documentation, operational risk input

Procurement
  • Collaboration: vendor AI assessment, contract requirements, third-party compliance
  • Compliance needs from them: AI vendor inventory, contract amendments, provider compliance attestations

Board and executive reporting on compliance status

Your board needs clear, actionable information about AI compliance status. Structure your reports around these elements:

Quarterly board report framework

1. Compliance Status Dashboard

Traffic-light summary of compliance across all high-risk AI systems. Include system counts, deadline proximity, and an overall programme health score.

2. Risk Exposure Summary

Quantify potential penalty exposure (€ amount), identify the highest-risk systems, and summarise mitigation progress; a worked example of the Article 99 ceiling follows.
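
The Article 99 ceiling is the higher of the fixed amount and the turnover percentage, so exposure scales with revenue. A worked sketch of the €35M / 7% ceiling cited in the executive summary:

    def max_penalty_exposure(worldwide_turnover_eur: float) -> float:
        """Article 99 ceiling for the most serious infringements: the higher
        of EUR 35M and 7% of total worldwide annual turnover."""
        return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

    # A company with EUR 2B turnover faces a EUR 140M ceiling, not EUR 35M.
    print(max_penalty_exposure(2_000_000_000))   # 140000000.0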

3. Incident Report

Summary of any AI-related incidents, near-misses, and corrective actions taken. Include trend analysis.

4. Resource and Investment Needs

Budget requirements for compliance infrastructure, staffing gaps, and third-party assessment costs.

5. Regulatory Developments

Updates on implementing acts, guidance from the AI Office, and enforcement actions in your sector.

Certification pathways

Two certification pathways support EU AI Act compliance:

ISO 42001 Certification

AI Management System standard providing systematic framework for AI governance. Substantially overlaps with EU AI Act QMS requirements.

  • + Demonstrates governance maturity
  • + Supports internal control assessment
  • + Market credibility signal

Timeline: 6-12 months | Cost: €50,000-€200,000

Conformity Assessment (Articles 43-44)

Mandatory assessment pathway for placing high-risk AI systems on the EU market. Internal control or notified body assessment.

  • + Legally required for market access
  • + Enables CE marking
  • + EU Declaration of Conformity

Timeline: 3-12 months | Cost (notified body): €10,000-€100,000

Article 12 logging: what GLACIS attests

Article 12 is among the most technically demanding requirements of the EU AI Act. It mandates that high-risk AI systems be designed with automatic logging capabilities that:

  • Record events throughout the system’s lifetime
  • Enable traceability of the system’s functioning
  • Facilitate post-market monitoring
  • Support identification of risks and serious incidents

The Regulation requires logs to be retained for a period appropriate to the system’s intended purpose: Article 19 sets a six-month floor for providers, and Union or national law can require longer. Logs must be accessible to competent authorities upon request.

Critical CCO consideration

Article 12 contemplates automatic logging — not manual documentation after the fact. The system architecture has to generate tamper-evident logs in real time. Retrofitting legacy AI systems to meet this requirement is the single most common compliance gap surfaced in pre-audit reviews.

Logs must capture sufficient detail to demonstrate that controls actually execute; a tamper-evident capture sketch follows the list. This means recording:

  • Input data characteristics (without storing personal data unnecessarily)
  • System outputs and decisions
  • Human oversight interventions
  • System anomalies and error conditions
  • Control execution evidence
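
A minimal sketch of tamper-evident capture: each record carries the hash of its predecessor, so any after-the-fact edit breaks the chain. This illustrates the mechanism only, not GLACIS’s implementation; a production system would also sign each record and anchor the head hash externally:

    import hashlib, json, time

    def append_event(log: list[dict], event: dict) -> dict:
        """Append an Article 12-style event, chained to the previous record."""
        prev_hash = log[-1]["hash"] if log else "genesis"
        body = {"ts": time.time(), "event": event, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        log.append(body)
        return body

    def verify_chain(log: list[dict]) -> bool:
        """Recompute every hash; any retroactive edit is detected."""
        prev = "genesis"
        for rec in log:
            body = {"ts": rec["ts"], "event": rec["event"], "prev": rec["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != digest:
                return False
            prev = rec["hash"]
        return True

    log: list[dict] = []
    append_event(log, {"type": "decision", "output": "approved"})
    append_event(log, {"type": "human_override", "by": "reviewer-7"})
    assert verify_chain(log)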

How GLACIS helps CCOs maintain audit readiness

GLACIS was built specifically to solve the Article 12 challenge. Our platform provides:

Continuous Attestation

Cryptographic proof that your AI controls execute correctly, not just documentation that they exist. Evidence is generated automatically in real time.

Framework Mapping

Evidence automatically mapped to EU AI Act Articles 9-15, ISO 42001 controls, and NIST AI RMF. One evidence set, multiple frameworks.

Audit-Ready Packages

Generate compliance evidence packages on demand for regulators, auditors, or enterprise customers. No scrambling before inspections.

Tamper-Evident Logging

Append-only audit trail that cannot be altered after the fact. Meets Article 12 expectations for traceability and evidence integrity.

Frequently asked questions

What are the CCO’s primary responsibilities under the EU AI Act?

CCOs are responsible for building and overseeing the AI compliance programme, including: maintaining the AI system inventory with risk classifications, establishing quality management systems per Article 17, coordinating conformity assessments, ensuring Article 12 logging requirements are met, managing incident reporting under Article 73, implementing post-market monitoring per Article 72, and providing compliance status reports to the board.

What documentation must CCOs maintain for EU AI Act compliance?

CCOs must ensure maintenance of technical documentation per Article 11 and Annex IV (covering system design, development methodology, training data, and performance metrics), automatically generated logs per Article 12 retained for a period appropriate to the intended purpose (at least six months under Article 19), quality management system documentation per Article 17, risk management documentation per Article 9, and records of conformity assessments and EU declarations of conformity.

How should CCOs prepare for EU AI Act audits?

CCOs should establish a complete AI system inventory with risk classifications, implement automated logging infrastructure meeting Article 12 requirements, document all conformity assessment evidence, create audit-ready packages with technical documentation and risk assessments, establish clear audit trails for all AI-related decisions, and conduct regular internal audits to identify gaps before regulatory inspection.

What incident reporting requirements does Article 73 impose?

Article 73 requires providers to report serious incidents to the relevant market surveillance authorities within 15 days of becoming aware of them, with shorter deadlines of 10 days where a person has died and two days for widespread infringements or serious disruption of critical infrastructure. Serious incidents include those causing death, serious health damage, serious disruption of critical infrastructure, serious property damage, or serious harm to fundamental rights. CCOs must establish incident detection mechanisms, escalation procedures, and reporting workflows to meet this obligation.

How does ISO 42001 certification relate to EU AI Act compliance?

ISO 42001 provides an AI management system framework that substantially overlaps with EU AI Act requirements. Achieving ISO 42001 certification demonstrates systematic AI governance and can serve as evidence of conformity for internal control assessments under Article 43. However, ISO 42001 alone doesn’t satisfy all EU AI Act obligations—CCOs must map specific Article requirements to their management system.

What should CCOs report to the board about AI compliance?

Board reports should cover: AI system inventory summary with risk classifications, conformity assessment status and upcoming deadlines, incident reports and remediation actions, compliance gap analysis and remediation roadmap, resource requirements for compliance infrastructure, regulatory developments and their business impact, and third-party audit findings. Reports should use clear risk metrics and traffic-light status indicators.

References

  1. European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, 12 July 2024. EUR-Lex 32024R1689
  2. European Parliament. "Artificial Intelligence Act: delayed application, ban on nudifier apps." Press release, 23 March 2026. europarl.europa.eu
  3. European Parliament Legislative Train. "Digital Omnibus on AI." April 2026. europarl.europa.eu
  4. European Commission. "Contents of the GPAI Code of Practice." Digital Strategy, 2026. digital-strategy.ec.europa.eu
  5. European Commission. "Signatories of the GPAI Code of Practice." Digital Strategy, 2026. digital-strategy.ec.europa.eu
  6. CEN-CENELEC. "AI standardisation update — accelerated work programme." 23 October 2025. cencenelec.eu
  7. European Commission. "AI Office." Digital Strategy, 2026. digital-strategy.ec.europa.eu
  8. ISO/IEC. "ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system." December 2023. iso.org

Make the receipts

Build the Article 12 evidence trail before the audit, not during it.

The Glacis Agent Runtime Security & Evidence Sprint produces the Article 9-15 evidence the QMS expects: signed evidence receipts from runtime behaviour, Annex IV technical documentation pulled from live system telemetry, and tamper-evident decision logs. Runtime controls run inside your infrastructure with zero sensitive-data egress, and the 10-business-day engagement closes with a signed evidence pack on day ten.

Book the Agent Runtime Security Sprint → · See a sample evidence pack →

Related guides

EU AI Act series hub · Articles, penalty structure, GLACIS coverage map.
Full compliance guide · Risk categories, Articles 9-15 in detail, GPAI, conformity-assessment paths, the Omnibus status.
For CISOs · Article 12 logging architecture, Article 15 robustness, sec-eng integration.
For General Counsel · Liability allocation, vendor and deployer contracts, extraterritorial scope.
ISO 42001 guide · AI management-system standard that maps to Article 17.
AI audit guide · Preparing for an AI-system audit.