GLACIS · EU AI Act series · Updated April 2026

Runtime evidence infrastructure for EU AI Act-relevant deployments.

The Act applies to any provider or deployer whose AI outputs are used in the EU — with prohibited-practice fines up to €35M or 7% of global turnover. High-risk obligations are scheduled for 2 August 2026 (Digital Omnibus on AI proposes a delay to 2 December 2027 stand-alone or 2 August 2028 embedded). GLACIS gives high-risk and GPAI-with-systemic-risk operators a runtime evidence layer: signed receipts from local controls inside your infrastructure, continuously assembled into the Article 12 logs and Annex IV documentation a notified body actually reviews.

Scope an EU AI Act evidence sprint See a sample evidence pack →
Feb 2025: Prohibited practices in force
Aug 2025: GPAI obligations live; Code of Practice signed by ~24 providers
Aug 2026: High-risk obligations scheduled; under Omnibus review
2027 → 2028: Proposed delayed dates: 2 Dec 2027 stand-alone, 2 Aug 2028 embedded
What changed in April 2026

The Digital Omnibus on AI moved into trilogue on 23 March 2026. Both Parliament (IMCO/LIBE) and Council favour fixed deadlines for delayed high-risk obligations: 2 December 2027 for stand-alone systems and 2 August 2028 for systems embedded in regulated products. Until adoption, 2 August 2026 remains the working baseline.

The GPAI Code of Practice was finalised in July 2025 and is now signed by roughly two dozen providers (Anthropic, Google, IBM, Microsoft, Mistral, OpenAI, Cohere, Aleph Alpha, Almawave and others); Meta has not signed and xAI signed only the Safety and Security chapter. CEN-CENELEC accelerated the harmonised-standards programme (target Q4 2026), which is the principal driver of the Omnibus delay proposal.

High-risk systems under Annex III

Annex III lists the eight domains where an AI system is classified high-risk if it materially affects natural persons. Crossing into one of these categories triggers Articles 9–15 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity), plus Article 17 quality management.

Biometrics: Remote identification, categorisation, emotion recognition (outside law-enforcement carve-outs)
Critical infrastructure: Safety components for water, gas, electricity, traffic management, digital networks
Education & vocational training: Admissions scoring, exam evaluation, attainment-level assignment, prohibited-behaviour detection
Employment: Recruitment, selection, performance evaluation, termination, work allocation
Essential services: Creditworthiness, life and health insurance pricing, public-benefit access decisions, emergency triage
Law enforcement: Risk assessment of natural persons, polygraphs, evidence reliability, profiling
Migration, asylum & border: Risk assessment, document verification, application examination support
Justice & democratic processes: Judicial-decision support, alternative dispute resolution, election influence systems
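The classification gate above reduces to a lookup: is the system in an Annex III domain, and does it materially affect natural persons? A minimal sketch, in which the domain labels and the `obligations` helper are illustrative shorthand, not a statutory or GLACIS API:

```python
# Sketch: mapping an AI system's Annex III domain to the obligations it
# would trigger. Domain names and this helper are illustrative only.

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_democratic_processes",
}

def obligations(domain: str, affects_natural_persons: bool) -> list[str]:
    """Return the Articles that apply if the system is classified high-risk."""
    if domain in ANNEX_III_DOMAINS and affects_natural_persons:
        # Articles 9-15, plus the Article 17 quality management system.
        return [f"Art. {n}" for n in range(9, 16)] + ["Art. 17"]
    return []

print(obligations("employment", True))
# → ['Art. 9', 'Art. 10', 'Art. 11', 'Art. 12', 'Art. 13', 'Art. 14', 'Art. 15', 'Art. 17']
```

In practice classification involves the Article 6 filter and carve-outs; the point is only that crossing an Annex III boundary switches on the whole obligation set at once.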

What Articles 9–15 actually require

Art. 9: Risk management system across the lifecycle: identify, evaluate, mitigate, monitor.
Art. 10: Data governance for training, validation, testing — relevance, representativeness, error checks.
Art. 11: Technical documentation per Annex IV (nine substantive sections).
Art. 12: Automatic event logging for traceability — the line GLACIS attests.
Art. 13: Transparency and instructions for downstream deployers.
Art. 14: Effective human oversight measures.
Art. 15: Accuracy, robustness, cybersecurity — including resilience to adversarial input.
Art. 17: Quality management system covering compliance, post-market monitoring, incident reporting.
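One common construction for the tamper-evident logging Article 12 implies is a hash chain, where every entry commits to its predecessor, so editing any past record breaks verification. A minimal Python sketch under that assumption (not the GLACIS implementation):

```python
# Sketch: an append-only, hash-chained event log in the spirit of the
# Article 12 traceability requirement. Illustrative, not a GLACIS API.
import hashlib
import json
import time

class EventLog:
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, event: dict) -> dict:
        record = {"ts": time.time(), "event": event, "prev": self._prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._prev = digest
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edit to a past entry breaks it."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: rec[k] for k in ("ts", "event", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

A production log would also sign each digest and anchor checkpoints externally, but the chaining step above is what makes after-the-fact edits detectable.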

Penalty structure under Article 99

Three penalty bands. Whichever amount is higher applies. National competent authorities set the actual fine within these ceilings; the AI Office handles GPAI providers directly.

Prohibited practices (Article 5): up to €35,000,000 or 7% of global turnover
Other non-compliance (Articles 9–15, 17, etc.): up to €15,000,000 or 3% of global turnover
Incorrect information to authorities: up to €7,500,000 or 1% of global turnover
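The "whichever is higher" rule is a simple maximum over the fixed ceiling and the turnover percentage. A sketch with the three bands above; the turnover figure in the example is illustrative:

```python
# Sketch: Article 99 ceiling calculation. Band names are shorthand, and
# real fines are set by authorities anywhere up to these ceilings.

BANDS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    fixed, pct = BANDS[violation]
    # Whichever amount is higher applies.
    return max(fixed, pct * global_turnover_eur)

print(max_fine("prohibited_practice", 2_000_000_000))  # → 140000000.0
```

For a provider with €2bn global turnover, 7% (€140M) exceeds the €35M fixed ceiling, so the percentage governs; for smaller firms the fixed amount dominates.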
Enforcement status

No public enforcement actions against prohibited practices have been confirmed as of April 2026. Several member states are still finalising their market surveillance authorities; the recognised "enforcement gap" is one reason the Omnibus is reshaping the high-risk timeline.

How GLACIS fits the obligations

GLACIS runs inside your infrastructure. Local runtime controls emit signed evidence receipts for every consequential decision — continuously assembled into Annex IV technical documentation and a tamper-evident Article 12 log. Conformity assessors and notified bodies receive verifiable evidence packs rather than written assertions, and sensitive data never leaves your environment.

Art. 9 Risk management: Continuous risk-posture telemetry with evidence of control execution at every inference.
Art. 11 Technical docs: Auto-generated Annex IV sections sourced from live system behaviour and configuration.
Art. 12 Logging: Signed evidence receipts for every consequential decision, full provenance, zero sensitive-data egress.
Art. 14 Human oversight: Operator-action receipts and override traces tied to the decisions they applied to.
Art. 15 Robustness: Resilience evidence: adversarial-input behaviour, drift detection, recovery actions.
Art. 17 QMS: Continuous post-market monitoring with serious-incident escalation triggers.
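A receipt of the kind described above can be sketched as a signed payload that ties an operator override (Art. 14) back to the decision it applied to. HMAC stands in here for a production signature scheme, and every field name and key value is hypothetical, not the GLACIS receipt format:

```python
# Sketch: signed evidence receipts linking an override to its decision.
# HMAC-SHA256 is a stand-in for the real signing scheme; field names
# and the key are illustrative only.
import hashlib
import hmac
import json

KEY = b"deployment-signing-key"  # illustrative; real keys live in an HSM/KMS

def sign_receipt(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload,
            "sig": hmac.new(KEY, body, hashlib.sha256).hexdigest()}

def verify_receipt(receipt: dict) -> bool:
    body = json.dumps(receipt["payload"], sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(receipt["sig"], expected)

decision = sign_receipt({"decision_id": "d-117", "outcome": "deny"})
override = sign_receipt({"decision_id": "d-117", "operator": "a.jones",
                         "action": "override_to_approve"})
assert verify_receipt(decision) and verify_receipt(override)
```

Because both receipts carry the same `decision_id`, an assessor can reconstruct the oversight trail for any decision without the underlying sensitive data leaving the deployment.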

$48k · ten business days · one named workflow. Signed evidence pack ready to share with your DPO or notified body.

Go deeper

Full compliance guide: Risk categories, Articles 9–15 in detail, GPAI obligations, conformity assessment paths, the Omnibus status.
For Chief Compliance Officers: Programme architecture, audit-readiness checklist, board reporting, certification routes.
For CISOs: Article 12 logging architecture, Article 15 robustness, sec-eng integration.
For General Counsel: Liability allocation, vendor and deployer contracts, extraterritorial scope.
EU AI Act vs HIPAA: Crosswalk for healthcare and life-sciences operators with US obligations.
Colorado AI Act: $20K-per-violation US analogue; what stacks with the EU regime.

By member state

Germany: BNetzA designated as main market surveillance authority under draft KI-MIG; BaFin for financial-sector high-risk AI; KoKIVO coordination centre planned.
France: Decentralised model: CNIL on workplace/education emotion-recognition; ANSSI on cybersecurity; PEReN technical support; multi-authority bill still pending.
Italy: National AI Law No. 132/2025 in force October 2025; AgID notifying authority, ACN market surveillance, Garante on data; implementing decrees due October 2026.
Spain: AESIA operational since June 2024; 16 detailed compliance guides published December 2025; regulatory sandbox; draft national AI Law (March 2025).
Netherlands: Hybrid 10-authority model led by AP; AP+RDI co-coordinate; public consultation on Implementation Act open 20 April – 1 June 2026.
Belgium: BIPT designated main market surveillance authority (2025-2029 Federal Government Agreement); 21 fundamental-rights bodies under Article 77.
Poland: New body KRiBSI under construction (single-authority model); operational support nested in Ministry of Digital Affairs; UODO disputing advisory-only role.

Get started

Start with one high-risk AI workflow.

Book a focused Agent Runtime Security & Evidence Sprint, then deploy runtime assurance where the risk is real.

From assessment to platform deployment. See pricing →