GLACIS · EU AI Act series · Compliance guide · Updated April 2026

EU AI Act compliance: the working playbook for April 2026.

Regulation (EU) 2024/1689 in plain English: how risk classification works, what Articles 9–15 actually require, where the GPAI Code of Practice landed, and how the Digital Omnibus on AI is now reshaping the August 2026 high-risk deadline. With the citations regulators expect.

By Joe Braidwood, CEO, GLACIS · 26 min read · Updated 24 April 2026

Feb 2025
Prohibited practices in force; AI literacy obligations apply
Aug 2025
GPAI obligations live; Code of Practice signed by ~24 providers
Aug 2026
High-risk obligations scheduled; under Omnibus review
2027 → 2028
Proposed delayed dates: 2 Dec 2027 stand-alone, 2 Aug 2028 embedded

Executive summary

The EU AI Act (Regulation 2024/1689) entered into force on 1 August 2024 and is the world’s first horizontal AI regulation. It classifies AI systems into prohibited, high-risk, limited-risk and minimal-risk categories, with penalties reaching €35 million or 7% of global annual turnover, whichever is higher.[1]

The phased calendar: prohibited practices have been banned since 2 February 2025, GPAI obligations since 2 August 2025. High-risk obligations are scheduled for 2 August 2026, but the Digital Omnibus on AI — proposed by the Commission on 19 November 2025 and adopted by the European Parliament on 23 March 2026 — would push them to 2 December 2027 (stand-alone systems) and 2 August 2028 (systems embedded in regulated products).[12][13][14] Until trilogue closes, organisations are continuing to prepare for the original August 2026 baseline.

Where we land in April 2026. The GPAI Code of Practice is finalised and signed by roughly two dozen providers — including Anthropic, Google, IBM, Microsoft, Mistral, OpenAI and Cohere — though Meta has not signed and xAI signed only the Safety and Security chapter.[15] CEN-CENELEC harmonised standards are tracking to Q4 2026 (the principal driver of the delay proposal).[16] Member states are still finalising their competent authorities, which is the working "enforcement gap" recognised across the practitioner community.

€35M
Maximum fine[1]
Aug 2026
High-risk deadline (under review)
~24
GPAI Code signatories[15]
27
EU member states

What changed in April 2026

Q1 → Q2 2026 update brief

Digital Omnibus on AI is in trilogue. The European Parliament adopted its position on 23 March 2026, including a ban on AI nudification apps and fixed deadlines for the delayed application of high-risk rules. The Council aligns on fixed deadlines (2 December 2027 and 2 August 2028).[12][13]

GPAI Code of Practice signatories grew through Q1 2026 to roughly 24 organisations. Notable absences: Meta (not signed), xAI (Safety/Security chapter only).[15]

CEN-CENELEC harmonised standards are now tracking to Q4 2026 delivery after the October 2025 acceleration measures. The first standard targets quality management (prEN 18286).[16]

Member-state competent authorities: Belgium designated BIPT (Federal Government Agreement, 2025-2029); Germany’s draft KI-MIG names BNetzA (with KoKIVO coordination centre); the Netherlands opened public consultation on its Implementation Act on 20 April 2026; Poland is constructing a new single authority (KRiBSI); France has tasked ANSSI with cybersecurity competences; Italy’s national AI Law No. 132/2025 entered into force in October 2025 with implementing decrees due by 10 October 2026.[17][18][19][20][21][22]

AI Office: staffing has crossed 125, with a target of around 140. Five-unit structure under Lucilla Sioli.[23]

What is the EU AI Act?

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework regulating artificial intelligence systems. Adopted by the European Parliament on March 13, 2024, and entering into force August 1, 2024, it establishes harmonized rules for AI development, deployment, and use across all 27 EU member states.[1]

History and legislative process

The European Commission proposed the AI Act on April 21, 2021, as part of its digital strategy. After three years of trilogue negotiations between the Commission, Parliament, and Council, political agreement was reached December 9, 2023. The final text passed with 523 votes in favor, 46 against, and 49 abstentions.[4]

The regulation was published in the Official Journal of the European Union (EUR-Lex) on July 12, 2024, as Regulation (EU) 2024/1689, comprising 180 articles and 13 annexes spanning 144 pages.[1]

Scope and applicability

The AI Act applies to providers placing AI systems on the EU market or putting them into service in the EU, to deployers established in the EU, and to providers and deployers in third countries where the system's output is used in the EU, as well as to importers, distributors, and product manufacturers (Article 2).

The regulation defines an "AI system" per Article 3(1) as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."[1]

Key objectives

The regulation balances promoting AI innovation with protecting fundamental rights, health, safety, and democratic values. Article 1 frames its objectives as improving the functioning of the internal market, promoting the uptake of human-centric and trustworthy AI, and ensuring a high level of protection of health, safety, and fundamental rights, while supporting innovation.

Risk categories

The AI Act employs a risk-based approach, classifying AI systems into four tiers with escalating regulatory requirements based on potential harm to health, safety, and fundamental rights.

Prohibited AI systems

Article 5 — Banned outright due to unacceptable risks to fundamental rights and human dignity. Effective February 2, 2025.

Examples:

  • Social scoring systems evaluating or classifying people based on behavior, socio-economic status, or personal characteristics
  • Untargeted scraping of facial images from internet or CCTV for facial recognition databases
  • Emotion recognition in workplace and educational settings (with limited exceptions)
  • Manipulative AI exploiting vulnerabilities of age, disability, or socio-economic circumstances
  • Real-time remote biometric identification in public spaces by law enforcement (narrow exceptions)

High-risk AI systems

Articles 6-7, Annex III — Pose significant risks to health, safety, or fundamental rights. Subject to strict requirements before and during market placement. Full compliance by August 2, 2026.

Eight Categories (Annex III):

  • Biometric identification and categorization: Real-time/post remote biometric ID, emotion recognition (limited contexts)
  • Critical infrastructure: Safety components in road traffic, water, gas, heating, electricity management
  • Education and training: Determining educational institution access, evaluation of learning outcomes, exam supervision
  • Employment: Recruitment, task allocation, promotion decisions, performance monitoring, termination decisions
  • Essential services: Creditworthiness assessment, insurance pricing/underwriting, emergency dispatch prioritization
  • Law enforcement: Individual risk assessment, polygraphs, emotion detection, deep fake detection, evidence evaluation
  • Migration and asylum: Application examination, risk assessment, verification of authenticity of travel documents
  • Justice and democratic processes: Assisting judicial authorities in researching and interpreting facts and law

Limited-risk AI systems

Article 50 — Specific transparency obligations to ensure users are aware they are interacting with AI. Minimal regulatory burden.

Examples:

  • Chatbots and conversational agents (must disclose they are AI)
  • Emotion recognition systems (limited contexts, must inform users)
  • Biometric categorization systems (must inform data subjects)
  • Deep fakes and AI-generated content (must be labeled as synthetic)

Minimal-risk AI systems

No regulatory obligations beyond existing product safety and liability rules. Voluntary codes of conduct encouraged (Article 95).

Examples:

  • AI-enabled video games and spam filters
  • Inventory management and process optimization systems
  • Recommendation engines (non-manipulative)
  • Most enterprise productivity and automation tools

Timeline and deadlines

The AI Act follows a staggered enforcement calendar. As of April 2026, prohibited practices and GPAI obligations are in force; high-risk obligations are scheduled for August 2026 but may be reshaped by the Digital Omnibus on AI, now in trilogue.[12]

Date Milestone Requirements Status (April 2026)
1 Aug 2024 Entry into force Regulation published, legally effective In force
2 Feb 2025 Prohibited practices Article 5 prohibitions, AI literacy obligations In force
2 Aug 2025 GPAI obligations Article 53–55 model-provider obligations; Code of Practice published 10 Jul 2025 In force
2 Aug 2026 High-risk obligations Articles 9–15, 17 conformity for stand-alone Annex III systems Scheduled; under Omnibus review
2 Aug 2027 Embedded high-risk (Annex I) High-risk AI as safety component of regulated products (current Act) Scheduled
2 Dec 2027 High-risk (proposed) Stand-alone Annex III systems — proposed delayed application date Trilogue, Apr 2026
2 Aug 2028 Embedded high-risk (proposed) Annex I embedded systems — proposed delayed application date Trilogue, Apr 2026
Working baseline

Until the Digital Omnibus is adopted, the original 2 August 2026 deadline remains the prudent target for in-progress programmes. Conformity assessments via notified bodies typically take 3–12 months and cost €10,000–€100,000; that workstream cannot be compressed after the fact if trilogue fails and the original date stands.

Requirements by category

Prohibited AI systems (Article 5)

Prohibited practices became illegal on February 2, 2025. Organizations must have ceased any Article 5 practice (social scoring, untargeted facial-image scraping, workplace and educational emotion recognition, manipulative AI exploiting vulnerabilities, and real-time remote biometric identification by law enforcement), as detailed under Risk categories above.

Penalty: Up to €35 million or 7% of total worldwide annual turnover, whichever is higher (Article 99).[1]

High-risk AI systems (Articles 8–15)

High-risk AI systems face comprehensive requirements across the entire lifecycle. Providers must implement the controls set out in Articles 9–15 and 17:

Article 9: risk management system

Continuous iterative process throughout the AI system lifecycle comprising:

  • Identification and analysis of known and foreseeable risks
  • Estimation and evaluation of risks that may emerge during use
  • Evaluation of other possibly arising risks based on post-market monitoring
  • Adoption of suitable risk management measures

Article 10: data and data governance

Training, validation, and testing datasets must be subject to appropriate data governance and management practices:

  • Relevant, sufficiently representative, and free of errors
  • Consideration of geographic, contextual, behavioral, or functional settings
  • Examination for possible biases and mitigation where appropriate
  • Completeness considering intended purpose and reasonably foreseeable misuse

Article 11: technical documentation

Documentation prepared before placing on market and kept up to date, including:

  • General description of the AI system (intended purpose, developer, version)
  • Detailed description of system elements and development process
  • Detailed information about monitoring, functioning, and control
  • Risk management system description per Article 9
  • Validation and testing procedures, results, and reports

Article 12: record-keeping (logging)

Automatic recording of events (logs) throughout the AI system operation:

  • Logging capabilities ensuring traceability throughout the system lifecycle
  • Logging level appropriate to intended purpose of high-risk system
  • For remote biometric systems: records of the period of each use, the reference database checked, the input data matched, and the persons involved in verifying results
  • Logs protected by appropriate security measures and retained for period appropriate to purpose
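As a concrete illustration, the logging bullets above can be captured in a structured record. This is a minimal sketch; the field names (`system_id`, `input_ref`, and so on) are our own assumptions, not terminology prescribed by Article 12.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def make_log_record(system_id: str, input_ref: str, output_ref: str,
                    reference_db: Optional[str] = None,
                    verifier: Optional[str] = None) -> str:
    """Build one traceability record for a high-risk AI system.

    Fields are illustrative: Article 12 asks for traceability of operation,
    the period of the input data, any reference database checked, and (for
    biometric systems) the persons involved in verification.
    """
    record = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_ref": input_ref,        # pointer to the input data period/batch
        "output_ref": output_ref,      # pointer to the produced output
        "reference_db": reference_db,  # database the input was checked against
        "verifier": verifier,          # natural person who verified the result
    }
    return json.dumps(record, sort_keys=True)
```

Serialising records as sorted JSON makes them easy to retain, search, and hand to an assessor on request.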

Article 13: transparency for deployers

High-risk systems must be designed with sufficient transparency to enable deployers to:

  • Interpret system output and use it appropriately
  • Understand system capabilities and limitations

Providers must supply instructions for use in an appropriate digital or non-digital format, including information on the human oversight measures per Article 14.

Article 14: human oversight

High-risk systems shall be designed to enable effective oversight by natural persons:

  • Fully understand capacities and limitations and monitor operation
  • Remain aware of possible tendency to automatically rely on output (automation bias)
  • Correctly interpret system output considering system characteristics
  • Decide to not use the system or override output in any particular situation

Article 15: accuracy, robustness and cybersecurity

High-risk systems must achieve appropriate levels of:

  • Accuracy: Ability to provide correct output or actions
  • Robustness: Reliability against errors, faults, inconsistencies, and unexpected situations
  • Cybersecurity: Resilience against attempts to alter use, behavior, or performance through exploitation
  • Technical solutions to address AI-specific vulnerabilities including data poisoning and model evasion

Article 17: quality management system

Providers of high-risk systems must establish and maintain a documented quality management system ensuring:

  • Compliance strategy for regulatory requirements
  • Design, control, and quality assurance techniques and procedures
  • Post-market monitoring, reporting, and corrective action procedures
  • Examination, test, and validation procedures at design and throughout development

Penalty for high-risk non-compliance: Up to €15 million or 3% of total worldwide annual turnover (Article 99).[1]

Limited-risk AI systems (Article 50)

Limited-risk systems face only the transparency obligations listed under Risk categories above: disclosing AI interaction, informing users of emotion recognition and biometric categorization, and labelling synthetic content.

Penalty: Up to €7.5 million or 1% of total worldwide annual turnover (Article 99).[1]

High-risk AI systems in detail

Classification criteria (Article 6)

An AI system is considered high-risk if:

  1. The AI system is intended to be used as a safety component of a product covered by EU harmonization legislation (Annex I), OR
  2. The AI system itself is a product covered by EU harmonization legislation (Annex I) and requires third-party conformity assessment, OR
  3. The AI system falls under one of the eight high-risk use cases listed in Annex III
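The three limbs above are disjunctive: meeting any one makes the system high-risk. A minimal sketch of that decision logic in Python (the category names and function signature are illustrative, and the Article 6(3) carve-outs for narrow procedural tasks are deliberately not modelled):

```python
from typing import Optional

# The eight Annex III use-case categories (labels are our own shorthand)
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum", "justice",
}

def is_high_risk(is_safety_component_of_annex_i_product: bool,
                 is_annex_i_product_needing_third_party_assessment: bool,
                 annex_iii_category: Optional[str]) -> bool:
    """Mirror the three Article 6 limbs: any one of them suffices.

    A real assessment must also document any Article 6(3) derogation,
    which this sketch omits.
    """
    return (
        is_safety_component_of_annex_i_product
        or is_annex_i_product_needing_third_party_assessment
        or (annex_iii_category in ANNEX_III_CATEGORIES)
    )
```

Even if the answer is "not high-risk", the FAQ advice below still applies: document the rationale, because regulators may disagree with your classification.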

Annex III high-risk use cases

High-risk AI categories (Annex III)

Category Specific Use Cases Examples
1. Biometrics Remote biometric identification (real-time/post), biometric categorization Airport facial recognition, emotion detection at borders
2. Critical Infrastructure Safety components managing road traffic, water, gas, heating, electricity Traffic signal AI, power grid management systems
3. Education Determining access, assessing students, detecting cheating Automated admissions, AI exam proctoring, grading systems
4. Employment Recruitment, promotion, task allocation, monitoring, termination Resume screening AI, performance monitoring, layoff decisions
5. Essential Services Creditworthiness, insurance pricing, emergency dispatch Loan approval AI, health insurance underwriting
6. Law Enforcement Risk assessment, polygraphs, emotion detection, evidence evaluation Recidivism prediction, crime forecasting, lie detection
7. Migration/Asylum Examination of applications, risk assessment, travel document verification Automated visa screening, asylum claim evaluation
8. Justice Assisting judicial authorities in researching/interpreting facts and law Legal research AI, case outcome prediction

Conformity assessment procedures (Articles 43–44)

Before placing high-risk AI systems on the market, providers must undergo conformity assessment to demonstrate compliance. Two pathways exist:

Internal control (Article 43)

Provider conducts self-assessment based on:

  • Technical documentation (Annex IV)
  • Quality management system implementation
  • Post-market monitoring plan
  • Drawing up EU declaration of conformity

Available for most high-risk systems

Notified body assessment (Article 43)

Third-party assessment required for:

  • Biometric identification systems
  • AI systems covered by other EU regulations requiring notified body involvement
  • Medical AI devices (most software medical devices fall into Class IIa or above under the MDR, which requires notified body assessment)

Cost: €10,000-€100,000 | Timeline: 3-12 months[2][3]
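The pathway split above reduces to a simple rule of thumb, sketched here in Python (the flag names are our own; a real determination should cite the specific sectoral legislation involved):

```python
def assessment_pathway(is_biometric_system: bool,
                       sectoral_law_requires_notified_body: bool) -> str:
    """Article 43 pathway choice as described above.

    Biometric identification/categorization systems and products whose
    sectoral EU legislation already mandates a notified body go to
    third-party assessment; other Annex III systems may use internal control.
    """
    if is_biometric_system or sectoral_law_requires_notified_body:
        return "notified_body"
    return "internal_control"
```

The practical consequence of landing in the `notified_body` branch is the 3–12 month, €10,000–€100,000 engagement quoted above, which is why the roadmap later advises starting that engagement 6–9 months before the deadline.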

Healthcare AI Under the EU AI Act

The EU AI Act doesn’t create one single healthcare deadline. Annex III high-risk systems start applying on August 2, 2026, while many medical-device systems linked to MDR or IVDR follow on August 2, 2027. Healthcare teams need to classify first and calendar second.

Healthcare deployments often land in high-risk categories because they either:

Not every healthcare feature is automatically high-risk. Ambient documentation, clinical decision support, diagnostics, utilization management, and access workflows can land in different buckets depending on product classification, intended purpose, and how the output is used. For an operational breakdown by clinical-AI type, see our dedicated guides on ambient AI scribe, CDSS high-risk classification, and AI diagnosis high-risk classification.

The Article 12 operational test: Article 12 logging isn’t satisfied by vague promises that logging exists somewhere. High-risk healthcare teams need records they can actually retrieve and explain when regulators, customers, or conformity assessors ask how a system operated in a specific patient encounter.

GPAI obligations and Code of Practice

The AI Act sets specific obligations for general-purpose AI models — the foundation models that providers like OpenAI, Anthropic, Google, Microsoft, Mistral, Cohere and IBM place on the EU market. These obligations have been live since 2 August 2025; the Commission’s enforcement actions (requests for information, model access, recall) start on 2 August 2026.[15]

GPAI Code of Practice — April 2026 status

The Code, finalised on 10 July 2025, is the practical implementation pathway. As of April 2026 it has been signed by approximately 24 providers — including Anthropic, Google, IBM, Microsoft, Mistral AI, OpenAI, Cohere, Aleph Alpha, Almawave, Amazon, Black Forest Labs, ServiceNow and WRITER.[15]

Notable absences: Meta did not sign. xAI signed only the Safety and Security chapter and elects "alternative adequate means" for transparency and copyright.

Definition and classification (Article 3)

A general-purpose AI model is defined as an AI model "trained on large amounts of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market."[1]

GPAI models are further classified into two tiers based on compute thresholds:

Standard GPAI models

Article 53: Models that do not meet the systemic risk threshold (compute < 10^25 FLOPs). Subject to baseline transparency requirements.

Requirements:

  • Technical documentation per Annex XI (architecture, training data, compute resources)
  • Information and documentation to downstream providers to enable compliance
  • Copyright policy including sufficiently detailed summary of training data content
  • Publicly available summary of training data subject to copyright protection

GPAI models with systemic risk

Article 55: Models with high impact capabilities (compute ≥ 10^25 FLOPs or equivalent) posing systemic risks due to reach and capabilities. Enhanced obligations.

Additional Requirements:

  • Model evaluation per standardized protocols including adversarial testing
  • Assessment and mitigation of systemic risks (including cybersecurity threats)
  • Tracking, documenting, and reporting serious incidents to AI Office and national authorities
  • Ensuring adequate cybersecurity protection for model and physical infrastructure

Examples likely meeting threshold (April 2026): GPT-5 series, Claude Opus 4.6/4.7, Gemini 3 Pro, Llama 4. Final classification rests with the AI Office.
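The compute presumption is straightforward to express. A sketch assuming the 10^25 FLOP threshold described above (the AI Office can also designate models below the threshold as systemic-risk, which this does not model):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # rebuttable presumption threshold

def gpai_tier(training_compute_flops: float) -> str:
    """Classify a GPAI model by the compute presumption alone.

    This models only the compute test; final classification rests with
    the AI Office, as noted above.
    """
    if training_compute_flops >= SYSTEMIC_RISK_FLOPS:
        return "systemic_risk"  # Article 55 obligations on top of Article 53
    return "standard"           # Article 53 baseline obligations
```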

Compliance deadline

GPAI obligations have been enforceable since 2 August 2025. The Commission’s first enforcement actions — requests for information, model access, and recalls — are scheduled for 2 August 2026, giving providers a 12-month working period with the AI Office.[15]

Penalties and enforcement

The AI Act establishes one of the most stringent penalty regimes in technology regulation, mirroring GDPR’s structure with fines tied to global annual turnover.

Penalty tiers (Article 99)

EU AI Act penalty structure

Violation Type Maximum Fine Articles
Prohibited AI practices €35M or 7% global revenue Article 5
Non-compliance with high-risk requirements €15M or 3% global revenue Articles 8-15, 17, 26
Non-compliance with GPAI obligations €15M or 3% global revenue Articles 53, 55
Providing incorrect information €7.5M or 1% global revenue Article 71 (authority requests)
Non-compliance with transparency obligations €7.5M or 1% global revenue Article 50 (limited-risk AI)

Important: "Global annual turnover" means worldwide revenue for the preceding financial year. For multinational corporations, 7% could reach billions of euros. Whichever amount is higher applies—meaning even startups face €35M maximum fines for prohibited AI practices.[1]
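The "whichever is higher" rule means the effective cap is a simple maximum. A sketch of the arithmetic using the tier amounts from the table above (an illustration of the cap mechanics, not guidance on how authorities set actual fines):

```python
def max_fine_eur(violation: str, global_annual_turnover_eur: float) -> float:
    """Maximum fine per Article 99: the higher of a fixed cap and a
    percentage of worldwide annual turnover for the preceding year."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_non_compliance": (15_000_000, 0.03),
        "gpai_non_compliance": (15_000_000, 0.03),
        "incorrect_information": (7_500_000, 0.01),
        "transparency_non_compliance": (7_500_000, 0.01),
    }
    fixed_cap, pct = tiers[violation]
    return max(fixed_cap, pct * global_annual_turnover_eur)
```

This is why the fixed caps bite for startups (7% of a small turnover is below €35M) while the percentage dominates for multinationals.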

Enforcement structure

The AI Act establishes a multi-layered enforcement architecture:

EU AI Office (Article 64)

Central coordination body within the European Commission responsible for GPAI model oversight, implementing acts, and cross-border enforcement coordination. Exclusive competence over systemic-risk GPAI models.

National competent authorities (Article 70)

Each member state must designate at least one authority to enforce the AI Act within its territory. National authorities have investigatory powers including access to training data, source code, and algorithms. May impose penalties per Article 99.

Notified bodies (Articles 31–39)

Independent third-party conformity assessment bodies designated by member states to conduct assessments of high-risk AI systems requiring external certification (e.g., biometric systems, medical devices). Must be accredited per ISO 17065.

European AI Board (Article 65)

Expert group consisting of national authorities promoting consistent application across member states, advising the Commission, and contributing to international AI governance cooperation.

Market surveillance powers (Article 74)

National authorities have extensive investigatory powers, including access to training data, documentation, and (where necessary) source code, the ability to require corrective action, and the power to restrict, withdraw, or recall non-compliant systems from the market.

Member-state implementation, April 2026

The Act applies directly across the 27 member states, but each must designate national competent authorities (Article 70) and a single point of contact. As of April 2026 the picture is uneven; below is the practitioner-confirmed status, with country deep dives linked.[17][18][19][20][21][22]

Country Competent authority April 2026 status
Germany BNetzA (main MSA); BaFin (financial-sector high-risk); KoKIVO coordination centre Draft KI-MIG explicitly excludes BfDI. BfDI publishes AI/GDPR-interplay guidance.
France Decentralised: CNIL, ANSSI, sectoral regulators; PEReN technical support Multi-authority bill pending Parliament. ANSSI tasked with AI Act cybersecurity competences (Apr 2026).
Italy AgID (notifying); ACN (market surveillance, EU SPOC); Garante (data overlap) National AI Law No. 132/2025 in force October 2025. Implementing decrees due 10 Oct 2026.
Spain AESIA (national MSA, operational since June 2024) 16 detailed compliance guides published December 2025; sandbox running; draft national AI Law (March 2025).
Netherlands Hybrid 10-authority model led by AP; AP+RDI co-coordinate Public consultation on Implementation Act open 20 April – 1 June 2026. Public bodies must register high-risk AI from 2 Aug 2026.
Belgium BIPT (main MSA); GBA/APD on data; FAMHP, FSMA sectoral; 21 fundamental-rights bodies BIPT designated by 2025-2029 Federal Government Agreement; missed Aug 2025 governance deadline.
Poland KRiBSI (Commission for AI Development and Security) — single MSA model February 2026 draft confirms KRiBSI; operational support nested in Ministry of Digital Affairs. UODO disputes advisory-only role.

Compliance roadmap

Organizations should implement a phased approach aligned with regulatory deadlines and system risk classification. The August 2026 high-risk deadline leaves minimal margin for delay.

GLACIS framework

EU AI Act compliance roadmap

1. AI system inventory and risk classification (month 1)

Catalog all AI systems across the organization. Classify each system per AI Act risk categories (prohibited, high-risk, limited-risk, minimal-risk) using Annex III criteria. Identify systems requiring immediate action (prohibited) vs. August 2026 deadline (high-risk). Document intended purpose, deployment context, and affected populations.

2. High-risk system prioritisation (months 1–2)

For high-risk systems, assess current state against Articles 9-15 requirements. Identify gaps in risk management, data governance, logging, transparency, human oversight, and cybersecurity. Prioritize systems by business criticality and compliance gap severity. Determine which systems require notified body assessment vs. internal control.

3. Risk management system implementation (months 2–4)

Establish continuous risk management per Article 9. Implement processes for identifying foreseeable risks, estimating harm likelihood and severity, evaluating post-market monitoring findings, and adopting mitigation measures. Document risk management activities per Annex IV technical documentation requirements. Integrate with existing ISO 42001 or NIST AI RMF frameworks where implemented.

4. Technical documentation and logging (months 3–6)

Prepare technical documentation per Annex IV covering system description, development process, data governance, monitoring procedures, validation results, and risk management. Implement automated logging capabilities per Article 12 ensuring traceability of inputs, outputs, and decisions. Ensure logs are tamper-evident and retained appropriately. Generate evidence that controls execute—not just policies documenting intent.
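One common way to get the tamper-evident property this step calls for is a hash chain, where each entry's hash covers its predecessor's. A minimal sketch (the Act does not mandate any particular mechanism; this is one illustrative approach):

```python
import hashlib

GENESIS = "0" * 64  # sentinel hash for the first entry

def append_entry(chain: list, payload: str) -> list:
    """Append a log entry whose hash covers the previous entry's hash,
    so any after-the-fact edit to an earlier entry is detectable."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"payload": payload, "prev": prev_hash, "hash": entry_hash})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link; a modified payload breaks all later hashes."""
    prev = GENESIS
    for entry in chain:
        if entry["prev"] != prev:
            return False
        expected = hashlib.sha256((prev + entry["payload"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Verification like this is the difference between a policy saying logs exist and evidence that the logged history has not been rewritten.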

5. Quality management system and conformity assessment (months 4–9)

Establish quality management system per Article 17 covering compliance strategy, design controls, post-market monitoring, and corrective actions. For systems requiring notified body assessment, initiate engagement 6-9 months before August 2026 deadline (assessments take 3-12 months). For internal control pathway, prepare EU declaration of conformity and affix CE marking.

6. Post-market monitoring and continuous compliance (ongoing)

Implement post-market monitoring systems tracking performance, incidents, and user feedback. Establish serious incident reporting procedures per Article 73. Maintain technical documentation and update as systems evolve. Conduct periodic reviews ensuring ongoing compliance with Articles 9-15. Prepare for market surveillance authority inspections and information requests per Article 74.

Critical insight: Organizations waiting until 2026 will face notified body capacity constraints, rushed implementations prone to defects, and potential enforcement actions for non-compliance. Start now—the deadline is closer than it appears.

GPAI provider roadmap

Foundation model providers have been subject to these obligations since 2 August 2025. Ongoing priorities include:

All GPAI models (Article 53)

  • Prepare technical documentation (Annex XI)
  • Document training data sources and compute
  • Publish copyright policy and training data summary
  • Provide downstream compliance documentation

Systemic-risk GPAI (Article 55)

  • Conduct model evaluation with adversarial testing
  • Assess and document systemic risks
  • Implement incident tracking and reporting
  • Establish cybersecurity protections

Frequently asked questions

Does the EU AI Act apply to US companies?

Yes. The AI Act has extraterritorial reach similar to GDPR. It applies to providers placing AI systems on the EU market or putting them into service in the EU, regardless of the provider’s location. It also applies where AI output is used in the EU, even if the provider and deployer are both located outside the EU. US companies serving EU customers or processing EU data must comply.

How do I know if my AI system is high-risk?

Check if your system falls under Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, or justice. Also check if it’s a safety component of a product covered by Annex I harmonization legislation (medical devices, machinery, etc.). If uncertain, document your risk assessment rationale—regulators may disagree with your classification.

What is a notified body and when do I need one?

Notified bodies are independent third-party organizations designated by member states to conduct conformity assessments. You need one if your high-risk AI system is: (1) a biometric identification or categorization system, or (2) a product covered by EU harmonization legislation requiring third-party assessment (e.g., most medical devices). Notified body assessments cost €10,000-€100,000 and take 3-12 months.

Can I use ChatGPT or Claude in my high-risk AI system?

Yes, but you bear compliance responsibility as the deployer. General-purpose AI models (like GPT-5.2, Claude Opus 4.5) are subject to GPAI obligations (Articles 53-55), but if you integrate them into a high-risk use case (e.g., employment decisions, creditworthiness), you become the "provider" of the high-risk system and must ensure full compliance with Articles 8-15 including risk management, logging, human oversight, and conformity assessment.

How does the EU AI Act interact with GDPR?

The AI Act and GDPR are complementary. GDPR governs personal data processing; the AI Act governs AI systems. AI systems processing personal data must comply with both. Key overlaps: data governance (Article 10 AI Act, Articles 5-6 GDPR), transparency (Article 13 AI Act, Articles 13-14 GDPR), and automated decision-making (Article 14 AI Act, Article 22 GDPR). Non-compliance can trigger penalties under both regulations.

What should I do if my AI system causes harm after August 2026?

Report serious incidents to national competent authorities within 15 days per Article 73. A serious incident is any incident leading to death, serious health damage, serious fundamental rights disruption, or serious property/environmental damage. Implement corrective actions, update risk management documentation, and notify affected deployers. Failure to report can result in penalties. Incident response planning should be part of your quality management system (Article 17).

References

  1. [1] European Union. "Regulation (EU) 2024/1689 of the European Parliament and of the Council." Official Journal of the European Union, July 12, 2024. EUR-Lex 32024R1689
  2. [2] European Commission. "Questions and Answers: Artificial Intelligence Act." March 13, 2024. europa.eu
  3. [3] European Parliament. "EU AI Act: First Regulation on Artificial Intelligence." News release, March 13, 2024. europarl.europa.eu
  4. [4] European Parliament. "Artificial Intelligence Act: MEPs Adopt Landmark Law." Press release, March 13, 2024. europarl.europa.eu
  5. [5] European AI Office. "AI Office Governance Structure." European Commission, 2024. ec.europa.eu
  6. [6] NIST. "Artificial Intelligence Risk Management Framework (AI RMF 1.0)." January 2023. nist.gov
  7. [7] ISO/IEC. "ISO/IEC 42001:2023 Information Technology — Artificial Intelligence — Management System." December 2023. iso.org
  8. [8] European Commission. "Annexes to Regulation (EU) 2024/1689." EUR-Lex, July 12, 2024. EUR-Lex Annexes
  9. [9] Future of Life Institute. "EU Artificial Intelligence Act: Analysis and Recommendations." Policy report, 2024. futureoflife.org
  10. [10] Stanford HAI. "AI Index Report 2025." Stanford Human-Centered AI, March 2025. hai.stanford.edu
  11. [11] European Commission. "EU AI Act: Implementation Timeline and Milestones." Digital Strategy Portal, 2024. ec.europa.eu
  12. [12] European Parliament. "Artificial Intelligence Act: delayed application, ban on nudifier apps." Press release, 23 March 2026. europarl.europa.eu
  13. [13] European Parliament. "Digital Omnibus on AI." Legislative Train Schedule, 2026. europarl.europa.eu
  14. [14] Morrison Foerster. "EU Digital Omnibus on AI: What Is in It and What Is Not?" December 2025. mofo.com
  15. [15] European Commission. "The General-Purpose AI Code of Practice." Digital Strategy Portal, accessed April 2026. digital-strategy.ec.europa.eu
  16. [16] CEN-CENELEC. "Update on CEN and CENELEC’s Decision to Accelerate the Development of Standards for Artificial Intelligence." 23 October 2025. cencenelec.eu
  17. [17] Simmons & Simmons. "Germany’s Implementation Act for the EU AI Act (KI-MIG)." 2025. simmons-simmons.com
  18. [18] AI Regulation. "EU AI Act Implementation: France Still Without Designated National Competent Authorities." 2026. ai-regulation.com
  19. [19] IAPP. "Italy becomes first EU member state to pass an AI law." 2025. iapp.org
  20. [20] Stibbe. "Dutch proposal for AI supervision: hybrid cooperation between market supervisory authorities." 2026. stibbe.com
  21. [21] BIPT. "Application of the AI Act." Belgian Institute for Postal Services and Telecommunications. bipt.be
  22. [22] Blavatnik School of Government. "The AI Act’s enforcement gap: what Poland’s new regulator reveals about Europe’s challenge." 2026. bsg.ox.ac.uk
  23. [23] European Commission. "European AI Office." Digital Strategy Portal, accessed April 2026. digital-strategy.ec.europa.eu
  24. [24] Inside Privacy. "Spain Issues Guidance Under the EU AI Act." December 2025. insideprivacy.com

Ready to make the receipts

EU AI Act compliance in days, not months.

GLACIS produces cryptographic evidence that your AI controls execute correctly — mapped to Articles 9–15, ISO 42001 and NIST AI RMF. Get audit-ready documentation before the August 2026 baseline (or whatever the Omnibus settles on).

Start your compliance sprint · Runtime security assessment →

Related guides

Country implementation guides

Each EU member state is establishing its own national competent authority and implementation approach. These guides cover country-specific requirements:

Role-specific guides

EU AI Act compliance requires cross-functional collaboration. These guides provide tailored action plans for key stakeholders:

High-risk classification guides

Annex III of the EU AI Act lists specific high-risk use cases with enhanced requirements. These guides explain how to classify and comply:

Framework crosswalks

Map EU AI Act requirements against other compliance frameworks to identify overlaps and reduce duplicate effort: