UK / EU · GLACIS guides · Updated April 2026

UK vs EU AI Act: which rules apply to which AI

A working guide for organisations operating in both markets — refreshed for April 2026 with the EU Omnibus on AI delaying high-risk obligations to December 2027 (Annex III) and August 2028 (Annex I product-embedded), and the UK’s DSIT Blueprint replacing a near-term AI Bill.

By Joe Braidwood Updated 24 April 2026
  • Oct 2025 — UK DSIT Blueprint; EU Omnibus on AI proposed
  • 28 Apr 2026 — EU Omnibus political-agreement target (trilogue)
  • 2 Dec 2027 — Proposed EU high-risk Annex III hard date
  • 2 Aug 2028 — Proposed EU Annex I product-embedded hard date
The bottom line

The UK and EU have taken fundamentally different approaches to AI regulation. The EU AI Act is comprehensive, horizontal regulation with prescriptive requirements across risk tiers. The UK’s pro-innovation framework relies on existing sectoral regulators to apply five non-statutory principles, with the AI Growth Lab as the live legislative vehicle.

Critical point: UK companies placing AI systems on the EU market, or whose AI outputs are used by EU recipients, must comply with the EU AI Act regardless of UK rules. The UK’s lighter-touch approach offers no exemption. The Omnibus delay reshapes when high-risk obligations bite — not whether they apply.

UK

UK approach

  • Principles-based, sectoral regulation
  • Five non-statutory principles
  • No central AI authority — DRCF coordinates
  • Outcome-focused, flexible
  • Prioritises innovation and growth (DSIT Blueprint, AI Growth Lab)
EU

EU AI Act

  • Horizontal, prescriptive regulation
  • Four risk tiers with specific requirements
  • European AI Office + national authorities
  • Process-focused, compliance-driven
  • Prioritises safety and rights — Omnibus delay reshapes timing

Detailed comparison

Regulatory structure
  UK: Principles-based, sectoral. Existing regulators (FCA, MHRA, ICO, Ofcom, CMA) apply principles within their domains. AI Growth Lab introduces sandboxed flexibility.
  EU AI Act: Horizontal regulation. Single legal framework applies across all sectors with uniform requirements.

Central authority
  UK: None. AI Security Institute evaluates frontier AI but does not regulate. DRCF coordinates regulators.
  EU AI Act: European AI Office at EU level. Each member state designates national competent authorities.

Risk classification
  UK: No formal tiers. Risk assessment left to individual regulators and organisations.
  EU AI Act: Four tiers: unacceptable (banned), high-risk (strict requirements), limited risk (transparency), minimal (no requirements).

Prohibited practices
  UK: No AI-specific prohibitions in law. Existing laws (Equality Act, UK GDPR) apply.
  EU AI Act: Explicit bans: social scoring, real-time remote biometric ID (with exceptions), manipulation, emotion recognition in workplaces and schools.

High-risk requirements
  UK: Depends on sector. FCA: Consumer Duty, SM&CR. MHRA: medical device rules; new MHRA AI medical device framework due later in 2026. No unified AI-specific requirements.
  EU AI Act: Conformity assessment, risk management, data governance, logging, human oversight, transparency, accuracy / robustness testing, registration.

Documentation
  UK: Existing sectoral requirements apply. No AI-specific documentation mandates.
  EU AI Act: Extensive: technical documentation, quality management system, instructions for use, conformity declaration, EU registration.

Penalties
  UK: Vary by regulator. FCA can impose unlimited fines. ICO up to £17.5M / 4% turnover.
  EU AI Act: Up to €35M or 7% of global turnover (prohibited practices), €15M or 3% (high-risk), €7.5M or 1% (transparency).

Timeline
  UK: Ongoing. No comprehensive AI law. DSIT Blueprint + AI Growth Lab; primary legislation paused.
  EU AI Act: Prohibitions in force Feb 2025. GPAI Aug 2025. Omnibus proposes hard delays: Annex III high-risk to 2 Dec 2027; Annex I product-embedded to 2 Aug 2028.

Legal basis
  UK: Non-statutory principles. Relies on existing legislation (UK GDPR, DUAA 2025, sectoral statute).
  EU AI Act: Directly applicable EU regulation with legal force in all member states.
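For undertakings, the EU AI Act penalty caps apply as the higher of the fixed amount and the turnover percentage, so the effective ceiling scales with company size. A minimal sketch of that arithmetic — tier labels follow the table above; the function name is illustrative, not an official formula:

```python
def eu_ai_act_max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound on an EU AI Act fine for an undertaking.

    For undertakings the cap is the HIGHER of a fixed amount and a
    percentage of worldwide annual turnover (Article 99).
    """
    caps = {
        "prohibited": (35_000_000, 0.07),
        "high_risk": (15_000_000, 0.03),
        "transparency": (7_500_000, 0.01),
    }
    fixed, pct = caps[tier]
    return max(fixed, pct * global_turnover_eur)

# A firm with €2bn global turnover facing a prohibited-practice breach:
print(eu_ai_act_max_fine("prohibited", 2_000_000_000))  # 140000000.0
```

For a smaller firm the fixed amount dominates: at €100M turnover, the high-risk cap stays at €15M because 3% of turnover is only €3M.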

Extraterritorial impact: when EU rules apply to UK companies

Critical for UK organisations

The EU AI Act applies to UK companies when they place AI systems on the EU market or when their AI outputs are used in the EU. This includes SaaS products accessible to EU customers and AI embedded in products sold in the EU. The Omnibus delay shifts when obligations bite — not whether they apply.

The EU AI Act has broad extraterritorial reach. Article 2 specifies it applies to:

  • Providers placing AI systems on the EU market — regardless of where they are established
  • Deployers of AI systems located within the EU
  • Providers and deployers in third countries where AI output is used in the EU
  • Importers and distributors of AI systems in the EU

Practical implications

  • Selling AI software to EU customers triggers EU AI Act compliance
  • EU subsidiaries using UK-developed AI must ensure compliance
  • AI outputs affecting EU recipients (credit decisions, content moderation, employment screening) may trigger obligations
  • Products containing AI sold in the EU must meet EU AI Act requirements
  • Under the Omnibus, the practical planning window on Annex III high-risk systems extends to 2 December 2027 — but technical standards remain incomplete; build expectations into 2026 governance now

Dual compliance strategy

Organisations operating in both markets should consider a "highest common denominator" approach — building systems that meet EU AI Act requirements, which will inherently satisfy UK expectations. The Omnibus delay buys time on EU enforcement; it doesn’t reduce the bar.

Recommended approach

1. Classify AI systems under EU AI Act

Determine the risk tier (unacceptable, high, limited, minimal) for each AI system. This provides a structured framework even for UK-only operations and pre-positions for the 2027 / 2028 Omnibus dates.
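This classification step can be run as a first-pass triage over an AI inventory. A minimal sketch — the use-case labels and tier rules here are hypothetical examples; real classification turns on mapping each system to Annex III categories (and Annex I for product-embedded AI) with legal review, not on keywords:

```python
# Illustrative triage of an AI inventory into EU AI Act risk tiers.
# Use-case labels and tier membership below are assumptions for the
# sketch, not an authoritative reading of the Act.

HIGH_RISK_USES = {"credit_scoring", "insurance_underwriting",
                  "employment_screening", "medical_device"}
LIMITED_RISK_USES = {"chatbot", "synthetic_content"}

def classify(use_case: str) -> str:
    if use_case in HIGH_RISK_USES:
        return "high"      # Annex III: conformity assessment, registration
    if use_case in LIMITED_RISK_USES:
        return "limited"   # Article 50 transparency duties
    return "minimal"       # no AI-Act-specific requirements

inventory = ["credit_scoring", "chatbot", "demand_forecasting"]
print({s: classify(s) for s in inventory})
# {'credit_scoring': 'high', 'chatbot': 'limited', 'demand_forecasting': 'minimal'}
```

Even for UK-only operations, recording a tier per system gives a structured register to show sectoral regulators.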

2. Build to EU standards

Implement EU AI Act requirements (documentation, risk management, human oversight) which exceed UK expectations and pre-position for the new MHRA AI medical device framework, FCA Mills Review outcomes, and any future UK legislation.

3. Layer UK sectoral requirements

Add UK-specific obligations from relevant regulators (FCA Consumer Duty, PRA SS1/23, MHRA medical device rules, ICO ADM requirements, DUAA section 103 from 19 June 2026) on top of EU compliance.

4. Maintain dual documentation

The EU mandates specific documentation formats; UK regulators accept a range. A continuous evidence layer that produces both, without parallel data collection, is the operational answer.
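The "capture once, render twice" idea behind dual documentation can be sketched as a single evidence record with two renderings. Field and method names here are illustrative assumptions, not the GLACIS schema or the official Annex IV template:

```python
from dataclasses import dataclass

@dataclass
class EvidenceRecord:
    """One runtime attestation, captured once, rendered two ways."""
    system_id: str
    timestamp: str
    risk_checks: list     # controls evaluated at decision time
    human_override: bool  # did a human intervene in the decision
    outcome: str

    def eu_technical_file_entry(self) -> dict:
        # Structured entry for an Annex IV-style technical file
        # (illustrative field names, not the official template).
        return {"system": self.system_id,
                "event_time": self.timestamp,
                "risk_management_evidence": self.risk_checks,
                "human_oversight": self.human_override}

    def uk_outcome_evidence(self) -> str:
        # Narrative, outcome-focused line a UK sectoral regulator can read.
        oversight = "with" if self.human_override else "without"
        return (f"{self.timestamp} {self.system_id}: outcome '{self.outcome}' "
                f"{oversight} human intervention; "
                f"checks: {', '.join(self.risk_checks)}")

rec = EvidenceRecord("credit-model-v3", "2026-04-24T10:02:11Z",
                     ["affordability_check", "bias_monitor"], True, "declined")
print(rec.eu_technical_file_entry())
print(rec.uk_outcome_evidence())
```

The design point is that both renderings read from the same captured fields, so neither compliance programme needs its own data-collection pipeline.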

Key differences in practice

Risk assessment

UK

No prescribed methodology. Organisations determine approach. Regulators expect "proportionate" risk consideration aligned with the five principles.

EU

Article 9 mandates risk management systems for high-risk AI: identification, analysis, evaluation and mitigation throughout the lifecycle.

Human oversight

UK

DUAA requires meaningful human intervention for ADM (in force from 5 February 2026). ICO ADM and profiling guidance consultation closes 29 May 2026. Sectoral expectations layer on top — FCA SM&CR, MHRA clinical oversight.

EU

Article 14 mandates human oversight for high-risk AI with specific capabilities: understanding, monitoring, interpreting, deciding to override, and stopping the system.

Transparency

UK

"Appropriate transparency" principle. ICO guidance on explaining AI decisions. No mandatory disclosures for AI interaction (unlike EU chatbot rules).

EU

Article 50: users must be informed when interacting with AI (chatbots), viewing synthetic content, or subject to emotion recognition or biometric categorisation.

Timeline comparison

Feb 2025
  UK: AI Safety → AI Security Institute rename
  EU: Prohibited AI practices banned
Jun 2025
  UK: DUAA Royal Assent
Aug 2025
  UK: DUAA stage 1 effective
  EU: GPAI model obligations apply
Sep 2025
  UK: PRA SS1/23 honeymoon over — supervisory tightening
Oct 2025
  UK: DSIT Blueprint for AI regulation; AI Growth Lab call for evidence opens
  EU: Commission proposes Digital Omnibus on AI
Dec 2025
  UK: AISI Frontier AI Trends Report
27 Jan 2026
  UK: FCA Mills Review launched
5 Feb 2026
  UK: DUAA bulk data-protection provisions in force
Mar 2026
  UK: ICO AI & biometrics strategy update
  EU: Council and Parliament adopt Omnibus positions
Apr 2026
  UK: FCA AI Live Testing cohort 2; AI Airlock phase 2 closes
  EU: Omnibus political-agreement target trilogue (28 Apr)
19 Jun 2026
  UK: DUAA right to complain (section 103) in force
Jul 2026
  EU: Targeted Omnibus Official Journal publication
Summer 2026
  UK: FCA Mills Review recommendations to Board; FCA good-and-poor practice report
Autumn 2026
  UK: MHRA International Reliance Framework; new AI medical device framework expected
2 Dec 2027
  EU: Proposed Annex III high-risk hard date
2 Aug 2028
  EU: Proposed Annex I product-embedded high-risk hard date

Sector-specific considerations

Financial services

UK (FCA / PRA)
  • Consumer Duty applies to AI outcomes (under Mills Review)
  • SM&CR accountability for AI decisions
  • SS1/23 Model Risk Management — automated monitoring expected
  • No AI-specific rules (Rathi Dec 2025; still holds Apr 2026)
  • AI Live Testing cohort 2 running through end-2026
  • BoE FPC priority: agentic AI in payments and markets
EU AI Act
  • Credit scoring AI is high-risk (Annex III)
  • Insurance underwriting AI is high-risk
  • Full conformity assessment required
  • Mandatory registration in EU database
  • Annex III hard date proposed 2 December 2027 (Omnibus)

Healthcare

UK (MHRA)
  • AI Airlock phase 2 closes April 2026
  • New MHRA AI medical device framework due later 2026
  • International Reliance Framework Autumn 2026
  • CE marking recognition until 30 June 2030; UKCA mandatory thereafter
  • Post-market surveillance in force since June 2025
  • NHS AI & Digital Regulations Service consolidates guidance
EU AI Act + MDR
  • AI medical devices are high-risk
  • Dual compliance: AI Act + MDR / IVDR
  • Conformity assessment via notified body
  • Annex I product-embedded hard date proposed 2 August 2028 (Omnibus)

How GLACIS supports dual UK / EU compliance

Operating in both markets means meeting two standards — the EU’s prescriptive requirements and the UK’s principle-based expectations. The Omnibus delay reshapes the EU clock; the UK clock keeps ticking via Mills, SS1/23 and the new MHRA framework. GLACIS provides a single evidence infrastructure that satisfies both, avoiding parallel compliance programmes.

Build once, prove to both

GLACIS attestation records are structured to meet EU AI Act documentation requirements (Article 11) while also satisfying UK sectoral regulator expectations. One evidence infrastructure, two compliance outcomes.

EU AI Act technical documentation

High-risk AI systems need extensive technical files under the EU AI Act. GLACIS generates continuous evidence of risk management, data governance, human oversight and accuracy — the core Annex IV requirements that the Omnibus delay does not relax.

UK principles evidence

UK regulators want proof of outcomes, not process checklists. GLACIS generates signed evidence of the runtime events and control decisions that matter, giving the FCA, PRA, MHRA, ICO or Bank of England the proof they need without prescriptive formats.

Mapping GLACIS to dual compliance

Risk management
  EU AI Act: Article 9 risk management system
  UK: Sectoral guidance
  GLACIS evidence: Continuous risk attestation with timestamped controls
Human oversight
  EU AI Act: Article 14
  UK: DUAA meaningful intervention (5 Feb 2026)
  GLACIS evidence: Override and escalation records with operator context
Transparency
  EU AI Act: Article 13 / Article 50
  UK: Principle 2
  GLACIS evidence: Full audit trail exportable in multiple formats
Accuracy and robustness
  EU AI Act: Article 15
  UK: Principle 1
  GLACIS evidence: Performance metrics and guardrail trigger records
Post-market monitoring
  EU AI Act: Article 72
  UK: Sectoral PMS — MHRA from June 2025; new framework due later 2026
  GLACIS evidence: Continuous production attestation for incident correlation
Individual rights / contestation
  EU AI Act: Article 86 right to explanation
  UK: DUAA section 103 (19 Jun 2026)
  GLACIS evidence: Per-decision retrieval for DSAR, complaints and override review
Build the receipt layer that satisfies both regulators

The Omnibus delay buys time on EU enforcement; the UK clock keeps running via Mills, SS1/23 and the new MHRA framework. Get a dual-compliance assessment that maps gaps in both frameworks at once.

Get a Runtime Security Assessment →

Related guides