UK · GLACIS guides · Updated April 2026

UK AI regulation: the pro-innovation approach

A working guide to the UK’s principles-based, sectoral framework — refreshed for April 2026 with the DSIT Blueprint, AI Growth Lab, the FCA Mills Review, PRA SS1/23 supervisory reset, and the Data (Use and Access) Act 2025 in force.

By Joe Braidwood · 22 min read · Updated 24 April 2026
What changed since January 2026

  • Oct 2025 — DSIT Blueprint for AI regulation published
  • 5 Feb 2026 — DUAA data-protection provisions in force
  • Apr 2026 — AISI 2026 paper stream live; AI Airlock phase 2 closes
  • 19 Jun 2026 — DUAA right to complain and section 103 in force
Executive summary

The UK has charted a distinctly different path from the EU. While the EU AI Act sets horizontal, prescriptive requirements across risk tiers, the UK relies on existing sectoral regulators to interpret and apply five core principles within their domains.

In February 2025 the AI Safety Institute was renamed the AI Security Institute, narrowing its focus to national security threats. The Data (Use and Access) Act 2025 received Royal Assent in June 2025 and most data-protection provisions came into force on 5 February 2026, easing constraints on automated decision-making while preserving safeguards.

Key finding: UK organisations navigate a patchwork of sectoral guidance from the FCA, PRA, MHRA, ICO and Ofcom. Less prescriptive than the EU AI Act, but more complex for firms operating across multiple regulated sectors or in both UK and EU markets — particularly given the EU Omnibus delay reshaping high-risk timelines.

  • 5 core principles
  • 6+ sectoral regulators in play
  • 75% of FCA-regulated firms using AI
  • No Bill — primary AI legislation paused

The pro-innovation framework

The UK’s approach was formally established in the March 2023 White Paper "AI regulation: a pro-innovation approach" (Command Paper 815) and reaffirmed through the government’s February 2024 response. It explicitly prioritises flexibility and outcomes over prescriptive compliance.

Core philosophy

Unlike the EU AI Act’s horizontal regulation with risk-based classifications, the UK framework:

  • Empowers existing regulators to interpret and apply AI principles within their domains
  • Avoids statutory requirements — the five principles remain non-binding guidance
  • Prioritises outcomes over processes — regulators focus on results rather than mandating specific technical measures
  • Maintains flexibility as AI technology evolves rapidly

Government response (February 2024)

  • Reaffirmed the "agile and principles-based" approach
  • Committed £10 million to boost regulators’ AI expertise
  • Required the FCA, ICO, MHRA, Ofcom and CMA to publish AI strategic approaches by 30 April 2024
  • Flagged potential future binding requirements on developers of the "most powerful" AI systems

AI Opportunities Action Plan and 2026 progress

The AI Opportunities Action Plan was launched on 13 January 2025, endorsing all 50 recommendations from the Matt Clifford review. The DSIT progress report of 29 January 2026 — One Year On — reports 38 of 50 commitments met. Headline elements include:

  • £14 billion in private investment commitments
  • Creation of a National Data Library
  • New AI Energy Council to address compute infrastructure
  • Proposed UK Sovereign AI unit
  • Continued emphasis on growth and opportunity over restrictive regulation

DSIT Blueprint for AI regulation (October 2025)

On 21 October 2025 DSIT published its Blueprint for AI regulation, replacing a standalone AI Bill as the immediate legislative vehicle. The centrepiece is the AI Growth Lab — a national programme of issue-specific regulatory sandboxes in which specific regulations can be temporarily relaxed for licensed pilots. The target sectors named are professional services, healthcare, transport and advanced manufacturing.

A call for evidence on the Lab’s design closed on 2 January 2026; sandboxes are expected to spin up sector by sector through 2026 and 2027 with strict licensing and time-limited terms. The government’s preference for principles-based regulation over horizontal statute remains the live policy.

The five core principles

The UK’s AI governance framework centres on five cross-sectoral principles that regulators are expected to interpret and apply within their domains:

1. Safety, security and robustness

AI systems should function securely, safely, and robustly throughout their lifecycle. This includes protection against cyber-attacks, adversarial inputs, and unexpected failures.

2. Appropriate transparency and explainability

Organisations should provide appropriate information about AI systems. The level of transparency should be proportionate to the context and potential impact of decisions.

3. Fairness

AI systems should not produce discriminatory or unfair outcomes. This aligns with existing equality legislation including the Equality Act 2010.

4. Accountability and governance

Clear accountability structures should exist for AI systems. Organisations should have governance frameworks ensuring responsible development and deployment.

5. Contestability and redress

Individuals should be able to challenge AI decisions and seek appropriate remedies when harmed. This includes access to human review of automated decisions.

Important: These principles are currently non-statutory. While regulators are expected to incorporate them into their guidance, there is no legal requirement for organisations to demonstrate compliance with the principles themselves — only with existing sectoral regulations as interpreted through the lens of these principles.

AI Security Institute (AISI)

The UK’s AI Safety Institute was established in November 2023 as the world’s first state-backed AI evaluation body, initially funded with £100 million from the Frontier AI Taskforce.

Rename to AI Security Institute (February 2025)

On 14 February 2025 Technology Secretary Peter Kyle announced the renaming to the AI Security Institute. Speaking at the Munich Security Conference, Kyle explained the change reflects a "renewed focus" on national security and protecting citizens from crime.

Change in focus

The Institute will not concentrate on bias or freedom of speech, but on serious AI risks with security implications — including cyber-attacks, chemical and biological weapons development, and criminal misuse such as fraud and child sexual abuse material generation.

Frontier AI Trends Report and 2026 publications stream

AISI published its inaugural Frontier AI Trends Report on 18 December 2025, drawing on two years of evaluations across more than thirty frontier models. Headline findings: universal jailbreaks were found in every system tested (although the expert time required is rising for some models), and open-source models now trail closed-frontier systems by roughly four to eight months.

Through Q1 and Q2 2026 the Institute has run a steady research cadence. Recent papers include:

  • Propensity inference — environmental contributors to LLM behaviour (24 April 2026)
  • Infusion — shaping model behaviour by editing training data via influence functions (10 April 2026)
  • How are AI agents used? — evidence from 177,000 MCP tools (26 March 2026)
  • Quantifying frontier LLM capabilities for container sandbox escape (23 March 2026)
  • Measuring AI agents’ progress on multi-step cyber attack scenarios (16 March 2026)

Activities and partnerships

  • Model evaluations: pre-deployment testing of frontier models in partnership with the US AI Safety Institute, Anthropic, and others — 30+ systems to date
  • Open-source tooling: Inspect, InspectSandbox, InspectCyber and ControlArena evaluation frameworks
  • Funding: £15 million Alignment Project, £8 million Systemic Safety Grants, £5 million Challenge Fund
  • Partnerships: criminal-misuse team with the Home Office; research partnership with Google DeepMind; San Francisco office

Sectoral regulators

Unlike the EU’s centralised AI Office, the UK relies on existing regulators to govern AI within their domains. Each published a strategic AI approach in 2024 and has continued to refine guidance through 2025–26.

Regulator | Sector | Latest AI guidance
FCA | Financial services | AI Update (April 2024); AI Lab; Mills Review (Jan 2026); AI Live Testing cohort 2 (April 2026)
PRA | Banks and insurers | SS1/23 Model Risk Management (effective May 2024); honeymoon-over signal Sept 2025
MHRA | Medical devices | AI Airlock phase 2 (closes April 2026); new AI medical device framework due later 2026; International Reliance Framework Autumn 2026
ICO | Data protection | AI & biometrics strategy update (March 2026); ADM and profiling consultation closes 29 May 2026; AI & ADM code of practice in development
Ofcom | Communications | Online Safety Act AI implications; synthetic media guidance
CMA | Competition | Foundation models review; AI partnership monitoring
Bank of England | Financial stability | AI roundtables summary (Feb 2026); FPC record (April 2026) flagged agentic AI in payments and markets as the next focus

Digital Regulation Cooperation Forum (DRCF)

The FCA, CMA, ICO and Ofcom coordinate through the DRCF. The DRCF AI and Digital Hub continues to provide joint guidance for organisations navigating multiple frameworks, and in 2026 has been a key channel for cross-regulator alignment around the AI Growth Lab.

UK GDPR and automated decision-making

Until the Data (Use and Access) Act 2025, Article 22 of the UK GDPR was the primary legal framework for automated decision-making (ADM) in the UK. Reformed Article 22 obligations now sit alongside the DUAA’s expanded lawful bases.

Article 22 core rights

Individuals have the right not to be subject to decisions based solely on automated processing (including profiling) that produce:

  • Legal effects on them (for example legal status, entitlement to benefits)
  • Similarly significant effects (job offers, mortgage applications, insurance terms)

ICO AI and biometrics strategy update (March 2026)

The ICO’s March 2026 strategy update names three priority areas: foundation models, ADM in recruitment and public services, and police use of facial recognition. Recent ICO outputs include:

  • Updated ADM and profiling guidance — open for consultation until 29 May 2026; will inform a forthcoming AI and ADM code of practice
  • Emerging-tech report on agentic AI (January 2026)
  • Response to the Home Office consultation on biometrics, FRT and similar technologies (February 2026)
  • Recruitment Rewired — updated guidance on automation in recruitment with the March 2026 blog "Automated decisions can streamline hiring with the right safeguards"
  • Explaining decisions made with AI — joint guidance with the Alan Turing Institute, still the canonical reference

Data (Use and Access) Act 2025

The Data (Use and Access) Act 2025 received Royal Assent on 19 June 2025, introducing significant changes to UK data protection law and its interaction with AI. Most data-protection provisions came into force on 5 February 2026; the remaining individual-rights provisions commence on 19 June 2026.

Key changes for automated decision-making

In force from 5 February 2026
  • New lawful basis: recognised legitimate interests for ADM — covering crime prevention, safeguarding vulnerable people, and emergency response
  • Meaningful human intervention: a "competent person" must be able to review automated decisions
  • Required safeguards: individuals must be informed, able to contest decisions, and access human review
  • Special category data: the stricter regime continues to apply where sensitive data is involved
  • Reformed adequacy test for international transfers — protection in third countries must be "not materially lower" than the UK standard

Implementation timeline

  • Stage 1 (20 August 2025): initial provisions in effect
  • Stage 2 (30 September 2025): additional changes effective
  • 5 February 2026: bulk of data-protection provisions in force, including new ADM lawful basis and reformed children’s protections
  • 19 June 2026: new individual right to complain and section 103 mandatory complaints procedure

Regulatory timeline

Already in effect

Date | Development
April 2024 | FCA AI Update published
17 May 2024 | PRA SS1/23 Model Risk Management effective
May 2024 | MHRA AI Airlock phase 1 launched
14 February 2025 | AI Safety Institute renamed AI Security Institute
19 June 2025 | Data (Use and Access) Act 2025 Royal Assent
21 October 2025 | DSIT Blueprint for AI regulation; AI Growth Lab call for evidence opens
18 December 2025 | AISI Frontier AI Trends Report published
27 January 2026 | FCA Mills Review launched
5 February 2026 | DUAA bulk data-protection provisions in force, including new ADM lawful basis
February 2026 | Bank of England summary of AI roundtables published
March 2026 | ICO AI & biometrics strategy update
April 2026 | FCA AI Live Testing cohort 2 selected; AI Airlock phase 2 closes

Coming in 2026 and beyond

Date | Development
29 May 2026 | ICO ADM and profiling guidance consultation closes
19 June 2026 | DUAA right to complain and section 103 in force
Summer 2026 | FCA Mills Review recommendations to Board; FCA good-and-poor practice report on AI
Autumn 2026 | MHRA International Reliance Framework; new AI medical device framework expected later 2026
Q1 2027 | FCA AI Live Testing evaluation report
2 December 2027 | EU AI Act high-risk obligations (Annex III) — proposed hard date under Omnibus delay
2 August 2028 | EU AI Act product-embedded high-risk (Annex I) — proposed hard date under Omnibus delay
30 June 2030 | MHRA: UKCA mark required for medical devices (CE marking recognition ends)

UK vs EU AI Act: key differences

For organisations operating in both UK and EU markets, the differences between these frameworks are material.

Aspect | UK approach | EU AI Act
Regulatory structure | Principles-based, sectoral | Horizontal legislation
Central authority | None — AISI evaluates only; DRCF coordinates | European AI Office + national authorities
Risk classification | No formal tiers | Four tiers: unacceptable, high, limited, minimal
Prohibited practices | None specified in AI law (existing laws apply) | Explicit bans — social scoring, certain biometrics, manipulation
Compliance obligations | Flexible, outcome-focused | Prescriptive requirements per risk tier
Statutory basis | Non-statutory principles; sectoral statute (DUAA, MDR, etc.) | Directly applicable EU regulation
Current focus (2026) | Security and growth — DSIT Blueprint, AI Growth Lab | Safety and fundamental rights — Omnibus delay reshaping high-risk timeline
Extraterritorial impact

UK companies placing AI systems on the EU market, or whose AI outputs are used by EU recipients, remain in scope of the EU AI Act regardless of UK rules. Under the proposed Digital Omnibus on AI, EU high-risk obligations move to 2 December 2027 (Annex III stand-alone) and 2 August 2028 (Annex I product-embedded). UK firms gain a longer planning window — but only if technical standards land in time.

UK AI compliance checklist

Even without prescriptive AI-specific requirements, the following remains the working list:

Identify applicable sectoral regulators

Determine which regulators (FCA, MHRA, ICO, etc.) have jurisdiction over your AI use cases

Review regulator-specific AI guidance

Each regulator has published its strategic approach — ensure your practices align

Assess ADM under UK GDPR/DUAA

Ensure automated decisions have appropriate safeguards and human review mechanisms

Document accountability structures

Designate accountable individuals for AI governance (84% of FCA-regulated firms have done this)

Consider EU AI Act obligations

If operating in EU markets, ensure compliance with EU requirements regardless of UK rules

How GLACIS supports UK AI compliance

The UK’s principles-based approach gives organisations flexibility — and asks them to show, on demand, that the principles were applied. When the FCA asks how accountability is enforced, the ICO asks for ADM records, or the MHRA asks for post-market evidence, the answer needs to be made of receipts, not policy documents.

Continuous attestation → accountability and governance

Real-time evidence collection with cryptographic proofs. Every AI interaction is logged with tamper-evident records — demonstrating the accountability principle to the FCA, MHRA or ICO without manual audit trails.
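“Tamper-evident” logging is typically built on hash chaining: each entry commits to the hash of the entry before it, so altering any past record invalidates everything after it. The sketch below shows the general technique only — it is an assumption-laden illustration, not GLACIS’s actual implementation:

```python
import hashlib
import json

# Minimal hash-chain sketch of a tamper-evident audit log.
def append_entry(log: list[dict], event: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"model": "scoring-v3", "decision": "refer", "guardrail": "pass"})
append_entry(log, {"model": "scoring-v3", "decision": "approve", "guardrail": "pass"})
print(verify_chain(log))  # True
log[0]["event"]["decision"] = "approve"   # editing history breaks the chain
print(verify_chain(log))  # False
```

A production system would add signatures and external anchoring, but even this minimal chain is enough to show a regulator that records were not rewritten after the fact.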

Evidence pack → regulator inquiries

When the FCA asks how AI decisions are made, or the ICO requests ADM documentation, audit-ready evidence packages are already assembled — structured records showing what the AI did, why, and with what safeguards.

AI Readiness Score → gap assessment

Measure alignment against the five UK principles and the relevant sectoral requirements. Identify gaps before regulators find them, with prioritised remediation steps.

Mapping GLACIS to UK principles

UK principle | GLACIS capability
Safety, security, robustness | Continuous monitoring detects anomalies and drift. Evidence of guardrails in action.
Transparency and explainability | Full audit trail of AI inputs, outputs and decision factors. Exportable for DUAA / ADM individual requests.
Fairness | Sampling and attestation across user cohorts enables bias-detection evidence.
Accountability and governance | Cryptographic receipts prove controls were active at time of decision. Links to SM&CR responsibilities.
Contestability and redress | Retrieval of specific decision records for individual complaints or subject access requests — including DUAA section 103 complaints from 19 June 2026.
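Serving a contestability or subject access request reduces to pulling one individual’s decision records from the audit log, oldest first, with internal-only fields redacted. A sketch under assumed field names (none of this is a GLACIS schema):

```python
# Hypothetical export for a complaint or subject access request.
def export_for_subject(log: list[dict], subject_id: str,
                       redact: tuple = ("internal_score",)) -> list[dict]:
    rows = [e for e in log if e.get("subject_id") == subject_id]
    rows.sort(key=lambda e: e["ts"])  # chronological order for the recipient
    return [{k: v for k, v in e.items() if k not in redact} for e in rows]

audit_log = [
    {"subject_id": "u-101", "decision": "refer", "ts": "2026-04-21T09:14:00Z",
     "internal_score": 0.41},
    {"subject_id": "u-202", "decision": "decline", "ts": "2026-04-20T10:05:00Z"},
    {"subject_id": "u-101", "decision": "approve", "ts": "2026-04-20T10:02:00Z",
     "internal_score": 0.77},
]
pack = export_for_subject(audit_log, "u-101")
print([e["decision"] for e in pack])  # ['approve', 'refer']
print("internal_score" in pack[0])    # False
```

The point is that redress is a retrieval problem only if the records were captured per decision in the first place.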

Make the receipts before the regulator asks for them

A targeted assessment of UK AI governance gaps — across the FCA, PRA, MHRA, ICO and DUAA — and a roadmap to close them with continuous attestation, not paperwork.

Get a Runtime Security Assessment →
