
UK healthcare AI regulation: MHRA, CQC, NHS, NICE

A working guide to the UK’s healthcare AI regulatory landscape — refreshed for April 2026, with MHRA SaMD classification, AI Airlock phase 2 closing, the new MHRA AI medical device framework due later in 2026, the National Commission into the Regulation of AI in Healthcare, and the consolidated NHS AI & Digital Regulations Service.

By Joe Braidwood · 15 min read · Updated 24 April 2026
  • Q1 2026: National Commission into the Regulation of AI in Healthcare call for evidence
  • Apr 2026: AI Airlock phase 2 closes; outputs feed the new MHRA framework
  • Autumn 2026: MHRA International Reliance Framework launches
  • Later in 2026: new MHRA AI medical device framework expected
What changed since January 2026
  • AI Airlock phase 2 closes April 2026. Outputs feed directly into a new MHRA AI medical device framework due later in 2026, informed by the recently created National Commission into the Regulation of AI in Healthcare.
  • MHRA International Reliance Framework launches Autumn 2026 — enabling greater MHRA reliance on approvals from comparator regulators (FDA, Health Canada). Expected to compress UKCA timelines for SaMD already cleared abroad.
  • NHS AI & Digital Regulations Service (digitalregulations.innovation.nhs.uk) is now the consolidated entry point bringing together MHRA, NICE, NHS England and CQC guidance. Replaces reliance on the older NHS AI Lab single-source view.
  • Joint MHRA–FDA–Health Canada principles on predetermined change control plans (PCCPs) and ML-enabled device transparency remain the international anchor — useful pre-positioning for the new UK framework.
  • NHS AI strategic roadmap (2025–2028): NHS-wide rollout of validated AI diagnostic tools and AI scribes targeted from 2027; investment in AI infrastructure runs through 2028.
  • Data (Use and Access) Act 2025 (DUAA) provisions in force from 5 February 2026 — new lawful basis of recognised legitimate interests for automated decision-making (ADM), with safeguards. The Section 103 right to complain commences 19 June 2026 and applies to ADM in clinical settings.
Executive summary

The UK’s healthcare AI framework operates through multiple specialised bodies. MHRA regulates AI as medical devices under UK MDR 2002 — classification drives the obligations. CQC oversees AI use within healthcare providers through its fundamental standards. NICE sets evidence standards for digital health technology adoption. The new NHS AI & Digital Regulations Service consolidates guidance for buyers and developers; the older NHS AI Lab brand sits behind it.

The AI Airlock regulatory sandbox, launched by MHRA in 2024, is now in phase 2 — closing April 2026 and feeding into a new framework due later in 2026. It provides a controlled pathway for novel AI medical devices to demonstrate safety and effectiveness with real patients, while informing regulatory policy development.

The path to compliance: (1) determine whether the AI qualifies as a medical device; (2) achieve appropriate MHRA classification and registration; (3) meet NICE evidence standards for NHS adoption; (4) ensure deploying organisations satisfy CQC governance requirements; (5) maintain post-market surveillance and continuous evidence. AI is in the clinical environment whether or not anyone is watching it — the framework is asking who is watching, with what record.

UK healthcare AI regulatory landscape

Unlike the EU’s horizontal AI Act, the UK regulates healthcare AI through existing sectoral frameworks — primarily medical device regulation for AI products and healthcare provider oversight for deployment settings. This approach sits inside the UK’s broader pro-innovation AI regulatory strategy.

MHRA

Medicines and Healthcare products Regulatory Agency

Regulates AI as medical devices under UK MDR 2002. Responsible for UKCA marking, classification, market surveillance, and the AI Airlock sandbox. Key authority for pre-market AI medical device approval.

CQC

Care Quality Commission

Inspects and rates healthcare providers in England. Assesses AI use through fundamental standards framework covering safe care, governance, and staffing. Evaluates whether providers properly govern and monitor AI tools.

NICE

National Institute for Health and Care Excellence

Sets evidence standards through the Evidence Standards Framework for Digital Health Technologies. Evaluates clinical and economic evidence for NHS adoption. Guidance informs commissioning decisions.

NHS AI & Digital Regulations Service

Consolidated entry point — formerly NHS AI Lab

Consolidated guidance for AI and digital tech in health and social care, bringing together MHRA, NICE, NHS England and CQC. Replaces the older NHS AI Lab single-source view and runs alongside the NHS AI strategic roadmap (2025–2028) and the AI in Health and Care Award.

Additional Oversight Bodies

ICO

Data protection and UK GDPR compliance for health data processing. Automated decision-making requirements under Article 22.

HRA

Health Research Authority oversees AI research involving NHS patients. Ethics approval for clinical studies.

AI Security Institute

Evaluates frontier AI safety, though healthcare-specific guidance remains with MHRA and NHS bodies.

MHRA medical device classification for AI

Under UK MDR 2002, software qualifies as a medical device if it has a medical intended purpose. AI used for diagnosis, monitoring, prediction or treatment recommendation is typically classified as Software as a Medical Device (SaMD). Classification determines regulatory requirements, from self-declaration for Class I to full conformity assessment for Class III.

Is the AI a medical device?

Likely yes, if it:

  • Diagnoses, prevents, monitors, predicts or treats disease
  • Provides clinical decision support that influences treatment
  • Analyses medical images, pathology slides or clinical data for diagnosis
  • Monitors physiological parameters with clinical implications

Probably not, if it:

  • Provides general health and wellness information only
  • Performs purely administrative functions (scheduling, billing)
  • Acts as a simple data repository without clinical analysis
  • Supports research without direct clinical application
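The two indicative lists above amount to a first-pass screen on the software's claimed functions. A minimal sketch of that screen, using hypothetical function labels (the helper and its vocabulary are illustrative, not an MHRA tool, and no substitute for regulatory advice):

```python
# Hypothetical triage helper: a first-pass screen of whether software is
# likely to qualify as a medical device under UK MDR 2002. The function
# labels mirror the indicative lists above and are illustrative only.

LIKELY_DEVICE = {
    "diagnoses_or_treats_disease",
    "clinical_decision_support",
    "diagnostic_image_or_data_analysis",
    "clinical_physiological_monitoring",
}
LIKELY_NOT_DEVICE = {
    "general_wellness_info_only",
    "administrative_only",
    "data_repository_no_analysis",
    "research_only",
}

def screen_intended_purpose(claimed_functions: set[str]) -> str:
    """Return a rough triage outcome from a set of claimed functions."""
    if claimed_functions & LIKELY_DEVICE:
        return "likely medical device: seek MHRA classification"
    if claimed_functions <= LIKELY_NOT_DEVICE:
        return "probably not a medical device: document the rationale"
    return "borderline: obtain regulatory advice"

print(screen_intended_purpose({"clinical_decision_support"}))
# → likely medical device: seek MHRA classification
```

The screen is deliberately conservative: any device-like function dominates, and anything outside both lists falls to "borderline" rather than being waved through.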

SaMD classification under UK MDR 2002

  • Class I (low risk): wellness apps, symptom checkers providing general information only. Requirements: self-declaration, UKCA marking, registration with MHRA.
  • Class IIa (medium-low risk): clinical decision support, triage tools, non-critical monitoring. Requirements: Approved Body audit, QMS, clinical evidence.
  • Class IIb (medium-high risk): diagnostic imaging AI, cancer detection, treatment planning. Requirements: full Approved Body review; clinical trials may be required.
  • Class III (high risk): AI driving life-sustaining decisions, autonomous treatment. Requirements: stringent Approved Body review, prospective clinical studies.

Clinical Evidence Requirements

  • Analytical validation: Accuracy, sensitivity, specificity on representative data
  • Clinical validation: Performance in intended clinical setting with UK population
  • Real-world performance: Post-market surveillance and monitoring
  • Algorithm change protocol: Re-validation requirements for model updates
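The analytical-validation metrics listed above fall out of a binary confusion matrix. A minimal sketch, with illustrative counts (not drawn from any real device submission):

```python
# Minimal sketch: analytical-validation metrics (accuracy, sensitivity,
# specificity) from a binary confusion matrix. The counts are illustrative,
# not from any real submission.

def validation_metrics(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
    }

m = validation_metrics(tp=90, fp=10, tn=880, fn=20)
print(m)  # sensitivity ≈ 0.818, specificity ≈ 0.989
```

In a low-prevalence screening population like this one, accuracy (0.97) looks flattering while sensitivity (≈ 0.82) tells the clinically relevant story, which is why MHRA and NICE expect the disaggregated metrics, not a single accuracy figure.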

Technical Documentation

  • Intended purpose: Clear statement of clinical use and user population
  • Training data: Description of data sources, quality, and representativeness
  • Risk analysis: FMEA or equivalent covering AI-specific failure modes
  • Cybersecurity: Threat model and security controls for connected devices

AI Airlock and the new MHRA framework

The AI Airlock is MHRA’s regulatory sandbox for AI medical devices, launched in 2024. Phase 2 runs to April 2026, with outputs feeding directly into a new MHRA AI medical device framework due later in 2026 — informed by the recently created National Commission into the Regulation of AI in Healthcare. The Airlock generates real-world evidence on safety, performance and adaptive-algorithm behaviour, and shapes the regulatory policy that follows.

How the AI Airlock works

Phased testing for AI medical devices, with phase 2 closing April 2026

1. Application and assessment

Developers apply with details of the AI device, intended use, and preliminary safety evidence. MHRA assesses suitability for sandbox participation.

2. Controlled testing

Approved devices enter controlled testing with real patients in selected NHS sites. MHRA provides ongoing oversight and tailored regulatory advice.

3. Evidence generation

Real-world evidence collected on safety, effectiveness and AI behaviour. Continuous monitoring identifies issues early.

4. Regulatory pathway

Successful sandbox participants receive an expedited pathway to full market authorisation. Evidence informs broader regulatory policy and the new framework due later in 2026.

Benefits for developers

  • Direct engagement with MHRA on regulatory requirements
  • Real-world evidence generation in NHS settings
  • Faster pathway to market for successful devices
  • Regulatory certainty and reduced development risk

Benefits for the NHS

  • Early access to promising AI innovations
  • Evidence on AI performance in UK clinical settings
  • Protected testing environment with MHRA oversight
  • Influence on regulatory standards development

AI Airlock eligibility

Suitable candidates:

  • Novel AI/ML medical devices
  • Adaptive or continuously learning algorithms
  • AI with limited real-world evidence
  • Innovative intended uses

Requirements:

  • Preliminary safety and performance data
  • Clear intended purpose and user population
  • Commitment to transparency with MHRA
  • NHS partner site for testing
  • PCCP-aligned change-control approach (per joint MHRA-FDA-Health Canada principles)
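The PCCP idea in the last requirement is to pre-specify, at submission time, which model changes are authorised and the performance envelope each must stay inside. A hypothetical sketch of that check (the change types, fields and thresholds are illustrative assumptions, not MHRA-defined values):

```python
# Hypothetical sketch of a predetermined change control plan (PCCP) check:
# a model update is deployable without a new submission only if its change
# type was pre-authorised AND it stays inside the pre-specified performance
# envelope. All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Envelope:
    min_sensitivity: float
    min_specificity: float

# Pre-authorised change types and their performance envelopes.
PCCP = {
    "retrain_same_architecture": Envelope(min_sensitivity=0.85, min_specificity=0.95),
}

def change_permitted(change_type: str, sensitivity: float, specificity: float) -> bool:
    env = PCCP.get(change_type)
    if env is None:  # change type not pre-authorised: full re-validation needed
        return False
    return sensitivity >= env.min_sensitivity and specificity >= env.min_specificity

print(change_permitted("retrain_same_architecture", 0.88, 0.96))  # True
print(change_permitted("new_input_modality", 0.99, 0.99))         # False
```

Note the second case: strong metrics do not rescue an unplanned change type, which matches the joint MHRA-FDA-Health Canada framing that the plan, not the outcome, is what was authorised.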

NHS adoption pathway

MHRA clearance is necessary but not sufficient for NHS adoption. AI vendors must also meet NICE evidence standards, NHS data and security requirements, and demonstrate value to commissioners. The NHS AI & Digital Regulations Service consolidates the cross-regulator guidance for the journey.

NICE evidence standards framework for digital health technologies

1. Functional evidence

The technology works as intended. Technical performance, usability, accessibility, and integration capabilities.

2. Clinical evidence

Clinical outcomes improve. Comparative effectiveness, safety profile, and benefits across patient populations.

3. Economic evidence

Cost-effectiveness demonstrated. Resource impact, value for money, and budget impact analysis.

AI-specific NICE considerations

  • Algorithmic transparency and explainability
  • Training data quality and representativeness
  • Ongoing performance monitoring plans
  • Generalisability across settings and populations

NHS data security requirements

  • DSPT Compliance: Data Security and Protection Toolkit assessment required
  • DCB0129/DCB0160: Clinical risk management standards for health IT
  • Cyber Essentials Plus: Cybersecurity certification typically required
  • UK GDPR: Lawful basis for health data processing

NHS AI & Digital Regulations Service resources

  • Buyer’s Guide: Procurement guidance for AI in health and care
  • Algorithm Assurance: Framework for AI governance in the NHS
  • AI Ethics Initiative: Ethical guidance for health AI development
  • AI Award: Funding for promising AI health innovations

CQC oversight of healthcare AI

CQC assesses AI use through its fundamental standards, evaluating whether providers have appropriate governance, staff training, and monitoring in place. Key inspection focus areas:

  • AI governance structure and accountability
  • Staff training and competency assessment
  • Clinical validation and ongoing monitoring
  • Human oversight of AI-informed decisions
  • Incident reporting and response procedures
  • Patient information and consent processes
  • Integration with clinical workflows
  • Performance monitoring and audit trails

UK vs EU healthcare AI requirements

  • Regulatory framework. UK: sectoral (medical devices, data protection). EU: horizontal AI Act plus MDR/IVDR.
  • High-risk classification. UK: based on medical device class. EU: AI Act Annex III plus MDR class.
  • Conformity marking. UK: UKCA, with the MHRA International Reliance Framework from Autumn 2026. EU: CE marking plus AI Act compliance.
  • Regulatory sandbox. UK: AI Airlock (MHRA), phase 2 closing April 2026. EU: AI Act regulatory sandboxes run by member states.
  • Fundamental rights. UK: UK GDPR, DUAA 2025, Human Rights Act. EU: AI Act FRIA, EU Charter, GDPR.
  • High-risk timing. UK: new MHRA AI medical device framework due later in 2026. EU: Omnibus proposes 2 Dec 2027 (Annex III) and 2 Aug 2028 (Annex I product-embedded).
  • Market access. UK: CE marking recognised until 30 June 2030, UKCA mandatory thereafter. EU: single market access.

Dual compliance strategy: organisations seeking access to both UK and EU markets should plan for compliance with both frameworks. The EU Omnibus on AI is on track to push high-risk obligations to December 2027 (Annex III) and August 2028 (Annex I), giving more planning time — but only if the technical standards land. The new MHRA AI medical device framework due later in 2026 is the parallel UK clock. See our detailed UK vs EU AI Act comparison →

Key takeaways

For AI developers

  1. Determine early whether your AI qualifies as a medical device and its likely classification
  2. Consider the AI Airlock for novel or adaptive AI devices requiring real-world evidence
  3. Build NICE evidence requirements into development from the start
  4. Ensure robust documentation of training data, validation, and performance monitoring

For healthcare providers

  1. Verify UKCA marking and MHRA registration before procuring AI medical devices
  2. Establish AI governance structures that meet CQC expectations
  3. Ensure clinical validation in your specific patient population
  4. Implement ongoing performance monitoring and incident reporting

How GLACIS supports UK healthcare AI compliance

AI is in the clinical environment whether or not anyone is watching it. The MHRA, CQC, NHS and NICE each ask the same underlying question — is the AI safe, effective and well-governed — and each wants the answer in a different form. GLACIS provides one continuous evidence layer that feeds all of them, so the same observation produces the right artefact for the right regulator.

Post-market surveillance

MHRA post-market surveillance, in force since June 2025, requires continuous monitoring of AI medical device performance — and the new framework due later in 2026 is expected to tighten lifecycle and PCCP expectations. GLACIS continuously attests AI behaviour with timestamped, tamper-evident records, feeding vigilance reporting and trend analysis.
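Timestamped, tamper-evident records of this kind are commonly built as a hash chain, where each record commits to the hash of its predecessor, so any later edit breaks verification. A minimal sketch of the general technique (an illustration, not the GLACIS implementation):

```python
# Minimal sketch of a tamper-evident, timestamped attestation log using a
# hash chain. Each record commits to the previous record's hash; editing
# any earlier record invalidates the chain. Illustrative only.
import hashlib
import json
import time

def append_record(log: list[dict], event: dict) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"ts": time.time(), "event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list[dict]) -> bool:
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_record(log, {"device": "triage-model", "output": "amber", "override": False})
append_record(log, {"device": "triage-model", "output": "red", "override": True})
print(verify(log))                         # True
log[0]["event"]["override"] = True         # retroactive edit
print(verify(log))                         # False (tampering detected)
```

The property that matters for vigilance reporting is the second print: a record can still be read after the fact, but it can no longer be silently rewritten.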

CQC inspection evidence

When CQC inspectors ask how AI is integrated safely into clinical workflows, evidence packs show what controls were active, when they triggered, and how outcomes were monitored — supporting fundamental-standards compliance.

NICE evidence requirements

The NICE evidence standards framework asks for ongoing performance monitoring. GLACIS generates cryptographically verifiable real-world performance evidence, supporting the functional, clinical and economic evidence dossier.

Algorithm assurance

The NHS AI & Digital Regulations Service surfaces consolidated assurance expectations — drift, bias, lifecycle. GLACIS samples AI outputs across patient cohorts, creating attestation records that demonstrate ongoing algorithmic fairness and stability without retrofitting governance after the fact.
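Cohort-stratified monitoring of this kind reduces to computing the same performance metric per cohort and flagging cohorts that fall materially behind. A hypothetical sketch, with made-up cohorts, records and disparity margin:

```python
# Hypothetical sketch of cohort-stratified performance monitoring: compute
# sensitivity per patient cohort and flag any cohort more than `margin`
# below the best-performing cohort. Cohort labels, records and the margin
# are illustrative assumptions.
from collections import defaultdict

def sensitivity_by_cohort(records: list[dict]) -> dict[str, float]:
    tp: dict[str, int] = defaultdict(int)
    fn: dict[str, int] = defaultdict(int)
    for r in records:
        if r["truth"]:  # condition actually present
            (tp if r["prediction"] else fn)[r["cohort"]] += 1
    return {c: tp[c] / (tp[c] + fn[c]) for c in set(tp) | set(fn)}

def flag_disparities(per_cohort: dict[str, float], margin: float = 0.05) -> list[str]:
    best = max(per_cohort.values())
    return sorted(c for c, s in per_cohort.items() if best - s > margin)

records = (
    [{"cohort": "18-40", "truth": True, "prediction": True}] * 90
    + [{"cohort": "18-40", "truth": True, "prediction": False}] * 10
    + [{"cohort": "over-75", "truth": True, "prediction": True}] * 75
    + [{"cohort": "over-75", "truth": True, "prediction": False}] * 25
)
per_cohort = sensitivity_by_cohort(records)
print(sorted(per_cohort.items()))    # [('18-40', 0.9), ('over-75', 0.75)]
print(flag_disparities(per_cohort))  # ['over-75']
```

The aggregate sensitivity here (0.825) hides a 15-point gap between cohorts, which is exactly the failure mode cohort-stratified sampling exists to surface.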

Mapping GLACIS to UK healthcare regulatory requirements

  • MHRA post-market surveillance: continuous monitoring with incident-correlated evidence; trend-analysis data for periodic safety reports.
  • CQC safe care and treatment: audit trail of AI recommendations, clinician overrides and outcome tracking; evidence of human oversight.
  • NICE performance monitoring: real-world accuracy metrics with cryptographic integrity, exportable for HTAs and procurement evaluations.
  • NHS algorithm assurance: cohort-stratified sampling for bias detection; model version tracking and drift alerts.
  • ICO ADM and DUAA rights: individual decision retrieval for patient access requests; meaningful human-involvement records; Section 103 complaints from 19 June 2026.

Stand up the evidence layer before the framework lands

The new MHRA AI medical device framework is due later in 2026. The CQC, NHS and NICE all want continuous, attributable evidence. GLACIS provides one underlying receipt layer that feeds each.

Talk to us → Healthcare AI assessment →