UK financial services AI: FCA and PRA, April 2026

How Consumer Duty, SM&CR and PRA SS1/23 Model Risk Management apply to AI in banking, insurance and investment management — refreshed for April 2026 with the FCA Mills Review, AI Live Testing cohort 2, the SS1/23 supervisory reset, and the Bank of England’s agentic-AI focus.

By Joe Braidwood · 15 min read · Updated 24 April 2026
What changed since January 2026

  • Sep 2025: PRA SS1/23 honeymoon over; supervisory tightening
  • 27 Jan 2026: FCA Mills Review launched (recommendations summer 2026)
  • Feb 2026: Bank of England AI roundtables summary published
  • Apr 2026: FCA AI Live Testing cohort 2 selected (8 firms)
Executive summary

The FCA and PRA regulate AI in UK financial services through existing frameworks rather than AI-specific rules. In December 2025 FCA CEO Nikhil Rathi confirmed no AI-specific regulations are planned, citing the technology’s rapid evolution "every three to six months." That position still holds at April 2026.

Key frameworks: Consumer Duty (good customer outcomes), SM&CR (senior management accountability), and PRA SS1/23 (model risk management for banks using internal models). The FCA’s 2024 survey found 75% of firms already using AI, with 84% having an accountable individual for their AI approach.

Key finding: while no prescriptive AI rules exist, firms must demonstrate that AI-driven outcomes meet existing regulatory expectations — particularly around fairness, transparency and consumer protection. The FCA will "intervene in cases of egregious failures." The Mills Review and the SS1/23 supervisory reset are the two stories that move the needle in 2026.

  • 75% of firms using AI (FCA 2024 survey)
  • 84% have an AI accountable person
  • 17% using foundation models
  • 8 firms in AI Live Testing cohort 2

FCA approach to AI

The FCA published its AI Update in April 2024, setting out how it expects firms to manage AI within existing regulatory frameworks. The core message: outcomes-focused regulation applies equally to AI.

No AI-specific rules — Mills Review now in flight

In December 2025 FCA CEO Nikhil Rathi confirmed the FCA will not introduce AI-specific rules:

"We do not plan to introduce extra regulations for AI. Instead, we’ll rely on existing frameworks… The technology evolves every three to six months, making prescriptive rules impractical."

On 27 January 2026 the FCA launched the Mills Review, led by Executive Director Sheldon Mills — a long-look review of how AI may reshape retail financial services to 2030 and beyond. Engagement-paper deadline 24 Feb 2026; recommendations to FCA Board summer 2026. The Review will assess whether Consumer Duty expectations should be revised to account for AI. The "no AI-specific rules" stance still holds — but the substance of Consumer Duty under AI is now under live review.

Key regulatory frameworks

  • Threshold conditions: firms must remain fit, proper, and capable of being effectively supervised
  • Consumer Duty: firms must deliver good outcomes for retail customers
  • SM&CR: senior managers are accountable for AI governance within their responsibilities
  • Principles for businesses: including Principle 6 (customers’ interests) and Principle 7 (communications)
  • DUAA 2025: new ADM lawful basis in force from 5 February 2026; section 103 right to complain commences 19 June 2026
Enforcement approach

The FCA will "intervene in cases of egregious failures that are not dealt with." There is no prescriptive compliance checklist. Firms must be able to demonstrate that AI systems produce fair, transparent outcomes — and the bar for that demonstration is rising as the Mills Review unfolds.

Consumer Duty and AI

The Consumer Duty (in force since July 2023) is the FCA’s primary lens for assessing AI in retail financial services. It requires firms to act to deliver good outcomes across four areas:

1. Products and services

AI used in product design, recommendation engines, or personalisation must produce products that meet customer needs. Algorithmic bias that leads to unsuitable recommendations violates this outcome.

2. Price and value

AI pricing algorithms must deliver fair value. Dynamic pricing or personalised offers must not exploit behavioural biases or create unfair outcomes for vulnerable customers.

3. Consumer understanding

AI-generated communications must be clear and understandable. LLM-drafted content must meet the same standards as human-written materials.

4. Consumer support

AI chatbots and automated support must provide equivalent quality to human support. Customers must be able to access human assistance when needed.

Practical implications

  • Test AI systems for discriminatory outcomes before deployment
  • Monitor AI-driven customer outcomes on an ongoing basis
  • Document how AI contributes to (or risks undermining) good outcomes
  • Ensure human oversight of AI decisions affecting customers
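The first implication above, testing for discriminatory outcomes before deployment, can be sketched as a simple approval-rate parity check. This is an illustrative example only; the metric and the 5-point tolerance are assumptions, not an FCA-prescribed test.

```python
from collections import defaultdict

def approval_rate_parity(decisions, max_gap=0.05):
    """Check that approval rates across groups stay within a chosen
    tolerance (max_gap is an illustrative threshold, not a regulatory
    figure). decisions: list of (group_label, approved_bool) pairs.
    Returns (passed, per-group approval rates)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return gap <= max_gap, rates

# Example: a 25-point gap between groups fails a 5-point tolerance.
passed, rates = approval_rate_parity(
    [("A", True), ("A", True), ("A", False), ("A", True),
     ("B", True), ("B", False), ("B", False), ("B", True)]
)
```

In practice firms would run richer fairness metrics and statistical tests, but even a check of this shape produces a documented, repeatable pre-deployment artefact of the kind the Consumer Duty outcomes require.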

SM&CR accountability for AI

The Senior Managers and Certification Regime (SM&CR) drives individual accountability for AI governance. The FCA’s 2024 survey found 72% of firms report executive leadership as accountable for AI use cases.

Accountability expectations

Firms should consider which Senior Management Functions (SMFs) are accountable for:

  • AI strategy and governance: often the CEO (SMF1) or a designated SMF
  • AI risk management: Chief Risk Officer (SMF4)
  • AI in customer outcomes: relevant business line SMFs
  • AI model risk: SMF responsible for internal models (PRA-regulated firms)
  • AI data governance: often linked to operations or technology SMFs
FCA finding

84% of surveyed firms have an accountable individual for their AI approach. Accountability is often split — most firms report three or more accountable persons or bodies, which can create gaps. The Mills Review and the SS1/23 supervisory reset will both push toward sharper, demonstrable lines of accountability through 2026.

PRA SS1/23: model risk management

Supervisory Statement 1/23, effective from 17 May 2024, sets out the PRA’s expectations for model risk management at banks using internal models for regulatory capital. It explicitly covers AI and machine learning models.

September 2025 — honeymoon over

The PRA put firms on notice that the SS1/23 implementation honeymoon is over. For the 2026 supervisory cycle, auditors are looking for proof of progress in automated monitoring, data aggregation and operating boundaries. Boards and senior management are expected to understand aggregate model risk — the "big picture" of how inter-related models and data structures impact safety and soundness. The persistent finding: a "technology gap" between policy work and monitoring tooling. Paper-based compliance is no longer sufficient.

Scope

SS1/23 applies to UK-incorporated banks, building societies, and PRA-designated investment firms with internal model approval for:

  • Credit risk (IRB approach)
  • Market risk (IMA approach)
  • Counterparty credit risk (IMM approach)

The five principles

  1. Model identification and classification: all AI / ML models must be in the model inventory. Foundation models (LLMs) require documented use cases.
  2. Governance: clear ownership and accountability for AI models, with board oversight of material model risks; the 2026 PRA cycle asks for aggregate, not just per-model, understanding.
  3. Development, implementation and use: AI model development must follow documented standards, with explainability requirements for complex models.
  4. Independent validation: AI models require validation proportionate to risk; generative AI may need tailored validation approaches.
  5. Risk mitigants: fallback mechanisms for AI model failures. Real-time, automated monitoring is now the supervisory expectation, not a "nice to have."

Foundation models and LLMs

  • Must be included in model inventories with documented use cases
  • Risk classification should reflect downstream applications
  • Validation may require novel approaches due to model complexity
  • Third-party LLMs (GPT, Claude, Gemini) still require oversight
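An inventory entry covering the points above might look like the following sketch. The fields and risk tiers are assumptions for illustration; SS1/23 does not prescribe a schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One entry in an SS1/23-style model inventory (illustrative fields)."""
    model_id: str
    name: str
    model_type: str            # e.g. "foundation_model", "irb_pd", "fraud_score"
    third_party: bool          # True for external LLMs (GPT, Claude, Gemini)
    use_cases: list = field(default_factory=list)  # documented downstream uses
    risk_tier: str = "unclassified"  # should reflect downstream applications
    owner_smf: str = ""        # accountable Senior Management Function

entry = ModelInventoryEntry(
    model_id="llm-001",
    name="Customer-communications LLM",
    model_type="foundation_model",
    third_party=True,
    use_cases=["drafting customer letters"],
    risk_tier="high",          # downstream use touches Consumer Duty outcomes
    owner_smf="SMF4",
)
```

The key design point, per the bullets above, is that third-party status does not exempt a model: an external LLM still gets an entry, a documented use case, and a risk tier driven by what it is used for.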

Bank of England agentic-AI focus

The Bank of England’s Financial Policy Committee record (April 2026) concluded that financial-system participants have not yet adopted advanced or agentic AI in ways that pose systemic risk — but flagged that risks are likely to increase, potentially rapidly, as deployment expands. The FPC asked the Bank and the FCA to do further work on agentic AI in payments and financial markets. Expect this to flow into PRA supervisory expectations and SS1/23 interpretation through 2026 and 2027.

FCA AI Lab and the Mills Review

The FCA launched its AI Lab in October 2024 to help firms develop AI safely and responsibly. It comprises five initiatives — and is now sitting alongside the longer-horizon Mills Review.

Supercharged Sandbox

Test AI innovations with real consumers in a controlled regulatory environment.

AI Live Testing — cohort 2

Eight firms selected April 2026: Aereve, Coadjute, Barclays, Experian, GoCardless, Lloyds (Scottish Widows), UBS, Palindrome. Use cases include AI-enabled targeted support for investments, credit-score insights, agentic payments, AML, KYC. Testing through end-2026; FCA evaluation report Q1 2027.

AI Live Testing — cohort 1

Confirmed September 2025; participating from October 2025. Feedback statement FS25/5 published.

AI Spotlight

Analysis of emerging AI trends and their regulatory implications.

AI Sprint

Time-limited initiatives addressing sector-wide AI challenges. Feedback published April 2025.

AI Input Zone

Channel for industry feedback on AI challenges. Open November 2024–January 2025.

Mills Review (January 2026 onwards)

The Mills Review, led by Executive Director Sheldon Mills, was launched on 27 January 2026. Its remit: how AI may reshape retail financial services to 2030 and beyond. The engagement paper covered four themes:

  • Future evolution of AI — including more powerful, autonomous and agentic systems
  • Effects on markets and firms — competition, market structure, UK competitiveness
  • Effects on consumers — how consumers will be influenced by AI and influence financial markets back
  • How financial regulators may need to evolve — to continue ensuring retail financial markets work well

Engagement-paper deadline 24 February 2026; recommendations to FCA Board summer 2026. The Review will assess whether Consumer Duty expectations should be revised to account for AI. Pair with the FCA’s planned good-and-poor practice report on AI (also due later in 2026) for the next phase of supervisory direction.

AI use cases and regulatory risks

The FCA’s 2024 survey, AI Live Testing cohort 2 use cases, and the BoE FPC’s April 2026 record together flag where deployment is concentrated:

  • Credit decisioning: Consumer Duty fair value, explainability, bias testing, DUAA ADM rights
  • Fraud detection: false-positive rates, customer impact, operational resilience
  • Customer service chatbots: consumer understanding, access to human support, complaint handling, section 103 from 19 June 2026
  • Robo-advice and AI-enabled targeted support: suitability, disclosure, human oversight, Consumer Duty (revisions under Mills Review)
  • Claims processing: fair treatment, explanation of decisions, escalation to humans
  • Risk modelling: SS1/23 model risk management, validation, documentation, automated monitoring
  • AML / KYC: effectiveness, false-positive management, human review
  • Agentic payments and markets: BoE FPC priority; further work commissioned (April 2026), systemic-risk focus tightening through 2026

Top perceived constraints

According to the FCA survey, firms identify these as the largest constraints on AI adoption:

  1. Data protection and privacy (regulatory)
  2. Resilience, cybersecurity and third-party rules (regulatory)
  3. Consumer Duty (regulatory)
  4. Safety, security and robustness of AI models (non-regulatory)
  5. Insufficient talent and skills (non-regulatory)

How GLACIS supports FCA and PRA compliance

Without AI-specific rules, financial services firms must prove their AI delivers good outcomes through existing regulatory frameworks. The FCA expects evidence of Consumer Duty compliance and is now reviewing it under Mills. The PRA expects SS1/23 model documentation backed by automated monitoring, not paper. GLACIS provides the infrastructure to generate that evidence continuously.

Consumer Duty evidence

Continuous attestation captures AI outputs across all four Consumer Duty outcomes. When the FCA asks how good customer outcomes are achieved through AI-driven decisions, the answer is a timestamped record of what the AI recommended and what safeguards triggered — already prepared for the Mills Review’s expected sharpening of expectations.

SM&CR accountability records

Link AI governance to Senior Management Functions. Evidence packs show which controls were in place when decisions were made — supporting reasonable-steps defence and the 2026 supervisory tightening on aggregate model-risk understanding.

SS1/23 model monitoring

PRA SS1/23 Principle 5 requires real-time monitoring and risk mitigants. The September 2025 supervisory reset turned that into a hard expectation: automated monitoring, not paper. GLACIS provides continuous observation of AI model behaviour with cryptographic evidence of when guardrails engaged — closing the "technology gap" the PRA is calling out.
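A tamper-evident log of this kind can be sketched as a hash chain, where each entry commits to its predecessor. GLACIS's actual implementation is not public, so the record fields and chaining scheme below are assumptions for illustration.

```python
import hashlib
import json
import time

class EvidenceLog:
    """Append-only log where each entry hashes its predecessor, so altering
    any past entry breaks the chain (illustrative sketch)."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, model_id, event, detail):
        entry = {
            "ts": time.time(),   # when the guardrail engaged
            "model_id": model_id,
            "event": event,      # e.g. "guardrail_triggered"
            "detail": detail,
            "prev_hash": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Re-derive every hash; returns False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each record embeds the previous record's hash, a reviewer can re-verify the whole chain and detect retrospective edits, which is what makes such evidence credible for the timestamped Consumer Duty records and Principle 5 monitoring described above.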

Mapping GLACIS to FCA and PRA requirements

  • Consumer Duty (good outcomes): audit trail of AI recommendations vs actual customer outcomes; evidence for MI reporting and a Mills Review-aligned narrative.
  • SS1/23 model inventory: automatic cataloguing of AI models in scope with metadata, version history and use cases.
  • SS1/23 independent validation: evidence packages structured for internal model validation teams or external reviewers.
  • SS1/23 automated monitoring (Principle 5, post-Sept 2025): continuous observation of AI behaviour with cryptographic evidence of guardrail triggers, closing the technology gap the PRA flagged.
  • SM&CR reasonable steps: control attestation records demonstrating that senior managers took reasonable steps.
  • DUAA ADM rights: individual decision retrieval for DSAR and contestation requests, with evidence of meaningful human review; section 103 from 19 June 2026.
  • BoE FPC agentic-AI focus: receipt-grade record of agentic AI behaviour in payments and markets, pre-positioning for the Bank's further work.

Build the receipts before Mills lands and SS1/23 audits

A tailored assessment of AI governance against FCA Consumer Duty, SM&CR and PRA SS1/23 — with the Mills Review and the September 2025 supervisory reset already in scope.

Get a Runtime Security Assessment →
