The Algorithm · AI in Regulated Industries · Financial Services · 13 min read · 2026-02-26

Building AI Systems for FCA-Regulated Financial Services: The Engineering Checklist

SS1/23
PRA Supervisory Statement on model risk management — published April 2023
The FCA's Consumer Duty (PS22/9) and the PRA's model risk management guidance (SS1/23) create specific obligations for AI systems deployed in financial services — and those obligations are not satisfied by the kinds of explainability methods that appear in ML research. A feature importance score from SHAP does not satisfy a regulatory requirement to explain why a customer was denied credit in terms the customer can understand and act upon. What follows is the engineering checklist for building AI systems that satisfy both FCA and PRA expectations.

The FCA's Consumer Duty (PS22/9, effective July 2023) and the PRA's model risk management Supervisory Statement (SS1/23, effective May 2024) together define the regulatory expectations for AI systems in UK financial services. These two documents are not fully aligned — they approach AI governance from different angles, and satisfying both requires understanding the distinct obligations each creates.

Consumer Duty focuses on outcomes: firms must ensure that their AI systems deliver good outcomes for retail customers, that customers can understand why decisions were made, and that firms can evidence how their AI systems comply with the duty. SS1/23 focuses on model risk: firms must identify, measure, manage, and monitor model risk, with specific obligations for model development, validation, and deployment lifecycle management.

What "Explainability" Actually Means to the FCA

Consumer Duty's Consumer Understanding outcome (Chapter 8 of the FCA's PS22/9) requires that firms communicate in a way that customers can understand, and that the information they receive enables them to make informed decisions. For an AI system that makes or informs credit decisions, insurance pricing, or investment recommendations, explainability means: a customer who receives an adverse outcome must receive an explanation that (a) is accurate (reflects why the model actually made the decision), (b) is actionable (tells the customer what they could do differently), and (c) is understandable (doesn't require ML literacy to interpret).

SHAP values, LIME explanations, and feature importance scores satisfy the first criterion but typically fail the second and third. An explanation that says "your credit score was the most important factor in this decision" is accurate (if it reflects the model's actual reasoning) but not actionable (the customer can't change their credit score in the short term) and not understandable (what specifically about the credit score was determinative?). The FCA's supervisory expectation is for explanations that address the specific circumstances of the specific customer, not generic feature importance summaries.
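The gap between feature importance and an actionable, customer-specific explanation can be made concrete. The sketch below assumes a simple additive scorecard-style model; the feature names, weights, threshold, and the "actionable" flags are illustrative inventions, not taken from any real lending model. The point it demonstrates is structural: an explanation system must know which factors a customer can actually change, and phrase the adverse reasons in terms of those factors.

```python
# Sketch: turning an adverse model decision into customer-specific,
# actionable reasons. All weights, names, and thresholds are illustrative.

APPROVAL_THRESHOLD = 0.6

# (feature, weight, actionable?) -- "actionable" marks factors a customer
# can realistically change in the short term.
SCORECARD = [
    ("credit_utilisation", -0.5, True),    # fraction of credit limit in use
    ("missed_payments_12m", -0.2, True),
    ("credit_history_years", 0.05, False), # not changeable in the short term
]

def score(features: dict) -> float:
    """Additive scorecard: base rate plus weighted feature contributions."""
    return 0.5 + sum(w * features[name] for name, w, _ in SCORECARD)

def explain_adverse_decision(features: dict) -> list[str]:
    """Customer-facing reasons, restricted to negative *and* actionable factors."""
    reasons = []
    for name, weight, actionable in SCORECARD:
        if weight * features[name] < 0 and actionable:
            reasons.append(
                f"Reducing your {name.replace('_', ' ')} would improve this outcome."
            )
    return reasons

customer = {"credit_utilisation": 0.9, "missed_payments_12m": 2,
            "credit_history_years": 3}
if score(customer) < APPROVAL_THRESHOLD:
    for reason in explain_adverse_decision(customer):
        print(reason)
```

Note the design choice: the non-actionable factor (credit history length) contributes to the score but is deliberately excluded from the customer-facing reasons, because "have a longer credit history" is not something the customer can act on.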

The Engineering Reality

The FCA's Consumer Duty creates a "reason why" obligation for adverse AI decisions that is qualitatively different from GDPR's "meaningful information about the logic involved" obligation (Articles 13–15, in connection with Article 22 automated decision-making). GDPR requires information about the logic involved; Consumer Duty requires an explanation that enables the customer to understand and potentially challenge the decision. These are different technical requirements: GDPR can be satisfied with a model documentation approach, while Consumer Duty requires a customer-facing explanation system.

SS1/23: The Model Risk Management Framework

The PRA's SS1/23 requires a proportionate model risk management framework with five components: model identification and classification, governance, model development, independent model validation, and model use and performance monitoring. The implementation gap most firms face is in independent validation: SS1/23 requires that model validation is performed by personnel independent of model development, with documented challenge of model design, assumptions, and outputs.

For AI/ML models, the independence requirement means the validation function cannot be the same team that built the model. For smaller firms that don't have a separate model risk function, this creates an organizational challenge. The PRA's solution: proportionate application of the framework, with smaller firms able to use simpler validation approaches — but the validation must still be independent and documented.

The Engineering Checklist

  1. Implement a model inventory covering all models in scope for SS1/23 — including models used in customer-facing decisions and those used in risk management
  2. Build customer-facing explanation generation into the decision workflow — not as a post-hoc documentation exercise, but as a system output
  3. Implement model performance monitoring with business metric drift detection — not just statistical drift metrics
  4. Establish model change governance: define what constitutes a "material change" requiring re-validation, and document the approval process for such changes
  5. Document model limitations and out-of-scope use cases — SS1/23 requires that model users understand the conditions under which the model is valid
  6. Build an audit trail for model decisions that satisfies both FCA Consumer Duty evidence requirements and PRA model risk documentation requirements
  7. Implement human override capabilities for AI decisions with a documented governance process — Consumer Duty and SS1/23 both require human oversight of automated decisions

Our AI platform engineering practice has built FCA-compliant AI decision systems for consumer credit and insurance pricing. The architecture pattern: decision logging at the model inference layer, explanation generation as a separate service, and model performance monitoring integrated with the firm's conduct risk framework.
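Checklist item 3's distinction between statistical drift and business-metric drift can be sketched simply. The example below monitors a model's production approval rate against the rate documented at validation time; the baseline, window, and tolerance values are illustrative assumptions, and a real implementation would feed the alert into the conduct risk framework rather than returning a boolean.

```python
# Sketch: business-metric drift detection -- alerting on the approval rate
# the model produces in production, not just on input-distribution drift.
# The 5-point tolerance and 70% baseline are illustrative, not prescribed.

def approval_rate(decisions: list[bool]) -> float:
    """Fraction of approvals in a window of production decisions."""
    return sum(decisions) / len(decisions)

def business_drift_alert(baseline_rate: float,
                         recent_decisions: list[bool],
                         tolerance: float = 0.05) -> bool:
    """Alert when the production approval rate moves more than `tolerance`
    (absolute) from the rate documented at model validation."""
    return abs(approval_rate(recent_decisions) - baseline_rate) > tolerance

# Baseline documented at validation: 70% approvals.
recent = [True] * 55 + [False] * 45   # production window: 55% approvals
print(business_drift_alert(0.70, recent))  # 15-point move exceeds tolerance -> True
```

A model can pass every statistical drift test on its input features and still fail this check, which is precisely why SS1/23-style monitoring needs business metrics alongside distributional ones.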
