The Algorithm · Financial Services Engineering · 12 min read · 2025-01-02

AI in Fraud Detection: Model Risk Management Under SR 11-7

AI-based fraud detection models at US banks and credit unions are subject to the Federal Reserve's SR 11-7 model risk management guidance. Fraud models that influence account decisions — blocking transactions, flagging accounts, triggering reviews — are in scope for SR 11-7 regardless of whether the model risk management team has classified them as such. Validation requirements include conceptual soundness review, back-testing against historical fraud patterns, sensitivity analysis, and ongoing performance monitoring, including population stability analysis. The challenge for fraud ML teams is that the development cycle is fast — adversarial fraud patterns shift constantly — while SR 11-7 documentation requirements were designed for slow-moving statistical models, not iterative ML systems.

Fraud detection models at US banks are models in scope for SR 11-7 model risk management guidance, whether or not the model risk management team has classified them as such. The Federal Reserve defines a model as a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates. A gradient-boosted classifier that scores transactions for fraud probability satisfies this definition. SR 11-7 applies.

Why Fraud ML Teams Often Operate Outside SR 11-7

Fraud ML teams at banks frequently operate with engineering-team velocity norms that are inconsistent with SR 11-7 documentation and validation requirements. Fraud patterns shift constantly as adversaries adapt to countermeasures; retraining and redeploying models on weekly or monthly cycles is standard practice in fraud prevention. SR 11-7 was designed with slower-moving statistical models in mind -- credit scoring models that change infrequently, market risk models that are reviewed annually. The documentation and validation cycle that SR 11-7 implies does not map naturally onto the fraud ML development cadence.

The mismatch has led many fraud ML teams to either ignore SR 11-7 entirely or maintain documentation that is technically present but substantively incomplete. Bank examiners have become increasingly aware of this gap. Model risk examination findings related to fraud models are increasing as examiners extend their focus beyond the traditional scope of credit and market risk models.

Adapting SR 11-7 for High-Velocity Fraud Models

The practical reconciliation of SR 11-7 requirements with fraud ML development velocity requires tiered documentation and validation intensity based on model risk tier. Not every fraud model retrain requires a full independent validation. A model risk management framework for fraud ML should distinguish between:

- Major model changes — new architecture or new features — which require full SR 11-7 documentation and independent validation.
- Minor model changes, such as retraining on updated data with the same architecture, which require expedited validation focused on performance stability.
- Hot fixes, such as threshold adjustments within pre-approved ranges, which require documentation but not independent validation.

The pre-approved ranges for hot fixes must be established during the initial model validation. The validator must review and approve the change control framework, not just the initial model. This approach satisfies the SR 11-7 requirement for independent validation while enabling the operational velocity that fraud prevention requires.
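The tiering logic above can be made explicit in the change-control tooling itself. Below is a minimal sketch; the `ModelChange` fields, tier names, and the `PRE_APPROVED_RANGES` table are illustrative assumptions, and a real framework would source the approved ranges from the validator-signed change-control document.

```python
from dataclasses import dataclass, field
from enum import Enum

class ValidationTier(Enum):
    FULL = "full independent validation"
    EXPEDITED = "expedited performance-stability review"
    DOCUMENT_ONLY = "documented, no independent validation"

# Hypothetical ranges approved by the validator during initial validation.
PRE_APPROVED_RANGES = {"score_cutoff": (0.70, 0.90)}

@dataclass
class ModelChange:
    architecture_changed: bool
    features_changed: bool
    retrain_only: bool
    threshold_updates: dict = field(default_factory=dict)  # name -> new value

def classify_change(change: ModelChange) -> ValidationTier:
    """Map a fraud-model change to a validation tier."""
    # Major change: new architecture or feature set.
    if change.architecture_changed or change.features_changed:
        return ValidationTier.FULL
    # Hot fix: allowed only if every threshold stays in its pre-approved range.
    if change.threshold_updates and not change.retrain_only:
        for name, value in change.threshold_updates.items():
            lo, hi = PRE_APPROVED_RANGES.get(name, (None, None))
            if lo is None or not (lo <= value <= hi):
                return ValidationTier.FULL  # out-of-range change escalates
        return ValidationTier.DOCUMENT_ONLY
    # Minor change: same architecture, retrained on fresh data.
    if change.retrain_only:
        return ValidationTier.EXPEDITED
    return ValidationTier.FULL  # default to the most conservative tier
```

Note the escalation behavior: a threshold adjustment outside its pre-approved range is not rejected silently, it is promoted to full validation, which keeps the hot-fix path from becoming a loophole.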

Conceptual Soundness for Fraud ML

SR 11-7's conceptual soundness requirement for fraud models must address the adversarial nature of the fraud domain. Unlike credit risk models where underlying causal relationships are relatively stable, fraud models are deployed against an adversary that actively adapts to the model. Conceptual soundness documentation must explain why the chosen features and architecture are expected to detect current fraud patterns, what the model's known blind spots are, and what monitoring indicators will signal that the adversary has adapted in ways that reduce model effectiveness.
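One concrete monitoring indicator for adversary adaptation is the population stability index (PSI) over the model's score distribution: when fraudsters shift tactics, the incoming score distribution drifts away from the validation baseline. A minimal sketch, with the conventional rule-of-thumb thresholds noted in the docstring:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a recent window.

    Bins are quantiles of the baseline. A common rule of thumb reads
    PSI < 0.10 as stable, 0.10-0.25 as moderate shift, and > 0.25 as a
    significant shift warranting investigation.
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full score range
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Clip to avoid log(0) when a bin is empty in one population.
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

PSI alone cannot distinguish benign population drift from adversarial adaptation, which is why the conceptual soundness document should pair it with fraud-confirmation outcome metrics.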

Disparate Impact in Fraud Detection

Fraud detection models that influence account decisions -- blocking transactions, restricting account access, triggering enhanced verification -- create a fair lending and consumer protection compliance obligation that sits alongside the SR 11-7 requirement. A fraud model that flags transactions from certain demographic groups at materially higher rates than would be expected given actual fraud rates creates a disparate impact that may violate ECOA, the Fair Housing Act, or UDAP standards. Disparate impact testing must be part of the fraud model validation and ongoing monitoring framework.
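A basic screening test computes the flag rate per demographic group and the ratio of each group's rate to the lowest-flagged group. The sketch below assumes a flat DataFrame of decisions; column names are illustrative, and the four-fifths heuristic mentioned in the docstring is a screening device borrowed from employment testing, not a legal threshold.

```python
import pandas as pd

def flag_rate_disparity(df, group_col="group", flag_col="flagged"):
    """Per-group flag rates and each group's ratio to the lowest rate.

    A ratio well above 1 for a protected group, unexplained by actual
    fraud rates, is a disparate-impact red flag (the four-fifths rule
    is one common screening heuristic).
    """
    rates = df.groupby(group_col)[flag_col].mean()
    return rates, rates / rates.min()
```

Any disparity this surfaces still has to be tested against confirmed fraud rates per group before concluding there is disparate impact, which is exactly why this belongs in the ongoing monitoring framework rather than a one-time check.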

Audit Trail for Fraud Decisions

Every fraud model decision that affects a consumer -- a blocked transaction, a declined application, a triggered review -- must be logged with the model version, the input features, the score, and the decision rule that translated the score into an action. This audit trail satisfies both SR 11-7 outcomes analysis requirements and the adverse action notice requirements that apply when a fraud-related decision harms a consumer. The outcomes analysis connection -- linking fraud model scores to subsequent confirmed fraud or false positive resolution -- is what enables the ongoing performance monitoring that SR 11-7 requires for high-risk models.
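The audit record described above can be sketched as a single structured log entry. Field names here are illustrative assumptions; the essential property is that model version, inputs, score, and the score-to-action rule are captured together so any decision can be reconstructed for outcomes analysis or an adverse action inquiry.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_fraud_decision(model_version, features, score, rule_id, action):
    """Build an audit record for a single fraud model decision.

    Captures everything needed to reconstruct the decision later:
    which model version ran, on what inputs, what it scored, and
    which decision rule turned the score into an action.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "score": score,
        "decision_rule": rule_id,
        "action": action,
    }
    payload = json.dumps(record, sort_keys=True)
    # Content hash lets downstream outcomes analysis detect tampering.
    record["record_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return record
```

Joining these records against subsequent confirmed-fraud or false-positive resolutions is the outcomes-analysis link SR 11-7 expects for high-risk models.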
