The Algorithm
Compliance Engineering · Cross-Industry · 11 min read · 2024-12-29

Explainable AI for Regulatory Submissions: What Regulators Actually Require

Explainable AI for regulatory submissions is a different engineering problem from explainable AI for model debugging. FDA SaMD submissions, SR 11-7 model documentation, and EU AI Act Article 13 transparency requirements each demand structured evidence that a reviewer without ML expertise can evaluate: algorithm architecture, training methodology, validation datasets, conceptual soundness, and documentation sufficient for meaningful human oversight. SHAP plots and attention visualisations serve engineering teams; regulatory explainability documentation requires structured technical writing that connects model behaviour to regulatory risk.

Explainable AI in the regulatory context is not the same problem as explainable AI in the model development context. When a model developer uses SHAP or LIME to debug feature contributions, they are solving an engineering problem. When a regulated organisation must explain an AI system to the FDA, the CFPB, a banking examiner, or the EU AI Act conformity assessment body, they are solving a documentation and communication problem. The tools overlap but the requirements are different.

What Different Regulators Actually Require

FDA SaMD submissions for AI-based medical devices require a description of the algorithm architecture, the training methodology, the validation dataset, and the performance characteristics that a non-specialist reviewer can evaluate to determine whether the device is safe and effective for its intended use. FDA reviewers are not expected to run SHAP analyses. The submission must explain the model in a structured document using the terminology of the FDA's AI/ML SaMD action plan and the device-specific predicate or De Novo record.

SR 11-7 model documentation requires conceptual soundness evidence: a demonstration that the model's theoretical basis is appropriate for the intended use, that the variables are relevant and supported by business logic, and that the modelling methodology is consistent with accepted practice for the application. A conceptual soundness memorandum for a gradient-boosted fraud detection model must explain why the chosen features predict fraud, why gradient boosting is an appropriate method for this problem, and what the model's known limitations are. Feature importance plots may illustrate the explanation but do not substitute for the written analysis.
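As the paragraph notes, feature importance plots illustrate but do not replace the written analysis. A minimal sketch of producing such an illustrative exhibit for a gradient-boosted fraud model follows; the feature names and the synthetic data are hypothetical assumptions, not part of any regulatory template.

```python
# Sketch: impurity-based feature importances as an illustrative exhibit
# for an SR 11-7 conceptual soundness memorandum. Feature names and
# training data are hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["txn_amount", "merchant_risk_score", "account_age_days",
            "velocity_24h", "geo_mismatch"]

X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)

# Pair each hypothetical feature with its importance, sorted for the
# memorandum's exhibit; the written rationale for each feature still
# has to be argued in prose.
ranked = sorted(zip(FEATURES, model.feature_importances_),
                key=lambda kv: kv[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.3f}")
```

Note that impurity-based importances have known biases (e.g. toward high-cardinality features), which is exactly the kind of limitation the memorandum must state alongside the plot.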

EU AI Act Transparency Requirements

EU AI Act Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Article 14 requires human oversight measures that enable the natural persons overseeing the system to understand the capabilities and limitations of the AI system. These requirements are operational, not just documentary: the deployed system must produce interpretable outputs in real time.
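Because Articles 13 and 14 are operational requirements, each prediction the deployed system emits needs interpretive context attached at inference time. A minimal sketch of one way to structure such an output follows; the field names and summary format are illustrative assumptions, not an AI Act-mandated schema.

```python
# Sketch: pairing each prediction with the context a deployer needs to
# exercise Article 13/14 oversight. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class ExplainedOutput:
    prediction: str                 # the system's output
    confidence: float               # calibrated probability in [0, 1]
    top_factors: list[str]          # human-readable contributing factors
    known_limitations: list[str] = field(default_factory=list)

    def oversight_summary(self) -> str:
        """One-line summary a human overseer can act on in real time."""
        factors = ", ".join(self.top_factors)
        return (f"{self.prediction} ({self.confidence:.0%} confidence); "
                f"driven by: {factors}")

out = ExplainedOutput(
    prediction="elevated risk",
    confidence=0.87,
    top_factors=["velocity_24h above threshold", "geo mismatch"],
    known_limitations=["not validated for corporate accounts"],
)
print(out.oversight_summary())
```

Surfacing `known_limitations` per output, rather than only in the technical documentation, is one way to make the "capabilities and limitations" requirement of Article 14 operational rather than purely documentary.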

The technical documentation required for EU AI Act conformity assessments includes a general description of the AI system and its intended purpose, a description of the design and development process including training data, a description of the monitoring and control measures, and test and validation results. This documentation must be kept up to date throughout the system's lifecycle and made available to national competent authorities on request.

Intrinsically Interpretable Models vs Post-Hoc Explanation

The clearest path to regulatory explainability is choosing an intrinsically interpretable model architecture where the regulatory risk is high and performance requirements can be met by a simpler model. Logistic regression with documented feature engineering, decision trees with explicit depth constraints, and rule-based models all produce explanations that are inherent to the model rather than approximate. For credit scoring, clinical risk stratification, and insurance pricing in jurisdictions with strict explainability requirements, intrinsically interpretable models may be the only architecture that satisfies regulatory documentation standards without additional explanation infrastructure.
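For an intrinsically interpretable model, the explanation can be read directly off the fitted parameters. A minimal sketch of coefficient-based per-decision reason codes for a logistic regression follows; the feature names, data, and `reason_codes` helper are hypothetical.

```python
# Sketch: per-applicant reason codes read directly from logistic
# regression coefficients -- the explanation is inherent, not a
# post-hoc approximation. Names and data are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["debt_to_income", "utilisation", "delinquencies", "tenure_months"]

X, y = make_classification(n_samples=400, n_features=4, n_informative=3,
                           n_redundant=1, random_state=1)
X = StandardScaler().fit_transform(X)  # comparable coefficient scales
model = LogisticRegression().fit(X, y)

def reason_codes(x, top_n=2):
    """Top contributions to one decision: coefficient times feature value."""
    contributions = model.coef_[0] * x
    order = np.argsort(np.abs(contributions))[::-1]
    return [(FEATURES[i], float(contributions[i])) for i in order[:top_n]]

print(reason_codes(X[0]))
```

Standardising the inputs first is what makes the coefficient magnitudes comparable across features; that preprocessing choice itself belongs in the documented feature engineering.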

Where complex models are required, post-hoc explanation must be documented carefully. The documentation must acknowledge that SHAP, LIME, or other post-hoc methods produce approximations, describe the methodology and its limitations, and demonstrate that the approximation is sufficiently accurate for the intended compliance use. Regulators are increasingly sophisticated about post-hoc explanation limitations and will probe the quality of the approximation during examination.
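"Demonstrate that the approximation is sufficiently accurate" implies a measured fidelity figure, not just a caveat. A minimal sketch of quantifying how well a global linear surrogate tracks a complex model's scores follows; the models and data are hypothetical, and a real package would also assess local fidelity in the regions where decisions are contested.

```python
# Sketch: measuring post-hoc surrogate fidelity so the documented
# limitation is a number rather than a disclaimer. Setup is hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

X, y = make_classification(n_samples=600, n_features=6, random_state=2)
complex_model = GradientBoostingClassifier(random_state=2).fit(X, y)

# Surrogate: a linear model fitted to the complex model's scores.
scores = complex_model.predict_proba(X)[:, 1]
surrogate = LinearRegression().fit(X, scores)
fidelity = r2_score(scores, surrogate.predict(X))

# The fidelity figure goes in the documentation next to the explanation
# methodology and its known failure modes.
print(f"surrogate fidelity (R^2): {fidelity:.3f}")
```

An examiner probing the approximation quality can then be pointed at a reproducible measurement rather than an assertion.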

Building the Regulatory Documentation Package

A regulatory submission package for an AI system requires a structured technical document that addresses: the problem the model is solving and why AI is appropriate; the data used for training and validation including provenance and representativeness; the model architecture and its suitability for the application; the evaluation methodology and results including edge case performance; the explanation methodology and its limitations; and the ongoing monitoring plan. This document is written for regulators, not engineers. The language, structure, and level of technical detail must be calibrated to the audience. Most engineering teams require support from regulatory affairs professionals to produce this document correctly.
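The section list above lends itself to a completeness gate in the documentation pipeline. A minimal sketch follows; the section names mirror the paragraph, but the checklist structure and `missing_sections` helper are assumptions, not any regulator's template.

```python
# Sketch: the submission package sections as a checklist with a
# completeness gate. Structure is an assumption, not a regulator's form.
REQUIRED_SECTIONS = [
    "problem statement and suitability of AI",
    "training and validation data (provenance, representativeness)",
    "model architecture and suitability",
    "evaluation methodology and results (incl. edge cases)",
    "explanation methodology and its limitations",
    "ongoing monitoring plan",
]

def missing_sections(package: dict) -> list[str]:
    """Return required sections that are absent or empty in a draft."""
    return [s for s in REQUIRED_SECTIONS
            if not package.get(s, "").strip()]

draft = {
    "problem statement and suitability of AI": "Model triages ...",
    "model architecture and suitability": "Gradient boosting because ...",
}
print(missing_sections(draft))
```

A gate like this catches structural gaps early; it says nothing about whether each section's prose is calibrated to a regulatory reviewer, which remains a regulatory-affairs judgement.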
