Explainable AI in the regulatory context is not the same problem as explainable AI in the model development context. When a model developer uses SHAP or LIME to debug feature contributions, they are solving an engineering problem. When a regulated organisation must explain an AI system to the FDA, the CFPB, a banking examiner, or the EU AI Act conformity assessment body, they are solving a documentation and communication problem. The tools overlap but the requirements are different.
What Different Regulators Actually Require
FDA SaMD submissions for AI-based medical devices require a description of the algorithm architecture, the training methodology, the validation dataset, and the performance characteristics that a non-specialist reviewer can evaluate to determine whether the device is safe and effective for its intended use. FDA reviewers are not expected to run SHAP analyses. The submission must explain the model in a structured document using the terminology of the FDA's AI/ML SaMD action plan and the device-specific predicate or De Novo record.
SR 11-7 model documentation requires conceptual soundness evidence: a demonstration that the model's theoretical basis is appropriate for the intended use, that the variables are relevant and supported by business logic, and that the modelling methodology is consistent with accepted practice for the application. A conceptual soundness memorandum for a gradient-boosted fraud detection model must explain why the chosen features predict fraud, why gradient boosting is an appropriate method for this problem, and what the model's known limitations are. Feature importance plots may illustrate the explanation but do not substitute for the written analysis.
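One practical discipline behind such a memorandum is ensuring that every model input has a written business rationale before the document is submitted. A minimal sketch of that coverage check, using illustrative feature names and rationales rather than any real model:

```python
# Sketch: checking that a conceptual soundness memo covers every model input.
# Feature names and rationale text are illustrative assumptions.

model_features = ["txn_amount", "merchant_category",
                  "time_since_last_txn", "device_age_days"]

memo_rationales = {
    "txn_amount": "Larger transactions carry higher fraud loss exposure; "
                  "monotone risk relationship documented in portfolio analysis.",
    "merchant_category": "Certain merchant categories show elevated "
                         "historical fraud rates.",
    "time_since_last_txn": "Rapid successive transactions are a known "
                           "account-takeover pattern.",
}

# Any feature the model uses but the memo does not justify is a
# documentation gap an examiner will find.
undocumented = [f for f in model_features if f not in memo_rationales]
print("Features lacking a conceptual soundness rationale:", undocumented)
```

The point is that the rationale is prose tied to business logic, not a bar chart; the code only verifies that no input escapes the written analysis.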
EU AI Act Transparency Requirements
EU AI Act Article 13 requires that high-risk AI systems be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Article 14 requires human oversight measures that enable the natural persons overseeing the system to understand the capabilities and limitations of the AI system. These requirements are operational, not just documentary: the deployed system must produce interpretable outputs in real time.
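One way to meet the operational half of this requirement is for the scoring path itself to return reason codes alongside the score, so the deployer can interpret each individual output. A minimal sketch for a logistic model, with illustrative coefficients and feature names:

```python
import math

# Sketch: a scoring function that emits an interpretable output in real
# time alongside the score. Coefficients, intercept, and feature names
# are illustrative placeholders, not a real model.

COEFFICIENTS = {"credit_utilisation": 1.8,
                "missed_payments": 2.4,
                "account_age_years": -0.6}
INTERCEPT = -3.0

def score_with_reasons(features: dict) -> dict:
    # Per-feature contribution of a logistic model: coefficient * value.
    contributions = {name: COEFFICIENTS[name] * features[name]
                     for name in COEFFICIENTS}
    logit = INTERCEPT + sum(contributions.values())
    probability = 1 / (1 + math.exp(-logit))
    # Reason codes: features ranked by how much they pushed the score up.
    reasons = sorted(contributions, key=contributions.get, reverse=True)
    return {"probability": probability, "reason_codes": reasons}

result = score_with_reasons({"credit_utilisation": 0.9,
                             "missed_payments": 2,
                             "account_age_years": 4})
```

Because the contributions are exact for a linear model, the reason codes are not an approximation of the system's behaviour; they are the behaviour, which is what makes the output straightforward to defend under Articles 13 and 14.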
The technical documentation required for EU AI Act conformity assessments includes a general description of the AI system and its intended purpose, a description of the design and development process including training data, a description of the monitoring and control measures, and test and validation results. This documentation must be kept up to date throughout the system's lifecycle and made available to national competent authorities on request.
Intrinsically Interpretable Models vs Post-Hoc Explanation
The clearest path to regulatory explainability is choosing an intrinsically interpretable model architecture where the regulatory risk is high and performance requirements can be met by a simpler model. Logistic regression with documented feature engineering, decision trees with explicit depth constraints, and rule-based models all produce explanations that are inherent to the model rather than approximate. For credit scoring, clinical risk stratification, and insurance pricing in jurisdictions with strict explainability requirements, an intrinsically interpretable model may be the only architecture that satisfies regulatory documentation standards without additional explanation infrastructure.
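A rule-based model makes the inherent-explanation property concrete: the explanation for any decision is simply the rule that fired. A minimal sketch, with thresholds and rule text as illustrative assumptions:

```python
# Sketch: a rule-based decision model whose explanation IS the model --
# no post-hoc approximation layer. Thresholds and wording are
# illustrative assumptions, not real credit policy.

RULES = [
    (lambda a: a["debt_to_income"] > 0.45,
     "decline", "Debt-to-income ratio exceeds 45% policy limit"),
    (lambda a: a["missed_payments_12m"] >= 3,
     "decline", "Three or more missed payments in the last 12 months"),
    (lambda a: True,
     "approve", "No decline rule triggered"),
]

def decide(applicant: dict) -> tuple:
    for condition, outcome, explanation in RULES:
        if condition(applicant):
            # The fired rule doubles as the audit record for this decision.
            return outcome, explanation

outcome, explanation = decide({"debt_to_income": 0.52,
                               "missed_payments_12m": 1})
```

Documenting this model for a regulator means documenting the rules themselves, which is exactly the written analysis the examiner wants.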
Where complex models are required, post-hoc explanation must be documented carefully. The documentation must acknowledge that SHAP, LIME, or other post-hoc methods produce approximations, describe the methodology and its limitations, and demonstrate that the approximation is sufficiently accurate for the intended compliance use. Regulators are increasingly sophisticated about post-hoc explanation limitations and will probe the quality of the approximation during examination.
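Demonstrating approximation quality can itself be made quantitative. One sketch of a fidelity check, in the spirit of LIME's local surrogates: fit a local linear approximation to a nonlinear model around one decision and report the R-squared between surrogate and model predictions on perturbed inputs. The black-box function and perturbation radius here are illustrative assumptions:

```python
import math
import random

# Sketch: measuring post-hoc explanation fidelity. A local linear
# surrogate approximates a nonlinear "black-box" model near one input,
# and R-squared on perturbed samples quantifies how faithful it is.

def black_box(x, y):
    return math.tanh(2 * x) + 0.5 * x * y  # stand-in for a complex model

def local_linear_surrogate(x0, y0, eps=1e-4):
    # Finite-difference gradient yields the local linear approximation.
    f0 = black_box(x0, y0)
    gx = (black_box(x0 + eps, y0) - f0) / eps
    gy = (black_box(x0, y0 + eps) - f0) / eps
    return lambda x, y: f0 + gx * (x - x0) + gy * (y - y0)

def fidelity_r2(x0, y0, radius=0.1, n=500, seed=0):
    rng = random.Random(seed)
    surrogate = local_linear_surrogate(x0, y0)
    pts = [(x0 + rng.uniform(-radius, radius),
            y0 + rng.uniform(-radius, radius)) for _ in range(n)]
    truth = [black_box(x, y) for x, y in pts]
    approx = [surrogate(x, y) for x, y in pts]
    mean = sum(truth) / n
    ss_res = sum((t - a) ** 2 for t, a in zip(truth, approx))
    ss_tot = sum((t - mean) ** 2 for t in truth)
    return 1 - ss_res / ss_tot

r2 = fidelity_r2(0.3, 0.5)
```

A fidelity figure of this kind, computed per decision region and reported with its methodology and limitations, is the sort of evidence an examiner probing the approximation will expect to see.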
Building the Regulatory Documentation Package
A regulatory submission package for an AI system requires a structured technical document that addresses: the problem the model is solving and why AI is appropriate; the data used for training and validation including provenance and representativeness; the model architecture and its suitability for the application; the evaluation methodology and results including edge case performance; the explanation methodology and its limitations; and the ongoing monitoring plan. This document is written for regulators, not engineers. The language, structure, and level of technical detail must be calibrated to the audience. Most engineering teams require support from regulatory affairs professionals to produce this document correctly.
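The section list above lends itself to a mechanical completeness check before the package goes to regulatory affairs for review. A minimal sketch, where the draft headings are illustrative:

```python
# Sketch: completeness check for the submission package outline.
# REQUIRED_SECTIONS mirrors the structure described above; the draft
# headings are illustrative assumptions.

REQUIRED_SECTIONS = [
    "Problem statement and suitability of AI",
    "Training and validation data provenance",
    "Model architecture and suitability",
    "Evaluation methodology and edge-case results",
    "Explanation methodology and limitations",
    "Ongoing monitoring plan",
]

draft_headings = {
    "Problem statement and suitability of AI",
    "Model architecture and suitability",
    "Evaluation methodology and edge-case results",
    "Ongoing monitoring plan",
}

missing = [s for s in REQUIRED_SECTIONS if s not in draft_headings]
print("Sections still to be drafted:", missing)
```

The check catches structural gaps only; whether each section is written at the right level for a regulatory reader still requires the regulatory affairs review the paragraph above describes.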
The engineering behind this article is available as a service.
We have done this work — not advised on it, not reviewed documentation about it. If the problem in this article is your problem, the first call is with a senior engineer who has solved it.