Clinical AI is the most active area of healthcare technology investment, and the regulatory landscape around it is the most actively contested. The core question for every clinical AI product — whether the software is a regulated medical device or an exempt clinical decision support tool — determines whether the development process requires FDA engagement, and how extensive that engagement must be. Getting this classification wrong in either direction is costly: treating a regulated device as exempt risks enforcement action, while treating an exempt tool as a device wastes development resources and delays market entry.
The 21st Century Cures Act of 2016 amended the Federal Food, Drug, and Cosmetic Act to exclude certain CDS software from the definition of a medical device. FDA issued final guidance on this exclusion in September 2022. Understanding the four-factor test at the centre of that guidance — and engineering your product to satisfy it — is the entry point for every clinical AI development programme.
The Four-Factor CDS Exemption Test
Section 520(o)(1)(E) of the FDCA exempts software that meets all four of the following conditions: (1) it is not intended to acquire, process, or analyse a medical image, a signal from an in vitro diagnostic device, or a pattern or signal from a signal acquisition system; (2) it is intended to display, analyse, or print medical information about a patient or other medical information, such as peer-reviewed clinical studies and clinical practice guidelines; (3) it is intended to support or provide recommendations to a health care professional about prevention, diagnosis, or treatment of a disease or condition; and (4) it enables the health care professional to independently review the basis for the recommendations, so that the professional does not rely primarily on those recommendations to make a clinical diagnosis or treatment decision.
Factor four is where most clinical AI products fail the exemption test. A CDS tool qualifies only if the clinician can independently review the basis for its recommendations — meaning the software must present the underlying clinical logic, data sources, and reasoning in a form the clinician can evaluate, not just a recommendation or score. A clinical AI that outputs a high-risk score without showing the vital sign thresholds, lab values, and logic that produced that output does not satisfy factor four. The clinician cannot independently review what they cannot see.
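Factor four has a direct engineering consequence: the recommendation payload must carry its own basis. A minimal sketch, with hypothetical field names and an invented sepsis-screening rule, of an output structure that surfaces the inputs, thresholds, and clinical logic alongside the recommendation rather than a bare score:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """One reviewable input behind a recommendation."""
    source: str        # e.g. "EHR vitals feed"
    measurement: str   # e.g. "lactate"
    value: float
    threshold: str     # the rule this value triggered

@dataclass
class CdsRecommendation:
    """A recommendation plus the basis a clinician can independently review."""
    summary: str
    rationale: str                                       # plain-language clinical logic
    evidence: list[EvidenceItem] = field(default_factory=list)
    references: list[str] = field(default_factory=list)  # e.g. guideline citations

    def is_reviewable(self) -> bool:
        # A bare score with no rationale or evidence fails factor four.
        return bool(self.rationale) and bool(self.evidence)

rec = CdsRecommendation(
    summary="Consider sepsis work-up",
    rationale="Lactate above 2.0 mmol/L together with heart rate above 100 bpm "
              "matches the screening rule in the configured protocol.",
    evidence=[
        EvidenceItem("EHR vitals feed", "heart_rate", 112.0, "> 100 bpm"),
        EvidenceItem("Lab interface", "lactate", 3.1, "> 2.0 mmol/L"),
    ],
    references=["Example guideline citation"],
)
```

The design point is structural: if the rationale and evidence fields are optional and empty, the output is exactly the opaque score that fails the test.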
When CDS Becomes SaMD: The IMDRF Risk Classification Framework
Software that does not satisfy all four CDS exemption conditions is Software as a Medical Device (SaMD). FDA applies the IMDRF SaMD risk framework to determine the regulatory pathway. IMDRF classifies SaMD along two dimensions: the significance of the information the software provides (treat or diagnose, drive clinical management, or inform clinical management) and the state of the healthcare situation or condition (critical, serious, or non-serious).
A software function whose output is used to treat or diagnose a critical condition, for example an AI that generates a radiation therapy treatment plan for cancer, is IMDRF Category IV, the highest risk tier, requiring the most rigorous regulatory pathway. A software function that informs clinical management of a non-serious condition is Category I and may be eligible for enforcement discretion. Most clinical AI products for acute care settings fall in Categories II through IV and require either 510(k) clearance or De Novo classification.
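The two dimensions combine into a fixed matrix. A small sketch encoding the categorisation as commonly summarised from the IMDRF framework, with illustrative shorthand labels:

```python
# IMDRF SaMD risk categorisation: (significance, healthcare state) -> category.
# Labels are illustrative shorthand for the framework's terms.
IMDRF_CATEGORY = {
    ("treat_or_diagnose", "critical"):    "IV",
    ("treat_or_diagnose", "serious"):     "III",
    ("treat_or_diagnose", "non_serious"): "II",
    ("drive",             "critical"):    "III",
    ("drive",             "serious"):     "II",
    ("drive",             "non_serious"): "I",
    ("inform",            "critical"):    "II",
    ("inform",            "serious"):     "I",
    ("inform",            "non_serious"): "I",
}

def samd_category(significance: str, state: str) -> str:
    """Look up the IMDRF category for a software function."""
    return IMDRF_CATEGORY[(significance, state)]
```

Used this way, a treatment-planning function for cancer resolves to `samd_category("treat_or_diagnose", "critical")`, Category IV.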
The Pre-Submission Programme allows sponsors to request FDA feedback on regulatory strategy, study design, and submission content before filing. Engaging FDA through a Pre-Sub before beginning pivotal clinical validation studies is the single most effective way to avoid a deficiency letter requesting additional data mid-review.
Engineering the 510(k) Technical File
A 510(k) submission for a clinical AI product must demonstrate substantial equivalence to a predicate device. The technical file includes device description, substantial equivalence comparison, performance testing data, and — for AI/ML-enabled devices — the AI/ML transparency documentation described in FDA's 2021 action plan. This includes a description of the training data, model architecture, performance metrics across demographic subgroups, and plans for monitoring and updating the algorithm post-clearance.
Software documentation must conform to IEC 62304 software lifecycle requirements. IEC 62304 classifies software units as Safety Class A (no injury possible), B (non-serious injury possible), or C (serious injury or death possible) and requires commensurate documentation rigour for each class. Most clinical AI components are Class B or C, requiring unit-level test specifications, integration test protocols, and software hazard analysis documentation.
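As an illustrative approximation only (the standard's clause-level requirements are more detailed and should be consulted directly), the documentation burden scales with safety class roughly as follows:

```python
# Rough, illustrative approximation of how IEC 62304 artefact requirements
# scale by safety class. Not a substitute for the standard itself.
BASE_ARTEFACTS = {
    "software development plan",
    "software requirements",
    "configuration management records",
    "problem resolution records",
}
CLASS_B_EXTRA = {
    "software architecture",
    "unit verification records",
    "integration test protocols",
}
CLASS_C_EXTRA = {
    "detailed design of software units",
}

def required_artefacts(safety_class: str) -> set[str]:
    """Return the illustrative artefact set for safety class A, B, or C."""
    artefacts = set(BASE_ARTEFACTS)
    if safety_class in ("B", "C"):
        artefacts |= CLASS_B_EXTRA
    if safety_class == "C":
        artefacts |= CLASS_C_EXTRA
    return artefacts
```

The practical point is the monotone ordering: every artefact required at Class A is required at B, and every B artefact at C, so classifying a component too low is discovered as missing documentation at audit time.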
Cybersecurity documentation per FDA's 2023 guidance must accompany every premarket submission. This includes a Software Bill of Materials, security risk assessment, and description of the security architecture. The SBOM must be maintained post-clearance and updated when third-party components receive security patches.
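A minimal sketch of what maintaining the SBOM post-clearance can look like, using illustrative CycloneDX-style fields; a real submission would be generated and kept current by dedicated SBOM tooling rather than by hand:

```python
import json

# Minimal CycloneDX-style SBOM sketch. Fields and the patch helper are
# illustrative; production SBOMs come from dedicated tooling.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "numpy",
            "version": "1.26.4",
            "purl": "pkg:pypi/numpy@1.26.4",
        },
    ],
}

def patch_component(bom: dict, name: str, new_version: str) -> None:
    """Record a post-clearance security patch to a third-party component."""
    for comp in bom["components"]:
        if comp["name"] == name:
            comp["version"] = new_version
            comp["purl"] = f"pkg:pypi/{name}@{new_version}"

# A third-party component receives a security patch after clearance:
patch_component(sbom, "numpy", "1.26.5")
print(json.dumps(sbom, indent=2))
```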
The Predetermined Change Control Plan for Adaptive AI
AI models that learn continuously or are retrained on new data create a regulatory challenge: each significant change to a cleared device's algorithm may require a new 510(k) submission. FDA's answer, first proposed in its 2019 AI/ML discussion paper and advanced in the 2021 action plan and subsequent guidance, is the Predetermined Change Control Plan: a document submitted with the initial clearance that describes the types of algorithm changes the manufacturer plans to make and the performance testing protocols that will govern those changes. FDA reviews and authorises the PCCP as part of the initial clearance, allowing subsequent changes within the PCCP's scope to proceed without a new submission.
Designing a PCCP requires engineering teams and regulatory affairs to agree the algorithm update strategy before initial submission. Changes outside the PCCP's scope require a new submission, so the boundary of that scope is a design constraint that must be addressed at the architecture level.
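That boundary can be made executable as a release gate in the retraining pipeline. A sketch, with hypothetical criteria standing in for whatever scope conditions and acceptance thresholds a cleared PCCP actually specifies:

```python
from dataclasses import dataclass

@dataclass
class ProposedChange:
    """A proposed algorithm update, checked against a cleared PCCP.
    All criteria here are hypothetical examples."""
    retrain_only: bool            # weights updated, architecture unchanged
    new_input_features: bool      # model consumes inputs not previously cleared
    new_intended_use: bool        # output used for a new clinical purpose
    auroc_on_locked_test_set: float

# Hypothetical acceptance threshold agreed in the cleared PCCP.
PCCP_MIN_AUROC = 0.85

def within_pccp_scope(change: ProposedChange) -> bool:
    """Gate: changes to intended use or model inputs fall outside a typical
    PCCP and trigger a new submission; retraining within the agreed
    performance bound proceeds under the approved protocol."""
    if change.new_intended_use or change.new_input_features:
        return False
    return change.retrain_only and change.auroc_on_locked_test_set >= PCCP_MIN_AUROC
```

Wiring a check like this into CI makes the regulatory boundary a build failure rather than a post-hoc regulatory affairs discovery.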
Real-World Performance Monitoring as a Regulatory Obligation
FDA's postmarket expectations for AI/ML-enabled devices include real-world performance monitoring — ongoing evaluation of whether the algorithm performs in clinical practice as it performed in validation studies. This is not an informal internal quality programme. It is an expectation that FDA will examine during postmarket surveillance inspections. The monitoring infrastructure — prospective performance data collection, demographic subgroup analysis, drift detection, and safety signal identification — must be designed before go-live.
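A sketch of one such check: comparing deployed discrimination per demographic subgroup against the validation baseline, with illustrative subgroups, baseline values, and drift margin:

```python
# Illustrative real-world performance check: flag any subgroup whose deployed
# AUROC falls more than a pre-specified margin below its validation baseline.
# Subgroups, baselines, and the margin are example values only.
VALIDATION_BASELINE = {"overall": 0.91, "age_over_65": 0.89, "female": 0.90}
DRIFT_MARGIN = 0.05

def drift_signals(deployed_auroc: dict[str, float]) -> list[str]:
    """Return the subgroups whose deployed performance has drifted."""
    return [
        group
        for group, baseline in VALIDATION_BASELINE.items()
        if deployed_auroc.get(group, 0.0) < baseline - DRIFT_MARGIN
    ]

signals = drift_signals({"overall": 0.90, "age_over_65": 0.82, "female": 0.89})
# age_over_65 dropped from 0.89 to 0.82, beyond the 0.05 margin, so it is flagged.
```

A production version would run prospectively against adjudicated outcomes and feed a safety-signal workflow; the point of the sketch is that the baselines and margins are fixed design inputs, not parameters improvised after go-live.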
The Algorithm Approach: Regulatory Engineering from Sprint Zero
The Algorithm embeds regulatory engineering into clinical AI development programmes from the first sprint. We conduct CDS exemption analysis and IMDRF risk classification before a single line of production code is written, so development teams understand their regulatory constraints as design inputs rather than post-development surprises. For products entering the SaMD pathway, we build IEC 62304 documentation artefacts in parallel with the software development process — producing the technical file as a by-product of good engineering practice, not as a separate documentation project that happens after the product is built.
The engineering behind this article is available as a service.
We have done this work — not advised on it, not reviewed documentation about it. If the problem in this article is your problem, the first call is with a senior engineer who has solved it.