The Algorithm · AI in Regulated Industries · Healthcare · 12 min read · 2026-03-27

HIPAA, FDA SaMD, and AI: The Three-Way Compliance Collision

Regulatory frameworks that converge on clinical AI — HIPAA, FDA SaMD, and EU AI Act
AI-powered clinical decision support sits at the intersection of three compliance frameworks: HIPAA governs PHI handling, FDA SaMD classification applies if the system influences clinical decisions, and EU AI Act Annex III captures many clinical AI use cases. Most organizations address one framework and assume coverage of the others. This article describes the architecture that satisfies all three without treating each as a separate compliance exercise.

AI-powered clinical decision support sits at the intersection of three compliance frameworks, each with different enforcement authorities, different technical requirements, and different penalties for non-compliance. HIPAA governs how the system handles protected health information — training data, inference inputs, and outputs containing PHI. FDA SaMD classification applies if the software is intended to diagnose, cure, mitigate, treat, or prevent disease. EU AI Act Annex III captures AI used for medical diagnoses and prognoses and AI used to assist in clinical decisions affecting patient safety.

The compliance collision occurs because organizations approach these frameworks sequentially. The architecture-first approach addresses all three simultaneously, before the first line of production code is written.

FDA SaMD Classification: Is Your System a Device?

The FDA's Digital Health Center of Excellence has published guidance on when clinical decision support software qualifies as SaMD. The critical distinction comes from the 21st Century Cures Act: CDS software is not a device if the basis for its recommendations is transparent to the provider and the provider can independently review it. If the system produces a clinical recommendation without exposing the clinical rationale in a form the provider can independently review, it is likely to be classified as SaMD. An LLM-based CDS system that produces a differential diagnosis without an interpretable chain of clinical reasoning is in SaMD territory, regardless of how it is marketed.
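The "independently reviewable basis" distinction can be made concrete in the output contract of the CDS system itself. A minimal sketch, assuming a hypothetical recommendation payload (the class and field names below are illustrative, not from FDA guidance): the system refuses to treat a recommendation as reviewable unless it carries both a clinical rationale and the source evidence behind it.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """One piece of source evidence a provider can independently review."""
    source: str     # e.g. a record or literature reference
    excerpt: str    # the relevant passage
    relevance: str  # why this supports the recommendation

@dataclass
class CDSRecommendation:
    """A recommendation whose basis is meant to be transparent to the provider."""
    recommendation: str
    clinical_rationale: str  # the reasoning, stated in clinical terms
    evidence: list[EvidenceItem] = field(default_factory=list)

    def is_independently_reviewable(self) -> bool:
        # Hypothetical check: a recommendation without rationale and
        # sources pushes the system toward SaMD classification.
        return bool(self.clinical_rationale) and len(self.evidence) > 0

rec = CDSRecommendation(
    recommendation="Consider echocardiogram",
    clinical_rationale="New-onset dyspnea with elevated BNP suggests a heart failure workup.",
    evidence=[EvidenceItem("2023-11 lab panel", "BNP 850 pg/mL", "Elevated BNP")],
)
```

Enforcing this contract at the API boundary means a non-reviewable output is a build-time or runtime failure, not a post-hoc regulatory discovery.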

The Engineering Reality

The FDA's enforcement discretion policy for low-risk CDS means that not every clinical AI deployment requires a 510(k) or De Novo submission. But the enforcement discretion boundary is not static — the FDA has been progressively narrowing it as AI-based CDS becomes more prevalent. Building an architecture that could support a 510(k) submission if required — complete training data documentation, validated performance metrics, documented intended use — is materially cheaper to do from the start than to retrofit after an FDA inquiry.
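"Architecting for 510(k) readiness" largely means capturing provenance artifacts as a structured record from day one. A minimal sketch, assuming a hypothetical record shape (the field names are illustrative, not an FDA submission template):

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class SubmissionReadinessRecord:
    """Sketch of the documentation a 510(k) submission would draw on."""
    intended_use: str
    model_version: str
    training_data_sources: tuple   # provenance of each training dataset
    training_data_hash: str        # fingerprint of the frozen training manifest
    validation_dataset: str
    validation_metrics: tuple      # (metric name, value) pairs

def fingerprint(payload: bytes) -> str:
    """Content hash so the documented dataset is verifiably the one used."""
    return hashlib.sha256(payload).hexdigest()

record = SubmissionReadinessRecord(
    intended_use="Adjunctive CDS for sepsis risk; provider retains decision authority",
    model_version="1.4.2",
    training_data_sources=("site-A EHR extract 2021-2023",),
    training_data_hash=fingerprint(b"frozen training manifest v1"),
    validation_dataset="held-out multi-site cohort",
    validation_metrics=(("sensitivity", 0.91), ("specificity", 0.87)),
)
print(json.dumps(asdict(record), indent=2))
```

Freezing the record per model version (rather than reconstructing it later from scattered notebooks) is what makes an eventual submission cheap instead of forensic.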

HIPAA PHI in AI Systems: Training and Inference

HIPAA's Privacy Rule minimum necessary standard applies to PHI used in AI training. Using patient records to train a clinical AI model requires either patient authorization or a waiver of authorization under the Privacy Rule's research provisions (45 CFR 164.512(i)). De-identification must satisfy either the HIPAA Safe Harbor standard (removing all 18 identifier categories) or the Expert Determination method (a qualified expert's documented determination that the re-identification risk is very small; the rule sets no numeric threshold). The failure mode: using "de-identified" data for training where the de-identification was done informally, without Safe Harbor compliance or expert determination.
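The structural core of Safe Harbor can be sketched as a field filter over patient records. This is a deliberately abbreviated sketch (HIPAA names 18 identifier categories; the set below is partial, and a production pipeline also needs free-text scrubbing, ZIP-code truncation, and review); the field names are hypothetical:

```python
# Abbreviated subset of the HIPAA Safe Harbor identifier categories.
SAFE_HARBOR_DROP = {
    "name", "street_address", "phone", "fax", "email", "ssn",
    "mrn", "health_plan_id", "account_number", "license_number",
    "vehicle_id", "device_id", "url", "ip_address", "biometric_id",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers; generalize dates and extreme ages."""
    out = {}
    for key, value in record.items():
        if key in SAFE_HARBOR_DROP:
            continue
        if key == "date_of_birth":
            out["birth_year"] = str(value)[:4]  # dates reduced to year
        elif key == "age" and int(value) > 89:
            out["age"] = "90+"                  # ages over 89 aggregated
        else:
            out[key] = value
    return out

patient = {"name": "Jane Doe", "mrn": "12345", "age": 93,
           "date_of_birth": "1932-05-14", "dx_code": "I50.9"}
clean = deidentify(patient)
```

The point of making this a single auditable function, rather than ad-hoc column drops in a notebook, is that Safe Harbor compliance becomes testable and reviewable per release.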

At inference time, the audit control requirement (§164.312(b)) applies to every PHI element the system accesses to produce its output. For an LLM-based system that retrieves patient records via RAG, the audit trail must capture which patient records were retrieved, what data elements were accessed, and what output was produced. The retrieval chain in a RAG architecture can access hundreds of PHI elements across multiple records in a single inference call — purpose-built audit logging is required.
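Purpose-built audit logging for a RAG inference call can be sketched as one structured entry per call, recording exactly the three things above: which records, which elements, what output. A minimal sketch with hypothetical names (not a reference implementation of §164.312(b)):

```python
import time
import uuid

class PHIAuditLog:
    """One audit entry per inference call in a RAG-based CDS system."""
    def __init__(self):
        self.entries = []

    def log_inference(self, user_id, patient_ids, elements_accessed, output_summary):
        entry = {
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "user_id": user_id,
            "patients": sorted(patient_ids),        # which patient records were retrieved
            "elements": sorted(elements_accessed),  # which PHI data elements were accessed
            "output_summary": output_summary,       # what output was produced
        }
        self.entries.append(entry)
        return entry

log = PHIAuditLog()
entry = log.log_inference(
    user_id="dr-smith",
    patient_ids={"pt-001", "pt-002"},
    elements_accessed={"pt-001:lab.bnp", "pt-002:note.discharge"},
    output_summary="differential diagnosis, 3 candidates",
)
```

In a real system the element identifiers would be emitted by the retriever itself, so that nothing the model saw can escape the audit trail.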

EU AI Act Annex III for Clinical AI

Annex III Category 5(a) captures AI systems intended as a safety component of a medical device or as a medical device in itself. Category 5(b) captures AI systems intended for making treatment decisions. Both apply to clinical decision support AI. The Article 9-15 obligations apply in full: risk management system, data governance for training data, technical documentation, Article 12 logging, Article 14 human oversight, and accuracy/robustness requirements. The EU AI Act's accuracy requirement aligns with FDA's SaMD performance validation requirement — both require validation against a clinically representative dataset, not just a benchmark dataset.
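One practical reading of "clinically representative, not just a benchmark" is that performance must be checked per patient subgroup, not only in aggregate. A minimal sketch (subgroup labels and data are illustrative):

```python
def subgroup_sensitivity(labels, preds, groups):
    """Sensitivity (true-positive rate) computed separately per subgroup,
    so a model that collapses on one cohort cannot hide in the average."""
    by_group = {}
    for y, p, g in zip(labels, preds, groups):
        tp, pos = by_group.get(g, (0, 0))
        by_group[g] = (tp + (y == 1 and p == 1), pos + (y == 1))
    return {g: tp / pos for g, (tp, pos) in by_group.items() if pos}

# Toy validation set: two cohorts, A and B.
labels = [1, 1, 0, 1, 1, 0]
preds  = [1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]
sens = subgroup_sensitivity(labels, preds, groups)
# Cohort A: 1 of 2 positives caught; cohort B: 2 of 2.
```

A validation report that surfaces the gap between cohorts directly supports both the EU AI Act accuracy obligation and FDA performance validation with one artifact.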

  1. Conduct FDA CDS/SaMD classification analysis before architecture decisions — the outcome affects the entire system design
  2. If SaMD classification applies or is likely, architect for 510(k) readiness from the start — training data documentation, validated performance metrics
  3. Implement HIPAA-compliant de-identification for all training data using Safe Harbor or Expert Determination — not informal de-identification
  4. Build PHI access audit logging at the inference layer to capture retrieval chains in RAG architectures
  5. Implement Article 14-compliant human oversight UI — the clinical rationale must be transparent and independently reviewable
  6. Build Article 12 logging as a separate service from application logging — purpose-built, tamper-evident, sector-compliant retention
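The tamper-evidence property in step 6 is commonly achieved by hash-chaining log entries, so any retroactive edit breaks verification. A minimal sketch, assuming a hypothetical in-memory log (a production service would persist entries and anchor the chain externally):

```python
import hashlib
import json

class HashChainedLog:
    """Tamper-evident event log: each entry commits to the previous hash."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis hash

    def append(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev + payload).encode()).hexdigest()
        self.entries.append({"event": event, "prev": self._prev, "hash": digest})
        self._prev = digest
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if e["hash"] != hashlib.sha256((prev + payload).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = HashChainedLog()
log.append({"type": "inference", "model": "cds-1.4.2"})
log.append({"type": "override", "user": "dr-smith"})
```

Keeping this as a separate service from application logging, as step 6 prescribes, means retention policy and tamper-evidence are enforced in one place rather than per application.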