NIST AI RMF
The NIST Artificial Intelligence Risk Management Framework provides voluntary guidance for managing risks across the AI lifecycle, and it is increasingly the default reference point for AI governance in regulated industries.
The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a structured approach to managing the risks of AI systems throughout their lifecycle. The framework is organized around four core functions: Govern (establishing organizational accountability and culture for AI risk), Map (establishing the context of AI systems and identifying risks), Measure (analyzing and assessing those risks), and Manage (prioritizing and treating them). Unlike the EU AI Act, the AI RMF is voluntary, but it is increasingly referenced by regulators, included in government contracts, and adopted internally by organizations seeking to demonstrate AI governance maturity.
The Govern function addresses organizational infrastructure for AI risk management: policies, roles, accountability structures, and risk tolerance definitions. The Map, Measure, and Manage functions are more technical — classifying AI systems by risk level, evaluating trustworthiness characteristics (accuracy, fairness, explainability, robustness, security, privacy), and implementing risk treatments. Together they create a lifecycle approach that applies from initial system conception through deployment and ongoing monitoring.
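To make the Map, Measure, and Manage functions concrete, the sketch below models a hypothetical AI system inventory record in Python: the risk tier reflects a Map categorization, the trustworthiness measures and their tolerances reflect Measure outputs checked against Govern-defined risk tolerance, and the failing-measures check feeds Manage decisions. The tier names, fields, and thresholds are illustrative assumptions, not terminology or values prescribed by the AI RMF.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers assigned during Map (not prescribed by the AI RMF)."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class TrustworthinessMeasure:
    """One Measure-function result: a metric score and the tolerance set for it."""
    characteristic: str   # e.g. "fairness", "robustness"
    score: float          # measured value; higher is better in this sketch
    threshold: float      # minimum acceptable value from the organization's risk tolerance


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry tying one AI system to the AI RMF functions."""
    name: str
    intended_use: str                    # Map: documented context of use
    risk_tier: RiskTier                  # Map: categorization outcome
    measures: list[TrustworthinessMeasure] = field(default_factory=list)  # Measure

    def failing_measures(self) -> list[TrustworthinessMeasure]:
        """Manage: surface characteristics that fall below tolerance and need treatment."""
        return [m for m in self.measures if m.score < m.threshold]


# Example usage with made-up numbers.
record = AISystemRecord(
    name="credit-scoring-model-v3",
    intended_use="consumer credit underwriting",
    risk_tier=RiskTier.HIGH,
    measures=[
        TrustworthinessMeasure("accuracy", score=0.91, threshold=0.90),
        TrustworthinessMeasure("fairness", score=0.78, threshold=0.85),
    ],
)
for m in record.failing_measures():
    print(f"{record.name}: '{m.characteristic}' below tolerance "
          f"({m.score:.2f} < {m.threshold:.2f}); route to risk treatment")
```

Keeping measured scores and their tolerances on the same system record is one way to generate the system-level documentation and monitoring evidence that the lifecycle approach described above calls for.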
For organizations subject to the EU AI Act, financial services model risk management requirements (SR 11-7), or FDA guidance on AI/ML-based Software as a Medical Device (SaMD), the NIST AI RMF provides a compatible organizing structure. Building an AI governance program on the AI RMF's four-function structure creates a foundation that can be extended to satisfy jurisdiction-specific regulatory requirements without wholesale redesign.
We implement NIST AI RMF-aligned governance for organizations building or deploying AI systems in regulated industries: establishing AI risk categorization processes, integrating trustworthiness measurement into the ML development lifecycle, and building the documentation and monitoring capabilities that satisfy both the AI RMF and overlapping frameworks such as the EU AI Act and FDA SaMD guidance.
Compliance-Native Architecture Guide
Design principles and a structured checklist for building software that is compliant by default — not compliant by retrofit. Covers data architecture, access controls, audit trails, and vendor due diligence.