AI Risk Framework

NIST AI RMF

The NIST Artificial Intelligence Risk Management Framework provides voluntary guidance for managing risks across the AI lifecycle — and is rapidly becoming the de facto standard for AI governance in regulated industries.

What You Need to Know

The NIST AI Risk Management Framework (AI RMF 1.0), published in January 2023, provides a structured approach to managing the risks of AI systems throughout their lifecycle. The framework organizes around four core functions: Govern (establishing organizational accountability and culture for AI risk), Map (categorizing AI context and identifying risks), Measure (analyzing and assessing AI risks), and Manage (prioritizing and treating identified risks). Unlike the EU AI Act, the AI RMF is voluntary — but it is increasingly referenced by regulators, included in government contracts, and adopted internally by organizations seeking to demonstrate AI governance maturity.
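The four core functions can be pictured as a simple checklist structure. The sketch below is purely illustrative — the class and field names are hypothetical, not part of any NIST schema; only the function names and their one-line descriptions come from the framework itself:

```python
from dataclasses import dataclass

# The AI RMF's four core functions (names and roles per AI RMF 1.0).
CORE_FUNCTIONS = {
    "Govern": "Establish accountability, policies, and risk tolerance for AI",
    "Map": "Categorize AI context and identify risks",
    "Measure": "Analyze and assess identified AI risks",
    "Manage": "Prioritize and treat risks per organizational tolerance",
}

@dataclass
class FunctionStatus:
    """Hypothetical tracker for how much of one function an org has covered."""
    function: str
    activities_complete: int = 0
    activities_total: int = 0

    @property
    def coverage(self) -> float:
        # Fraction of planned activities completed for this function.
        if self.activities_total == 0:
            return 0.0
        return self.activities_complete / self.activities_total

# Example: a program that has completed 3 of 4 activities per function.
program = [FunctionStatus(name, 3, 4) for name in CORE_FUNCTIONS]
```

A structure like this is one way an organization might track governance maturity per function; the framework itself prescribes outcomes, not a data model.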

The Govern function addresses organizational infrastructure for AI risk management: policies, roles, accountability structures, and risk tolerance definitions. The Map, Measure, and Manage functions are more technical — classifying AI systems by risk level, evaluating trustworthiness characteristics (accuracy, fairness, explainability, robustness, security, privacy), and implementing risk treatments. Together they create a lifecycle approach that applies from initial system conception through deployment and ongoing monitoring.
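The Measure-to-Manage handoff described above can be sketched as a scoring pass over trustworthiness characteristics. This is a minimal illustration, assuming hypothetical threshold values and metric definitions — the characteristic names follow the framework, but an actual program would define metrics and tolerances per system:

```python
# Hypothetical risk tolerances per trustworthiness characteristic.
# Values are illustrative only; real thresholds come from the Govern function.
THRESHOLDS = {
    "accuracy": 0.90,
    "fairness": 0.80,    # e.g. 1 - max demographic parity gap (assumed metric)
    "robustness": 0.85,
    "privacy": 0.75,
}

def measure(scores: dict) -> dict:
    """Measure function sketch: pass/fail per characteristic.

    Missing scores fail closed, since an unmeasured characteristic
    cannot be shown to meet tolerance.
    """
    return {c: scores.get(c, 0.0) >= t for c, t in THRESHOLDS.items()}

def needs_treatment(scores: dict) -> list:
    """Characteristics below tolerance, feeding the Manage function."""
    return [c for c, ok in measure(scores).items() if not ok]
```

For example, `needs_treatment({"accuracy": 0.93, "fairness": 0.70, "robustness": 0.90, "privacy": 0.80})` returns `["fairness"]`, flagging that characteristic for risk treatment.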

For organizations subject to the EU AI Act, financial services model risk management requirements (SR 11-7), or FDA guidance on AI/ML-based Software as a Medical Device (SaMD), the NIST AI RMF provides a compatible organizing structure. Building an AI governance program on the AI RMF's four-function structure creates a foundation that can be extended to satisfy jurisdiction-specific regulatory requirements without wholesale redesign.

How We Handle It

We implement NIST AI RMF-aligned governance for organizations building or deploying AI systems in regulated industries: establishing AI risk categorization processes, integrating trustworthiness measurement into the ML development lifecycle, and building the documentation and monitoring capabilities that satisfy both the AI RMF and overlapping frameworks such as the EU AI Act and FDA SaMD guidance.

Services

- AI Platform Engineering
- Compliance Infrastructure
- Regulatory Intelligence

Related Frameworks

- EU AI Act
- NIST
- ISO 27001
- FDA 21 CFR Part 11
DECISION GUIDE

Compliance-Native Architecture Guide

Design principles and a structured checklist for building software that is compliant by default — not compliant by retrofit. Covers data architecture, access controls, audit trails, and vendor due diligence.
