"AI governance" is on every board agenda. When a board resolution states that the organization will "implement AI governance," it requires someone to translate that resolution into an engineering program. The problem: there are three distinct AI governance frameworks that different stakeholders mean when they say "AI governance" — and they have different scopes, different obligations, and different engineering implications.
ISO/IEC 42001 (published December 2023) is an AI management system standard. The NIST AI Risk Management Framework (published January 2023) is a voluntary framework for managing AI risk. The EU AI Act (entered into force August 2024) is binding law with direct legal obligations. These are not interchangeable, and the board may have intended any of them, a combination, or none of them specifically.
ISO/IEC 42001: The Management System Standard
ISO 42001 is structured like ISO 27001 (information security) and ISO 9001 (quality management): it specifies requirements for an AI management system — an organizational framework for managing the development and use of AI systems in a responsible manner. It is certifiable — organizations can achieve ISO 42001 certification through an accredited certification body.
The engineering implications of ISO 42001 are primarily process implications: the standard requires that organizations identify their AI systems, assess their impacts, establish policies and controls, and maintain an AI management system. Annex A of ISO 42001 provides a control set analogous to ISO 27001 Annex A. These controls include: establishing an AI policy, conducting AI impact assessments, implementing controls for AI system development, and maintaining records of AI system lifecycle activities.
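To make that process footprint concrete, here is a minimal sketch of an AI system inventory record with impact-assessment and lifecycle fields, written in Python. The schema, field names, and status values are our own illustrative assumptions; ISO 42001 requires that such records exist but does not prescribe any particular format.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class LifecycleStage(Enum):
    DESIGN = "design"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    RETIREMENT = "retirement"


@dataclass
class AISystemRecord:
    """One entry in an AI system inventory kept under an ISO 42001-style AIMS.

    Illustrative schema only: ISO 42001 requires that AI systems are
    identified, their impacts assessed, and lifecycle records maintained,
    but it does not prescribe these fields.
    """
    system_id: str
    owner: str
    purpose: str
    lifecycle_stage: LifecycleStage
    impact_assessment_completed: bool = False
    impact_assessment_date: date | None = None
    applicable_controls: list[str] = field(default_factory=list)  # internal control references
    evidence_links: list[str] = field(default_factory=list)       # audit evidence for certification


# Example entry (hypothetical system and dates).
inventory = [
    AISystemRecord(
        system_id="recsys-001",
        owner="ml-platform",
        purpose="Product recommendation ranking",
        lifecycle_stage=LifecycleStage.DEPLOYMENT,
        impact_assessment_completed=True,
        impact_assessment_date=date(2024, 11, 5),
        applicable_controls=["AI policy", "impact assessment", "lifecycle logging"],
    ),
]
```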
ISO 42001 does not specify what the controls must technically implement — it specifies that controls must be established and that their effectiveness must be evaluated. It is a framework for how you manage AI, not a specification of what your AI systems must do.
ISO 42001 certification tells the market that your organization has a structured AI management system. It does not tell regulators that your AI systems satisfy legal requirements. An organization can be ISO 42001 certified and still have AI systems that violate the EU AI Act — because ISO 42001 doesn't address legal risk categorization, conformity assessment, or the specific technical requirements of the Act.
NIST AI RMF: The Risk Vocabulary
The NIST AI RMF is structured around four functions: GOVERN, MAP, MEASURE, and MANAGE. It is explicitly voluntary and is intended to provide a common vocabulary for AI risk management rather than specific compliance requirements. The framework describes practices for each function — GOVERN practices cover organizational culture and structure; MAP practices cover context, risk identification, and categorization; MEASURE practices cover evaluation and monitoring; MANAGE practices cover response, recovery, and improvement.
The engineering implications of NIST AI RMF are primarily analytical: the framework provides a structured way to identify, assess, and document AI risks. It does not specify technical controls. Organizations using NIST AI RMF typically produce AI risk assessments and risk management documentation, not specific technical implementations. It is most useful as a shared language for AI risk conversations between technical and non-technical stakeholders.
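As a sketch of how the four functions can structure a single risk register entry, the dictionary below tags each part of an entry with the function it serves. The framework supplies the vocabulary; the schema, keys, and example values here are our own assumptions.

```python
# Illustrative risk register entry organized by the four NIST AI RMF
# functions. The function names come from the framework; the schema and
# example content are assumptions about how a team might record a risk.
risk_entry = {
    "risk_id": "AIR-0042",           # hypothetical identifier
    "system_id": "recsys-001",
    "map": {                         # MAP: context, risk identification, categorization
        "context": "consumer-facing recommendation ranking",
        "risk": "performance disparity across user segments",
        "category": "harmful bias",
    },
    "measure": {                     # MEASURE: evaluation and monitoring
        "metric": "ranking quality delta across segments",
        "threshold": 0.05,
        "cadence": "monthly",
    },
    "manage": {                      # MANAGE: response, recovery, improvement
        "response": "retrain with reweighted data if threshold is exceeded",
        "owner": "ml-platform",
        "status": "monitoring",
    },
    "govern": {                      # GOVERN: culture, roles, accountability
        "policy_reference": "internal AI risk policy v2",
        "review_body": "AI risk committee",
    },
}
```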
EU AI Act: The Binding Law
The EU AI Act has direct technical requirements for high-risk AI systems (Articles 9-15), transparency requirements for limited-risk systems (Article 50), and general-purpose AI model obligations (Articles 51-55). These are not management system requirements or a risk vocabulary; they are legal requirements with specified technical implementations. An AI system in a high-risk category must have a risk management system per Article 9, data governance per Article 10, technical documentation per Article 11, logging per Article 12, transparency per Article 13, human oversight per Article 14, and accuracy, robustness, and cybersecurity per Article 15.
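One way to track that engineering footprint per system is a per-article checklist. In the sketch below, the article numbers and topics come from the Act; the deliverable descriptions, status values, and helper function are our own assumptions about how a team might track them, not language from the regulation.

```python
# Illustrative per-article tracking for one high-risk AI system under
# EU AI Act Articles 9-15. Article numbers and topics are from the Act;
# deliverables and statuses are assumptions about engineering practice.
HIGH_RISK_REQUIREMENTS = {
    "Article 9":  {"topic": "risk management system",         "deliverable": "documented, iterative risk process"},
    "Article 10": {"topic": "data and data governance",       "deliverable": "dataset provenance and quality criteria"},
    "Article 11": {"topic": "technical documentation",        "deliverable": "maintained documentation package"},
    "Article 12": {"topic": "record-keeping (logging)",       "deliverable": "automatic event logging over the system lifetime"},
    "Article 13": {"topic": "transparency to deployers",      "deliverable": "instructions for use, capability limits"},
    "Article 14": {"topic": "human oversight",                "deliverable": "oversight measures and override paths"},
    "Article 15": {"topic": "accuracy, robustness, security", "deliverable": "declared metrics and resilience testing"},
}


def open_items(status_by_article: dict[str, str]) -> list[str]:
    """Return articles whose tracked status is not 'done' (illustrative helper)."""
    return [a for a in HIGH_RISK_REQUIREMENTS if status_by_article.get(a) != "done"]


# Example: only Articles 9 and 12 are complete for this hypothetical system.
print(open_items({"Article 9": "done", "Article 12": "done"}))
```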
The Framework Mapping
- Identify which framework(s) your organization is actually obligated to follow — the EU AI Act if you have high-risk AI systems in EU scope; local equivalents if applicable
- Treat ISO 42001 as a management system framework — useful for organization and process, not for regulatory compliance
- Use NIST AI RMF as a risk vocabulary — useful for structuring AI risk assessment and board reporting, not for technical control implementation
- Map your EU AI Act obligations first — these are the legally binding requirements
- Align your ISO 42001 management system structure with your EU AI Act compliance program — the two are complementary, not duplicative; a crosswalk like the sketch after this list is one way to keep that alignment explicit
- Use NIST AI RMF MAP function outputs as input to your EU AI Act risk classification exercise
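One lightweight way to maintain that alignment is a crosswalk that treats each EU AI Act obligation as the primary key and records which management system control and which NIST AI RMF function support it. The sketch below is illustrative: the control descriptions are our own shorthand, and neither ISO nor NIST publishes a mapping in this form.

```python
# Illustrative crosswalk keyed on EU AI Act obligations. Control names and
# function tags are our own shorthand, not an official mapping.
CROSSWALK = [
    {"eu_ai_act": "Article 9 risk management system",
     "iso_42001": "AI impact assessment and risk treatment controls",
     "nist_ai_rmf": ["MAP", "MANAGE"]},
    {"eu_ai_act": "Article 10 data governance",
     "iso_42001": "data-for-AI controls",
     "nist_ai_rmf": ["MAP", "MEASURE"]},
    {"eu_ai_act": "Article 12 logging",
     "iso_42001": "AI system lifecycle record-keeping controls",
     "nist_ai_rmf": ["MEASURE"]},
    {"eu_ai_act": "Article 14 human oversight",
     "iso_42001": "AI system use and oversight controls",
     "nist_ai_rmf": ["GOVERN", "MANAGE"]},
]


def obligations_without_iso_control(rows: list[dict]) -> list[str]:
    """Flag legal obligations not yet backed by a management system control."""
    return [r["eu_ai_act"] for r in rows if not r.get("iso_42001")]
```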
Our AI platform engineering, compliance infrastructure, and regulatory intelligence services provide integrated AI governance programs that address all three frameworks in a coherent architecture. The entry point is always the same question: which framework creates legal obligations for this organization, and what does it require at the engineering level?
The engineering behind this article is available as a service.
We have done this work — not advised on it, not reviewed documentation about it. If the problem in this article is your problem, the first call is with a senior engineer who has solved it.