The EU AI Act's high-risk obligations take effect on August 2, 2026, a date fixed since the Act was published in the EU Official Journal in July 2024. What is new is that engineering teams are moving from reading summaries to implementing requirements, and the gap between the policy summary and the engineering requirement is larger than most teams expected.
Annex III defines eight categories of high-risk AI systems. The engineering obligation is not to read the categories and conclude that your system is or is not high-risk — it is to conduct a documented assessment with legal input that produces a defensible classification. The European AI Office has published guidance on the Annex III categories, but significant interpretation questions remain unresolved, particularly for AI systems at the boundary of categories.
Annex III: The Eight Categories in Engineering Terms
The eight Annex III categories in engineering terms: (1) biometric identification systems, including real-time remote biometric identification; (2) critical infrastructure management AI — systems managing road traffic, water, gas, heating, electricity supply; (3) education and vocational training AI that determines access or assesses learners; (4) employment AI used for recruitment, selection, promotion, termination, or performance monitoring; (5) essential public services AI — credit scoring, benefits eligibility, emergency response dispatch; (6) law enforcement AI — risk assessment of individuals, crime prediction; (7) migration and asylum AI — risk assessment, document verification; (8) justice and democratic processes AI — legal research, judicial decisions.
The categories most likely to affect our clients: employment AI (HR tech platforms), essential public services AI (fintech credit decisioning, insurtech underwriting), and the medical devices crossover for clinical AI, which sits at the intersection of AI Act Annex III and the Medical Device Regulation. Note that general-purpose AI is not an Annex III category: GPAI models are regulated separately under Chapter V of the Act, and when a GPAI model is integrated into a high-risk system, the two sets of obligations interact in ways the Commission's guidelines are still clarifying.
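A documented classification assessment can be as simple as a structured record that forces the rationale and the legal sign-off to exist before anyone calls the system "not high-risk". A minimal Python sketch; the class and method names (`ClassificationAssessment`, `is_defensible`) are illustrative conventions, not anything the Act prescribes:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class AnnexIIICategory(Enum):
    """The eight Annex III high-risk categories (paraphrased labels)."""
    BIOMETRIC_IDENTIFICATION = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_TRAINING = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM = 7
    JUSTICE_DEMOCRACY = 8


@dataclass
class ClassificationAssessment:
    system_name: str
    assessed_on: date
    candidate_categories: list[AnnexIIICategory]
    rationale: str        # why the system does or does not fall in each candidate category
    legal_reviewer: str   # named counsel who signed off, not just engineering
    is_high_risk: bool

    def is_defensible(self) -> bool:
        # A classification is defensible only if it is documented AND legally reviewed.
        return bool(self.rationale.strip()) and bool(self.legal_reviewer.strip())
```

The point of the `is_defensible` check is procedural, not legal: an assessment with an empty rationale or no named reviewer should fail CI, the same way a missing test would.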
Article 12 of the EU AI Act requires that high-risk AI systems be designed and developed with capabilities enabling the automatic recording of events over the lifetime of the system. For remote biometric identification systems (Annex III, point 1(a)), Article 12(3) spells out the minimum events: the period of each use, the reference database against which input data has been checked, the input data for which the search led to a match, and the identity of the natural persons involved in verifying the results. This is not application logging. It is a purpose-built audit record with sector-specific retention and tamper-evidence expectations.
Article 12 Logging: The Technical Specification
Article 12(1) requires high-risk AI systems to technically allow the automatic recording of events (logs) over their lifetime. Article 12(2) requires that logging enable traceability of the system's functioning appropriate to its intended purpose. Article 12(3) sets the minimum for remote biometric identification systems: recording of the period of each use; the reference database against which input data has been checked; the input data for which the search led to a match; and the identification of the natural persons involved in verifying the results under Article 14(5). The Act never uses the word "tamper-evidence", but a log that can be silently edited cannot demonstrate the traceability Article 12(2) demands, and the sector record-keeping rules the system already operates under (MiFID II and EMIR in financial services, GDPR and national health-records law in healthcare) impose their own integrity and retention obligations that the log design must satisfy.
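One common way to get tamper evidence without special infrastructure is an append-only log with a hash chain: each record commits to the hash of the previous record, so editing any stored record breaks verification. A minimal sketch using the Article 12(3) biometric fields; the class and field names are hypothetical, not a prescribed schema:

```python
import hashlib
import json
from datetime import datetime, timezone


class Article12AuditLog:
    """Append-only audit log with a SHA-256 hash chain for tamper evidence."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, use_period: str, reference_db: str,
               input_ref: str, verifier_ids: list[str]) -> dict:
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "use_period": use_period,        # Article 12(3): period of each use
            "reference_db": reference_db,    # Article 12(3): reference database checked
            "input_ref": input_ref,          # a reference to the input, not the raw data
            "verifier_ids": verifier_ids,    # Article 12(3): persons verifying the result
            "prev_hash": self._last_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)
        self._last_hash = record["hash"]
        return record

    def verify_chain(self) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if r["prev_hash"] != prev:
                return False
            if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

A hash chain by itself does not stop an attacker who can rewrite the whole store and recompute every hash; in production the chain head would be anchored periodically to separate, access-controlled storage (or a write-once medium) so full recomputation is detectable.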
Article 14: Human Oversight That Actually Works
Article 14 requires that high-risk AI systems be designed so that natural persons can effectively oversee them during use. Article 14(4) requires that oversight measures enable those persons to properly understand the relevant capacities and limitations of the system and to duly monitor its operation. An override button that users click without understanding why they are overriding does not satisfy Article 14. The UI/UX design of human oversight mechanisms is a compliance requirement, not a product decision.
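One way to make oversight effective rather than a rubber stamp is to refuse any override that arrives without a documented reason and an explicit acknowledgement that the operator reviewed the system's stated limitations. A sketch; the function and field names here are illustrative, not taken from the Act:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class OverrideRecord:
    operator_id: str
    system_output: str
    reason: str
    acknowledged_limitations: bool
    timestamp: str


def record_override(operator_id: str, system_output: str,
                    reason: str, acknowledged_limitations: bool) -> OverrideRecord:
    """Reject 'blind' overrides: a documented reason and an explicit
    acknowledgement of the system's limitations are both required."""
    if not reason.strip():
        raise ValueError("Override requires a documented reason (Article 14).")
    if not acknowledged_limitations:
        raise ValueError("Operator must confirm review of system limitations.")
    return OverrideRecord(operator_id, system_output, reason.strip(),
                          True, datetime.now(timezone.utc).isoformat())
```

In a real UI this maps to a required free-text field plus a checkbox gated on the limitations panel having been displayed; the returned record would feed the same audit store as the Article 12 log.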
Technical Documentation: Article 11
Article 11 and Annex IV define technical documentation requirements. The documentation must include: a general description of the system; a detailed description of its elements and development process; information about monitoring, functioning, and control; a description of the risk management system (Article 9); information on data governance (Article 10); a description of human oversight measures (Article 14); a description of performance metrics, their limitations, and validation. This is a living document — it must be updated for each substantial modification.
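Treating the Annex IV file as a living document is easier if the documentation itself tracks completeness and versions, so a substantial modification cannot land without a version bump. A hypothetical sketch; the section keys paraphrase Annex IV and are not a prescribed schema:

```python
from dataclasses import dataclass, field

# Paraphrased Annex IV headings (illustrative keys, not official names)
ANNEX_IV_SECTIONS = [
    "general_description",
    "elements_and_development_process",
    "monitoring_functioning_control",
    "risk_management_system",              # Article 9
    "data_governance",                     # Article 10
    "human_oversight_measures",            # Article 14
    "performance_metrics_and_validation",
]


@dataclass
class TechnicalDocumentation:
    version: int = 1
    sections: dict = field(default_factory=dict)

    def missing_sections(self) -> list[str]:
        """Sections still empty -- a release gate can require this to be []."""
        return [s for s in ANNEX_IV_SECTIONS if not self.sections.get(s)]

    def record_substantial_modification(self, section: str, new_text: str) -> None:
        """A substantial modification must update the doc and bump its version."""
        self.sections[section] = new_text
        self.version += 1
```

Wiring `missing_sections()` into the release pipeline is one way to enforce "draft before production": the system cannot ship while the list is non-empty.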
The Engineering Checklist
- Run a documented Annex III classification assessment with legal input — not just an engineering judgment
- Design the Article 12 audit log as a separate service from application logging — purpose-built, tamper-evident, sector-compliant retention
- Build Article 14 human oversight into the system UX as a compliance requirement — not a product feature
- Draft Article 11 technical documentation before the system goes into production — it cannot be retrofitted accurately
- Implement a risk management system per Article 9 that runs as an ongoing process, not a pre-launch exercise
- Register the system in the EU AI Act database before August 2, 2026 for any new high-risk deployment
The engineering behind this article is available as a service.
We have done this work — not advised on it, not reviewed documentation about it. If the problem in this article is your problem, the first call is with a senior engineer who has solved it.