The EU AI Act (Regulation (EU) 2024/1689) has a staggered enforcement timeline. Obligations for general-purpose AI models have applied since August 2, 2025. High-risk AI system obligations — the ones that matter most for engineering teams — become enforceable August 2, 2026. If your organisation deploys AI in any of the Annex III categories, you have roughly six months from publication of this article to have your conformity assessment process in place.
What "High-Risk" Actually Means
Annex III of the Act defines eight categories of high-risk AI systems. The ones most likely to affect our clients: AI systems used in employment and HR decisions (hiring, performance evaluation, task allocation), AI systems used in credit scoring and creditworthiness assessment, biometric identification systems, AI systems managing or operating critical infrastructure, and AI systems used in education and vocational training. Note what is not on the list: AI coding assistants, customer service chatbots, and recommendation engines are not high-risk by default.
The error most engineering teams make is reading a summary that says "AI used in healthcare is high-risk" and concluding that every healthcare AI deployment is covered. It isn't. The Act applies to specific use cases, and there are two routes into high-risk status: AI that is a safety component of a product already covered by EU product-safety legislation — clinical AI embedded in a regulated medical device, for example — is high-risk via Annex I, while the use cases above are high-risk via Annex III. An internal analytics tool that surfaces population health trends falls under neither.
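One way to make the triage concrete is to encode the Annex III categories your deployments are checked against as data rather than tribal knowledge. The mapping below is a minimal illustrative sketch — the keys are hypothetical internal labels, the category descriptions paraphrase Annex III, and any real classification still needs legal review of edge cases.

```python
# Hypothetical triage table: the Annex III categories most relevant to
# the deployments discussed above. Descriptions paraphrase the Act;
# edge cases still need legal review.
ANNEX_III_CATEGORIES = {
    "employment": "Annex III(4) - employment and workers management",
    "credit_scoring": "Annex III(5) - creditworthiness assessment",
    "biometric_id": "Annex III(1) - biometric identification",
    "critical_infrastructure": "Annex III(2) - critical infrastructure",
    "education": "Annex III(3) - education and vocational training",
}

def triage(use_case: str):
    """Return the Annex III category for an internal use-case label,
    or None when the system is not high-risk by default
    (e.g. a coding assistant or a support chatbot)."""
    return ANNEX_III_CATEGORIES.get(use_case)
```

The value of keeping this as a table is that the classification decision becomes reviewable and versionable alongside the systems it describes.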
The conformity assessment for a high-risk AI system is not a one-time event. Article 9 requires a risk management system that is "a continuous iterative process run throughout the entire lifecycle" of the system. This means your CI/CD pipeline needs to include compliance checks, not just your annual audit calendar.
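"Continuous iterative process" translates naturally into a pipeline gate: the build fails if required compliance artifacts are missing or stale. The sketch below is a minimal illustration under assumed file names — `docs/risk-register.md` and the other paths are hypothetical, not anything the Act prescribes.

```python
from pathlib import Path

# Hypothetical artifact paths; adapt to your repository layout.
REQUIRED_ARTIFACTS = [
    "docs/risk-register.md",    # Article 9 risk identification and analysis
    "docs/data-governance.md",  # Article 10 dataset provenance and bias review
    "docs/technical-file.md",   # Article 11 technical documentation
]

def compliance_gate(repo_root: str) -> list:
    """Return the compliance artifacts missing from the repo.
    An empty list means the gate passes; wire this into CI so a
    release cannot ship without its documentation."""
    root = Path(repo_root)
    return [p for p in REQUIRED_ARTIFACTS if not (root / p).is_file()]
```

A real gate would also check freshness (e.g. the risk register was touched since the last model change), but presence checks are a useful first step.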
The Technical Obligations Under Articles 9–15
Articles 9 through 15 contain the engineering requirements. Article 9 mandates a risk management system with documented identification and analysis of known and foreseeable risks. Article 10 requires data governance — training, validation, and testing data must be subject to data governance practices that address data collection, data preparation operations, examination for possible biases, and identification of relevant gaps or shortcomings. Article 11 requires technical documentation maintained throughout the system lifecycle. Article 12 requires logging capabilities that enable automatic recording of events throughout the system's lifetime.
Article 13 is the transparency obligation — high-risk systems must be designed so that their operation is sufficiently transparent that deployers can interpret the output. Article 14 mandates human oversight mechanisms: the system must be designed so a natural person can effectively oversee the system during its use. Article 15 requires accuracy, robustness, and cybersecurity.
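"Effective" oversight under Article 14 means the human is a control point, not a banner. One common pattern — sketched below with a hypothetical confidence threshold, not anything mandated by the Act — is to route outputs the system is unsure about to a human reviewer before any action is taken.

```python
from dataclasses import dataclass

# Hypothetical threshold; in practice it would be set and justified
# as part of the Article 9 risk analysis.
REVIEW_THRESHOLD = 0.85

@dataclass
class Decision:
    outcome: str
    confidence: float
    needs_human_review: bool

def oversee(model_outcome: str, confidence: float) -> Decision:
    """Wrap a model output so that low-confidence results are flagged
    for human review instead of being acted on automatically."""
    return Decision(
        outcome=model_outcome,
        confidence=confidence,
        needs_human_review=confidence < REVIEW_THRESHOLD,
    )
```

The UI then has to make the flagged path genuinely different — a reviewer who can only click "approve" is the decorative oversight Article 14 is written to prevent.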
The Logging Architecture Article 12 Requires
Article 12 requires high-risk AI systems to support automatic recording of events over the system's lifetime, and for remote biometric identification systems it specifies the minimum log contents: the period of each use, the reference database against which input data was checked, the input data for which the search led to a match, and the identity of the natural persons involved in verifying the results. For most current AI deployments, this requires a purpose-built audit log service — not application logs, not CloudWatch, not Splunk. The log must be tamper-evident, and the retention period must meet the requirements of the applicable sector regulation (which, for healthcare or financial services, is likely 5-10 years).
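The core of tamper-evidence is a hash chain: each record embeds the hash of the previous record, so any in-place edit breaks verification from that point forward. The sketch below shows only that property — a production service would add durable, access-controlled storage, anchoring of the chain head, and sector-specific retention.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained audit log (illustrative sketch)."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        """Record an event with a timestamp and a link to the prior record."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._prev_hash = digest
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered record fails."""
        prev = self.GENESIS
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev_hash"] != prev or recomputed != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

Note that this makes tampering detectable, not impossible — pairing the chain head with write-once storage or periodic external anchoring is what makes the detection trustworthy.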
A practical checklist for teams starting now:
- Classify your AI deployments against Annex III categories — get legal input on edge cases
- For each high-risk system, document the risk management process (not just the output)
- Implement Article 12-compliant logging: tamper-evident, structured, retained per sector rules
- Build human oversight mechanisms into the UI — Article 14 requires they be effective, not decorative
- Prepare technical documentation (Article 11) — this is a living document, not a one-time deliverable
- Establish a conformity assessment process — most high-risk systems require internal assessment, not notified body assessment
The Register Obligation
Article 49 requires providers of high-risk AI systems to register the system in the EU database established under Article 71 before placing it on the market or putting it into service; deployers that are public authorities have a parallel registration obligation. The database, set up and maintained by the European Commission, became operational in 2025. You need a registration number before August 2026 for any new high-risk deployment.
The technical documentation required under Article 11 is not a Word document. It must include the general description of the system, a detailed description of the elements and the development process, information about the training methodology and training datasets, the monitoring, functioning and control of the system, and the risk management documentation. This requires engineering input, not just policy writing.
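Because the technical file is a living artifact, it helps to track its required sections as structured data and check completeness automatically. The section names below paraphrase the Article 11 list above — the authoritative contents are in Annex IV of the Act, and this is a sketch, not a compliance template.

```python
# Section keys paraphrase the Article 11 / Annex IV requirements
# summarised above; consult Annex IV for the authoritative list.
TECH_DOC_SECTIONS = (
    "general_description",
    "development_process",
    "training_data_and_methodology",
    "monitoring_and_control",
    "risk_management",
)

def missing_sections(doc: dict) -> list:
    """Return the sections that are absent or empty -- run this on
    every release, not once a year, since the file must track the
    system throughout its lifecycle."""
    return [s for s in TECH_DOC_SECTIONS if not doc.get(s)]
```

Checked in CI, this turns "the documentation is out of date" from an audit finding into a failing build.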
What to Prioritise in the Next 90 Days
If you have high-risk AI systems and have not started compliance work: prioritise the risk management system documentation and the logging architecture. The conformity assessment and the technical documentation can follow, but the logging infrastructure has to be built first — it cannot be retrofitted onto a production system without downtime and data migration risk.
Our compliance infrastructure practice has worked through the Article 9–15 requirements for clients in financial services, healthcare, and government. If you're starting from scratch on an August 2026 deadline, the assessment-to-architecture timeline is 6-8 weeks for a system of moderate complexity.
The full text of the EU AI Act is available at EUR-Lex. Read the recitals as well as the articles — the recitals explain the intent of requirements that the articles state tersely.