
EU AI Act Compliance Checklist for August 2026

The EU AI Act's high-risk obligations take effect on August 2, 2026. This is a practical, step-by-step checklist for US organizations that need to be compliant before that deadline, where non-compliance carries fines of up to €15 million or 3% of worldwide annual turnover.

Published April 13, 2026 · By Constantin Razvan Gospodin, Legal AI Risk Manager


Regulation (EU) 2024/1689 — the EU AI Act — was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. The full high-risk obligations under Article 6 and Annex III become enforceable on August 2, 2026. The European Commission’s Digital Omnibus proposal may extend this deadline to December 2, 2027 for standalone high-risk systems, but the proposal still requires approval by the European Parliament and Council. Until it is formally adopted, the August 2026 deadline stands.

This checklist is designed for compliance officers, general counsel, and business leaders at US organizations whose AI systems fall within the scope of the EU AI Act under Article 2(1)(c). If your AI system produces output that is used within the EU, you are likely in scope. If you are unsure, start with our free assessment.

Phase 1: AI System Inventory and Classification

1. Inventory all AI systems. Before anything else, you need a complete inventory. Every AI system your organization develops, deploys, or procures must be catalogued. Include internal tools, third-party vendor systems, and embedded AI features in your products. Document the purpose, the data inputs, the output type, and the downstream decisions influenced by each system.

2. Classify each system by risk level. The EU AI Act uses a four-tier risk framework: unacceptable (prohibited under Article 5), high-risk (subject to the full compliance obligations under Articles 6–27 and Annex III), limited risk (transparency obligations under Article 50), and minimal risk. The critical question for most US companies is whether any of their systems fall into the eight high-risk categories defined in Annex III: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and justice.

3. Determine your role in the AI value chain. The EU AI Act assigns different obligations to different actors. Under Article 3, you may be a provider (you develop or commission the AI system), a deployer (you use the AI system under your authority), an importer (you bring the system into the EU market), or a distributor. Your compliance obligations differ based on your role. A single organization can hold multiple roles simultaneously.

Phase 2: Governance and Organizational Structure

4. Establish an AI governance framework. High-risk AI compliance under the EU AI Act requires documented governance. Under Article 17, providers must implement a quality management system that covers the entire AI lifecycle. This means defined roles and responsibilities, documented policies, and clear escalation procedures. If your governance structure only involves IT and legal, that is an auditable gap — the regulation expects cross-functional oversight including business units, ethics, and risk management.

5. Appoint an EU Authorized Representative. Under Article 22, non-EU providers placing a high-risk AI system on the EU market must appoint an authorized representative established in the EU. This representative acts as your compliance liaison with EU national authorities. Without one, you cannot legally place a high-risk AI system on the EU market. This appointment must be documented in a written mandate.

6. Implement AI literacy training. Article 4 of the EU AI Act requires that all providers and deployers ensure their staff have sufficient AI literacy. This obligation has been in force since February 2, 2025 — it is not a future requirement, it applies now. Training must be proportionate to the AI system’s risk level and the staff member’s role. If only your technical team has received AI training, that is a compliance gap.

Phase 3: Technical Documentation and Risk Management

7. Implement a risk management system. Article 9 requires a risk management system that operates throughout the entire AI system lifecycle. This is not a one-time risk assessment. The system must identify and analyze known and reasonably foreseeable risks, estimate risks based on the intended purpose and reasonably foreseeable misuse, adopt appropriate risk management measures, and evaluate residual risks after mitigation. The risk management system must be documented, regularly updated, and subject to systematic review.

8. Prepare technical documentation. Article 11 and Annex IV define the technical documentation requirements for high-risk AI systems. This documentation must be prepared before the system is placed on the market and must include: a general description of the AI system, a detailed description of the development process, information about monitoring and oversight, a description of the data governance and management practices, and detailed information about the system’s performance and limitations. This is not a retrofit — it requires documentation from the design phase onward.

9. Establish data governance practices. Article 10 imposes specific requirements on the data used to train, validate, and test high-risk AI systems. Training datasets must be relevant, sufficiently representative, and as free of errors as possible. You must document data collection processes, data preparation, the formulation of assumptions, and any data gaps or shortcomings. If your AI system processes personal data of EU individuals, remember that GDPR obligations apply simultaneously.

10. Ensure human oversight capabilities. Article 14 requires that high-risk AI systems are designed to allow effective human oversight. This means the system must enable the human overseer to fully understand the AI system’s capabilities and limitations, monitor its operation, interpret its outputs correctly, and decide not to use the system, or to override or reverse its output. Human oversight is not a checkbox — it requires documented procedures, trained personnel, and technical design features that make oversight practically possible.

Phase 4: Conformity Assessment and Registration

11. Conduct the conformity assessment. Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment under Article 43. For most Annex III systems, this is a self-assessment based on internal control (Annex VI). For the biometric systems listed in Annex III point 1, however, a third-party conformity assessment through a notified body (Annex VII) is required unless harmonised standards covering the relevant requirements have been applied in full. Document every step of this assessment.

12. Draw up the EU Declaration of Conformity. Under Article 47, after completing the conformity assessment, providers must draw up a written EU Declaration of Conformity for each high-risk AI system. This declaration must state that the system meets the requirements of the EU AI Act. It must be kept for 10 years after the system is placed on the market or put into service.

13. Register in the EU database. Article 49 requires providers — and, for certain systems, deployers that are public authorities — to register high-risk AI systems in the EU database established under Article 71 before placing the system on the market or putting it into service. Registration requires specific information including the provider’s identity, the system’s intended purpose, its risk classification, and the conformity assessment procedure used. This is a legal obligation, not an optional step.
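The route choice in step 11 can be expressed as a small decision function. This is a deliberate simplification for illustration: it covers only the biometrics branch of Article 43 as described above, and the actual determination depends on factors (harmonised standards, common specifications, sectoral legislation) that need legal review.

```python
def conformity_route(annex_iii_category: str,
                     harmonised_standards_applied: bool = True) -> str:
    """Simplified sketch of the Article 43 assessment route (illustrative only)."""
    if annex_iii_category == "biometrics" and not harmonised_standards_applied:
        # Annex III point 1 systems without fully applied harmonised
        # standards need a notified body.
        return "third-party assessment via notified body (Annex VII)"
    # Most other Annex III systems: internal control.
    return "self-assessment based on internal control (Annex VI)"

print(conformity_route("employment"))
print(conformity_route("biometrics", harmonised_standards_applied=False))
```

Whichever route applies, steps 12 and 13 follow it: the Declaration of Conformity records the outcome, and registration discloses which procedure was used.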

Phase 5: Ongoing Compliance and Monitoring

14. Implement post-market monitoring. Article 72 requires providers of high-risk AI systems to establish a post-market monitoring system proportionate to the nature of the AI system. This system must actively and systematically collect, document, and analyze data on the performance of the AI system throughout its lifetime. The results of post-market monitoring must feed back into the risk management system under Article 9.

15. Establish incident reporting procedures. Article 73 requires providers and deployers to report serious incidents to the relevant market surveillance authority. A serious incident includes any incident that directly or indirectly leads to death, serious damage to health, serious damage to property, or serious environmental damage, and any incident that constitutes a serious and irreversible breach of fundamental rights. Reports must be filed immediately after establishing a causal link, and no later than 15 days after becoming aware of the incident.
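The outer reporting deadline in step 15 is easy to track programmatically. The sketch below computes only the general 15-day backstop from the awareness date; Article 73 sets shorter windows for some incident types, so treat this as the outer bound, not the operative deadline for every case.

```python
from datetime import date, timedelta

GENERAL_REPORT_WINDOW = timedelta(days=15)  # Article 73 general backstop

def report_deadline(aware_on: date) -> date:
    """Latest date to file a serious-incident report (general case only)."""
    return aware_on + GENERAL_REPORT_WINDOW

print(report_deadline(date(2026, 9, 1)))  # 2026-09-16
```

In practice the report should go out as soon as the causal link is established; the computed date is a backstop for escalation tracking, not a target.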

The Penalty Structure: What Non-Compliance Costs

Under Article 99 of the EU AI Act, the penalty framework has three tiers. Violations of the prohibited AI practices under Article 5 carry fines up to €35 million or 7% of total worldwide annual turnover, whichever is higher. Non-compliance with high-risk obligations (the requirements covered in this checklist) carries fines up to €15 million or 3% of worldwide annual turnover. Providing incorrect, incomplete, or misleading information to authorities carries fines up to €7.5 million or 1% of worldwide annual turnover. For SMEs and startups, Article 99(6) provides proportionality — the lower of the fixed amount or percentage applies, rather than the higher.
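The “whichever is higher” mechanics, and the SME carve-out in Article 99(6), can be sketched numerically. The caps below are the statutory figures described above; the function itself is a simplification, since actual fines are set by authorities well below these maxima.

```python
# Article 99 caps: (fixed amount in EUR, share of worldwide annual turnover)
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine for a given tier (illustrative sketch)."""
    fixed, pct = TIERS[tier]
    if is_sme:
        # Article 99(6): the LOWER of the two applies to SMEs and startups.
        return min(fixed, pct * turnover_eur)
    return max(fixed, pct * turnover_eur)

# A company with EUR 1 billion turnover: 3% = EUR 30m exceeds the EUR 15m cap.
print(max_fine("high_risk_obligations", 1_000_000_000))  # 30000000.0
```

Note how the percentage prong dominates for large companies: above €500 million in turnover, the 3% figure exceeds the €15 million fixed cap for high-risk violations.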

The Digital Omnibus: A Possible Delay, Not a Reprieve

The European Commission’s Digital Omnibus proposal, published on November 19, 2025, proposes extending the high-risk compliance deadline. For standalone high-risk AI systems under Annex III, the backstop date would move to December 2, 2027. For high-risk AI systems embedded in products already subject to EU product safety legislation, the proposed backstop is August 2, 2028. However, this proposal is still in the legislative process. The Council favored fixed deadlines, the European Parliament has the file in committee, and trilogue negotiations have not concluded. Relying on an unadopted proposal is a compliance risk. The prudent approach: prepare for August 2, 2026, and treat any extension as extra time to refine your compliance program, not as a reason to delay starting one.

NYC Companies: The Dual Compliance Reality

If your organization is based in New York City and uses AI for employment decisions, you face a dual compliance obligation. NYC Local Law 144 requires annual independent bias audits of automated employment decision tools (AEDTs), candidate notification at least 10 business days before use, and public disclosure of audit results. Penalties range from $500 to $1,500 per violation per day. A December 2025 audit by the New York State Comptroller found that enforcement has been ineffective, but DCWP has committed to strengthening oversight. For a detailed analysis of the overlap between these two frameworks, see our article on EU AI Act vs NYC Local Law 144.

Where to Start

If you are reading this checklist and realizing that your organization has not begun preparing, you are not alone. According to industry surveys, a significant majority of enterprises have not started EU AI Act compliance programs. But the deadline is approaching. The most common mistake organizations make is treating AI compliance as a future project rather than an active one.

Start with Phase 1. Get the inventory done. Know what you have, classify it, and understand your role in the AI value chain. Everything else flows from there.

If you need help determining where your organization stands and what specific steps to take, our free AI Regulatory Readiness Assessment covers 43 controls across 8 compliance domains. It takes five minutes and gives you a clear starting point.


Lexara Advisory LLC — AI governance consulting, not legal practice. Lexara Advisory does not provide legal advice and is not a law firm. This article is for informational purposes only and does not constitute legal advice. Organizations should consult qualified legal counsel for advice specific to their circumstances.
