What Happens If You Don’t Comply With the EU AI Act
The EU AI Act carries the second-highest percentage-based penalty in EU digital regulation. Fines reach up to €35 million or 7% of global annual turnover, whichever is higher. Here is exactly what non-compliance costs and how enforcement will work.
Published April 13, 2026 · By Constantin Razvan Gospodin, Legal AI Risk Manager
Regulation (EU) 2024/1689 — the EU AI Act — does not simply suggest compliance. It mandates it, and Article 99 defines the consequences for failure. With the high-risk obligations taking effect on August 2, 2026, this article explains the three tiers of administrative fines, the enforcement mechanisms, and the real-world costs that extend beyond the fine amount itself.
The Three Tiers of Administrative Fines
Article 99 of the EU AI Act establishes a tiered penalty framework. Each tier corresponds to a different category of violation, and each tier specifies both a fixed maximum amount and a percentage of worldwide annual turnover — whichever is higher applies.
Tier 1: Prohibited AI Practices — up to €35 million or 7% of worldwide annual turnover. This is the highest penalty tier, reserved for violations of Article 5. Prohibited practices include social scoring systems used by or on behalf of public authorities, real-time remote biometric identification in publicly accessible spaces for law enforcement (outside narrowly defined exceptions), AI systems deploying subliminal manipulation techniques that cause harm, AI exploiting vulnerabilities of specific groups, emotion recognition in the workplace or educational institutions, and untargeted scraping of facial images from the internet or CCTV to build recognition databases. Article 99(3) is unambiguous: the fine is the higher of €35 million or 7% of worldwide turnover.
Tier 2: High-Risk Non-Compliance — up to €15 million or 3% of worldwide annual turnover. Under Article 99(4), this tier applies to violations of the obligations for providers (Article 16), authorized representatives (Article 22), importers (Article 23), distributors (Article 24), deployers (Article 26), notified bodies (Articles 31, 33, 34), and transparency obligations (Article 50). In practical terms, this covers failure to implement a risk management system, failure to prepare technical documentation, inadequate data governance, insufficient human oversight measures, failure to complete a conformity assessment, and failure to register in the EU database. These are the obligations detailed in our compliance checklist.
Tier 3: Incorrect Information — up to €7.5 million or 1% of worldwide annual turnover. Under Article 99(5), this tier applies specifically to supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request. This may sound minor, but it is a trap for companies that attempt to demonstrate compliance through incomplete or overstated documentation.
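The "whichever is higher" rule across the three tiers can be sketched as a short calculation. This is a hypothetical illustration: the tier caps come from Article 99(3) to 99(5), but the function, dictionary, and example turnover figures are invented for demonstration and are not part of the Act.

```python
# Illustrative sketch of the Article 99 "higher of" fine caps.
# Caps per tier are taken from Article 99(3)-(5); the function name
# and example turnover figures below are hypothetical.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 99(3)
    "high_risk_obligations": (15_000_000, 0.03),  # Art. 99(4)
    "incorrect_information": (7_500_000, 0.01),   # Art. 99(5)
}

def max_fine(tier: str, worldwide_turnover: float) -> float:
    """Maximum fine: the higher of the fixed cap and the turnover percentage."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_turnover)

# A provider with EUR 1 billion worldwide turnover: 7% (EUR 70M) exceeds
# the EUR 35M fixed cap, so the percentage governs.
print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0

# A provider with EUR 200 million turnover: 3% is EUR 6M, so the
# EUR 15M fixed cap governs instead.
print(max_fine("high_risk_obligations", 200_000_000))   # 15000000
```

The point the sketch makes: for large companies the percentage is the binding ceiling, while for mid-sized companies the fixed amount usually is.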
How Fines Are Calculated
Article 99(7) specifies that authorities must consider all relevant circumstances when determining the fine amount. The factors include the nature, gravity, and duration of the infringement and its consequences; the number of affected persons and the level of damage suffered; whether other authorities have already fined the same operator for the same infringement; the size, annual turnover, and market share of the operator; any previous infringements; the degree of cooperation with authorities; and the manner in which the infringement became known (self-reported versus discovered by regulators).
These criteria closely mirror GDPR’s penalty calculation framework under Article 83(2), which means EU data protection authorities already have experience applying this type of analysis. The intentional or negligent character of the infringement is explicitly considered — meaning that knowingly ignoring EU AI Act requirements will be treated more severely than unintentional gaps.
The SME Proportionality Rule
Article 99(6) provides a proportionality adjustment for small and medium-sized enterprises, including startups. For SMEs, the fine is the lower of the two amounts (percentage of turnover vs. fixed amount), rather than the higher. This means a startup with €2 million in annual turnover faces a maximum Tier 1 fine of €140,000 (7% of €2M) rather than €35 million. The protection is significant, but a €140,000 fine for a seed-stage startup can still be existential.
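The SME adjustment inverts the standard rule: the cap becomes the lower of the two amounts rather than the higher. A minimal sketch, reproducing the €140,000 startup example above (the function name and example figures are hypothetical; the €35M/7% cap is from Article 99(3)):

```python
# Illustrative sketch of the Article 99(6) SME adjustment: for SMEs the
# applicable ceiling is the LOWER of the fixed amount and the turnover
# percentage. The function name and example turnover are hypothetical.

def tier1_cap(worldwide_turnover: float, is_sme: bool) -> float:
    """Tier 1 (Article 99(3)) ceiling: EUR 35M or 7% of worldwide turnover."""
    fixed_cap = 35_000_000
    pct_cap = 0.07 * worldwide_turnover
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

# The startup example from the text: EUR 2M annual turnover.
print(tier1_cap(2_000_000, is_sme=True))   # 140000.0 -> the EUR 140,000 ceiling
print(tier1_cap(2_000_000, is_sme=False))  # 35000000 -> the standard EUR 35M cap
```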
GDPR Enforcement: The Precedent for US Companies
The EU AI Act is often described as following the GDPR model. For enforcement, this comparison is instructive. GDPR has been actively enforced against non-EU companies since 2018. EU data protection authorities have imposed billions of euros in fines, including against major US technology companies, for violations ranging from inadequate consent mechanisms to insufficient data protection measures. These fines were levied against companies without a physical EU presence, using the same extraterritorial jurisdiction mechanism that the EU AI Act adopts under Article 2.
The enforcement infrastructure already exists. National market surveillance authorities across 27 Member States will enforce the EU AI Act, coordinated by the AI Office at the European Commission. The AI Office is operational and already exercising oversight functions for general-purpose AI models. The question is not whether enforcement will happen, but when and against whom.
Beyond Fines: The Real Cost of Non-Compliance
Administrative fines are the most visible consequence, but they are not the only one. Non-compliance with the EU AI Act creates additional costs that compound over time.
Market exclusion. A high-risk AI system (as classified under Article 6) that has not completed the required conformity assessment cannot be legally placed on the EU market or put into service within the EU. This is not a fine — it is a market access prohibition. For US companies that sell AI-powered products or services to EU customers, non-compliance means losing access to a market of 450 million people.
Supply chain disruption. The EU AI Act creates obligations throughout the AI value chain. Under Articles 23 and 24, importers and distributors must verify that the AI systems they handle have undergone conformity assessment and bear the required CE marking. EU-based business partners will increasingly require compliance documentation from their US AI vendors. Non-compliance will result in lost contracts and damaged business relationships.
Reputational damage. GDPR enforcement demonstrated that regulatory action generates significant media coverage, particularly when the target is a well-known company. An EU AI Act enforcement action against a US company will be covered by international media, industry publications, and compliance analysts. The reputational cost can exceed the fine amount, particularly for companies in regulated industries such as financial services and healthcare.
No double jeopardy, but cumulative risk. Article 99(8) states that when a violation also constitutes an infringement under other EU legislation (such as GDPR), only the higher of the applicable fines is imposed for the same factual violation. However, different violations can be penalized separately. An AI system that violates both the EU AI Act’s data governance requirements and the GDPR’s data protection principles could trigger separate penalties for each distinct violation. For organizations subject to multiple EU frameworks, the cumulative regulatory risk is substantial. The EU AI Act’s 7% turnover ceiling makes it the second-highest percentage-based penalty in EU digital regulation, behind only the Digital Markets Act.
What Happens After August 2, 2026
On August 2, 2026 — unless the Digital Omnibus is formally adopted — the full high-risk obligations of the EU AI Act become enforceable. From that date, national market surveillance authorities can begin investigating non-compliance, requesting information from providers and deployers, ordering corrective measures (including withdrawal of non-compliant AI systems from the market), and imposing administrative fines under Article 99.
For US companies, enforcement will primarily flow through their EU authorized representative (if appointed), through their EU business partners (importers and distributors), or through direct investigation triggered by incidents or complaints. The EU AI Act also establishes reporting obligations: under Article 73, providers and deployers must report serious incidents to the relevant authority. Failure to report is itself an enforcement trigger.
The Cost of Waiting
The most expensive compliance strategy is delay. Every month of inaction narrows the window for implementation. Technical documentation cannot be retroactively created. Risk management systems cannot be established overnight. Conformity assessments require preparation, execution, and documentation. EU authorized representatives need to be identified, vetted, and formally appointed.
The Digital Omnibus may extend the deadline to December 2, 2027 for standalone high-risk AI systems. But the proposal is still in the legislative process, and relying on it is a compliance gamble. The asymmetry is clear: if you prepare now and the deadline is extended, you have extra time to refine. If you wait and the deadline holds, you face enforcement with no compliance program in place.
If you have not started, our free AI Regulatory Readiness Assessment will show you exactly where your gaps are across 43 controls and 8 compliance domains. Five minutes now can prevent millions in exposure later.
Assess Your Compliance Gaps
Take the free 5-minute assessment to understand your organization’s exposure under the EU AI Act penalty framework.
Start the Free Assessment

Lexara Advisory LLC — AI governance consulting, not legal practice. Lexara Advisory does not provide legal advice and is not a law firm. This article is for informational purposes only and does not constitute legal advice. Organizations should consult qualified legal counsel for advice specific to their circumstances.