
Does the EU AI Act Apply to US Companies?

Yes. Under Regulation (EU) 2024/1689, the EU AI Act applies to any organization whose AI system outputs are used within the EU — regardless of where the organization is headquartered. Here is what that means for US businesses.

Published April 13, 2026 · By Constantin Razvan Gospodin, Legal AI Risk Manager


The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024. It is the first comprehensive AI regulation in the world, and its extraterritorial scope means it reaches far beyond the borders of the European Union. If you are a CEO, general counsel, or compliance officer at a US company, this article explains whether the Act applies to you and what you need to do.

The Output Rule: Article 2(1)

Article 2 of the EU AI Act defines who falls within its scope. The critical provision for US companies is Article 2(1)(c): the Act applies to providers and deployers of AI systems that are established or located in a third country (like the United States), where the output produced by the AI system is used in the Union.

This is what makes the EU AI Act fundamentally different from regulations that only apply to companies with a physical EU presence. The scope trigger follows the output, not your office address. If your AI system produces results that are used by, or affect, anyone located in the EU, you are likely in scope.

In practical terms, this means a New York fintech whose credit scoring algorithm evaluates EU applicants is in scope. A SaaS company in San Francisco whose AI-powered hiring tool screens candidates for EU-based clients is in scope. A healthcare AI company whose diagnostic tool is used by hospitals in Germany is in scope.

Who Exactly Is Covered?

The EU AI Act applies to several categories of actors, regardless of where they are established:

Providers — entities that develop an AI system or have one developed on their behalf and place it on the market or put it into service under their own name or trademark. Under Article 2(1)(a), providers are covered irrespective of whether they are established in the EU or in a third country.

Deployers — entities that use an AI system under their authority. Deployers established in the EU are covered under Article 2(1)(b). Deployers outside the EU are covered under Article 2(1)(c) if the AI system's output is used in the EU.

Importers and distributors — entities in the AI supply chain that bring AI systems into the EU market or make them available.

A critical point many US companies overlook: deployer obligations exist independently of provider obligations. Even if your AI vendor claims their system is compliant, you remain responsible for how you deploy and use it. Under Articles 26 and 27 of the EU AI Act, deployers of high-risk AI systems must implement human oversight, conduct fundamental rights impact assessments, and ensure proper use in accordance with instructions.

What Makes an AI System High-Risk?

The EU AI Act uses a risk-based classification system defined in Article 6. An AI system is classified as high-risk either when it is a product, or a safety component of a product, covered by the EU harmonisation legislation listed in Annex I (Article 6(1)), or when it falls under one of the use cases listed in Annex III (Article 6(2)). The Annex III route is the one most relevant to US companies. Its eight categories are:

1. Biometrics — remote biometric identification and categorization by sensitive attributes.

2. Critical infrastructure — AI managing energy, transport, water, gas, heating, and digital infrastructure.

3. Education and vocational training — AI determining access to education, evaluating learning outcomes, or monitoring students.

4. Employment and worker management — AI used in recruitment, screening, hiring, promotion, or termination decisions. This is particularly relevant for US companies because AI hiring tools trigger high-risk classification under both the EU AI Act and NYC Local Law 144.

5. Access to essential services — AI used in credit scoring, insurance risk assessment, emergency dispatch, and social benefit eligibility.

6. Law enforcement — AI used in crime risk assessment, evidence analysis, or profiling.

7. Migration and border control — AI used in visa, asylum, or residence permit processing.

8. Justice and democracy — AI assisting in legal interpretation, fact assessment, or dispute resolution.

For most US companies, the highest-risk areas are employment (category 4) and access to essential services (category 5), particularly credit scoring and insurance underwriting. NYC-based financial services and HR technology companies are especially exposed.

The Timeline: What Is Already Enforceable

The EU AI Act does not become enforceable all at once. The timeline is phased:

February 2, 2025 — Prohibitions on unacceptable-risk AI practices took effect. AI literacy requirements under Article 4 also became applicable. Both are already enforceable today.

August 2, 2025 — Obligations for providers of general-purpose AI (GPAI) models became applicable, including transparency and documentation requirements.

August 2, 2026 — The majority of the Act becomes fully applicable, including all obligations for high-risk AI systems under Annex III: conformity assessments, technical documentation, risk management systems, human oversight, EU database registration, and transparency requirements under Article 50.

Note: The European Commission proposed the Digital Omnibus package in November 2025, which could delay certain high-risk enforcement dates to December 2027. However, this proposal must still pass the European Parliament and Council. As of April 2026, the original August 2026 deadline remains the binding date. Compliance professionals should not treat the Omnibus as a guaranteed extension.

Penalties for Non-Compliance

The penalty structure is defined in Article 99 of the EU AI Act and follows three tiers:

Tier 1 — Violations of prohibited AI practices (Article 5): fines up to €35 million or 7% of total worldwide annual turnover, whichever is higher.

Tier 2 — Non-compliance with high-risk system obligations, deployer obligations, or transparency requirements: fines up to €15 million or 3% of worldwide annual turnover.

Tier 3 — Providing incorrect, incomplete, or misleading information to authorities: fines up to €7.5 million or 1% of worldwide annual turnover.

For SMEs and startups, fines are capped at the lower of the two amounts (percentage vs. fixed), providing some proportionality. However, even for smaller companies, the fines are material.
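To make the tier arithmetic concrete, here is an illustrative sketch of how the fixed caps combine with the turnover percentages (the function, its signature, and the example figures are our own illustration, not statutory text; the actual fine in any case is set by the supervising authority):

```python
def max_fine_eur(tier: int, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative upper bound on an Article 99 administrative fine.

    Tier 1: prohibited practices (Art. 5)            - EUR 35M  or 7% of turnover
    Tier 2: high-risk/deployer/transparency breaches - EUR 15M  or 3% of turnover
    Tier 3: incorrect or misleading information      - EUR 7.5M or 1% of turnover
    Standard rule: whichever is HIGHER; for SMEs and startups: whichever is LOWER.
    """
    caps = {1: (35_000_000, 7), 2: (15_000_000, 3), 3: (7_500_000, 1)}
    fixed, pct = caps[tier]
    turnover_based = worldwide_turnover_eur * pct / 100
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A company with EUR 2 billion turnover violating Article 5:
print(max_fine_eur(1, 2_000_000_000))            # prints 140000000.0 (7% exceeds EUR 35M)
# An SME with EUR 50 million turnover, same violation:
print(max_fine_eur(1, 50_000_000, is_sme=True))  # prints 3500000.0 (lower of the two)
```

The example shows why the "whichever is higher" rule matters: for a large company, the percentage dominates, so exposure scales with worldwide revenue rather than stopping at the fixed cap.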

GDPR enforcement provides the precedent. EU regulators have already demonstrated willingness to impose fines on non-EU companies, including Meta (€1.2 billion, 2023) and Amazon (€746 million, 2021).

What US Companies Should Do Now

With the August 2, 2026 deadline approaching, US companies in scope should take these steps:

Inventory your AI systems. Create a comprehensive list of every AI system you develop, deploy, or procure from vendors. Include embedded AI in third-party tools.

Classify each system by risk level. Map each system against the Annex III categories to determine whether it qualifies as high-risk. Document your classification rationale.

Assess your EU nexus. Determine whether any of your AI systems produce outputs that are used within the EU, affect EU individuals, or are deployed by EU-based clients.

Identify your role. Are you a provider, a deployer, or both? Your compliance obligations differ significantly depending on your role in the AI value chain.

Begin documentation. High-risk systems require a risk management system (Article 9), technical documentation (Article 11), record-keeping (Article 12), and human oversight procedures (Article 14). These cannot be built overnight.

Consider an authorized representative. Under Article 22, providers of high-risk AI systems not established in the EU must appoint an authorized representative located in the EU before placing their system on the EU market.

Unsure whether the EU AI Act applies to your organization?

Take our free five-minute assessment to determine your exposure, risk classification, and next steps.

Free EU AI Act Assessment

Lexara Advisory LLC — AI governance consulting, not legal practice. This article provides general compliance information and does not constitute legal advice. Regulation references: EU AI Act (Regulation (EU) 2024/1689), published in the Official Journal of the EU on July 12, 2024.
