EU AI Act Article 2 — Does It Apply to US Companies?

Written for US compliance officers, legal teams, and business leaders navigating the extraterritorial reach of the EU AI Act.

The scope trigger follows the output, not your address

Article 2(1) of the EU AI Act (Regulation 2024/1689) establishes three categories of non-EU entities that fall within its scope. The critical principle: jurisdiction follows where the AI system's output is used, not where the system is built, hosted, or where the company is headquartered.

The Act names three triggers for US companies. First, providers placing AI systems on the EU market, which covers any US company selling an AI product to EU customers. Second, providers whose AI system outputs are used within the EU, capturing US companies whose AI makes decisions affecting EU residents even if the sale happens outside Europe. Third, importers and distributors handling AI systems in the EU market.
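The three triggers can be sketched as a simple scope check. This is an illustrative model only, not the Act's text: the class and field names (`AISystemScenario`, `placed_on_eu_market`, and so on) are hypothetical shorthand for the Article 2(1) categories described above.

```python
from dataclasses import dataclass

@dataclass
class AISystemScenario:
    """Hypothetical model of a deployment scenario (illustrative names)."""
    placed_on_eu_market: bool            # trigger 1: provider sells into the EU
    output_used_in_eu: bool              # trigger 2: output reaches EU individuals
    imported_or_distributed_in_eu: bool  # trigger 3: importer/distributor role

def article_2_triggers(s: AISystemScenario) -> list[str]:
    """Return which Article 2(1) triggers a scenario hits.

    Note what is absent: no company-location, targeting, or intent
    check. Jurisdiction follows the output, not the address.
    """
    triggers = []
    if s.placed_on_eu_market:
        triggers.append("provider placing on EU market")
    if s.output_used_in_eu:
        triggers.append("provider whose output is used in the EU")
    if s.imported_or_distributed_in_eu:
        triggers.append("importer/distributor in the EU market")
    return triggers

# A US-hosted system with no EU sale, whose output nonetheless
# scores EU individuals, is still in scope via trigger 2.
us_hosted = AISystemScenario(False, True, False)
print(article_2_triggers(us_hosted))
```

The point of the sketch is the missing branches: unlike a GDPR-style analysis, there is no condition on where the company sits or whether it targeted the EU.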

Why this is broader than GDPR

Under GDPR, extraterritorial application required that a company either offer goods or services to EU individuals or monitor their behavior. The EU AI Act requires neither targeting nor monitoring. If your AI output reaches an EU individual — a job applicant screened by your algorithm, a customer scored by your credit model, a student assessed by your platform — you are in scope. There is no intent test, no targeting requirement, and no data processing connection needed.

The IAPP confirmed in August 2025 that this extraterritorial reach is broader than GDPR's. Many US compliance teams initially treated the AI Act as the narrower of the two obligations. That assumption needs to be revisited.

Practical scenarios for US organizations

SaaS with global customers. A US company develops a recommendation engine used by thousands of customers worldwide. The moment one EU-based customer starts using that engine for high-risk purposes, the provider is in scope — potentially without knowing it.

Financial services. A credit scoring system hosted in Virginia that scores EU counterparties is in scope. The question is where the output is used, not where the system sits.

HR technology. A US hiring platform that screens applications from EU job candidates triggers both EU AI Act Annex III (Area 4: employment) and potentially NYC LL144 if candidates reside in New York City.

Higher education. A New York university using AI-powered proctoring or adaptive learning tools for EU exchange students or joint-degree programs is deploying high-risk AI under Annex III (Area 3: education).

What is already enforceable

The EU AI Act's obligations are phased in over time, but two categories are already active. Prohibited AI practices under Article 5 have been enforceable since February 2, 2025. These include real-time biometric identification in public spaces, emotion recognition in workplaces and schools, social scoring, and manipulative AI. The AI literacy obligation under Article 4, requiring providers and deployers to ensure their staff has sufficient AI literacy, has also been in force since February 2, 2025.

GPAI model obligations under Chapter V became applicable on August 2, 2025. High-risk system obligations under Annex III take effect August 2, 2026.

The authorized representative requirement

Non-EU providers of high-risk AI systems and GPAI models must appoint an authorized representative within the EU before placing their systems on the market. Without an authorized representative, you cannot legally offer your AI product in Europe. This is a distinct role from GDPR representatives and requires specific AI Act expertise.

What US companies should do now

First, map your AI output flows to identify any EU nexus. Second, classify each in-scope system against the risk framework. Third, assess whether any of your current AI practices fall under the already-enforceable prohibited categories. Fourth, begin AI literacy training for staff who operate or use AI systems. Fifth, for high-risk systems, start the conformity assessment process — August 2, 2026 is months away, not years.
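The deadlines behind step five can be made concrete with a small runway calculation. This is a minimal sketch using only the enforcement dates stated in this article; the milestone labels are informal shorthand, not official designations.

```python
from datetime import date

# Enforcement milestones as stated in the article.
MILESTONES = {
    "Prohibited practices and AI literacy (Arts. 5 and 4)": date(2025, 2, 2),
    "GPAI model obligations (Chapter V)": date(2025, 8, 2),
    "High-risk system obligations (Annex III)": date(2026, 8, 2),
}

def compliance_runway(today: date) -> dict[str, int]:
    """Days remaining per milestone; negative means already enforceable."""
    return {name: (deadline - today).days for name, deadline in MILESTONES.items()}

# Measured from this article's April 2026 publication, the first two
# milestones are already in force and the high-risk deadline is
# roughly four months out.
for name, days in compliance_runway(date(2026, 4, 1)).items():
    print(f"{name}: {days} days")
```

Running the same calculation against your own project plan makes the "months, not years" point tangible: a conformity assessment that takes two quarters must already be underway.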

The organizations that begin compliance now have the advantage of time. Those that wait face compressed timelines, higher costs, and enforcement risk when national market surveillance authorities begin active supervision from August 2026.

Related reading

EU AI Act Fines: €35M and 7% Turnover · EU AI Act Timeline for US Organizations · Article 4 AI Literacy Obligation

Assess your exposure

Take our free 5-minute assessment to determine how these obligations apply to your organization.

This article provides general information about AI regulation. It does not constitute legal advice. Lexara Advisory LLC is an AI governance consulting firm, not a law firm. Published April 2026.
