
Annex III High-Risk Classification for New York Financial Services


Classification guidance for New York financial institutions using AI in credit, insurance, and trading operations.

Which financial AI systems are high-risk?

Annex III of the EU AI Act (Area 5) classifies certain financial AI applications as high-risk. For New York financial services firms with EU customers or counterparties, understanding exactly which systems trigger high-risk obligations is critical — because the distinction between covered and exempt systems is narrower than many assume.

High-risk under Annex III Area 5

Credit scoring and creditworthiness assessment. AI systems used to evaluate the creditworthiness of natural persons are explicitly classified as high-risk under Area 5(b). This includes any model that scores, ranks, or assesses an individual's likelihood to repay debt. If your credit model evaluates EU individuals — even from a Virginia data center — it is in scope.

Life and health insurance pricing. AI systems used for risk assessment and pricing in life and health insurance are high-risk under Area 5(c). This covers underwriting algorithms that set premiums based on individual risk profiles, claims triage systems, and any AI that influences coverage decisions for natural persons.

Access to essential services. AI used by public authorities to evaluate eligibility for benefits and services, including allocation, reduction, or revocation of those benefits, falls under Area 5(a).

What is explicitly NOT high-risk

Fraud detection. The EU AI Act explicitly exempts fraud detection AI from the high-risk credit scoring category. Systems designed solely to detect financial fraud do not trigger Annex III Area 5 obligations, though they may still be subject to transparency requirements under Article 50 if they interact with individuals.

Property and casualty insurance. Only life and health insurance AI pricing is classified as high-risk. Property, casualty, auto, and commercial insurance pricing algorithms do not fall under Area 5(c), though they may be captured under other provisions if they involve profiling of individuals.

Algorithmic trading. Pure market-making and trading algorithms that do not evaluate or make decisions about natural persons are generally not high-risk under Annex III, as they do not affect individuals' fundamental rights.

The 40% gray zone

Industry analysis suggests roughly 40% of enterprise financial AI systems fall into neither a clearly high-risk nor clearly exempt category. Systems that combine fraud detection with credit assessment, or that use customer behavior data for both marketing and creditworthiness evaluation, require careful analysis. For these hybrid systems, proving a valid exemption under Article 6(3) often costs more than building to the higher compliance standard.
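As a rough illustration only (not a legal determination), the triage described in the sections above can be sketched as decision logic. All field names, purpose labels, and categories here are hypothetical simplifications invented for this sketch; real classification requires case-by-case legal analysis.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    """Minimal, hypothetical description of a financial AI system for triage."""
    evaluates_natural_persons: bool      # does it score or assess individuals?
    purpose: str                         # e.g. "credit_scoring", "fraud_detection"
    insurance_line: Optional[str] = None # "life", "health", "property", "auto", ...

HIGH_RISK = "high-risk (Annex III Area 5)"
EXEMPT = "likely out of scope"
GRAY = "gray zone - needs Article 6(3) analysis"

def triage(system: AISystem) -> str:
    # Pure fraud detection is explicitly carved out of the credit-scoring category.
    if system.purpose == "fraud_detection":
        return EXEMPT
    # Trading and market-making systems that never assess individuals
    # do not affect fundamental rights and fall outside Area 5.
    if not system.evaluates_natural_persons:
        return EXEMPT
    # Creditworthiness assessment of natural persons is explicitly high-risk.
    if system.purpose == "credit_scoring":
        return HIGH_RISK
    # Only life and health insurance pricing is covered by Area 5(c).
    if system.purpose == "insurance_pricing":
        return HIGH_RISK if system.insurance_line in {"life", "health"} else EXEMPT
    # Hybrid or ambiguous systems (e.g. combined fraud + credit signals)
    # land in the gray zone and need case-by-case review.
    return GRAY
```

For example, `triage(AISystem(True, "insurance_pricing", "auto"))` returns the exempt label, while a system mixing marketing and creditworthiness signals falls through to the gray zone.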

Obligations for high-risk financial AI

High-risk financial AI systems must meet the full requirements of Chapter III:

- Risk management system covering the entire lifecycle (Article 9)
- Data governance ensuring training data is representative and free of bias (Article 10)
- Technical documentation sufficient for authorities to assess compliance (Article 11)
- Automatic logging of events for traceability (Article 12)
- Transparency to deployers (Article 13)
- Human oversight mechanisms (Article 14)
- Accuracy, robustness, and cybersecurity standards (Article 15)
- Quality management system (Article 17)
- Registration in the EU database (Article 71)

Conformity assessments must be completed before August 2, 2026.

Dual compliance with NYC LL144

New York financial institutions using AI for employment decisions face dual obligations: LL144 for AEDT bias audits and the EU AI Act for cross-border credit and insurance AI. A unified compliance approach can reduce duplication by designing bias assessment methodologies that satisfy both frameworks simultaneously.

Related reading

EU AI Act vs NYC Local Law 144 · EU AI Act Fines for US Companies · NIST AI RMF vs EU AI Act

Assess your exposure

Take our free 5-minute assessment to determine how these obligations apply to your organization.


This article provides general information about AI regulation. It does not constitute legal advice. Lexara Advisory LLC is an AI governance consulting firm, not a law firm. Published April 2026.
