
GDPR Article 22 + EU AI Act — Double Compliance for Automated Decisions


Understanding the double compliance burden when your AI makes automated decisions about EU individuals.

Two frameworks, one decision

When an AI system makes or substantially influences decisions about EU individuals, two regulatory frameworks apply simultaneously. GDPR Article 22 provides individuals with the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The EU AI Act adds system-level obligations on top: risk management, technical documentation, human oversight, and conformity assessments for high-risk systems.

These are not alternative compliance paths. They are cumulative. Meeting one does not exempt you from the other.

What GDPR Article 22 requires

Article 22 establishes a default prohibition on solely automated decision-making with legal or similarly significant effects, subject to three exceptions: explicit consent, contractual necessity, or authorization by Union or Member State law. When automated decisions are made under one of these exceptions, organizations must implement suitable safeguards, including the right to obtain human intervention, to express a point of view, and to contest the decision. The data subject must also be informed of the existence of automated decision-making, given meaningful information about the logic involved, and told the significance and envisaged consequences of the processing.

What the EU AI Act adds

Beyond GDPR's data protection requirements, the EU AI Act imposes system-level obligations: a continuous risk management system throughout the lifecycle (Article 9), data governance ensuring training data quality and representativeness (Article 10), technical documentation detailed enough for authorities to assess compliance (Article 11), automatic logging of events for traceability (Article 12), transparency sufficient for deployers to understand and supervise the system (Article 13), human oversight enabling effective supervision and intervention (Article 14), and documentation retention for 10 years (Article 18). These are architectural requirements, not just procedural ones.

Where they intersect for US companies

A US fintech company using AI to assess EU loan applicants must satisfy GDPR Article 22 (right to human intervention, transparency about logic, ability to contest), GDPR Articles 13-14 (privacy notices about automated processing), and EU AI Act Annex III, point 5(b) (full high-risk compliance for AI used in creditworthiness assessment and credit scoring). The practical impact: you need both a GDPR-compliant data protection framework AND an EU AI Act-compliant risk management and conformity system.

Building unified compliance

The efficient approach is to treat GDPR as the data protection layer and the EU AI Act as the system governance layer. GDPR protects the rights of individuals whose data is processed; the AI Act governs how the system processes that data. A unified Data Protection Impact Assessment (DPIA) under GDPR and Fundamental Rights Impact Assessment (FRIA) under EU AI Act Article 27 can address both frameworks in one analytical exercise — Article 27 expressly allows the FRIA to complement a DPIA already carried out.

Related reading

Article 2 Scope · Cross-Border AI Compliance · Annex III Financial Services

Assess your exposure

Take our free 5-minute assessment to determine how these obligations apply to your organization.


This article provides general information about AI regulation. It does not constitute legal advice. Lexara Advisory LLC is an AI governance consulting firm, not a law firm. Published April 2026.
