EU AI Act for New York Higher Education Institutions

How the EU AI Act affects New York universities with European partnerships, exchange programs, and cross-border research collaborations.

Education AI is high-risk under the EU AI Act

Annex III (Area 3) of the EU AI Act classifies several categories of educational AI as high-risk: systems that determine access or admission to educational institutions, systems that evaluate learning outcomes, systems that assess the appropriate education level for an individual, and systems that monitor or detect prohibited student behavior during tests. Under Article 2's extraterritorial scope, this classification applies regardless of the institution's location whenever the AI system's output is used within the EU.

How New York institutions are affected

The Rockefeller Institute of Government's November 2025 policy brief identified four common activities that trigger EU AI Act obligations for New York higher education institutions. First, enrolling EU nationals in distance or hybrid programs that rely on AI-powered adaptive learning or assessment platforms. Second, operating EU-based study-abroad centers that use home-campus chatbots, learning analytics, or proctoring software. Third, licensing educational technology tools to European partner campuses. Fourth, running US-hosted AI systems, typically in research collaborations, whose outputs are viewed or applied inside the EU.

New York's public and private universities maintain extensive European partnerships. SUNY, CUNY, Columbia, NYU, and Cornell all operate dual-degree programs, semester exchanges, faculty research consortia, and field programs with European institutions. Any of these programs that employ AI-assisted assessment, admission scoring, adaptive tutoring, or remote proctoring must evaluate their obligations under the EU AI Act.

Remote proctoring: a concrete example

AI-powered remote proctoring platforms that use face detection, gaze tracking, or behavior analysis during examinations are classified as high-risk under Annex III (Area 3d: monitoring prohibited behavior during tests). The University of Amsterdam proctoring case in 2020-2021 demonstrated how these systems intersect with privacy and discrimination concerns. Under the EU AI Act, such platforms must now meet full conformity assessment requirements including risk management, data governance, human oversight, and transparency obligations.

If a New York institution co-administers online examinations with a European partner using a US-hosted proctoring system, both the institution (as deployer) and the proctoring vendor (as provider) face EU AI Act obligations.

Emotion recognition prohibition

Article 5 of the EU AI Act prohibits emotion recognition systems in educational settings. This means AI tools that analyze student facial expressions, voice patterns, or biometric indicators to infer engagement, stress, or emotional states during learning sessions are banned. This prohibition has been enforceable since February 2, 2025.

What institutions should do

Conduct an inventory of all AI systems used in programs that involve EU students or partners. Classify each system against Annex III Area 3 criteria. Verify that no prohibited practices (emotion recognition, social scoring) are in use. Begin conformity assessment processes for confirmed high-risk systems before August 2, 2026.
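As a rough illustration only, the inventory-and-triage steps above can be organized as a simple script. The `AISystem` record, the category keywords, and the triage logic below are hypothetical simplifications for this sketch, not an official classification tool; real classification requires legal review of each system against the Act's text.

```python
from dataclasses import dataclass

# Annex III Area 3 high-risk categories (paraphrased from the Act)
ANNEX_III_AREA_3 = {
    "admission": "determines access or admission to education",
    "outcomes": "evaluates learning outcomes",
    "level": "assesses appropriate education level",
    "proctoring": "monitors/detects prohibited behavior during tests",
}

# Article 5 prohibited practices relevant to education
PROHIBITED = {"emotion_recognition", "social_scoring"}

@dataclass
class AISystem:
    name: str
    functions: set        # e.g. {"proctoring"}
    eu_exposure: bool     # is the system's output used in the EU?

def triage(system: AISystem) -> str:
    """Rough triage order: prohibited > high-risk (if EU exposure) > review."""
    if system.functions & PROHIBITED:
        return "PROHIBITED - discontinue (enforceable since Feb 2, 2025)"
    if system.eu_exposure and system.functions & ANNEX_III_AREA_3.keys():
        return "HIGH-RISK - begin conformity assessment before Aug 2, 2026"
    return "REVIEW - document rationale and monitor for scope changes"

inventory = [
    AISystem("RemoteProctor X", {"proctoring"}, eu_exposure=True),
    AISystem("Engagement Cam", {"emotion_recognition"}, eu_exposure=True),
    AISystem("Campus FAQ bot", {"chatbot"}, eu_exposure=False),
]

for s in inventory:
    print(f"{s.name}: {triage(s)}")
```

The point of the sketch is the ordering: prohibited practices are flagged first regardless of risk tier, because Article 5 bans are already in force, while Annex III classification only matters where there is EU exposure.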

Related reading

Article 2 Extraterritorial Scope · Article 4 AI Literacy · EU AI Act Fines

This article provides general information about AI regulation. It does not constitute legal advice. Lexara Advisory LLC is an AI governance consulting firm, not a law firm. Published April 2026.
