EU AI Act vs NYC Local Law 144: What NYC Companies Need to Know
NYC companies using AI for hiring face two overlapping regulatory frameworks. One is European, one is local, and together they create the most complex AI employment compliance landscape in the United States.
Published April 13, 2026 · By Constantin Razvan Gospodin, Legal AI Risk Manager
If your company is headquartered in New York City and uses AI-powered tools for recruiting, screening, or hiring — and if any of those hiring decisions touch EU-based candidates or employees — you are subject to both NYC Local Law 144 and the EU AI Act (Regulation (EU) 2024/1689). This is not a hypothetical scenario. For multinational employers, staffing agencies, and tech companies with distributed teams, this is the current regulatory reality.
This article provides a side-by-side analysis of the two frameworks, identifies where they converge and where they diverge, and offers a practical approach to unified compliance. No other consulting firm in New York covers this specific intersection. This is Lexara’s specialty.
The Two Frameworks at a Glance
NYC Local Law 144 was enacted in 2021 and has been enforced since July 5, 2023. It regulates automated employment decision tools (AEDTs) used to evaluate candidates or employees for hiring or promotion in New York City. It is enforced by the NYC Department of Consumer and Worker Protection (DCWP). The law requires annual independent bias audits, candidate notification, and public disclosure of audit results.
The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. Under Annex III, Category 4, AI systems used in employment — including recruitment, screening, hiring, promotion, and termination — are classified as high-risk. The full high-risk obligations become enforceable on August 2, 2026. The EU AI Act applies to US companies under Article 2(1)(c) when AI system outputs are used within the EU.
Scope: Who Is Covered and When
Local Law 144 applies to any employer or employment agency that uses an AEDT to evaluate candidates or employees for employment or promotion in New York City. The employer does not need to be based in NYC. If you are hiring for a remote role and a candidate resides in any of the five boroughs, the law applies to that evaluation. The definition of AEDT covers any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that substantially assists or replaces discretionary decision-making for employment decisions.
The EU AI Act has a broader scope. Under Annex III, Category 4, high-risk classification applies to AI systems intended to be used for recruitment or selection of natural persons (advertising, screening, filtering applications, evaluating candidates), for making decisions affecting terms of work-related relationships (promotion, termination, task allocation based on behavior or traits), and for monitoring and evaluating workers. The scope is not limited to hiring tools — it covers AI used throughout the entire employment lifecycle.
The practical implication: a NYC-based company using an AI screening tool for hiring will likely trigger both laws simultaneously if it hires candidates in the EU. But the EU AI Act’s employment category is significantly broader than LL144’s AEDT definition. AI systems used for worker monitoring, task allocation, or termination decisions are high-risk under the EU AI Act but may not qualify as AEDTs under LL144.
Audit and Assessment Requirements
Local Law 144 requires a bias audit. This audit must be conducted by an independent auditor who has no financial interest in the AEDT being evaluated. The audit must test for disparate impact across race, ethnicity, and sex categories, including intersectional analysis. The audit must be completed no more than one year before the AEDT is used, and results must be publicly posted on the employer’s website.
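The core statistic in a LL144 bias audit is the impact ratio: each category’s selection rate divided by the selection rate of the most-selected category. A minimal sketch of that calculation, using synthetic candidate counts rather than real audit data:

```python
# Illustrative impact-ratio calculation in the style of a LL144 bias audit.
# All candidate counts below are synthetic, invented for this example.

def impact_ratios(selections):
    """selections: {category: (selected, total_applicants)} -> {category: impact ratio}."""
    rates = {cat: sel / total for cat, (sel, total) in selections.items()}
    best = max(rates.values())  # selection rate of the most-selected category
    return {cat: rate / best for cat, rate in rates.items()}

# Intersectional categories (sex x race/ethnicity), since the audit
# must include intersectional analysis.
synthetic = {
    "male_white":   (48, 100),
    "male_black":   (30, 100),
    "female_white": (44, 100),
    "female_black": (24, 100),
}

for cat, ratio in impact_ratios(synthetic).items():
    print(f"{cat}: {ratio:.2f}")
```

Ratios well below 1.0 for a category are the signal an auditor examines for disparate impact; the audit itself also has independence and publication requirements that no script can satisfy.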
The EU AI Act requires a conformity assessment. Under Article 43, providers of high-risk AI systems must complete a conformity assessment before placing the system on the market. For employment AI under Annex III, this is typically a self-assessment based on internal control (Annex VI). The assessment evaluates compliance across all technical requirements: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), record-keeping (Article 12), transparency (Article 13), human oversight (Article 14), accuracy, robustness, and cybersecurity (Article 15).
Additionally, Article 27 requires a fundamental rights impact assessment before certain high-risk AI systems are put into service. Its scope is narrower than often assumed: the obligation applies to deployers that are bodies governed by public law or private entities providing public services, and to deployers of the credit-scoring and insurance systems listed in Annex III points 5(b) and 5(c). A private NYC employer deploying hiring AI is generally not required to conduct one, though many organizations treat a fundamental rights impact assessment as good practice for employment AI.
The key difference: LL144’s bias audit is narrow — it tests for statistical disparate impact across protected categories. The EU AI Act’s conformity assessment is comprehensive — it evaluates the entire system against multiple technical, governance, and rights-based requirements. A LL144 bias audit alone does not satisfy the EU AI Act’s conformity assessment requirements.
Notification and Transparency
Under Local Law 144, employers must notify candidates at least 10 business days before using an AEDT. The notice must describe how the tool will be used and what data will be collected, and must include instructions for requesting an alternative selection process or a reasonable accommodation (although the law does not require the employer to actually provide an alternative process). The notice can be provided through the employment section of the employer’s website, in a job posting, or via mail or email.
Under the EU AI Act, transparency obligations for high-risk AI are significantly more extensive. Under Article 13, the system must be designed to be sufficiently transparent to enable deployers to interpret output and use it appropriately. Under Article 26, deployers must inform individuals that they are subject to a high-risk AI system. Under Article 50, if the AI system interacts with individuals, those individuals must be informed that they are interacting with an AI system. There is no specific advance notice period equivalent to LL144’s 10-day requirement, but the obligation to inform is ongoing and systemic.
Penalties: The Scale Difference
Local Law 144 penalties are enforced by DCWP. Civil penalties range from $500 for a first violation to $1,500 per violation thereafter. Each day a violation continues constitutes a separate violation. Failure to conduct a bias audit and failure to provide proper notice are considered separate violations, so an employer could face multiple daily penalties simultaneously.
EU AI Act penalties under Article 99 operate on a fundamentally different scale. Non-compliance with high-risk AI obligations carries fines up to €15 million or 3% of total worldwide annual turnover, whichever is higher. For the most serious violations involving prohibited AI practices, fines reach €35 million or 7% of worldwide annual turnover.
To illustrate the magnitude: for a company with $500 million in annual revenue, the maximum LL144 penalty for a single violation day is $1,500. The maximum EU AI Act penalty for high-risk non-compliance is roughly $15 million (3% of turnover, which is on par with the €15 million floor at recent exchange rates). That is a 10,000x difference in maximum exposure. Both frameworks can be enforced simultaneously — compliance with one does not satisfy the other.
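The 10,000x figure can be verified with back-of-envelope arithmetic, using the article’s hypothetical revenue and treating the euro roughly at parity with the dollar for simplicity:

```python
# Back-of-envelope penalty comparison using the article's hypothetical figures.
revenue = 500_000_000        # hypothetical annual revenue, USD
ll144_per_violation = 1_500  # max LL144 penalty per violation per day, USD

# EU AI Act Article 99(4): up to EUR 15M or 3% of worldwide annual turnover,
# whichever is higher. EUR treated at ~parity with USD for illustration.
eu_cap = max(15_000_000, 0.03 * revenue)

print(eu_cap / ll144_per_violation)  # -> 10000.0
```

Note that the 3% tier scales with turnover, so for larger companies the gap widens further.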
Enforcement Reality in 2026
LL144 enforcement has been criticized as ineffective. A December 2025 audit by the New York State Comptroller found significant deficiencies in DCWP’s enforcement. Seventy-five percent of test complaint calls were misrouted and never reached DCWP. The agency identified just one case of non-compliance among 32 companies reviewed, while the Comptroller’s auditors found at least 17 potential violations in the same group. DCWP received only two AEDT complaints during a two-year period (July 2023 through June 2025).
However, DCWP has committed to implementing the Comptroller’s recommendations: improved complaint routing, cross-divisional training, and more proactive enforcement. DLA Piper and other legal analysts have noted that employers should expect a new phase of stricter enforcement with more frequent investigations.
EU AI Act enforcement, by contrast, will be handled by national market surveillance authorities across 27 Member States, with the AI Office at the European Commission providing centralized coordination. The EU has a track record of enforcing extraterritorially against non-EU companies — GDPR enforcement provides the precedent, with billions of euros in fines levied against US tech companies since 2018.
Building a Unified Compliance Strategy
The practical question for NYC companies subject to both frameworks is not whether to comply with each one separately, but how to build a single compliance architecture that satisfies both. Here is the approach we recommend:
Start with the EU AI Act as the baseline. The EU AI Act’s requirements are broader and more demanding than LL144’s. A compliance program built to satisfy the EU AI Act’s high-risk requirements will cover most of LL144’s obligations as a subset. The reverse is not true — a LL144 bias audit alone falls far short of EU AI Act compliance.
Map your LL144 bias audit into the EU AI Act’s risk management system. The disparate impact testing required by LL144 can serve as one component of the broader risk management system required under Article 9. But it needs to be supplemented with the additional risk categories the EU AI Act requires: accuracy, robustness, cybersecurity, fundamental rights impact, and ongoing post-market monitoring.
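The mapping exercise above can be sketched as a simple crosswalk. The pairings below are illustrative planning shorthand drawn from the articles cited in this piece, not a legal mapping, and the helper function is a hypothetical gap-analysis sketch:

```python
# Illustrative crosswalk from LL144 obligations to EU AI Act requirements
# discussed in this article. A planning simplification, not legal advice.
CROSSWALK = {
    "annual independent bias audit":    ["Art. 9 risk management", "Art. 15 accuracy/robustness"],
    "10-business-day candidate notice": ["Art. 13 transparency", "Art. 26 deployer duties"],
    "public posting of audit results":  ["Annex IV technical documentation", "Art. 12 record-keeping"],
}

def gaps(ll144_done, eu_required=("Art. 9 risk management",
                                  "Art. 10 data governance",
                                  "Art. 14 human oversight")):
    """Return EU AI Act items not yet touched by completed LL144 work."""
    covered = {eu for item in ll144_done for eu in CROSSWALK.get(item, [])}
    return [req for req in eu_required if req not in covered]

print(gaps(["annual independent bias audit"]))
# -> ['Art. 10 data governance', 'Art. 14 human oversight']
```

The point the sketch makes concrete: even full LL144 compliance leaves EU AI Act requirements (here, data governance and human oversight) entirely unaddressed.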
Unify your notification procedures. Design a single candidate notification process that satisfies both LL144’s 10-business-day requirement and the EU AI Act’s ongoing transparency obligations. The notification should describe the AI system, its purpose, the data collected, instructions for requesting an alternative selection process (LL144), and the fact that the system has undergone conformity assessment (EU AI Act).
Establish a single documentation repository. Both frameworks require documentation. The EU AI Act’s technical documentation requirements (Annex IV) are comprehensive enough to house LL144’s bias audit results, candidate notification records, and public disclosure evidence. Maintaining a single source of truth reduces duplication and audit risk.
Appoint unified oversight. Designate a single point of accountability for AI employment compliance that covers both frameworks. This person or team needs to understand both the DCWP enforcement mechanics and the EU market surveillance authority process. For cross-border compliance, consider whether your EU authorized representative (required under Article 22) should coordinate with your US compliance team.
Why This Matters for NYC Companies Specifically
New York City is uniquely positioned at the intersection of these two regulatory frameworks. It is the largest US city to have enacted a specific AI employment regulation. It is home to the highest concentration of financial services, media, and technology companies that use AI for hiring at scale. And it is the US market most likely to have employment decisions that affect EU individuals, given the volume of multinational companies headquartered here.
The state-level regulatory environment is evolving too. Governor Hochul signed the Responsible AI Safety and Education Act (RAISE Act) on December 19, 2025, making New York the second US state after California to enact comprehensive AI legislation targeting frontier models. While the RAISE Act targets large developers ($500 million+ revenue) rather than all employers, it signals a regulatory trajectory that will only add complexity.
Organizations that build a unified compliance framework now — one that satisfies both LL144 and the EU AI Act — will be better positioned to absorb future regulatory requirements without rebuilding from scratch.
Assess Your Dual Compliance Exposure
Our free AI Regulatory Readiness Assessment evaluates your exposure under both the EU AI Act and NYC Local Law 144 across 43 controls.
Start the Free Assessment

Lexara Advisory LLC — AI governance consulting, not legal practice. Lexara Advisory does not provide legal advice and is not a law firm. This article is for informational purposes only and does not constitute legal advice. Organizations should consult qualified legal counsel for advice specific to their circumstances.