
AI CV Screening Without the Bias Lawsuits: A 2026 Compliance Map

March 14, 2026 · 16 min read

AI-assisted CV screening has gone from experimental in 2023 to standard in 2026. Most mid-market and enterprise hiring teams now run some form of model-assisted ranking on incoming applications. The question is no longer whether to use it; your competitors do. The question is how to use it in a way that actually moves your hiring metrics without exposing the company to discrimination claims, regulator action, or candidate-trust damage.

This post is the working compliance map: what the four major regulatory regimes (NYC AEDT, EU AI Act, US EEOC, UK Equality Act) actually require, where they overlap, where they diverge, and the operating model that satisfies all of them simultaneously.

The four regimes

1. New York City โ€” Local Law 144 (AEDT)

In force since July 2023. Applies to any 'automated employment decision tool' used to substantially assist or replace discretionary decision-making for roles based in New York City. Requires:

  • Annual independent bias audit covering selection rate by sex, race/ethnicity, and intersectional categories.
  • Public posting of the audit summary on a careers-site URL accessible from every NYC job posting.
  • 10-business-day candidate notice before the tool is used, with instructions for requesting an alternative selection process.
  • Disclosure of the tool's source, the qualifications and characteristics it assesses, and how candidates can request a reasonable accommodation.

Enforcement is via the Department of Consumer and Worker Protection. Penalties: up to $500 for a first violation, $500–$1,500 for each subsequent violation, and each day of non-compliant use counts as a separate violation.

2. EU AI Act โ€” High-Risk Hiring Systems

Hiring AI is named in Annex III as high-risk. Obligations applicable from August 2026 include:

  • Documented risk-management system covering the AI system's full lifecycle.
  • Data governance: training data must be relevant, representative, and free of errors that could cause discriminatory outcomes.
  • Technical documentation and logging sufficient to enable post-market audit.
  • Transparency to deployers (the hiring company) and to candidates about how the system works.
  • Human oversight: a named human must be able to override or disregard the AI output.
  • Accuracy, robustness, and cybersecurity requirements with documented test results.
  • Conformity assessment and CE marking before deployment.

Penalties for non-compliance with the high-risk obligations scale to €15M or 3% of global turnover, whichever is higher; prohibited practices carry up to €35M or 7%.

3. US EEOC โ€” Title VII and ADA Guidance

The EEOC's 2023 technical assistance documents make clear that AI tools used in hiring are subject to Title VII (race, colour, religion, sex, national origin) and the ADA (disability). The four-fifths rule remains the operational benchmark for adverse impact: if the selection rate of any protected group is less than 80% of the highest group's rate, that's prima facie evidence of disparate impact and the employer must justify the tool's job-relatedness and consistency with business necessity.
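To make the arithmetic concrete, here is a minimal sketch of the four-fifths check; the group names and counts are hypothetical, not drawn from any real audit:

```python
# Four-fifths (80%) rule on hypothetical screening data: compare each
# group's selection rate to the highest group's rate.

applicants = {"group_a": 200, "group_b": 150}   # applications received
selected   = {"group_a": 60,  "group_b": 27}    # advanced past the screen

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())                   # 0.30 for group_a here

for group, rate in rates.items():
    impact_ratio = rate / highest               # group_b: 0.18 / 0.30 = 0.60
    status = "FLAG" if impact_ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} ratio={impact_ratio:.2f} {status}")
```

A ratio below 0.80 is not automatically a violation; it shifts the burden to the employer to show job-relatedness and business necessity, as described above.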

For ADA compliance: AI tools must not screen out candidates with disabilities who can perform the essential functions of the job with or without reasonable accommodation. The employer is liable for the vendor's tool; 'the vendor said it was fine' is not a defence.

4. UK โ€” Equality Act 2010 and ICO Guidance

The Equality Act 2010 covers nine protected characteristics. The ICO's 2023 guidance on AI in employment requires a lawful basis under UK GDPR, transparency, respect for automated decision-making rights under Article 22 (the right not to be subject to a solely automated decision with significant effects), and human review on request. Reasonable adjustments must be available once a disability is disclosed.

What the regimes agree on

Despite different mechanisms, the four regimes converge on five operational requirements. Build for these and you cover the bulk of all four:

  1. Annual bias audit with documented methodology, scope, and remediation actions.
  2. Candidate notice before AI is used and disclosure of what it assesses.
  3. Human review on every adverse decision (reject, score below threshold, screen-out).
  4. Reasonable accommodation available on request, with no disadvantage to candidates who use it.
  5. Documentation of the model, the training data, the validation, and the human-oversight workflow.

Where the regimes diverge

  • NYC requires public posting of the audit summary; the EU requires documentation held for regulators rather than public posting.
  • EU AI Act applies even where no automated decision is made; NYC's AEDT requires the AI to "substantially assist".
  • US EEOC operates by enforcement against discriminatory outcomes; EU operates by ex-ante conformity assessment regardless of outcome.
  • UK Article 22 gives candidates a right to human review of automated decisions; US has no equivalent federal right.

The seven things you can ship today without legal risk

  1. CV parsing and structured extraction. Not a decision, no adverse action: minimal regulatory exposure.
  2. Ranking against documented JD requirements (skills, experience, certifications) with explainable scores; a sketch follows this list.
  3. Async-interview scoring against a published rubric, with recruiter override on every score.
  4. JD bias-checking for exclusionary language.
  5. Candidate-pool search by natural language over structured fields.
  6. Scheduling and coordination automation.
  7. Interview question generation from a JD.
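As a sketch of item 2, the point is that every point awarded traces back to a requirement written in the JD. The requirement names, weights, and field names below are illustrative, not a prescribed schema:

```python
# Hypothetical explainable ranking: score a candidate only against
# documented JD requirements, keeping a per-requirement breakdown
# so a recruiter can see exactly why the score is what it is.

JD_REQUIREMENTS = [
    {"name": "python",        "weight": 3, "field": "skills"},
    {"name": "aws_certified", "weight": 2, "field": "certifications"},
    {"name": "5y_experience", "weight": 1, "field": "experience_tags"},
]

def score_candidate(candidate: dict) -> dict:
    breakdown = []
    for req in JD_REQUIREMENTS:
        met = req["name"] in candidate.get(req["field"], [])
        breakdown.append({"requirement": req["name"], "met": met,
                          "points": req["weight"] if met else 0})
    # The breakdown is the explanation; store it with the score.
    return {"score": sum(b["points"] for b in breakdown),
            "max": sum(r["weight"] for r in JD_REQUIREMENTS),
            "breakdown": breakdown}
```

Because the breakdown travels with the score, a recruiter can contest any individual line rather than just the total, which is what makes human override meaningful rather than a rubber stamp.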

The three things to avoid in 2026

  1. Fully automated rejection. Even where legal, the optics, brand risk, and Article 22 / NYC notice burden make it not worth it.
  2. Personality / culture-fit inference from video. Weak science, high-risk under the EU AI Act (emotion inference in the workplace is banned outright), and ADA exposure for neurodivergent candidates.
  3. Predictive performance scoring from CV alone. Historical-data bias is too strong; the four-fifths-rule math rarely passes scrutiny.

The operating model that ships under all four regimes

1. One named owner

A single named human, usually a head of recruiting ops or talent, is the accountable owner for AI. They sign off on every new use case, own the audit, and are the named contact for candidate questions.

2. Documented decision rights

For every AI use case, write down: what the AI suggests, what the human decides, what evidence is captured, and when override is required. Keep this current; drift from suggesting to deciding happens silently otherwise.
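One way to keep that record is as a machine-readable register entry per use case; the field names and example values here are illustrative only:

```python
from dataclasses import dataclass

# Hypothetical decision-rights register: one entry per AI use case,
# re-reviewed whenever the workflow changes.

@dataclass
class DecisionRights:
    use_case: str           # e.g. "CV ranking for engineering roles"
    ai_suggests: str        # what the model outputs
    human_decides: str      # the decision a named human actually makes
    evidence_captured: str  # what gets logged for the audit trail
    override_required: str  # when the human must override or escalate

cv_ranking = DecisionRights(
    use_case="CV ranking against documented JD requirements",
    ai_suggests="Ranked shortlist with per-requirement score breakdown",
    human_decides="Who advances to interview; every rejection is human-made",
    evidence_captured="Score breakdown, reviewer ID, timestamp, final decision",
    override_required="Any score within 10% of the advance threshold",
)
```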

3. Bias audit cadence

Annually at minimum; quarterly for high-volume roles. The audit covers selection rate by protected group at every funnel stage, not just hire. Wherever the impact ratio falls below the four-fifths threshold, document the investigation and the remediation.
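A minimal sketch of that stage-by-stage check, reusing the four-fifths ratio from earlier; the stage names and counts are hypothetical:

```python
# Hypothetical funnel audit: selection rate per protected group at each
# stage, flagged wherever the impact ratio drops below four-fifths.

funnel = {
    # stage: {group: (entered, advanced)}
    "screen":    {"group_a": (200, 60), "group_b": (150, 27)},
    "interview": {"group_a": (60, 20),  "group_b": (27, 9)},
}

for stage, groups in funnel.items():
    rates = {g: adv / entered for g, (entered, adv) in groups.items()}
    highest = max(rates.values())
    for group, rate in rates.items():
        if rate / highest < 0.8:
            # This is where the documented investigation starts.
            print(f"{stage}/{group}: impact ratio {rate / highest:.2f} < 0.80")
```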

4. Candidate notice

One sentence on the job posting and apply confirmation. Plus a longer disclosure linked from the careers site covering source, characteristics assessed, and accommodation request flow.

5. Vendor due diligence

Your vendor questionnaire should include: the bias-audit methodology, training-data provenance, model card or equivalent, candidate-data retention policy, sub-processor list, EU AI Act conformity status (if applicable), and incident-response process for discrimination complaints.

What good looks like

A 2026 hiring stack that ships under all four regimes looks unremarkable from the outside: ranked candidate lists with explanations, recruiter-led interviews and decisions, a one-line candidate disclosure, an annual audit posted on the careers site, and a documented owner. The complexity is internal: the candidate sees a faster, fairer process, and the regulator sees a defensible paper trail. That's the bar.

Try the platform behind the writing.

Screeq is the only ATS with a full HRMS built in. 14-day free trial.