The 4 Diversity Hiring Funnel Metrics That Matter
If you only measure diversity at hire, you're measuring downstream of every other decision in your hiring process. By the time someone shows up in your hire numbers, your sourcing, screening, interviewing, and offer-extension processes have all already happened, each one a place where the funnel can quietly narrow. The hire-rate number tells you the outcome but not the cause.
The four metrics in this post are the ones that actually point to where your funnel is unfair, in time to do something about it. They're the metrics we recommend every Screeq customer build into their monthly people review.
Metric 1: Application diversity by source
The first question is whether your sourcing channels are reaching the candidates you want to be reaching. Track the demographic composition of applications, broken down by sourcing channel.
What you're looking for
- Channels that skew toward a single group. If your referrals come 70% from one demographic group and your job-board applications split 50/50, your referral programme is a homogenising force regardless of intent.
- Channels with no representation data. Some sources (paid social, programmatic) have such weak demographic signal that they're effectively black boxes. Treat unknown as a data quality problem to fix.
- Geographic concentration. Sourcing entirely from a single metro region produces predictable demographic skew. If you're remote-friendly, source remote-broadly.
What to do about it
If a channel is consistently underrepresented, either invest in better sourcing (community partnerships, targeted outreach, scholarship programmes for talent pipelines) or rebalance the channel mix so it isn't disproportionately weighted to homogenising sources.
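A minimal sketch of the per-channel composition calculation, assuming application records are available as (source, group) pairs; the field names and group labels are illustrative, not a Screeq schema:

```python
from collections import Counter, defaultdict

def composition_by_source(applications):
    """Demographic composition of each sourcing channel, as fractions."""
    counts = defaultdict(Counter)
    for source, group in applications:
        counts[source][group] += 1
    return {
        source: {g: n / sum(c.values()) for g, n in c.items()}
        for source, c in counts.items()
    }

apps = [
    ("referral", "group_a"), ("referral", "group_a"),
    ("referral", "group_a"), ("referral", "group_b"),
    ("job_board", "group_a"), ("job_board", "group_b"),
]
mix = composition_by_source(apps)
# referrals skew 75/25 while the job board splits 50/50:
# the referral channel is the homogenising force here
```

Comparing each channel's mix against the overall applicant pool (rather than against other channels) is the design choice that makes "homogenising force" a measurable claim.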
Metric 2: Stage pass-through by demographic
This is the single highest-leverage diversity metric most companies don't track. For every stage of your funnel (application reviewed, phone screen scheduled, technical interview, onsite, offer extended), calculate the pass-through rate by demographic group.
The four-fifths rule as a working benchmark
The EEOC's four-fifths rule says that if any group's selection rate is less than 80% of the highest group's rate, you have prima facie evidence of disparate impact. It's a federal threshold for adverse impact in the US, but it's also a useful operational alarm everywhere: apply it to every stage transition, not just the final hire decision.
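The check itself is small. A sketch, assuming you have per-group counts of candidates considered and advanced at one stage transition (the dictionaries here are made-up numbers):

```python
def four_fifths_check(advanced, considered):
    """Flag possible adverse impact at one stage transition.

    `advanced` and `considered` map demographic group -> counts at this
    stage. Returns (ratio, flagged): the lowest group's selection rate
    divided by the highest group's, flagged when that ratio falls
    below the 0.8 (four-fifths) threshold.
    """
    rates = {g: advanced[g] / considered[g]
             for g in considered if considered[g] > 0}
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio < 0.8

ratio, flagged = four_fifths_check(
    advanced={"group_a": 40, "group_b": 24},
    considered={"group_a": 100, "group_b": 100},
)
# 0.24 / 0.40 = 0.6, below 0.8: this stage warrants a closer look
```

Run it per stage, not just on the final hire decision, so you see which transition is narrowing the funnel.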
Where the leak usually is
In our customer data, the most common leak is at the application-review stage. Underrepresented candidates pass through subsequent stages at rates similar to overrepresented candidates, but a smaller share make it past the initial review. This is the stage where the highest-leverage interventions live: structured CV review rubrics, blind initial review, AI-assisted ranking with bias audits.
The second-most-common leak
Offer-stage drop-off, where underrepresented candidates accept offers at lower rates. The cause here is almost always candidate experience: less personalisation in late-stage communication, fewer touch points with future colleagues, weaker compensation negotiation support.
Metric 3: Time in stage by demographic
How long does a candidate sit in each stage before being moved forward, rejected, or going silent? Slice by demographic group.
What it tells you
Significant differences in time-in-stage by group are a strong signal of recruiter or manager hesitancy. The classic pattern: underrepresented candidates take 20-40% longer to advance from each stage, even when they ultimately pass through at similar rates. The cumulative effect is a longer overall cycle time, which produces lower offer-accept rates (best candidates of any background take other offers in the meantime).
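The gap is easy to compute once you log stage entry and exit timestamps. A sketch using medians (more robust to outliers than means), with made-up durations in days:

```python
from statistics import median

def stage_time_gap(durations):
    """Median time-in-stage per group, plus the widest relative gap.

    `durations` maps demographic group -> list of days candidates spent
    in the stage. Returns (medians, gap) where gap is how much slower
    the slowest group's median is than the fastest's (0.4 = 40% slower).
    """
    medians = {g: median(d) for g, d in durations.items() if d}
    values = medians.values()
    gap = max(values) / min(values) - 1
    return medians, gap

medians, gap = stage_time_gap({
    "group_a": [4, 5, 6],
    "group_b": [6, 7, 8],
})
# group_b's median (7 days) is 40% slower than group_a's (5 days),
# squarely in the 20-40% hesitancy pattern described above
```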
What to do about it
- Set per-stage SLAs and track adherence by recruiter and by hiring manager. Most slowdowns trace to a specific person, not the system.
- Implement default-to-advance rather than default-to-pause. Require an active 'no' decision; don't let stalled pipelines become silent rejections.
- Audit the stages where the time gap is widest and look for the specific decision that's getting deferred.
Metric 4: Offer accept rate by demographic
The downstream metric that tells you the rest of the story. Track the rate at which extended offers are accepted, broken down by demographic group.
What lower accept rates point to
- Compensation gaps. Underrepresented candidates who receive lower opening offers accept at lower rates, even when they could have negotiated up.
- Candidate experience differential. If the late-stage process feels less welcoming, the offer is less compelling regardless of dollars.
- Representation in the team. Candidates routinely cite 'I didn't see anyone who looked like me' as the reason for declining late-stage offers.
How to actually run this
Cadence
Monthly review of the four metrics by your people team and the hiring leadership. Quarterly deep-dive with the heads of each function. Annual external audit if you operate in jurisdictions that require one (NYC AEDT, EU AI Act high-risk).
Sample size
Below 50 candidates per stage per month, the numbers are too noisy to act on. Roll up to quarterly views for smaller functions; combine demographic categories where you have to (with clear notation that you're doing so).
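A simple gate keeps noisy slices out of the monthly review. A sketch, where the 50-per-group minimum mirrors the rule of thumb above rather than any statistical law:

```python
def reportable(counts, min_n=50):
    """Report a slice only when every group clears the minimum sample.

    `counts` maps demographic group -> candidates in this stage this
    period. Slices that fail should be rolled up to quarterly views or
    combined categories (with clear notation) rather than acted on.
    """
    return all(n >= min_n for n in counts.values())

assert not reportable({"group_a": 120, "group_b": 30})  # too noisy: roll up
assert reportable({"group_a": 120, "group_b": 75})      # safe to review
```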
Privacy and consent
Demographic data collection requires explicit, voluntary, and clearly purpose-limited consent. The EU adds GDPR special-category data restrictions. Make demographic disclosure optional on the application; treat the data as separately controlled from the rest of the candidate record.
Reporting
Internal first. External reporting (annual report, careers-site disclosure, regulator filings) should follow internal action, not precede it. Numbers without action erode trust.
What good looks like
- Application diversity within 5 percentage points of your target representation.
- Stage pass-through ratios within the four-fifths threshold at every stage.
- Time-in-stage within 15% across demographic groups.
- Offer accept rate within 10 percentage points across demographic groups.
- Quarterly remediation plan for any metric that misses the threshold, with named owner and timeline.
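The thresholds above are mechanical enough to encode directly. A sketch of a monthly health check; the metric keys are illustrative names for values your pipeline would compute:

```python
def health_check(metrics):
    """Compare the four funnel metrics against the targets above.

    `metrics` keys (all hypothetical names):
      mix_gap_pts        -- application mix vs. target, percentage points
      pass_through_ratio -- worst four-fifths ratio across stages
      time_gap           -- widest relative time-in-stage gap (0.15 = 15%)
      accept_gap_pts     -- offer-accept spread, percentage points
    Any False result needs a remediation plan with a named owner.
    """
    return {
        "application_mix": metrics["mix_gap_pts"] <= 5,
        "pass_through": metrics["pass_through_ratio"] >= 0.8,
        "time_in_stage": metrics["time_gap"] <= 0.15,
        "offer_accept": metrics["accept_gap_pts"] <= 10,
    }

status = health_check({
    "mix_gap_pts": 3,
    "pass_through_ratio": 0.72,
    "time_gap": 0.10,
    "accept_gap_pts": 12,
})
# pass_through and offer_accept miss their thresholds here,
# so each would get a quarterly remediation owner and timeline
```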
The discipline matters more than the dashboard
The metrics are the easy part. The discipline of looking at them every month, asking the uncomfortable questions when the numbers diverge, and following through on the remediations is the hard part. Companies with great DEI dashboards and no remediation discipline make no progress. Companies with mediocre dashboards and serious remediation discipline make a lot of progress. The work is the work.
