The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that helps interviewers evaluate candidates consistently by structuring scoring rubrics and flagging evaluation biases. Includes calibration support and rubric enforcement; distinct from resume screening which evaluates documents rather than interview performance.
AI-assisted structured scoring has achieved operational maturity in high-volume hiring but remains trapped behind persistent validity, fairness, and regulatory barriers. Multinational enterprises and large-scale recruiters (HireVue with 800+ clients, Metaview with 3,000+ customers, Curatal, Interviewer.AI) sustain deployments with documented efficiency gains: 27–71% time-to-hire reductions, £3,000/month CV screening savings, and 20-point improvements in final interview pass rates. The methodology itself is robust: decades of research validate structured interviews as twice as predictive of job performance as unstructured alternatives, and 85% of well-designed systems meet fairness thresholds.

Yet broader adoption has stalled against four reinforcing barriers:

(1) validity risks from GenAI cheating in unproctored assessments, undermining scoring reliability at scale;
(2) fairness inconsistency: while audited systems can meet fairness benchmarks, vendor bias metrics vary 40% between implementations, and empirical audits of 361,000 resumes document systematic discrimination (85% selection bias favoring white candidates);
(3) collapsed candidate trust, down to 26% confidence in AI fairness despite positive user experience signals, driving offer acceptance rates from 74% (2023) to 51% (2026);
(4) crystallized legal exposure, as federal courts (Mobley v. Workday) treat vendors as liable agents, and a patchwork of state and international regulations (California, Illinois, Colorado, Texas, Ontario, Germany, UK) each imposes different compliance standards with August 2026 enforcement deadlines.

The result is deepening bifurcation: enterprises with compliance infrastructure navigate regulatory complexity and manage fairness risk, while mid-market and risk-averse organisations remain blocked by unresolved validity threats and implementation costs. The practice is production-grade but not yet enterprise-safe at mass-market scale.
The vendor ecosystem is mature in tooling and scaling in adoption, but regulatory complexity is accelerating faster than implementation readiness. HireVue serves 800+ enterprise clients — Emirates, Unilever, Philips, Nestlé among them — reporting $500k to £1M in annual savings per deployment. Metaview's 3,000+ customers cite 30-minute-per-interview time savings and a 30% reduction in interviews-per-hire. Interviewer.AI reports 66% of hires closing within one week. UK SME adoption jumped to 54% (from 35% in 2025), with documented 71% cost-per-hire reduction and £3,000/month CV screening savings. Independent case studies show production-scale outcomes: Curatal deployed Amazon Bedrock-based AI agents with structured rubric automation, achieving faster processing and reduced bias; LNER cut hiring from 7 weeks to 3 weeks (a 57% reduction); William Hill compressed time-to-interview from 15 days to 1.8 days (an 88% reduction). These deployment outcomes are credible and increasingly documented across sectors and geographies.
Yet adoption remains bifurcated by compliance readiness. Only 53% of recruiting teams use structured scoring rubrics despite 96% adopting AI tools—a persistent implementation gap. Candidate resistance is structural: offer acceptance rates dropped from 74% to 51% since 2023, and only 26% of candidates trust AI fairness despite positive user experience in live interactions. GenAI cheating (39% of applicants using GenAI in responses) undermines unproctored assessment validity at scale. Fairness metrics remain vendor-dependent and inconsistent: while 85% of systems designed with guardrails meet fairness thresholds, bias metrics vary 40% between vendors. Empirical audits quantify the risks: 361,000-resume audit found 85.1% selection bias favoring white candidates, and Berkeley Haas audit of 133 AI hiring systems found 44% exhibited gender bias—indicating that despite validation science supporting structured assessment, production deployments often lack adequate bias safeguards. A hybrid AI-human screening model outperforms either alone, reinforcing that structured human oversight remains mandatory.
The regulatory environment is simultaneously maturing and fragmenting. The EEOC removed its AI hiring guidance in January 2025, creating a federal vacuum now filled by a patchwork of state and international standards: California (FEHA, October 2025), Illinois HB 3773 (January 2026, prohibiting intent-independent discrimination), Colorado (SB 24-205, June 2026), Texas (TRAIGA, January 2026), Ontario (AI disclosure mandate), Germany (EU AI Regulation conformity assessments by August 2026), and the UK (Data Act 2025 reforms to automated decision-making). The EU AI Act classifies recruitment as high-risk, with mandatory risk management, data governance, human oversight, and transparency requirements, and penalties up to 7% of global annual revenue from August 2, 2026. The Mobley v. Workday class action (certified nationwide, ~1.1B applications affected) and escalating HireVue litigation (ACLU, EEOC, EPIC complaints on bias and accessibility) establish vendor liability precedent. This fragmented compliance burden, requiring simultaneous navigation of conflicting state and international standards, favours caution and amplifies costs for enterprises seeking broad-scale deployment. The practice has achieved operational maturity but faces a compliance infrastructure crisis that blocks mass-market expansion.
— Greenhouse survey (2,950 job seekers): 63% have faced AI interviews (up 13 points in 6 months). Critical demand signals: 70% not told upfront; 38% left a hiring process due to AI; 57% believe disclosure should be a legal requirement. Candidates demand explainability, human review, and bias audit proof. Adoption + trust gap convergence.
— Empirical bias analysis: Carnegie Mellon study (2.3M resume screenings) shows AI-generated text scored 18–23% higher. Algorithmic Justice League audit (40 companies) found 31% lower pass-through for immigrant English, 27% for 55+, 19% for AI-avoiders. Disparate impact scores show large-scale ATS at 0.71, human-in-the-loop hybrid at 0.91.
— Federal class action (certified nationwide, ~1.1B applications) establishing employer and vendor liability for disparate impact in AI candidate screening. Court rejected Workday's motion to dismiss, treating vendor as direct agent. Establishes disparate impact liability standard: selection rates for protected groups must be ≥80% of highest-performing group or trigger deeper review.
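The ≥80% standard above is the classic four-fifths rule: divide each group's selection rate by the highest group's rate, and flag any ratio below 0.80 for deeper review. A minimal sketch of that test (group labels and pass-through rates are illustrative, not drawn from the litigation record):

```python
def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate divided by the highest-performing group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

def flag_for_review(selection_rates: dict[str, float],
                    threshold: float = 0.80) -> list[str]:
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, r in impact_ratios(selection_rates).items() if r < threshold]

# Hypothetical pass-through rates for three applicant groups.
rates = {"group_a": 0.30, "group_b": 0.27, "group_c": 0.21}
print(flag_for_review(rates))  # group_c: 0.21 / 0.30 = 0.70 < 0.80 → ['group_c']
```

Note that the rule triggers review rather than an automatic finding of discrimination; a ratio below 0.80 shifts the burden to deeper statistical and job-relatedness analysis.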
— Regulatory convergence documented: 26 states advancing AI hiring regulation. NYC Local Law 144: mandatory bias audits with public disclosure, four-fifths rule enforcement. Colorado SB24-205 (June 30): annual impact assessments, NIST alignment required. Illinois/Colorado grant candidate opt-out and human review rights. Defines governance pillars: notice, audits, disparate impact testing, transparency.
— Real company deployment (Meta, 2026): level-specific structured interview loops with explicit rubric dimensions. Behavioral rounds carry explicit weight (can downlevel candidates). All scoring is binary Hire/No Hire with confidence levels. Demonstrates structured rubric implementation at enterprise scale for hundreds of annual candidates.
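The loop mechanics described above (weighted rubric dimensions collapsed into a binary Hire/No Hire with a confidence level) can be sketched as follows. The dimension names, weights, 1–4 scale, and hire bar are illustrative assumptions for this sketch, not Meta's actual internal criteria:

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    name: str
    weight: float  # relative weight of this round within the loop
    score: float   # interviewer rating on an assumed 1-4 scale

def decide(dimensions: list[Dimension], hire_bar: float = 3.0) -> tuple[str, float]:
    """Collapse weighted dimension scores into binary Hire/No Hire plus confidence."""
    total_weight = sum(d.weight for d in dimensions)
    weighted = sum(d.score * d.weight for d in dimensions) / total_weight
    decision = "Hire" if weighted >= hire_bar else "No Hire"
    # Confidence grows with distance from the bar; 3.0 is the max distance on a 1-4 scale.
    confidence = min(abs(weighted - hire_bar) / 3.0 + 0.5, 1.0)
    return decision, round(confidence, 2)

loop = [
    Dimension("coding", 0.4, 3.5),
    Dimension("system design", 0.3, 3.0),
    Dimension("behavioral", 0.3, 2.5),  # behavioral rounds carry explicit weight
]
print(decide(loop))  # weighted score 3.05, just above the bar → ('Hire', 0.52)
```

The point of the structure is that every candidate at a given level is scored on the same dimensions with the same weights, so a low behavioral score can pull the aggregate below the bar (or trigger a downlevel) in a consistent, auditable way.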
— EU AI Act enforcement August 2, 2026: recruitment AI explicitly classified as high-risk. CV screening, interview scoring, candidate assessment systems subject to technical documentation, human oversight, bias audits, transparency. Deployment-implementation gap signal: 65% of EU large companies already use AI hiring tools; only 11% inform candidates.
— Independent bias audit by BABL AI (ForHumanity certified under NYC AEDT standard) of 29M+ assessments from Eightfold Matching Model. Gender impact ratio 0.962 (PASS), all race/ethnicity groups 0.938–1.000 (PASS). Demonstrates ecosystem maturity: vendor-audited at scale, independent attestation, published methodology, transparency in structured scoring approach.
— Survey of 382 HR/talent professionals: 94% use assessments, 50%+ with AI. Only 22% confident AI is ethical; one-third operate 'Shadow AI' with algorithms influencing talent decisions without full visibility. Documents governance gaps and candidate manipulation risks in production deployments.