Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

[Chart: each domain plotted along a maturity axis running from Bleeding Edge to Established]

Candidate assessment — structured scoring support

TIER

Leading Edge

TRAJECTORY

Stalled

AI that helps interviewers evaluate candidates consistently by structuring scoring rubrics and flagging evaluation biases. Includes calibration support and rubric enforcement; distinct from resume screening which evaluates documents rather than interview performance.
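
To make the mechanics concrete, the sketch below shows one way a weighted scoring rubric with basic calibration flagging could be represented in code. It is an illustrative sketch only, not any vendor's implementation; the dimension names, weights, scales, and the one-point calibration threshold are all hypothetical.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical rubric: every interviewer scores the same dimensions on the
# same scale, and each dimension carries an explicit weight.
@dataclass
class RubricDimension:
    name: str
    weight: float              # relative weight; weights sum to 1.0
    scale: tuple = (1, 5)      # allowed score range (min, max)

@dataclass
class ScoredInterview:
    interviewer: str
    scores: dict               # dimension name -> raw score

RUBRIC = [
    RubricDimension("problem_solving", 0.4),
    RubricDimension("communication", 0.3),
    RubricDimension("role_knowledge", 0.3),
]

def weighted_total(interview: ScoredInterview, rubric=RUBRIC) -> float:
    """Enforce the rubric (every dimension present, every score in range)
    and return the weighted total for one interview."""
    total = 0.0
    for dim in rubric:
        score = interview.scores[dim.name]   # KeyError if a dimension was skipped
        low, high = dim.scale
        if not low <= score <= high:
            raise ValueError(f"{dim.name} score {score} outside {dim.scale}")
        total += dim.weight * score
    return total

def calibration_flags(interviews, threshold=1.0):
    """Flag interviewers whose weighted totals sit more than `threshold`
    points from the panel mean; a crude stand-in for calibration support."""
    totals = {iv.interviewer: weighted_total(iv) for iv in interviews}
    panel_mean = mean(totals.values())
    return [name for name, total in totals.items() if abs(total - panel_mean) > threshold]

if __name__ == "__main__":
    panel = [
        ScoredInterview("alice", {"problem_solving": 4, "communication": 4, "role_knowledge": 3}),
        ScoredInterview("bob",   {"problem_solving": 2, "communication": 1, "role_knowledge": 2}),
        ScoredInterview("cara",  {"problem_solving": 4, "communication": 3, "role_knowledge": 4}),
    ]
    for iv in panel:
        print(iv.interviewer, round(weighted_total(iv), 2))
    print("calibration flags:", calibration_flags(panel))
```

In practice, calibration support compares interviewers against panel or historical baselines rather than a single mean, but the shape of the data is the same: explicit dimensions, explicit weights, and enforced scales.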

OVERVIEW

AI-assisted structured scoring has achieved operational maturity in high-volume hiring but remains trapped behind persistent validity, fairness, and regulatory barriers. Multinational enterprises and large-scale recruiters—HireVue (800+ clients), Metaview (3,000+ customers), Curatal, Interviewer.AI—sustain deployments with documented efficiency gains: 27–71% time-to-hire reductions, £3,000/month CV screening savings, and 20-point improvements in final interview pass rates. The methodology itself is robust: decades of research validate structured interviews as twice as predictive of job performance as unstructured alternatives, and 85% of well-designed systems meet fairness thresholds. Yet broader adoption has stalled. Four reinforcing barriers persist: (1) validity risks from GenAI cheating in unproctored assessments, undermining scoring reliability at scale; (2) fairness inconsistency—while audited systems can meet fairness benchmarks, vendor bias metrics vary 40% between implementations and empirical audits of 361,000 resumes document systematic discrimination (85% selection bias favoring white candidates); (3) candidate trust collapsed to 26% confidence in AI fairness despite positive user experience signals, driving offer acceptance rates from 74% (2023) to 51% (2026); (4) legal exposure crystallized as federal courts (Mobley v. Workday) treat vendors as liable agents, and a patchwork of state and international regulations (California, Illinois, Colorado, Texas, Ontario, Germany, UK) each imposing different compliance standards with August 2026 enforcement deadlines. The result is deepening bifurcation: enterprises with compliance infrastructure navigate regulatory complexity and manage fairness risk, while mid-market and risk-averse organisations remain blocked by unresolved validity threats and implementation costs. The practice is production-grade but not yet enterprise-safe at mass market scale.

CURRENT LANDSCAPE

The vendor ecosystem is mature in tooling and scaling in adoption, but regulatory complexity is accelerating faster than implementation readiness. HireVue serves 800+ enterprise clients — Emirates, Unilever, Philips, Nestlé among them — reporting $500k to £1M in annual savings per deployment. Metaview's 3,000+ customers cite 30-minute-per-interview time savings and 30% reduction in interviews-per-hire. Interviewer.AI reports 66% of hires closing within one week. UK SME adoption jumped to 54% (from 35% in 2025) with documented 71% cost-per-hire reduction and £3,000/month CV screening savings. Independent case studies show production-scale outcomes: Curatal deployed Amazon Bedrock-based AI agents with structured rubric automation, achieving faster processing and reduced bias; LNER cut hiring from 7 weeks to 3 weeks (71% reduction); William Hill compressed time-to-interview from 15 days to 1.8 days (88% reduction). These deployment outcomes are credible and increasingly documented across sectors and geographies.

Yet adoption remains bifurcated by compliance readiness. Only 53% of recruiting teams use structured scoring rubrics despite 96% adopting AI tools—a persistent implementation gap. Candidate resistance is structural: offer acceptance rates dropped from 74% to 51% since 2023, and only 26% of candidates trust AI fairness despite positive user experience in live interactions. GenAI cheating (39% of applicants using GenAI in responses) undermines unproctored assessment validity at scale. Fairness metrics remain vendor-dependent and inconsistent: while 85% of systems designed with guardrails meet fairness thresholds, bias metrics vary 40% between vendors. Empirical audits quantify the risks: 361,000-resume audit found 85.1% selection bias favoring white candidates, and Berkeley Haas audit of 133 AI hiring systems found 44% exhibited gender bias—indicating that despite validation science supporting structured assessment, production deployments often lack adequate bias safeguards. A hybrid AI-human screening model outperforms either alone, reinforcing that structured human oversight remains mandatory.

The regulatory environment is simultaneously maturing and fragmenting. The EEOC removed its AI hiring guidance in January 2025, creating a federal vacuum now filled by a patchwork of state and international standards: California (FEHA, October 2025), Illinois HB 3773 (January 2026, prohibiting discriminatory AI use regardless of intent), Colorado (SB 24-205, June 2026), Texas (TRAIGA, January 2026), Ontario (AI disclosure mandate), Germany (EU AI Regulation conformity assessments by August 2026), UK (Data Act 2025 reforms to automated decision-making). The EU AI Act classifies recruitment as high-risk with mandatory risk management, data governance, human oversight, and transparency requirements—penalties up to 7% of global annual revenue by August 2, 2026. The Mobley v. Workday class action (certified nationwide, ~1.1B applications affected) and escalating HireVue litigation (ACLU, EEOC, EPIC complaints on bias and accessibility) establish vendor liability precedent. This fragmented compliance burden—requiring simultaneous navigation of conflicting state and international standards—favours caution and amplifies costs for enterprises seeking broad-scale deployment. The practice has achieved operational maturity but faces a compliance infrastructure crisis that blocks mass-market expansion.

TIER HISTORY

Research: Jan-2022 → Jan-2022
Bleeding Edge: Jan-2022 → Jul-2022
Leading Edge: Jul-2022 → present

EVIDENCE (103)

— Greenhouse survey (2,950 job seekers): 63% have faced AI interviews (up 13 points in 6 months). Critical demand signals: 70% not told upfront; 38% left hiring process due to AI; 57% believe disclosure legal requirement. Candidates demand explainability, human review, bias audit proof. Adoption + trust gap convergence.

— Empirical bias analysis: Carnegie Mellon study (2.3M resume screenings) shows AI-generated text scored 18–23% higher. Algorithmic Justice League audit (40 companies) found 31% lower pass-through for immigrant English, 27% for 55+, 19% for AI-avoiders. Disparate impact scores show large-scale ATS at 0.71, human-in-the-loop hybrid at 0.91.

— Federal class action (certified nationwide, ~1.1B applications) establishing employer and vendor liability for disparate impact in AI candidate screening. Court rejected Workday's motion to dismiss, treating vendor as direct agent. Establishes disparate impact liability standard: selection rates for protected groups must be ≥80% of highest-performing group or trigger deeper review (the arithmetic behind this four-fifths check is sketched after this evidence list).

— Regulatory convergence documented: 26 states advancing AI hiring regulation. NYC Local Law 144: mandatory bias audits with public disclosure, four-fifths rule enforcement. Colorado SB24-205 (June 30, 2026): annual impact assessments, NIST alignment required. Illinois/Colorado grant candidate opt-out and human review rights. Defines governance pillars: notice, audits, disparate impact testing, transparency.

— Real company deployment (Meta, 2026): level-specific structured interview loops with explicit rubric dimensions. Behavioral rounds carry explicit weight (can downlevel candidates). All scoring is binary Hire/No Hire with confidence levels. Demonstrates structured rubric implementation at enterprise scale across hundreds of candidates a year.

— EU AI Act enforcement August 2, 2026: recruitment AI explicitly classified as high-risk. CV screening, interview scoring, candidate assessment systems subject to technical documentation, human oversight, bias audits, transparency. Deployment-implementation gap signal: 65% of EU large companies already use AI hiring tools; only 11% inform candidates.

— Independent bias audit by BABL AI (ForHumanity certified under NYC AEDT standard) of 29M+ assessments from Eightfold Matching Model. Gender impact ratio 0.962 (PASS), all race/ethnicity groups 0.938–1.000 (PASS). Demonstrates ecosystem maturity: vendor-audited at scale, independent attestation, published methodology, transparency in structured scoring approach.

— Survey of 382 HR/talent professionals: 94% use assessments, 50%+ with AI. Only 22% confident AI is ethical; one-third operate 'Shadow AI' with algorithms influencing talent decisions without full visibility. Documents governance gaps and candidate manipulation risks in production deployments.
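
The four-fifths standard cited above reduces to simple arithmetic: each group's selection rate divided by the highest group's rate, compared against 0.8. The sketch below uses entirely hypothetical selection counts; it produces the same kind of impact ratio reported in the audits above (e.g. 0.962, or 0.71 for large-scale ATS), but it is not any auditor's actual methodology.

```python
# Hypothetical selection counts per demographic group: (selected, applicants).
SELECTIONS = {
    "group_a": (120, 400),   # 30.0% selection rate
    "group_b": (90, 400),    # 22.5% selection rate
    "group_c": (105, 380),   # ~27.6% selection rate
}

def impact_ratios(selections, threshold=0.8):
    """Divide each group's selection rate by the highest group's rate.
    Ratios below `threshold` (the four-fifths rule) signal potential
    disparate impact and trigger deeper review."""
    rates = {group: selected / total for group, (selected, total) in selections.items()}
    best = max(rates.values())
    return {group: (rate / best, rate / best >= threshold) for group, rate in rates.items()}

if __name__ == "__main__":
    for group, (ratio, passes) in impact_ratios(SELECTIONS).items():
        print(f"{group}: impact ratio {ratio:.3f} -> {'PASS' if passes else 'REVIEW'}")
```

With these made-up counts, group_b lands at a 0.750 impact ratio, below the 0.8 line, and would trigger the deeper review described in the Mobley v. Workday standard; the published ratios in the BABL AI audit above are the same kind of figure computed over millions of assessments per protected group.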

HISTORY

  • 2022-H1: Structured interview assessment moved from research into production deployments. HireVue and Metaview platforms showed real-world adoption (Metaview with Catawiki; HireVue with 86%+ employer adoption), but regulatory action (BIPA lawsuit) and documented bias concerns raised questions about sustainable deployment.
  • 2022-H2: Ecosystem matured with academic validation (ETS research on Evidence-Centered Design methodology) and government sector adoption (Canada's Department of National Defence deployed structured scoring templates). However, candidate skepticism emerged from psychological research and regulatory risk crystallized: New York City's bias audit law (effective January 2023) and mounting legal liability concerns constrained adoption momentum among risk-averse enterprises.
  • 2023-H1: Adoption accelerated despite regulatory headwinds. HireVue documented major enterprise ROI (Sitel saving $408k, Emirates $500k, Flutter achieving 50% time-to-hire reduction), and new platforms like Sapia.ai reached scale (12M structured interview questions, named clients including Qantas and Woolworths). Metaview deployments at growth-stage companies (Replit, Pleo, Localyze) showed 20+ hours/week time savings. However, legal challenges persisted: CVS faced class-action lawsuits over HireVue's facial analysis features. Public sentiment remained mixed (Pew Research: 71% opposed AI making final decisions, but 47% believed AI assessed candidates more fairly), signaling ongoing tension between efficiency gains and fairness concerns.
  • 2023-H2: Adoption continued among growth-stage and enterprise adopters (81% of talent leaders exploring AI, Metaview clients reporting 20+ hours/week savings, Brex saving 1,000 hours/year). However, regulatory and readiness barriers intensified: EEOC filed its first AI discrimination settlement ($365,000 with iTutorGroup for age bias), Illinois and other states enacted video interview regulation laws, and workforce sentiment turned negative (65% of UK professionals concerned about job automation, only 25% felt prepared, 48% feared AI bias in recruitment). Academic critique of autonomy and fairness assumptions in AI assessment design emerged, and platform vendors faced increasing scrutiny—EPIC filed complaints about HireVue's "black-box" scoring methodology. The practice remained bifurcated: proven ROI in adopting companies versus growing caution among risk-averse enterprises and regulated sectors.
  • 2024-Q1: Adoption dynamics shifted with legal escalation. Psychometric validation continued (peer-reviewed research confirming AI scoring reliability and fairness), and platforms sustained momentum with growth-stage customers (Metaview, LetzInterview). However, two federal court rulings in Q1 2024 expanded legal liability: Massachusetts ruling allowed class-action against CVS/HireVue for lie-detector law violation (February), and Illinois BIPA ruling extended biometric privacy protections to facial expression analysis in AI assessments (March). These precedents signaled courts interpreting AI behavioral scoring more expansively, treating it as analogous to polygraph or biometric systems, creating new compliance uncertainty. The bifurcation deepened: enterprises with compliance infrastructure continued deployment, while regulated industries and risk-averse firms faced mounting barriers to entry.
  • 2024-Q2: Ecosystem continued maturing amid regulatory tension. New vendor products emerged (Aspect's AI-powered interview template generator) signaling sustained investment in structured assessment tooling. Peer-reviewed research reinforced scientific legitimacy of AI competency assessments (Journal of Applied Psychology validation study), underpinning vendor claims. However, industry discourse increasingly focused on the diversity-validity dilemma: balancing AI's demonstrated accuracy and consistency benefits against fairness and demographic parity concerns. Courts' expansionist interpretation of biometric privacy law (established in Q1) created uncertain liability for behavioral scoring, constraining adoption outside growth-stage and multinational enterprises with legal resources.
  • 2024-Q3: Deployment and validity challenges intensified. JetBlue's HireVue deployment achieved strong candidate satisfaction (93% CSAT), affirming practical adoption value for high-volume hiring. However, validity threats emerged: Sapia.ai's analysis of 573,500+ candidate responses revealed widespread AI-generated content (cheating), challenging the reliability of text-based structured assessments. Recruiter adoption metrics remained positive (380+ recruiters survey showing 92% adoption for productivity, 25% more candidates weekly), but the CVS settlement in July (resolving HireVue facial analysis lawsuit) crystallized regulatory and reputational risks. The market bifurcation deepened: enterprises with compliance infrastructure and high-volume hiring needs continued deployment despite risks; regulated sectors and risk-averse firms remained constrained.
  • 2024-Q4: Evidence of structured assessment maturity and persistent validity risks. Stanford RCT (n=37,000) validated AI-assisted structured interviewing, showing 20 percentage point improvement in final interview pass rates—confirming efficacy for high-volume hiring at scale. Enterprise adoption continued (Abeam Consulting deployed HireVue globally). However, cheating and fairness risks crystallized: peer-reviewed research documented widespread applicant use of GenAI in unproctored assessments, challenging validity of text-based scoring systems; psychometric vendors reported 75% of workers using GenAI with 70% reporting productivity gains, but raised alarms about detecting AI-assisted responses in live interviews. The practice remained bifurcated: organizations with high-volume structured hiring and compliance infrastructure sustained deployment despite validity and regulatory risks; organizations dependent on asynchronous assessment faced mounting barriers.
  • 2025-Q1: Adoption acceleration amid validity and regulatory crisis. AI usage in hiring surged to 72% weekly adoption with 31% deploying AI for assessments, signaling transition from experimentation to operational integration. Peer-reviewed validation of AI structured interview scoring continued. However, three critical barriers emerged: (1) validity threats from GenAI cheating on unproctored assessments affecting 92% of organizations using pre-employment testing; (2) regulatory escalation with ACLU discrimination complaint against HireVue/Intuit (March 2025) alleging bias and accessibility failures, paralleling Workday litigation establishing employer liability; (3) state-level AI regulation expansion (Colorado AI Consumer Protection Act). Market bifurcation deepened: high-volume tech hiring and multinational enterprises with compliance infrastructure sustained deployment; regulated sectors, risk-averse mid-market, and text-based assessment users faced compounding legal and validity barriers.
  • 2025-Q2: Enterprise deployment scale proved sustainable but tier progression constrained by implementation and regulatory realities. Major deployments continued: HireVue at 800+ enterprise clients (Emirates, Unilever, Philips, Nestlé) showing $500k-£1M annual savings; Metaview's 3,000+ customers delivering 30+ min per-interview productivity gains and 92% hiring confidence improvement. New ecosystem entrants (Criteria Corp, BarRaiser) launched AI scoring and real-time guidance tools. However, expert assessment shifted toward skepticism: industry leaders argued AI scoring "is not yet reliable enough" due to transcription errors and missing validation data; ACLU complaint pattern evidence (accessibility failures, bias against deaf/Indigenous candidates) established regulatory precedent; implementation reality showed privacy friction and narrow scope limitations. Market bifurcation persisted: multinational enterprises and high-volume tech hiring sustained deployment with legal buffer; mid-market and regulated sectors remained blocked by unresolved validity concerns and regulatory exposure.
  • 2025-Q3: Implementation gaps, candidate distrust, and validity threats intensified adoption barriers. While 96% of recruiting teams deployed AI broadly, only 53% used scoring rubrics and 47% conducted interview calibration—indicating adoption of generic AI tools without structured assessment discipline. Candidate trust collapsed: Gartner research showed only 26% trusted AI evaluation fairness, with offer acceptance rates falling from 74% to 51%; research across 13,000 participants found candidates alter self-presentation under AI assessment, downplaying empathy and creativity. Fairness capability remained vendor-dependent: Warden AI's 1M+ sample audit found 85% of systems met fairness thresholds and AI delivered 39-45% better treatment for women and minorities vs. humans, yet bias metrics varied 40% between vendors. Validity threats emerged: RPO research documented candidates successfully using GenAI to pass online tests and video interviews with higher ratings, undermining confidence in scoring reliability. Multiple HireVue lawsuits (EPIC complaint, Deyerler BIPA class action, D.K. EEOC discrimination complaint) established pattern precedent of accessibility failures and algorithmic opacity. Practice remained bifurcated: high-volume tech and multinational enterprises sustained deployment; broader adoption constrained by validity risks, legal exposure, and candidate resistance.
  • 2025-Q4: Maturity and barriers crystallized in final quarter. Enterprise deployment sustained at scale (Interviewer.AI production data: 66% hires in 1 week, 20% evaluation variance reduction; Metaview 3,000+ customers; HireVue 800+ enterprise clients), validating operational ROI in high-volume hiring. However, implementation gap persisted: 78.7% retained human final hiring authority despite AI deployment, and Willo survey showed only 69.6% using structured interviews and 47% with calibration—indicating tool adoption without methodology adoption. University of Washington research revealed humans mirror AI biases 90% of the time without intervention; HireVue responded with Multi-Penalty Optimization bias mitigation technique. Regulatory escalation continued: Workday class action certified as nationwide, HireVue faced ACLU discrimination and EEOC complaints, establishing pattern evidence for legal liability. Candidate distrust remained structural: 26% trust fairness, 39% use GenAI in applications, offer acceptance rates at 51%. Validity threats unresolved: vendor fairness metrics varied 40%, GenAI cheating remained prevalent, and assessment reliability under threat. Practice achieved operational maturity but faced fundamental barriers (candidate resistance, implementation gaps, fairness inconsistency, legal exposure) limiting expansion beyond current multinational and high-volume segments.
  • 2026-Jan: Hybrid AI-human screening research validated complementary deployment models (70,000-applicant field study showing 7% offer increase, 24% separation reduction), but critical expert assessment argued AI had made hiring worse overall and legal landscape intensified with pending Mobley v. Workday and Harper v. Sirius XM lawsuits establishing precedent for discrimination claims. Practice remained bifurcated: enterprises with high-volume hiring and compliance resources sustained deployment despite regulatory risks; broader adoption blocked by candidate distrust (26% fairness confidence), validity threats from GenAI cheating, and persistent fairness inconsistency (40% bias metric variance across vendors).
  • 2026-Feb: Professional standards maturation and vendor product evolution amid persistent adoption barriers. SIOP released formal recommendations for validating AI-based assessments signaling mainstream governance readiness; HireVue released Assessment Builder with claimed efficiency gains (60% less screening time, $667k annual savings). However, critical assessment revealed systematic AI bias patterns—research showed White names selected at 85% versus near-zero for Black names, and humans mirror AI bias 90% of the time without intervention—while legal analysis documented six compliance risks (bias, transparency, data privacy, psychological inference validity, candidate fairness perception). Sapia.ai case study (Holland & Barrett: 89% turnover drop in 3 months, 47% hiring acceleration) demonstrated real-world deployment impact but remained the exception. Market bifurcation persisted: multinational enterprises and high-volume hiring operations sustained deployment; broader adoption remained blocked by validity threats from GenAI cheating, candidate distrust, regulatory escalation (Illinois HB 3773, Colorado AI Act), and persistent bias metrics variance (40% between vendors).
  • 2026-Mar: Regulatory vacuum escalated adoption friction: the EEOC's January 2025 removal of AI hiring guidance left a vacuum now filled by four conflicting state laws (CA, IL, TX, CO), and Mobley v. Workday class certification expanded vendor liability exposure. Countervailing evidence emerged from a 70,000-applicant field study showing AI-conducted interviews achieved 12% more job offers, 18% higher job starts, and 78% candidate preference for AI over humans — yet fairness audit data (Warden AI, 150+ systems, 1M+ samples) confirmed 85% of systems meet thresholds only when designed with explicit guardrails, leaving the bifurcation between compliant enterprise deployments and the broader market unchanged.
  • 2026-Apr: Regulatory pressure intensified with the EU AI Act's August 2, 2026 enforcement deadline (penalties up to 7% of global revenue) requiring agent inventories, automated logging, and human oversight for high-risk assessment systems. UK SME adoption reached 54% (up from 35% in 2025) with documented 71% cost-per-hire reductions, while enterprise deployments (LNER: 71% time-to-hire reduction; William Hill: 88% compression) confirmed production-scale outcomes. The Mobley v. Workday class certification — covering ~1.1B applications — cemented vendor liability as a structural risk, shifting compliance burden squarely onto assessment tool providers.
  • 2026-May: Eightfold AI's independent BABL AI bias audit (29M assessments) achieved PASS ratings across all disparate-impact thresholds, demonstrating that compliant systems can meet fairness standards when explicitly designed for it — but the candidate trust gap hardened simultaneously: 63% of job seekers have now faced an AI interview (up 13 points in six months), 70% were never informed upfront, and 38% abandoned hiring processes over AI concerns. Mobley v. Workday expanded to a nationwide class covering ~1.1B applications, cementing vendor liability as a structural risk; Carnegie Mellon and Algorithmic Justice League research quantified discrimination patterns (AI-generated text scored 18–23% higher; immigrant English and over-55 candidates 27–31% lower pass rates), while the August 2, 2026 EU AI Act enforcement deadline surfaced explainability as an architectural requirement rather than an add-on feature.