Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Absence & attrition pattern analysis

LEADING EDGE

TRAJECTORY

Advancing

AI that analyses absence and turnover patterns to identify risk factors and predict future attrition across teams. Includes flight risk scoring and absence pattern detection; distinct from churn prediction in customer ops which predicts customer rather than employee departure.
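To make "absence pattern detection" concrete, here is a minimal, hypothetical sketch in Python of one commonly cited signal: unplanned absences clustering on Mondays and Fridays. The function names, thresholds, and minimum-observation rule are illustrative assumptions, not taken from any vendor covered in this index.

```python
from datetime import date

# Hypothetical sketch: flag employees whose unplanned absences cluster
# around weekends (Mondays/Fridays). All thresholds are illustrative.

def weekend_adjacency_rate(absences: list[date]) -> float:
    """Fraction of absence days falling on a Monday or Friday."""
    if not absences:
        return 0.0
    adjacent = sum(1 for d in absences if d.weekday() in (0, 4))  # Mon=0, Fri=4
    return adjacent / len(absences)

def flag_pattern(absences: list[date], min_days: int = 5,
                 threshold: float = 0.6) -> bool:
    """Flag only when there are enough observations to call it a pattern."""
    return len(absences) >= min_days and weekend_adjacency_rate(absences) >= threshold

# Five absences, every one a Monday or Friday: the pattern is flagged.
days = [date(2026, 3, 2), date(2026, 3, 6), date(2026, 3, 9),
        date(2026, 3, 13), date(2026, 3, 20)]
print(flag_pattern(days))  # → True
```

The `min_days` guard matters: with too few observations, weekend adjacency is noise, not a pattern, which is one reason production systems require meaningful history before flagging anyone.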

OVERVIEW

Predictive attrition and absence analytics have crossed from experimental to operationally proven — but only at forward-leaning enterprises. Machine learning models now forecast employee departure risk and flag problematic absence patterns with documented accuracy in the 85-95% range, drawing on signals from engagement surveys, tenure data, collaboration metrics, and absence frequency. Organizations deploying these systems report 20-50% reductions in attrition and six-figure annual savings. The technology works. The harder question is whether most organizations are ready to use it. Adoption remains concentrated among data-mature enterprises with dedicated people analytics teams, while mid-market and smaller organizations face compounding barriers: data quality gaps, implementation complexity, and deep unresolved tensions around employee surveillance and algorithmic fairness. The practice's defining challenge is no longer technical feasibility but responsible scaling — bridging the gap between what the models can predict and what organizations should act on.
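As a rough illustration of the flight-risk scoring described above, the sketch below combines engagement, tenure, and absence-frequency signals into a single 0-1 score. Real deployments train ML models on historical data; the hand-set weights, normalisation caps, and risk bands here are assumptions for illustration only.

```python
# Hypothetical flight-risk score combining the signals named in the overview.
# Weights and caps are illustrative, not from any vendor's trained model.

def flight_risk_score(engagement: float, tenure_years: float,
                      absences_per_quarter: float) -> float:
    """Return a 0-1 risk score; higher means higher predicted departure risk."""
    low_engagement = 1.0 - min(max(engagement, 0.0), 1.0)          # engagement in [0, 1]
    short_tenure = max(0.0, 1.0 - min(tenure_years, 5.0) / 5.0)    # risk fades over ~5 years
    high_absence = min(absences_per_quarter / 6.0, 1.0)            # cap at 6 days/quarter
    score = 0.5 * low_engagement + 0.2 * short_tenure + 0.3 * high_absence
    return round(score, 3)

def risk_band(score: float) -> str:
    """Map a score to an action band (thresholds illustrative)."""
    return "high" if score >= 0.6 else "medium" if score >= 0.3 else "low"

s = flight_risk_score(engagement=0.35, tenure_years=1.0, absences_per_quarter=4)
print(s, risk_band(s))  # → 0.685 high
```

Keeping the score transparent fits the practitioner framing of these systems as diagnostic tools: each component can be inspected and challenged rather than acted on automatically.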

CURRENT LANDSCAPE

The vendor ecosystem for attrition and absence prediction is broad and maturing. SAP SuccessFactors, Workday, Dayforce, Visier, Oracle, ADP, and IBM all offer GA flight-risk and absence forecasting features; the market reached $2.33B in 2026 and is projected to grow to $4.43B by 2030 at sustained double-digit CAGR. Visier's deployment through its Paycor partnership now covers 2.1 million employees, the largest documented scale. Deployment outcomes are concrete: Experian achieved 2-3% attrition reduction with $8-10M savings over 18 months; Credit Suisse documented $70M annual savings from flight-risk modeling; IBM reports 95% turnover prediction accuracy and 25% reduction in unplanned absenteeism; healthcare organizations have saved $400K through attrition pattern analysis. Manufacturing deployments like Varun Beverages operationalize pattern detection across temporal and demographic dimensions, with 35-50% early-attrition reductions and $650K+ annual savings. U.S. voluntary turnover sits at 23.4% annually, with engagement serving as a primary attrition predictor: teams with low engagement experience 18-43% higher turnover. Organizations are increasing investment: 60% of large enterprises are expected to adopt AI-powered people analytics platforms by 2026.

These results coexist with stubborn adoption barriers and critical accuracy limitations. Realistic production-grade attrition models achieve AUC scores of 0.65-0.80, not the 90%+ accuracy cited in controlled studies — a gap often disguised by overall accuracy metrics that fail to reflect the class-imbalance problem in low-attrition populations. Employment law analysis has documented discrimination liability risks under Title VII, ADA, and ADEA when biased algorithms drive personnel decisions, and a single flawed model can affect thousands of employees. Systematic reviews of ML attrition approaches identify persistent gaps: domain-specific datasets remain sparse, model interpretability remains challenging, and ethical guardrails are inconsistently implemented. Analyses of predictive analytics failures cite poor data availability, algorithmic bias baked into historical training sets, and weak integration with broader business strategy as recurring failure modes. Perhaps most striking is a 2025 paradox: organizations successfully deploying AI frameworks face unintended attrition among AI-savvy employees who recognize enhanced marketability and pursue external opportunities at higher rates. The emerging consensus among practitioners frames these systems as diagnostic tools — a "flashlight, not a spotlight" — requiring human interpretation rather than automated action. Absence pattern analysis methodologies demonstrate promise even where overall sick-day reductions prove statistically uncertain, suggesting that engagement and organizational factors (not just absence metrics) drive meaningful outcomes.
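The accuracy-versus-AUC gap described above is easy to demonstrate. With 10% attrition, a degenerate model that scores every employee identically (and so effectively predicts "stays" for everyone) reports 90% accuracy yet has an AUC of 0.5, meaning no ranking skill at all. A minimal sketch with synthetic data:

```python
# Synthetic population: 10% leavers, a typical class-imbalance scenario.
labels = [1] * 10 + [0] * 90   # 1 = left, 0 = stayed
scores = [0.0] * 100           # degenerate model: identical score for everyone

# "Predict the majority class" accuracy: correct on all 90 stayers.
accuracy = sum(1 for y in labels if y == 0) / len(labels)

def auc(scores, labels):
    """Probability a random leaver outscores a random stayer (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(accuracy)             # → 0.9
print(auc(scores, labels))  # → 0.5 (no discriminative power)
```

This is why AUC in the 0.65-0.80 range is the realistic production benchmark: it measures ranking quality independently of the 90%+ headline accuracy that class imbalance alone can produce.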

TIER HISTORY

Research        Jan-2018 → Jan-2019
Bleeding Edge   Jan-2019 → Jan-2020
Leading Edge    Jan-2020 → present

EVIDENCE (128)

— Multiple named organizations demonstrating production deployment of attrition pattern analysis with specific outcomes and interventions.

— AWS official product announcement of native integration between major cloud platform and attrition analysis platform. Explicitly lists attrition as a key metric (alongside headcount, tenure, requisitions). Signals ecosystem maturity and broad adoption by major infrastructure providers.

— AWS Machine Learning blog tutorial demonstrating production enterprise integration of attrition analysis into agentic AI workflows. Includes concrete examples of attrition risk assessment and organizational health evaluation using Visier's analytics platform.

— XAI framework using GAN and Transformer encoders achieving 92-96.95% accuracy; SHAP analysis identifies JobSatisfaction and Age as top predictors; demonstrates advancing sophistication in addressing class imbalance and model interpretability.

— Critical adoption reality check: only 17% report highly successful AI implementation, 57% have no AI in core systems, 14% actually use AI; reveals implementation gap between vendor promises and organizational execution.

Talent Analytics - SHRM | Industry Reports

— SHRM talent analytics resource with adoption metrics: 82% of organizations using people analytics apply it to retention/turnover, 72% say it adds most value there. Shows mainstream adoption signal from credible professional body.

— Critical analysis of turnover prediction tool limitations. Work Institute statistic: 77% of turnover is preventable but organizations discover reasons only post-resignation. Argues models fail on input data quality, not math. Proposes adaptive conversations over surveys.

— Direct adoption metrics: 34% of organizations use AI to predict turnover with 75-89% accuracy, delivering 421% ROI on retention; maturity progression shows only 6% at predictive AI stage but with 5.4-8.7x ROI.

HISTORY

  • 2018: Early academic research on attrition and absence prediction models demonstrated feasibility. SAP SuccessFactors and other HR systems added time-off tracking and absence analytics features, but comprehensive pattern analysis tools were not yet mainstream.
  • 2019: Experian deployed and later commercialized a global predictive attrition system analyzing 200+ employee characteristics, achieving 3.5% turnover reduction. Independent case studies and adoption metrics documented real-world ROI, signaling transition from research to early enterprise adoption.
  • 2020: Multiple large organizations (Merck KGaA, IBM, financial services firms) deployed production systems with quantified outcomes—Merck identified new joiner risk patterns across 57,000 employees; IBM claimed 95% accuracy and $300M savings; one firm reduced female attrition from 90% to 3% by analyzing pattern root causes. Machine learning validation research showed tree-based models outperforming traditional approaches. Ethical concerns about surveillance, trust erosion, and self-fulfilling prophecies became explicit in practitioner discourse, framing adoption barriers beyond technical implementation.
  • 2021: Enterprise analytics platforms (SAP, Visier) deepened attrition prediction as standard capability; 60% of organizations grew people analytics teams. Academic systematic review identified persistent gaps between promised benefits and organizational outcomes. Machine learning outperformed regression across domains (military personnel, IT firms, enterprise systems with 86% accuracy). Critical assessment papers emphasized four systemic adoption risks: false objectivity in decisions, self-fulfilling prophecies, algorithmic opacity, and reduced employee autonomy; six failure modes in deployment (data obsession, trust breaches, misalignment). Practice exhibited mainstream-but-contested signature: quantified ROI at Fortune 500 scale, but ethical and implementation challenges preventing universalization.
  • 2022-H1: Platform vendors continued advancing attrition prediction tooling; Visier launched Retention Focus as a managed service with AI-powered early warning systems. Academic research on ensemble and tree-based ML models remained active, validating improved accuracy over traditional approaches (LightGBM achieving 89.8% accuracy). Market sentiment reflected strong adoption intent: 92% of HR leaders surveyed planned to increase AI use for talent management. Practice remained in mainstream-but-contested phase, with clear technical maturity and product availability but adoption constrained by organizational readiness and persistent ethical concerns about surveillance and trust erosion.
  • 2022-H2: Large-scale academic research continued validating ML approaches: Bar-Ilan University study of 700,000 employees found turnover antecedents vary by role and cultural background; European survey research demonstrated LightGBM and causality-based methods advancing methodological rigor beyond correlation. Military training attrition validation studies showed sustained interest in specialized contexts. Ethical concerns deepened: critical analysis emphasized algorithmic discrimination risks and the dangers of full automation in personnel decisions without human oversight. Practice remained at leading-edge tier with proven deployments and strong market adoption intent, but the gap between technical maturity and organisational adoption persisted due to trust, ethical, and implementation challenges.
  • 2023-H1: Talent shortages reached historic highs (77% of companies struggling with retention), elevating attrition prediction to a strategic business priority. Visier and emerging vendors continued platform maturation; WorkL and other platforms deployed flight risk tracking integrated with engagement analytics. Academic research advanced deep learning approaches for attrition modeling. Critical research frameworks emerged for evaluating fairness and bias in AI predictive models used in personnel decisions, formalizing the persistent ethical tensions that had constrained organizational adoption despite technical maturity. Vendor proliferation and market attention signaled mainstream acceptance of the practice category, though ethical and implementation barriers remained substantial adoption constraints.
  • 2023-H2: Attrition prediction platforms expanded investment into data-driven HR. Visier's industry analysis documented predictive analytics adoption for identifying at-risk employees and cited potential savings up to $15 million from reducing turnover. Survey data showed 40% of organizations planned to invest in HR analytics technology in 2023 (up from 33% in 2022), and 69% of large organizations had dedicated people analytics teams with predictive capabilities claimed to reduce unexpected turnover by up to 40%. Machine learning research continued advancing with transformer-based deep learning frameworks validating improved efficiency over prior approaches. Simultaneously, critical reassessment of attrition prediction intensified: practitioners and ethicists emphasized that predictive systems objectify employees, erode workplace trust, and risk unfair treatment through algorithmic discrimination—highlighting adoption barriers that persisted despite technical maturity and vendor availability. The practice remained in leading-edge status, characterized by proven deployments and strong vendor ecosystem development, but organizational adoption remained constrained by unresolved ethical concerns and implementation challenges.
  • 2024-Q1: Platform ecosystem continued maturation with major vendors (Workday, isolved) introducing or expanding general-availability attrition forecasting tools integrated with sentiment and engagement analytics. Specialized domain deployments expanded: aviation industry deployed crew absence prediction using ML time-series analysis to reduce standby utilization. Enterprise case studies documented quantified outcomes: healthcare organization saved $400K by analyzing attrition patterns to guide natural attrition vs. layoff decisions. Absence pattern analysis adoption grew via platforms like TeamSense analyzing seasonal and day-of-week trends across hundreds of thousands of employees. Practice remained in leading-edge tier with expanding platform availability and proven ROI cases, though organizational adoption barriers (trust, ethics, implementation complexity) persisted.
  • 2024-Q2: Vendor platform maturation accelerated: SAP SuccessFactors enhanced People Profile with predictive absence cards; Staff Aura launched dedicated commercial attrition prediction product claiming 86.4% accuracy. Case studies continued documenting real-world deployments in IT sector targeting the industry's 15% annual attrition rate. Academic research advanced operational applications: peer-reviewed research demonstrated predict-then-optimize methodology for using absenteeism predictions in healthcare rostering, showing robust scheduling outcomes. Cross-sector analysis expanded: education sector research applied sophisticated absence pattern decomposition to detect equity gaps, demonstrating methodological value beyond HR contexts. Empirical studies in IT companies documented positive ROI from analytics-driven retention strategies. Practice remained in leading-edge tier with deepening vendor specialization and expanding methodological applications, balanced by ongoing organizational adoption challenges around implementation complexity and trust.
  • 2024-Q3: Platform ecosystem maturation continued: Visier enhanced generative AI capabilities for workforce insights at HR Tech 2024. Attrition pattern analysis research extended beyond HR: peer-reviewed scoping review across allied health professions mapped attrition rates from 0.5%-41% and identified recurrent causal themes across 32 studies, demonstrating cross-sector methodological value. Employee adoption sentiment remained positive: 66% of workers used AI for HR-related tasks, with one-third preferring AI over human HR admins for employment issues. However, broader implementation challenges emerged: Gartner research predicted 30% of GenAI projects would be abandoned after proof-of-concept by end-2025 due to data quality, cost, and unclear ROI—a warning relevant to analytics-intensive deployments. Practice remained in leading-edge tier with proven deployments and strong market acceptance among early adopters, while organizational adoption remained constrained by implementation complexity, data quality requirements, and unresolved ethical concerns about employee surveillance and objectification.
  • 2024-Q4: Vendor ecosystem expansion accelerated with SAP SuccessFactors launching 30 new AI use cases for talent intelligence and performance analysis, while Visier demonstrated unprecedented scale deployment to 2.1 million employees through Paycor partnership integration. Academic research continued validating domain-specific attrition predictors: military training context research identified psychological resilience and physical fitness as significant attrition signals, expanding methodological understanding beyond corporate HR contexts. Concurrently, critical assessment intensified around ethical and regulatory risks: opinion analysis highlighted algorithmic bias amplification, over-reliance on automated personnel decisions, and insufficient human oversight in flight risk modeling. Practice remained in leading-edge tier with demonstrated technical maturity and vendor ecosystem depth, but Q4 evidence surfaced growing tension between advancing deployment scale and unresolved risks around bias, trust, and regulatory compliance—factors that shaped organizational adoption decisions and drove hesitation among enterprises implementing these systems.
  • 2025-Q1: Vendor ecosystem expansion continued: Dayforce, Quantum Workplace, and other HCM platforms integrated flight risk prediction as GA features into mainstream systems. Absence management market projected to grow from $320M (2024) to $742M by 2033 (9.8% CAGR), signaling sustained commercial investment. Real-world deployments documented quantified outcomes: IBM achieved 25% reduction in unplanned absenteeism through AI analytics. Research confirmed methodological maturity: peer-reviewed studies achieved 87% accuracy with preprocessing-optimized ML models and demonstrated Deloitte benchmark of 85% prediction accuracy across implementations. Strategic investment accelerated: one-quarter of enterprise leadership boards prioritized people analytics investment for 2025, with predictive modeling for workforce planning gaining explicit emphasis. Practice remained in leading-edge tier with expanding ecosystem integration, proven deployment outcomes at scale, and strong executive investment signals—though implementation barriers (data quality, complexity, organizational readiness) and unresolved ethical concerns about employee surveillance continued to constrain universalization across mid-market and smaller organizations.
  • 2025-Q2: Vendor platform capabilities continued deepening with Quantum Workplace Retention Radar, Visier enhancements, and Staff Aura scaling specialized attrition prediction. Real-world methodology validation emerged: Worklytics case study documented passive organizational network analysis predicting flight risk during RTO transitions with specific collaboration metrics. Absence management benchmarking highlighted market signals: $225.8B yearly cost, 40% productivity loss from unplanned absences, and 20% absence reduction from tracking software adoption. Academic research balanced deployment momentum: RIT thesis analyzed AI's dual impact (opportunities and ethical constraints), emphasizing implementation challenges around fairness and trust. Critical analysis intensified in mid-2025: employment law assessment documented discrimination liability under Title VII/ADA/ADEA from biased algorithmic personnel decisions; Betterworks research revealed paradoxical attrition risk where AI-savvy employees (80% of highly engaged workers) actively job-hunt due to perceived marketability and AI uncertainty—demonstrating unintended consequences of AI adoption. Practice remained at leading-edge tier with proven deployments and strong platform ecosystem, but Q2 evidence surfaced critical tension between expanding deployment scale and unresolved legal, ethical, and operational barriers preventing broader adoption.
  • 2025-Q3: Vendor ecosystem breadth solidified with industry surveys documenting mature tooling across major platforms (ADP, Crunchr, Workday, SAP, IBM). Deployment outcomes continued validation: organizations reported 20-30% attrition reductions from targeted retention programs informed by predictive risk flagging; IBM and Humana documented 15-20% absence rate improvements; inFeedo metrics showed 59% of workers actively seeking opportunities against 25.2% annualized quit rates, with replacement costs 50-200% of salary. Strategic business correlation reinforced: McLean & Co. survey found organizations maintaining ≤10% voluntary turnover significantly more likely to exceed strategic objectives. Methodological maturity strengthened: ML models achieved F1-scores of 0.92 in attrition detection. Critical adoption perspective matured: practitioner guidance emphasized that AI systems should illuminate root causes of attrition ("flashlight not spotlight"), framing predictive capabilities as diagnostic tools requiring human interpretation. Practice remained at leading-edge tier with deepening vendor ecosystem, documented deployment outcomes, and maturing ethical frameworks—though broad market adoption remained concentrated among enterprise leaders, with mid-market and smaller organizations constrained by implementation complexity, data quality requirements, and cultural resistance to algorithmic personnel assessment.
  • 2025-Q4: Peer-reviewed research continued advancing methodological rigor: new ML studies achieved 84% accuracy in absence prediction using Random Forest models (occupational health context) and extended to mental health-related sickness absence prediction, demonstrating domain-specific technical maturity. Case study evidence showed deployment breadth: IBM's 95% turnover prediction accuracy persisted, with parallel documentation of 20-30% attrition reductions and 15-20% absence rate improvements across multiple organizations. Critical assessment intensified: analysis of predictive analytics failures identified persistent barriers—data quality and availability, algorithmic bias in historical training data, lack of contextual HR understanding, overreliance on historical patterns, and poor integration with business strategy—framing adoption constraints distinct from technical capability. Vendor ecosystem and deployment scale continued expanding, but the contradiction between proven capability and constrained organizational adoption sharpened by year-end: technical maturity and commercial availability reached mainstream status, yet ethical, legal, and implementation barriers—alongside organizational readiness gaps—remained the primary limiting factor preventing universalization beyond enterprise leaders. Practice remained at leading-edge tier with validated deployment outcomes and deepening ecosystem, characterized by the signature tension between mature technology and persistent adoption friction.
  • 2026-Jan: Vendor ecosystem and deployment maturity solidified further: manufacturing, retail, and hospitality sectors reported concrete attrition-absence linkage deployments with 35-50% reduction metrics and specific root-cause analysis outcomes. ML systems reached production-ready status, with reported perfect metrics (AUC-ROC 1.0) for early-hire attrition prediction, delivering $650K+ annual savings per organization. Market analysis confirmed category-level growth trajectory ($3.504B in 2025, projected $11.2B by 2035) and 17x improvement in exit risk forecast accuracy via AI. Methodological advancement continued: comparative research validated ensemble methods (Random Forest, Gradient Boosting) outperforming traditional approaches across financial sector deployments. At window end (2026-01-31), practice remained at leading-edge tier with production-scale deployments and deepening vendor specialization—technical capability no longer in question—but organizational adoption remained concentrated among data-mature enterprises due to persistent barriers around implementation complexity, data quality requirements, ethical concerns about surveillance, legal liability risks, and cultural resistance to algorithmic personnel assessment.
  • 2026-Feb: Industry research and vendor platforms continued signaling sustained commercial momentum and strategic organizational focus on absence analytics. WTW's 2026 absence management survey documented industry trends in building effective absence strategies; TeamSense benchmarks showed 1.53% average unplanned absence rate across manufacturing/frontline operations. Organizational adoption data reflected growing prioritization: UK survey data showed 37% of employers identifying short-term absence as their biggest sickness management challenge, signaling shift from reactive absence tracking to strategic pattern analysis and intervention. International scope expanded: Spanish occupational health data documented 80% increase in temporary disability incidence over past decade with €33B annual cost, reflecting cross-regional maturity and urgency. Concurrent critical discourse matured on ethics and implementation: vendor and consulting analyses emphasized shift from surveillance models to diagnostic frameworks ("flashlight not spotlight"), balancing technical capability advances with ethical guardrails and human judgment requirements. Practice remained at leading-edge tier with expanding vendor ecosystem, documented organizational prioritization, and international deployments, characterized by maturing discourse on responsible implementation alongside technical advancement.
  • 2026-Mar: Quantified deployment outcomes consolidated: AIHR case studies confirmed Experian ($8-10M savings, 2-3% attrition drop) and Credit Suisse ($70M annually) as benchmarks, while 2026 aggregated metrics (23.4% US voluntary turnover, 18-43% higher attrition in low-engagement teams) reinforced business urgency. A systematic review of ML attrition methods (2015-2025) identified persistent gaps in domain-specific datasets, model interpretability, and ethics guardrails as the primary barriers separating technical maturity from responsible deployment at scale.
  • 2026-Apr: Deployment outcomes and critical analysis matured further with evidence spanning operational and ethical dimensions. The commercial ecosystem signaled sustained confidence: Retensa (multi-industry platform with named customers), Zerve (vendor case study on Random Forest ROI), and Artefact analyst research (USD 459K pilot savings from ML-driven absenteeism forecasting) documented active deployment investments. Contact center evidence showed domain-specific maturity: NLP sentiment deterioration tracking and escalation patterns predicted agent attrition, with 20-25% reductions achieved via schedule flexibility interventions. Industry maturity metrics reflected growing adoption: 10-12% of organizations now operate at predictive analytics maturity (Sapient 2024), with 60% still at the descriptive and 25% at the diagnostic stage—indicating that market opportunity remains substantial but implementation barriers (data quality, organizational readiness) persist. Critical assessment intensified: practitioner analysis identified pervasive deployment failure modes ("predicting churn does not prevent it"), emphasizing the shift from prediction accuracy to operational execution and retention OS architecture. The post-AI-rollout attrition paradox was documented: organizations deploying AI frameworks face unintended departures among AI-savvy employees who recognize enhanced marketability (62% of high performers considering exit post-rollout). The regulatory environment matured: NYC LL 144, Colorado SB 205, and the EU AI Act's high-risk provisions now require mandatory algorithmic fairness audits and bias testing—shifting attrition analytics from discretionary best practice to compliance obligation. Practice remained at leading-edge tier with a broadening vendor ecosystem, quantified deployment outcomes across multiple industries and contexts, and maturing critical frameworks around deployment execution and regulatory compliance—but the defining tension persists: technical capability is established and commercial, yet organizational adoption remains gated by data readiness, implementation complexity, trust, and unresolved fairness concerns.
  • 2026-May: AWS announced native integration of Amazon Quick with Visier's Vee AI agent, signalling major cloud infrastructure endorsement of attrition analytics as a production capability; Visier's 2026 Vizzie Award winners documented named enterprise deployments with specific intervention outcomes. Peer-reviewed XAI research (Systems journal) achieved 92–97% prediction accuracy with transparent feature importance, while SHRM data (82% of people analytics teams applying models to retention) and Everest Group's PEAK Matrix (22+ vendors, retention risk as central use case) confirmed mainstream category adoption — though the Eagle Hill quarterly index surfaced a 20-point generational attrition gap and declining organisational confidence as emerging structural challenges.