The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI-administered adaptive assessments that adjust difficulty based on responses to measure competency efficiently. Includes item response theory and computerised adaptive testing; distinct from skills assessment in education, which maps against external frameworks rather than internal competencies.
Adaptive assessment has graduated from methodological novelty to proven operational tool. Computerised adaptive testing (CAT) adjusts question difficulty in real time based on responses, measuring competency with a fraction of the items a fixed-length test requires; reductions of 40-88% are documented across domains. The practice now operates at scale in corporate talent evaluation, government competency measurement, healthcare outcomes, and K-12 education, supported by a mature vendor ecosystem, GA tooling, and analyst-recognised market growth. Recent algorithmic advances (Bayesian active inference frameworks achieving 30-40% trial reduction) and government-funded research infrastructure (e.g. the US IES $3.8M Adult Skills Assessment Project, with a 20,000-learner validation sample) signal deepening methodological maturity and institutional commitment. The question facing L&D teams is no longer whether adaptive assessment works but how to operationalise it effectively. That operationalisation means managing competing demands: balancing system usability and content relevance (ranked highest in global adoption surveys), ensuring assessment integrity against generative-AI circumvention (which requires item-design validation and security protocols), and aligning tools with institutional workflows and practitioner capacity. Generative AI remains an active threat: with 88% of undergraduates now using LLMs for assessed work, organisations are being forced to rethink assessment design toward scenario-based and contextual formats that preserve validity. These challenges are organisational and design-focused rather than methodological; adoption is now a matter of implementation discipline and assessment-design rigour, not proof of concept.
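The core CAT loop (select the most informative item, update the ability estimate, stop once the estimate is precise enough) can be sketched in a few lines. This is a minimal illustration under a 2-parameter logistic IRT model; the grid estimator, item parameters, and stopping threshold are illustrative choices, not taken from any study cited here.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta
    (a = discrimination, b = difficulty)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information an item contributes at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def estimate_theta(responses, items):
    """Maximum-likelihood ability estimate via a coarse grid search."""
    grid = [g / 10.0 for g in range(-40, 41)]  # abilities -4.0 .. 4.0
    def log_lik(theta):
        total = 0.0
        for idx, correct in responses:
            p = p_correct(theta, *items[idx])
            total += math.log(p if correct else 1.0 - p)
        return total
    return max(grid, key=log_lik)

def run_cat(items, answer_fn, se_target=0.35, max_items=20):
    """Adaptively administer items until the standard-error target is met."""
    theta, responses, used = 0.0, [], set()
    while len(responses) < max_items:
        candidates = [i for i in range(len(items)) if i not in used]
        if not candidates:
            break  # item bank exhausted
        # Select the unused item most informative at the current estimate.
        nxt = max(candidates, key=lambda i: item_information(theta, *items[i]))
        used.add(nxt)
        responses.append((nxt, answer_fn(nxt)))
        theta = estimate_theta(responses, items)
        info = sum(item_information(theta, *items[i]) for i in used)
        if info > 0 and 1.0 / math.sqrt(info) < se_target:
            break  # estimate is precise enough to stop early
    return theta, len(responses)
```

Run against a simulated examinee, the loop stops as soon as the ability estimate is precise enough rather than administering a fixed-length form, which is the mechanism behind the item-count reductions cited above.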
The vendor ecosystem is mature and commercially active. SHL dominates corporate talent assessment, reporting 4x candidate-throughput gains with AI-assisted adaptive scoring and deploying its Talent Mobility solution (which evaluates 96 behavioural skills in 15 minutes) at General Mills, the Royal Navy (350+ leaders), and global financial-services firms (10,000+ participants). The Adecco Group runs SHL adaptive assessments across 9 brands in 7 languages. In L&D, pharmaceutical firm STADA cut SAP training time by 40% using Area9 Lyceum's adaptive platform while achieving near-perfect competency outcomes. Market analysts project the adaptive learning market reaching USD 11.81 billion by 2035 at a 16.81% CAGR; corporate adoption surveys show 89% of L&D leaders shifting to AI-first platforms and 75% finding adaptive learning effective for engagement. Deployment also extends to adult learners: the University of Massachusetts ASAP partnership (serving 10,000+ adults) demonstrates adaptive assessment at scale in adult literacy and numeracy with ML-driven question sequencing.
Government and education deployments reinforce the breadth. The US IES PIAAC 2023 used adaptive testing for national adult competency evaluation, and IES is funding a $3.8M, five-year research project (the Adult Skills Assessment Project) developing reusable adaptive assessment modules for adult education and community colleges, with a 20,000-learner validation sample. Wales runs statutory adaptive assessments for Years 2-9 literacy and numeracy at national scale; India's IIT Council is piloting adaptive testing for the JEE Advanced entrance exam. In healthcare, a hand-surgery CAT study (268 patients) replicated full-length patient-reported outcome measures with just two questions and 95%+ correlation, while ML-based CAT now achieves 94-96% accuracy across five languages with roughly 10 items.
Institutional budget reallocation signals accelerating adoption momentum. Universities now allocate 18-24% of IT budgets to AI assessment tools (up from ~9% two years prior), with adaptive assessment identified as the primary procurement driver, ahead of content delivery. Pearson's AI-powered assessment engine serves over 4 million active learners in higher education as of Q1 2026, with institutions reporting 15-22% improvement in student-outcome prediction accuracy. Market analysis confirms the growth: the global adaptive learning software market stands at $4.5 billion (2024) and is projected to reach $15.0 billion by 2033 at a 15% CAGR; the AI adaptive learning platform market is valued at $10.6 billion (2025), with corporate training and L&D the largest segment at 38.4% of revenue, projected to reach $83.2 billion by 2034 at a 26.1% CAGR. Patent activity further signals ecosystem maturity: 22+ patent filings in adaptive assessment technology during 2024-2026, versus only ~5 in the foundational phase, with India emerging as the dominant innovation jurisdiction. Skillsoft's platform metrics show explosive enterprise uptake of AI skills-validation infrastructure: 994% year-over-year growth in AI-related skills-validation completions.
Implementation and design barriers remain the primary adoption constraints. Practitioners flag design trade-offs: balancing adaptivity with standardisation for fairness, ensuring content validity, and managing privacy and bias risks. Institution-level surveys identify system usability and content relevance as the top adoption criteria, with technical support the most critical institutional support factor. Assessment integrity under generative-AI pressure requires active design validation: recent peer-reviewed research proposes Differential Item Functioning (DIF) methodology to identify items vulnerable to AI misuse, and critical analysis surfaces design flaws in AI-generated assessment items, necessitating expert review protocols. Standardisation gaps persist in specialised clinical domains, where heterogeneous methods and limited cross-validation slow adoption. Critical implementation research documents that 95% of AI pilot implementations fail to reach production, with root causes identified as organisational factors (70%) rather than technology (10%) or data quality (20%); success requires explicit outcome accountability, workflow embedding, and change-management discipline. Market perspective remains balanced: adaptive learning excels in structured knowledge domains (compliance, product training, technical certifications) but struggles with interpersonal and emotional skills that require human feedback, suggesting continued segmentation in adoption patterns.
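DIF screens of the kind described above are commonly run with the Mantel-Haenszel statistic: responses from a reference group and a focal group are compared within matched ability strata, and a common odds ratio far from 1 flags the item for expert review. A minimal sketch; the 2x2 counts below are invented for illustration, not drawn from any cited study.

```python
def mantel_haenszel_odds_ratio(strata):
    """Common odds ratio across ability strata.

    Each stratum is a 2x2 table (ref_correct, ref_wrong,
    focal_correct, focal_wrong) for one matched ability level.
    A value near 1.0 suggests no DIF; large departures flag the item.
    """
    num = den = 0.0
    for ref_c, ref_w, foc_c, foc_w in strata:
        n = ref_c + ref_w + foc_c + foc_w
        if n == 0:
            continue  # skip empty strata
        num += ref_c * foc_w / n
        den += ref_w * foc_c / n
    return num / den if den else float("nan")

# Two ability strata for one item: the reference group answers
# correctly somewhat more often than the matched focal group.
strata = [(30, 10, 25, 15), (20, 20, 18, 22)]
odds_ratio = mantel_haenszel_odds_ratio(strata)
```

In operational DIF screening the odds ratio is usually converted to the ETS delta scale (-2.35 ln of the ratio) and bucketed into A/B/C severity categories; that classification step is omitted here.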
— Institutional adoption analysis: AI assessment tools now drive 18–24% of university IT budgets (up from ~9% two years prior); adaptive assessment identified as primary procurement driver; Pearson's engine serves 4M+ active learners; institutions report 15–22% improvement in student outcome prediction accuracy.
— Peer-reviewed research in Psychometrika on deep learning CAT: achieves high-precision ability estimation (posterior SD=0.4) with average 11.2 items vs. traditional fixed-length tests, demonstrating technical advancement in adaptive testing efficiency.
— IP landscape analysis documenting innovation acceleration: 22+ patent filings in 2024–2026 vs. ~5 in foundational phase; India dominates with 60% of records; identifies five core technology pillars including adaptive assessment engines and ML-driven item generation.
— Critical analysis citing MIT, BCG, McKinsey, IDC research on AI implementation failure: 95% of organisations get zero measurable return; 88% pilot failure rate; BCG's 70/20/10 rule emphasises organisational change (70%) over technology; identifies success factors: measurable outcomes, workflow embedding, change management, outcome accountability.
— Market research showing ecosystem maturity: $4.5B market (2024) projected to reach $15.0B by 2033 at 15% CAGR; applications across K-12, higher education, corporate training; named vendor ecosystem (SAS, D2L, McGraw-Hill, Wiley, Docebo); regulatory constraints emerging (GDPR, CCPA).
— Cambridge's Adaptive Learn platform (K-8 curriculum) launched as product GA with diagnostic gap identification, progression tracking, and real-time actionable insights for educators; major educational publisher committing to adaptive assessment as core offering.
— 2026 Best Formative Assessment Award finalists: Singapore University AdLeS® adaptive learning system, PARAKH K-12 holistic progress integration, and JUZ40 Kazakhstan UNT exam prep with adaptive checkpoints; named institutional deployments across higher ed, K-12, and exam preparation.
— Survey of 382 HR professionals: 94% use assessments and 51% use AI-enabled tools, yet only 22% are 'very confident' in their ethical use; a 'shadow AI' governance gap means one-third operate AI systems they cannot audit; rising candidate cheating and fairness concerns remain critical adoption frictions.
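The posterior-SD stopping rule in the Psychometrika entry above (halt once the posterior standard deviation of ability falls below a precision target such as 0.4) can be illustrated with a simple grid-based Bayesian update. The deep-learning item selector from the paper is not reproduced here; the 2PL response model, uniform prior, and item parameters are illustrative assumptions.

```python
import math

GRID = [g / 10.0 for g in range(-40, 41)]  # discretised ability scale -4.0 .. 4.0

def bayes_update(posterior, a, b, correct):
    """Multiply the ability posterior by one 2PL response likelihood."""
    updated = []
    for theta, w in zip(GRID, posterior):
        p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
        updated.append(w * (p if correct else 1.0 - p))
    z = sum(updated)
    return [w / z for w in updated]  # renormalise

def posterior_sd(posterior):
    """Standard deviation of the discretised ability posterior."""
    mean = sum(t * w for t, w in zip(GRID, posterior))
    var = sum((t - mean) ** 2 * w for t, w in zip(GRID, posterior))
    return math.sqrt(var)

def administer_until_precise(items, answer_fn, sd_target=0.4):
    """Feed responses until the posterior SD drops below the target."""
    posterior = [1.0 / len(GRID)] * len(GRID)  # uniform prior
    for n, (a, b) in enumerate(items, start=1):
        posterior = bayes_update(posterior, a, b, answer_fn(a, b))
        if posterior_sd(posterior) < sd_target:
            return n, posterior_sd(posterior)  # stop early: precise enough
    return len(items), posterior_sd(posterior)
```

Each response narrows the posterior, so the test ends as soon as the uncertainty target is met, which is how a variable-length adaptive test achieves fixed precision with fewer items than a fixed-length form.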