The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that analyses learning data to predict outcomes, identify at-risk students, and measure engagement patterns. Includes early warning systems and engagement scoring; distinct from skills assessment which evaluates competency rather than predicting trajectories.
Learning analytics can identify at-risk students with increasing precision. Whether those identifications translate into better outcomes remains the practice's defining tension. Forward-leaning universities and K-12 districts have deployed predictive models at meaningful scale, and state-level mandates are now accelerating adoption. The technical capability is proven: recent peer-reviewed research reports 89.1% accuracy and 89.5% F1 scores with Gradient Boosting on large institutional datasets, with practitioner validation confirming system usability (78.4 SUS score). Production systems at named institutions deliver measurable retention gains. But the field has not yet crossed into mainstream practice. Documented racial bias in deployed models, persistent gaps between identification and effective intervention, and a research literature that overwhelmingly neglects learning outcome measurement (a 2020 systematic review of 46 studies found "rigorous, large-scale evidence of effectiveness is still lacking") all constrain broader adoption. Research teams are making demonstrable progress on fairness-aware algorithms (e.g., bias severity reduced from 0.35 to 0.08 and the demographic parity gap narrowed from 15.3% to 4.2%), and the ecosystem is maturing with IES-funded research on fair prediction and open-source toolkits. Yet adoption barriers remain structural and persistent: only 23% of administrators actively assess for bias, intervention effectiveness remains uncertain, and regulatory shifts (COPPA 2026, effective April 22, and FERPA loopholes that enable 1,449 EdTech tools per district affecting 55M students) continue to reshape deployment. Fundamental statistical limits on rare-event prediction (the "Likelihood Ratio Wall") cap achievable fairness independent of algorithm design. The vanguard is getting value; most institutions have not started.
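The fairness metrics cited above are simple to compute in practice. Below is a minimal sketch, using entirely synthetic data (none of it drawn from the cited studies), of the two quantities that recur throughout this analysis: the demographic parity gap in at-risk flag rates, and the per-group false negative rate that bias audits examine.

```python
# Sketch of two group-fairness metrics for at-risk flagging.
# All data below is synthetic and illustrative only.

def demographic_parity_gap(flagged, groups):
    """Largest absolute difference in at-risk flag rates between groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(flagged[i] for i in idx) / len(idx)
    return max(rates.values()) - min(rates.values())

def false_negative_rate(flagged, actual, groups, group):
    """Share of truly at-risk students in `group` that the model missed."""
    missed = at_risk = 0
    for f, a, g in zip(flagged, actual, groups):
        if g == group and a:
            at_risk += 1
            if not f:
                missed += 1
    return missed / at_risk

# Synthetic flags and outcomes for 10 students in two groups
flagged = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
actual  = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
groups  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(demographic_parity_gap(flagged, groups))            # gap in flag rates
print(false_negative_rate(flagged, actual, groups, "B"))  # missed at-risk share
```

The false-negative-rate disparity (19-21% vs 6-12% in the deployed systems discussed below) is exactly what this second function surfaces when run per group; production toolkits such as Fairlearn package the same calculations with more machinery around them.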
Two US states, Utah and Iowa, now mandate early warning systems across all local education agencies, with Panorama Education serving as the primary vendor. That policy momentum, combined with Panorama's reach across 2,000+ K-12 districts and 15M+ students, marks real expansion in deployment footprint. In higher education, Civitas Learning serves 400+ institutions and reports retention gains of 3-11% across its client base, and the SEAtS ONE platform is generally available across 200+ higher education institutions. New deployments confirm continued adoption: the University of Utah deployed dual analytics dashboards in April 2026 for engagement and retention analysis; Broward County Public Schools (one of the nation's largest districts) expanded Panorama Student Success to identify early warning signals tied to attendance, academics, and behavior; and Florida International University and Georgia State University deployed ML models achieving a 7% graduation rate improvement, with stronger gains for underserved populations. IU Indianapolis reduced its retention gap from 19% to 12.7% through data-informed proactive advising, with explainable AI and bias mitigation integrated into production systems. May 2026 evidence confirms the momentum: Reynolds Community College achieved its highest enrollment in 6 years and $1M+ in cost savings via SAS Viya analytics; the University of Arizona deployed systems achieving 90% early-warning accuracy within the first 12 weeks; and community colleges nationally report 11-18pp retention gains from trigger-based at-risk workflows. International deployments expand the evidence base: 3 Nigerian universities with 19,961 student records demonstrate the effectiveness of histogram-based gradient boosting in distinct sociocultural contexts.
These successes, however, sit alongside persistent structural barriers that are intensifying in 2026. A meta-analysis of 936 learning analytics papers found 70% lacked any learning outcome measures, suggesting field-wide research has drifted from educational improvement. Independent analysis of 1,000+ student success initiatives found 40% showed little or no measurable impact. Fairness concerns remain acute and increasingly visible: large-scale real-world studies spanning 600k+ students across 80 education systems demonstrate bias in ML-based risk prediction; deployed systems document false negative rates of 19-21% for Black and Hispanic students compared to 6-12% for White and Asian students; and the Wisconsin Dropout Early Warning System disproportionately flagged African American and Hispanic students despite low actual risk. Yet only 23% of administrators actively assess for algorithmic bias. Fundamental statistical research (the Likelihood Ratio Wall, ACM FAccT 2026) proves that rare-event prediction systems, such as student dropout at 3-8% base rates, face irresolvable fairness constraints at the mathematical level: high precision on positive predictions requires tools far more discriminative than current instruments provide, and demographic groups subject to historic under-service face structurally lower maximum achievable fairness metrics independent of algorithm choice. Regulation is now sharply constraining K-12 deployment: COPPA 2026 (effective April 22) requires parental consent for any AI-powered learning analytics features and mandates data minimization. FERPA governance remains inadequate: the 1974 framework was designed for file cabinets, not cloud-based AI systems, and the average U.S. school district uses 1,449 EdTech tools as potential "school officials," affecting 55M K-12 students.
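The base-rate arithmetic behind the Likelihood Ratio Wall claim can be made concrete with Bayes' rule. Posterior odds equal prior odds times the positive likelihood ratio (LR+), so one can solve for the LR+ a predictor would need to hit a target precision at a given dropout base rate. The sketch below uses illustrative numbers; it is not a reproduction of the FAccT paper's formal result, only the intuition it rests on.

```python
# Sketch: how discriminative must a dropout predictor be to achieve
# high precision at low base rates? Illustrative Bayes arithmetic,
# not the FAccT paper's formal construction.

def required_positive_lr(base_rate, target_ppv):
    """LR+ needed for target precision: posterior odds = prior odds * LR+."""
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = target_ppv / (1 - target_ppv)
    return posterior_odds / prior_odds

# Dropout base rates of 3-8% (as cited above), targeting 90% precision
for p in (0.03, 0.05, 0.08):
    print(f"base rate {p:.0%}: LR+ needed = {required_positive_lr(p, 0.9):.1f}")
```

At a 5% base rate, 90% precision requires LR+ = 9 × 19 = 171; by comparison, a classifier with 90% sensitivity and 90% specificity delivers only LR+ = 0.9/0.1 = 9. That gap is the sense in which current instruments fall short of the wall, independent of which algorithm produces the scores.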
Generative AI integration is advancing, with Panorama's Solara platform in production, but the harder problems of equitable intervention design, regulatory compliance, institutional capacity, and algorithmic fairness remain unresolved.
— University of Florida research coverage documenting field-wide awareness that the pace of AI development exceeds that of fairness research, with specific concerns for equity in educational analytics.
— Multiple named institutions with concrete outcomes: IU Pennsylvania 71%→75% retention, Georgia State 7pp graduation improvement, University of Arizona 90% early-warning accuracy within 12 weeks.
— Frontiers in Psychology peer-reviewed study presents an 'Engagement Dynamics Forecaster' deep learning framework; engagement patterns serve as a leading indicator for early intervention.
— Deployed microservices-based system for within-semester at-risk identification; Random Forest ensemble achieved 94.2% classifier accuracy and R²=0.88, with real-time faculty dashboards.
— ACM FAccT peer-reviewed paper proving fundamental statistical barriers in rare-event prediction (student dropout 3-8% base rates); 'Likelihood Ratio Wall' limits fairness achievability independent of algorithm.
— Reynolds Community College (Virginia) achieved highest enrollment in 6 years and saved over $1 million in 6 months via SAS Viya analytics infrastructure, demonstrating deployment ROI.
— Peer-reviewed study of 19,961 student records across 3 Nigerian universities with a deployed Streamlit model; histogram-based gradient boosting achieved MAE 7.271, addressing sociocultural fairness factors in prediction.
— A record-breaking 372 research submissions (a 9.4% increase) from 46 countries and a 344-researcher program committee signal sustained international engagement and field maturity.