Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain, plotted on an axis from Bleeding Edge to Established.

Learning analytics & student risk identification

LEADING EDGE

TRAJECTORY

Stalled

AI that analyses learning data to predict outcomes, identify at-risk students, and measure engagement patterns. Includes early warning systems and engagement scoring; distinct from skills assessment, which evaluates competency rather than predicting trajectories.

OVERVIEW

Learning analytics can identify at-risk students with increasing precision. Whether those identifications translate into better outcomes remains the practice's defining tension. Forward-leaning universities and K-12 districts have deployed predictive models at meaningful scale, and state-level mandates are now accelerating adoption. The technical capability is proven: recent peer-reviewed research achieves 89.1% accuracy with Gradient Boosting and 89.5% F1 scores with graph deep learning on large institutional datasets, with practitioner validation confirming system usability (78.4 SUS score). Production systems at named institutions deliver measurable retention gains. But the field has not yet crossed into mainstream practice. Documented racial bias in deployed models, persistent gaps between identification and effective intervention, and a research literature that overwhelmingly neglects learning outcome measurement—a recent systematic review of 46 key publications found "rigorous, large-scale evidence of effectiveness is still lacking"—all constrain broader adoption. Research teams are advancing fairness-aware algorithms with demonstrable progress (e.g., a 0.35→0.08 reduction in bias severity and a 15.3%→4.2% improvement in demographic parity), and the ecosystem is maturing with IES-funded research on fair prediction and open-source toolkits. Yet adoption barriers remain structural and persistent: only 23% of administrators actively assess for bias, intervention effectiveness remains uncertain, and regulatory constraints (COPPA 2026, effective April 22; FERPA loopholes that expose 55M students through an average of 1,449 EdTech tools per district) continue reshaping deployment. Fundamental statistical limits on rare-event prediction (the "Likelihood Ratio Wall") cap achievable fairness independent of algorithm design. The vanguard is getting value; most institutions have not started.
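The base-rate arithmetic behind the Likelihood Ratio Wall is worth making concrete, since it explains why precision collapses at dropout-level base rates. The sketch below illustrates only the Bayes'-rule arithmetic, not the FAccT paper's formal result; the 85% sensitivity and 90% specificity figures are hypothetical.

```python
def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """Positive predictive value (precision) via Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# The same classifier (85% sensitive, 90% specific) applied to groups
# with different dropout base rates yields very different precision:
for rate in (0.03, 0.05, 0.08):
    print(f"base rate {rate:.0%}: PPV = {ppv(0.85, 0.90, rate):.1%}")
# base rate 3%: PPV = 20.8%
# base rate 5%: PPV = 30.9%
# base rate 8%: PPV = 42.5%
```

At realistic dropout base rates, most flagged students are not truly at risk unless the instrument is far more discriminative than the hypothetical one above, and no single operating point can equalize precision across groups whose base rates differ; that is the structural limit the overview describes.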

CURRENT LANDSCAPE

Two US states — Utah and Iowa — now mandate early warning systems across all local education agencies, with Panorama Education serving as the primary vendor. That policy momentum, combined with Panorama's reach across 2,000+ K-12 districts and 15M+ students, marks real expansion in deployment footprint. In higher education, Civitas Learning serves 400+ institutions and reports retention gains of 3-11% across its client base, and the SEAtS ONE platform is generally available across 200+ higher education institutions. New deployments confirm continued adoption: the University of Utah deployed dual analytics dashboards in April 2026 for engagement and retention analysis; Broward County Public Schools (one of the nation's largest districts) expanded Panorama Student Success to identify early warning signals tied to attendance, academics, and behavior; and Florida International University and Georgia State University deployed ML models achieving a 7% graduation rate improvement, with stronger gains for underserved populations. IU Indianapolis reduced its retention gap from 19% to 12.7% through data-informed proactive advising, with explainable AI and bias mitigation integrated into production systems. May 2026 evidence confirms expanded deployment momentum: Reynolds Community College achieved its highest enrollment in 6 years and $1M+ in cost savings via SAS Viya analytics; the University of Arizona deployed systems achieving 90% early-warning accuracy within the first 12 weeks; and community colleges nationally report 11-18pp retention gains from trigger-based at-risk workflows. International deployments expand the evidence base: a peer-reviewed study of 19,961 student records across 3 Nigerian universities demonstrates Hist Gradient Boosting effectiveness in distinct sociocultural contexts.
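For readers calibrating what these gradient-boosted deployments involve technically, a minimal sketch of the usual pipeline follows, using scikit-learn's HistGradientBoostingClassifier on synthetic tabular records. Every feature, label, and number here is an illustrative assumption, not any named institution's or vendor's actual schema.

```python
# Minimal sketch of a gradient-boosting at-risk pipeline on tabular
# student records; all data below is synthetic placeholder material.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.uniform(0, 4, n),   # prior-term GPA
    rng.uniform(0, 1, n),   # attendance rate
    rng.poisson(3, n),      # LMS logins per week
])
# Synthetic label: low GPA and low attendance raise risk (~5% base rate).
risk = 0.01 + 0.06 * (X[:, 0] < 1.0) + 0.05 * (X[:, 1] < 0.5)
y = rng.random(n) < risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = HistGradientBoostingClassifier(class_weight="balanced")
model.fit(X_tr, y_tr)
print("held-out F1:", round(f1_score(y_te, model.predict(X_te)), 3))
```

In practice the estimator is rarely the hard part; the work sits upstream in feature pipelines from SIS, LMS, and attendance data, and downstream in choosing an alert threshold advisors can realistically act on.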

These successes, however, sit alongside persistent structural barriers that are intensifying in 2026. A meta-analysis of 936 learning analytics papers found 70% lacked any learning outcome measures, suggesting field-wide research has drifted from educational improvement. Independent analysis of 1,000+ student success initiatives found 40% showed little or no measurable impact. Fairness concerns remain acute and increasingly visible: large-scale real-world studies across 600k+ students in 80 education systems demonstrate bias in ML-based risk prediction; deployed systems document false negative rates of 19-21% for Black and Hispanic students compared to 6-12% for White and Asian students; and the Wisconsin Dropout Early Warning System disproportionately flagged African American and Hispanic students despite low actual risk. Yet only 23% of administrators actively assess for algorithmic bias. Fundamental statistical research (the Likelihood Ratio Wall, ACM FAccT 2026) proves that rare-event prediction systems (student dropout at 3-8% base rates) face irresolvable fairness constraints at the mathematical level: high precision on positive predictions requires tools far more discriminative than current instruments provide, and demographic groups subject to historic under-service face structurally lower maximum achievable fairness metrics independent of algorithm choice. Regulation is now sharply constraining K-12 deployment: COPPA 2026 (effective April 22) requires parental consent for any AI-powered learning analytics features and mandates data minimization. FERPA governance remains inadequate—the 1974 framework was designed for file cabinets, not cloud-based AI systems; the average U.S. school district uses 1,449 EdTech tools as potential "school officials," affecting 55M K-12 students. Generative AI integration is advancing with Panorama's Solara platform in production, but the harder problems of equitable intervention design, regulatory compliance, institutional capacity, and algorithmic fairness remain unresolved.
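Given that only 23% of administrators actively assess for algorithmic bias, it is worth showing how little code the first step of such an audit requires. The sketch below computes per-group false negative rates, the metric behind the 19-21% versus 6-12% disparity cited above; the arrays are hypothetical placeholders, not data from any deployed system.

```python
# Per-group fairness audit sketch: false negative rate by demographic
# group. All inputs are hypothetical placeholders.
import numpy as np

def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of truly at-risk students the model failed to flag."""
    at_risk = y_true == 1
    if not at_risk.any():
        return float("nan")
    return float(np.mean(y_pred[at_risk] == 0))

def fnr_by_group(y_true, y_pred, group):
    """False negative rate computed separately for each group label."""
    return {str(g): false_negative_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)}

y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(fnr_by_group(y_true, y_pred, group))  # {'A': 0.333..., 'B': 0.5}
```

A gap between groups here means the system silently misses truly at-risk students in one population more often than another; closing it is what the per-group thresholds and fairness-aware training methods cited above attempt.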

TIER HISTORY

Research: Jan-2017 → Jan-2017
Bleeding Edge: Jan-2017 → Jan-2021
Leading Edge: Jan-2021 → present

EVIDENCE (136)

— University of Florida research coverage documenting field-wide awareness that AI development pace exceeds fairness research, with specific concerns for educational analytics equity.

— Multiple named institutions with concrete outcomes: IU Pennsylvania 71%→75% retention, Georgia State 7pp graduation improvement, University of Arizona 90% early-warning accuracy within 12 weeks.

— Frontiers in Psychology peer-reviewed study presents an 'Engagement Dynamics Forecaster' deep learning framework; engagement patterns serve as a leading indicator for early intervention.

— Deployed microservices-architecture system for at-risk identification within the semester; Random Forest ensemble achieved 94.2% classifier accuracy and R²=0.88, with real-time faculty dashboards.

— ACM FAccT peer-reviewed paper proving fundamental statistical barriers in rare-event prediction (student dropout 3-8% base rates); 'Likelihood Ratio Wall' limits fairness achievability independent of algorithm.

— Reynolds Community College (Virginia) achieved highest enrollment in 6 years and saved over $1 million in 6 months via SAS Viya analytics infrastructure, demonstrating deployment ROI.

— Peer-reviewed study of 19,961 student records across 3 Nigerian universities with deployed Streamlit model; Hist Gradient Boosting achieved MAE 7.271, addressing sociocultural fairness factors in prediction.

— Record-breaking 372 research submissions (9.4% increase) from 46 countries, with a 344-researcher program committee, signal sustained international engagement and field maturity.

HISTORY

  • 2017: Panorama and Civitas Learning achieve market scale (5M and 30% reach, respectively) with documented persistence and revenue outcomes at named institutions; research literature identifies critical adoption barriers in communication, data literacy, and ethical governance despite successful deployments.
  • 2018: State-level adoption expands (Utah USBE pilot); empirical research reveals limitations of simple metrics; real deployments show double-digit harms from poorly designed alerts; institutional privacy governance emerges as critical gap.
  • 2019: Broad institutional adoption reaches 1,400+ colleges and universities deploying predictive analytics; Panorama claims 900+ districts, Civitas 350+ colleges; Civitas reports only 60% of analytics-driven interventions show positive impact; vendor business challenges surface alongside expanded deployments; investigative journalism documents equity and surveillance concerns; research validates closed-loop approaches but raises questions about teacher engagement and algorithmic harm.
  • 2020: Market consolidates around Panorama and Civitas with deepened institutional integration; real-world deployments expand in scale (30K+ student districts, 3K+ participant pilots, 100+ variable intervention systems) with higher technical accuracy (90% mid-semester prediction). Research literature simultaneously documents systemic design gaps in learning analytics dashboards (theory-grounding, pedagogical support, evaluation rigor) and raises critical concerns about potential harms in K-12 (privacy, bias, stereotype threat) and fairness trade-offs in algorithmic risk prediction. Practice maturity remains asymmetrical: strong operational deployment and technical capability at scale, but persistent gaps in pedagogical effectiveness and ethical governance.
  • 2021: Market consolidation continues; Civitas expands to 400+ institutions (8M+ students) with quantified outcomes (46% completion increase, 6% retention lift). K-12 deployment breadth expands with Infinite Campus early warning across 2,000+ districts in 45 states. Real-world evidence demonstrates operational maturity: Czech Technical University dropout reduction (37%→19%), Boston Public Schools early identification framework, UCF adaptive models reducing nonsuccess rates. Algorithmic fairness concerns intensify as deployments scale; privacy governance gaps remain unresolved. Practice remains bleeding-edge: proven deployment impact alongside unresolved ethical and governance questions.
  • 2022-H1: Market scale continues with Civitas serving 400+ global HEIs (8M+ students) and Panorama expanding behavior analytics. Research confirms adoption barriers remain "sporadic and small-scale" despite technical advancement (85% grade prediction accuracy, 90%+ mid-semester identification). Post-pandemic assessment reveals systemic inadequacies: experts call for next-generation systems beyond traditional indicators. Algorithmic fairness becomes acute concern; research highlights bias in protected attributes and need for fair-AI methodologies. Structural gaps persist: privacy governance, unequal intervention outcomes, and consent frameworks remain unresolved despite operational maturity.
  • 2022-H2: Deployment momentum continues with K-12 districts integrating social-emotional learning analytics (Monticello CSD via Panorama); peer-reviewed research validates behavioral pattern detection for at-risk student identification. Critical discourse emerges on pedagogical and autonomy trade-offs: ethics scholars question whether risk-prediction systems undermine self-determined learning despite their technical efficacy. Structural barriers remain: adoption sporadic, fairness concerns unresolved, institutional consent frameworks absent. Practice maturity asymmetrical: strong technical capability and real institutional deployments offset by persistent gaps in ethical design and equitable outcomes.
  • 2023-H1: Deployment scale continues with new cases (Highline Public Schools, 18K students), though growth rate slows from 2022 peak. Research turns toward adoption maturity: studies document persistent barriers including low vendor trust (privacy/ethics concerns), data governance gaps, and low organizational capability for adoption despite technical accuracy gains (85%+ grade prediction). Precision-refinement research (Maastricht dropout prediction) shows technical sophistication advancing beyond simple early indicators. Practitioner commentary identifies ongoing gaps in real-time data utilization for decision-making, particularly for equity-focused intervention targeting. Practice remains in leading-edge phase: validated technical capability and extended vendor adoption, but structural adoption barriers (trust, governance, organizational capacity) unresolved and adoption remains below potential at institutional scale.
  • 2023-H2: Deployment expansion continues (University of Central Oklahoma $1.2M retention gains via data-activated campaigns; Durham Public Schools 33K-student MTSS adoption; 48-institution survey documenting 6.9pp degree-planning retention gains). However, peer-reviewed systematic review (38 studies) finds no evidence that learning analytics dashboards improved academic achievement, though participation gains confirmed. Student acceptance remains high (80% support in UK HE survey) but practitioner discourse documents persistent gap between analytics efficacy at identification and actual intervention effectiveness. Practice consolidates around technical maturity and operational deployment at scale (Civitas 400+ institutions, Panorama K-12 breadth) but literature confirms core tension unresolved: strong identification capability offset by uncertain or limited impact on actual student outcomes and equitable intervention delivery.
  • 2024-Q1: Research focus shifts toward fairness and dashboard design maturity. EDM 2024 papers address algorithmic bias in performance prediction across demographics with fairness-aware ML approaches. Systematic review of 23 LAD studies identifies critical design gaps: most dashboards lack meaningful adaptation (automated or user-controlled) for learner awareness. Vendor landscape stable (Panorama confirmed 2K K-12 partners, Civitas 400+ HEIs) but growth rate plateaus. School leader discourse shifts from privacy compliance to data justice frameworks addressing discrimination, exploitation, and FERPA governance. Adoption barriers remain structural: teacher trust in vendors low despite technical accuracy gains; organizational capacity gaps persist; fairness concerns elevated alongside ethical consciousness. Practice maturity consolidates: strong technical capability and established deployment base, but design limitations and fairness gaps unresolved.
  • 2024-Q2: Deployment momentum continues with new university deployments showing positive outcomes (Northwest Missouri State 8% retention lift), and peer-reviewed research advances pragmatic implementation frameworks using social network analysis for institutional integration. However, regulatory and fairness concerns sharpen: Netherlands Human Rights Board formal study identifies algorithmic bias and discrimination risks, recommending stricter testing and approval requirements for educational analytics systems. Dashboard adaptation gaps persist in literature. Practice remains leading-edge: proven technical and deployment capability with quantified outcomes, but fairness, regulatory compliance, and equitable intervention challenges unresolved and increasingly visible.
  • 2024-Q3: Deployment breadth expands to state level with Massachusetts EWIS production launch; vendor platforms continue releasing retention-focused features (Snow College 12% lift via targeted interventions). Peer-reviewed research confirms pedagogical evolution of dashboards toward learning-centered design and validates early identification via behavioral signals (LSTM clickstream models). However, fairness escalates as critical adoption blocker: large-scale studies document significant racial bias in deployed predictive models (19-21% false negatives for Black/Hispanic vs. 6-12% for White/Asian students), confirming systematic disadvantage and supporting regulatory concerns. Practice maturity remains asymmetrical: strong technical deployment and validated retention outcomes offset by persistent fairness, design, and organizational capacity gaps.
  • 2024-Q4: Vendor ecosystem innovation continues with Panorama launching Solara (AI chat tool integrating analytics for K-12 districts), and procurement documents showing Civitas sustained adoption investment. However, Q4 evidence surfaces critical intervention effectiveness and fairness gaps: rigorous RCT of EWIMS (73 schools, 37K students) shows partial success (4pp chronic absence reduction, 5pp course failure reduction) but no impact on low GPAs/suspensions/progress; UK university RCT finds no measurable intervention outcome difference between email-only and email+phone support prompted by analytics; fairness research documents Black students flagged with lower accuracy in both prior-performance and ML systems. State-level Nevada deployment controversy (Infinite Campus model) documents stakeholder concerns about effectiveness and student welfare. Practice maturity paradox sharpens: identification technical capability proven and deployed at scale, but intervention effectiveness uncertain, fairness gaps documented and unresolved, and real-world adoption facing stakeholder skepticism and regulatory/ethical scrutiny.
  • 2025-Q1: Policy-driven adoption expands with Utah state mandate requiring early warning systems across all LEAs using Panorama Education (50% cost-shared), signaling large-scale institutional commitment. Vendor ecosystem matures with Panorama's Solara AI integration (Focus/Insights features, 450+ district testing) and Civitas-RNL partnership targeting measurement gaps. Research documents technical advancement (graph deep learning prediction, self-regulated learning dashboard effectiveness) but independent analysis reveals 40% of student success initiatives lack measurable impact, confirming persistent adoption and intervention effectiveness barriers despite policy support.
  • 2025-Q2: Generative AI integration reaches production scale with Panorama Solara deployment across 380,000 students in 25 states, surfacing early-warning indicators via Claude 3.7 on AWS infrastructure with FERPA/COPPA compliance. Research advances prediction methodology (human-centered explainability frameworks, federated learning for privacy preservation, early detection by week one) while documenting fairness trade-offs (racial bias in false negatives across deployed systems). Deployment momentum continues (adoption-metric and intervention case studies) but effectiveness barriers persist: EWIMS RCT outcomes show partial success (4pp chronic absence reduction, 5pp course failure reduction) with gaps in GPA/suspension impact. Practice trajectory: sustained technical advancement and production deployment of generative AI augmentation alongside unresolved fairness gaps and intervention effectiveness uncertainty.
  • 2025-Q3: Vendor ecosystem expansion continues with Panorama-Skyward SIS partnership (2,500+ districts) and Panorama scale confirmed at 15M+ students across 2,000 districts. Research reveals critical field misalignment: meta-analysis of 936 LAK papers finds 70% lack learning outcome measures and research focus has drifted from educational improvement, questioning field-wide effectiveness. Qualitative research documents persistent student concerns about privacy and bias despite high acceptance of analytics use. Deployment case studies (Crown College) demonstrate sustained institutional outcomes (89% retention), but ecosystem research confirms adoption barriers remain structural and unresolved. Practice maturity: advanced deployment capability with ecosystem integration, but research trajectory and field evolution signal caution about intervention effectiveness improvement and learning outcome impact.
  • 2025-Q4: State-level policy-driven adoption accelerates with Iowa mandating integrated early warning systems across all LEAs using Panorama Education. Civitas Learning continues deployment with demonstrated retention gains (3-11%) and completion improvements (2-13%); product maturity advances. Research documents adoption barriers: mixed-methods study shows staff perceive analytics as highly useful for risk identification but ethical concerns and cultural resistance persist. Critical governance failures surface: legal challenges to Panorama's SEL survey practices in K-12 highlight consent violations and privacy governance gaps. Higher education analysis documents 5% retention gains from targeted interventions but reveals faculty adoption lags. Practice consolidates around deployment capability and demonstrated retention outcomes, but governance, consent, and faculty adoption barriers prevent broader institutional maturity.
  • 2026-Jan: Technical advancement continues with heterogeneous graph deep learning achieving 89.5% F1 scores and 68.6% early detection by week one; equity-focused research extends predictive analytics to low-resource schools with bias-aware ensemble models. University deployments show positive outcomes (IU Indianapolis: 19%→12.7% retention gap reduction via data-informed advising). However, adoption barriers intensify: research reveals 73% of educational AI systems exhibit measurable bias with only 23% of administrators actively assessing; student expectation research (SELAQ) documents substantial gaps between student ideal and expected LA features, indicating trust and privacy concerns as adoption blockers. Deployment case studies and technical innovations sustained, but fairness concerns and student perception barriers widen. Practice maturity: advanced technical capability with selected positive deployments offset by intensifying equity, trust, and design perception gaps.
  • 2026-Feb: Policy-driven adoption continues with Utah statewide early warning system mandate (Panorama Education vendor selection), supporting regulatory compliance (Utah Code 53F-4-207) and expanding K-12 deployment. Panorama scale confirmed at 2,000+ districts serving 15M+ students with documented outcome metrics (15% reading improvement, 8% absence reduction, 26pt grade-level gains, 80pt suspension reduction). Ethical implementation research advances: IU Indianapolis case study demonstrates production-scale deployment with explainable AI, bias mitigation, and proactive advising framework addressing fairness concerns. Transformer-based methodology research (sequence-aware models) continues advancing prediction capability. Vendor ecosystem reinforces leading-edge maturity through product advancement and policy alignment. Practice momentum: continued policy acceleration and technical innovation offset by persistent fairness gaps and intervention effectiveness questions requiring ongoing attention.
  • 2026-Mar: Deployment evidence deepens with Broward County (one of the nation's largest K-12 districts) expanding Panorama Student Success, and FIU/Georgia State deploying ML models achieving 7% graduation rate improvement with stronger gains for underserved students. Ensemble model research confirms 90.9% retention prediction accuracy on 105K+ records. Regulatory pressure intensifies sharply: COPPA 2026 (effective April 22) requires parental consent for AI-powered analytics features and mandates data minimization, directly constraining how K-12 risk identification systems can operate. Fairness concerns remain acute — the Wisconsin Dropout Early Warning System is documented disproportionately flagging African American and Hispanic students despite low actual risk, with the average U.S. district using 1,449 EdTech tools affecting 55M students through FERPA loopholes.
  • 2026-Apr: Fairness and evidence maturity research advances significantly. Peer-reviewed systematic review of 46 key learning analytics publications confirms "rigorous, large-scale evidence of effectiveness is still lacking"—a key negative signal for field maturity. Bias mitigation research demonstrates measurable progress: IES-funded Fair MARS fairness-aware prediction model with open-source toolkit; peer-reviewed study achieves 0.35→0.08 Bias Severity Index reduction and 15.3%→4.2% Demographic Parity improvement using ADRL + SHAP explainability. Systematic review of ML approaches for student performance prediction (MOOCs/LMS) identifies Random Forest, SVM, and Decision Trees as dominant algorithms while highlighting adoption gaps in explainability and intervention evaluation. New institutional deployments confirm continued adoption: University of Utah deploys dual analytics dashboards for engagement and retention analysis; Ohio Wesleyan achieves early retention gains through consulting-driven predictive analytics. A large-scale real-world study across 600k+ students in 80 education systems reinforces fairness concerns in deployed ML-based risk models, while a practitioner critique flags FERPA's inadequacy for cloud-based AI systems as a structural governance constraint. Market evidence documents significant maturation: a projected $7.83B market by 2030 (23.5% CAGR) and major tech vendor commitment ($4.8B KKR acquisition of Instructure/Canvas for analytics integration). However, negative signals persist: PowerSchool settlement documents surreptitious student data collection and regulatory risks; edtech consolidation analysis reveals widespread unused licenses (67% unused, 30% activation) affecting analytics tool ROI and adoption.
  • 2026-May: Deployment momentum continues with evidence of expanded institutional adoption and international scaling. Reynolds Community College (Virginia) achieved its highest enrollment in 6 years and $1M+ in cost savings via SAS Viya analytics infrastructure. Peer-reviewed research from 3 Nigerian universities (19,961 student records) validates ML-based prediction in non-US contexts with attention to sociocultural factors. University of Arizona and community colleges report strong outcomes (90% accuracy within 12 weeks, 11-18pp retention improvements). Technical capability research advances: systems achieving 94.2% classifier accuracy with real-time faculty dashboards; deep learning engagement forecasting identifies leading indicators; psychological health prediction extends risk assessment beyond academic metrics. LAK'26 conference receives a record 372 submissions (9.4% increase, 46 countries), signaling sustained international research engagement. Critical fairness research demonstrates fundamental mathematical limits: the Likelihood Ratio Wall (peer-reviewed at ACM FAccT) proves that rare-event prediction systems (dropout 3-8% base rates) face irresolvable precision-fairness trade-offs; demographic groups with historic under-service face structurally lower achievable fairness metrics regardless of algorithm. Field awareness of pace-fairness gap intensifies (UF research coverage). Practice maturity: accelerated deployment and international expansion offset by deepening recognition of fundamental fairness constraints and persistent intervention effectiveness gaps.

TOOLS