The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that adapts lesson difficulty, pacing, and content presentation to individual student ability and learning progress. Includes mastery-based progression and spaced repetition; distinct from adaptive assessment which tests rather than teaches.
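The mastery-based progression this definition names can be made concrete with a small sketch. Everything below is illustrative, not drawn from any vendor's system: the 0.9 threshold, the moving-average mastery estimate, and the skill names are all hypothetical placeholders for the idea that content unlocks only once prerequisites are mastered.

```python
from dataclasses import dataclass, field

MASTERY_THRESHOLD = 0.9  # hypothetical: advance once estimated mastery exceeds 90%

@dataclass
class Skill:
    name: str
    prerequisites: list = field(default_factory=list)
    mastery: float = 0.0  # running estimate in [0, 1]

def update_mastery(skill: Skill, correct: bool, rate: float = 0.3) -> None:
    """Exponential moving average of recent correctness as a crude mastery proxy."""
    skill.mastery += rate * ((1.0 if correct else 0.0) - skill.mastery)

def unlocked(skill: Skill, skills: dict) -> bool:
    """A skill becomes available only when every prerequisite is mastered."""
    return all(skills[p].mastery >= MASTERY_THRESHOLD for p in skill.prerequisites)

skills = {
    "fractions": Skill("fractions"),
    "ratios": Skill("ratios", prerequisites=["fractions"]),
}

# Simulate a run of correct answers on the prerequisite skill.
for _ in range(10):
    update_mastery(skills["fractions"], correct=True)

print(unlocked(skills["ratios"], skills))  # True: ratios unlocks once fractions is mastered
```

Real platforms replace the moving average with calibrated learner models (e.g. knowledge tracing), but the gating logic — estimate mastery, compare to a threshold, unlock dependents — is the core of mastery-based progression.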
Adaptive tutoring that personalizes pacing and difficulty is operationally mature and widely deployed as a supporting practice—but the field faces an entrenched implementation divide between well-designed systems and unsupervised approaches. May 2026 evidence crystallizes this boundary: a massive RCT on Siyavula (160,000 students) shows that pedagogically-designed UI prompts (+2%) and visual nudges (+9%) measurably improve persistence in failure recovery, while a May meta-analysis confirms an effect size of 0.586 for AI-teaching effectiveness across 72 studies. Yet high-profile deployment failures persist: Khanmigo's "quiet collapse" (May 2026) documents engagement barriers and knowledge transfer limits despite personalized design. The practice has converged on a hybrid human-AI model—adaptive pacing as a blended-learning support tool under teacher oversight, not autonomous tutoring. Market evidence (May 2026) shows AI personalized learning platforms valued at $6.0B (2025) growing to $31.6B by 2034 (20% CAGR), yet practitioner research identifies that 75% of AI tutors fail to retain users by day 30. The field's central lesson is clear: implementation design and pedagogical guardrails, not algorithmic sophistication alone, determine efficacy. Institutional adoption continues (607 US K-12 technology leaders report accelerating deployment), and large-scale platforms (ALEKS 7M+ users, Pearson showing 90% proficiency gains) prove viability in structured domains. But equity gaps, persistent implementation barriers (training, cost, algorithm bias), and documented failure modes of unsupervised AI access remain binding constraints on scaled pedagogical impact.
McGraw-Hill's ALEKS dominates the institutional market with 7M+ global users, backed by 25 years of research; students who reach its mastery thresholds are 2.7x more likely to achieve proficiency. Institutional vendor data (May 2026) shows Pearson's AI-powered adaptive practice platform delivering a 90% improvement in proficiency achievement versus static content. Large-scale independent deployments confirm viability: Siyavula (May 2026 RCT on 160,000 students) demonstrated that UI design interventions—written prompts (+2% persistence) and visual nudges (+9%)—measurably improve student persistence through failure recovery; Efekta serves 4M students (largest adaptive AI learning deployment globally); EduAdapt serves 1.8M+ students across 250+ universities; Compass serves 50K children with 500M+ learning interactions. Market valuation (May 2026) places AI personalized learning at $6.0B (2025) growing to $31.6B by 2034 (20% CAGR), with K-12 adoption surveys across 607 US education leaders showing accelerating institutional confidence.
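As a quick sanity check on the market figures above — $6.0B in 2025 growing to $31.6B by 2034 at a stated ~20% CAGR — the standard compound-growth formula FV = PV × (1 + r)^n shows the numbers are mutually consistent to rounding (the exact implied rate is just over 20%):

```python
def project(value: float, cagr: float, years: int) -> float:
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Cited figures: $6.0B in 2025 -> $31.6B by 2034, i.e. 9 years of growth.
projected_2034 = project(6.0, 0.20, 2034 - 2025)
print(round(projected_2034, 1))  # 31.0 — close to the cited $31.6B

# The exact CAGR implied by the two endpoints:
implied = (31.6 / 6.0) ** (1 / 9) - 1
print(f"{implied:.1%}")  # 20.3% — matching the cited ~20% figure
```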
However, research meta-analyses and practitioner evidence expose persistent implementation divides. A meta-analysis of 72 AI-teaching effectiveness studies (May 2026, Frontiers) confirms positive effects (effect size 0.586) yet highlights that heterogeneity is "driven by AI type and implementation context"—not algorithmic sophistication alone. Critical practitioner analysis (May 2026) documents that 75% of AI tutors fail to retain users by day 30, identifying five architectural pillars required for sustained engagement (curriculum knowledge bases, mastery models, pedagogical guardrails, conversational capability, engagement engines). May research on ML-optimized study schedules (50,700 learners) shows powerful cognitive benefits (69% longer retention, 50% higher re-engagement rates), yet randomized assignment to spacing algorithms versus conventional rules reveals the critical success factor: implementation design, not AI alone, determines learning gains. Deployment failures illustrate the boundary: Khanmigo's "quiet collapse" (May 2026 practitioner analysis) documents why personalized tutors fail despite design—engagement barriers and knowledge-transfer gaps persist without strong pedagogical guardrails and human accountability. A systematic literature review (May 2026) covering 22 higher-education studies found that classroom environment design explains ~16% of performance variance—comparable to the AI intervention itself—signaling that institutional context shapes outcomes as much as algorithmic sophistication. Professional development, teacher monitoring, careful subject selection, and pedagogical alignment remain binding constraints on scaled efficacy.
— Critical practitioner analysis documenting Khanmigo failure despite personalized design; identifies engagement barriers, knowledge transfer limitations, and equity concerns with AI-only models.
— Systematic review of 22 studies (1984–2026) showing AI-tailored learning paths yield 0.42–0.76 SD improvement. Finds classroom environment design explains ~16% of variance—comparable to AI itself.
— Survey of 607 K-12 education technology leaders across 44 US states documenting AI adoption, instructional initiatives, and leadership confidence in personalized learning.
— Large-scale RCT (160,000 students, 17 million practice problems) on Siyavula adaptive tutoring system showing written prompts (+2% persistence) and visual nudges (+9% persistence) after incorrect answers increase student persistence, with additive effects (+11% combined).
— Pearson deployment outcome: 90% improvement in proficiency achievement with AI-powered adaptive practice vs traditional static practice at college level.
— Randomized experiment (50,700 learners) comparing ML-optimized study schedules to spacing rules: ML group remembered material 69% longer and were 50% more likely to return to studying.
— Critical analysis by an EdTech builder identifying why 75% of AI tutors fail by day 30 (under 12% retention) and defining the 5-pillar architecture required for sustained engagement.
— Meta-analysis of 72 studies (35 pre-test-post-test designs, 37 post-test studies) showing significant positive AI-teaching effects: post-test effect size g_p = 0.586; heterogeneity driven by AI type and implementation context.
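The spacing experiment cited above compared ML-optimized schedules against fixed conventional rules. The sketch below is a deliberately simplified illustration of what a performance-adaptive scheduler does differently — the ease factor and interval rule are hypothetical (loosely SM-2-flavored), not the algorithm from the cited study:

```python
def next_interval(prev_interval_days: float, recalled: bool, ease: float = 2.0) -> float:
    """Performance-adaptive spacing: widen the gap after successful recall,
    shrink it after a lapse. A fixed-rule scheduler would ignore `recalled`
    and return the same preset sequence for every learner."""
    if recalled:
        return prev_interval_days * ease      # success -> longer gap
    return max(1.0, prev_interval_days / ease)  # lapse -> review again soon

# Simulate one learner's review history: two successes, a lapse, a success.
interval = 1.0
for outcome in [True, True, False, True]:
    interval = next_interval(interval, outcome)
print(interval)  # 4.0 days until the next review
```

The pedagogical point tracked in the evidence: the algorithm itself is simple; the measured gains depend on whether the schedule adapts to each learner's recall performance rather than applying one fixed rule.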
2018: Adaptive learning platforms (ALEKS, Knewton Alta, CogBooks) transitioned from pilot to institutional production deployment. ASU achieved 20-point success rate improvement in college algebra; Knewton Alta adoption reached 250 institutions with 87% mastery rates in vendor data. Peer-reviewed research documented persistent barriers to adoption.
2019: Evidence of both capability and limitations emerged. Third-party validation (Johns Hopkins) confirmed efficacy; Squirrel AI demonstrated large-scale commercial viability (2M+ students, 2000+ centers in China). Practitioner case studies revealed critical failure modes (ChalkTalk's 8% success rate) and evidence of cognitive biases in learner self-evaluation within adaptive learning technologies. Expert analysis (RAND) narrowed appropriate scope to structured domains and teacher-supporting roles. Market correction was evident as Knewton collapsed (raised $180M, sold for ~$10M in a fire sale).
2020: Platforms consolidated around incumbent publishers and established scale in specific markets. ALEKS deployed across international institutions (Philippines case study with 82% confidence gains); Squirrel AI reached 2.6k learning centers serving 2M students in China. Systematic reviews confirmed research maturity. Efficacy evidence (Korbit study: 2-2.5x learning gains) competed with evidence of persistent limitations (human-in-the-loop models required, effectiveness limited to narrow domains). Market consolidation accelerated (McGraw-Hill with ALEKS, Wiley acquiring Knewton assets).
2021: Research and deployment evidence continued to accumulate despite practitioner skepticism. Peer-reviewed studies confirmed adaptive learning efficacy across multiple modalities (speech and typing interfaces); Taiwan research documented achievement gains in online courses. ALEKS expanded institutional footprint with multi-campus deployments; Knewton's scaled metrics (15B+ personalized recommendations) contrasted with its business collapse (acquisition at steep discount), reinforcing evidence that technical capability does not guarantee market viability. Critical practitioner voices challenged hype and questioned whether adaptive personalization genuinely replaced human interaction. Stakeholder research (23-participant study) documented persistent concerns about social boundary violations despite recognition of personalization benefits.
2022-H1: Evidence consolidated around realistic capabilities and limitations. ALEKS showed measurable K-12 gains (5.1% increase in highly proficient scores on state assessments), with stronger effects for disadvantaged students; Knewton Alta received industry recognition (second SIIA CODiE award). Simultaneously, peer-reviewed research documented clear constraints: teacher-led instruction outperformed ALEKS for underachieving students; integration exceeded teachers' digital competence in primary classrooms. Engineering advances (neural network assessment improvements) demonstrated platform maturation, while growing research on implementation infrastructure (teacher dashboards, professional development) indicated the practice required substantial supporting ecosystem to work effectively. Consensus solidified that adaptive pacing is a powerful supporting tool for specific subjects and contexts, not primary instruction.
2022-H2: Academic engagement and practitioner perspectives dominated evidence. Peer-reviewed research topic in Frontiers on AI personalization techniques signaled sustained scholarly attention; practitioner surveys from Finland revealed skepticism about teacher replacement but acknowledged self-directed learning benefits. Deployment-level evidence strengthened: Every Learner Everywhere report showed 26,400 students across 12 institutions with equity gap narrowing by third term of adaptive courseware use. Implementation research (Tutti dashboard study) documented real-world integration of adaptive systems requiring teacher oversight and real-time monitoring, reinforcing human-in-the-loop necessity. News coverage reflected broader AI tutor adoption with persistent efficacy doubts, especially for language learning. By year-end 2022, adaptive pacing had shifted from growth narrative to established practice with well-understood strengths (structured domains, disadvantaged student support, measurable gains) and clear deployment prerequisites (teacher training, dashboard support, subject-matter fit).
2023-H1: Platform capability advancement continued alongside broader AI adoption narratives. McGraw-Hill deployed neural network enhancements to ALEKS in April, achieving 20% reduction in assessment time and 9% material mastery increase, signaling ongoing algorithmic refinement in major platforms. Industry adoption metrics showed acceleration: 25% of global education organizations reported successful AI implementation (up from 14% in 2019), with intelligent adaptive learning identified as a disruptive technology. Implementation practice data from lighthouse institutions revealed gaps in equity-focused adoption: only 25% of faculty used student performance data for weekly instructional modifications and 38% integrated adaptive learning with inclusive teaching practices, indicating implementation infrastructure remained limiting factor. Consistent with prior windows, evidence affirmed both capability (measurable gains in structured domains) and constraints (teacher-led instruction remained superior for underachieving students; full automation ineffective without pedagogical oversight).
2023-H2: Research validation strengthened alongside deployment evidence, establishing clearer boundaries for effective AI-tutoring implementation. Carnegie Mellon's three-school quasi-experimental study demonstrated that hybrid human-AI tutoring models work at scale (585 students, $700 annual cost) with lower-achieving students showing greater gains, validating the teacher-augmentation hypothesis. However, critical limitations became more visible: University of Würzburg research showed large language models tested as standalone tutors achieved only 82% accuracy on thermodynamics questions—well short of the 95% threshold needed for reliable unsupervised tutoring, indicating AI-only approaches remain immature. Industry metrics from McGraw-Hill reinforced platform consolidation, with ALEKS showing 14% year-to-date activation growth and digital revenue at 61% of total business, reflecting institutional commitment despite maturing competitive landscape. Academic perspectives (ECER 2023) questioned whether adaptive learning was driven by genuine pedagogical evidence or primarily by commercial and policy promotional narratives. By year-end 2023, the field had solidified around a hybrid human-AI model with well-documented scope boundaries (structured domains, teacher oversight required), clear cost economics ($700/student), and specific deployment prerequisites (professional development, monitoring infrastructure). Full automation had effectively failed; personalized adaptive pacing had become a supporting tool within blended learning ecosystems rather than a replacement for instruction.
2024-Q1: Research validation broadened beyond mathematics and science. A meta-analysis of 27 studies confirmed adaptive learning's positive effect on reading literacy (g=0.29), extending evidence of efficacy to new domains. Real-world deployment data showed EIDU's adaptive learning tool serving 225,000 students across 4,000 schools in Kenya, demonstrating viability in low-resource settings with algorithmic pacing and feedback. Market consolidation continued with McGraw-Hill ALEKS maintaining 14% activation growth and 61% digital revenue share. However, critical expert assessments reinforced maturity boundaries: technologist Satya Nitta (IBM Watson) documented a five-year failed attempt to build generalized AI tutors, citing hallucination and fundamental limitations requiring human oversight. New market entrants (Eureka Labs, launched by OpenAI researcher Andrej Karpathy; PeTai; OIAI) announced personalized tutoring platforms, though without independent deployment validation. The consensus remained stable: adaptive pacing is a proven supporting tool for structured subjects within blended learning, effective for lower-achieving learners, but dependent on teacher integration and operational infrastructure; AI-only tutoring approaches had not matured beyond proof-of-concept.
2024-Q2: Evidence consolidation confirmed mature operational status with emerging implementation barriers. McGraw-Hill's FY2024 results documented 7M+ ALEKS users and 12% growth, sustaining scale. New research expanded efficacy evidence to adult learners (Apprentice Tutors) and higher education (ALS-LMS integration showing statistically significant performance gains); peer-reviewed adoption research from rural China identified key enablers (system quality, teacher support, computer self-efficacy). However, practitioner research surfaced critical limitations: foundry10's study of 25 classroom teachers found that while adaptive tools improved efficiency and engagement, teachers faced persistent barriers (training gaps, implementation support deficits, data reliability issues). An NSF-funded ASEE study across 330 students showed small negative effects on knowledge assessment but a positive classroom environment, signaling context-dependence of effectiveness. Product development continued (ALEKS Adventure for K-3 launch), and platform algorithmic improvements (neural networks for assessment) demonstrated ongoing maturity. Consensus shifted firmly toward pragmatism: adaptive pacing works at scale within defined contexts and requires substantial implementation infrastructure; autonomous AI tutoring had been largely abandoned in favor of human-AI hybrid models.
2024-Q3: Research and practitioner perspectives continued to validate both capability and persistent limitations. A comprehensive systematic review (2010-2025) found consistent evidence of 20% performance improvements while highlighting the complex mixed landscape requiring greater research rigor, reinforcing field maturity. McGraw-Hill Q1 FY2025 financial results confirmed sustained growth with 14% activation increase and 8% unique user growth across ALEKS and Connect platforms, pushing K-12 digital adoption to 50% mix. Critical expert voices balanced optimistic vendor metrics: learning technologist Clark Quinn cautioned against generative AI for adaptive sequencing without rich pedagogical models, while Cognitive Resonance CEO Benjamin Riley warned against AI-only tutoring, citing Wharton research showing ChatGPT math students learned less than peers. Real-world deployments continued: SkillDict's adaptive e-learning platform at Paks Nuclear Power Plant demonstrated measurable gains in knowledge acquisition and confidence through personalized pacing in rigorous safety training. By September 2024, adaptive learning systems remained operationally established at scale with well-documented efficacy in structured domains and clear limitations for autonomous approaches, reinforcing the sector consensus that personalized pacing works best as teacher-augmenting support within blended learning rather than as primary instruction.
2024-Q4: Evidence matured into final-quarter affirmations of both proven efficacy and well-defined limitations. Khan Academy published large-scale efficacy results (December 2024) demonstrating ~20% greater learning gains across ~350K students with 30+ minutes weekly mastery-based practice, providing vendor-scale deployment validation. McGraw-Hill's FY2025 mid-year results confirmed 10% growth in ALEKS unique users, sustained institutional adoption, and rollout of generative AI features. Independent evaluation from Digital Promise documented mixed outcomes from Gates Foundation-funded pilots (Carnegie Learning, Amplify, Discovery Education), emphasizing caution and identifying key risks of hallucinations and data inaccuracy. Peer-reviewed research continued accumulation: systematic review of ITS literature identified personalized/adaptive learning as central evolution area with persistent implementation challenges and ethical concerns; Korean study of 79 college students confirmed ALEKS-driven performance gains with variation by initial knowledge level; NSF AI Institute expert analysis balanced recognition of ITS effectiveness for well-defined tasks against critical limitations (poor performance on open-ended problems, collaboration, math accuracy still below reliable thresholds). By year-end 2024, adaptive learning systems had solidified as operationally mature, evidence-validated supporting tools for structured domains within blended learning—with clear consensus that full automation remained unfeasible and teacher-AI hybrid models represent the viable deployed form.
2025-Q1: Institutional adoption continued at scale with evidence of both capability maturation and implementation constraints. McGraw-Hill announced ALEKS Adventure study (2025-2026) targeting K-3 adaptive math supplementation, signaling continued product expansion through rigorous evaluation. Large-scale deployment metrics confirmed global viability: Ensar Solutions' EduAdapt platform served 1.8M students across 250+ universities in 45 countries, demonstrating multinational scale with 79% course completion rates and 9% dropout rates. Systematic reviews of Intelligent Tutoring Systems (arXiv March 2025) affirmed AI advancements in adaptability and engagement while documenting persistent challenges in ethical considerations and cognitive adaptability. Market forecasts (February 2025) projected 22.2% CAGR growth for adaptive learning through 2034 across K-12, higher education, and enterprise, identifying adoption drivers (AI solutions, personalized learning) and barriers (development costs, privacy compliance, educator resistance, interoperability). Critical perspective from industry leadership remained cautious: McGraw-Hill's chief AI officer publicly warned against full generative AI tutoring replacement of human teachers, emphasizing risks of inadequate challenge and social skill loss. Implementation infrastructure continued as binding constraint for effectiveness. By quarter-end, consensus held that adaptive pacing is operationally proven for blended learning support in structured subjects, requires substantial teacher professional development and monitoring infrastructure, and fundamentally depends on human-AI integration rather than full automation.
2025-Q2: Vendor integration and ecosystem maturity indicators strengthened alongside critical evidence of adoption-outcome gaps. McGraw-Hill integrated Pearson's PRoPL interim assessment into K-12 curriculum solutions (June rollout to California, expanding nationwide), signaling ecosystem consolidation toward sophisticated data-driven personalization infrastructure. McGraw-Hill's Q2 financial results documented sustained scale with 7M ALEKS global users and 18% digital revenue growth to $972M (54% of total revenue), with K-12 digital billings surging 24%, confirming institutional commitment and business viability. Market research (GII, April 2025) projected market expansion from $4.03B (2024) to $14.12B by 2030 (23.19% CAGR). However, critical limitations surfaced in independent research: a cross-sectional survey of 300 higher education students and lecturers found moderate-favorable attitudes toward AI tutoring but only a weak positive correlation with learning outcomes, suggesting adoption enthusiasm may exceed validated pedagogical impact. Adoption barrier analysis identified persistent challenges: high development/maintenance costs, data privacy concerns, algorithm bias, and limited human interaction, reinforcing earlier findings that implementation complexity and cost remain primary constraints. By quarter-end, a clear tension had emerged between scaling vendor infrastructure (ecosystem integrations, user base expansion, revenue growth) and stagnating evidence of learning impact when adoption enthusiasm is measured against outcome validation.
2025-Q3: Portfolio expansion and infrastructure consolidation continued alongside critical reliability evaluations. McGraw-Hill expanded ALEKS offerings to calculus (September 2025), extending AI-driven personalized learning across full mathematics curriculum, signaling continued product maturity and confidence in core platform architecture. McGraw-Hill's post-IPO strategic pivot accelerated institutional adoption through ecosystem integration: digital revenue reached 61% of total sales with 95% adoption of digital-first model, reflecting deep commitment to AI-powered personalization infrastructure. Educator and student adoption research documented rapid integration: Michigan Virtual's 2025 snapshot showed widespread AI tool uptake in K-12 and higher education alongside persistent concerns about reliability and algorithm bias. Critical limitations remained center-stage: Common Sense Media evaluation of four major AI tutoring tools (Google Gemini, Khanmigo, Curipod, MagicSchool) identified biased outputs, hallucinations, and failure to detect missing student comprehension—confirming that production-deployed tutoring tools remain below thresholds for reliable unsupervised use. Capability validation continued: Carnegie Learning's adaptive tutoring program received competitive Evidence for Impact grant funding (160+ applications), signaling rigorous evaluation commitment. By quarter-end, the sector faced growing scrutiny on adoption-outcome alignment: institutional deployments at scale (7M+ ALEKS users, McGraw-Hill's 95% digital adoption, global platforms serving 1.8M+ students) contrasted with flat learning-outcome validation, persistent implementation barriers (algorithm bias, cost, educator resistance), and accumulating evidence that AI-only approaches remain immature despite infrastructure scaling.
2025-Q4: Academic and practitioner evidence crystallized fundamental practice boundaries. Peer-reviewed systematic literature review (Pelánek, International Journal of AI in Education, December 2025) documented persistent modeling challenges in adaptive learning (accuracy, cognitive load, bias prevention). Meta-analysis of 48 peer-reviewed studies confirmed STEM efficacy alongside critical limitations: over-reliance on AI reduces critical thinking and independent problem-solving, with many implementations yielding only modest gains versus traditional instruction. Founder critique of production AI tutoring tools identified specific failures: ChatGPT math accuracy only 50%, tools give answers instead of teaching, widespread student cheating (89% use ChatGPT for homework). McGraw-Hill sustained operational momentum (ALEKS serving 7M+ global users; EduAdapt platform at 1.8M+ students across 45 countries), while business consolidation (McGraw-Hill IPO, ecosystem integrations) signaled infrastructure maturity. By year-end, sectoral consensus had narrowed: adaptive pacing remains a validated supporting tool for bounded structured domains within blended learning, but efficacy is conditional on human oversight, professional development, careful subject selection, and operational infrastructure. Implementation barriers (cost, bias, educator resistance) persist as binding constraints on scaled impact. The practice had moved decisively from infrastructure scaling questions to addressing the harder problem of translating technical capability into reliable classroom outcomes within realistic deployment constraints.
2026-Jan: Institutional deployment momentum sustained despite persistent efficacy questions. McGraw-Hill's ALEKS remained the dominant K-12 and higher education platform (2.7x proficiency improvement for mastery-based learners, 25+ years of research validation), with the digital revenue mix reaching 53% at McGraw-Hill and a 27% efficiency gain in K-12 order processing. The market remained growth-oriented: adaptive learning valued at $4.8B with 19% CAGR through the 2030s, PowerSchool reaching ~50M K-12 students. Critical evaluation confirmed earlier findings: industry analysis documented K-12 implementation barriers (fidelity, professional development, device access) as outcome determinants; practitioner research identified the core limitation that self-paced adaptive-only models show 80-90% dropout rates. Vendor consolidation accelerated with ecosystem integrations (Pearson assessment into McGraw-Hill K-12 solutions) signaling platform maturity. Analysis of technology deployment patterns showed adaptive pacing worked effectively for mastery-based, structured subjects under high-fidelity conditions (documented 8-percentile gains for RAND algebra deployments) but required cohort structures and human accountability layers for completion. By month-end, adaptive learning systems had become institutionalized infrastructure supporting blended learning—operationally proven yet fundamentally limited by implementation complexity and persistent efficacy questions at scale.
2026-Feb: International deployment evidence and critical scope analysis confirmed practice maturity boundaries. World Bank pilot in Côte d'Ivoire demonstrated adaptive learning efficacy in developing-country contexts (2,000 TVET students showed 11.2 months learning gain in math, 5.8 months in French, with greatest benefit for initially struggling students), validating transferability beyond developed markets. Ecosystem adoption signals strengthened: Coursera survey of 4,200+ faculty and students showed 95% AI tool usage with 47% citing personalized learning as key benefit, indicating broad integration. However, critical scope limitations emerged: UCL professor Rose Luckin documented that AI tutoring addresses only ~16% of human learning development (content delivery and procedure drilling), citing research showing AI-assisted learners exhibit reduced self-monitoring and metacognitive laziness with no transfer without AI support. Research analysis (SIAI) cited 2025 Nature study showing AI tutor outperformed active learning but documented hallucination risks, inequality concerns, and training gaps (76% UK, 69% US teachers lack formal AI training). By month-end, adaptive learning systems remained operationally entrenched with proven efficacy for bounded structured subjects, but critical voices and independent evidence reinforced understanding that human-AI hybrid models are required and that pedagogical scope remains fundamentally limited.
2026-Mar: Controlled research delivered mixed but informative signals: an AEI synthesis of empirical studies found RL-based adaptive sequencing with dynamic difficulty adjustment produced 0.15 SD learning gains (equivalent to 6–9 months progress) among 700+ high school students, while a quasi-experimental study (n=120) confirmed statistically significant performance improvements with adaptive support as metacognitive scaffold. A K-12 EdTech platform in India demonstrated the model's viability at 50,000+ students with 22% test score improvement through adaptive difficulty adjustment. Critical counterevidence reinforced scope limits: synthesis of tutoring research reaffirmed that generic AI without adaptive guardrails (ChatGPT unguided) produces 17% worse exam outcomes, sharpening the distinction between well-designed adaptive systems and unstructured AI access.
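The "dynamic difficulty adjustment" repeatedly credited in these studies has a textbook minimal form: a staircase rule that raises difficulty after success and lowers it after failure. The sketch below is a generic illustration of that rule, not the RL-based sequencing from the AEI synthesis; the level bounds and step size are arbitrary assumptions.

```python
def adjust_difficulty(level: int, correct: bool, step: int = 1,
                      lo: int = 1, hi: int = 10) -> int:
    """One-up/one-down staircase: raise difficulty after a correct answer,
    lower it after a miss, clamped to the available range. This symmetric
    rule drives long-run accuracy toward ~50%; asymmetric steps (e.g. up
    after two successes) target higher success rates."""
    level += step if correct else -step
    return max(lo, min(hi, level))

# Simulate a learner's answer stream starting at mid-range difficulty.
level = 5
for correct in [True, True, False, True, False, False]:
    level = adjust_difficulty(level, correct)
print(level)  # 5 — the staircase oscillates around the learner's ability
```

RL-based sequencers generalize this by learning which item to serve next from outcome data rather than applying a fixed rule, but the staircase captures the shared principle: keep items near the edge of the learner's current ability.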
2026-Apr (April 10–24): The implementation divide crystallized in new evidence. A rigorous dual-RCT comparison published April 14 showed Harvard's pedagogically-designed adaptive tutoring achieved 2x learning gains while Wharton's unrestricted GPT access produced a 17% exam decline—confirming implementation design as the critical success factor. A meta-analysis of 81 studies (April 15) provided strong controlled evidence: effect size 1.01 for K-12 science with mastery-based adaptive scaffolding. However, high-profile deployment failures surfaced: Khan Academy founder's candid admission (April 10) that Khanmigo AI tutoring failed to achieve adoption and impact targets despite visibility; critical analyst assessment documented Khanmigo and Alpha School real-world deployment failures showing promised benefits not materializing. K-12 trend analysis mapped adoption acceleration (teacher use more than doubled, from 25% to 53%, in a single year) alongside equity barriers (a 34-percentage-point rural-urban adoption gap). A systematic review (50 studies, 2018–2025) confirmed engagement gains of 18–25% and a 28% dropout reduction, while identifying that 70% of under-resourced institutions face implementation barriers. Deployment evidence at scale included Efekta serving 4M students (largest AI learning deployment), Compass 50K students with 500M+ interactions, Medly 74% GCSE improvement. By late April, consensus reinforced: adaptive pacing is operationally proven for structured domains within blended learning under high-fidelity conditions, but autonomous AI tutoring remains ineffective and implementation infrastructure (teacher training, monitoring, subject selection) is the binding constraint on pedagogical impact.
2026-May (May 1–8): May 2026 evidence crystallized why implementation design, not AI sophistication, determines efficacy. A meta-analysis of 72 AI-teaching studies (May, Frontiers Psychology) confirmed positive effects (g=0.586) with heterogeneity driven by "AI type and implementation context." A massive field trial on Siyavula (160,000 students, 17M practice problems; Carnegie Mellon, CHI 2026 Honorable Mention) showed UI design interventions measurably improve persistence: written prompts +2%, visual nudges +9%, combined effects +11%—demonstrating how pedagogical guardrails shape outcomes. ML optimization research (50,700 learners) showed adaptive spacing rules produced 69% longer retention and 50% higher re-engagement versus conventional spacing, confirming algorithmic personalization's potential. However, critical practitioner research surfaced why AI tutors fail at scale: Khanmigo's "quiet collapse" (May 7 analysis) documents how engagement barriers and knowledge-transfer gaps persist despite personalization; analysis of the 75% day-30 failure rate identified a five-pillar architecture requirement (curriculum KB, mastery model, pedagogical strategy, conversational layer, engagement engine) for AI tutoring viability. A systematic review of 22 higher-education studies found classroom environment explains ~16% of variance—comparable to AI itself—underlining the importance of institutional context. Vendor deployment data (Pearson, May 5) showed 90% proficiency improvement with AI-powered adaptive practice. Market research (May) valued AI personalized learning at $6.0B (2025) growing to $31.6B by 2034 (20% CAGR); an adoption survey of 607 K-12 technology leaders documented accelerating institutional deployment.
By May 8, field consensus held: adaptive pacing is pedagogically effective when implemented with human oversight, subject-matter alignment, and strong design discipline—but AI-only or under-resourced deployments remain at high risk of failure, and implementation infrastructure remains the binding constraint on scaled impact.