The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that localises, adapts, and summarises educational content for different contexts, languages, and learning levels. Includes textbook summarisation and cultural adaptation; distinct from curriculum design, which creates new content rather than transforming existing material.
Educational content adaptation and summarisation represents a technical research frontier focused on transforming existing educational material—textbooks, lecture transcripts, articles—into forms suited to different contexts, languages, and learner ability levels. The practice is distinct from curriculum design, which creates entirely new learning pathways; instead it concentrates on the automated modification of existing content to extend its reach and utility.
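Adapting material to a learner's ability level implies a measurable target. As a minimal, purely illustrative sketch (the formula is the standard Flesch-Kincaid grade estimate; the syllable counter and tolerance are rough assumptions, not anything used by the tools discussed here), a pipeline might gate a rewritten passage on its estimated reading grade:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, minimum 1 per word."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade-level estimate (assumes non-empty text)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

def within_target(text: str, target_grade: float, tolerance: float = 2.0) -> bool:
    """Accept an adapted passage only if its estimated grade is near the target."""
    return abs(fk_grade(text) - target_grade) <= tolerance
```

In practice a rewrite loop would regenerate any passage that fails the check rather than ship it; the point is only that "adapt to level N" becomes a testable constraint.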
The field sits at the intersection of natural language processing (particularly abstractive summarisation) and instructional design. The core technical challenge is factual consistency: models that summarise complex educational material frequently introduce factual errors or hallucinations, or lose nuance critical to learning, and compression-based approaches expose a persistent tension between conciseness and accuracy. This remains the primary barrier to reliable deployment in educational settings. By January 2026, the practice exhibits a fundamental paradox: consumer-scale adoption of summarisation tools is mainstream and growing, yet institutional deployment remains constrained by unresolved concerns about accuracy, pedagogical outcomes, and liability.
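The factual-consistency problem can be made concrete with a crude lexical grounding check: flag summary sentences whose content words have little support in the source. Production systems use entailment models rather than word overlap; this stdlib sketch, with an invented stopword list and an arbitrary threshold, only illustrates the shape of the check:

```python
import re

# Minimal illustrative stopword list; real systems use proper lexical resources.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are",
             "for", "on", "that", "with", "into", "it", "was", "by"}

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def unsupported_sentences(source: str, summary: str, min_overlap: float = 0.5):
    """Return summary sentences whose content words are mostly absent from
    the source text -- a crude proxy for potential hallucination."""
    src_words = content_words(source)
    flagged = []
    for sent in re.split(r"(?<=[.!?])\s+", summary):
        words = content_words(sent)
        if not words:
            continue
        if len(words & src_words) / len(words) < min_overlap:
            flagged.append(sent)
    return flagged
```

Lexical overlap misses paraphrase and subtle "nuance shift", which is precisely why the accuracy figures reported below come from far more careful human and model-based evaluation.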
As of March 2026, the practice shows marked polarization between accelerating learner adoption and institutional caution driven by mounting evidence of accuracy and learning-outcome risks. Learner adoption has reached saturation in developed higher-education markets: AI use among UK undergraduates has climbed to 92% (March 2026 HEPI survey, up from 66% in 2024), with summarization of articles and textbooks ranking as the second most-used AI application after concept explanation. Consumer summarization tools continue to scale: Mindgrasp operates at 100k+ users globally with stable revenue, and SciSummary serves 700k+ academic users (including at Harvard, Stanford, and MIT) and has processed 1.5M+ papers. The AI transcription/summarization market is projected to reach $19.2B by 2034, with 62% of professionals reporting 4+ hours of weekly time savings.
Yet evidence quality has become the decisive barrier to institutional deployment. March 2026 academic research from UC San Diego quantifies fundamental content adaptation failures: LLM-generated summaries exhibit a 26.42% "nuance shift" rate (content altered in direction or meaning) and a 60% hallucination rate. Independent testing by ToolHunt (March 2026) on BBC news articles found 51% of AI-generated summaries had significant problems and 19% contained outright factual errors. A Brookings Institution global study (March 2026, drawing on 500+ stakeholders across 50 countries and 400+ studies) concludes that "at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits," citing impacts on foundational learning capacity, social-emotional well-being, and trust relationships. The critical paradox: the OECD Digital Education Outlook (March 2026) documents that unrestricted AI tools improve immediate task performance (students using LLMs wrote better essays, math exercises scored higher) but simultaneously undermine learning transfer—80% of students using LLMs to write essays could not recall their content afterward, and Turkish mathematics students using ChatGPT performed worse on concept exams than peers despite higher exercise scores.
Duolingo's Vision 2026 roadmap commits to building unique adaptive curricula per user, demonstrating continued large-scale deployment of content adaptation technology. Yet the December 2025 Duolingo failure—a 68% stock decline attributed to user complaints of "robotic lessons" and engagement collapse following aggressive AI-first content generation—signals the fragility of automation-heavy strategies. Institutional concerns have intensified: policies remain nascent (25% of campuses have formal AI policies), and deployment blockers cited include systemic bias, accuracy inadequacy (legal experts warn 80% accuracy is insufficient for liability-sensitive contexts), privacy risk (FERPA violations), and unresolved pedagogical outcome gaps. The Brookings framework distinguishes between "AI-enriched learning" (pedagogically sound design with human oversight) and "AI-diminished learning" (overreliance that undermines capacity), underscoring that tool design and institutional safeguards, not just capability, determine whether content adaptation supports learning or substitutes for it.
The practice remains learner-driven rather than institutionally deployed. Production-ready tooling exists, consumer demand is mainstream, and time savings are documented, but institutional deployment remains blocked by unresolved accuracy deficits (hallucination rates of 19-26% in current systems), learning-outcome risks (the decoupling of task performance from retention), bias in training data, and liability concerns. April 2026 evidence reveals no resolution of these core barriers: EACL 2026 research identifies a Harmful Factuality Hallucination (HFH) failure mode in which LLMs lose factual correctness when rephrasing content (mitigable by roughly 50% via prompting); peer-reviewed studies document representational and linguistic bias endemic in personalized content generation (more than 75% of educators acknowledge non-neutral outputs); and real-world K-12 deployment shows cultural-erasure risks (AI simplifying Spanish text in student writing). Duolingo's April 2026 content adaptation features (Explain My Answer, Video Call, Roleplay), backed by a 10-fold increase in generation capacity and 148 new courses, demonstrate continued large-scale tool deployment, but without resolving the pedagogical barriers that defined the bleeding-edge stall. By April 2026, the bleeding-edge phase exhibits stalled institutional momentum: consumer adoption has plateaued at saturation in leading markets; specialized deployment tools (Diffit, Curipod, NotebookLM) reach 31-40% teacher and learner usage but with only moderate satisfaction (52% rate outputs as good/excellent); and an institutional inflection point remains visible (Cal State's 460k+ deployment, market expansion from $5.88B to $32.27B projected for 2024-2030) yet constrained by unresolved accuracy, bias, and pedagogical-outcome gaps.
By May 2026, signals of institutional infrastructure maturity are emerging alongside persistent barriers. Moodle LMS integration with Gemini for text summarization reached general availability, signaling mainstream LMS adoption of content adaptation features. Teachers actively use AI tools to translate educational materials for multilingual learners, extending the practice into localization workflows. US school districts have formalized AI acceptable-use policies, moving from early experimentation toward institutional governance. Core barriers remain unresolved, however: citation fabrication rates persist at 55% for GPT-3.5 and 18% for GPT-4, and systems lack persistent learner models for effective adaptation, requiring hybrid architectures that combine knowledge graphs with retrieval-augmented generation to achieve reliable deployment. Critically, learning-outcome risks are intensifying: passive AI summarization undermines memory formation compared with active retrieval practice, and students who delegate written work to AI score 18-25 percentile points lower on in-person assessments. Pre-service science teachers exhibit low trust in AI-generated explanations despite institutional pressure to deploy, positioning truth assessment as a pedagogically responsible practice requiring explicit verification. Research-backed frameworks for integrating content adaptation with human oversight (TASU in literature education, the Revise-Locate-Justify routine) are emerging, though adoption remains limited. Meta-analytic evidence (g_p = 0.586 across 72 studies) confirms positive teaching effectiveness when AI organizes and adapts materials, provided implementation includes pedagogical design and human-verification infrastructure.
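The hybrid architectures cited in the May 2026 research pair generation with an external fact store so that adapted content can be checked before release. As a minimal sketch (the triple store, claim format, and all entries are invented for illustration; real systems extract claims with NLP and query far larger graphs), verification reduces to classifying each extracted claim against the graph:

```python
# Toy knowledge graph of (subject, relation, object) triples that adapted
# content must not contradict. All entries are illustrative placeholders.
KG = {
    ("water", "boils_at", "100c"),
    ("mitochondria", "produce", "atp"),
}

def verify_claims(claims, kg=KG):
    """Classify claims as supported, contradicted (same subject/relation but a
    different object in the KG), or unknown. Contradicted and unknown claims
    would be routed to human review rather than auto-published."""
    index = {(s, r): o for s, r, o in kg}
    supported, contradicted, unknown = [], [], []
    for s, r, o in claims:
        known = index.get((s, r))
        if known is None:
            unknown.append((s, r, o))
        elif known == o:
            supported.append((s, r, o))
        else:
            contradicted.append((s, r, o))
    return supported, contradicted, unknown
```

The retrieval-augmented half of such a hybrid plays the complementary role: it narrows the source passages a claim is checked against, while the graph supplies hard facts the summary must not violate.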
By May 2026, the practice exhibits a stable contradiction: LMS integration and instructor adoption climb, 57% of college students use consumer summarization tools weekly, and specialized K-12 tools reach 31-40% adoption, yet institutional expansion remains constrained by unresolved accuracy issues, persistent hallucination, the decoupling of performance from learning outcomes, and the pedagogical complexity of deploying content adaptation safely.
— Gemini integrated into Moodle LMS for text summarization (product-GA); teachers using AI to translate materials for multilingual learners; US districts formalizing AI policies. Signals movement from experimentation to institutional governance.
— Systematic review of 8,000+ academic records: LLMs lack persistent learner models for content adaptation; hallucinations risk reinforcing misconceptions; hybrid architectures (knowledge graphs, RAG) required for reliable educational deployment.
— YouTube practitioner guide demonstrating AI tools for content adaptation: rewriting paragraphs for different reading levels using ChatGPT, Diffit, Brisk, EduCafe. Shows teacher adoption of AI-assisted content differentiation.
— Practitioner analysis: passive AI summarization for reading (a proven low-effectiveness study technique) displaces active retrieval practice, undermining memory formation. Highlights the learning-outcome risks of unguided summarization tool use.
— Frontiers in Psychology: Pre-service science teachers exhibit low trust in GenAI-generated explanations, positioning truth assessment as pedagogically responsible practice requiring explicit verification—signals practitioner adoption barriers.
— Frontiers in Education framework (TASU): seven pedagogical functions including content adaptation/curation role; Revise–Locate–Justify routine required to evidence-ground AI suggestions, protecting factual integrity and student voice.
— Meta-analysis of 72 studies: AI-enabled teaching shows positive effect (g_p=0.586). AI organizing and adapting instructional materials reduces teacher workload and enhances alignment between resources and learning goals.
— Industry analysis: 78% HS, 64% college students use AI; students delegating writing to AI perform 18-25 percentile points lower on in-person assessments than peers. Documents adoption scale alongside learning outcome decoupling.