The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in one or two domains, delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that designs curricula, generates learning paths, creates course content, and produces lesson plans aligned to learning objectives. Includes standards-aligned content creation and prerequisite mapping; distinct from question generation, which creates assessment items rather than instructional content.
AI-powered curriculum design continues to exhibit its core tension: the tools demonstrably work and are widely adopted, yet deployment still requires persistent human expertise and oversight. In early 2026, adoption has broadened significantly: 68-72% of K-12 teachers now use AI weekly for lesson planning and content generation, and Stanford analysis of 150,000+ teacher prompts confirms curriculum design as the dominant substantive use case. The quality ceiling, however, remains unchanged. Independent practitioner assessment finds the tools excel at text leveling and rubric scaffolding but produce "structurally competent and educationally generic" lesson plans that require substantial teacher revision. Peer-reviewed research documents persistent pedagogical limitations: 45% of AI-generated lessons remain at the basic "remember" level of Bloom's taxonomy, and tools without explicit learning-science foundations function as "sophisticated photocopiers" rather than pedagogical partners. The practice achieves leading-edge status not because AI generates perfect curriculum, but because forward-leaning educators have built workflows that treat AI as a brainstorming partner and efficiency tool while keeping human control over learning quality non-negotiable. For the majority of teachers and institutions, the adoption barriers remain real: insufficient training (71% received no formal AI instruction), governance uncertainty, and the fundamental "supervision debt" of validating every AI artifact. The defining question has shifted from "can AI design curriculum?" to "what institutional infrastructure makes AI-assisted design sustainable?"
Teacher adoption has reached mainstream scale. April 2026 data shows 68-72% of K-12 teachers using AI weekly for lesson planning and content generation: 65% generate worksheets, 54% differentiate content, and 72% create lesson plans, saving an average of 3.2 hours per week. Stanford's analysis of 150,000+ prompts from 4,400+ teachers confirms the pattern: roughly 50% of teacher-AI interactions relate to curriculum design, spanning unpacking standards, generating examples, aligning materials with learning objectives, and revising content for accessibility. The vendor ecosystem remains consolidated: MagicSchool (6M+ users, 1M of them on free or lower tiers with no admin visibility), Khanmigo (700k users across 380+ districts, with Harvard/Stanford RCT validation), and emerging standards-aligned tools (Kuraplan, Atomic Jolt, PepperMill). Policy-level adoption has accelerated: 134 AI education bills across 31 US states, with Georgia and Mississippi now requiring AI curriculum in graduation standards.
However, adoption outpaces institutional readiness. Only 29% of teachers report receiving formal AI training, and 77% feel stressed by the pace of tool change. Quality assessment is sobering: in independent practitioner review, 71% of basic worksheets were rated good or excellent, but only 38% of assessment items requiring higher-order thinking were acceptable, and only 36% of IEP recommendations were rated good. This creates "supervision debt": every AI output requires validation, and tools lacking explicit learning-science design produce generic content. Institutional frameworks lag adoption velocity: 87% of schools use AI tools without formal governance, and the NYC Department of Education's policy lacks criteria for algorithmic bias or instructional effectiveness. The result is dispersed, teacher-initiated curriculum experimentation without coherent system design. Field-building initiatives (the Digital Promise/TNTP partnership, the $23M National Academy for AI Instruction, MIT's PEA²K cohort with 14 districts) signal maturation toward structured integration, but mainstream adoption remains constrained by training gaps, governance infrastructure deficits, and unresolved pedagogical questions about the quality of tool-generated content.
— Research study (120 respondents from US higher ed) documenting five institutional barriers to effective AI integration, including access granted without guidance on curriculum expectations and uneven workforce preparation; directly relevant to systemic failures in curriculum design adoption.
— Large-scale adoption survey (n=1,041, margin of error ~3%) reveals a critical curriculum design gap: 88% of UK university students use AI in assessments, but only 36% received institutional training on AI skills. Documents rapid behavioral shift and unmet pedagogical needs.
— Peer-reviewed framework directly addressing curriculum design with AI, providing implementable policy rules, taxonomy of AI didactic functions, guardrails for assessment, and routines protecting student voice and academic integrity.
— Named district deployment of AI-powered curriculum creation platform (ACES Curriculum Creator) with quantified cost savings and capability expansion—direct evidence of AI enabling in-house curriculum/edtech development at scale.
— Named district deployment study with specific metrics: lesson planning platform (Solara) adoption (55% in the first year), sustained use (54% of adopters used it 9+ times), quantified time savings (5 hrs/month), output quality ratings (79-84% clarity/usefulness), and behavioral shift (time reinvested in curriculum refinement, differentiation, and student support).
— Direct evidence of the quality gap in AI-generated curriculum: lesson plans appear complete but lack rigor, and intervention plans sound structured but fail to meet student needs. Documents that polished AI outputs mask underlying pedagogical failures.
— Critical academic assessment from UCL challenging whether AI should substitute for lesson planning and teaching; important negative signal on pedagogy.
— Vendor announcement of GA product features, including an Educational Song Generator for curriculum-aligned content creation, illustrating the evolution of AI-driven curriculum content generation tools.