Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ← → ESTABLISHED

Curriculum design & content generation

LEADING EDGE

TRAJECTORY

Stalled

AI that designs curricula, generates learning paths, creates course content, and produces lesson plans aligned to learning objectives. Includes standards-aligned content creation and prerequisite mapping; distinct from question generation which creates assessment items rather than instructional content.

OVERVIEW

AI-powered curriculum design continues to reveal the practice's core tension: tools demonstrably work and are widely adopted, yet deployment requires persistent human expertise and oversight. In early 2026, adoption has broadened significantly—68-72% of K-12 teachers now use AI weekly for lesson planning and content generation, and Stanford analysis of 150,000+ teacher prompts confirms curriculum design is the dominant substantive use case. However, the quality ceiling remains unchanged. Independent practitioner assessment finds tools excel at text leveling and rubric scaffolding but produce "structurally competent and educationally generic" lesson plans requiring substantial teacher revision. Peer-reviewed research documents persistent pedagogical limitations: 45% of AI-generated lessons remain at Bloom's basic "remember" level, and tools without explicit learning science foundations function as "sophisticated photocopiers" rather than pedagogical partners. The practice achieves leading-edge status not because AI generates perfect curriculum, but because forward-leaning educators have built workflows that accept AI as a brainstorming partner and efficiency tool while maintaining non-negotiable human control over learning quality. For the majority of teachers and institutions, the adoption barriers remain real: insufficient training (71% received no formal AI instruction), governance uncertainty, and the fundamental "supervision debt"—every AI artifact requires human validation. The defining question has shifted from "can AI design curriculum?" to "what institutional infrastructure makes AI design sustainable?"

CURRENT LANDSCAPE

Teacher adoption has reached mainstream scale. April 2026 data shows 68-72% of K-12 teachers use AI weekly for lesson planning; 65% generate worksheets, 54% differentiate content, and 72% create lesson plans (saving 3.2 hours per week on average). Stanford's analysis of 150,000+ prompts from 4,400+ teachers confirms the pattern: roughly 50% of teacher-AI interactions relate to curriculum design—unpacking standards, generating examples, aligning materials with learning objectives, and revising content for accessibility. The vendor ecosystem remains consolidated: MagicSchool (6M+ users, 1M on free/lower tiers with no admin visibility), Khanmigo (700k users across 380+ districts with Harvard/Stanford RCT validation), and emerging standards-aligned tools (Kuraplan, Atomic Jolt, PepperMill). Policy-level adoption has accelerated: 134 AI education bills across 31 US states, with Georgia and Mississippi now requiring AI curriculum in graduation standards.

However, adoption outpaces institutional readiness. Only 29% of teachers report receiving formal AI training; 77% feel stressed by the pace of tool change. Quality assessment is sobering: independent practitioner review finds basic worksheets rated 71% good/excellent, but assessment items requiring higher-order thinking only 38% acceptable, and IEP recommendations 36% good. This creates the "supervision debt" — every AI output requires validation, and tools lacking explicit learning science design produce generic content. Institutional frameworks lag adoption velocity: 87% of schools use AI tools without formal governance, and the NYC Department of Education's policy lacks criteria for algorithmic bias or instructional effectiveness. The result is dispersed, teacher-initiated curriculum experimentation without coherent system design. Field-building initiatives (Digital Promise/TNTP partnership, $23M National Academy for AI Instruction, MIT's PEA²K cohort with 14 districts) signal maturation toward structured integration, but mainstream adoption remains constrained by training gaps, governance infrastructure deficits, and unresolved pedagogical questions about tool-generated content quality.

TIER HISTORY

Research: Jun-2023 → Jun-2023
Bleeding Edge: Jun-2023 → Oct-2024
Leading Edge: Oct-2024 → present

EVIDENCE (105)

How Higher Ed Can Make AI Work (Industry Reports)

— Research study (120 respondents from US higher ed) documenting five institutional barriers to effective AI integration, including access without guidance on curriculum expectations and uneven workforce preparation—directly relevant to systemic failures in curriculum design adoption.

— Large-scale adoption survey (n=1,041, margin of error ~3%) reveals a critical curriculum design gap: 88% of UK university students use AI in assessments, but only 36% received institutional training on AI skills. Documents rapid behavioral shift and unmet pedagogical needs.

— Peer-reviewed framework directly addressing curriculum design with AI, providing implementable policy rules, taxonomy of AI didactic functions, guardrails for assessment, and routines protecting student voice and academic integrity.

— Named district deployment of AI-powered curriculum creation platform (ACES Curriculum Creator) with quantified cost savings and capability expansion—direct evidence of AI enabling in-house curriculum/edtech development at scale.

— Named district deployment study with specific metrics: lesson planning platform (Solara) adoption (55% first year), sustained use (54% repeat users 9+ times), quantified time savings (5 hrs/month), output quality ratings (79-84% clarity/usefulness), and behavioral shift (time reinvested in curriculum refinement, differentiation, student support).

— Direct evidence of quality gap in AI-generated curriculum: lesson plans appear complete but lack rigor, intervention plans sound structured but fail student needs. Documents that polished AI outputs mask underlying pedagogical failures.

— Critical academic assessment from UCL challenging whether AI should substitute for lesson planning and teaching; important negative signal on pedagogy.

— Vendor announcement of GA product features, including Educational Song Generator for curriculum-aligned content creation, showing evolution of AI-driven curriculum content generation tools.

HISTORY

  • 2023-H1: MagicSchool and Twinkl introduce AI-powered lesson planning and content creation tools; K-12 adoption reaches ~40% of surveyed teachers; significant faculty skepticism persists in higher education regarding AI's educational impact.
  • 2023-H2: Khan Academy launches Khanmigo teacher tools in production (lesson planning, rubrics, discussion prompts); district-level pilots expand (Newark, Gwinnett); individual practitioner adoption accelerates (teachers building full curriculum maps with ChatGPT); standards bodies begin addressing AI-generated content and plagiarism detection challenges.
  • 2024-Q1: Vendor platforms mature to scale (MagicSchool 4M+ users, Curriculum Genie 300+ LEAs); industry consortia formalize K-12 integration frameworks (CoSN/CGCS maturity tool); business schools report 60% planning curriculum transformation; critical assessments emerge highlighting quality gaps and high pilot failure rates (95% of enterprise AI pilots deliver zero ROI).
  • 2024-Q2: Khanmigo expands free access to all US teachers via Microsoft partnership; Indiana statewide AI pilot reaches 112 schools with 53% positive impact on student outcomes; international platforms scale (NovaEscola in Brazil reaches 15,000+ users); higher education lags in curriculum review despite interest (only 14% of institutions reviewed curricula). Critical barriers remain: institutional adoption slow, quality concerns persistent, teacher training gaps evident.
  • 2024-Q3: Khanmigo extends globally to 49 countries via Microsoft partnership; LAUSD's custom curriculum chatbot shuts down after 5 months, revealing implementation risks; peer-reviewed research documents teachers' AI-driven curriculum adaptation patterns; practitioner and institutional frameworks emphasize educator control and caution. Evidence converges on simultaneous expansion and consolidation: tools scaling internationally while deployment failures expose adoption barriers and quality concerns, requiring heightened institutional oversight.
  • 2024-Q4: Peer-reviewed research confirms teacher adoption of MagicSchool in real classrooms; universities demonstrate rapid course generation (full courses via ChatGPT in under 24 hours with expert approval); instructional design adoption broadens (84% of practitioners use AI) but shows diminishing returns and platform stagnation. However, quality gaps persist: survey of 104 teachers finds only 40% of AI-generated lesson plans classroom-ready; instructional designers report 2024 as continuity-not-change year due to generic models. Adoption plateau evident: tools prove technical viability but encounter institutional and pedagogical limits requiring expert oversight.
  • 2025-Q1: MagicSchool scales to 5M+ educators across 160+ countries with 13,000+ schools; Enid High School (Oklahoma) reports positive outcomes in geometry curriculum deployment via Khanmigo with improved student engagement; practitioner research emphasizes need for educator quality control; industry analysis positions curriculum development as high-benefit, low-maturity use case. Tension persists between vendor scalability claims and institutional implementation reality: adoption broadens among early adopters while broader institutional scaling remains constrained by quality assurance and teacher training requirements.
  • 2025-Q2: Adoption metrics confirm 63% of K-12 teachers and 42% of HED instructors use GenAI for lesson planning, yet peer-reviewed research (Penn GSE, UMich) documents systematic pedagogical limitations in AI-generated content. Vendors respond with pedagogically-grounded products (Curipod, others); practitioners develop frameworks emphasizing educator control. Administrator support reaches 55% but teacher adoption in classrooms remains low (25% report AI-assisted instruction). Quality concerns emerge across K-12, higher education, and early childhood, converging on a universal requirement: AI-generated lesson plans need expert review before classroom deployment.
  • 2025-Q3: Institutional curriculum AI adoption accelerates: Georgia University System deploys AI mapping across 26 institutions (344K+ students); Immaculata University integrates MagicSchool into teacher preparation programs; new product categories (Atomic Jolt, PepperMill) automate gap analysis and standards alignment. Research demonstrates quantified effectiveness (89.72% completion, 91.44% retention) in controlled deployments. Yet persistent pedagogical limitations documented: educators report AI generates generic, low-engagement content lacking critical thinking activities. Michigan Virtual survey (554 educators, September) confirms continued adoption growth. Practice achieves technical maturity and broad early-adopter reach, but remains constrained by universal requirement for expert curriculum review and quality assurance systems before classroom deployment.
  • 2025-Q4: Global policy frameworks institutionalize AI literacy into national curricula (UNESCO, Colombia, India, UAE initiatives); Khanmigo expands to Vietnam with native localization; peer-reviewed evidence documents critical limitations—90% of AI-generated civics lessons constrain thinking to basic levels, AI accuracy fails on subjective assessment tasks, Estonia survey (15,631 students) reveals implementation gaps where adoption outpaces pedagogical readiness. India reports 57% institutional AI policy adoption, signaling strategic institutional integration despite persistent classroom implementation gaps. Adoption plateaus: continued 63% K-12 and 42% HE tool adoption but only 25% classroom deployment; practice achieves operational maturity at scale but remains fundamentally constrained by unresolved quality, bias, and pedagogical limitations.
  • 2026-Jan: Major vendor innovations accelerate (Microsoft Copilot Teach, Google Gemini integration with Khan Academy); institutional deployments expand (Palm Springs Unified, Bloomington Junior High); practitioner critiques intensify, warning against proliferation of low-value AI curriculum tools and emphasizing gap between hype and classroom-ready solutions. Adoption remains steady at ~60% teacher usage for lesson planning; tools prove mature for early adopters while quality and pedagogical constraints continue limiting mainstream classroom implementation.
  • 2026-Feb: Global scale confirmed: Ciklum's AI platform serving 85,000 students across 160+ countries with 70% parent adoption increase; 30,000+ teachers driving 115,000+ AI-generated lesson plans. UK efficiency data shows 70-80% planning time reduction. However, Bend-La Pine Schools removes MagicSchool's student-facing Raina after parent protests, highlighting safety barriers. Peer-reviewed research (February) reiterates quality limitations: 90% of AI civics lessons constrain student thinking to basic levels. EdTech expert analysis identifies "supervision debt" — mandatory human validation across all curriculum AI workflows. Practice reaches scale but deployment failures and quality constraints confirm hard limits on autonomous systems and mainstream classroom reach.
  • 2026-Mar: Khanmigo growth accelerates to 700,000 users across 380+ districts with Harvard/Stanford RCT validation; MagicSchool adoption survey (3,600+ educators) documents 71% lesson planning use and 600+ district-customized tools deployed. Major institutional investment: $23M National Academy for AI Instruction partnership (Anthropic, Microsoft, OpenAI) commits to training 400k teachers on agentic curriculum workflows, signaling shift from template-based to reasoning-agent approaches. Veteran practitioner accounts (American Federation of Teachers) confirm AI used for lesson planning, differentiation, and rubrics but emphasise critical revision before classroom deployment. Critical countervailing evidence: OECD Digital Education Outlook 2026 shows pedagogy-grounded tools outperform generic LLMs; meta-analysis of 11 RCTs finds time savings (25 min/week average) but no improvement in lesson quality (45% stay at Bloom's "remember" level); Alpha School investigation documents AI-generated lesson failures at scale; 3-year deployment across 15 schools shows measurable standards-coverage gains (67→99%) but requires structured AI sequencing engines. Adoption momentum sustained but quality barriers persist—deployment continues to require expert human oversight and instructional design expertise.
  • 2026-Apr: Adoption reaches mainstream scale: 68-72% K-12 teachers use AI weekly for lesson planning; RAND survey (4,200 teachers) documents 72% create plans, 65% worksheets, 54% differentiation, but quality varies sharply (71% basic tasks good, 38% higher-order thinking, 36% IEP recommendations). Stanford SCALE Initiative analysis of 150,000+ teacher prompts confirms 50%+ relate to curriculum design. Policy-level institutionalization: 134 bills across 31 states, Georgia/Mississippi mandate AI in graduation standards. Independent practitioner review reveals MagicSchool excels at text leveling but lesson plans remain "educationally generic" requiring substantial revision; MagicSchool April 2026 updates add curriculum-aligned song generation as a new content format. UCL academic critique challenges whether AI should substitute for lesson planning at all, warning that proliferation of convenience tools risks reducing teachers to editors of generic outputs. Critical framework emerges: tools without learning science foundation (cognitive load theory, Bloom's progression, retrieval practice) function as "sophisticated photocopiers" not teaching partners; OECD Digital Education Outlook 2026 confirms AI benefits depend on curriculum design quality, not mere tool access. Field-building initiatives signal maturation (Digital Promise/TNTP, NSF AmplifyGAIN research center, Massachusetts PEA²K cohort) but widespread adoption constrained by training gaps (71% received no formal instruction), governance deficits (87% schools lack formal AI policy), and unresolved quality thresholds. Practice achieves scale but remains fundamentally dependent on institutional capacity for curriculum vetting and learning sciences integration.
  • 2026-May: HEPI survey (1,041 UK students) finds 88% use AI in assessments but only 36% received institutional training, crystallising the curriculum integration gap at the student-demand layer. A parallel US higher ed study (120 institutions) documents five systemic barriers to AI curriculum adoption — access without guidance, uneven workforce preparation, and institutional inertia — confirming that adoption breadth now far exceeds instructional design infrastructure.