Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

🎓 Education & Learning

AI for teaching, tutoring, assessing, and managing learning experiences. Mostly leading-edge: adaptive tutoring and automated grading are approaching good practice, but institutional adoption is slow due to academic integrity concerns and uneven infrastructure. Two practices are bleeding-edge, including AI-generated curricula and autonomous classroom agents. Most trajectories are stalled — policy and pedagogy lag behind the technology.

15 practices: 2 good practice, 11 leading edge, 2 bleeding edge
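To make the chart concrete: a domain's dot position can be read as a weighted mean of its practices' maturity stages. The exact weighting scheme behind the index is not described here, so the stage scores below are an illustrative assumption, using the Education & Learning counts above.

```python
# Hypothetical stage scores on a 0..1 axis (0 = bleeding edge, 1 = established).
# The index's actual weighting is not public; this is an illustrative sketch.
STAGE_SCORES = {
    "bleeding edge": 0.0,
    "leading edge": 0.5,
    "good practice": 1.0,
}

def domain_maturity(counts: dict[str, int]) -> float:
    """Weighted mean maturity: roughly where a domain's dot sits on the axis."""
    total = sum(counts.values())
    return sum(STAGE_SCORES[stage] * n for stage, n in counts.items()) / total

# Education & Learning: 2 good practice, 11 leading edge, 2 bleeding edge
print(domain_maturity({"good practice": 2, "leading edge": 11, "bleeding edge": 2}))
# → 0.5 — mid-axis, consistent with the "mostly leading-edge" summary above
```

Under these assumed scores, the domain lands squarely in the middle of the axis; a domain dominated by good-practice techniques would sit closer to 1.0 (established).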

Education & Learning -- Biweekly Brief

The headline: AI tools are now used by 80% of students and 68% of teachers, but most schools have no governance framework and no evidence the tools improve learning. The gap between adoption and impact is the domain's defining problem.

The Picture

Most educational institutions are now using AI in some form -- for lesson planning, practice quizzes, student communications, or content generation. A small group of forward-leaning districts and universities have built the governance and training infrastructure to use these tools effectively: Khanmigo operates across 380+ U.S. districts, Gradescope spans 500+ universities, and adaptive learning platforms like ALEKS serve 7 million users. But the majority are adopting tools without formal policies (87% of schools), without teacher training (71% of teachers received none), and without evidence that the tools improve outcomes. Only 34% of teachers who use AI weekly believe it makes them more effective. The window for getting ahead of this is narrowing: three states have already signed laws mandating AI governance by July 2026, and the EU classifies educational AI as high-risk from August 2026.

This Fortnight

  • Khan Academy admitted its flagship AI tutor is not landing. Only 15% of students with access regularly engage with Khanmigo, despite 108 million cumulative interactions. The company is redesigning the platform over summer 2026. For organizations evaluating AI tutoring vendors, this is a reality check: availability does not equal adoption, and adoption does not equal learning.

  • A widely cited study claiming AI boosts learning was retracted. Springer Nature pulled a meta-analysis with 262 peer-reviewed citations for methodological discrepancies. The retraction weakens the evidence base that many institutions have relied on to justify AI investments, and it should prompt any organization citing this research to revisit its assumptions.

  • The University of Texas at Austin banned all third-party AI detection software. UT joins Vanderbilt, UCLA, Yale, Johns Hopkins, and Northwestern in formally abandoning detection tools, citing student intellectual property concerns and instructor liability. Independent testing shows detection accuracy has not improved since 2023, and expert humans outperform all commercial detectors on paraphrased text.

  • A massive study showed that small design changes drive big learning gains. A randomized trial across 160,000 students on South Africa's Siyavula platform found that simple UI nudges -- written prompts after wrong answers and visual cues -- improved student persistence by up to 11%, at negligible cost. The finding reinforces that implementation design, not AI sophistication, determines outcomes.

  • AI training simulations hit mainstream enterprise adoption. Fifty-eight percent of Fortune 500 companies now use AI-powered sales roleplay, with 91% adoption among high-performing teams and documented 3.2x ROI within 12 months. Insurance, home services, and negotiation training are the newest deployment domains, moving simulated practice well beyond its sales-training origins.

Coming Up

  • Three state AI governance deadlines hit July 2026. Idaho, Ohio, and Georgia have signed laws requiring school districts to implement AI policies and governance frameworks. Districts in these states that have not started should treat this as a compliance priority, not a technology initiative.

  • The EU AI Act classifies educational AI as high-risk from August 2026. Any organization deploying AI tutoring, assessment, or analytics tools to EU-based learners will need to demonstrate traceability, data quality, transparency, and human oversight. Training providers serving European clients should begin compliance mapping now.

  • Khan Academy is launching a structured assessment product. The new "Assessments" tool adds psychometrics and norming to Khan Academy's platform, marking a shift from tutoring into formal evaluation. Institutions using Khanmigo should watch for how this changes the product's value proposition and competitive positioning against traditional assessment vendors.

What's Hard About This

  • AI tools improve task performance but degrade learning retention. Students using AI to write essays produce better work but retain 80% less content afterward. Coding students assisted by AI score 17% lower on comprehension tests. The OECD calls this the "performance-learning paradox" -- the tools make output look better while making the learner worse. This trade-off is structural, not a bug to be fixed, and it means that any deployment without accountability mechanisms (exams, demonstrations of understanding) will undermine the learning it claims to support.

  • Demographic bias is baked into every assessment-adjacent AI tool. AI grading models give different feedback by student race and gender. Content detectors falsely accuse non-native English speakers at two to three times the rate of native speakers. Learning analytics systems miss at-risk Black and Hispanic students at nearly double the rate of white students. These are not edge cases -- they reflect training data patterns that current architectures cannot overcome without explicit, continuous bias mitigation that most institutions are not resourced to provide.

  • The research infrastructure for measuring what works is disappearing. The U.S. Department of Education's Institute of Education Sciences lost $881 million in contracts and went from 200 staff to 31. This eliminates the primary mechanism for rigorous, independent evaluation of educational technology programs. Districts are being asked to adopt AI at scale with no external capacity to verify whether it helps.


Go deeper: the full Education & Learning briefing -- the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.