Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

[Chart: each domain plotted on a maturity axis running from Bleeding Edge to Established]
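
How a "weighted maturity" dot could be computed is not spelled out on this page; the sketch below is an illustrative assumption, not the index's actual methodology. It treats each practice in a domain as carrying a maturity score and an adoption weight, and places the domain at the adoption-weighted mean of its practices.

    # Illustrative only: a domain's position as the adoption-weighted mean of its
    # practices' maturity scores (0.0 = bleeding edge, 1.0 = established).
    # Field names and the weighting scheme are assumptions, not the index's method.
    from dataclasses import dataclass

    @dataclass
    class Practice:
        name: str
        maturity: float  # 0.0 (bleeding edge) .. 1.0 (established)
        adoption: float  # relative weight, e.g. share of tracked deployments

    def domain_maturity(practices: list[Practice]) -> float:
        total = sum(p.adoption for p in practices)
        if total == 0:
            return 0.0
        return sum(p.maturity * p.adoption for p in practices) / total

    # A domain whose most-adopted practices are still early-stage sits nearer
    # the bleeding-edge end of the axis.
    coding_education = [
        Practice("AI exercise generation", maturity=0.6, adoption=0.5),
        Practice("Agentic debugging tutors", maturity=0.2, adoption=0.2),
        Practice("AI solution feedback", maturity=0.7, adoption=0.3),
    ]
    print(round(domain_maturity(coding_education), 2))  # 0.55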

Coding education & interactive exercises

TIER

Leading Edge

TRAJECTORY

Stalled

AI-powered coding education platforms that generate exercises, evaluate solutions, and provide personalised debugging guidance. Includes automated exercise generation and solution feedback; distinct from chat-based code assistance which helps working developers rather than learners.

OVERVIEW

AI-powered coding education has crossed from prototype into production at a handful of major platforms, but the field is stuck on a pedagogical paradox that limits broader institutional adoption. Sololearn, JetBrains Academy, and Codecademy all ship AI-generated exercises and personalised feedback to millions of learners, and student uptake is strong: 44% of people learning to code now use AI tools. The problem is what happens to learning. A controlled study found that AI assistance lowered concept mastery by 17% when students defaulted to code generation rather than explanation-seeking, and the exercises themselves face a validity gap: LLMs can solve the tasks they create. Forward-leaning vendors are pushing into agentic workflows (autonomous debugging, AI-guided practice courses), yet most educational institutions have not adopted these platforms, and the evidence base for learning outcomes remains thin. The defining tension is between scale and scaffolding: platforms can generate and deliver exercises efficiently, but ensuring those exercises teach rather than substitute for understanding is an unsolved design problem.
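
The self-solvability gap lends itself to an automated audit. The sketch below is a minimal illustration, assuming a generic ask_llm(prompt) call (a hypothetical stand-in, not any specific vendor's API): generate the exercise as usual, then ask the same model to solve it cold and run the bundled tests; exercises the model passes unaided need redesign before they can distinguish learner understanding from delegation.

    # Illustrative exercise "self-solvability" audit.
    # ask_llm(prompt) -> str is a hypothetical stand-in for any chat-completion call.
    import subprocess
    import tempfile

    def is_self_solvable(ask_llm, statement: str, pytest_tests: str) -> bool:
        """True if the model passes the exercise's own tests with no extra context."""
        solution = ask_llm("Solve this exercise. Return only Python code.\n\n" + statement)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(solution + "\n\n" + pytest_tests)
            path = f.name
        result = subprocess.run(["pytest", "-q", path], capture_output=True)
        return result.returncode == 0

    # Items the generating model solves outright can be flagged for redesign
    # (hidden tests, explanation requirements, Parsons-style reordering)
    # rather than shipped as-is.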

CURRENT LANDSCAPE

The vendor ecosystem is consolidating around AI-augmented practice environments. JetBrains has been the most aggressive mover, embedding agentic AI directly into its IDEs and launching structured courses on Hyperskill that pair AI agent assistance with guided exercises. Codecademy's AI Learning Assistant logged nearly a million learner conversations by the end of 2024, and JetBrains' free Student Pack now puts AI-assisted tools in the hands of over three million students globally. Developer adoption is rapid and accelerating: 64% of developers now use AI to learn coding (up from 37% in 2024), though 38% cite lack of trust as a barrier. Among students, adoption is mainstream: 69% report using AI in coursework (up from 57% in 2024). Institutional governance has not kept pace: only 31% of institutions have formal AI policies, 11% ban AI outright, and 13% remain uncertain.

The pedagogical paradox has sharpened. A March 2026 longitudinal study tracking three cohorts over three years documents that as AI became normalized, students' help-seeking practices systematically shifted, raising the core challenge of "how courses redefine productive learning practices while maintaining student agency." More troublingly, a controlled trial of 52 developers learning Python found AI-assisted learners scored 17% lower on comprehension quizzes; interaction patterns, however, determine outcomes: active engagement (asking questions, requesting explanations) yields 65-86% comprehension, while passive delegation yields 24-39%, suggesting the risk lies in usage mode, not tool presence. Code quality compounds the concern: analysis of 470 repositories shows AI-generated code carries 1.7x more bugs and 75% more logic errors than human code, and a peer-reviewed study of 11 LLMs found only 75% accuracy on structured outputs, problematic for exercises where correctness is foundational. Yet counterevidence exists: a high school study of 83 students found GenAI-assisted programming significantly improved computational thinking (p < 0.01) when deployed with real-time scaffolding.

The defining gap is between platform-scale adoption (vendors shipping AI features, learners embracing them) and institutional confidence in learning gains, which remains blocked by the absence of large-scale, independent evidence that AI-supported coding exercises improve long-term retention and transfer beyond immediate task completion. April 2026 evidence shows this gap widening. CodeSignal's AWS partnership reached 5,000 learners across 13 countries with 50,000 exercises completed and 72% platform engagement; the University of Wisconsin-Oshkosh deployed interactive AI exercises that increased course completion from 5% to 97% with measurable test-score improvements; and a 20-institution community college study integrated code animations and Parsons puzzles, correlating learning analytics with outcomes. Simultaneously, critical design analysis documents that current tools optimize for professional productivity rather than learning, and reliability benchmarks confirm teaching and learning tasks remain among the least reliable AI applications (0.67/1 accuracy).

The practice is in a mature platform-deployment phase with measurable adoption gains, but the central tension remains unresolved: vendors ship at scale and institutions adopt incrementally, while the pedagogical and technical validation that would justify mainstream institutional investment still lags deployment.
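
The usage-mode finding maps onto a design lever platforms control directly: whether the assistant may hand over full solutions at all. Below is a minimal sketch of a graduated-hint policy of the kind the scaffolding evidence points toward; ask_llm(system, user) is a hypothetical call and the ladder wording is illustrative, not any vendor's documented configuration.

    # Illustrative graduated-hint tutoring policy: escalate from conceptual nudges
    # to pseudocode, revealing code only after repeated failed attempts.
    # ask_llm(system, user) -> str is a hypothetical LLM call.
    HINT_LADDER = [
        "Give a one-sentence conceptual hint. Do not mention code.",
        "Point to the specific line or construct to re-examine. No corrected code.",
        "Give pseudocode for the missing step. Still no runnable code.",
        "Show a minimal corrected snippet and explain why it works.",
    ]

    def tutor_reply(ask_llm, exercise: str, attempt: str, failed_attempts: int) -> str:
        level = min(failed_attempts, len(HINT_LADDER) - 1)
        system = ("You are a programming tutor. " + HINT_LADDER[level]
                  + " Always end by asking the learner to explain their next step.")
        user = f"Exercise:\n{exercise}\n\nLearner's attempt:\n{attempt}"
        return ask_llm(system, user)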

TIER HISTORY

Research: Jan-2023 → Jan-2023
Bleeding Edge: Jan-2023 → Apr-2025
Leading Edge: Apr-2025 → present

EVIDENCE (85)

— Maier et al. meta-analysis of 23 studies shows moderate productivity gains (g=0.33) but no significant learning outcome improvements (g=0.14) in educational coding contexts, directly addressing the core tension between tool scale and pedagogical efficacy (a worked effect-size example follows this list).

— Springer Nature retracted widely-cited meta-analysis claiming ChatGPT produces large positive learning effects, exposing methodological flaws and weak evidence standards in AI education research, signaling need for rigorous validation before deployment.

— Meta-analysis of academic literature on ChatGPT in programming education reveals dual positioning: learning aid for feedback/efficiency versus pedagogical risk from overreliance and unreliable outputs, reflecting field maturation to leading-edge stage.

— Synthesis of OECD, Stanford, and systematic review evidence emphasizes that performance gains do not equal durable learning and that pedagogical constraints (hint-based assistance vs. direct answers) determine whether AI tools support or undermine skill development.

— UC San Diego researcher documents critical assessment validity failure: students score well on AI-assisted assignments but 33% fail proctored skill demonstrations, revealing that interactive exercises no longer measure competency when AI access is implicit.

— Urozboyev systematic review directly evaluates intelligent programming platforms (Codecademy, LeetCode, Codio, GitHub Copilot) and reports positive knowledge/motivation effects but critical concerns about over-dependence and diminished critical thinking.

— Elezaj proceedings propose pedagogical frameworks (PBL, design thinking, computational thinking) for AI-integrated coding education that balance efficiency gains with critical thinking preservation, addressing core design tension in interactive exercise platforms.

— AWS partnership deployed CodeSignal's AI-native platform serving 5,000 learners across 13 countries with 50,000 exercises completed and 72% platform engagement, demonstrating production-scale interactive coding education with Socratic AI tutoring.
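
The Maier et al. figures above are Hedges' g values, a bias-corrected standardized mean difference between treatment and control groups; the worked example below uses made-up group statistics (not drawn from the study) purely to show the scale of a "moderate" g around 0.33.

    # Worked example of Hedges' g, the effect-size metric behind "g=0.33" / "g=0.14".
    # The group numbers are invented for illustration; they are not from Maier et al.
    from math import sqrt

    def hedges_g(m1, s1, n1, m2, s2, n2):
        pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / pooled_sd                 # Cohen's d
        correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
        return d * correction

    # Hypothetical AI-assisted vs. control groups on a productivity measure:
    print(round(hedges_g(74, 12, 40, 70, 12, 40), 2))  # ~0.33 (moderate); ~0.14 is small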

HISTORY

  • 2023-H1: Peer-reviewed case studies documented ChatGPT's mixed impact on self-regulated learning in programming—strong on conceptual guidance but weak on assessment. Purdue's AI-Lab framework proposed structured integration into courses, with 48.5% of students already using GenAI for assignments. JetBrains Academy and Sololearn released GA updates with improved feedback loops, while Codecademy published critical guidance emphasizing AI's limitations for foundational skill development.

  • 2023-H2: Deployed systems matured: CodeAid's 700-student classroom pilot revealed learner demand for accessible AI guidance alongside educator concerns about accuracy. JetBrains and Sololearn shipped integrated "Code with AI" features. Research documented technical barriers (ChatGPT code accuracy declining through June) and pedagogical fracture—instructors across nine countries split between banning AI to teach fundamentals versus integrating it for industry-readiness, with no consensus on gating strategies or assessment frameworks.

  • 2024-Q1: Exercise generation research advanced with empirical studies on GPT-4 personalization and ChatGPT deployment at scale. A meta-review of 21 papers confirmed exercise generation and evaluation as the dominant use cases, but quality remains surface-level: students cannot distinguish AI-generated from human-written exercises, yet longitudinal data showed adoption declining within 8 months. Codecademy and other platforms doubled down on AI-powered case study exercises while practitioners documented overreliance risks and accuracy limitations as the primary barriers to broader institutional adoption.

  • 2024-Q2: Major vendors intensified platform integration: Codecademy deployed its AI Learning Assistant, and JetBrains Academy shipped 2024.5 with collaboration features. Empirical research confirmed high-quality GPT-4 exercise generation and strong user engagement. However, a critical limitation emerged: research found LLMs can solve their own generated exercises, creating pedagogical validity concerns. A market challenge also surfaced: Replit discontinued its free Teams for Education service due to infrastructure costs. University studies (Twente, others) documented that the majority of current exercises are solvable by ChatGPT/Copilot, forcing curriculum redesign decisions. Adoption remains concentrated among early adopters; broader institutional confidence depends on addressing exercise self-solvability and the economics of free platforms.

  • 2024-Q3: Vendor momentum continued: JetBrains Academy added AI-documented projects and AI topics, Codecademy continued AI Learning Assistant rollout. Real-world deployments expanded: CodeSignal pilots showed 86% of developers reported faster learning with AI tutoring (Cosmo), while William & Mary's CodeTutor deployment demonstrated mixed outcomes—improved scores but declining utility for advanced tasks and 63% of prompts rated unsatisfactory. Peer-reviewed research deepened critical analysis: EDM 2024 found ChatGPT excels in data analysis (93.1% accuracy) but fails on visual tasks; MIT's controlled experiment showed AI-assisted students solved problems fastest but failed retention tests while traditional learners passed—highlighting the core pedagogical tension. Quality assessments (LatIA) revealed significant variations across code generation tools. The window revealed a widening gap between vendor enthusiasm and academic evidence of learning effectiveness.

  • 2024-Q4: Vendors consolidated platform maturity and expanded reach: Codecademy's AI Learning Assistant achieved 976,331 learner conversations (270K+ users), while JetBrains' survey of 23,991 learners showed 28% planning AI-focused courses and 33-34% already exploring AI in coding education. New ecosystem entrants like BootSelf launched commercial AI tutoring with personalized learning paths. Research continued validating exercise generation techniques: BugSpotter demonstrated that LLM-generated debugging exercises matched instructor-created ones in pedagogical effectiveness when properly designed. Community-driven open-source tools (GitHub AI-Coding-Tutor) showed ongoing developer interest in interactive AI education infrastructure. By year-end 2024, the practice had shifted from capability validation (2023) through deployment proof (mid-2024) to scale and refinement—platforms handling millions of interactions, learners embracing AI-assisted education at global scale, and research focus turning from "can we do this?" to "how do we ensure pedagogical soundness at scale?"

  • 2025-Q1: Exercise generation matured as a research domain with empirical validation of personalized AI-created tasks. Tutor Kai study demonstrated 89.5-92.5% quality on AI-generated programming exercises with high student satisfaction, while Hour of Code analysis revealed systematic gaps in AI beginner activities (growth from 6 to 47 activities but persistent emphasis on perception over hands-on reasoning). Sololearn continued platform expansion with 35M+ learners. Critical tension remained unresolved: research validates that AI-generated exercises can reach production quality, but pedagogical barriers persist (reasoning complexity, exercise self-solvability by LLMs, sustainability of free platforms). Adoption momentum continued at platform scale despite underlying efficacy questions.

  • 2025-Q2: Capability expansion continued alongside critical evidence of technical and security limitations. JetBrains Academy and Microsoft Education released new AI-powered features (hints, Copilot Chat GA for teens), signaling continued vendor investment. Adoption broadened: a Cengage survey showed 63% of K-12 teachers and 49% of higher-education instructors using GenAI in teaching, with specific use in course content (45%), lesson planning (42%), and quizzes (39%). However, peer-reviewed benchmarking delivered stark findings: LiveCodeBench Pro (8 universities) showed frontier models achieve only 53% accuracy on medium problems and 0% on hard problems; ChatGPT error analysis documented 10-50% failures in coding/testing; UTSA security research found significant vulnerabilities in AI-generated code. These findings sharply highlighted the core tension: platforms ship AI features at scale and adoption metrics climb, but underlying reliability and security concerns remain unresolved, with pedagogical validity questions (exercise self-solvability, overreliance risks) persisting.

  • 2025-Q3: Vendor momentum accelerated—JetBrains launched free Student Pack integrating AI Assistant for 3M+ students globally, achieving broad platform maturity. Student adoption saturation confirmed: HEPI survey (August 2025) showed 92% of UK HE students using GenAI (up 26 points YoY), 88% for assessments. However, reliability and pedagogical validation evidence worsened sharply. MIT CSAIL research (July 2025) documented fundamental hallucination and communication barriers in AI on large codebases; Poldrack practitioner analysis (July 2025) revealed specific AI-generated test failures (incorrect assertions, wrong constants); Code.org classroom reports (August 2025) showed AI Tutor integration bugs and curriculum misalignment; peer-reviewed assessment research identified model deception vulnerabilities and student dependency risks. Core tension unresolved: platforms ship reliably, users adopt widely, but underlying quality, security, and pedagogical validity of generated exercises and tests remain undocumented and problematic. Early adopters continue integration despite evidence gaps; mainstream institutional confidence depends on demonstrable exercise quality and learning outcome validation.

  • 2026-Jan: Platform integration accelerated toward autonomous agents—JetBrains integrated OpenAI Codex directly into IDEs for autonomous debugging and refactoring (January 22-26, 2026), while Hyperskill launched structured courses with AI agent assistance. Learner adoption remained mainstream: Stack Overflow survey (January 2026) showed 44% of those learning to code used AI tools, up from 37% in 2024. However, learning outcome evidence darkened: controlled study (January 31, 2026) documented that AI assistance led to 17% lower mastery on concept quizzes, with strategic prompting required to mitigate losses. Code quality analysis of 470 repositories revealed AI-generated code produces 1.7x more bugs than human code, with 75% higher logic errors—a critical signal for exercises where correctness is pedagogical foundation. The bifurcation persisted: vendors ship agents and automation, learners adopt continuously, but mounting empirical evidence (learning outcome losses, code quality deficits) signals that unrestricted AI assistance carries measurable pedagogical and technical risks. Institutional adoption remains early-adopter only; mainstream deployment blocked by evidence of learning outcome and code quality concerns.

  • 2026-Feb: Vendor platform integration continued at scale with minimal new ecosystem changes. JetBrains Academy released 'Learn AI-Assisted Programming With Junie,' a partnership course with Nebius, signaling ongoing agentic AI expansion in vendor offerings. However, the month produced no substantive new adoption metrics, deployment case studies, or empirical learning outcome data specific to interactive coding education. Broader education sector data (Coursera, EdWeek, Microsoft) documented general AI adoption in K-12 and higher education (80-95% of teachers/learners using AI tools), alongside persistent barriers: lack of professional development (44% of educators), unclear policies (only 13% have formal AI policies), and mixed sentiment (47% educators negative on AI impact in 5-year outlook). The practice remained in mature deployment phase with learner adoption normalized, but the absence of new pedagogical efficacy or learning outcome evidence for February reinforced the core tension: vendors continue to scale AI-integrated exercise platforms, but independent validation of learning gains—essential for mainstream institutional confidence beyond early adopters—remained absent.

  • 2026-Mar: A cluster of new research sharpened the central pedagogical tension. A three-year longitudinal study confirmed that as generative AI normalised in introductory programming courses, student help-seeking practices systematically shifted — raising unresolved questions about how to maintain agency and productive struggle. A survey of 50 educators and 90 students mapped the core design conflict: educators prefer indirect scaffolding that preserves reasoning, students prefer direct actionable answers. Meanwhile, a high school study (n=83) found GenAI-assisted programming significantly improved computational thinking (p < 0.01) when used with real-time scaffolding, providing a positive counterpoint. Adoption continued to surge — 64% of developers now use AI to learn coding (up from 37% in 2024) — but a University of Waterloo benchmark found only 75% accuracy on structured outputs across 11 models, reinforcing that tool reliability gaps remain a pedagogical concern.

  • 2026-Apr: Trust and deployment quality emerged as critical adoption barriers. A quasi-experimental study (n=82) of LLM-supported collaborative C++ learning showed significant computational thinking gains and lower cognitive load in the LLM group, providing direct positive evidence of real K-12 deployment effectiveness. Concrete deployment wins reinforced the positive case: CodeSignal's AWS partnership reached 5,000 learners across 13 countries with 50,000 exercises completed and 72% platform engagement; University of Wisconsin-Oshkosh deployed AI-powered interactive exercises that lifted course completion from 5% to 97% with measurable test score improvements. However, multiple lines of evidence exposed systemic reliability concerns: a 172-billion-token hallucination benchmark found even leading models fabricate details at 10%+ rates under longer context windows; ChatGPT code quality assessment across three knowledge levels showed severe degradation with specialization (82% on basics → "blatantly wrong" on advanced topics); and field analysis of 450 engineers found 19.7% of AI-recommended packages hallucinated with 58% repeating. Developer sentiment revealed an adoption-trust gap: 84% of engineers use agentic AI tools, but only 3% highly trust output. Institutional barriers persisted: IT instructor survey (n=105) found competence was not the blocker — external factors (academic dishonesty risk, licensing, data privacy) prevented integration. The month reinforced the core tension: platforms deploying at scale with measurable completion and engagement gains, but pedagogical design analysis confirmed current tools optimize for professional productivity rather than learning, leaving the exercise-quality validation gap unresolved.

  • 2026-May: A meta-analysis of 23 studies (Maier et al.) found generative AI in programming education yields moderate productivity gains (g=0.33) but no significant learning outcome improvements (g=0.14), directly quantifying the productivity-vs-learning gap that has been the field's central tension. Simultaneously, Springer Nature retracted a widely-cited meta-analysis claiming large positive ChatGPT learning effects, exposing weak evidence standards in AI education research and raising the bar for deployment validation claims.