Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

[Chart: each domain plotted on a scale from BLEEDING EDGE to ESTABLISHED]

AI-assisted test generation

TIER

Bleeding Edge

TRAJECTORY

Stalled

AI that generates unit, integration, or end-to-end tests from source code, requirements documents, or API specifications. Includes tools that generate test suites from implementations, PRDs, and OpenAPI specs; distinct from adversarial test generation, which targets fault discovery rather than coverage.
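
Concretely, most of these tools share one control loop: prompt a model with the code under test, then gate the generated suite on an actual run before anyone keeps it. Below is a minimal sketch of that loop, assuming Python and pytest; generate_tests stands in for whatever model call a given tool makes, and no vendor API is implied.

    import subprocess
    from pathlib import Path
    from typing import Callable

    PROMPT_TEMPLATE = """Write pytest unit tests for the following module.
    Cover normal paths, edge cases, and error handling.
    Return only Python code.

    {source}
    """

    def generate_and_validate(
        module_path: Path,
        generate_tests: Callable[[str], str],  # stand-in for any model call
    ) -> str | None:
        """Generate a candidate test file for module_path; keep it only if it runs green."""
        prompt = PROMPT_TEMPLATE.format(source=module_path.read_text())
        test_code = generate_tests(prompt)

        # Write the candidate suite next to the module so imports resolve.
        test_file = module_path.parent / f"test_{module_path.stem}_generated.py"
        test_file.write_text(test_code)

        # Gate acceptance on a clean pytest run: a red generated suite is
        # discarded rather than committed.
        result = subprocess.run(["pytest", str(test_file), "-q"],
                                capture_output=True, text=True)
        if result.returncode != 0:
            test_file.unlink()
            return None
        return test_code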

OVERVIEW

AI-assisted test generation uses machine learning to produce unit, integration, or end-to-end tests from source code, requirements, or API specifications. The practice is split in two, and the halves are moving in opposite directions. Specialized platforms like Diffblue Cover have secured real enterprise deployments — financial services firms report coverage jumps from under 20% to over 80% on legacy codebases — and Gartner now explicitly recommends AI-assisted test generation for safe legacy refactoring. Agentic testing, where autonomous agents discover flows, generate cases, and triage failures, has moved from concept to operational pilot. General-purpose AI assistants, however, remain stalled: 75% of organisations discuss AI testing but only 16% deploy it, developer trust sits at 3%, and practitioners report steep maintenance costs that erode initial productivity gains. The defining tension is whether the specialized vanguard can pull broader adoption forward, or whether the trust and governance deficits on the general-purpose side will keep most teams on the sidelines. For now, this remains a bleeding-edge practice: proven value exists, but only for organisations willing to invest in purpose-built tooling and formal guardrails.

CURRENT LANDSCAPE

Specialized platforms and autonomous agents are advancing toward governed production deployments. Diffblue Cover (the Java market leader) maintains product velocity: its Q1 2026 release expanded Gradle 9.x and Scala support and added merge-mode test maintenance, with financial services firms reporting 26x productivity over general-purpose AI and coverage jumps from under 20% to over 80% on legacy systems. Youzan, Ctrip, and China Unicom show adoption extending beyond Western finance into e-commerce and telecoms. Autonomous agents reached production maturity: the TestMu/KaneAI platform processed 1.5B tests across 250K users, with Boomi reporting 78% faster execution; Gartner Challenger and Forrester recognition signal analyst validation. Multi-agent architecture (planner, generator, runner, analyser) emerged as the 2026 standard, with Gartner projecting 33% of applications running agentic AI by 2028. Domain-specific platforms (Panaya for SAP/ERP, CasePilot for Azure DevOps) built on ISTQB methodologies show product maturity in enterprise toolchains. Agentic test generation reached operational production in niche sectors: game studios generate tests from design docs in hours, OpenObserve scaled from 380 to 700+ tests with an 85% reduction in flaky tests via Claude Code and systematic governance, and Axelerant closed zero-coverage gaps across NextJS/Strapi/Magento in 48 hours. Market sizing: automation testing $25.4B (2026) → $69.2B (2033); AI-enabled testing $3.6B → $6.9B (2036), with unit testing at 40% of the segment, driven by CI/CD acceleration and talent shortages.
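
As a rough illustration of the multi-agent pattern named above, the sketch below reduces each role (planner, generator, runner, analyser) to a plain Python function with a feedback loop from analyser back to generation. All names and the stubbed logic are illustrative, not any vendor's architecture.

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        target: str               # flow or function under test
        code: str                 # generated test body
        passed: bool | None = None
        failure_log: str = ""

    def plan(spec: str) -> list[str]:
        """Planner: derive test targets from a spec or discovered flows (stubbed)."""
        return [line.strip() for line in spec.splitlines() if line.strip()]

    def generate(target: str) -> TestCase:
        """Generator: produce a concrete test for one target (a model call in practice)."""
        return TestCase(target=target, code=f"# test for {target}")

    def run(case: TestCase) -> TestCase:
        """Runner: execute the test and record the outcome (stubbed as always-pass)."""
        case.passed = True
        return case

    def analyse(cases: list[TestCase]) -> list[str]:
        """Analyser: triage failures into retry targets, dropping flaky noise."""
        return [c.target for c in cases if c.passed is False]

    def pipeline(spec: str, max_rounds: int = 3) -> list[TestCase]:
        targets = plan(spec)
        accepted: list[TestCase] = []
        for _ in range(max_rounds):
            cases = [run(generate(t)) for t in targets]
            accepted += [c for c in cases if c.passed]
            targets = analyse(cases)   # feedback: failed targets go back to generation
            if not targets:
                break
        return accepted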

General-purpose tools signal mainstream maturity, but production deployment gaps persist. GitHub Copilot Workspace (March 2026) generated test suites at 85% average coverage; 94% of testers use AI, 45% specifically for test generation. Yet structural barriers remain critical: 75% of organisations discuss AI testing but only 16% deploy it, 60% lack code review processes for AI-generated tests, and empirical data reveals systematic failure modes. Quality gaps emerged sharply in April 2026: a Lightrun SRE survey (200 leaders) showed 43% of AI-generated code still fails in production after QA, with 88% requiring 2–3 redeploy cycles per change, a 49% failure rate on deployment, and 15–18% more security vulnerabilities than human-written code. Practitioner research (SWE-bench Verified) documented AI-generated tests missing 62.5% of failure classes (cascade-blindness, where tests miss the impact of a change on related functions); TestSprite analysis (470 GitHub PRs) confirmed 1.7× more bugs, 2.74× more XSS vulnerabilities, and OWASP Top 10 flaws in 45% of cases. Maintenance costs materialized: $500–800/month token spend, 60% test redundancy, 4+ hours of debugging per generation cycle. The governance gap: self-healing automation masks defects rather than exposing them, and lack of business context leaves domain logic unvalidated (e.g., credit risk rules, regulatory compliance). The strategic finding: maximum value accrues to mechanical scaffolding (structure, boilerplate) while strategic decisions (what to test, priorities, design) require human judgment. The negative signal is strong: testRigor documented four failure categories (business context gaps, historical-data over-reliance, self-healing masking, integration complexity) that explain why adoption barriers are structural, not technical. Until governance infrastructure and quality-confidence baselines mature, most teams will continue rewriting AI-produced tests pre-production.
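
The scaffolding-versus-judgment split is easiest to see in a concrete test. Here is a hedged sketch of that division of labour in pytest, with a toy system under test standing in for real domain code (evaluate_credit and its score-floor rule are invented for illustration): the fixture, naming, and act step are the mechanical parts a tool can generate; the assertion stays gated behind a skip until a human encodes the business rule independently of the implementation.

    from dataclasses import dataclass
    import pytest

    # Toy system under test; evaluate_credit and its rule are invented here.
    @dataclass
    class Decision:
        approved: bool
        reason: str

    def evaluate_credit(app: dict) -> Decision:
        return Decision(approved=app["credit_score"] >= 620, reason="score floor")

    # AI-generatable scaffold: fixture, naming, and the act step are mechanical.
    @pytest.fixture
    def boundary_application():
        return {"amount": 250_000, "term_months": 360, "credit_score": 620}

    # Human gate: skipped until the policy owner supplies the expected outcome.
    @pytest.mark.skip(reason="assertion pending policy-owner review")
    def test_credit_decision_at_score_floor(boundary_application):
        decision = evaluate_credit(boundary_application)
        # HUMAN-OWNED: a tool tends to assert whatever the implementation
        # currently returns, which validates nothing. The expected value must
        # come from the business rule, stated independently of the code.
        assert decision.approved is True  # placeholder until policy is confirmed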

  • 2026-May: Meta's TestGen-LLM case study emerged as the best-documented industrial deployment, with 75% production test acceptance and 10%+ measurable coverage gains on Instagram/Facebook test-a-thons, grounding the tier classification in deployment at real scale. Concurrently, Amazon.com's March 2026 outage (6.3M lost orders from untested AI-generated code changes) provided a critical negative signal and catalyzed structural scrutiny: 43% of AI-generated code requires production debugging post-QA (Lightrun survey of 200 SRE leaders); 70% of organisations have AI vulnerabilities in production; developer trust dropped to 29% (Stack Overflow). Independent developer deployments validated agentic potential: TestSprite generated 47 Next.js integration tests in 12 minutes versus an estimated 3 months of manual work, with auto-repair on UI changes, though locale-handling gaps (date/currency formatting) required manual validation by international teams. Regression bloat emerged as a critical adoption barrier (Ministry of Testing): AI-driven test expansion creates CI/CD delays (financial services: 10k+ tests requiring days to run), throttling the promised speed gains; the emerging remedy is Test Impact Analysis for selective execution (see the sketch below). Market sizing (Mordor Intelligence): AI-powered testing $11.99B (2026) → $39.43B (2031) at 26.88% CAGR; 61% of enterprises run AI test engines at every dev stage; AI contract-testing reduces microservice defects by 40% in production studies. Bifurcation persists asymmetrically: specialized and agentic platforms demonstrate governed production adoption with validated ROI; general-purpose tools are mainstream in awareness but stalled by quality-confidence gaps, security vulnerabilities (45% OWASP Top 10 flaws), governance complexity, and maintenance costs ($500–800/month token spend, 60% redundancy) that block enterprise-scale real-world deployment outside niche sectors.
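
Test Impact Analysis itself is simple in outline: record which source files each test exercised on the last full run, diff the change set against that map, and run only the intersection. A minimal sketch, assuming Python, git, and pytest; the coverage-map format, paths, and test ids below are illustrative data, not real measurements.

    import subprocess

    def changed_files(base: str = "origin/main") -> set[str]:
        """Files touched since the base branch, per git diff."""
        out = subprocess.run(
            ["git", "diff", "--name-only", base],
            capture_output=True, text=True, check=True,
        ).stdout
        return {line.strip() for line in out.splitlines() if line.strip()}

    def select_impacted(coverage_map: dict[str, set[str]], changes: set[str]) -> set[str]:
        """Tests whose recorded per-test coverage intersects the change set."""
        return {test for test, files in coverage_map.items() if files & changes}

    if __name__ == "__main__":
        # Built from a prior full run with any per-test coverage tool;
        # the map below is illustrative.
        coverage_map = {
            "tests/test_checkout.py::test_totals": {"src/cart.py", "src/pricing.py"},
            "tests/test_auth.py::test_login": {"src/auth.py"},
        }
        impacted = select_impacted(coverage_map, changed_files())
        if impacted:
            # Run only what the diff can affect instead of the full suite.
            subprocess.run(["pytest", *sorted(impacted)], check=False)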

TIER HISTORY

Research: Sep-2022 → Sep-2022
Bleeding Edge: Sep-2022 → present

EVIDENCE (91)

— Amazon.com March 2026 outage (6.3M lost orders, 99% marketplace downtime) traced to untested AI-generated code; Lightrun survey: 43% of AI code needs production debugging; 70% of orgs have AI vulnerabilities in production.

— Meta TestGen-LLM case study: 75% test acceptance rate and 10%+ coverage gains on Instagram/Facebook; an RCT showed a 19% performance slowdown despite developer perception of a 20% speedup, revealing a critical perception-reality gap.

— Independent Next.js SaaS deployment: 47 integration tests generated in 12 minutes vs 3 months manual; auto-repair of broken selectors on UI changes; identified locale-handling gaps requiring international team validation.

— Industrial evaluation of autonomous LLM-based test repair on 636 test cases: only 10% first-attempt success, 70% repair convergence at the scenario-family level, and 38% failing to produce executable artifacts; documents assertion weakening as a workaround.

— Ministry of Testing identifies a critical adoption barrier: AI test expansion creates CI/CD bottlenecks; a case study of a financial services firm with 36 microservices and 10k+ tests describes days-long regression cycles that negate the promised speed gains.

— Market projection: $11.99B (2026) → $39.43B (2031) at 26.88% CAGR; 61% of enterprises run AI test engines on every dev stage; AI contract-testing reduces microservice defect rates by 40% in production studies.

— Industry coverage of the agentic testing evolution, with Gartner's projection that 33% of applications will run agentic AI by 2028; documents the multi-agent architecture (planner, generator, runner, analyser) emerging as standard.

— Azure DevOps extension with three-pass quality validation (Worker, Judge, Optimizer) implementing ISTQB techniques, showing product maturity in mainstream enterprise toolchains (the pattern is sketched below).
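
For illustration, a generate-critique-revise loop in the Worker/Judge/Optimizer style reduces to a few lines of control flow once each pass is treated as a callable. This sketch assumes nothing about the actual extension's internals; the names and round budget are illustrative.

    from typing import Callable

    def three_pass(
        requirement: str,
        worker: Callable[[str], str],                 # pass 1: draft test cases
        judge: Callable[[str], list[str]],            # pass 2: critique vs. checklist
        optimizer: Callable[[str, list[str]], str],   # pass 3: revise per critique
        max_rounds: int = 2,
    ) -> str:
        """Generate-critique-revise control flow in the Worker/Judge/Optimizer style.

        Each role is a callable so the loop stays visible; in a real extension
        each pass would be a separate model invocation, with the judge checking
        ISTQB-style criteria (boundary values, equivalence classes, and so on).
        """
        draft = worker(requirement)
        for _ in range(max_rounds):
            findings = judge(draft)
            if not findings:            # judge satisfied: accept the draft
                return draft
            draft = optimizer(draft, findings)
        return draft                     # best effort once the round budget is spent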

HISTORY

  • 2022-H2: Diffblue Cover released incremental GA updates with improved test assertions for mutated arrays and better IDE integration; GitHub Copilot Chat demonstrated test generation capability within its IDE interface. Critical analyses noted that ~85% of AI projects fail, with barriers around data drift, cost, and pilot-to-production transitions affecting adoption broadly, including test generation tools.
  • 2023-H1: Real-world adoption accelerated in enterprise sectors (Financial Services, Banking, IT), with Diffblue Cover customers reporting significant time savings. Industry surveys identified strong demand drivers—36% of developers find manual testing most time-consuming—but adoption remained selective. Critical assessments highlighted that AI test automation requires human oversight and cannot fully replace domain expertise, especially for complex systems.
  • 2023-H2: Diffblue Cover achieved AWS Marketplace listing and released updates for modern frameworks (Spring Core 6, Java 17 Records), while expanding CI/CD integration via GitHub Actions. Industry survey data showed 78% of testers adopted AI, with 46% using it for test case generation, validating broad market adoption. However, user reports surfaced technical frictions—Android incompatibilities, connection failures—and practitioner analyses highlighted fundamental LLM limitations (hallucination, inconsistency) constraining autonomous reliability. Adoption remained selective and operator-dependent despite commercial maturity.
  • 2024-Q2: Peer-reviewed empirical research quantified reliability limitations of general-purpose AI: 92.45% of GitHub Copilot's generated tests failed when run in isolation, and pass rates reached only 45.28% even with test-suite context. Comparative studies confirmed that specialized tools (Diffblue Cover) significantly outperformed general-purpose LLMs. Developer adoption broadened (76% using AI assistants) with test generation identified as a prioritized use case, yet 38% reported inaccurate outputs. Diffblue Cover maintained product evolution with enum support and enhanced analytics. The market showed bifurcation: specialized platforms gaining traction in regulated sectors while general-purpose adoption remained broad but shallow in test-generation confidence.
  • 2024-Q3: Academic reviews catalogued 100+ AI test automation tools; specialized platforms (Diffblue Cover, Applitools, Testim) demonstrated production ROI, with enterprise deployments achieving 70%+ coverage gains and reduced outages. However, practitioner and industry assessments revealed sharp adoption barriers: only 16% of organisations found testing efficient; 85% integrated AI tools but 68–73% experienced reliability issues; Gartner predicted 30% of GenAI projects would be abandoned post-POC by end of 2025 due to cost and ROI uncertainty. A systematic review of 55 tools confirmed efficiency gains but persistent gaps in false positives, domain knowledge, and contextual understanding. Bifurcation deepened: specialized tools gaining a foothold in Financial Services, Banking, and large enterprise; general-purpose adoption broad but shallow in confidence for production deployment.
  • 2024-Q4: Diffblue raised $6.3M, confirmed service to 10+ of the largest US banks and Fortune 500 firms, and secured integrations into GitLab 17.0 CI/CD and AWS Marketplace. Product maturation continued (November release: Mockito support, Developer Edition GA). However, a critical production-readiness gap emerged: an Economist Impact/Databricks survey of 1,100 executives (November) showed 85% of enterprises using GenAI but only 37% confident their applications are production-ready, with cost, skills gaps, and quality concerns cited as barriers. Practitioner experience confirmed the friction: 16% found testing efficient; 85% had integrated AI but 68–73% faced reliability issues. Bifurcation consolidated: specialized tools securing enterprise deployments with documented ROI, general-purpose adoption remaining experimental and shallow in production confidence.
  • 2025-Q1: Diffblue Developer Edition GA launch broadened accessibility for individual developers. ICEIS 2025 peer-reviewed research confirmed AI assistants (Copilot, ChatGPT, Gemini) drive productivity gains but require constant code review. Critical new signal: developer trust in AI-generated code collapsed to 3% (down from 40%), with 46% actively distrusting outputs despite 90%+ continued tool use—revealing a widening gap between adoption and confidence. Professional testers remained selective: 45.65% had not adopted AI tools; among adopters, 40.58% used AI for test case creation. Market grew from $0.7B (2024) to $0.86B (2025) with projection to $1.9B by 2029, driven by autonomous testing agents and predictive models. Bifurcation sharpened: specialized tools consolidating enterprise footholds with validated ROI; general-purpose adoption facing plateau as developer trust eroded.
  • 2025-Q2: Specialized platforms (Diffblue, Applitools) accelerated enterprise deployments in Financial Services, with NextWave consulting partnerships reporting 26x productivity gains over general-purpose AI and coverage increases from 20% to 80% on legacy systems. Diffblue released product enhancements (Optional type handling, JaCoCo coverage targeting) and maintained GitLab/AWS ecosystem integrations. General-purpose adoption entered an erosion phase: California Management Review analysis of Q2 2025 surveys showed only 4% of organisations with cutting-edge GenAI capabilities and 74% of leaders reporting little progress; developer trust remained at 3%; 65% reported AI test generation missing critical context; 68%+ of enterprises faced reliability issues with integrated AI tools. Bifurcation calcified: specialized tools validating ROI, general-purpose adoption actively declining as developer confidence collapsed.
  • 2025-Q4: Diffblue released its next-generation platform (November 2025), adding Test Asset Insights, LLM-Augmented Intelligence, and JUnit 6 support, maintaining its 20x productivity claim over general-purpose AI assistants and targeting Java modernization programs. Industry adoption remained broad (81% of teams use AI testing tools) but maintenance challenges persisted: practitioners documented specific costs ($500–800/month token usage, 60% test redundancy, 4+ hours of debugging per 30-second code generation burst). The adoption-reality gap widened: 75% of organisations discuss AI testing but only 16% actually deploy it. Over 70% of developers continue rewriting AI-generated test code before production, signaling persistent quality-confidence gaps. Bifurcation deepened asymmetrically: specialized tools cementing their enterprise foothold with validated ROI; general-purpose adoption broadening in discussed intent but narrowing in production confidence and maintenance feasibility.
  • 2026-Jan: Diffblue released Q1 2026 platform updates expanding Java ecosystem support (Gradle 9.x, Mockito 5.21.0, Scala projects) and introducing test maintenance features (merge-mode, @WriteTestsTo annotation). Gartner's January modernization research explicitly recommended AI-assisted test generation for safe legacy refactoring, projecting 90% of modernization projects will use AI-augmented tools by 2029. Agentic test generation (autonomous agents generating test cases from natural language, discovering flows, executing suites, auto-triaging failures) shifted into operational pilots: game studios deployed production-grade test case generation from design docs, automating triage from hours to minutes. General-purpose adoption remained stalled: developer trust near zero (3%), 65% reporting AI-generated tests missing context, 75% of teams discussing AI testing but only 16% deploying it. Cost barriers materialized: $500–800/month token costs, 60% redundancy, 4+ hours of debugging per burst. Bifurcation advanced asymmetrically: specialized platforms and agentic pilots progressing toward governed autonomous testing; general-purpose adoption blocked by quality-confidence gaps and deployment barriers.
  • 2026-Feb: Diffblue advanced merge-mode and @WriteTestsTo features, reducing test redundancy in integrated workflows. Named deployments reported by WeTest (Youzan e-commerce, Ctrip travel, China Unicom telecom) demonstrated the evolution from AI-assisted to autonomous testing. A BrowserStack survey of 250+ leaders revealed 64% achieving ROI of over 51% from AI testing, but 37% identified tool integration as the primary barrier. Developer surveys showed 71% using AI for unit tests, yet governance gaps remained critical: 60% of organisations lacked code review processes for AI-generated tests, and research demonstrated LLMs failed fault localization on semantic-preserving code changes 78% of the time. The production-deployment gap persisted: broad discussion masked weak real-world adoption. Bifurcation stabilized: specialized platforms consolidating their enterprise foothold with validated ROI; general-purpose adoption stalled by governance, quality-confidence, and maintenance cost barriers.
  • 2026-Mar: GitHub Copilot Workspace launched AI-powered test suite generation (March 28) with 85% average coverage on real projects, signaling mainstream general-purpose tool maturation in test generation. Diffblue Testing Agent reached GA (March 10), with a benchmark showing 80.7% line coverage versus 32.3% for senior developers, demonstrating the viability of autonomous orchestration. Named production deployment: OpenObserve scaled its test suite from 380 to 700+ tests (84% growth) using Claude Code, with an 85% reduction in flaky tests and feature analysis time compressed from 45–60 minutes to 5–10 minutes, paired with systematic quality governance (mutation testing, coded rules). Market data remained bifurcated: 89% piloting or deploying GenAI QE (37% production, 52% pilot) but only 15% at enterprise-scale deployment, with integration (37%), not technology, as the primary barrier. Strategic insight from the World Quality Report: value accrues to teams using AI for mechanical scaffolding (test structure, boilerplate) while reserving strategic decisions (what to test, coverage priority, design) for human judgment. Cost barriers persisted: $500–800/month token spend, 60% redundancy, 4+ hours of debugging per generation cycle. Bifurcation evolved: specialized platforms and autonomous agents progressing toward governed production; general-purpose adoption mainstream in awareness (93% use AI testing tools) but maintenance costs and quality-confidence gaps continuing to block enterprise-scale real-world deployment outside niche sectors.
  • 2026-Apr: Autonomous agents and specialized platforms advanced toward production governance; general-purpose tools stalled on quality-confidence and security gaps. Autonomous agent deployments matured: TestMu/KaneAI processed 1.5B tests (Boomi: 78% faster execution; Gartner/Forrester recognition), and multi-agent architecture (planner, generator, runner, analyser) emerged as the standard (Gartner: 33% of apps by 2028). Domain-specific platforms gained traction: Panaya (SAP/ERP with business logic awareness) and CasePilot (Azure DevOps with ISTQB methodology, three-pass quality validation) deployed in enterprise toolchains. Named deployments: Axelerant (NextJS/Strapi/Magento, 48-hour zero-to-comprehensive coverage), OpenObserve (380 → 700+ tests, 85% flaky reduction), game studios (design-doc-to-tests in hours). Market acceleration: automation testing $25.4B → $69.2B (2026–2033), AI-enabled testing $3.6B → $6.9B (2036), unit testing at 40% driven by CI/CD and talent shortages. General-purpose tools signaled mainstream awareness (GitHub Copilot Workspace 85% coverage, 94% tester adoption) but fundamental quality gaps emerged: a Lightrun SRE survey (200 leaders) showed 43% of AI-generated code failing in production post-QA, 88% needing redeploy cycles, a 49% deployment failure rate, and 15–18% more security vulnerabilities. Empirical evidence quantified systematic failures: SWE-bench Verified (AI misses 62.5% of failure classes via cascade-blindness), TestSprite (1.7× bug rate, XSS 2.74× higher), a 45% OWASP Top 10 compliance failure rate, and a 483% increase in AI tool CVEs. Governance barriers: 60% lack code review for AI tests, self-healing masks defects, and business context gaps (credit risk, compliance) go unvalidated. Maintenance costs stayed concrete: $500–800/month in tokens, 60% redundancy, 4+ hours of debugging per cycle. The adoption-intent gap widened: 75% discuss AI testing, 16% deploy, 70%+ rewrite AI tests pre-production. testRigor documented four enterprise failure categories explaining the structural barriers. Bifurcation crystallized: autonomous agents and specialized platforms consolidating governed production with validated ROI; general-purpose adoption stalled by quality-confidence, security concerns, and governance complexity despite mainstream tool proliferation.