The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
AI across the development lifecycle — writing, reviewing, testing, and shipping code. Code completion is established and IDE-native; agentic coding and AI-driven CI/CD are advancing fast, but half the domain remains bleeding-edge. The widest maturity spread of any domain: a few practices are table stakes while many are still experimental.
Software engineering is the domain where AI adoption is most advanced and most paradoxical. Inline code autocomplete is now infrastructure -- 92% of US developers use it daily, 41% of code is AI-generated, and GitHub Copilot has 4.7 million paid subscribers across 90% of the Fortune 100. Deployment risk assessment and visual regression testing have settled into standard practice. Agentic exploration tools like Claude Code (which hit $1 billion in annualized revenue within six months) and Cursor ($2 billion ARR) make up the fastest-growing enterprise software category in history. The market is real, the scale is real, and the revenue is real.
What is not real is the productivity narrative that justified the procurement. LinearB's analysis of 8.1 million pull requests across 4,800 organizations found AI-generated code accepted at 32.7% versus 84.4% for human code, with 2.74 times more security flaws. A METR randomized controlled trial showed experienced developers 19% slower on real tasks despite believing they were 20% faster -- a 39-percentage-point perception gap. Developer trust has collapsed to 29%, down from 40% in 2024, even as adoption climbed to 84%. Amazon's March 2026 outage -- 6.3 million lost orders traced to inadequately reviewed AI-generated code -- is the most expensive public illustration of what happens when code generation velocity outpaces organizational capacity to review, test, and govern it.
The structural story of mid-2026 is a domain splitting in two. On one side, seven practices are advancing: adversarial test generation, agentic exploration, architecture documentation (which crossed into broad deployment this scan), dependency management, legacy migration, security-focused review, and test coverage analysis. These share a pattern -- they augment specific expert workflows rather than replacing developer judgment. On the other, the practices that promised to remove humans from the loop -- fully autonomous coding, multi-agent pipelines, auto-approve code review, AI test generation at scale -- remain experimental, with deployment rates stuck at roughly one in ten organizations despite two-thirds actively piloting. The gap between piloting and production is not closing. The binding constraint is not the models. It is that code review, security triage, and architectural judgment do not scale at the same rate as code generation, and no amount of tooling has changed that.
Architecture documentation and specification writing crossed from experimental to broad deployment this cycle, the only maturity shift in the scan. The trigger was commercial scale: Mintlify's $45 million Series B revealed that 45% of its documentation traffic now comes from AI agents, with Claude Code alone generating 199 million requests in a single month. Google deployed autonomous agents generating standardized architecture files across a microservices mesh, with AI-powered CI gates catching critical infrastructure issues that had gone undetected for months. Specification-driven development -- where human-authored specs constrain agent execution -- has consolidated as the dominant operational pattern, with AWS Kiro, GitHub Spec Kit (93,000+ stars), and OpenSpec all in production use.
The security evidence sharpened considerably. Georgia Tech's CVE tracking project documented an acceleration from 18 AI-tool-attributed vulnerabilities in the last seven months of 2025 to 56 in the first three months of 2026, with 35 new CVEs in March alone. A formal verification study found 55.8% of AI-generated code vulnerable across 3,500 artifacts and seven models, while static analysis tools detected only 2.2% of those flaws. An assessment of 1,514 live AI-generated applications found 81% contained critical or high-severity vulnerabilities. Meanwhile, Johns Hopkins researchers demonstrated a "Comment and Control" attack where a single GitHub PR title hijacked Claude Code, Gemini CLI, and GitHub Copilot to exfiltrate API keys -- rated CVSS 9.4 Critical by Anthropic.
Beyond that tier shift, the broader landscape held position. The six practices stuck at the experimental stage (production agentic coding, full autonomy, auto-approve review, AI test generation, refactoring, and multi-agent pipelines) showed no movement. Stability is the signal: despite enormous vendor investment and an active funding cycle ($20 billion of $42.6 billion in Q2 AI funding went to agentic tools), the organizational bottlenecks that constrain these practices -- review capacity, governance maturity, failure attribution -- have not materially improved.
Code generation has outrun review capacity, and the gap is widening. AI-generated PRs wait 4.6 times longer for review than human code, review time has increased 91% per PR, and code churn is up 41%. ByteIota's survey of 2,847 developers found engineers now spend more time reviewing AI code (11.4 hours per week) than writing it (9.8 hours). Plandek's analysis of 2,000+ teams found bottom-quartile teams take 35+ hours to merge, with code review -- not code generation -- as the visible delivery bottleneck. Every AI coding tool accelerates supply; none has meaningfully expanded the human review pipeline that gates quality.
Security vulnerability accumulation is accelerating faster than detection. Georgia Tech tracked 74 confirmed CVEs from AI tools through March 2026, with the rate tripling quarter over quarter. Formal verification found 55.8% vulnerability rates that static tools catch at only 2.2%. OX Security's analysis of 300+ repositories surfaced 865,000+ security alerts per year, with 71-88% false positives that cost an estimated $20,000 per developer annually in triage time alone. The asymmetry is structural: AI generates vulnerable code at machine speed, but vulnerability detection and remediation remain human-speed operations. Organizations without dedicated AppSec programs and tiered review governance are accumulating undetected risk.
The autonomy thesis is losing to orchestrated delegation. Fully autonomous agents solve 74% of isolated benchmark patches but only 11% of complex multi-file features. Mathematical analysis shows 85% per-step reliability yields only 20% end-to-end success on 10-step workflows. Stripe processes 1,000+ autonomously merged PRs per week, but only through heavy "harness engineering" -- deterministic verification loops, not autonomous validation. Anthropic's own data confirms engineers use AI in 60% of their work but fully delegate only 0-20% of tasks. The production model that is actually emerging is bounded delegation within human oversight scaffolding, not the end-to-end autonomous engineering that the term "AI agent" implies.
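The compounding arithmetic behind that reliability claim is worth making explicit. Below is a minimal sketch assuming the simplest failure model, where each step succeeds or fails independently and any single failure ends the run; the function is illustrative, not drawn from any cited benchmark.

```python
# Why per-step reliability compounds across multi-step agent workflows.
# Simplest model: steps fail independently, and one failure sinks the run.
def end_to_end_success(per_step_reliability: float, steps: int) -> float:
    return per_step_reliability ** steps

print(f"{end_to_end_success(0.85, 10):.1%}")  # ~19.7%, the ~20% figure cited above
print(f"{end_to_end_success(0.99, 10):.1%}")  # ~90.4%, what a dependable 10-step chain demands per step
```

Under that model, clearing 90% end-to-end on a 10-step workflow requires roughly 99% reliability at every step, which is one way to read why bounded delegation with human checkpoints keeps beating longer autonomous chains.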
Measurement itself is broken, distorting investment decisions. Thoughtworks' Technology Radar flagged lines of code and PR volume as actively harmful metrics for AI-assisted development. The METR RCT's 39-percentage-point perception gap (developers felt faster while performing slower) suggests that self-reported productivity data -- the basis for most ROI cases -- is unreliable. McKinsey found only 4% of enterprises report material business impact from AI tools despite 78% adoption, and Gartner found 62% failed to achieve measurable team-level improvement. The metrics organizations use to justify AI coding tool procurement do not correlate with the outcomes those tools deliver.
Specification engineering is emerging as the control interface, but it does not solve the requirements problem. Spec-driven development (structured, machine-readable specifications that constrain agent outputs) is now the dominant pattern in practices that are advancing -- architecture documentation, agentic exploration, CI/CD generation. AWS Kiro, GitHub Spec Kit, and OpenSpec have production deployments. But as critics point out, vague requirements still produce vague specifications, and 60% of AI pilots generate no value. The discipline works for specification-ready organizations with strong architectural governance; it is inaccessible to the majority that lack those foundations.
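To make the pattern concrete, here is a minimal, hypothetical sketch of a machine-readable specification acting as a constraint on agent output. The schema, field names, and checks are assumptions for illustration only; they are not the formats used by AWS Kiro, GitHub Spec Kit, or OpenSpec.

```python
# Hypothetical spec-as-constraint sketch: a human-authored spec is checked
# deterministically against an agent's proposed change before it can merge.
from dataclasses import dataclass, field

@dataclass
class ChangeSpec:
    goal: str                        # human-written intent, authored before the agent runs
    allowed_paths: list[str]         # the agent may only touch files under these prefixes
    required_tests: list[str]        # test files that must appear in the change
    forbidden_tokens: list[str] = field(default_factory=list)  # e.g. "eval(" or raw SQL strings

def violations(spec: ChangeSpec, changed_files: list[str], diff_text: str) -> list[str]:
    """Deterministic CI gate: returns the reasons to reject the agent's change, if any."""
    problems = []
    for path in changed_files:
        if not any(path.startswith(prefix) for prefix in spec.allowed_paths):
            problems.append(f"out-of-scope edit: {path}")
    for test in spec.required_tests:
        if test not in changed_files:
            problems.append(f"missing required test: {test}")
    for token in spec.forbidden_tokens:
        if token in diff_text:
            problems.append(f"forbidden construct in diff: {token}")
    return problems

spec = ChangeSpec(
    goal="Add retry logic to the payment client",
    allowed_paths=["services/payments/"],
    required_tests=["services/payments/test_retry.py"],
    forbidden_tokens=["eval("],
)
print(violations(spec, ["services/payments/client.py"], diff_text="..."))
# ['missing required test: services/payments/test_retry.py']
```

The design point is the division of labor: the human writes the spec before the agent runs, and the gate is deterministic code rather than a second model, the same harness-engineering posture described above. What a sketch like this cannot fix is a vague goal, which is exactly the requirements problem the critics raise.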
One of the Largest Online Retailers Lost 6.3 Million Orders in One Day (case-study) — The Amazon March 2026 outage is the most expensive public proof that code generation velocity without proportionate review governance is not a productivity story but a liability story. https://www.gspann.com/insights/blog/one-of-the-largest-online-retailers-lost-6-3-million-orders-in-one-day
AI Developer Productivity: 30% Faster, Hidden Costs (industry-report) — LinearB's 8.1 million PR analysis across 4,800 teams is the single most cited dataset in the domain; its 32.7% vs 84.4% acceptance rate comparison and 2.74x security flaw multiplier underpin every claim in "Where AI Stands." https://byteiota.com/ai-developer-productivity-30-faster-hidden-costs/
How Secure Is an AI-Generated App? 2026 Benchmark of Lovable, Bolt, Cursor, Replit, and V0 (adoption-metric) — An 81% critical/high vulnerability rate across 1,514 live applications makes the security acceleration claim concrete and vendor-specific rather than theoretical. https://vibe-eval.com/data-studies/ai-app-security-benchmark-2026/
Developers Spend 11.4 Hours/Week Reviewing AI Code (adoption-metric) — The inversion where engineers spend more time reviewing AI output than writing code is the empirical foundation of the "code generation has outrun review capacity" tension; this survey of 2,847 developers is its primary source. https://byteiota.com/ai-verification-bottleneck-developers-spend-11-4-hours-reviewing-ai-code/
The J-Curve: Measuring AI Productivity Beyond Throughput (opinion) — The METR RCT's 39-percentage-point perception gap (developers felt faster while performing slower) and the code churn data together explain why the measurement infrastructure organizations use to justify AI tool procurement is unreliable. https://thesynthesisai.substack.com/p/the-j-curve
89% of Enterprise Engineering Teams Have Experienced an AI-Generated Code Incident (adoption-metric) — This 500-engineer survey from March 2026 shows that the Amazon outage is representative, not exceptional: 25% of enterprises surveyed suffered complete outages from AI-generated code, establishing failure as a population-wide pattern rather than an edge case. https://www.qodo.ai/blog/ai-coding-paradox-report/
Claude Code, Gemini CLI, and GitHub Copilot Vulnerable to Prompt Injection via GitHub Comments (case-study) — The "Comment and Control" CVSS 9.4 disclosure demonstrates that the tools meant to improve security review are themselves high-severity attack surfaces, which is the structural asymmetry at the heart of the security tension. https://cybersecuritynews.com/prompt-injection-via-github-comments/amp/
Agentic Coding 2026: 60% Use, 20% Trust AI Agents (industry-report) — Anthropic's own data showing developers use AI in 60% of work but fully delegate only 0-20% of tasks is the strongest evidence that the production model is bounded delegation, not autonomous engineering, regardless of what benchmark scores imply. https://byteiota.com/agentic-coding-2026-60-use-20-trust/
Mintlify Raises $45M to Power AI-Readable Documentation for AI Agents (adoption-metric) — The 45% AI-agent documentation traffic share and 199 million Claude Code requests in a single month are what triggered the architecture documentation maturity shift from experimental to broad deployment this scan cycle. https://www.tea4tech.com/startup-stories/mintlify-raises-45m-to-power-ai-readable-documentation-for-ai-agents/amp
Spec-Driven Development Doesn't Fix the Requirements Problem (opinion) — This counter-signal is essential context for the spec-driven development pattern's rise: the control interface only works for organizations with pre-existing architectural discipline, which is not the majority, and the 60% pilot failure rate is the consequence. https://www.scalateams.com/blog/spec-driven-development-requirements-problem