The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that generates architecture diagrams, system design documents, and technical specifications from codebases and requirements. Includes C4 diagram generation and design-doc drafting; distinct from code documentation, which targets inline and API-level references.
AI-assisted architecture documentation has reached production maturity at the commercial layer: 45% of Mintlify's documentation traffic now comes from AI agents, and Claude Code alone generated 199 million requests in a single month. But a hard ceiling on architectural reasoning keeps the practice fundamentally constrained. Tools can generate simple diagrams and design-doc drafts at scale, and Google's deployment demonstrates that autonomous agents can identify critical system-level issues when architecture documentation is committed to CI/CD pipelines. Yet peer-reviewed benchmarks show near-zero accuracy on complex diagrams beyond 30-40 components, and models lack pragmatic architectural reasoning. This creates a persistent split: simple artifacts (service diagrams, ADRs, draft specifications) benefit from automation, while complex systems architecting and documentation maintenance demand human judgment. Documentation drift, the gap between live code and documented architecture, has accelerated from weekly to daily misalignment in AI-accelerated teams, exposing the inadequacy of tool-based synchronization. The compensating discipline is specification engineering: structured, machine-readable specifications that constrain AI outputs and serve as the binding interface between human architectural intent and agent execution.
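The specification-engineering pattern described above can be sketched as a small gate: a machine-readable spec lists required components and forbidden dependencies, and an agent-produced architecture description is validated against it before being accepted. The schema and every name in this sketch are hypothetical illustrations, not drawn from any particular framework.

```python
# Hypothetical sketch of a machine-readable architecture spec used as a
# gate on agent output; the schema is illustrative, not a real standard.
from dataclasses import dataclass, field


@dataclass
class ArchitectureSpec:
    required_services: set[str]
    forbidden_edges: set[tuple[str, str]] = field(default_factory=set)


def validate(spec: ArchitectureSpec, proposed: dict) -> list[str]:
    """Return a list of violations; an empty list means the output conforms."""
    violations = []
    services = set(proposed.get("services", []))
    for missing in sorted(spec.required_services - services):
        violations.append(f"missing required service: {missing}")
    for edge in proposed.get("dependencies", []):
        if tuple(edge) in spec.forbidden_edges:
            violations.append(f"forbidden dependency: {edge[0]} -> {edge[1]}")
    return violations


spec = ArchitectureSpec(
    required_services={"api", "auth", "billing"},
    forbidden_edges={("billing", "auth")},  # billing must not call auth directly
)
agent_output = {"services": ["api", "auth"],
                "dependencies": [["billing", "auth"]]}
print(validate(spec, agent_output))
```

The point of the pattern is that the spec, not the model, is the binding artifact: the agent's output is rejected mechanically when it drifts from stated intent.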
Commercial tooling matured sharply in April 2026. Mintlify announced a $500M Series B valuation, revealing that 45% of its documentation traffic now comes from AI agents, nearly matching human browser access at 46%. Claude Code alone generated 199 million documentation requests in one month. The platform serves 100+ million monthly users across 20,000+ customers including Microsoft, Anthropic, Coinbase, and PayPal, and closed 2025 at $10M ARR (10x growth YoY). Eraser continues to expand its ecosystem with official AI agent integrations (Claude Code, Cursor, Windsurf) and community MCP servers. Architecture-specific case studies are emerging: Google deployed autonomous AI agents to generate ARCHITECTURE.md files across a microservices mesh, with AI-powered CI/CD quality gates identifying critical system-level issues (distributed tracing blackouts, storage leaks) that had gone undetected for months, demonstrating that architecture documentation can serve as an automated reasoning layer for infrastructure assurance. Legacy systems such as Drupal 7-based platforms show C4 methodology deployment reducing risk and improving team estimation accuracy.
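The CI/CD quality-gate pattern can be sketched as a freshness check: if source files carry last-commit dates meaningfully newer than ARCHITECTURE.md, the gate flags drift. This is a minimal sketch under assumptions: the per-path commit timestamps are presumed to be collected upstream (e.g. from `git log`), and the seven-day grace window is an invented parameter, not a documented practice.

```python
# Minimal sketch of a documentation-drift gate, assuming last-commit
# timestamps per path have already been collected (e.g. from git log).
from datetime import datetime, timedelta


def drift_report(doc_path: str,
                 last_commit: dict[str, datetime],
                 grace: timedelta = timedelta(days=7)) -> list[str]:
    """Paths whose last change postdates the architecture doc by more than `grace`."""
    doc_time = last_commit[doc_path]
    return sorted(path for path, t in last_commit.items()
                  if path != doc_path and t - doc_time > grace)


commits = {
    "ARCHITECTURE.md":         datetime(2026, 4, 1),
    "services/billing/api.py": datetime(2026, 4, 20),  # 19 days newer: drifted
    "services/auth/api.py":    datetime(2026, 4, 3),   # within the grace window
}
stale = drift_report("ARCHITECTURE.md", commits)
print(stale)  # a non-empty list means the gate should fail the build
```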
The capability boundary remains sharp, however, and creates a persistent adoption ceiling. Documentation drift has accelerated from weekly misalignment to daily architectural divergence in AI-accelerated teams, exposing a structural problem: neither AI tooling nor version control offers temporal architecture tracking (branch-aware diagrams, Git-integrated change history). A ThoughtWorks analyst assessment identifies spec-driven development frameworks (OpenSpec, GitHub Spec Kit, BMAD) as critical guardrails for agent reliability, positioning specifications as the control interface rather than relying on AI for autonomous architectural reasoning. Research (ICLR 2026, Text2Arch) validates fine-tuned models on diagram generation tasks, yet benchmarks on large diagrams and generative image models remain at 42-55% accuracy versus 82% human performance. The practical result is specification engineering as the dominant pattern: practitioners write machine-readable specifications that constrain agent outputs, treating architecture documentation as a binding interface rather than an autonomous artifact.
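The missing temporal tracking amounts, at its simplest, to diffing the component graphs documented on two branches. A rough sketch, assuming each branch's diagram has already been reduced upstream to a set of directed dependency edges (extraction from real diagram sources such as Mermaid or a C4 DSL is not shown):

```python
# Sketch of a branch-aware diagram diff: compare the dependency edges
# documented on two branches and report what changed between them.
Edge = tuple[str, str]


def diagram_diff(main: set[Edge], branch: set[Edge]) -> dict[str, set[Edge]]:
    """Edges present on only one side of the comparison."""
    return {"added": branch - main, "removed": main - branch}


main_edges = {("api", "auth"), ("api", "billing")}
feature_edges = {("api", "auth"), ("api", "billing"), ("billing", "ledger")}

print(diagram_diff(main_edges, feature_edges))
```

Wired into a merge check, a non-empty diff would prompt the author to update the architecture doc alongside the code, which is the Git-integrated change history the analysis finds absent from current tooling.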
— Comprehensive SDD practitioner guide covering a 4-phase workflow and EARS notation; reports 3-10x higher first-pass success rates from GitHub and AWS adoption data.
— Ecosystem survey documenting SDD tooling maturity: AWS Kiro (GA Nov 2025) uses EARS notation; GitHub Spec Kit 93k+ stars; OpenSpec and BMAD frameworks mature with enterprise adoption.
— Survey of 1,131+ practitioners shows 76% use AI regularly in documentation workflows (up 16 points YoY); validates adoption crossing mainstream threshold in documentation tooling.
— Critical analysis documenting SDD's structural limitations: vague requirements still produce vague systems; essential counter-signal preventing premature tier advancement.
— Professional development firm documents SDD as standard practice with five-phase workflow; demonstrates real-world deployment of specification-first methodology across production projects.
— Analysis documenting systemic AI adoption failure: 60% of pilots generate no value; exposes adoption ceiling limiting specification-driven architecture work at scale.
— Peer-reviewed analysis (arXiv May 2026) establishing formal Specification Governance Model grounded in Transaction Cost Economics; addresses productivity-reliability paradox in AI-assisted development.
— Peer-reviewed empirical study comparing five ADR templates; provides evidence-based guidance for standardizing architecture documentation as adoption spreads.
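EARS notation, cited twice in the evidence above, constrains each requirement to one of a handful of sentence templates: ubiquitous ("The system shall..."), event-driven ("When..."), state-driven ("While..."), unwanted behaviour ("If... then..."), and optional ("Where..."). A rough classifier sketch; the regexes below are simplified approximations of those templates, not the full EARS grammar (which also allows combined patterns).

```python
import re

# Simplified EARS sentence templates; real EARS also permits combinations
# (e.g. "While <state>, when <trigger>, ..."), which this sketch ignores.
EARS_PATTERNS = [
    ("event-driven", re.compile(r"^When\b.*\bshall\b", re.IGNORECASE)),
    ("state-driven", re.compile(r"^While\b.*\bshall\b", re.IGNORECASE)),
    ("unwanted",     re.compile(r"^If\b.*\bthen\b.*\bshall\b", re.IGNORECASE)),
    ("optional",     re.compile(r"^Where\b.*\bshall\b", re.IGNORECASE)),
    ("ubiquitous",   re.compile(r"^The\b.*\bshall\b", re.IGNORECASE)),
]


def classify(requirement: str) -> str:
    """Return the EARS pattern a requirement matches, or 'non-conformant'."""
    for name, pattern in EARS_PATTERNS:
        if pattern.search(requirement):
            return name
    return "non-conformant"


print(classify("When the user logs in, the system shall issue a session token."))
print(classify("The diagrams should probably stay up to date."))
```

The second example is rejected because "should probably" carries no "shall": the notation's value is precisely that vague phrasing fails the template, echoing the counter-signal above that vague requirements still produce vague systems.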