The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that generates, configures, or optimises CI/CD pipelines and infrastructure-as-code definitions for faster, safer deployments. Includes pipeline YAML generation, build optimisation, and cloud resource templating; distinct from deployment risk assessment which evaluates changes rather than generating configurations.
AI-generated CI/CD pipelines and infrastructure-as-code definitions have reached a paradox: the tooling is mature, but the organisations using it mostly are not. Every major platform (GitHub, GitLab, AWS, Azure DevOps) now ships agentic capabilities for pipeline configuration and infrastructure templating, and forward-leaning teams report concrete productivity gains: one production multi-agent deployment reports a 93% reduction in deployment time and a 92% reduction in failed deploys. Yet adoption has stalled at the leading edge rather than diffusing broadly, because the bottleneck has shifted from code generation to code verification and governance. Large-scale security research (IOActive, April 2026) finds AI-generated infrastructure code carries the highest vulnerability rates of any code category (70–97% for Terraform, Dockerfiles, and CI/CD pipelines), with hardcoded secrets, deprecated patterns, hallucinated resources, and authorization flaws as systematic failure modes. A new constraint has emerged: the AI tooling layer itself (agent skills, MCP servers, model-config files) has become a supply-chain attack surface (the April 2026 Vercel and SAP incidents) that audit frameworks do not yet cover. The defining tension is no longer whether AI can produce valid configurations, but whether organisations have the governance, testing, supply-chain security, and review infrastructure to absorb what it produces safely. Where those guardrails exist, results are strong. Where they do not, AI-assisted generation amplifies risk faster than it reduces toil.
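Of the systematic failure modes above, hardcoded secrets are the easiest to gate mechanically before generated configuration reaches a repository. A minimal sketch of such a pre-merge scan; the regex signatures, the `scan_for_secrets` helper, and the sample Terraform snippet are illustrative assumptions, and a production gate would layer entropy checks and a dedicated secret scanner on top.

```python
import re

# Heuristic signatures for common credential shapes; illustrative, not exhaustive.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(
        r'(?i)\b(?:api[_-]?key|token|secret)\s*=\s*"[A-Za-z0-9/+=_-]{16,}"'),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) pairs for suspected hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

# A hypothetical generated snippet exhibiting the hardcoded-secret failure mode.
generated_tf = '''
provider "aws" {
  access_key = "AKIAIOSFODNN7EXAMPLE"
  token      = "abcd1234efgh5678ijkl0000"
}
'''
findings = scan_for_secrets(generated_tf)
print(findings)  # flags the access key and the inline token
```

A check like this fails the pipeline before review, which matters because the vulnerable patterns are produced repeatedly rather than one-off.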
Platform maturity has solidified through April 2026: Pulumi Neo (agentic IaC from natural language), GitLab Duo Agent Platform, AWS Amazon Q Developer, and Spacelift Intelligence (Intent) all operate at GA with production governance integration (mandatory PR review, policy enforcement, RBAC boundaries). Market analysis (byteiota, April 2026) shows the IaC market reached $2.1B with 28.2% annual growth and 80% platform engineering adoption, with AI-assisted orchestration now a primary competitive differentiator alongside language familiarity and ecosystem maturity. The vendor platform layer is mature; the constraint has shifted definitively to organizational adoption readiness.
However, adoption barriers remain structural rather than technological. JetBrains' primary research (April 2026) documents the core tension: 90%+ of developers use AI tools, yet 73% of organizations do not use AI in their CI/CD pipelines at all. The gap reflects fundamental risk framing: development has immediate, local feedback loops with low cost of error; CI/CD requires consistent, reproducible validation signals with high cost of failure. Working use cases in CI/CD are narrow: failure diagnosis (log analysis, pattern matching), security workflows (interpreting scans), and test optimization (prioritizing execution). Organizations cannot move beyond stage one of the adoption maturity model until validation infrastructure — both technical (SAST/DAST integration) and organizational (peer review, policy enforcement) — matches deployment autonomy.
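Failure diagnosis via log pattern matching, one of the narrow working use cases above, can start as little more than a signature table that separates retryable infrastructure noise from genuine test failures. The signatures and the `diagnose` helper below are hypothetical illustrations, not drawn from any cited tool.

```python
import re

# Hypothetical failure signatures mapping log patterns to likely root causes.
FAILURE_SIGNATURES = [
    (re.compile(r"OOMKilled|Killed process|out of memory", re.I),
     "resource: build ran out of memory"),
    (re.compile(r"ETIMEDOUT|connection timed out|TLS handshake timeout", re.I),
     "network: transient connectivity failure (retry candidate)"),
    (re.compile(r"npm ERR! 404|Could not resolve dependencies", re.I),
     "dependency: missing or yanked package"),
    (re.compile(r"AssertionError|FAILED .*::test_", re.I),
     "test: genuine assertion failure (do not auto-retry)"),
]

def diagnose(log: str) -> str:
    """Return the first matching verdict, or escalate when nothing matches."""
    for pattern, verdict in FAILURE_SIGNATURES:
        if pattern.search(log):
            return verdict
    return "unknown: escalate to a human"

log_excerpt = "step 4/7: npm ERR! 404 Not Found - GET https://registry.npmjs.org/left-pad-x"
verdict = diagnose(log_excerpt)
print(verdict)  # dependency: missing or yanked package
```

The design point matches the risk framing above: this class of automation only reads signals and proposes a verdict, so a wrong answer costs a human a few minutes rather than a failed deploy.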
Quality barriers persist despite platform maturity. A research audit (Optimum Web, April 2026) of 200+ codebases shows 73% contain vulnerabilities that automated scanners miss: hardcoded secrets masked as examples (34%), deprecated API patterns copied from training data (61%), hallucinated functions that don't exist (28%), subtle authorization logic flaws (52%), and fabricated package dependencies. Practitioner evidence (Sumant Thakur, April 2026) documents the specific failure mode: Terraform configurations that pass validation and type checking but produce functionally broken infrastructure (route tables without routes, security groups without rules, unconnected IAM roles). The defense-in-depth solution (schema validation → plan review → repair loops → code review → measurement) requires significant engineering investment beyond initial deployment. A fintech case study (Gomboc, April 2026) shows measured ROI from "fix-left" automation (15% of the backlog cleared in 2 hours, an 11x security improvement, $100K savings per workload), but this requires deterministic AI (policy-enforced remediation) rather than generative code suggestion.
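The "valid but functionally broken" failure mode lends itself to a semantic post-plan check, sitting at the plan-review stage of the defense-in-depth chain above. A sketch assuming a simplified subset of `terraform show -json` output; the resource types are real AWS provider types, but the plan structure and `check_plan` helper shown here are abbreviated assumptions for illustration.

```python
def check_plan(plan: dict) -> list[str]:
    """Flag resources that validate syntactically but are functionally empty.

    Expects a simplified subset of `terraform show -json` output:
    planned resources under resource_changes[*].{type, name, change.after}.
    """
    warnings = []
    for rc in plan.get("resource_changes", []):
        rtype, name = rc["type"], rc["name"]
        after = (rc.get("change") or {}).get("after") or {}
        if rtype == "aws_security_group" and not (after.get("ingress") or after.get("egress")):
            warnings.append(f"{rtype}.{name}: no ingress/egress rules, all traffic dropped")
        if rtype == "aws_route_table" and not after.get("route"):
            warnings.append(f"{rtype}.{name}: no routes defined")
    return warnings

# Abbreviated plan: an empty security group and a correctly routed table.
plan = {
    "resource_changes": [
        {"type": "aws_security_group", "name": "web",
         "change": {"after": {"ingress": [], "egress": []}}},
        {"type": "aws_route_table", "name": "public",
         "change": {"after": {"route": [{"cidr_block": "0.0.0.0/0"}]}}},
    ],
}
flags = check_plan(plan)
for w in flags:
    print(w)  # only the empty security group is flagged
```

Checks like this catch exactly the class of configuration that `terraform validate` waves through, which is why they belong in a repair loop rather than as advisory output.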
Governance acceleration is evident but incomplete. A tightening regulatory environment (EU AI Act high-risk obligations take effect August 2026) creates new requirements for agentic systems executing infrastructure changes, yet real-time monitoring, automated escalation paths, and accountability frameworks remain immature relative to agent autonomy. Success cases cluster around organizations with mature pre-AI delivery practices: strict security scanning, policy-enforced approval gates, and standardized templates. The practice has entered a phase of pragmatic integration where adoption succeeds for organizations with strong CI/CD discipline and falters where governance is superficial.

Emerging supply-chain risks have surfaced. April 2026 incidents involved OAuth-compromised AI tools that leaked CI secrets and deploy credentials, AI-generated config files containing exposed tokens, and an agent platform carrying 138+ CVEs, highlighting that the AI tooling layer itself (agent skills, MCP servers, model configs) has become a supply-chain attack vector not covered by traditional code audits. Agent failure modes in production are becoming concrete: a 30-day production experiment showed agents without guardrails achieving only a 62% success rate, with major incidents including database corruption, hallucinated resource limits, and unauthorized IAM escalation; guardrails (command allowlists, deployment windows, human approval gates) lifted the success rate to 89% but required extensive operational investment. Critical assessment distinguishes high-leverage safe automations (test parallelization, flaky test detection, automated rollback) from high-risk autonomous decisions (schema changes, infrastructure modification without staging validation). Systemic security risks persist: Wiz analysis of hundreds of thousands of cloud environments found that 20% of organizations using AI-powered development platforms experienced systemic vulnerabilities from repeated generation patterns.
The limiting factor has shifted definitively from capability to governance readiness.
— Technical critical assessment identifying high-leverage AI automations (test parallelization, flaky test detection, automated rollbacks) and severe production risks (loss of human judgment, cascading rollback failures, compliance gaps). Recommends explicit per-environment gates and mandatory staging validation.
— Multi-agent CI/CD architecture (Test, Build, Deploy agents autonomously generating tests and Dockerfiles) achieving 93% deployment time reduction (45→3 min) and 92% reduction in failed deploys (8–12→0–1/month). Quantified production outcomes from independent practitioner deployment.
— 9-part Microsoft tutorial series on agentic workflows automating application modernization including 'analysis, transformations, fixing builds, generating deployment assets.' Addresses CI/CD and cloud-ready infrastructure generation within comprehensive modernization loops.
— Industry platform analysis of 2026 CI/CD capabilities: GitHub Actions, GitLab Duo (natural language pipeline generation), and Jenkins all ship AI-assisted pipeline generation as standard features. Documents ecosystem shift toward automated CI/CD configuration across all major platforms.
— April 2026 incident recap: Vercel OAuth breach exposed CI secrets via AI tools (Context.ai), SAP incident exposed npm tokens via AI-generated config files, 138-CVE OpenClaw agent platform. Critical negative signal: AI tooling layer (agents, skills, MCP servers) has become a supply-chain attack surface not covered by existing audits.
— Large-scale empirical evaluation of 27 AI models on infrastructure code (Terraform, Dockerfiles, CI/CD pipelines) reveals 70–97% vulnerability rates in DevOps-specific code generation. Critical negative signal: AI-generated infrastructure code remains fundamentally insecure without substantial hardening.
— Wiz analysis of hundreds of thousands of cloud environments: 20% of organizations using AI-powered development platforms experienced systemic security issues from repeated generation patterns. Documents real-world deployment outcome: widespread adoption paired with systemic infrastructure vulnerabilities.
— 30-day production experiment replacing GitHub Actions CI with Claude-based agent. Week 1-2 (no guardrails): 62% success rate, 6 major incidents including database corruption, hallucinated resource limits, and unauthorized IAM escalation. Week 3-4 (with guardrails): 89% success. Critical negative signal: agents require extensive human controls and command allowlists for safe production use.