Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

CI/CD & infrastructure-as-code generation

LEADING EDGE

TRAJECTORY

Stalled

AI that generates, configures, or optimises CI/CD pipelines and infrastructure-as-code definitions for faster, safer deployments. Includes pipeline YAML generation, build optimisation, and cloud resource templating; distinct from deployment risk assessment which evaluates changes rather than generating configurations.

OVERVIEW

AI-generated CI/CD pipelines and infrastructure-as-code definitions have reached a paradox: the tooling is mature, but the organisations using it mostly are not. Every major platform (GitHub, GitLab, AWS, Azure DevOps) now ships agentic capabilities for pipeline configuration and infrastructure templating, and forward-leaning teams report concrete productivity gains (a 93% deployment-time reduction and 92% fewer failed deploys in one production multi-agent deployment). Yet adoption has stalled at the leading edge rather than diffusing broadly, because the bottleneck has shifted from code generation to code verification and governance. Large-scale security research (IOActive, April 2026) finds AI-generated infrastructure code carries the highest vulnerability rates of any code category (70–97% for Terraform, Dockerfiles, and CI/CD pipelines), with hardcoded secrets, deprecated patterns, hallucinated resources, and authorization flaws as systematic failure modes. A new constraint has emerged: the AI tooling layer itself (agent skills, MCP servers, model-config files) has become a supply-chain attack surface (the April 2026 Vercel and SAP incidents) that audit frameworks do not yet cover. The defining tension is no longer whether AI can produce valid configurations, but whether organisations have the governance, testing, supply-chain security, and review infrastructure to absorb what it produces safely. Where those guardrails exist, results are strong. Where they do not, AI-assisted generation amplifies risk faster than it reduces toil.

CURRENT LANDSCAPE

Platform maturity has solidified through April 2026: Pulumi Neo (agentic IaC from natural language), GitLab Duo Agent Platform, AWS Amazon Q Developer, and Spacelift Intelligence (Intent) all operate at GA with production governance integration (mandatory PR review, policy enforcement, RBAC boundaries). Market analysis (byteiota, April 2026) shows the IaC market reached $2.1B with 28.2% annual growth and 80% platform engineering adoption, with AI-assisted orchestration now a primary competitive differentiator alongside language familiarity and ecosystem maturity. The vendor platform layer is mature; the constraint has shifted definitively to organizational adoption readiness.

However, adoption barriers remain structural rather than technological. JetBrains' primary research (April 2026) documents the core tension: 90%+ of developers use AI tools, yet 73% of organizations do not use AI in their CI/CD pipelines at all. The gap reflects fundamental risk framing: development has immediate, local feedback loops with low cost of error; CI/CD requires consistent, reproducible validation signals with high cost of failure. Working use cases in CI/CD are narrow: failure diagnosis (log analysis, pattern matching), security workflows (interpreting scans), and test optimization (prioritizing execution). Organizations cannot move beyond stage one of the adoption maturity model until validation infrastructure — both technical (SAST/DAST integration) and organizational (peer review, policy enforcement) — matches deployment autonomy.
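Failure diagnosis via log analysis is the most common of those narrow working use cases. A minimal sketch of the idea, as a pattern-matching classifier over CI logs; the pattern table, categories, and suggested actions here are illustrative assumptions, not any specific vendor's diagnostic rules:

```python
import re

# Hypothetical pattern table: each entry maps a regex commonly seen in CI logs
# to a failure category and a suggested remediation. In practice an LLM would
# summarize the log; a table like this is the deterministic fallback.
FAILURE_PATTERNS = [
    (re.compile(r"OOMKilled|Out of memory"), "resource", "raise runner memory limit"),
    (re.compile(r"ETIMEDOUT|Connection timed out"), "network", "retry with backoff; check egress rules"),
    (re.compile(r"npm ERR!.*E401|unauthorized"), "auth", "rotate or re-scope registry token"),
    (re.compile(r"FAILED.*AssertionError"), "test", "inspect failing assertion; check for flakiness"),
]

def diagnose(log_text: str) -> list[tuple[str, str]]:
    """Return (category, suggested_action) pairs for every known pattern found."""
    hits = [(category, action)
            for pattern, category, action in FAILURE_PATTERNS
            if pattern.search(log_text)]
    return hits or [("unknown", "escalate to a human reviewer")]

log = "step 4/7: npm install\nnpm ERR! code E401 unauthorized"
print(diagnose(log))  # [('auth', 'rotate or re-scope registry token')]
```

The key property is the one the JetBrains research highlights: the signal is consistent and reproducible, so it can sit inside a pipeline without the cost-of-failure concerns that block autonomous generation.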

Quality barriers persist despite platform maturity. A security audit (Optimum Web, April 2026) of 200+ codebases shows 73% contain vulnerabilities that automated scanners miss: hardcoded secrets masked as examples (34%), deprecated API patterns copied from training data (61%), hallucinated functions (28%), subtle authorization logic flaws (52%), and fabricated package dependencies. Practitioner evidence (Sumant Thakur, April 2026) documents the specific failure mode: Terraform configurations that pass validation and type checking but produce functionally broken infrastructure (route tables without routes, security groups without rules, unconnected IAM roles). The defense-in-depth solution (schema validation → plan review → repair loops → code review → measurement) requires significant engineering investment beyond initial deployment. A fintech case study (Gomboc, April 2026) shows measured ROI from "fix-left" automation (15% of backlog cleared in 2 hours, an 11x security improvement, $100K savings per workload), but this requires deterministic AI (policy-enforced remediation) rather than generative code suggestion.
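The "plan review" stage of that defense-in-depth chain can be sketched as a check over the JSON rendering of a Terraform plan (`terraform show -json`), flagging resources that are syntactically valid but semantically empty, the exact failure mode described above. The two rules shown follow the AWS provider's attribute names; treat the rule set as an illustrative starting point, not a complete policy:

```python
import json

# Emptiness rules: resource type -> (attribute that must be non-empty, complaint).
# Illustrative; a real policy engine (OPA, Sentinel, etc.) would carry many more.
EMPTINESS_RULES = {
    "aws_route_table": ("route", "route table defines no routes"),
    "aws_security_group": ("ingress", "security group has no ingress rules"),
}

def review_plan(plan_json: str) -> list[str]:
    """Scan `terraform show -json` output for valid-but-empty resources."""
    findings = []
    plan = json.loads(plan_json)
    for rc in plan.get("resource_changes", []):
        rule = EMPTINESS_RULES.get(rc.get("type"))
        if not rule:
            continue
        attr, complaint = rule
        after = (rc.get("change") or {}).get("after") or {}
        if not after.get(attr):  # missing or empty list both fail
            findings.append(f"{rc.get('address')}: {complaint}")
    return findings

plan = json.dumps({"resource_changes": [
    {"address": "aws_route_table.main", "type": "aws_route_table",
     "change": {"after": {"route": []}}},
]})
print(review_plan(plan))  # ['aws_route_table.main: route table defines no routes']
```

A check like this catches what `terraform validate` cannot: the configuration type-checks, but the infrastructure it describes does nothing.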

Governance acceleration is evident but incomplete. The regulatory environment is tightening: EU AI Act high-risk obligations take effect in August 2026, creating new requirements for agentic systems executing infrastructure changes, while real-time monitoring, automated escalation paths, and accountability frameworks remain immature relative to agent autonomy. Success cases cluster around organizations with mature pre-AI delivery practices: strict security scanning, policy-enforced approval gates, and standardized templates. The practice has entered a phase of pragmatic integration, where adoption succeeds for organizations with strong CI/CD discipline and falters where governance is superficial.

Emerging supply-chain risks have surfaced. In April 2026, OAuth-compromised AI tools exposed CI secrets and deploy credentials, AI-generated config files leaked tokens, and an agent platform shipped with 138+ CVEs, highlighting that the AI tooling layer itself (agent skills, MCP servers, model configs) has become a supply-chain attack vector not covered by traditional code audits. Agent failure modes in production are becoming concrete: a 30-day production experiment showed agents without guardrails achieving only a 62% success rate, with major incidents including database corruption, hallucinated resource limits, and unauthorized IAM escalation; guardrails (command allowlists, deployment windows, human approval gates) improved success to 89% but required extensive operational investment. Critical assessments distinguish high-leverage safe automations (test parallelization, flaky test detection, automated rollback) from high-risk autonomous decisions (schema changes, infrastructure modification without staging validation). Systemic security risks persist: Wiz analysis of hundreds of thousands of cloud environments found that 20% of organizations using AI-powered development platforms experienced systemic vulnerabilities from repeated generation patterns.

The limiting factor has shifted definitively from capability to governance readiness.

TIER HISTORY

Research: Mar-2023 → Mar-2023
Bleeding Edge: Mar-2023 → Apr-2024
Leading Edge: Apr-2024 → present

EVIDENCE (99)

— Technical critical assessment identifying high-leverage AI automations (test parallelization, flaky test detection, automated rollbacks) and severe production risks (loss of human judgment, cascading rollback failures, compliance gaps). Recommends explicit per-environment gates and mandatory staging validation.

— Multi-agent CI/CD architecture (Test, Build, Deploy agents autonomously generating tests and Dockerfiles) achieving 93% deployment time reduction (45→3 min) and 92% reduction in failed deploys (8–12→0–1/month). Quantified production outcomes from independent practitioner deployment.

— 9-part Microsoft tutorial series on agentic workflows automating application modernization including 'analysis, transformations, fixing builds, generating deployment assets.' Addresses CI/CD and cloud-ready infrastructure generation within comprehensive modernization loops.

— Industry platform analysis of 2026 CI/CD capabilities: GitHub Actions, GitLab Duo (natural language pipeline generation), and Jenkins all ship AI-assisted pipeline generation as standard features. Documents ecosystem shift toward automated CI/CD configuration across all major platforms.

— April 2026 incident recap: Vercel OAuth breach exposed CI secrets via AI tools (Context.ai), SAP incident exposed npm tokens via AI-generated config files, 138-CVE OpenClaw agent platform. Critical negative signal: AI tooling layer (agents, skills, MCP servers) has become a supply-chain attack surface not covered by existing audits.

— Large-scale empirical evaluation of 27 AI models on infrastructure code (Terraform, Dockerfiles, CI/CD pipelines) reveals 70–97% vulnerability rates in DevOps-specific code generation. Critical negative signal: AI-generated infrastructure code remains fundamentally insecure without substantial hardening.

— Wiz analysis of hundreds of thousands of cloud environments: 20% of organizations using AI-powered development platforms experienced systemic security issues from repeated generation patterns. Documents real-world deployment outcome: widespread adoption paired with systemic infrastructure vulnerabilities.

— 30-day production experiment replacing GitHub Actions CI with Claude-based agent. Week 1-2 (no guardrails): 62% success rate, 6 major incidents including database corruption, hallucinated resource limits, and unauthorized IAM escalation. Week 3-4 (with guardrails): 89% success. Critical negative signal: agents require extensive human controls and command allowlists for safe production use.

HISTORY

  • 2023-H1: GitLab 16 launched with AI-powered code suggestions and vulnerability explanation; CARFAX case study showed productivity gains with 20% increase in production deployments. Early tools like Pulumi AI and Firefly's AIaC emerged for IaC generation from natural language. Security concerns noted: 40% of AI-generated code contained vulnerabilities, underscoring the need for human review and automated validation.
  • 2023-H2: Major vendors expanded IaC AI capabilities: AWS launched Amazon Q Developer with explicit Terraform/CloudFormation/CDK support; GitLab reported 1 billion CI/CD pipelines with enterprise adoption; HashiCorp introduced AI-generated Terraform module tests (beta). GitLab Duo Code Suggestions reached GA with claimed 7x faster cycle times. Critical research published documenting security flaws in LLM-generated code, confirming quality barriers remain unresolved despite scale of deployment.
  • 2024-Q1: All major cloud vendors shipped production IaC generation capabilities: AWS expanded Amazon Q Developer with infrastructure generator for existing resources; GitLab Duo adoption accelerated with measurable DORA metric improvements reported by customers. Academic survey (March 2024) documented the state of LLM-based IaC generation, confirming persistent quality and security gaps. Adoption pattern emerged: rapid capability deployment paired with mandatory governance controls (automated validation, security scanning, human review).
  • 2024-Q2: AWS Amazon Q Developer reached GA with confirmed adoption metrics (37-50% code acceptance rates at BT Group and NAB); deployment evidence widened beyond announcements to named case studies showing measurable ROI (Notifix: ~40% manual operations reduction, €2,400/month savings per 6-engineer team). GitLab Duo expanded root cause analysis for CI/CD failures, automating troubleshooting of pipeline errors and infrastructure deployment issues. Thoughtworks Radar issued cautionary hold on AI-generated code complacency, signaling persistent governance requirements remain essential for safe deployment at scale.
  • 2024-Q3: GitLab Duo Enterprise reached GA and was recognized as a Leader in Gartner's first Magic Quadrant for AI Code Assistants. AWS demonstrated real-world IaC generation workflows (CDK for serverless, GitLab CI/CD YAML scaffolding) and enhanced CloudFormation's IaC Generator with resource discovery. However, Veracode's Black Hat 2024 presentation documented that 40% of AI-generated code contains known vulnerabilities, with emerging threats from poisoned datasets—confirming that code quality and security governance remained unresolved barriers despite accelerating vendor adoption and enterprise deployment.
  • 2024-Q4: Ecosystem maturity accelerated with new vendors (Infra.new) and expanded features across platforms. GitLab internal testing showed 84% test coverage in 2 days with AI-assisted generation. However, two independent peer-reviewed studies published in late 2024 confirmed the critical governance barrier: systematic literature review and Georgetown CSET evaluation both documented ~50% of AI-generated code contains security vulnerabilities and bugs. Practitioner reports highlighted reliability failures with current tools. By year-end, the pattern was definitive: AI-generated CI/CD and IaC code had achieved broad adoption and clear productivity benefits, but persistent code-quality and security risks remained fundamental barriers—success depended entirely on mandatory human review, automated scanning, and strict governance controls.
  • 2025-Q1: Vendor platforms deepened agentic integration: GitLab announced Duo Workflow in private beta for automating project bootstrapping and CI/CD configuration; GitHub Copilot shipped troubleshooting feature (GA) for GitHub Actions; Amazon Q expanded CLI integrations enabling practitioner teams to embed AI code review into pipelines. Real-world practitioner integration blogs documented specific deployments. However, adoption barriers sharpened: Infragistics survey found 45% of tech leaders cite AI code reliability as a top challenge, with 55% viewing AI deployment as their biggest business risk despite 73% planning expansion. Snyk warned of silent security vulnerabilities in AI-generated code requiring mandatory scanning. AWS's re:Invent session acknowledged catastrophic infrastructure error risks and agent detachment from failures. The critical inflection point persisted: vendor platforms had matured and practitioner adoption was widening, but fundamental code-quality and governance constraints remained non-negotiable for production safety.
  • 2025-Q2: Academic and vendor research confirmed deployment maturity alongside persistent quality risks. AWS researchers published Multi-IaC-Eval benchmark showing modern LLMs achieving >95% syntactic validity in CloudFormation, Terraform, and CDK generation, but facing significant semantic alignment challenges. GitLab Duo Enterprise shipped AI impact analytics dashboard enabling measurement of IaC generation effectiveness on SDLC metrics (deployment frequency, change failure rate, cycle time). However, Apiiro's security analysis reconfirmed 40%+ of AI-generated code contains vulnerabilities, with developers committing insecure code faster than security teams can validate. Practitioner reports of GitHub Copilot failures generating 60-90% speculative content for CI/CD configuration highlighted fundamental reliability limits. Gopher Security survey found 72% of practitioners view GenAI as top IT risk. Market evolution accelerated toward contextual, AI-powered IaC security validation and secure-by-design approaches. The pattern solidified: deployment at scale was real and widening, but success remained entirely dependent on mandatory human review, automated security scanning, and strict governance guardrails around AI-generated infrastructure code.
  • 2025-Q3: Deployment adoption continued accelerating with concrete case studies and time-saving evidence. AWS published QRRA case study showing Amazon Q Developer enabling 3-day microservices modernization; documented Lambda CI/CD pipeline setup automation reducing 30-60 minute manual process to 5 minutes. Academic research (arXiv) proposed reference architecture for policy-bounded AI co-pilots in CI/CD with decision taxonomy and trust-tier frameworks. GitLab shipped GraphQL APIs for AI usage metrics across all Duo features, signaling enterprise-grade measurement infrastructure maturity. However, Veracode's Q3 security analysis confirmed persistent code-quality barriers: 45% of AI-generated code contains vulnerabilities with language-specific failure rates (Java 71%, Python 38%, C# 45%), particularly weak on CWE-80 (XSS) and CWE-117 (log injection). Practitioner consensus emerged on governance patterns: data provenance, model versioning, bias detection, and compliance monitoring became standard operational considerations. The critical pattern hardened: real-world deployment at enterprise scale was concrete and measurable, but fundamental security and reliability governance barriers remained non-negotiable for safe production adoption.
  • 2025-Q4: Real-world deployment at enterprise scale widened with multi-cloud adoption evidence and analytics infrastructure maturity. echo3D migrated to multi-cloud IaC using Amazon Q Developer, achieving 87% faster development time and 99.8% deployment success rates; AI-assisted code conversion pipelines achieved 95%+ accuracy converting CloudFormation to Terraform. GitLab shipped enhanced AI impact analytics dashboard with 6-month SDLC trend visibility, signaling vendor capability maturity for measuring AI-generated code impact at scale. However, critical negative signals persisted: Futurum survey found 60% of organizations concerned about AI-generated vulnerabilities, with 53% having discovered critical or high-severity flaws in the past twelve months. Practitioner experiments showed mixed results—autonomous GitHub Copilot CI/CD pipeline management reduced test incidents by 50% but failed on ambiguous requirements. The practice had entered a phase of pragmatic deployment: adoption was real and productivity benefits were concrete, but fundamental security, reliability, and governance barriers remained unresolved—success in production continued to depend entirely on strict validation, mandatory human review, and contextual security scanning rather than on further vendor capability improvements alone.
  • 2026-Jan: Agentic CI/CD and IaC generation reached major inflection point with simultaneous platform GA announcements from all major vendors. AWS Amazon Q Developer and GitLab Duo Agent Platform both GA'd with explicit agentic workflows for pipeline configuration, infrastructure generation, and troubleshooting. However, simultaneous negative signals underscored persistence of adoption barriers: survey synthesis showed 90% developer AI adoption but 96% distrust of code accuracy, with 40-48% of AI-generated code containing security vulnerabilities. AWS executive analysis warned that AI assistants amplify organizational bottlenecks when delivery pipelines remain manual, with 77% of organizations still deploying once daily or less. Practitioner opinion shifted toward treating AI-generated IaC as intent description layer with embedded standards enforcement, rather than autonomous code generation. The critical pattern sharpened: vendor platforms had matured agentic capabilities significantly, but the fundamental tension persisted—AI could accelerate infrastructure generation at scale, yet success remained entirely dependent on organizational pipeline maturity, mandatory human review, and security-first governance rather than on further vendor feature velocity.
  • 2026-Feb: Platform capability expansion continued with GitHub Copilot enterprise usage metrics GA and Azure Boards integration with custom agents, signaling vendor maturity in agentic workflows. However, empirical evidence of adoption barriers intensified: CircleCI's analysis of 28M workflows showed main branch success rates dropped to 70.8% (5-year low) and recovery times rose 13%, revealing that AI code generation capacity exceeded pipeline integration capability. Developer trust collapsed further—Stack Overflow survey found only 29% trust AI code despite 84% adoption. Critical practitioner analysis (Signadot, byteiota) identified fundamental gaps: traditional CI/CD lacks integration testing for non-deterministic agents, and 96% of developers distrust AI accuracy while only 48% verify before committing, making verification the new bottleneck. The pattern clarified: capability expansion was real, but the limiting factor shifted from tooling maturity to organizational readiness—adoption succeeded where CI/CD practices already included strict integration testing, mandatory human review, and security scanning, and failed where governance was superficial.
  • 2026-Mar: Vendor platform GA announcements converged with high-profile governance failures. Pulumi Neo and Spacelift Intelligence (Intent) both reached GA for natural-language infrastructure provisioning; a Harness survey of 700 leaders found 69% of heavy AI users experience deployment rollbacks despite 45% faster deploy velocity. Amazon's internal Kiro AI agent caused multiple production outages (6.3M lost orders on March 5) due to agentic autonomy bypassing human approval gates; Harper Foley documented ten infrastructure-destruction incidents across six AI tools with no vendor postmortems or liability frameworks. On the generation quality side, AquilaX analysis found AI systematically misconfigures IaC security (78% of S3 buckets unprotected, 71% IAM wildcards, 69% unencrypted EBS); a LinearB analysis of 8.1M PRs showed AI-generated code waits 4.6x longer for review and achieves only 32.7% acceptance, with CI/CD review consuming 57% of cycle time. The month crystallized a dual signal: agentic infrastructure generation is reaching platform maturity, but governance and review infrastructure remain the binding constraint for safe production adoption.
  • 2026-Apr: Deployment progress widened alongside ROI and maturity concerns. New multi-agent case studies emerged: InfraSquad (LangGraph-based Terraform generation with security loops and CIDR sanitization) and Classmethod's production-adjacent CloudWatch automation using Claude Code and HashiCorp Agent Skills, demonstrating agent error recovery capabilities. Ecosystem maturity solidified with HashiCorp Agent Skills, antonbabenko's terraform-skill, and AWS agent plugins becoming de facto standards. Pulumi Neo's official agentic IaC documentation confirmed GA status — natural language to production infrastructure code operating within policy enforcement and mandatory PR review gates. However, ROI barriers intensified: Gartner's survey of 782 I&O professionals found only 28% of AI infrastructure projects achieve full ROI, with 20% failing outright. A security audit of 200+ codebases found 73% contain vulnerabilities automated scanners miss (hardcoded secrets, deprecated patterns, hallucinated functions, authorization flaws, fabricated packages), while JetBrains research confirmed a 73% CI/CD adoption gap — pipelines' requirement for deterministic, reproducible outputs creating fundamental tension with non-deterministic AI generation. Adoption metrics showed AI generating 42% of committed code with 18% faster cycle times, but with quality trade-offs: 1.7× more issues, 3× higher readability problems, 2.74× more security vulnerabilities. The phase matured toward pragmatic integration: practitioner guidance shifted to treating AI-generated IaC as junior engineer output requiring validation gates, secure-by-design patterns, and policy enforcement rather than autonomous generation.
  • 2026-May: Practitioner case studies confirmed multi-agent CI/CD architectures deliver quantified gains (93% deployment time reduction, 92% fewer failed deploys) when guardrails are in place, while a 30-day production experiment without guardrails produced only 62% success and major incidents including database corruption and unauthorized IAM escalation. Security evidence hardened: IOActive's evaluation of 27 AI models on infrastructure code found 70–97% vulnerability rates for Terraform, Dockerfiles, and CI/CD pipelines, and Wiz analysis of hundreds of thousands of cloud environments found 20% of AI-powered development organizations experienced systemic security issues from repeated generation patterns. Platform-level supply-chain exposure persisted as the defining new risk: April 2026 incidents (Vercel OAuth breach, SAP npm token exposure via AI-generated configs) confirmed the AI tooling layer itself remains outside traditional audit frameworks.