The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that analyses cloud spending patterns and recommends rightsizing, reserved instances, and architectural changes to reduce cost. Includes waste detection and commitment planning; distinct from capacity planning, which focuses on performance rather than cost.
Cloud cost optimisation is a proven discipline with mature tooling, competitive vendors, and documented ROI — organisations that apply it systematically report 30–52% reductions in cloud spend. The practice centres on analysing spending patterns and automating rightsizing, commitment management, and waste detection across cloud infrastructure. Since reaching good-practice maturity in 2022, the challenge has shifted from whether optimisation works to whether organisations can sustain the execution discipline it demands. Only about a third of enterprises report fully achieving their cloud cost goals, even with formal FinOps teams in place. That gap is now widening: AI workloads introduce burst-driven, token-based spending patterns that break the allocation and forecasting assumptions traditional FinOps was built on. The defining tension for this practice is no longer tooling adequacy but organisational bandwidth — teams are stretched across an expanding scope that now includes SaaS licensing, private cloud, and AI cost governance alongside conventional IaaS optimisation.
The vendor ecosystem is consolidated and competitive. Apptio/IBM Cloudability and Flexera anchor the market, with Flexera's acquisitions of ProsperOps and Chaos Genius in early 2026 signalling a shift from recommendation dashboards toward autonomous execution. AWS continues expanding native tooling through Compute Optimizer and Cost Optimization Hub, while CAST AI and similar specialists target Kubernetes and container workloads. Traditional optimisation tactics — rightsizing, committed-use discounts, Spot Instances — remain effective for conventional IaaS, delivering the 30–52% savings the discipline is known for. Capital One, McDonald's, and Siemens have each documented significant reductions through systematic commitment management.
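As a concrete illustration of the rightsizing tactic above, here is a minimal sketch of the kind of recommendation logic these tools run. The 70% utilisation target, the linear cost-per-vCPU assumption, and the instance data are all hypothetical simplifications for illustration, not any vendor's actual algorithm (real tools also weigh memory, network, burst credits, and instance families).

```python
from dataclasses import dataclass

# Hypothetical utilisation record; real tools pull this from CloudWatch
# or equivalent monitoring APIs over a multi-week lookback window.
@dataclass
class InstanceStats:
    instance_id: str
    vcpus: int
    p95_cpu_pct: float       # 95th-percentile CPU utilisation
    monthly_cost_usd: float

def rightsize(stats: InstanceStats, target_pct: float = 70.0) -> dict:
    """Recommend a smaller vCPU count so p95 CPU lands near target_pct."""
    needed_vcpus = max(1, round(stats.vcpus * stats.p95_cpu_pct / target_pct))
    if needed_vcpus >= stats.vcpus:
        return {"instance_id": stats.instance_id, "action": "keep",
                "est_monthly_saving": 0.0}
    # Simplifying assumption: cost scales linearly with vCPU count
    # within an instance family.
    new_cost = stats.monthly_cost_usd * needed_vcpus / stats.vcpus
    return {
        "instance_id": stats.instance_id,
        "action": f"downsize to {needed_vcpus} vCPU",
        "est_monthly_saving": round(stats.monthly_cost_usd - new_cost, 2),
    }

print(rightsize(InstanceStats("i-0abc", vcpus=8, p95_cpu_pct=18.0,
                              monthly_cost_usd=280.0)))
# -> downsize to 2 vCPU, saving an estimated $210/month
```

An 8-vCPU box idling at 18% CPU is exactly the waste profile the 30–52% savings figures come from; the tooling's value is running this check continuously across thousands of instances.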
The FinOps Foundation's 2026 survey confirms the discipline's scope has expanded well beyond cloud infrastructure: 90% of practitioners now manage SaaS costs (up 25 points), 64% cover software licensing (up 15 points), 57% handle private cloud (up 18 points), and 98% manage AI/ML workloads (up from 31% in 2024).

That expansion has revealed structural limits. AI spending patterns violate core FinOps assumptions: costs are burst-driven, token-based, experiment-heavy, and shared across teams in ways that defeat traditional allocation models. A CloudZero survey of 475 organisational leaders found AI had disrupted established cost governance. Gartner's April 2026 infrastructure survey documented 72% of AI infrastructure projects failing to deliver ROI (only 28% succeeding), with 77% of failures organisational rather than technical: no clear ownership, misaligned objectives, or no post-deployment measurement. Wasabi's 2026 storage index found 49% of organisations exceeded budgets due to fee complexity, with 72% harbouring unmeasured dark data. Most critically, FinOps adoption has grown sharply (80% of organisations now run formal programs) yet cloud efficiency collapsed 15 percentage points (from 80% to 65%), the first waste reversal in five years; waste ticked back up to 29% in 2026 after years of decline.

Engineers are beginning to embed cost gates directly into CI/CD pipelines, blocking pull requests on spend thresholds, a cultural shift toward distributed ownership. But automation remains limited: only 17% of Kubernetes teams run continuous optimisation in production, and 71% require human review before changes. The practice has hit a maturity ceiling. Teams with fully automated FinOps achieve 25–30% higher savings than manual approaches, yet mature teams face a hard wall around 97% optimisation efficiency, beyond which forecasting and AI cost attribution become the limiting factors.
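The CI/CD cost gates described above can be sketched as a small pipeline step. The thresholds, and the idea of passing in a pre-computed monthly cost delta (for instance from an Infracost-style diff of the pull request's infrastructure plan), are illustrative assumptions, not a specific tool's interface.

```python
import sys

# Hypothetical thresholds; a real gate would load these from team policy.
MONTHLY_DELTA_LIMIT_USD = 500.0   # hard-block above this
WARN_THRESHOLD_USD = 100.0        # flag-and-continue above this

def cost_gate(estimated_monthly_delta_usd: float) -> int:
    """Return a CI exit code: 0 = pass, 1 = block the pull request."""
    if estimated_monthly_delta_usd > MONTHLY_DELTA_LIMIT_USD:
        print(f"BLOCKED: +${estimated_monthly_delta_usd:,.2f}/mo exceeds "
              f"${MONTHLY_DELTA_LIMIT_USD:,.2f} limit; needs FinOps approval")
        return 1
    if estimated_monthly_delta_usd > WARN_THRESHOLD_USD:
        print(f"WARN: +${estimated_monthly_delta_usd:,.2f}/mo; flagging for review")
    return 0

if __name__ == "__main__":
    # Usage in a pipeline step: python cost_gate.py 312.50
    sys.exit(cost_gate(float(sys.argv[1]) if len(sys.argv) > 1 else 0.0))
```

The design point is the non-zero exit code: it lets an ordinary CI runner enforce spend policy with no FinOps platform in the loop, which is what makes the ownership shift "distributed".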
— AI/ML workloads accelerated from 2.42% to 5.86% of total cloud spend in five months; 80% of CEOs say their role is at risk if the company fails to deliver measurable AI ROI by end of 2026; 40% are spending $10M+ on AI with no clarity on ROI; signals the emergence of AI cost governance.
— FinOps evolution into an AI-native practice: 60% of enterprises use AI/automation in FinOps workflows; AWS, Google Cloud, and Azure have all deployed AI cost tools; includes real-time governance enforcement, unit-economics maturity, and Kubernetes cost optimisation; an ecosystem-wide shift.
— Mid-to-large enterprises spend $1.8M+ annually on LLM infrastructure with no attribution mechanism; three chargeback models detailed (including 92%-precision dynamic attribution); autonomous agent looping amplifies costs 400%; Gartner: 80% of enterprises will require AI cost attribution by 2026.
— Critical paradox: FinOps adoption jumped from 31% (2024) to 70% (2026) yet waste stayed flat at 27-32%. Structural causes: vendor opacity (42% of EC2 missing discounts), visibility gaps (61% can't attribute 80%+ of costs), and an overprovisioning culture. Vendor incentives are fundamentally misaligned with efficiency.
— Concrete deployment: Llama 4 Scout quantization/pruning achieved a 73% cost reduction ($32.7K/month); L4 hardware vs H100 cuts costs 60-70%; Spot strategies yield 60-90% discounts. Demonstrates that AI infrastructure cost optimisation now extends beyond compute rightsizing to model optimisation.
— Median SaaS cloud spend is 11.5% of revenue with 27% average waste; well-managed teams hold waste to 10-15% via monthly reviews (discipline > tooling). Reserved Instance paradox: despite a promised 50-72% saving, adoption remains low because modern architectures evolve faster than multi-year commitments.
— $37B in enterprise GenAI spending in 2025, yet 80% report no measurable EBIT impact; AI cost attribution requires extending cloud governance (tagging, chargeback, anomaly detection) to dynamic token/API patterns rather than static resources; an emerging expansion of the practice.
— Critical limitation signal: waste reversed a five-year decline, rising to 29%, despite 71% running a FinOps CoE and 63% having dedicated teams. AI workloads (22% of spend) break traditional optimisation; GPU waste runs 30-50%. Organisations addressing GPU spend directly achieve 40-70% reductions; signals a practice ceiling and new requirements.
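The token-based chargeback problem flagged in the LLM attribution item above can be made concrete with a minimal pro-rata sketch. The event shape and the 3x output-token weighting (a common input/output pricing ratio) are illustrative assumptions; the 92%-precision dynamic attribution model the item references is not reproduced here.

```python
from collections import defaultdict

def attribute_llm_costs(usage_events, monthly_invoice_usd):
    """Split a shared LLM invoice across teams pro rata by weighted tokens.

    Assumes output tokens cost roughly 3x input tokens, so they carry
    3x weight in the allocation. Real chargeback systems also handle
    per-model rates, cached tokens, and agent-to-agent call chains.
    """
    weighted = defaultdict(float)
    for e in usage_events:
        weighted[e["team"]] += e["input_tokens"] + 3 * e["output_tokens"]
    total = sum(weighted.values())
    return {team: round(monthly_invoice_usd * w / total, 2)
            for team, w in weighted.items()}

events = [
    {"team": "search",  "input_tokens": 800_000, "output_tokens": 200_000},
    {"team": "support", "input_tokens": 200_000, "output_tokens": 400_000},
]
print(attribute_llm_costs(events, 10_000.0))
# -> {'search': 5000.0, 'support': 5000.0}
```

Note how the two teams land on equal shares despite very different raw token counts: weighting by price-bearing tokens rather than requests is the whole point of token-level attribution.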
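The stacked savings in the Llama 4 Scout item (cheaper hardware plus Spot pricing) compound multiplicatively once a quantised model fits the smaller GPU. A back-of-envelope sketch, using assumed hourly rates chosen only to be consistent with the cited 60-70% hardware cut and 60-90% Spot range:

```python
# Hypothetical rates for illustration; real prices vary by region and provider.
H100_ON_DEMAND = 4.00    # $/GPU-hour (assumed)
L4_ON_DEMAND   = 1.20    # $/GPU-hour (assumed), ~70% below the H100 rate
SPOT_DISCOUNT  = 0.70    # 70% off on-demand, within the cited 60-90% range
HOURS_PER_MONTH = 730

def monthly_cost(rate_per_hour, gpus, utilisation=1.0):
    """Monthly cost of a fleet running at the given utilisation fraction."""
    return rate_per_hour * gpus * HOURS_PER_MONTH * utilisation

baseline = monthly_cost(H100_ON_DEMAND, gpus=8)        # 23,360/mo
l4_spot = monthly_cost(L4_ON_DEMAND * (1 - SPOT_DISCOUNT), gpus=8)
print(f"H100 on-demand: ${baseline:,.0f}/mo")
print(f"L4 on Spot:     ${l4_spot:,.0f}/mo ({1 - l4_spot / baseline:.0%} lower)")
```

Under these assumed rates the two levers together cut roughly 90% of the bill, which is why the item argues model optimisation (making the workload fit cheaper hardware) now belongs inside the cost-optimisation toolkit rather than beside it.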
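The Reserved Instance paradox above reduces to break-even arithmetic: a prepaid commitment only pays off if the workload outlives the discount's break-even point. A sketch, assuming an all-upfront reservation priced at (1 - discount) times on-demand for the full term, and constant on-demand rates:

```python
def ri_break_even_months(discount: float, term_months: int = 12) -> float:
    """Months a workload must actually run before a fully prepaid
    reservation beats pay-as-you-go. If the workload is retired or
    re-architected sooner, the 'discount' is a net loss.
    """
    return term_months * (1 - discount)

for d in (0.50, 0.72):
    print(f"{d:.0%} discount -> breaks even after "
          f"{ri_break_even_months(d):.1f} months of a 12-month term")
```

Even at the top of the cited 50-72% range, the reservation is underwater for the first ~3.4 months; at 50% it takes six. When services are routinely rebuilt or re-platformed inside that window, low RI adoption is rational rather than undisciplined, which is the paradox the survey describes.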