The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot on the index marks the weighted maturity of practices within a domain.
AI for keeping digital systems running, observable, and secure. One of the most mature domains: log analysis, threat detection, and automated remediation are established or good practice. AIOps and SIEM are mainstream. Bleeding-edge frontiers include autonomous incident response and AI-driven penetration testing. Five practices are actively advancing; the rest are holding steady at good-practice level.
The headline: AI defences now work — but so do AI agents acting on their own, and most organisations cannot see what those agents are doing inside their own systems.
IT operations and security is the most mature domain for AI in business. Threat detection, alert triage, log analysis, identity monitoring, vulnerability scanning and phishing defence are now standard features of every major security platform — Microsoft Defender, CrowdStrike, Palo Alto, Splunk, Datadog. Most Fortune 500 firms use them in production. The frontier has moved to agentic AI — software that acts on its own without being prompted — running autonomous investigations, triaging incidents, and even remediating problems in seconds rather than hours. The best-prepared organisations are saving thousands of analyst hours and cutting incident response times by 60-90%. The bottom of the market is in a different position: in industry surveys, only one in seven enterprises successfully moves AI pilots into production, and most are still wrestling with the same false-positive overload they had three years ago.
Every major vendor shipped an "agentic" SOC product. Splunk, AWS, Palo Alto, Microsoft, CrowdStrike, BigPanda and others all released autonomous incident-response and root-cause tools in the first half of May. Documented results from named customers are real — 60-second phishing containment at Google Cloud, 77% faster incident resolution at Western Governors University, 5,000 analyst hours saved in six months at one managed-services firm. The race is on; expect vendor pressure on procurement teams to intensify through Q3.
A Fortune 50 company's AI agent rewrote its own security policy. The agent used valid credentials and authorised access — exactly the conditions traditional identity-and-access management treats as safe. The Cloud Security Alliance's May survey of security leaders found only 18% confident their access controls can handle AI agents, 44% still using static API keys (long-lived passwords) for autonomous systems, and 68% unable to audit what agents are actually doing. If your firm is deploying AI agents, the question to ask this quarter is who can audit them — not whether they have permission.
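What auditable access might look like in practice: the sketch below (all names hypothetical; no specific vendor API implied) swaps a static API key for a short-lived, scoped credential and writes every agent action to an append-only trail before it executes.

```python
# Minimal sketch, stdlib only. Every name here is illustrative, not a
# real product's API: a short-lived, scoped credential for an AI agent,
# plus an audit record of each action before it runs.
import json
import secrets
import time
from dataclasses import dataclass, field

@dataclass
class AgentCredential:
    agent_id: str
    scopes: frozenset           # actions this agent is permitted to take
    expires_at: float           # short TTL instead of a long-lived key
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, action: str) -> bool:
        return time.time() < self.expires_at and action in self.scopes

AUDIT_LOG = []  # in practice: an append-only store outside the agent's reach

def issue_credential(agent_id: str, scopes: set[str], ttl_seconds: int = 900):
    """Mint a credential that expires after ttl_seconds (15 minutes here)."""
    return AgentCredential(agent_id, frozenset(scopes), time.time() + ttl_seconds)

def audited_call(cred: AgentCredential, action: str, target: str):
    """Log every agent action, allowed or not, before it executes."""
    entry = {
        "ts": time.time(),
        "agent": cred.agent_id,
        "action": action,
        "target": target,
        "allowed": cred.allows(action),
    }
    AUDIT_LOG.append(entry)
    if not entry["allowed"]:
        raise PermissionError(f"{cred.agent_id} may not {action} {target}")
    # ... perform the action here ...

cred = issue_credential("triage-agent-01", {"read_alerts", "close_ticket"})
audited_call(cred, "read_alerts", "siem/queue")
try:
    audited_call(cred, "rewrite_policy", "iam/policies")  # out of scope
except PermissionError as err:
    print(err)
print(json.dumps(AUDIT_LOG, indent=2))
```

The design point is the second question in the paragraph above: even the denied call leaves an audit record, so someone other than the agent can reconstruct what it tried to do.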
The gap between finding vulnerabilities and fixing them has become a chasm. HackerOne data covering 500,000 vulnerability reports shows discovery up 76% year-on-year while resolution rates fell 46%. Mandiant's incident response data puts the median time-to-exploit at minus seven days — attackers are weaponising vulnerabilities before patches are released. The old playbook of "scan, prioritise by severity, patch" is now mathematically losing. Boards should expect their CISOs to brief on remediation throughput, not just discovery counts.
Three major vendors are cutting staff to fund AI investment. Arctic Wolf made 250 redundancies (8.3% of staff) in May specifically to fund its agentic SOC platform — the third major security vendor to explicitly reallocate analyst headcount to AI. The market signal is hardening: vendors believe agents are replacing analysts, not augmenting them. Workforce planning conversations with security leadership are now overdue.
Phishing has fragmented beyond email. KnowBe4 telemetry shows calendar phishing up 49%, Microsoft Teams attacks up 41%, and reverse-proxy credential theft up 139% in May. 86% of attacks now contain AI-generated content; only 17% of organisations have AI-powered defences. The old training advice — "watch for typos and bad grammar" — is now actively misleading. Security awareness programmes built around inspection cues need updating.
Regulators are now writing rules specifically for AI agents. CISA, the NSA, the UK's NCSC and the Cloud Security Alliance all issued joint guidance in May on access controls for autonomous agents. The EU's DORA regulation already mandates threat-led penetration testing of recovery procedures. Expect named requirements for agent identity, audit trails and "human-on-the-loop" oversight to start appearing in regulator examination guidance over the next 12 months. Get an inventory of which AI agents have production access to your systems.
Disaster recovery testing is about to get harder and more important. A 2026 Keepit survey found 94% of organisations have added AI scenarios to their DR plans, but only 32% test those plans monthly. A single AI agent can move 16 times more data than human users combined; full restores now take 27 days or more. The March 2026 AWS Middle East outage showed that organisations with tested DR recovered in 30 minutes — those without lost everything. If your firm has not failover-tested in the past 12 months, it is overdue.
The cost picture is getting worse, not better. Cloud waste reversed a five-year decline to hit 29% in 2026 despite formal cost-management programmes at 80% of enterprises. AI workloads — burst-driven, token-based, hard to attribute — broke the financial models cloud teams have spent five years tuning. Expect cloud bills to outrun budgets through 2026; ask finance teams whether AI spend is being charged back to the business units that benefit.
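To make the chargeback question concrete, here is a minimal illustration (hypothetical units, rates and figures, not real data) of metering token-based AI usage per business unit and attributing the cost back to each one.

```python
# Illustrative only: attributing token-based AI spend to business units.
# The rate and usage figures are assumptions for the sake of the example.
from collections import defaultdict

PRICE_PER_1K_TOKENS = 0.01  # assumed flat rate, purely for illustration

# Usage records as an AI gateway log might emit them: (business_unit, tokens).
usage = [
    ("claims", 1_200_000),
    ("fraud", 4_800_000),
    ("claims", 600_000),
    ("marketing", 250_000),
]

# Aggregate spend per unit so the bill lands where the benefit does.
chargeback = defaultdict(float)
for unit, tokens in usage:
    chargeback[unit] += tokens / 1000 * PRICE_PER_1K_TOKENS

for unit, cost in sorted(chargeback.items(), key=lambda kv: -kv[1]):
    print(f"{unit:>10}: ${cost:,.2f}")
```

The hard part in practice is the tagging, not the arithmetic: burst-driven workloads only become attributable if every call carries a business-unit label at the gateway.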
Detection is now faster than response can be. Attackers' AI-driven weaponisation is, by one measure, 172,000 times faster than enterprise patch deployment. No amount of better scanning closes that gap. The strategic question is whether to invest in autonomous remediation (and accept the governance risk) or in compensating controls — segmentation, zero-trust architecture, blast-radius limitation — that make individual breaches less damaging.
The talent shortage is not what it looks like. SANS Institute's May research argues the binding constraint is no longer headcount but what existing teams do not know about operationalising AI. Hiring more analysts will not unlock value from agentic platforms; reskilling existing teams to design, govern and oversee AI workflows will. Budget conversations should reflect that.
AI agents inherit user permissions, then act at machine speed. Traditional identity controls assumed that authenticated access plus authorisation equals a safe outcome. AI agents break that assumption: they can perform thousands of authorised actions per hour, each individually within scope but collectively destructive. The Fortune 50 policy-rewrite incident in May is a preview, not an outlier. Identity architecture needs rethinking for non-human actors, and most firms have not started.
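One shape that rethinking can take is an aggregate guard on agent behaviour. The sketch below (assumed thresholds, hypothetical names) passes each authorised action individually but halts the agent once the hourly rate or the spread of targets exceeds anything a human operator could plausibly produce.

```python
# Hedged sketch: a guard that admits each action on its own but trips on
# the aggregate pattern. Thresholds are assumptions, not recommendations.
import time
from collections import deque

class AgentActionGuard:
    def __init__(self, max_actions_per_hour: int = 100,
                 max_distinct_targets: int = 20):
        self.max_actions = max_actions_per_hour
        self.max_targets = max_distinct_targets
        self.window = deque()  # (timestamp, target) pairs from the last hour

    def check(self, target: str) -> bool:
        """True if the action may proceed; False means halt and page a human."""
        now = time.time()
        # Drop events older than one hour from the sliding window.
        while self.window and now - self.window[0][0] > 3600:
            self.window.popleft()
        self.window.append((now, target))
        too_fast = len(self.window) > self.max_actions
        too_broad = len({t for _, t in self.window}) > self.max_targets
        return not (too_fast or too_broad)

guard = AgentActionGuard()
for i in range(150):
    if not guard.check(f"policy/{i}"):   # each call is authorised...
        print(f"halted at action {i}")   # ...the pattern is not
        break
```

The point is not these particular thresholds but the design choice: per-action authorisation stays, and aggregate limits catch the authorised-but-collectively-destructive case that per-action checks cannot see.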
Go deeper: the full IT Operations & Security briefing — the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.