The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
Practices for evaluating, governing, and ensuring the responsible deployment of AI systems. Deeply polarized: model evaluation and bias auditing are good practice, but nearly half the domain is bleeding-edge — alignment research, interpretability, and AI safety benchmarking lack production-grade tooling. Regulatory pressure is accelerating adoption of the mature practices while the frontier remains largely academic.
The headline: AI rules now have teeth, and three out of four companies aren't ready. EU fines of up to 7% of global revenue start landing in August.
Most organizations spent the last two years preparing for AI regulation. This fortnight, the preparation phase ended: enforcement is live, the fines are real, and a small group of early movers (around a quarter of large firms) is pulling ahead, both in operating efficiency and in what acquirers will pay for them. The rest face a closing window. The EU's high-risk deadline lands in August 2026; Colorado's arrives in June; and cleaning up an AI system after the fact takes 6–12 weeks of forensic engineering per system. If you can't produce a list of every AI system your company uses, you're already behind.
Good AI governance is now worth real money in M&A. A €90M acquisition collapsed this quarter because the target couldn't show how it managed AI risk. A separate deal paid a 1.5–2× revenue premium for a company that could. If you're on either side of a deal in the next 12 months, AI governance documentation is now a diligence item, not a nice-to-have.
Courts are fining people for AI-invented "facts." "Hallucination" is the term for when an AI tool confidently makes things up: fake citations, fake quotes, fake numbers. Major legal-AI tools get this wrong anywhere from one time in five to one time in three. US courts levied $145K in sanctions in the first quarter alone, with 5–6 new cases reported daily. Anything AI writes that goes to a regulator, a court, or a customer needs a human to check it. No exceptions.
A leading AI lab kept its newest model on a tight leash for safety reasons. Anthropic restricted its latest model to 40 vetted partners after internal testing turned up thousands of vulnerabilities — a commercially expensive call that no other lab has matched. Don't assume "newer model" means "safer model." Vendor safety practices vary widely, and the burden of asking the right questions falls on the buyer.
Companies are deploying AI "agents" much faster than they can govern them. An "agent" is software that doesn't wait to be told what to do — it takes actions on its own (sending emails, moving money, updating records). Eight in ten tech firms are now scaling agents into production; only one in five has the controls to manage them. The most aggressive adopters run 300+ separate AI tools internally. If you're piloting agents anywhere — sales, IT, finance — get a written policy on what they can and can't do before you scale, not after.
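A written agent policy can start as something machine-checkable rather than a document nobody reads. A minimal sketch in Python, assuming a simple allow / escalate / block model — the action names and categories here are illustrative, not drawn from any specific agent framework:

```python
from dataclasses import dataclass, field

# Hypothetical policy: which action types an agent may take on its own,
# which require a human sign-off, and which are always blocked.
@dataclass
class AgentPolicy:
    autonomous: set[str] = field(default_factory=lambda: {"draft_email", "update_crm_note"})
    needs_approval: set[str] = field(default_factory=lambda: {"send_email", "issue_refund"})
    forbidden: set[str] = field(default_factory=lambda: {"move_funds", "delete_records"})

    def check(self, action: str) -> str:
        if action in self.forbidden:
            return "block"
        if action in self.needs_approval:
            return "escalate"
        if action in self.autonomous:
            return "allow"
        return "escalate"  # default-deny: unknown actions go to a human

policy = AgentPolicy()
print(policy.check("draft_email"))    # allow
print(policy.check("move_funds"))     # block
print(policy.check("wire_transfer"))  # escalate (unlisted action)
```

The key design choice is the last line of `check`: anything not explicitly authorized escalates to a person, so new agent capabilities are reviewed before they run unattended rather than after.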
US banking regulators issued new AI rules. The Federal Reserve, FDIC, and OCC (the three main US banking supervisors) jointly replaced their 2011 model risk guidance. Every US bank above $30B in assets must now keep a live inventory of its AI models, validate them, and monitor them continuously. If you're a regulated lender of any size, expect your supervisor to ask the same questions within a year, even below the $30B threshold.
August 2026: EU enforcement begins on "high-risk" AI systems — broadly, anything that affects hiring, credit, insurance, education, or critical infrastructure. The technical standards companies need to follow won't be finalized until late 2026, but enforcement starts anyway. Systems deployed before December 2027 are grandfathered "unless substantially modified," which is creating a perverse rush to lock in deployments now. Start your AI inventory and risk classification now. Waiting for the standards is a losing strategy.
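Starting an AI inventory does not require a specialist platform; one structured record per system is enough to begin risk classification. A minimal sketch, assuming an EU-AI-Act-style risk tiering — the field names and the simplified domain list are illustrative, not a regulatory schema:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"  # hiring, credit, insurance, education, infrastructure

# Simplified stand-in for the Act's high-risk categories.
HIGH_RISK_DOMAINS = {"hiring", "credit", "insurance", "education", "critical_infrastructure"}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_domain: str
    deployed_before_dec_2027: bool  # grandfathering flag
    owner: str                      # an accountable person, not a team alias

    def classify(self) -> RiskTier:
        if self.business_domain in HIGH_RISK_DOMAINS:
            return RiskTier.HIGH
        return RiskTier.LIMITED  # conservative default pending legal review

inventory = [
    AISystemRecord("resume-screener", "AcmeAI", "hiring", True, "j.doe"),
    AISystemRecord("support-chatbot", "in-house", "customer_service", False, "a.lee"),
]
for rec in inventory:
    print(rec.name, rec.classify().value)  # resume-screener high / support-chatbot limited
```

Even a spreadsheet with these five columns beats what 83% of large enterprises can currently produce; the classification logic can be refined with counsel once the list exists.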
The first big agent-caused loss is coming. 96% of organizations using AI agents already report "agent sprawl" — too many agents, no one in charge — and only 12% have centralized oversight. Insurers are responding by excluding AI losses from policies entirely rather than pricing them. Assume your current cyber insurance does not cover AI-agent failures. Read the exclusions and budget for the gap.
State-by-state AI rules are stacking up. Colorado starts enforcing in June, with California, Illinois, Texas, and New York close behind, each with its own per-violation fines. A new federal AI Accountability Act adds a further layer on top, with an 18-month compliance window. If you operate in multiple states, no single compliance program currently covers all of these regimes — you'll need a patchwork or a high-water-mark approach.
The penalties arrived before most companies could prepare. The deadlines are fixed and the fines are quantified, but 83% of large enterprises still can't produce a basic list of the AI systems they're running. There is no quick fix, and the cost of catching up is rising every quarter.
The fastest adopters are accumulating the most risk. 70% of companies pilot AI; fewer than 20% reach production. The blocker is governance, not technology. The teams shipping fastest are also building the largest hidden compliance debt.
The tools work; the measurements don't. Off-the-shelf "AI safety" platforms now exist, but independent testing shows their guardrails can be bypassed more than three-quarters of the time, and benchmark scores don't predict real-world failures. Buying a governance tool is not a governance strategy.
Go deeper: the full AI Governance & Safety briefing — the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.