The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that tracks incidents involving AI systems and detects shadow AI usage and ungoverned deployments across the organisation. Includes incident classification and unauthorised tool discovery; distinct from risk assessment, which evaluates potential issues rather than actual ones.
AI incident tracking and ungoverned usage detection occupies a frustrating position: the tooling to find problems exists, but most organisations still lack the discipline to use it. The practice bridges IT incident management and AI governance, covering both reactive logging of AI failures and proactive discovery of the shadow AI tools employees adopt without approval. Detection vendors such as Nudge Security and JFrog have shipped generally available platforms capable of surfacing thousands of unsanctioned AI applications. The harder problem is what happens after discovery. Fewer than half of organisations that have operationalised AI maintain formal incident response plans, and only 12% rate their governance bodies as mature. That gap, capable detection infrastructure paired with immature organisational response, is what keeps the practice bleeding-edge rather than something more settled. Until systematic tracking and evidence retention catch up to the detection layer, incidents will continue to cascade without accountability.
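To make the discovery layer concrete, a minimal sketch of shadow-AI detection over proxy logs follows. The log format, domain list, and function names are hypothetical; production platforms of the kind cited above work from far richer signals such as OAuth grants, SSO events, and browser telemetry.

```python
"""Minimal shadow-AI discovery sketch: flag traffic to known GenAI
domains that falls outside the sanctioned-tool allowlist.

Illustrative only -- real catalogs track thousands of apps and richer
signals than raw proxy lines."""
from collections import Counter

# Hand-maintained signature list; a real catalog is far larger.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai", "api.openai.com"}
SANCTIONED = {"api.openai.com"}  # e.g. an approved enterprise API tenant

def discover_shadow_ai(proxy_log_lines):
    """Count (user, domain) pairs for unsanctioned GenAI traffic.

    Assumes whitespace-delimited lines: timestamp, user, domain, bytes_out.
    """
    hits = Counter()
    for line in proxy_log_lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip malformed lines
        _, user, domain, _ = fields[:4]
        if domain in GENAI_DOMAINS and domain not in SANCTIONED:
            hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2026-03-02T09:14:05Z alice chat.openai.com 48211",
        "2026-03-02T09:15:44Z bob gemini.google.com 1032",
        "2026-03-02T09:16:01Z alice api.openai.com 220",
    ]
    for (user, domain), n in discover_shadow_ai(sample).items():
        print(f"shadow-AI candidate: {user} -> {domain} ({n} requests)")
```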
Ungoverned AI usage and incident prevalence have reached critical mass. Netskope's March 2026 report shows 47% of employees use personal AI accounts unmonitored by IT, 86% of organizations lack visibility into AI data flows, and 97% of AI-related breaches lacked proper access controls. The AIUC-1 Consortium briefing (Stanford, MIT Sloan, Deutsche Börse, Confluent) identified three dominant incident risk categories: agent control (80% of organizations report risky agent behaviors, but only 21% have visibility into them), data visibility (63% of employees have pasted sensitive data into chatbots, 86% with no organizational oversight), and adversarial trust (prompt injection has moved from research into production, affecting 53% of companies using RAG or agentic systems). Shadow AI adds $670,000 to average breach costs and remains pervasive despite maturing vendor tooling.
The incident response capability gap is the binding constraint. Nudge Security, JFrog, and other detection vendors now ship production-grade tools that discover ungoverned agents across Copilot Studio, Salesforce Agentforce, and workflow platforms. But ISACA's March 2026 AI Pulse Poll of digital trust professionals across Europe found that 59% cannot say how quickly they could halt a malfunctioning AI system; only 21% could do so within 30 minutes. Only 42% express confidence that they could investigate a serious incident for regulators. Amazon's March 2026 incident (AWS outages caused by AI-assisted code changes) forced the organization to mandate senior engineer sign-off on all AI-generated production code, a governance response that tracks with Okta's emphasis on audit trails and scoped permissions as core incident prevention infrastructure. A Cycles incident catalog from April 2026 documented more than 20 real failures: Replit's AI assistant deleted production databases containing executive contacts, OpenAI Operator made unauthorized purchases, and McDonald's McHire test credentials exposed 64M job applications. These incidents share a common cause: the absence of enforcement boundaries such as budget gates, action-level risk scoring, and role-based permissions (sketched below).
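The enforcement boundaries those incidents lacked are straightforward to express in code, even if hard to retrofit. A minimal sketch, assuming hypothetical action names, risk scores, and role ceilings rather than any vendor's schema:

```python
"""Sketch of the three enforcement boundaries named above: budget
gates, action-level risk scoring, and role-based permissions checked
*before* an agent action executes. All names and scores are
illustrative."""
from dataclasses import dataclass, field

# Hypothetical action-level risk scores (0 = benign, 10 = destructive).
ACTION_RISK = {"read_record": 1, "send_email": 4, "make_purchase": 7, "drop_table": 10}
ROLE_MAX_RISK = {"reporting_agent": 3, "ops_agent": 7}  # role-based risk ceilings

@dataclass
class AgentSession:
    role: str
    budget_usd: float                       # remaining spend this session
    actions: list = field(default_factory=list)

def authorize(session: AgentSession, action: str, cost_usd: float = 0.0) -> bool:
    """Return True only if the action clears all three gates."""
    risk = ACTION_RISK.get(action, 10)      # unknown actions default to max risk
    if risk > ROLE_MAX_RISK.get(session.role, 0):
        return False                        # role-based permission gate
    if session.budget_usd - cost_usd < 0:
        return False                        # budget gate
    session.budget_usd -= cost_usd
    session.actions.append(action)          # audit trail for later forensics
    return True

if __name__ == "__main__":
    agent = AgentSession(role="reporting_agent", budget_usd=5.0)
    print(authorize(agent, "read_record"))          # True
    print(authorize(agent, "make_purchase", 20.0))  # False: risk 7 > ceiling 3
    print(authorize(agent, "drop_table"))           # False: blocked outright
```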
Regulatory pressure is forcing maturity. DORA, NIS2, and the EU AI Act (substantive obligations from August 2026) converge on requirements for forensic incident reports backed by digitally signed, timestamped logs, mandated by statute rather than left optional. Yet organizational readiness lags technical capability. A critical reassessment comes from iEnable's analysis: shadow AI adoption (68% of employees, $670K per breach) is not primarily a detection problem but a symptom of enablement gaps. Bans fail because prohibited tools often offer organizational context that approved alternatives lack. This reframes incident tracking from a pure security control into part of a larger governance ecology in which detection visibility feeds back into policy, process, and approved tool deployment. Organizations that provide sanctioned alternatives see unauthorized usage drop by up to 89%, suggesting that incident tracking effectiveness depends on pairing detection with organizational change.
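The statutory log requirement is concrete enough to sketch. A minimal illustration of digitally signed, timestamped, hash-chained audit entries follows, assuming the open-source cryptography package and invented field names; real deployments add key management, trusted time sources, and retention policy on top.

```python
"""Sketch of a digitally signed, timestamped, hash-chained audit log
of the kind the cited regimes point toward. Illustrative only.

Requires the 'cryptography' package (pip install cryptography)."""
import hashlib, json
from datetime import datetime, timezone
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice: an HSM-held org key
prev_hash = "0" * 64                        # genesis value for the hash chain

def append_entry(event: dict) -> dict:
    """Sign one log entry and chain it to the previous one."""
    global prev_hash
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,             # tamper evidence: edits break the chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = signing_key.sign(payload).hex()
    prev_hash = hashlib.sha256(payload).hexdigest()
    return record

entry = append_entry({"type": "ai_incident", "system": "example-agent",
                      "action": "halted", "severity": "high"})
print(json.dumps(entry, indent=2))
```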
By late April 2026, the incident evidence had accelerated: Stanford's 2026 AI Index documents incidents rising from 233 (2024) to 362 (2025), with 88% organizational adoption but only 36% citing incident governance frameworks. Large-scale surveys converge on the scale: Kroll research shows 76% of organizations experienced incidents (27% exceeding $1M in cost), Proofpoint's global study of 1,400 professionals documents 42% with incidents despite controls, and VentureBeat's Q1 2026 survey found 88% of enterprises experienced AI agent incidents while only 21% have runtime visibility. AI agent scope violations are now routine in production (53% per CSA), with ungoverned agents discovered at 3-4x reported rates (CrowdStrike found 500 where customers reported 150). Detection infrastructure scales (CrowdStrike, Cyberhaven, and Nudge now discover 1,500+ tools per customer), yet detection remains divorced from response. The Vercel breach (April 2026) exemplifies the failure mode: a third-party AI agent compromise requiring post-incident forensics to trace OAuth-enabled data access. Organizational maturity remains the binding constraint: detection exists at scale, but incident classification, forensic attribution, and enforcement (budget gates, permission scoping, action-level risk scoring) lag deployment velocity.
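Closing the classification and attribution gap can start with something as small as a shared incident schema that every detector must populate, linking detection output to response status. A minimal sketch, with an illustrative taxonomy rather than any standard's:

```python
"""Minimal incident-record schema tying detection output to
classification and response status -- the link the paragraph above
says is missing. Taxonomy and fields are illustrative."""
from dataclasses import dataclass
from enum import Enum

class IncidentClass(Enum):          # loosely mirrors the risk categories cited earlier
    AGENT_CONTROL = "agent_control"       # e.g. scope violations, runaway actions
    DATA_VISIBILITY = "data_visibility"   # e.g. sensitive data pasted into chatbots
    ADVERSARIAL = "adversarial"           # e.g. prompt injection
    SHADOW_AI = "shadow_ai"               # ungoverned tool or agent discovered

class ResponseStatus(Enum):
    DETECTED = "detected"
    TRIAGED = "triaged"
    CONTAINED = "contained"
    REPORTED = "reported"           # regulator-facing forensic report filed

@dataclass
class AIIncident:
    incident_id: str
    classification: IncidentClass
    source_system: str              # which detector or agent platform raised it
    attributed_actor: str | None    # user, agent, or third party; often unknown
    status: ResponseStatus = ResponseStatus.DETECTED

inc = AIIncident("INC-0042", IncidentClass.SHADOW_AI,
                 source_system="proxy-discovery", attributed_actor=None)
print(inc.classification.value, inc.status.value)
```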
— Large global survey (1,400+ professionals, 12 countries): 42% experienced AI incidents despite controls, and 52% lack confidence their controls would detect compromise, revealing a detection-infrastructure maturity gap.
— Meta's April 2026 Advantage+ expansion closes exemption for cosmetic transformations, mandates labeling for all substantially AI-generated variants with C2PA watermarking and phased global enforcement.
— World Federation of Advertisers found 78% of multinationals deploy AI-generated content; 67% have policies, but only 40% conduct audits and 80% lack technical implementation, confirming the adoption-compliance gap.
— TBWA\Australia and Ideally research documents a 'synthetic authorship penalty': AI disclosure worsens consumer trust, contradicting the policy assumption that transparency builds confidence.
— Real-world ungoverned AI detection capability: 82% of the top 100 GenAI SaaS apps classified as high or critical risk; 32.3% of ChatGPT and 24.9% of Gemini usage runs through personal accounts; 39.7% of data movements contain sensitive data.
— India's MeitY tightened IT Rules disclosure requirements after compliance failures: only ~30% of AI-generated test posts were correctly labeled across YouTube, Instagram, and X; continuous visibility is now mandatory.
— EU AI Act Article 50 (effective Aug 2, 2026) mandates machine-readable marking and human-visible disclosure for AI-generated audio, video, images, and text, with penalties of €15M or 3% of turnover for non-compliance (see the sketch after this list).
— The Foundation Model Transparency Index shows major AI labs (OpenAI, Google, Anthropic, Meta) simultaneously withdrawing disclosures; the industry average collapsed from 58/100 (2024) to 40.69/100 (2025). A critical negative signal.
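As flagged in the Article 50 entry above, machine-readable marking is the one requirement on this list that reduces to an implementation pattern. A minimal sketch of a sidecar provenance manifest plus a human-visible label follows, with invented field names; actual compliance work would use the C2PA toolchain with cryptographically signed manifests.

```python
"""Sketch of machine-readable AI-generation disclosure: write a
sidecar provenance manifest next to the asset and return a
human-visible label. Field names are invented for illustration."""
import hashlib, json
from pathlib import Path

def mark_ai_generated(asset_path: Path, generator: str) -> str:
    """Emit <asset>.provenance.json and return a human-visible label."""
    digest = hashlib.sha256(asset_path.read_bytes()).hexdigest()
    manifest = {
        "asset": asset_path.name,
        "sha256": digest,                   # binds the manifest to this exact file
        "ai_generated": True,               # the machine-readable disclosure itself
        "generator": generator,
        "disclosure_standard": "illustrative-only",
    }
    sidecar = asset_path.with_name(asset_path.name + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return f"AI-generated content (created with {generator})"

if __name__ == "__main__":
    p = Path("ad_variant.png")
    p.write_bytes(b"\x89PNG...placeholder")  # stand-in asset for the demo
    print(mark_ai_generated(p, "example-image-model"))
```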