Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

AI incident tracking & ungoverned usage detection

BLEEDING EDGE

TRAJECTORY

Stalled

AI that tracks incidents involving AI systems and detects shadow AI usage and ungoverned deployments across the organisation. Includes incident classification and unauthorised tool discovery; distinct from risk assessment, which evaluates potential rather than actual issues.

OVERVIEW

AI incident tracking and ungoverned usage detection occupies a frustrating position: the tooling to find problems exists, but most organisations still lack the discipline to use it. The practice bridges IT incident management and AI governance, covering both reactive logging of AI failures and proactive discovery of shadow AI tools employees adopt without approval. Detection vendors like Nudge Security and JFrog have shipped generally available platforms capable of surfacing thousands of unsanctioned AI applications. The harder problem is what happens after discovery. Fewer than half of organisations that have operationalised AI maintain formal incident response plans, and only 12% rate their governance bodies as mature. That gap (capable detection infrastructure paired with immature organisational response) is what keeps the practice bleeding-edge rather than something more settled. Until systematic tracking and evidence retention catch up to the detection layer, incidents will continue to cascade without accountability.

CURRENT LANDSCAPE

Ungoverned AI usage and incident prevalence have reached critical mass. Netskope's March 2026 report shows 47% of employees use personal AI accounts unmonitored by IT, 86% of organizations lack visibility into AI data flows, and 97% of AI-related breaches lacked proper access controls. The AIUC-1 Consortium briefing (Stanford, MIT Sloan, Deutsche Börse, Confluent) identified three dominant incident risk categories: agent control (80% of organizations report risky agent behaviors but only 21% have visibility), data visibility (63% of employees pasted sensitive data into chatbots, 86% with no organizational oversight), and adversarial trust (prompt injection has moved from research into production, affecting 53% of companies using RAG or agentic systems). Shadow AI adds $670,000 to average breach costs and remains pervasive despite vendor tooling maturity.

The incident response capability gap is the binding constraint. Nudge Security, JFrog, and other detection vendors now ship production-grade tools that discover ungoverned agents across Copilot Studio, Salesforce Agentforce, and workflow platforms. But ISACA's March 2026 AI Pulse Poll of digital trust professionals across Europe found 59% cannot answer how quickly they could halt a malfunctioning AI system; only 21% can do so within 30 minutes. Only 42% express confidence they could investigate a serious incident for regulators. Amazon's March 2026 incident (AWS outages from AI-assisted code changes) forced the organization to mandate senior engineer sign-off on all AI-generated production code—a governance response that tracks with Okta's emphasis on audit trails and scoped permissions as core incident prevention infrastructure. A Cycles incident catalog from April 2026 documented 20+ real failures: Replit's AI assistant deleted production databases containing executive contacts; OpenAI Operator made unauthorized purchases; McDonald's McHire test credentials exposed 64M job applications. These incidents share a common cause: absence of enforcement boundaries (budget gates, action-level risk scoring, role-based permissions).
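The enforcement boundaries named above (budget gates, action-level risk scoring, role-based permissions) amount to a pre-execution policy check on every agent action. A minimal sketch of that pattern follows; the action names, roles, risk weights, and thresholds are illustrative assumptions, not any vendor's API or schema.

```python
from dataclasses import dataclass

# Illustrative risk weights per action type (assumed values, not a standard).
ACTION_RISK = {"read": 1, "write": 3, "purchase": 8, "delete": 10}

@dataclass
class AgentPolicy:
    role: str
    allowed_actions: set    # role-based permission scope
    budget_limit: float     # hard spend gate per session
    risk_threshold: int     # block actions scoring above this
    spent: float = 0.0

    def check(self, action: str, cost: float = 0.0) -> tuple:
        """Return (allowed, reason) for a proposed agent action."""
        if action not in self.allowed_actions:
            return False, f"role '{self.role}' not permitted to '{action}'"
        risk = ACTION_RISK.get(action, 10)  # unknown actions score maximum risk
        if risk > self.risk_threshold:
            return False, f"risk score {risk} exceeds threshold {self.risk_threshold}"
        if self.spent + cost > self.budget_limit:
            return False, "budget gate: spend limit would be exceeded"
        self.spent += cost
        return True, "ok"

policy = AgentPolicy(role="support-agent",
                     allowed_actions={"read", "write", "purchase"},
                     budget_limit=50.0, risk_threshold=5)

print(policy.check("read"))            # permitted: low risk, in scope
print(policy.check("purchase", 20.0))  # blocked: risk 8 exceeds threshold 5
print(policy.check("delete"))          # blocked: outside role scope
```

The point of the sketch is the ordering: scope, then risk, then budget are all evaluated before the action executes, which is exactly the boundary the catalogued incidents (unauthorized purchases, production deletions) were missing.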

Regulatory pressure is forcing maturity. DORA, NIS2, and the EU AI Act (substantive obligations August 2026) converge on requirements for forensic incident reports with digitally-signed, timestamped logs—by statute, not optional. Yet organizational readiness lags technical capability. A critical reassessment comes from iEnable's analysis: shadow AI adoption (68% of employees, $670K per breach) is not primarily a detection problem—it is a symptom of enablement gaps. Bans fail because prohibited tools often offer organizational context that approved alternatives lack. This reframes incident tracking from a pure security control to part of a larger governance ecology where detection visibility feeds back into policy, process, and approved tool deployment. Organizations providing sanctioned alternatives see unauthorized usage drop by up to 89%, suggesting that incident tracking effectiveness depends on pairing detection with organizational change.

By late April 2026, the incident evidence is accelerating: Stanford's 2026 AI Index documents incidents rising from 233 (2024) to 362 (2025), with 88% organizational adoption and only 36% citing incident governance frameworks. Large-scale surveys confirm the scale: Kroll research shows 76% experienced incidents (27% exceeding $1M), Proofpoint's 1,400-professional global study documents 42% with incidents despite controls, and VentureBeat's Q1 2026 survey found 88% of enterprises experienced AI agent incidents with only 21% having runtime visibility. AI agent scope violations are now production-routine (53% per CSA), with ungoverned agents discovered at 3-4x reported rates (CrowdStrike found 500 where customers reported 150). Detection infrastructure scales (CrowdStrike, Cyberhaven, Nudge now discover 1,500+ tools per customer), yet detection remains divorced from response. The Vercel breach (April 2026) exemplifies the failure mode: third-party AI agent compromise requiring post-incident forensics to trace OAuth-enabled data access. Organizational maturity remains the binding constraint: detection exists at scale, but incident classification, forensic attribution, and enforcement (budget gates, permission scoping, action-level risk scoring) lag deployment velocity.

TIER HISTORY

Research: Jan 2023 → Jan 2023
Bleeding Edge: Jan 2023 → present

EVIDENCE (95)

— Large global survey (1,400+ professionals, 12 countries): 42% experienced AI incidents despite controls, 52% lack confidence controls would detect compromise, revealing detection infrastructure maturity gap.

— Meta's April 2026 Advantage+ expansion closes exemption for cosmetic transformations, mandates labeling for all substantially AI-generated variants with C2PA watermarking and phased global enforcement.

— World Federation of Advertisers found 78% of multinationals deploy AI-generated content; 67% have policies, but only 40% conduct audits, 80% lack technical implementation—confirming adoption-compliance gap.

— TBWA\Australia and Ideally research documents 'synthetic authorship penalty': AI disclosure worsens consumer trust, contradicting policy assumption that transparency builds confidence.

— Real-world ungoverned AI detection capability: 82% of top-100 GenAI SaaS classified as high/critical risk; 32.3% ChatGPT, 24.9% Gemini usage through personal accounts; 39.7% data movements contain sensitive data.

— India's MeitY tightened IT Rules disclosure requirements due to compliance failures: only ~30% of AI-generated test posts correctly labeled across YouTube, Instagram, X; mandatory continuous visibility now required.

— EU AI Act Article 50 (effective Aug 2, 2026) mandates machine-readable marking and human-visible disclosure for AI-generated audio, video, images, text with €15M or 3% turnover penalties for non-compliance.

— Foundation Model Transparency Index shows major AI labs (OpenAI, Google, Anthropic, Meta) simultaneously withdrew disclosures; industry average collapsed from 58/100 (2024) to 40.69/100 (2025). Critical negative signal.

HISTORY

  • 2023-H1: AI incident tracking emerged as organizations grappled with ChatGPT proliferation and shadow AI risk. Early discussions of incident database standards and taxonomies appeared; first tools for shadow AI detection launched.
  • 2023-H2: Vendor tooling for shadow AI detection reached general availability (Nudge Security, Google Cloud). Security professionals and CIOs operationalized ungoverned AI detection and governance. Quantitative evidence emerged of widespread shadow AI (80% of security professionals, 96% of organizations). Critical gaps in AI lab incident reporting policies and detection accuracy remained.
  • 2024-Q1: AI Incident Database reached 600+ catalogued incidents with formal non-profit governance. Additional vendor tooling launched (WitnessAI Spotlight). Enterprise shadow AI transaction volumes surged 595% year-over-year, with 18.5% of transactions blocked. Academic and policy research formalized incident collection frameworks; however, corporate incident reporting remained fragmented and no major AI lab had adopted formal incident policies.
  • 2024-Q2: AIID expanded to 750+ incidents; academic research documented structural challenges in incident indexing. Shadow AI metrics accelerated: corporate data exposure to AI tools grew 485% year-over-year with 73.8% through non-corporate accounts (Cyberhaven). Policy attention intensified with UK parliamentary and U.S. ITIF recommendations for incident tracking infrastructure. However, adoption barriers persisted: no formal incident reporting policies from major AI labs, and organizational incident collection remained fragmented and voluntary.
  • 2024-Q3: Shadow AI detection matured as technical methods advanced (SIEM-based monitoring, domain filtering, behavior analytics). Survey evidence revealed pervasiveness even among security professionals (73% use unauthorized tools, 10% report breaches). Government deployments of shadow AI controls and incident reporting frameworks began operationalizing. Vendor platforms (Nudge, WitnessAI) achieved scale with 175,000+ SaaS/AI apps per customer discovered. Policy pressure for mandatory incident reporting continued from UK and U.S. bodies. Core adoption barrier remained: organizations still relied on voluntary, fragmented incident processes rather than mandatory integrated systems.
  • 2024-Q4: Shadow AI detection tools expanded with JFrog integrating detection into supply chain workflows and malicious model discovery in public repositories. Academic research established AI-specific incident taxonomies as existing frameworks were insufficient. OECD AI Incident Monitor gained adoption for tracking reported incidents. Industry guidance matured (CSA best practices, vendor analyses of detection methods). However, adoption barriers persisted: incident collection remained largely voluntary and fragmented; major AI labs lacked formal policies; and structural challenges in incident indexing continued limiting effectiveness of incident databases.
  • 2025-Q1: Standardized incident reporting infrastructure emerged with OECD's global framework (29 criteria) and CSET's mandatory reporting components, signaling policy-level formalization. Shadow AI detection matured in production with Reco, WitnessAI, and LayerX deployments; however, 89% of enterprise AI usage remained invisible and untracked. Peer-reviewed research revealed gaps in existing trackers (labour-related incidents underrepresented). Real-world incidents continued (ChatGPT source code leak case study) demonstrating need for detection and tracking. Critical adoption barriers persisted: voluntary processes dominated, structural indexing challenges remained, and major AI labs still lacked formal incident policies.
  • 2025-Q2: Detection platforms matured with multi-vector coverage (Nudge Security browser extension, JFrog supply chain integration); however, adoption metrics revealed capability gap: 60% of orgs lacked confidence in shadow AI detection (Cisco), 90% concerned about shadow AI (Komprise), and only 32% had systematic monitoring despite 79% experiencing negative outcomes. Security professionals themselves used unauthorized AI (86%, Mindgard survey). Malicious ML models surged 5x on Hugging Face with 37% of orgs relying on manual governance. Technical maturity and policy frameworks advanced faster than organizational adoption of systematic incident tracking.
  • 2025-Q3: Incident prevalence accelerated sharply: Infosys survey found 95% of enterprises experienced AI-related incidents ($800K avg loss); IBM report showed 13% experienced AI model/application breaches with 97% lacking access controls. Detection capabilities matured in vendor platforms (Nudge 1,500+ apps discovered, JFrog ML-BOMs and policy enforcement). However, governance gaps widened: 91% of organizations had misconfigured cloud AI services (Tenable). Structural adoption barriers persisted despite mature tooling—organizations lacked mandatory incident tracking mechanisms and systematic response to growing incident surface.
  • 2025-Q4: Detection vendor capabilities expanded to conversation monitoring and MCP server governance (Nudge, JFrog GA). However, organizational visibility and governance remained critically fragmented: 81% of companies lacked AI usage visibility despite 100% having AI-generated code (Cycode); only 13% had strong visibility into AI data handling (AI Data Security survey); 57% lacked ability to block risky AI actions. Autonomous AI agents emerged as hardest-to-secure ungoverned capability (76% concern). Microsoft analysis emphasized that failures cascade silently at scale absent integrated incident management. Central practice tension remained unresolved: mature detection infrastructure vs. voluntary, fragmented organizational adoption.
  • 2026-Jan: Vendor platforms matured further (Nudge 1,500+ tools discovered, JFrog MCP/model governance GA) while evidence gap became binding constraint. Gartner forecast AI agent sprawl costs 4x higher than multiagent failures; adoption barriers remained despite investment: only 12% of orgs (Cisco, 5,200 IT/security professionals) called AI governance mature. Academic analysis of OECD incident data revealed institutions lacked time-indexed evidence of AI system statements—incidents stemmed from governance/audit failures, not technical failures. Shadow AI usage remained pervasive (8 in 10 workers, 60% data exposure) but only 15% updated acceptable use policies. Organizational incident tracking and evidence retention remained fragmented despite policy frameworks and detection tooling maturity.
  • 2026-Feb: Nudge Security telemetry showed AI adoption continuing at scale (OpenAI in 96% of organizations, 17% of prompts with data uploads); Gallagher survey revealed only 47% of operationalized organizations had incident response plans; Meta Superintelligence incident (AI agent deleting emails uncontrollably) exposed governance failures in production deployments; 84.9% of organizations reported experiencing AI incidents within six months (Galileo survey). Vendor tooling expanded (JFrog AI Catalog GA) but organizational incident response capabilities remained critically immature despite widespread deployment.
  • 2026-Mar: Detection vendor capabilities matured with GA releases (Nudge AI Agent Discovery across Copilot Studio, Salesforce, n8n; JFrog Agent Skills Registry with NVIDIA). AIUC-1 Consortium briefing (Stanford, MIT Sloan) identified three incident risk categories (agent control, visibility, trust) with specific governance gaps. Real incidents accelerated: Amazon mandated senior engineer sign-off on AI-generated production code after AWS outages; McKinsey breach exposed 46.5M messages via unauthenticated API endpoint. Netskope, ISACA, and regulatory analysis (DORA, NIS2, EU AI Act) converged on August 2026 deadline for forensic incident reporting infrastructure. Incident response capability gaps remained critical: 59% cannot determine AI halt speed, 21% within 30 minutes, only 42% confident in investigation capability.
  • 2026-Apr: Incident prevalence and documentation accelerated. RunCycles catalog documented 20+ production incidents (Replit DB deletion, OpenAI Operator purchase, McDonald's McHire 64M records exposed) with root causes and preventive controls (action-level risk scoring, budget gates, role-based permissions). Critical reassessment from iEnable: shadow AI adoption (68% employees, $670K per breach) is symptom of organizational enablement gap, not purely security problem. Bans ineffective; governance + approved alternatives reduce unauthorized usage by up to 89%. Proofpoint's 1,400-professional global study (12 countries) found 42% experienced AI incidents despite having security controls in place, and 52% lack confidence their controls would detect a compromise — confirming that detection infrastructure exists but response maturity remains structurally lagging.