The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
Maintaining auditable records of AI-assisted decisions including inputs, outputs, confidence levels, and human overrides. Includes decision logging and override tracking; distinct from general audit trail analysis which examines process rather than AI-specific decision records.
Audit trail tooling for AI-assisted decisions is production-ready, but most organisations are not. Forward-leaning firms in finance — JPMorgan Chase, Goldman Sachs — have demonstrated measurable ROI from embedded decision logging, and regulators across the EU, UK, and US now explicitly require auditable AI records. The infrastructure question is settled: vendors ship cryptographic provenance, immutable logs, and compliance-mapped frameworks as GA products. What remains unsettled is the organisational question. Surveys consistently show that fewer than 10% of enterprises running AI in production have mature governance, and the audit profession itself lacks the technical skills to use the tools now available. This gap — between what the technology can record and what institutions can operationalise — defines the practice's leading-edge status and its central risk.
Regulation is now the primary forcing function. EU AI Act penalties of up to €35 million or 7% of global annual turnover (whichever is higher) take effect in August 2026, the UK ICO formalised audit trail expectations under GDPR in late 2025, and the SEC's 2025 examination guidance requires explainable, auditable AI decision-making in regulated finance. These mandates are converging on a common expectation: organisations deploying AI must be able to reconstruct how decisions were made.
The vendor ecosystem has matured. Microsoft ships AI-specific audit logs for Copilot and Agent systems in Microsoft 365 (AIExecuteTool, AIInvokeAgent, AIInferenceCall events). CertifiedData launched cryptographically verifiable audit trails with SHA-256 hashing, Ed25519 signing, and hash-chained records explicitly targeting Article 12 compliance. Audital's GA product delivers cryptographically verifiable decision records. VeritasChain published an architecture specification and open-source implementation. The IETF's Internet-Draft for a Verifiable AI Provenance framework signals that formal standardisation is underway. The enterprise AI governance and compliance market is projected to grow from USD 2.20B in 2025 to USD 11.05B by 2036 at 15.8% CAGR, with automated audit trail capability as a key competitive differentiator. BlackLine's financial operations platform and similar domain-specific solutions demonstrate that audit trail infrastructure now ships as a GA product feature, not a future roadmap item.
Yet implementation deficiencies persist. Datadog Security Labs disclosed a 28-day gap in which Copilot Studio failed to log four documented administrative activities despite their being listed as logged in Microsoft's documentation, plus a post-remediation regression. A Lovable breach in April 2026 exposed 48 days of undetected cross-account access due to missing audit trails, violating the GDPR's 72-hour breach-notification requirement and EU AI Act Article 50 transparency obligations. A Thoropass survey of 536 security and compliance leaders found 69% report AI adoption outpacing controls, and 53% cite evidence collection as an audit bottleneck. Among broader organisations, only 21% have mature governance models for autonomous AI agents, despite 74% expecting moderate-to-full agentic AI adoption within two years. Market data shows 42% of companies have scrapped AI initiatives before production, primarily due to governance gaps and an inability to operationalise audit trail infrastructure. The core issue: organisations have governance policies but lack operational implementations. The binding constraint is no longer whether audit trail infrastructure exists—it does, in production-ready form—but whether organisations can operationalise it at the governance and engineering level, and whether internal audit teams possess the technical expertise to use it effectively. Regulatory requirements are now a forcing function, but execution capability remains distributed and uneven.
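Several of the products named above rest on the same primitive: a hash chain, where each log entry embeds the SHA-256 digest of its predecessor, so altering any historical record breaks every hash that follows. A minimal sketch of that idea (field names are illustrative; production systems such as CertifiedData's add Ed25519 signing and external anchoring on top of this):

```python
import hashlib
import json
from datetime import datetime, timezone

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_record(chain, payload):
    """Append a decision record; each entry embeds the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": payload,
        "prev_hash": prev_hash,
    }
    # Hash a canonical serialisation so verification is deterministic.
    body = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(body).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain):
    """Recompute every hash and link; return False if any record was altered."""
    prev_hash = GENESIS
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"decision": "approve_loan", "model": "risk-v2", "confidence": 0.91})
append_record(log, {"decision": "flag_review", "model": "risk-v2", "confidence": 0.42,
                    "human_override": True})
assert verify_chain(log)

# Tampering with any past record invalidates the chain from that point on.
log[0]["payload"]["confidence"] = 0.99
assert not verify_chain(log)
```

The tamper-evidence comes entirely from the linkage: an attacker who edits one record must recompute every downstream hash, which signing or external anchoring (e.g. RFC 3161 timestamps, as in the IETF VAP draft) is designed to prevent.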
— CertifiedData launched cryptographically verifiable audit trails with SHA-256 hashing, Ed25519 signing, and hash-chained records for EU AI Act Article 12 compliance; live public ledger demo validates tamper-evident logging feasibility.
— Market analysis: 42% of companies scrapped AI initiatives before production (vs 17% prior year); root cause identified as governance gap—audit trail infrastructure is available but organizational capability to operationalize it is severely constrained.
— Vendor guidance on audit trail requirements for financial operations with SOX/IFRS/GAAP compliance; dual-governance model where AI agents operate under same audit controls as human users with ISO/IEC 42001 certification.
— Real-world incident (Lovable April 2026 BOLA vulnerability) exposed 48-day undetected cross-account access due to missing audit trails; violated GDPR 72-hour breach notification and EU AI Act Article 50 transparency obligations.
— EU AI Act compliance analysis identifies three critical logging gaps: observability vs compliance logging (require separate pipelines), agentic AI multi-step schemas, and human oversight quality metrics missing from current implementations.
— Technical practitioner details EU AI Act Articles 9-15 as engineering requirements with 8-14 month implementation timelines; specifies required event schema capturing decision details, inputs, explainability data, system state, and oversight events.
— 71% of AI teams cannot produce complete audit trails; regulatory enforcement documented (Dutch €2.75M fine, Klara €750K fine); identifies data lineage as core operational mechanism for AI auditability with 60-70% audit prep time reduction.
— Production data shows visible agent traces improve ticket deflection (50% vs 23%), reduce P1/P2 resolution time (60%), and boost first-contact resolution (80% vs 45%), repositioning audit trails as operational feature driving adoption metrics.
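One evidence item above describes the event schema practitioners are deriving from EU AI Act Articles 9–15: decision details, inputs, explainability data, system state, and oversight events. A hedged illustration of what one such record might look like (the field names are our own; Article 12 mandates logging capability, not a concrete schema):

```python
from dataclasses import dataclass, field, asdict
from typing import Any, Optional

@dataclass
class AIDecisionEvent:
    """Illustrative audit record for one AI-assisted decision."""
    event_id: str
    timestamp: str                       # ISO 8601, UTC
    system_id: str                       # model or agent identifier and version
    inputs: dict[str, Any]               # features or prompt supplied to the model
    output: Any                          # the decision or recommendation produced
    confidence: Optional[float] = None   # model-reported confidence, if available
    explanation: Optional[dict[str, Any]] = None   # e.g. feature attributions
    system_state: dict[str, Any] = field(default_factory=dict)  # config, thresholds
    human_override: Optional[dict[str, Any]] = None  # who intervened, and why

event = AIDecisionEvent(
    event_id="evt-0001",
    timestamp="2026-02-01T12:00:00+00:00",
    system_id="credit-risk-v2.3",
    inputs={"income": 54000, "dti": 0.31},
    output="refer_to_human",
    confidence=0.47,
    human_override={"reviewer": "analyst-17", "action": "approved",
                    "reason": "thin credit file"},
)

# Serialise to a plain dict, ready to feed a hash-chained or signed log.
record = asdict(event)
assert record["output"] == "refer_to_human"
```

Capturing the override as structured data, rather than free text, is what makes the oversight-quality metrics flagged as missing in current implementations computable at all.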
2022-H1: Five evidence items document regulatory and academic recognition of audit trails in AI governance. UK regulators outlined audit landscape gaps; real-world case study shows production deployment in e-commerce; academic frameworks propose audit trail integration; empirical survey identifies standardization challenges.
2022-H2: Six evidence items signal maturation from concept to compliance requirement. Financial regulators confirm large-scale ML deployment (72% of UK financial services firms) and formalize audit trail mandates (Bank of England DP5/22). Academic research reveals deployment challenges: clinical AI performance drift and nascent state of audit tools. Professional auditing bodies establish audit standards. Regulatory pressure accelerates globally (EU AI Act, NYC Local Law 144).
2023-H1: Six evidence items demonstrate transition from mandate to implementation phase. EDPB completed practical auditing project with assessment tools and checklists (Feb 2023). Business surveys show 82% of organizations managing data integrity risks and 77% of leaders prioritizing data reliability, driving audit trail adoption. Real-world deployment emerges: Fujitsu and Hexagon implemented blockchain audit trails in critical infrastructure. Academic research continues identifying transparency and 'black-box' challenges in auditing practice. Regulatory expansion continues: India mandates audit trails in accounting software (April 2023). Critical incident (NHS Office Scripts audit trail failure June 2023) underscores operational dependency on audit infrastructure.
2023-H2: Two evidence items reveal the deployment-regulation gap. NYC Local Law 144 (effective July 2023) mandated bias audits for AI employment tools, yet only 5 companies published required results despite 75% of large firms using such tools—exposing enforcement and compliance challenges. Technical maturity advanced: peer-reviewed research demonstrated tamper-evident logging systems achieving ≈100% tampering detection and >10k events/s throughput, validating audit trail feasibility in high-stakes domains. Regulatory consolidation continued: EU AI Act finalized (December 2023) and US Executive Order drove audit requirements. Professional guidance multiplied across ISACA, KPMG, and Grant Thornton. However, persistent gaps emerged: standards fragmentation, black-box challenges in capturing ML decision context, and critical scarcity of auditors with technical expertise.
2024-Q1: Six evidence items document commercial tooling arrival and persistent organizational adoption gaps. GuardRails and Nomad Data deployed enterprise audit logging platforms; ISACA published formal AI audit guidance; but Deloitte's survey revealed only 13% of organizations had formalized AI oversight despite 94% recognizing AI's business value—exposing a widening deployment-governance gap. Expert assessments highlighted critical accountability challenges: opaque ML systems, non-verifiable decision-making, and high-profile failures (Zillow's $300M write-down) underscored the urgency of audit trail infrastructure. Practical operationalization frameworks emerged (Dawgen Global) addressing the gap between regulatory mandates and production-ready audit-ready systems.
2024-Q2: Six evidence items demonstrate production deployments alongside critical challenges. Trail GmbH deployed automation for audit trail creation (97.5% time savings). Adobe released Journey Optimizer's GA audit logging for AI-driven marketing decisions. Academic research validated blockchain-based immutable audit trails. Real-world government deployment (IRS tax audit case selection using AI for 4,000+ returns) exposed documentation deficiencies via GAO audit. Social services audit framework (ADM+S) road-tested practical assessment toolkit across child/family services deployments. Practitioner perspectives highlighted persistent auditability challenges in legal operations. The window reveals audit trail infrastructure maturing in commercial products while real-world deployments continued to expose implementation gaps.
2024-Q3: Major vendors shipped production audit trail tooling: KPMG Clara (90,000 auditors), Adobe Customer AI audit logs, and Thomson Reuters Audit Intelligence (30 min–2 hour efficiency gains). Real-world financial services deployments validated LLM-based audit search and analysis. EDPB published formal AI audit guidance emphasizing traceability. Professional bodies' adoption metrics (CAQ survey: one in three audit partners deploying AI, yet 66% of committees insufficient on AI governance) revealed growing capability-oversight gap. Vendor tooling ecosystem matured while organizational governance readiness remained distributed and uneven.
2024-Q4: Vendor ecosystem expanded: Valohai launched GA audit log features; FIO Labs documented 90% compliance improvement through automated audit trail creation. Peer-reviewed incident analysis of 202 real-world AI failures identified organizational/governance causes (58%), signaling audit trail infrastructure was available but incident governance remained nascent. Critical adoption gap emerged: AuditBoard survey revealed 61% of audit leaders lack AI expertise and <1% use AI in planning, despite 55% of organizations implementing AI—exposing mismatch between audit trail availability and organizational capability to use it effectively.
2025-Q1: Persistent organizational adoption challenges dominate evidence. Peer-reviewed study of 22 audit professionals identified transparency, explainability, and auditor overreliance as critical barriers to adoption. Critical perspective emerged questioning AI's auditability in regulated domains (medical imaging, financial auditing). Production-scale deployments validated technical feasibility (Goldman Sachs 20B+ daily events), yet industry surveys revealed 70% of companies struggle with compliance implementation and only 23% prepared for AI compliance risks. Window signals sustained tension between mature technical capability and organizational/regulatory adoption barriers.
2025-Q2: Regulatory momentum accelerates adoption pressure while critical barriers persist. SEC 2025 examination guidance mandates explainable and auditable AI decision-making, prompting finance sector automation of audit trail workflows. Vendor ecosystem expands (Nebius cloud platform adds audit logging) alongside grassroots innovation (AuditMyAI open-source framework). Yet McKinsey data reveals 80%+ of companies see no significant AI ROI, and UC Berkeley study shows 68% struggle to move GenAI from pilot to production due to reliability and security concerns. Window confirms audit trail infrastructure maturity but exposes widening gap between technical capability and organizational/regulatory adoption readiness.
2025-Q3: Vendor momentum accelerates while real-world implementation gaps persist. Cloud platforms converge on audit trail capabilities: Microsoft Azure Databricks, Google Firebase, and Oracle AI Data Platform all ship audit logging features (July–September). UK ICO formalizes audit trail expectations under GDPR (September), establishing a baseline regulatory standard. Market demand surges: Gartner reports 68% of finance SaaS buyers require auditable AI. Yet a critical Microsoft 365 Copilot audit logging failure (months-long gaps) reveals vendor quality assurance challenges and persistent deployment risks. Window signals practice inflection: audit trail capability is unambiguously production-ready, but confidence in deployed audit trails requires meticulous implementation and vendor accountability.
2025-Q4: Vendor ecosystem matures while organizational adoption gaps dominate. Dynatrace, Hedera, and Validaitor release or advance audit trail capabilities with framework compliance (NIST, ISO 42001). Named financial deployments validate ROI: JPMorgan Chase and Goldman Sachs show a 27% profitability lift and a 79% adoption jump (October). Yet Ajith's analysis reveals only 9% of enterprises with AI in production have mature governance; an AuditBoard survey finds only 4% of internal audit leaders achieved substantial progress despite 55% of organizations deploying AI. An eDiscovery survey (64% integrating LLMs) highlights accuracy concerns over audit trail adoption. Window confirms the practice paradox: audit trail infrastructure is production-ready and ROI-validated, but organizational capability, auditor expertise, and governance maturity remain the binding constraint—leading-edge infrastructure without corresponding organizational readiness for effective deployment.
2026-Jan: Vendor ecosystem expands with structured frameworks (IntelliHuman six-layer proof stack, VeritasChain cryptographic VAP specification and open-source VCP v1.1 implementation). Regulatory escalation: EU AI Act penalty enforcement begins August 2026; SEC examination guidance mandates auditable AI. Market readiness divergence emerges: a Mayfield survey shows 42% of Fortune 50–Global 2000 companies run AI agents in production, yet 60% lack formal governance despite 84% requiring compliance. An Informatica data leader survey (n=600) shows 70% GenAI adoption but a 75% governance lag. Organizational barriers persist: 61% of internal auditors lack expertise, only 4% report substantial progress, and eDiscovery professionals prioritize speed over audit. Window signals the practice entering a mandatory adoption phase: infrastructure is production-ready and increasingly regulator-mandated, yet organizational governance readiness and auditor expertise remain critical constraints on effective deployment.
2026-Feb: Vendor momentum accelerates and formal standardization advances. IETF publishes Internet-Draft for Verifiable AI Provenance (VAP) framework (Feb 2026), specifying cryptographic audit trails with conformance levels and external RFC 3161 anchoring. Audital GA product launches (Feb 2026) with cryptographically irrefutable decision records targeting FCA, EU AI Act, and ISO 42001. Organizational adoption barriers intensify: IIA/AuditBoard survey (Feb) shows only 40% of 370+ audit leaders adequately prepared for AI-enabled fraud, with 57% lacking tools and 55% lacking skills. IDC survey (Feb) shows 66% adoption of AI in audit strategy but 64% insist on validation of outputs, emphasizing human oversight necessity. Internal Audit Collective survey reveals less than 25% of 113 auditors use AI extensively due to governance concerns and skill gaps. Critical perspective (Feb) highlights specific auditability failures: Massachusetts lender fined $2.5M, Cigna litigation, EY data showing 99% of organizations reported AI-related losses. Window signals inflection point: audit trail tooling and standardization are rapidly maturing, yet organizational capability—auditor expertise, governance readiness, and confidence in mission-critical deployments—remains the binding constraint limiting broader adoption despite regulatory pressure and vendor innovation.
2026-Mar–Apr: Production deployments and critical implementation gaps converge, exposing the practice's true bottleneck. Datadog Security Labs discloses Copilot Studio audit logging failures (28-day gap, administrative actions unlogged despite documentation). A Lovable breach exposed 48 days of undetected cross-account access attributable to missing audit trails, violating GDPR 72-hour notification and EU AI Act Article 50 obligations—demonstrating the direct regulatory liability cost of audit trail gaps. CertifiedData launched a GA cryptographic audit trail product (SHA-256 hashing, Ed25519 signing, hash-chained records) explicitly targeting EU AI Act Article 12 compliance, with a live public ledger as validation of tamper-evident logging feasibility. BlackLine's financial operations platform operationalized a dual-governance model requiring AI agents to operate under identical audit controls as human users, with ISO/IEC 42001 certification. Bradesco (Brazil's largest bank) successfully deployed audit trail infrastructure for agentic AI, achieving 100% audit trail coverage with an 83% resolution rate and a 30% cost reduction, validating technical feasibility at scale in regulated banking. Market adoption accelerates: 72% of Global 2000 companies operate agentic AI in production (vs. <5% in 2025), with audit/escalation trails identified as prerequisites for production deployment. Yet organizational readiness remains uneven: 42% of companies scrapped AI initiatives before production due to governance gaps; 69% of compliance leaders report AI adoption outpacing controls; 53% cite evidence collection as a bottleneck; only 21% have mature governance for autonomous agents despite 74% expecting full agentic AI integration within two years. An Ernst & Young study shows only 10% of companies fully prepared to audit AI systems. Independent critical assessments identify audit trail infrastructure availability vs. organizational operationalization as the defining tension—infrastructure is production-ready, but implementation discipline and auditor expertise remain severely constrained. The window reveals the practice paradox sharpening: technical capability proven; regulatory mandate imminent (August 2, 2026); organizational execution lagging critically.