The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that supports financial audits by detecting anomalies in transactions and analysing audit trails for irregularities. Includes journal entry testing and continuous auditing; distinct from general audit support in legal which covers non-financial audits.
AI-driven audit anomaly detection has moved well beyond research into real deployments at forward-leaning firms, but the profession as a whole has not followed. That gap defines the practice's position on the maturity curve. A handful of Big Four and top-25 firms now run continuous, population-wide transaction analysis in production, replacing sample-based methods with 100% coverage and documenting measurable efficiency gains. The promise is substantial: ML-based approaches achieve roughly 85% fraud detection accuracy versus 60% for traditional techniques, and early adopters report double-digit reductions in sample sizes and audit hours.
Yet most organisations have not started. Only about a third of financial institutions have AI in production for compliance-related anomaly detection, and surveys consistently show a wide gap between strategic intent and execution: two-thirds of audit professionals say AI is part of their strategy, but fewer than one in six have a defined implementation plan. The barriers are concrete: 40-60% implementation failure rates, hallucination risks that have already triggered six-figure client reimbursements, and infrastructure gaps that leave most firms unable to operationalise what the leading platforms offer. This is a practice where the vanguard is getting real value while the mainstream watches and waits.
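The contrast the report keeps drawing between sample-based testing and 100% population coverage is easy to make concrete. The sketch below scores every journal entry against its own account's history with a robust, median-based outlier statistic; it is a toy stand-in for the ML models the surveys describe, and every account name, amount, and threshold is an invented illustration:

```python
# Minimal sketch of population-wide journal-entry screening, as a contrast to
# sample-based testing. Purely illustrative: production platforms use far
# richer features and learned models. All data and thresholds are invented.
from statistics import median
from collections import defaultdict

def flag_entries(entries, threshold=3.5):
    """Score every entry (100% coverage) with a robust z-score per account.

    entries: list of (account, amount) tuples.
    Returns the entries whose modified z-score exceeds `threshold`.
    """
    by_account = defaultdict(list)
    for account, amount in entries:
        by_account[account].append(amount)

    flagged = []
    for account, amount in entries:
        amounts = by_account[account]
        med = median(amounts)
        mad = median(abs(a - med) for a in amounts) or 1e-9  # avoid div by zero
        score = 0.6745 * abs(amount - med) / mad  # modified z-score
        if score > threshold:
            flagged.append((account, amount, round(score, 1)))
    return flagged

# Hypothetical ledger: one revenue entry is wildly out of line for its account.
ledger = [("4000-revenue", a) for a in (120, 95, 110, 130, 105, 9_800)] + \
         [("6100-travel", a) for a in (40, 55, 35, 60, 48)]
print(flag_entries(ledger))
```

The point is not the statistic (real platforms layer counterparties, timing, approval chains, and learned models on top) but that scoring the full population, rather than a sample, is computationally trivial once the ledger is machine-readable.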
Deployment scale is reaching critical mass. In April 2026, EY announced a global rollout of enterprise-scale agentic AI supporting 160,000 audit engagements, signaling progression from pilots to active decision-making at the Big Four level. KPMG Clara continues operating across 95,000+ auditors, while a new wave of specialized vendors is entering production: Modus, an AI-native audit startup, raised an $85M Series A in April and deployed its anomaly detection and reconciliation platform at a top-200 accounting firm. MindBridge remains a core deployment platform across mid-market and regional firms. BDO USA is deploying proprietary GenAI platforms with anomaly detection as a core component, backed by a $1B global five-year AI investment announced in May 2025. The shift is also evident in operational metrics: practitioners now document 75% automation of financial statement analysis and 58% of internal control testing via continuous monitoring systems.
Yet a critical value realization gap has emerged. An April 2026 benchmark study of 2,048 enterprise decision-makers found that 79% of organizations report no measurable EBIT impact from GenAI adoption despite 70% already deploying it. In financial services, fraud detection agents are cited as a leading use case, but actual value realization lags adoption rates. A mid-sized UK manufacturer illustrates the leading-edge model: agentic AI monitoring revenue recognition in real time, flagging anomalies automatically and generating audit memos. By contrast, only 28% of organizations effectively track model changes and audit trail decisions, leaving the majority exposed to governance and compliance audit failures.
Regulatory frameworks are solidifying. The UK's Financial Reporting Council published the first regulatory guidance on deploying generative and agentic AI in audit in March 2026, codifying quality control expectations and embedding AI governance into professional standards (ISQM 1). This signals a structural shift: regulators are moving from skeptical observation to active framework-setting. Peer-reviewed evidence captures both the promise and the execution challenge: ML achieves roughly 85% fraud detection accuracy versus 60% for traditional sampling, yet 40-60% of implementations fail when firms attempt to scale beyond pilots.
The risks remain material. One major audit firm paid AUD 440,000 to reimburse a client after AI hallucinations produced fabricated citations in an audit report. Only 14% of audit firms have a defined AI strategy, just 25% have trained staff, and governance maturity for autonomous agents remains concentrated among leading-edge early adopters. Autonomous AI agents create accountability gaps that traditional authorization chains were not designed to handle. Model drift poses a second critical risk: undetected changes in anomaly detection accuracy over time (whether from data drift, concept drift, or model behavior shifts) are an audit governance failure, not merely a technical one. Independent audit frameworks must verify that detection systems have adequate monitoring, threshold definition, and escalation ownership. FERF's April 2026 survey shows auditors are deploying anomaly detection broadly, but opinion is evenly split on whether it has improved audit quality, with CFOs expressing skepticism about whether claimed cost savings are being realized.
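The model drift risk described above is usually operationalised as distribution monitoring: compare the current window of anomaly scores against a reference window and escalate when they diverge. A minimal sketch using the population stability index (PSI), with invented score samples and the common 0.25 rule-of-thumb alert threshold, neither of which comes from the sources cited here:

```python
# Hedged sketch of the drift monitoring the governance point calls for:
# comparing the current distribution of anomaly scores against a reference
# window with a population stability index (PSI). Data and thresholds are
# illustrative assumptions, not a production recipe.
import math

def psi(reference, current, bins=10):
    """Population stability index between two samples of model scores in [0, 1]."""

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        n = len(sample)
        # Small floor keeps log() finite for empty buckets.
        return [max(c / n, 1e-4) for c in counts]

    ref_p, cur_p = proportions(reference), proportions(current)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref_p, cur_p))

reference = [0.10, 0.12, 0.15, 0.20, 0.22, 0.25, 0.30, 0.35, 0.40, 0.45]
current   = [0.50, 0.55, 0.60, 0.62, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90]

value = psi(reference, current)
# A common rule of thumb: PSI > 0.25 means the score distribution has shifted
# enough to warrant escalation to whoever owns the model.
print(f"PSI = {value:.2f}, drift alert: {value > 0.25}")
```

This is exactly the monitoring, threshold definition, and escalation ownership the governance framing asks independent audit to verify; the statistic itself is interchangeable (KS tests and the like serve the same role).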
— IRM 10.24.1 (Feb 2026) formally authorizes IRS AI-driven audit selection using pattern-matching models with mandatory human oversight and documentation trail. Demonstrates institutional adoption of anomaly detection at scale with governance framework.
— CFOs report tangible audit efficiency from anomaly detection—PwC 20-40% productivity gains, faster detection, reduced manual sampling. Named deployments (Soba, Zuora, Procurify) with fee recognition demonstrate broad real-world ROI at scale.
— International government audit bodies (60+ Supreme Audit Institutions) confirm adoption of AI anomaly detection with case studies from 13 SAIs on fraud risk analysis, pattern detection, and continuous monitoring. Hybrid human-AI model validated as most effective.
— Finance-specific governance guidance establishing audit-trail-per-agent-action mandate for reconciliation agents and transaction flagging under EU AI Act and DORA. Addresses regulatory requirement for audit trail integrity by August 2026.
— Litepaper charting audit evolution from Audit 2.0 (AI-assisted anomaly detection via EY, Deloitte, PwC) to Audit 3.0 (autonomous agent auditing). Leading-edge signal on trajectory from anomaly detection to fully autonomous agent-to-agent audit protocols.
— Novel peer-reviewed research proposes graph neural networks for unsupervised anomaly detection in ledger/voucher entries. Achieves improved discrimination without labeled training data, addressing key challenge where fraud datasets are expensive and domain-specific.
— IDC survey of 1,000+ US audit decision-makers shows 66% have embedded AI into strategy/operations/pilots, with explicit recognition that AI identifies anomalies more efficiently. Signals profession-wide shift from adoption to governance phase.
— BDO USA deployed proprietary GenAI platform with anomaly detection models as core component across 70+ locations; $1B global AI investment announced May 2025; governance framework emphasizes ROI measurement and rapid iteration from pilots to production.
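The audit-trail-per-agent-action requirement flagged above (EU AI Act / DORA) is commonly met with tamper-evident, append-only logs. A minimal hash-chain sketch, with hypothetical agent and field names, of how each agent action can be made verifiable after the fact:

```python
# Minimal sketch of a tamper-evident, per-action audit trail of the kind an
# audit-trail-per-agent-action mandate implies: each agent action is appended
# as a record whose hash covers the previous record's hash, so any later edit
# breaks the chain. Field names and the JSON encoding are assumptions.
import hashlib
import json

GENESIS = "0" * 64

def append_action(trail, agent, action, detail):
    """Append one agent action; its hash chains to the previous record."""
    prev_hash = trail[-1]["hash"] if trail else GENESIS
    record = {"agent": agent, "action": action, "detail": detail, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)
    return record

def verify(trail):
    """Recompute every hash; returns False if any record was altered."""
    prev = GENESIS
    for record in trail:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

trail = []
append_action(trail, "recon-agent-7", "flag_transaction", {"txn": "T-1042", "score": 0.97})
append_action(trail, "recon-agent-7", "escalate", {"txn": "T-1042", "to": "senior_auditor"})
print(verify(trail))               # chain intact
trail[0]["detail"]["score"] = 0.10 # retroactive tampering
print(verify(trail))               # tampering detected
```

Production systems add signatures, timestamps, and external anchoring, but the core property regulators are after is the same: no agent decision can be silently rewritten.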
2019: Industry guidance published (IIA GTAG, ISACA Journal) and first documented deployments at small CPA firms using MindBridge Analytics; academic research advancing detection methods (adversarial autoencoders, process mining, textual anomaly detection) emerged throughout the year, establishing the research-to-practice pipeline.
2020: Real-world deployments accelerated across diverse sectors — steelworks, government procurement, multinational expense management — with quantified outcomes (90% anomaly capture at scale, weeks-to-minutes risk reporting); neural sampling and semi-supervised frameworks advanced technical foundations; auditing profession began formalizing guidance, though adoption remained concentrated among forward-leaning organizations; project failure risks and skepticism about AI alerts remained limiting factors.
2021: MindBridge platform scaled at leading firms (GRF CPAs at ~15% of engagements, MNP LLP nationwide across 90+ offices); professional adoption surveys showed growing institutional commitment (52% of firms planning data analytics adoption, 36% planning AI adoption); academic research progressed with active learning for key item selection (ICPM 2021) and continual learning frameworks for journal entry monitoring (AAAI 2022 workshop); market transitioned from early pilots to organized early majority, though integration challenges and auditor confidence barriers remained limiting factors for mainstream adoption.
2022-H1: Empirical evidence from 36 largest audit firms documented quantified returns — 5.0% reduction in audit restatement likelihood, 0.9% drop in audit fees — across centrally developed and widely deployed AI systems. Real-world implementations extended beyond Big Four and regional leaders to mid-market CPA firms with demonstrated efficiency and effectiveness gains; audit partner interviews confirmed broad production deployment with quality improvement as primary goal. Professional adoption intent remained strong (52% planning data analytics, 36% planning AI adoption), though organizational barriers and ambiguous regulatory guidance continued limiting mainstream rollout.
2022-H2: Ecosystem maturity expanded with blockchain-enabled continuous auditing frameworks integrating anomaly detection, federated learning enabling decentralized audit deployments, and major accounting vendors launching competing AI-analytics products. Emerging market analysis revealed structural adoption barriers: regulatory gaps, Big 4 dominance, and client perception challenges limiting developing economy adoption. Academic literature and vendor guidance highlighted persistent ethical concerns (bias, transparency, accountability) and implementation barriers beyond technical feasibility. Mainstream adoption remained constrained by auditor confidence fragility, legacy system integration challenges, and regulatory ambiguity despite documented ROI among early adopters.
2023-H1: Big Four formalized adoption with KPMG's May 2023 global rollout of MindBridge integration in KPMG Clara; empirical research (n=454 accountants) confirmed AI's fraud detection impact; mid-market firms (SCG) published deployment case studies with productivity metrics. Adoption intent remained high but execution barriers (auditor confidence, integration complexity, regulatory ambiguity) continued limiting mainstream progression from early adopters.
2023-H2: Technical maturity expanded with peer-reviewed research on ML techniques for log-based anomaly detection (comparing traditional vs. deep learning trade-offs for audit trail analysis); KPMG's continued investment in AI for financial reporting reinforced Big Four positioning. Market remained bifurcated: leading firms integrated anomaly detection into operational workflows, while mainstream adoption still hindered by integration complexity and organizational skepticism despite documented ROI.
2024-Q1: Deployment evidence continued with Cherry Bekaert publishing case study using MindBridge for production anomaly detection; IIA reported a striking escalation in AI adoption in internal audit between 2023 and 2024. Applied research advanced with peer-reviewed work on unsupervised ML for enterprise purchase auditing. Practitioner surveys showed growing awareness of AI as emerging risk (only 12% adoption despite broad awareness), indicating gap between strategic intent and organizational implementation capability.
2024-Q2: MindBridge continued demonstrating production deployments with documented error detection examples ($100K+ catches in financial data). Market remained bifurcated between early adopters with measurable ROI (leading audit firms) and mainstream firms still evaluating adoption, with implementation barriers persisting around integration complexity and organizational confidence.
2024-Q3: Ecosystem expansion accelerated with major platform launches and deployments: KPMG rolled out generative AI anomaly detection capabilities across KPMG Clara to 90,000 auditors globally (July); Thomson Reuters launched Audit Intelligence Analyze tool with 50% sample reduction claims (September); SCG (Ghana) and other regional firms published production deployments with documented efficiency gains. Adoption signals strengthened — KPMG survey of 1,800 companies found 72% piloting or using AI in financial reporting with 64% expecting auditors to evaluate AI controls. However, end-to-end automation remained in testing phase per regulatory assessment, and implementation barriers (integration complexity, auditor confidence, regulatory guidance) continued constraining mainstream adoption despite growing vendor momentum.
2024-Q4: Consolidation into mainstream finance operations with strong adoption metrics: Global KPMG survey found 71% of 2,900 companies using AI in finance (41% moderate/large scale); US firms reported 62% AI adoption in finance with 92% meeting/exceeding ROI expectations; Bank of England/FCA confirmed 75% of UK financial firms deploying AI with fraud/AML detection as top benefit. Leadership consensus firmed with 83% of financial reporting executives expecting auditors to use AI for anomaly detection. However, production-readiness barriers persisted—Economist Impact survey showed only 22% of enterprises confident IT architecture supports new AI, with 60% of UK firms unable to move GenAI to production due to governance and quality concerns. The market remained bifurcated between leading-edge firms with documented scale ROI and mainstream firms blocked by implementation barriers, signaling transition to mainstream adoption constrained by legacy integration complexity and regulatory ambiguity.
2025-Q1: Field evidence and adoption surveys exposed the "simple vs. complex AI" gap: peer-reviewed research confirmed "simple AI" (extraction, matching) widely adopted but "complex AI" (anomaly detection, autonomous testing) still in development. Platforms continued deployment—KPMG reported 72% of companies piloting or using AI for audit tasks (February 2025)—but adoption disparities widened: only 33% of auditors use AI versus 76% of finance professionals, with significant manual data extraction bottlenecks persisting. Leadership confidence gaps emerged: only 35% of CAEs confident in achieving their data/analytics goals despite 76% ranking as top priority; only 29% assured over generative AI. High-profile skepticism (Microsoft CEO Nadella) questioned whether AI had generated measurable economic value yet, providing critical counterpoint to vendor adoption claims.
2025-Q2: Platform expansion and regulatory acceleration signaled inflection point: KPMG advanced Clara AI platform to 95,000+ auditors with AI agents for anomaly detection (April); Microsoft released Azure Anomaly Detector as GA service (June); SEC issued 2025 guidance requiring explainable AI audit trails, creating compliance tailwind. Adoption momentum surged with Wolters Kluwer survey (4,214 internal auditors) showing 39% already using AI and 41% planning adoption within 12 months, projecting 80% adoption by 2026. However, practitioner analysis highlighted critical limitations: data privacy/security risks, algorithmic bias, hallucination concerns, and lack of professional judgment in AI decisions. Field evidence confirmed "complex AI" remained in development despite mainstream pilot adoption; Chief Audit Executive confidence gaps persisted. Practice trajectory showed acceleration driven by platform maturity and regulatory drivers, but organizational readiness and skepticism over measured economic value continued constraining mainstream production deployment beyond early adopters.
2025-Q3: Market bifurcation persisted as deployment accelerated alongside persistent adoption barriers. New deployments signaled momentum: Buzzacott (UK top-50 firm) partnered with MindBridge for 100% transaction analysis and anomaly detection; 60% of large organizations reported using AI for compliance/audit (up from 25% in 2022). However, practitioner surveys exposed implementation challenges: Thomson Reuters found 79% expect transformational impact but only 14% have defined AI strategy; only 25% of firms trained staff on GenAI. Critical failures emerged: Aveni analysis documented 56.4% spike in AI incident reports (2023-2025), with high-profile cases (Apple Card $89M bias penalty, Knight Capital $440M loss) highlighting real-world audit risks. Systematic review of 35 studies showed ML achieves 85% fraud detection vs. 60% traditional, but implementation failure rates of 40-60% remained underreported. FinTech Global survey confirmed 67% of audit functions use analytics but regulatory pressure (Fed, FDIC, SEC) as primary adoption driver. Organizational barriers persisted: integration complexity, staff training gaps, and measured economic value skepticism continued constraining production rollout despite platform maturity and regulatory tailwinds.
2025-Q4: Major platform acceleration and real-world failure evidence marked the quarter. Leading-edge deployments achieved new scale: KPMG Clara advanced AI agents to 95,000+ auditors across 140 countries (October-December); Thomson Reuters integrated anomaly detection partnerships for >50% testing reduction (December); Cherry Bekaert published 66% sample reduction ROI metrics (December); PwC announced end-to-end automation roadmap for 2026. Adoption metrics strengthened: AuditBoard reported 8%-to-21% year-over-year growth with 8,000 hours annual savings; 80% of internal auditors projected to adopt AI by 2026 (Wolters Kluwer). However, production failures crystallized risks: October incident documented AUD 440,000 reimbursement for AI hallucinations in major audit firm report (fabricated citations, fictitious references). Practitioner analysis highlighted critical vulnerabilities: undocumented AI estimates, black-box process dependencies, algorithmic bias, and governance gaps. Academic research confirmed 40-60% implementation failure rates. Organizational barriers widened despite platform maturity: only 14% of firms had defined AI strategy, 25% provided training, 22% confident in IT infrastructure readiness. Practice trajectory showed leading-edge firms achieving measurable scale ROI while mainstream organizations remained blocked by integration complexity, governance ambiguity, and measured value skepticism despite regulatory tailwinds.
2026-Jan: Continued platform deployment momentum with new real-world adoption signals. KPMG Clara confirmed ongoing GA with 100% transaction scoring for anomaly detection; Nasdaq-listed digital operator VEON announced strategic partnership with MindBridge for deployment of Central Insights Factory across operating companies for real-time transaction analysis and continuous auditing. IDC study confirmed 66% of 1,000+ audit professionals have AI embedded in strategy, with 53% agreeing AI enhances quality. However, critical production risks surfaced: security analysis documented data poisoning and adversarial attack vectors threatening audit anomaly detection systems in high-stakes environments. Technical framework advances (Verifiable AI Provenance for cryptographic audit trails) emerged to address audit trail integrity concerns. Practitioner perspective emphasized shift from sampling to total-visibility auditing while highlighting adoption barriers around AI ethics, human oversight, and regulatory compliance. Practice bifurcation persisted: leading-edge firms advancing deployment while mainstream organizations manage integration complexity and security/governance concerns.
2026-Feb: Specialized audit trail infrastructure entered production with Audital platform launch providing FCA-regulated firms cryptographically verified audit trail governance for AI systems. Banking sector adoption metrics confirmed 31.8% of financial institutions with AI in production for compliance/anomaly detection. Professional surveys (1,005 audit practitioners) showed 66% with AI embedded in strategy, though 64% required human validation of AI outputs, indicating quality assurance emphasis. Government audit institutions (U.S. GAO, UK NAO, India CAG) advanced from pilot to scaled deployment with identified barriers in skills and algorithmic transparency. Critical exposure: autonomous AI agents were documented to create audit trail accountability gaps in 2026 production environments, with governance failures when traditional authorization chains break down. Practice remained bifurcated: leading-edge deployment infrastructure maturing while mainstream firms navigated autonomous agent governance and audit trail integrity challenges.
2026-Q1 (Mar 26): Systematic evidence of leading-edge deployment widening: Dawgen Global (Caribbean audit firm) deployed full-population anomaly detection on 28,000 transactions, identifying $186K in fraud and $94K in duplicate payments undetected in 6 years of sampling-based testing. Independent systematic review of 100 audit AI studies (DevDiscourse/Account Audit) documented detection improvements of 20-70% vs. manual sampling, though gains dependent on data quality and organizational maturity. Regulatory drivers accelerating: Ontario's 2026 AI governance framework establishing audit trail and documentation requirements for CPAs to remain professionally liable for AI outputs. Market adoption divergence crystallized: AICPA/CIMA survey (1,735 professionals) showed only 24-27% with adequate talent, IT readiness, or regulatory preparedness; early adopters with deliberate capability building gaining competitive advantage. However, performance ceiling emerged: independent benchmark (DualEntry) of 19 AI models on 101 accounting tasks showed top performer (Gemini 3.1 Pro) achieved only 66% accuracy, with no model exceeding 70%—constraining autonomous AI in structured financial workflows. Enterprise AI integration barriers persisted: synthesis of 8 major surveys (60K+ respondents) identified data readiness (60% projects fail), pilot-to-production scaling (95% fail), and governance maturity (21% have mature autonomous agent controls) as binding constraints. Practice trajectory showed accelerating deployment at leading firms alongside widening organizational readiness gaps, with performance and governance barriers increasingly binding for mainstream adoption.
2026-Q2 (Apr 09 – May 07): Regulatory acceptance milestone and deployment scale expansion marked the quarter. FRC (UK's primary audit regulator) published the first-ever guidance from a major audit regulator on deploying generative and agentic AI in audits (March 2026), codifying quality control expectations and embedding AI governance into ISQM standards. EY announced enterprise-scale agentic AI rollout supporting 160,000 audit engagements globally (April 2026), signaling Big Four adoption at operational scale; BDO USA, backed by a $1B global AI investment, deployed proprietary GenAI platforms with anomaly detection as a core component across 70+ US locations. Modus, an AI-native audit startup, raised $85M Series A and deployed anomaly detection platform at top-200 accounting firm with projected doubling of organic growth in 2026. Practitioner analysis documented 75% automation of financial statement analysis and 58% of control testing via continuous monitoring. However, value realization gap crystallized: AIMG benchmark (2,048 decision-makers) found 79% of enterprises report no measurable EBIT impact from GenAI despite 70% adoption. Model drift emerged as a documented governance failure mode — real-world AI collapses (Zillow $881M, Knight Capital $440M) illustrate the audit risk when anomaly detection models shift without adequate monitoring, threshold definition, or escalation ownership. FERF's April 2026 survey found auditor opinion evenly split on whether AI improved audit quality, with CFOs skeptical about claimed cost savings. Critical governance gap persisted: only 28% of organizations track model changes and audit decisions effectively, leaving the majority unprepared for compliance audits. By May 2026, international adoption signals solidified: 60+ Supreme Audit Institutions (INTOSAI) reported AI anomaly detection in production across government audit bodies, with 13 SAIs publishing case studies on fraud detection and pattern recognition. 
The IRS formally authorized AI-driven audit selection (IRM 10.24.1, Feb 2026) with mandatory human oversight and documentation trails. IDC's US audit professional survey (1,000+ respondents) confirmed 66% of firms have embedded AI into strategy/operations/pilots, with explicit recognition of anomaly detection efficiency. Regulatory drivers for audit trail compliance emerged: EU AI Act Article 12 and DORA framework required audit-trail-per-agent-action mandates by August 2026. CFOs reported tangible efficiency gains—PwC achieving 20-40% productivity improvements with faster anomaly detection and reduced manual sampling, with fee negotiations reflecting AI cost savings. Technical advances continued: novel graph neural network research (arXiv, Apr 2026) demonstrated unsupervised anomaly detection in ledger structures without labeled fraud datasets. Autonomous agent protocols progressed toward Audit 3.0 models with agent-to-agent auditing. Despite broad adoption signals and efficiency gains, the profession remained bifurcated: leading-edge firms with mature capability building and governance frameworks continuing to drive measurable value, while mainstream organizations managed integration complexity, governance ambiguity, and skepticism over measured economic value realization.