The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that detects anomalies in audit data and reconstructs audit trails for compliance and forensic investigation. Includes pattern-based exception identification and timeline reconstruction; distinct from financial audit in Finance & Accounting, which covers financial rather than general organisational audits.
AI-powered anomaly detection in auditing is a proven practice with a mature vendor ecosystem, GA tooling from major platforms, and documented production value at Big 4 scale. Machine learning enables auditors to move from statistical sampling to full-population analysis, flagging unusual patterns, transactions, and control exceptions across entire datasets. Detection algorithms reliably achieve high recall -- peer-reviewed research on Big 4 audit data reports 95% -- and specialist vendors like MindBridge have secured professional accreditation, enterprise partnerships, and tens of billions of entries analysed. The technology question is settled. The live tension is governance: maintaining transparent, tamper-evident records of AI-driven audit decisions that satisfy regulatory requirements. Most deploying organisations still lack formal AI access controls and governance policies, and autonomous AI agents operating without documented accountability chains have produced real compliance incidents. For organisations evaluating adoption, the challenge is not whether the algorithms work but whether their data infrastructure and oversight processes can support reliable, auditable deployment.
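The shift from statistical sampling to full-population analysis described above can be sketched in a few lines: score every journal entry with an unsupervised model and route only the most anomalous to human review. This is a minimal illustration using scikit-learn's IsolationForest on synthetic data; the feature set, contamination rate, and seeded anomalies are illustrative assumptions, not any vendor's actual schema or the accuracy figures cited here.

```python
# Minimal sketch of full-population anomaly screening on journal entries.
# Features (amount, posting hour, days to period end) and thresholds are
# illustrative assumptions, not a real audit platform's schema.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic population: 5,000 routine entries plus 25 unusual ones
# (large amounts posted at odd hours, close to period end).
normal = rng.normal(loc=[500.0, 11.0, 10.0], scale=[200.0, 2.0, 5.0], size=(5000, 3))
odd = rng.normal(loc=[25000.0, 2.0, 0.5], scale=[5000.0, 1.0, 0.5], size=(25, 3))
entries = np.vstack([normal, odd])

# Score the entire population rather than a sample; flag the top ~1%.
model = IsolationForest(contamination=0.01, random_state=0).fit(entries)
flags = model.predict(entries)  # -1 = flagged for auditor review

flagged_idx = np.where(flags == -1)[0]
seeded_recall = np.isin(np.arange(5000, 5025), flagged_idx).mean()
print(f"{len(flagged_idx)} of {len(entries)} entries flagged; "
      f"seeded-anomaly recall {seeded_recall:.0%}")
```

The point is structural, not statistical: every entry gets a score and a disposition, which is exactly what makes the downstream governance question (who reviewed which flag, and why) unavoidable.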
Adoption is accelerating sharply. A Wolters Kluwer survey of 4,214 internal audit professionals found 39% already deploying AI anomaly detection, with 41% planning adoption within twelve months -- a trajectory pointing toward 80% coverage by end of 2026. Ecosystem partnerships are scaling to match: MindBridge partnered with Genpact to embed continuous controls monitoring into enterprise risk consulting, while KPMG's Clara platform with embedded MindBridge anomaly detection reached 90,000 auditors globally. Thomson Reuters launched Audit Intelligence with native anomaly detection, reporting 50% sample reductions in early deployments. Market investment reflects confidence -- the global data anomaly detection market is projected to grow from $5.61 billion in 2025 to $33.32 billion by 2035, with fraud detection commanding 44.7% of financial services spend.
The governance gap, however, remains the binding constraint. IBM Security analysis found 97% of organisations lack proper AI access controls and 63% have no AI governance policies, leaving production systems without the documented accountability chains that regulated industries require. Real incidents -- AI agents acting without authorisation or oversight -- have exposed these deficiencies. New entrants like Audital are targeting the gap directly with cryptographically signed, tamper-evident audit trail infrastructure, but the broader challenge is organisational: building the oversight discipline and data governance maturity to match what the detection technology can already deliver. A 17% contraction in the audit workforce since 2020 adds urgency, compressing the capacity available for manual oversight even as AI-driven workloads expand.
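The tamper-evident audit trail infrastructure described above typically rests on a simple primitive: hash-chain each AI decision record to its predecessor and sign it, so any retroactive edit breaks verification. The sketch below shows the idea with Python's standard library; the record fields and key handling are illustrative assumptions, not Audital's or any vendor's actual design.

```python
# Sketch of a tamper-evident audit trail for AI-driven audit decisions:
# each record is hash-chained to its predecessor and HMAC-signed, so
# editing history invalidates the chain. Field names are illustrative.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # assumption: real deployments use managed, rotated keys

def _body(decision, prev_hash):
    return json.dumps({"decision": decision, "prev": prev_hash}, sort_keys=True).encode()

def append_record(trail, decision):
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = _body(decision, prev_hash)
    trail.append({
        "decision": decision,
        "prev": prev_hash,
        "hash": hashlib.sha256(body).hexdigest(),
        "sig": hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest(),
    })

def verify(trail):
    prev_hash = "0" * 64
    for rec in trail:
        body = _body(rec["decision"], rec["prev"])
        if rec["prev"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256(body).hexdigest():
            return False
        expected_sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(rec["sig"], expected_sig):
            return False
        prev_hash = rec["hash"]
    return True

trail = []
append_record(trail, {"txn": "JE-1042", "flag": "anomaly", "model": "v3.1", "reviewer": "a.chan"})
append_record(trail, {"txn": "JE-1043", "flag": "clear", "model": "v3.1", "reviewer": None})
print(verify(trail))                     # True: intact chain verifies
trail[0]["decision"]["flag"] = "clear"   # tamper with history
print(verify(trail))                     # False: verification fails
```

The cryptography is the easy part; the organisational discipline the analysis above describes, deciding what gets logged, by whom, and under whose accountability, is what production systems still lack.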
— Academic study of auditor response to client AI adoption (2010-2022 data): process-oriented AI improves reporting discipline and lowers audit fees; product-oriented AI increases detection scrutiny, validating detection capability maturation.
— EY's Canvas platform processing 1.4T journal entries/year across 130K professionals and 160K engagements; demonstrates production deployment of audit anomaly detection and decision logging infrastructure at global scale.
— Big 4 audit firm EY rolling out agentic AI across 160,000 audit engagements globally, including anomaly detection and continuous monitoring, confirming category-level adoption at production scale.
— DFKI research addressing Journal Entry Test false positives through hybrid rule-based and ML anomaly detection, validating technical approaches to reducing false alerts in production audit workflows.
— Technical analysis of EU AI Act Articles 12 and 14 requiring cryptographically signed audit trails for high-risk AI; maps regulatory architecture to audit trail infrastructure for tamper-resistant, compliant AI decision logging.
— Framework for audit-grade AI adoption emphasizing audit trail integrity and process transparency over speed; addresses governance gap by requiring structured workflows, full traceability, and auditor control of final decisions.
— Emerging vendor validated with Top 10/20 audit firm pilots achieving 85% time savings in evidence gathering and testing; market projected at $11.7B by 2033 (27.9% CAGR) confirming accelerating adoption trajectory.
— Compliance publisher critical analysis: 42% of companies abandoned AI initiatives (vs. 17% in 2024); root cause: regulatory rejection of unexplainable systems; OCC, FCA, EU AI Act require explainability; retrofitting costs 2-3x more than building in from start.
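The hybrid rule-plus-ML approach noted in the DFKI evidence above works by letting deterministic rules nominate candidate exceptions, then applying a learned risk score to demote low-risk hits before they reach an auditor. A minimal sketch, with invented rules and hand-set weights standing in for a trained model:

```python
# Hedged sketch of hybrid Journal Entry Testing: rules generate candidate
# exceptions, a risk score suppresses low-risk hits to cut false positives.
# Rules, weights, and threshold are illustrative, not DFKI's actual models.
entries = [
    {"id": "JE-1", "amount": 9999.0,  "weekend": False, "round_sum": False},
    {"id": "JE-2", "amount": 120.0,   "weekend": True,  "round_sum": False},
    {"id": "JE-3", "amount": 50000.0, "weekend": True,  "round_sum": True},
]

def rule_candidates(entry):
    """Deterministic red flags, as in a traditional Journal Entry Test."""
    reasons = []
    if entry["amount"] >= 9000:   # amounts near/above approval thresholds
        reasons.append("high_amount")
    if entry["weekend"]:
        reasons.append("weekend_posting")
    if entry["round_sum"]:
        reasons.append("round_sum")
    return reasons

def risk_score(reasons):
    """Stand-in for a learned scorer; weights would be fit to labelled outcomes."""
    weights = {"high_amount": 0.5, "weekend_posting": 0.2, "round_sum": 0.4}
    return sum(weights[r] for r in reasons)

ALERT_THRESHOLD = 0.6  # tuned against reviewer dispositions in a real deployment

alerts = [(e["id"], rule_candidates(e)) for e in entries
          if rule_candidates(e) and risk_score(rule_candidates(e)) >= ALERT_THRESHOLD]
print(alerts)  # JE-1 and JE-2 each trip a rule but score below the alert bar
```

Here JE-1 and JE-2 each trigger a single rule yet are suppressed as likely false positives, while JE-3, which stacks several flags, is surfaced; that triage step is where the documented false-positive reductions come from.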
2019: Academic foundations established (adversarial autoencoders for journal entry anomaly detection); cloud vendors (AWS) launched ML-powered anomaly detection as standard BI features; specialist audit vendors (MindBridge) achieved production adoption among top accounting firms, but barriers (skill gaps, data complexity, interpretability) limited broader rollout.
2020: Cloud platforms (AWS QuickSight) shipped enhanced anomaly detection with user-configurable thresholds; specialist vendors (MindBridge) achieved ICAEW accreditation and moved to GA with drag-and-drop platforms (no coding required); major UK accounting firms (UHY Hacker Young) deployed AI anomaly detection on client audits at scale; adoption remained limited due to data engineering complexity, governance challenges, and low exec confidence in error detection despite 7,000+ datasets processed globally.
2021: Multi-firm adoption accelerated — MNP LLP deployed MindBridge across 90+ offices nationally, and GRF CPAs operationalized platform in ~15% of audits after four-year phased rollout; industry transparency matured with MindBridge's third-party algorithm audit by UCL achieving green status on explainability, bias, robustness; ISACA recognized AI in audit lifecycle; however, Protiviti survey revealed only 14% of audit executives classified as digital leaders and AI/ML remained among lowest-maturity domains, signaling persistent adoption barriers despite vendor maturity.
2022-H1: Evidence of deployment breadth and quality impact emerged — peer-reviewed study of 36 major audit firms documented AI investment reducing restatements 5% and audit fees 0.9%; MindBridge announced 35+ billion entries analyzed and triple-digit growth; regulatory bodies (UK DRCF) published algorithmic audit guidance; real-world implementation cases included small firm adoption (Garbelman Winslow CPAs), demonstrating technology was no longer restricted to large enterprises.
2022-H2: Research community advanced technical foundations with federated continual learning and benchmarking studies on categorical data handling; MindBridge released Q4 2022 platform updates enhancing anomaly detection explainability and inter-account flow analysis; cloud platforms (AWS Lookout for Metrics) integrated deeper with BI tools; vendor-led industry education initiatives addressed regulatory alignment with traditional audit standards, signaling broader ecosystem maturation despite uneven adoption across firm scale.
2023-H1: Strategic partnerships accelerated with Big 4 entry—KPMG UK embedded MindBridge into KPMG Clara for granular transaction analysis and enhanced explainability; industry analysis highlighted widespread automation adoption across the profession, with firms leveraging transactional-level data analytics for anomaly detection and fraud risk identification, positioning technology as integral to modern audit workflows despite ongoing implementation challenges.
2023-H2: Deployment maturity and market confidence accelerated—EY deployed proprietary ML anomaly detector (Helix GLAD) in production audits with confirmed fraud detection in 2 of 10 companies; MindBridge secured $60M growth equity funding (July 2023, $30M 2023 revenue); AWS expanded anomaly detection into data pipelines (Glue Data Quality preview); peer-reviewed research documented AI adoption by major audit firms. Critical assessments surfaced: KPMG analysis found only 7% of audit tasks automatable by current generative AI, while Gartner placed Gen AI at peak hype cycle, signaling persistent gaps between capability and reliable delivery despite demonstrated deployment success.
2024-Q1: Big 4 commitment accelerated—KPMG announced strategic integration of MindBridge into KPMG Clara (January 2024), positioning specialist anomaly detection as core to global audit practice; AuditBoard launched AI Core with anomaly detection and audit trail analysis (March 2024), expanding product ecosystem; adoption barriers persisted—Protiviti survey found only 12% of organizations adopted AI/ML in audits, citing talent shortage and implementation complexity despite vendor and platform maturity.
2024-Q2: Product innovation and market reassessment continued—MindBridge released next-generation anomaly detection (June 2024) with enhanced error detection capabilities; KPMG global survey (May 2024) showed AI claiming ~10% of IT budgets with 50% of organizations expecting 25% increase by 2025; cloud platforms advanced audit trail analysis with AWS CloudTrail + Amazon Q integration (May 2024) for AI-powered log reconstruction. However, critical market analysis revealed fundamental adoption barriers: MindBridge, despite 9 years of operation and $200M valuation, had only 25,000 users, with ROI challenges and accountant resistance limiting scale despite technical maturity and Big 4 partnerships.
2024-Q3: Platform consolidation and ecosystem expansion—KPMG deployed Clara AI with embedded MindBridge anomaly detection to 90,000 auditors globally (September 2024), representing largest-scale Big 4 rollout; Thomson Reuters launched Audit Intelligence with native anomaly detection (September 2024, case study: 50% sample reduction at RBSK Partners); AWS Glue Data Quality reached GA with ML anomaly detection (August 2024); independent deployments grew with Crowe MacKay discovering $60,000 overpayments via MindBridge. However, critical vendor assessment emerged—AWS QuickSight's anomaly detection showed false positives and missed anomalies in blind testing (Anodot August 2024)—signaling detection quality variability. Adoption sentiment improved (BDO: 54% believe AI improves quality, 63% see enhanced trust) but actual implementation lagged at 12% of organizations, constrained by ROI barriers and data engineering complexity despite demonstrated deployment success at scale.
2024-Q4: Vendor consolidation and organizational readiness gap exposed. AWS sunsetted Lookout for Metrics, consolidating anomaly detection into broader platforms (CloudWatch, QuickSight, Glue); ISACA and KPMG reports (October–November 2024) emphasized importance and potential of ML in audit, but critical implementation barriers surfaced: 78% of CFOs cite data quality as primary AI adoption barrier (CPA Practice Advisor November 2024); internal auditors showed significant readiness gap with 61% lacking AI expertise and only 2-4% of departments reporting substantial progress, despite organizational AI adoption at 55% (AuditBoard November 2024). Practice demonstrated clear vendor maturity (multi-product, Big 4 scale, proven audit improvements) but revealed that adoption friction was organizational and data-governance-related rather than technological.
2025-Q1: Academic and practitioner evidence crystallized dual-track adoption landscape. Peer-reviewed research confirmed efficiency and accuracy gains but documented financial, skill, and data security constraints limiting broader deployment. Practitioner surveys identified persistent friction: 59% of organizations moving to comprehensive control testing (vs sampling), yet skills shortages, system incompatibility, and data quality challenges constrained implementation. Critical debate intensified on explainability—ACCA opinion emphasized AI's black-box nature creates audit trail deficiencies and accountability gaps. Independent case studies showed concrete value (40% false positive reduction in transaction monitoring, 35% accuracy gains), reinforcing pattern: large enterprises with data governance maturity benefit significantly, while mid-market and independent firms face implementation barriers exceeding technology benefits.
2025-Q3: Adoption acceleration and governance challenges surfaced concurrently. Wolters Kluwer survey of 4,214 auditors reported 39% deploying AI anomaly detection with 41% planning near-term adoption, targeting 80% by 2026—a rapid acceleration trajectory. Consultant guidance (Plante Moran) detailed continuous auditing and comprehensive population testing capabilities. However, evidence simultaneously exposed production reliability constraints: detection performance variation across vendors (87% AI vs 59% manual detection but with persistent false negative risks); FinTech Global and industry experts documented hidden risks in production AI compliance systems where tuning for low false positives masked undetected gaps, exposing firms to regulatory penalties. White & Case survey of 265 compliance leaders identified accuracy, governance, and data privacy concerns as significant barriers. The window revealed practice maturation shift from technology viability (confirmed by 2024) to organizational reliability and governance readiness—detection capability had plateaued on the upside while governance and false-negative risks emerged as the new maturation constraint.
2025-Q4: Vendor maturity accelerated with Thomson Reuters and specialist consultancies releasing audit-specific AI tooling and critical evaluation frameworks. Evidence surfaced both deployment success and governance constraints: 90% of financial institutions deployed AI-powered fraud detection with >90% accuracy (Feedzai survey), yet production systems showed critical governance gaps including undocumented AI models, black-box decision logic, and false negative risks that expose firms to regulatory penalties. Industry assessments documented 20-30% time savings and 50% sample reduction in real deployments, alongside warnings of widespread "AI-washing" and persistence of human oversight requirements. Adoption pattern shifted from technology viability (confirmed) to organizational governance readiness—detection algorithms work reliably at scale, but transparency, auditability, and accountability mechanisms remain maturation constraints limiting broader enterprise rollout.
2026-Jan: Enterprise adoption momentum accelerated with NASDAQ-listed VEON deploying MindBridge Central Insights Factory across global operations for comprehensive transaction analysis. Academic research surfaced critical methodology limitations: a peer-reviewed study found Benford's Law divergence, a widely used anomaly detection heuristic, cannot reliably assess financial statement quality or manipulation. Concurrent evidence exposed organizational governance gaps constraining reliable deployment: IBM Security analysis revealed 97% of organizations lack proper AI access controls and 63% lack AI governance policies, with traditional audit frameworks failing to address AI-specific risks including shadow AI systems. Critical assessments of fraud detection models documented specific failure modes (missed coordinated attacks, undetected synthetic identity accounts), reinforcing the emerging pattern that detection capability had plateaued while governance, auditability, and systematic false-negative risks had become the defining maturation constraints.
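For context on the heuristic that study challenged: Benford's Law predicts that leading digit d of naturally occurring amounts appears with frequency log10(1 + 1/d), and the classic test measures divergence between a ledger's observed digit distribution and that curve. A standard-library sketch on synthetic amounts (the divergence metric here is a simple mean absolute deviation; real tests vary):

```python
# Sketch of the Benford's Law divergence heuristic: compare a ledger's
# leading-digit frequencies against log10(1 + 1/d). Per the study cited,
# this divergence alone is not a reliable manipulation signal; the data
# and the choice of deviation metric here are illustrative.
import math
from collections import Counter

def leading_digit(x):
    return int(str(abs(x)).lstrip("0.")[0])

def benford_divergence(amounts):
    counts = Counter(leading_digit(a) for a in amounts if a != 0)
    n = sum(counts.values())
    # Mean absolute deviation between observed and Benford frequencies.
    return sum(
        abs(counts.get(d, 0) / n - math.log10(1 + 1 / d)) for d in range(1, 10)
    ) / 9

amounts = [123.4, 18.0, 1750.0, 2999.0, 41.0, 5120.0, 63.0,
           700.0, 8150.0, 91.0, 110.0, 13.5]
print(round(benford_divergence(amounts), 3))  # small value = closer to Benford
```

The study's finding is that this number, high or low, says little on its own about statement quality, which is precisely why full-population ML scoring has displaced digit tests as the primary detection layer.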
2026-Feb: Ecosystem scaling accelerated with MindBridge partnering with Genpact to embed anomaly detection into enterprise risk consulting and continuous controls monitoring, signaling channel expansion and organizational adoption momentum. Market research confirmed sustained investment: global data anomaly detection market projected to reach $33.32B by 2035 (19.5% CAGR), with fraud detection as leading application. Peer-reviewed research validated detection algorithms achieving 95% recall on Big 4 audit data. However, specialized product launches (Audital's cryptographically signed audit trails) and critical analyses documented persistent infrastructure deficiencies: current production systems lack documented, tamper-evident records of AI decisions required by regulated environments. Evidence confirmed technology viability (high-recall detection proven) but exposed governance maturation constraint—building transparent, auditable AI decision chains that satisfy regulatory audit requirements remained an organizational and technical challenge requiring both platform innovation and governance discipline.
2026-Apr: Big 4 global-scale deployment confirmed across multiple fronts: KPMG's multi-year MindBridge pilot completed and rolled out globally; EY launched agentic AI across 160,000 audit engagements worldwide via its Canvas platform (processing 1.4T journal entries/year across 130K professionals), confirming category-level adoption at Big 4 scale. Academic evidence validated and refined detection methods: DFKI research demonstrated hybrid rule-based and ML approaches reducing false positives in Journal Entry Test workflows; UBC study confirmed process-oriented AI improves audit discipline and lowers fees. Regulatory compliance requirements hardened: EU AI Act Articles 12 and 14 mandate cryptographically signed audit trails for high-risk AI decisions, with Audrey AI (pre-seed, $1.8M) and specialist platforms targeting this infrastructure gap with 85% evidence-gathering time savings in pilot deployments. However, critical headwinds persist: 42% of companies abandoned AI initiatives with regulatory rejection of unexplainable systems as root cause; OCC, FCA, and EU AI Act mandate explainability, with retrofitting costing 2-3x more than building in from the start. Practice maturity pattern confirmed: detection algorithms work reliably at scale; governance, regulatory compliance, and tamper-evident audit trail infrastructure remain the defining constraints on broader adoption.