Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ↔ ESTABLISHED

💹 Finance & Accounting

AI for financial operations, reporting, planning, and risk management. Half the practices are good practice: fraud detection, expense management, invoice processing, and financial forecasting have mainstream adoption. Regulatory compliance and audit automation are advancing. The domain is tightly clustered around good practice with minimal bleeding edge — finance favours proven, auditable tools over experimental ones.

16 practices: 1 established, 8 good practice, 6 leading edge, 1 bleeding edge

Finance & Accounting -- Biweekly Brief

The headline: Finance AI works in the lab but is failing in the field -- 93% of CFOs report disappointing AI impact, even as the Big Four and Fortune 500 deploy agentic systems at genuine scale.

The Picture

Most finance functions have bought AI tools. Few are getting value from them. Across invoicing, expense management, cash flow forecasting, and fraud detection, the software is mature and generally available from every major vendor. Supplier and spend analytics has reached near-universal adoption (92% of procurement organizations). But an Oliver Wyman survey of CFOs controlling 12% of global market capitalization found only 8% have deployed AI at scale -- 74% are still planning or piloting. A small group of well-resourced organizations is pulling ahead: EY has deployed AI agents (software that acts on its own without being prompted) to 130,000 auditors, Allstate is using AI to close insurance policies live, and Oracle has shipped over 1,000 agents to its enterprise customers. The rest face a widening gap between what they have purchased and what they can operationalize, driven by data quality problems, governance immaturity, and the absence of ROI measurement infrastructure.

This Fortnight

  • The Big Four made agentic AI operational, not experimental. EY rolled out AI agents across 160,000 audit engagements globally, processing 1.4 trillion journal entries a year. KPMG launched a production close assistant with Google's Gemini integrated into Workday. These are not pilots -- they signal that AI is becoming embedded infrastructure in the firms that audit public companies, which will reshape what your auditors expect from your data and processes.

  • AI accuracy hit a documented ceiling on accounting tasks. An independent benchmark of 19 AI models on 101 real accounting workflows found the best performer scored 79.2% accuracy overall, but on month-end close tasks no model cleared 70%, and average accuracy fell to just 50%. This confirms why 97% of finance professionals still require a person to review each AI output before it ships -- the technology cannot yet be trusted to run autonomously on high-stakes financial workflows.

  • Upstart's model governance became a courtroom issue. A securities class action filed in May alleges Upstart's credit scoring model overreacted to macroeconomic shifts, overstated its accuracy, and caused $70 million in missed revenue. Whatever the outcome, the case establishes that AI model governance failures in financial services carry litigation risk, not just operational risk.

  • US and EU fair-lending rules shifted in opposite directions. The CFPB eliminated disparate impact liability under ECOA, removing a major compliance barrier for AI-driven credit decisions. Simultaneously, the EU AI Act deadline for high-risk credit scoring compliance is now three months away. Organizations operating across both jurisdictions face diverging regulatory architectures.

  • IRS AI enforcement expanded sharply. The IRS now operates 125+ AI/ML models for enforcement, up from 54 in 2024, targeting high-income earners, cryptocurrency, and employee retention credit claims. AI is being deployed on both sides of the tax relationship -- by preparers and by the agency auditing them.

Coming Up

  • EU AI Act enforcement begins August 2, 2026. Credit scoring and insurance underwriting are classified as high-risk, requiring conformity assessments, bias testing, explainability documentation, and human oversight. Penalties reach 7% of global turnover. Organizations with European exposure should have compliance programs in place now, not in planning.

  • Insurance litigation will set AI governance precedent. Federal judges have ordered broad discovery into UnitedHealth's algorithmic claims system spanning 2017 to present. The rulings will establish documentation and transparency standards for AI-driven financial decisions that will extend well beyond insurance. General counsel should be reviewing what discovery of their own AI systems would reveal.

  • Agentic AI in procurement is moving from pilots to production purchasing decisions. Walmart's autonomous agent negotiated 2,000 supplier contracts; Oracle shipped procurement agents across its Fusion customer base; Coupa signed a five-year AWS deal for autonomous spend management. CFOs should evaluate whether their procurement teams are positioned to adopt -- or compete against organizations that have.

What's Hard About This

  • The execution gap is organizational, not technical. Only 8% of CFOs have deployed AI at scale. Root cause analysis identifies "AI without a home" as the dominant failure mode -- technology delivered without operational ownership, process redesign, or measurement infrastructure. Seventy-three percent of enterprise AI projects fail to deliver ROI, and 82% of boards lack any capability to measure AI returns.

  • Accuracy limits are structural, not temporary. Peer-reviewed research has demonstrated that AI hallucinations (when an AI tool confidently makes things up) are mathematically inevitable in current model architectures. Independent benchmarks show 4-19% hallucination rates across leading models, with financial citation accuracy the worst-performing category. For finance workflows requiring 100% precision -- revenue recognition, tax compliance, audit -- this is a hard constraint, not an engineering problem awaiting a fix.

  • Regulatory fragmentation forces architectural decisions. The EU AI Act, Colorado AI Act, CFPB changes, state insurance mandates, and COSO governance frameworks each impose different requirements on the same AI systems. Compliance is no longer a cost you add after deployment -- it determines which AI capabilities you can deploy, in which markets, and under what governance structures.


Go deeper: the full Finance & Accounting briefing -- the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.