Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

Pick a role below to explore practices

BLEEDING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

LEADING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

GOOD PRACTICE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

ESTABLISHED

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💹 FINANCE & ACCOUNTING
👥 PEOPLE & TALENT

⚖️ Legal, Compliance & Risk

AI for managing contracts, regulation, governance, and organisational risk. Contract review and e-discovery are good practice with proven ROI; regulatory monitoring and due diligence are advancing steadily. Most of the domain sits at leading-edge — adoption is constrained by liability concerns and the need for domain-expert validation rather than by tooling gaps.

21 practices: 7 good practice, 13 leading edge, 1 bleeding edge

Where AI Stands in Legal, Compliance & Risk

Legal AI has reached a paradoxical maturity. Adoption metrics are unambiguous: 70-89% of legal professionals now use AI tools, 87% of general counsel report active deployment, and 78% of Am Law 200 firms have integrated AI into workflows. Contract review, clause extraction, e-discovery, and template-based drafting have crossed into proven practice with validated ROI: 98% accuracy on production contract portfolios, 90% time savings on standard drafting, and triple-digit returns documented by Forrester and independent benchmarks. Thomson Reuters CoCounsel has surpassed one million users across 107 countries. Relativity aiR processes 200M+ document predictions across 250+ customer deployments. The technology argument is settled.

Yet the domain's defining characteristic in mid-2026 is not capability but stagnation at scale. Of 21 practices tracked in this domain, 12 are classified as stalled in their advancement trajectory, including several that reached mainstream adoption years ago. The pattern is consistent: organisations adopt tools but fail to embed them into workflows. Only 27% have widely embedded AI; 95% of pilots fail to deliver measurable impact without structured governance; and 56% of enterprise AI investments show zero revenue or cost gains. Legal-specific AI tool usage has actually declined from 58% to 40% as practitioners default to general-purpose alternatives. The domain has an implementation crisis, not a technology deficit.

The structural explanation is threefold. First, governance remains catastrophically underdeveloped: only 7% of legal teams have documented AI governance frameworks despite near-universal adoption, creating compounding regulatory exposure as hallucination-related sanctions exceeded $145K in Q1 2026 alone. Second, the billable-hour model creates perverse incentives: law firms capture efficiency gains internally rather than passing them to clients (only 6% share AI savings), while in-house teams, freed from this constraint, pull ahead at nearly double the adoption rate. Third, regulatory complexity is accelerating faster than organisational readiness. The EU AI Act is scheduled to reach full enforcement in August 2026, the Colorado AI Act takes effect June 2026, and US state-level AI regulations now span California, Texas, Illinois, and New York with penalties ranging from $200K to $1M per violation. Organisations must comply with regulations governing their AI tools while simultaneously using those tools to monitor regulatory change, a recursive compliance burden that only well-resourced institutions can manage.

Financial services compliance operates a generation ahead of the rest of the domain. HSBC, Goldman Sachs, JPMorgan, and Citi all run production-scale AI compliance systems. SymphonyAI's agentic deployment cuts manual AML investigation effort by 90%. Silent Eight processes 100M+ investigations across 150+ markets. RegScale's continuous compliance monitoring platform covers 60+ frameworks. These deployments demonstrate what is achievable, but they also set regulatory expectations that apply to organisations without comparable resources.

What's New, 2026-04-15 to 2026-04-29

This scan cycle produced significant new evidence that sharpens the domain's central narrative: the adoption-to-implementation gap is widening, not closing. Contract review (clause extraction and risk assessment) shifted from no recorded trend to "advancing," the only trend change in this cycle, driven by converging production evidence: Concord's 98% accuracy across 11 months of live contracts (92 minutes compressed to 26 seconds per contract), Inkvex's independent study of 327 attorney-validated contracts confirming 94% true-positive rates, and 87% of general counsel now reporting active use (up from 44% the prior year).
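
Read as a manual baseline versus AI-assisted time, those per-contract figures imply roughly a 200x compression. A quick sketch of the arithmetic follows; the 1,000-contract annual volume is purely an illustrative assumption, not a figure drawn from the evidence.

```python
# Back-of-the-envelope arithmetic for the per-contract figures quoted above,
# read as a manual baseline (92 minutes) versus AI-assisted time (26 seconds).
# The 1,000-contract annual volume is an illustrative assumption, not a figure
# from the underlying evidence.

MANUAL_MINUTES_PER_CONTRACT = 92
AI_SECONDS_PER_CONTRACT = 26
ASSUMED_ANNUAL_CONTRACTS = 1_000  # hypothetical portfolio size

speedup = MANUAL_MINUTES_PER_CONTRACT * 60 / AI_SECONDS_PER_CONTRACT
manual_hours_per_year = ASSUMED_ANNUAL_CONTRACTS * MANUAL_MINUTES_PER_CONTRACT / 60
ai_hours_per_year = ASSUMED_ANNUAL_CONTRACTS * AI_SECONDS_PER_CONTRACT / 3600

print(f"Per-contract speedup:  ~{speedup:.0f}x")                           # ~212x
print(f"Manual review effort:  ~{manual_hours_per_year:,.0f} hours/year")  # ~1,533
print(f"AI-assisted effort:    ~{ai_hours_per_year:.1f} hours/year")       # ~7.2
```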

New domain-level evidence documented the adoption plateau quantitatively for the first time: law firm AI adoption flatlined at 79% between 2024 and 2025 after a 19% surge in 2023-2024, and legal-specific tool adoption dropped from 58% to 40%. Axiom's structured 8-week pilot framework, documented across its 14,000-lawyer network, provides the first rigorous evidence that pilot failure is a governance and change management problem, not a technology problem, with organisations achieving 20-60% reductions in contract review time and 89% quality improvement when following structured approaches.

Workforce anxiety emerged as a material new signal. Axiom's 2026 survey of 544 respondents across 8 countries found 76% of legal professionals using AI daily are anxious about job displacement within five years, despite 93% reporting productivity gains. This creates a hidden implementation barrier: adoption sustainability depends on addressing workforce anxiety, not just capturing efficiency.

On the regulatory front, EU AI Act enforcement slippage to 2027-2028 (due to missing harmonised standards and untrained conformity assessors) reduces near-term deadline pressure but does not resolve the underlying governance readiness deficit. FINRA's 2026 Oversight Report established formal GenAI governance obligations under Rules 3110 and 2210, signalling regulatory enforcement beginning in financial services.

Key Tensions

  • Governance deficit is structural, not transitional. Seven percent governance framework coverage paired with 70-89% adoption creates compounding exposure. Stanford's 2026 AI Index documents AI incidents rising 55% year-over-year (362 in 2025 vs 233 in 2024) while model transparency dropped from 58/100 to 40/100. MIT's governance landscape mapping of 1,000+ documents identifies systematic gaps: consumer-facing sectors under-covered, early-stage data practices neglected, and frameworks too generic to capture system-type specificity. The Deloitte survey of 3,235 enterprise leaders across 24 countries confirms only 21% have mature governance for autonomous agents despite 82% expecting 10%+ job automation. This is not a gap that closes with time; it requires deliberate investment that most organisations are not making.

  • The billable-hour model is fragmenting the domain. In-house legal teams capture AI efficiency directly and build internal capability; law firms face structural resistance where efficiency gains accrue to the firm, not clients. The numbers are stark: 87% in-house adoption versus 46% for law firms; 64% of in-house teams expect to reduce outside counsel spending. Fixed-fee billing has risen to 53% of matters (versus 32% hourly), and 34% of firms now charge a premium for AI-assisted work while only 6% pass savings to clients. DLA Piper has deployed 5,000 Harvey seats; Kira Systems is embedded in 70 of the top 100 global law firms. But mid-market firms without comparable capital face an accelerating capability gap that threatens their competitive position. The largest firms invest; mid-market firms are exposed.

  • Hallucination risk has become a regulatory and liability event, not merely a quality concern. Courts have documented 1,200-1,300 cases globally involving AI-fabricated citations, with 5-6 new cases appearing daily. Q1 2026 sanctions exceeded $145K. The Fourth Circuit admonished a lawyer for briefs citing nonexistent cases. Sullivan & Cromwell submitted a bankruptcy brief with approximately 40 fabricated citations despite formal controls and training. Independent research documents 17-34% hallucination rates across legal AI platforms (Lexis+ AI 17%, Westlaw AI 34%). The structural ceiling is clear: AI systems hallucinate on legal questions at an 18.7% rate versus 0.7% on basic summarisation. This is not a model fine-tuning problem; it reflects fundamental limitations in LLM reasoning about formal legal logic, as documented by Stanford and Caltech research. The liability framework is shifting from user error to tool architecture, with courts now scrutinising whether generative tools are "architecturally sufficient for work requiring verified citations."

  • Regulatory fragmentation creates a recursive compliance burden. Organisations must comply with regulations governing their AI tools (EU AI Act, Colorado AI Act, California SB 53, Texas HB 149, FINRA Rules 3110/2210) while simultaneously using those tools to monitor the regulatory landscape. The EU AI Act classifies litigation outcome prediction as high-risk and most autonomous compliance monitoring as requiring conformity assessment. FinCEN's proposed rules codify AI/ML as defensible innovation while demanding auditable reasoning trails. The penalty framework is severe: up to EUR 35M or 7% of global turnover under the EU AI Act, $1M under California SB 53, $200K under Texas HB 149. Vision Compliance's cross-industry assessment found 78% of enterprises unprepared for EU AI Act obligations, with 83% unable to generate basic AI system inventories (an illustrative sketch of such an inventory follows this list). Enforcement slippage to 2027-2028 buys time but does not resolve the readiness deficit.

  • Workforce anxiety is creating a hidden implementation barrier. Axiom's 2026 survey found 76% of legal professionals using AI daily worry about role obsolescence within five years, while 93% report productivity gains. This paradox -- high utility paired with high anxiety -- creates retention risk and resistance to deep workflow integration. Only 7% of organisations have clear communication about augmentation versus replacement strategy. The pattern mirrors broader enterprise AI adoption failures documented by Forrester, where organisations capture zero ROI because workforce resistance stalls value realisation. Mid-level lawyer departures are accelerating at firms with high AI deployment but no clear upskilling or role redefinition. Implementation success paradoxically drives retention risk when change management and communication lag deployment velocity.
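
On the inventory gap flagged in the regulatory-fragmentation tension above: a "basic AI system inventory" is, at minimum, a structured register of every AI system in use and its risk posture. The sketch below is illustrative only; the field names are assumptions loosely modelled on EU AI Act record-keeping themes, not a prescribed regulatory schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of a minimal AI system inventory.

    Illustrative sketch only: the fields are assumptions loosely modelled on
    EU AI Act record-keeping themes, not a prescribed regulatory schema.
    """
    name: str                   # e.g. "Contract clause extraction"
    vendor: str                 # tool or model provider
    business_owner: str         # accountable function or person
    intended_purpose: str       # what the system is actually used for
    risk_classification: str    # e.g. "high-risk", "limited-risk", "minimal-risk"
    handles_personal_data: bool
    human_oversight: str        # how outputs are reviewed before being relied on
    conformity_status: str      # e.g. "not started", "in progress", "assessed"
    last_reviewed: str          # ISO date of the last governance review

# A single illustrative entry; a real inventory would cover every AI system in use.
inventory = [
    AISystemRecord(
        name="Contract clause extraction",
        vendor="(vendor name)",
        business_owner="Legal operations",
        intended_purpose="Flag high-severity clauses for attorney review",
        risk_classification="limited-risk",
        handles_personal_data=True,
        human_oversight="Attorney validates every flagged clause before action",
        conformity_status="in progress",
        last_reviewed="2026-04-29",
    ),
]
```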

Top 10 Evidence Items

  1. Oregon Court of Appeals $109,700 penalty for AI-fabricated citations (Ghiorso case) (case-study) — The largest single-attorney AI sanction on record makes the hallucination liability argument concrete: 15 fabricated cases plus 9 invented quotations, with a technical analysis showing these failures are architectural rather than user error, directly underpinning the summary's "liability framework is shifting to tool architecture" claim. https://dev.to/gabrielanhaia/the-most-expensive-hallucination-of-2026-a-court-filing-goes-sideways-1d3b

  2. Systemic analysis of AI hallucination sanctions and enforcement patterns (April 2026) (industry-report) — Ethicore's synthesis of 1,227 documented global hallucination cases, combined with Stanford benchmarks showing 17-34% error rates across named legal AI platforms, provides the empirical foundation for the summary's claim that hallucination risk has become a regulatory and liability event. https://ethicore.substack.com/p/lawyers-are-getting-sanctioned-for

  3. Sullivan & Cromwell Emergency Sanction Letter: Am Law 100 firm's AI verification failure (news-coverage) — That an elite firm advising OpenAI itself filed a brief with ~40 fabricated citations despite comprehensive controls and training demolishes the assumption that governance frameworks solve the hallucination problem, adding weight to the structural ceiling argument. https://abovethelaw.com/2026/04/sullivan-cromwell-files-emergency-please-dont-sanction-us-for-all-these-ai-hallucinations-letter/

  4. Why 95% of Legal AI Pilots Fail (And Your Roadmap to Success) (opinion) — Axiom's documentation of pilot failure drivers — use case vacuum, lack of sustained support, capability confusion — from within a 14,000-lawyer network provides the most specific evidence for the summary's central thesis that the domain has an implementation crisis, not a technology deficit. https://www.axiomlaw.com/blog/legal-ai-pilots-fail-success-roadmap

  5. Is AI Contract Review Accurate? What Real Contracts Revealed (research-paper) — Independent study of 327 attorney-validated contracts confirming 94% true-positive rates on high-severity flags is the primary evidence driving contract review's trend shift to "advancing" this cycle, illustrating why this is the one practice bucking the domain's stagnation pattern. https://inkvex.app/blog/is-ai-contract-review-accurate

  6. SymphonyAI agents cut sanctions workload by 90% (case-study) — A production AML deployment achieving 99% false positive reduction and 10x faster review times at a major U.S. bank exemplifies the financial services compliance sector operating "a generation ahead," providing the contrast that makes the governance deficit elsewhere so stark. https://fintech.global/2026/04/27/symphonyai-agents-cut-sanctions-workload-by-90/

  7. Puerto Rico Supreme Court monetary penalty for AI-generated false citations (2026 TSPR 41) (case-study) — The first monetary penalty in the Puerto Rico jurisdiction for AI-fabricated citations, with an explicit warning that repeat conduct triggers suspension, shows judicial enforcement of AI verification duties spreading beyond the US federal courts to state and territorial jurisdictions. https://www.sanjuandailystar.com/post/pr-supreme-court-issues-warning-about-ai-use-after-lawyers-submitted-filing-with-false-citations

  8. Stanford 2026 AI Index Report: Governance gaps widen as adoption accelerates (research-paper) — AI incidents rising 55% year-over-year alongside model transparency dropping from 58/100 to 40/100 provides peer-reviewed quantification for the summary's governance deficit argument, and signals the gap is structural and widening rather than transitional. https://complexdiscovery.com/stanfords-2026-ai-index-highlights-rapid-growth-and-widening-governance-gaps/

  9. FINRA 2026 Oversight: Financial services regulator mandates AI governance with explicit compliance obligations (industry-report) — FINRA asserting that Rules 3110 and 2210 apply fully to GenAI, requiring audit trails and human decision-maker authority, signals the start of regulatory enforcement in financial services — the leading edge of what will eventually reach other sectors still in governance deficit. https://saifr.ai/blog/building-a-genai-governance-framework-takeaways-from-finras-2026-oversight-report

  10. Lex Machina 2026 Class Action Litigation Report: Active use by Hogan Lovells and Fields Han Cunniff (adoption-metric) — Named attorney deployments of outcome prediction analytics in live class action litigation provide grounded evidence that litigation analytics has crossed into production use, contextualising why the EU AI Act's high-risk classification of such tools will create a real compliance burden for firms already embedding them. https://www.globenewswire.com/news-release/2026/04/16/3275675/0/en/Lex-Machina-2026-Class-Action-Litigation-Report-Filings-Surge-to-Highest-Level-in-a-Decade-Driven-by-Consumer-Protection-Claims.html