Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED
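The page does not spell out how each dot's weighted maturity is computed. As a rough illustration only, one plausible scheme is a mean of practice tier scores weighted by evidence volume. Every tier value, weight, and practice below is a hypothetical assumption for the sketch, not the index's actual method.

```python
# Illustrative sketch: one way a domain's dot position could be derived.
# Tier scores and evidence weights are assumptions, not the index's method.
TIER_SCORE = {"Research": 0, "Bleeding Edge": 1, "Leading Edge": 2, "Established": 3}

def domain_maturity(practices):
    """Weighted mean of practice tier scores, weighted by evidence volume."""
    total = sum(p["evidence"] for p in practices)
    return sum(TIER_SCORE[p["tier"]] * p["evidence"] for p in practices) / total

# Hypothetical legal domain: the contract-review practice (69 evidence items
# per this page) plus an invented second practice for illustration.
legal = [
    {"tier": "Leading Edge", "evidence": 69},
    {"tier": "Bleeding Edge", "evidence": 23},
]
print(round(domain_maturity(legal), 2))  # position on the Bleeding Edge → Established axis
```

Evidence-count weighting is just one choice; recency or source-quality weights would shift the dot without changing the mechanism.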

Contract review — autonomous assessment & scoring

LEADING EDGE

TRAJECTORY

Stalled

AI that autonomously scores contract risk, generates assessment reports, and recommends accept/reject/negotiate decisions. Includes automated risk scoring and recommendation generation; distinct from risk flagging, which highlights issues for human assessment rather than making recommendations.
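The pipeline this entry describes (score risk, then recommend accept, reject, or negotiate) can be sketched as a minimal rule-based loop. All clause weights, severity inputs, and thresholds below are illustrative assumptions; production systems pair a learned clause-extraction model with calibrated scoring rather than hand-set tables.

```python
# Hypothetical sketch of autonomous scoring and triage recommendation.
# Weights, severities, and thresholds are invented for illustration.
CLAUSE_WEIGHTS = {"liability_cap": 3.0, "auto_renewal": 2.0,
                  "indemnity": 2.5, "governing_law": 1.0}

def score_contract(findings):
    """findings: {clause_type: severity in [0, 1]}, e.g. from an upstream
    extraction model. Returns a weighted aggregate risk score."""
    return sum(CLAUSE_WEIGHTS.get(clause, 1.0) * sev
               for clause, sev in findings.items())

def recommend(score, accept_below=2.0, reject_above=5.0):
    """Map a risk score to the accept / negotiate / reject triage decision."""
    if score < accept_below:
        return "accept"
    if score > reject_above:
        return "reject"
    return "negotiate"

findings = {"liability_cap": 0.8, "auto_renewal": 0.5}
risk = score_contract(findings)  # 3.0*0.8 + 2.0*0.5 = 3.4
print(recommend(risk))           # prints "negotiate"
```

The thresholded middle band is what keeps contested agreements routed to humans: only scores clearly below or above the band produce an autonomous decision.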

OVERVIEW

Autonomous contract assessment — AI that scores risk, generates reports, and recommends accept/reject/negotiate decisions — has moved from bleeding-edge experiment to leading-edge production standard in high-volume triage, yet the practice faces a hard tier ceiling beyond routine work. Global adoption has normalized rapidly: 92% of lawyers across 10 countries now use AI daily; 87% of general counsel employ AI; 52% of in-house teams actively use or evaluate contract review AI (usage has quadrupled since 2024). Deployments deliver measurable ROI for high-volume screening — 40-60% efficiency gains, 75%+ time savings, 300-450% reported ROI. Yet the same evidence base reveals binding constraints. Production systems show 17-34% real-world error rates despite 95%+ benchmark claims. Autonomous scoring tools exhibit documented algorithmic bias (corporate-favorable in negotiation scenarios). Hallucination incidents have spiked: 1,200+ documented cases globally, with $145K in court sanctions in Q1 2026 alone, including career-ending attorney discipline. Contractual data-access barriers and governance maturity gaps prevent 78% of agentic AI pilots from reaching production. The tier-defining tension is structural: the practice excels at high-volume first-pass screening where human review is downstream, but cannot advance to autonomous decision-making on complex or contested agreements without solving accuracy-on-edge-cases, fairness risk, and sanctionable failure modes.

CURRENT LANDSCAPE

Production deployments at scale demonstrate the economic momentum. Concord's engine processes 10k+ contracts monthly with 94% autonomous risk-spotting accuracy (vs 85% average for experienced lawyers), compressing review from 92 minutes to 26 seconds per contract and achieving 300-450% reported ROI. Inkvex's independent validation study on 327 real-world contracts confirms capability: 94% catch rate of high-severity flags (vs 85% baseline accuracy), 99% catch on auto-renewal clauses, 95% on liability caps, with only 6% false negatives. Orangetheory cut turnaround to 30 minutes per document (80% time savings); Agristo and ECS report 75% time reductions. Vendor ecosystem consolidation is deepening: Icertis serves 250+ Fortune 500 customers with $350M ARR and 30%+ Fortune 100 penetration (post-Dioptra acquisition); LinkSquares reports 1,300+ teams managing 13M contracts with 800k+ hours saved. Global adoption has normalized: Wolters Kluwer's 810-lawyer survey across 10 countries shows 92% use AI daily, 62% report 6-20% time savings, and 61% are confident in AI-driven workflows.

The advancement barriers, however, are hardening rather than softening. Brittney Ball's April 2026 research documents 1,200+ AI hallucination incidents in legal proceedings globally (roughly 10 per day), with $145K in court sanctions in Q1 2026 alone and indefinite attorney suspension for filing 57 defective AI-generated citations. Thomson Reuters analysis identifies the strategic risk: 80% of legal professionals see AI as transformational, yet only 38% expect near-term organizational change, and Gartner projects over 40% of agentic AI projects will be discontinued by 2027. Real-world deployment data shows 17-34% error rates in production despite 95%+ accuracy benchmarks; governance and infrastructure gaps prevent 78% of agentic pilots from reaching production. The bias vulnerability identified in January 2026 law review research persists: autonomous scoring tools systematically favor corporations over individuals in negotiation, creating direct liability exposure. Contractual data-access restrictions (NDAs and engagement letters from 2023-2024) force reliance on generic models rather than fine-tuned deployment. Autonomous decision-making on complex or disputed agreements remains out of scope for all but the most risk-tolerant teams.

TIER HISTORY

Research: Jan 2024 → Jan 2024
Bleeding Edge: Jan 2024 → Jul 2024
Leading Edge: Jul 2024 → present

EVIDENCE (69)

— Global survey across 10 countries, 810 lawyers: 92% use AI daily, 62% report 6-20% time savings, 61% confident in AI-driven workflows. Demonstrates leading-edge maturity breadth and normalized adoption.

— Independent research on 327 real contracts (7 types) using California attorney baseline validation. Inkvex achieved 94% catch of high-severity flags, 6% false negatives, 99% on auto-renewal clauses, 95% on liability caps.

— Production deployment metrics: 94% autonomous risk spotting accuracy (vs 85% lawyers), 4 hrs/week savings per lawyer, 31% cost reduction, 300-450% ROI, processing 10k+ contracts monthly.

— Thomson Reuters survey data (53% seeing ROI) with customer outcomes: 75% time savings in contract review; named deployments (Agristo: 2hr to 15min, ECS: 8hr to minutes) demonstrate production autonomous assessment.

— HEC Paris researcher documents widespread AI failures: 1,200+ hallucination incidents, 10+ cases daily by March 2026, $145K Q1 sanctions, Greg Lake suspension for 57 fabricated citations—critical limitation on autonomous assessment without human review.

— Strategic analysis shows an awareness-execution gap (80% see AI as transformational, 38% expect near-term change). Gartner projection: 40%+ of agentic AI projects discontinued by 2027. Strategic adoption yields 3.9x the ROI of ad hoc deployment.

— Large-scale Deloitte research (1,100+ respondents, 6 countries) quantifying agentic workflow benefits in AI CLM: 30% higher ROI, deployment benefits across legal and business teams.

— Practitioner framework for contract review/redline AI workflows. Documents control failure modes (clause omission), governance design for autonomous assessment, EU AI Act compliance requirements.

HISTORY

  • 2024-Q1: Autonomous contract assessment emerges with vendor momentum (LinkSquares, ELTEMATE, LawGeex) and early law firm deployments showing 70% efficiency gains. Survey data shows 94% enthusiasm but only 40% organizational readiness. Accuracy-speed trade-off evidenced: AI 8x faster but with significant error rates (up to 90% in complex analyses). Liability concerns unresolved.
  • 2024-Q2: Autonomous assessment capability matures with documented accuracy improvements (Dioptra: 95% first-party, 92% third-party, 94% issue detection) and law firm deployments showing ROI (A&O Shearman: 30% efficiency, 7-hour reduction per review). Product launches accelerate (LegalOn Word add-in, 85% faster reviews). Survey data confirms contract analysis as top AI use case; in-house teams adopt faster than law firms. Trust and organizational readiness barriers persist despite improved accuracy signals.
  • 2024-Q3: Autonomous assessment shows sustained ROI deployments (LinkSquares: 352% three-year ROI, 40% efficiency, 25% cycle-time reduction; Dioptra: 50% workload reduction in 6 months). Ecosystem expands with product GAs (Contract Logix AI analysis). CLM infrastructure adoption strengthens (52-60% of legal ops teams). However, practitioner reports document persistent hallucinations (3-10%), training data errors, and missed legal terms. Gartner projects 50% adoption of AI-enabled risk tools by 2027. Practice remains bottlenecked on accuracy for complex contracts and data validation requirements.
  • 2024-Q4: Autonomous assessment consolidates into mainstream enterprise adoption while accuracy barriers persist. Market validation strengthens: 60% of Fortune 500 actively piloting/deploying AI agents with contract review as top use case; LinkSquares G2 leadership (98% satisfaction); vendor accuracy improvements (Screens 97.5%, Dioptra PromptIQ feedback loops). However, adoption data contradicts enthusiasm: WorldCC surveys show only 9-12% actual adoption of AI contract review despite interest, with ~80% accuracy as realistic benchmark. Practice boundary crystallizes: sustained use for triage and first-pass filtering, but autonomous decision-making on complex contracts remains constrained by hallucination, liability, and trust concerns.
  • 2025-Q1: Enterprise adoption accelerates with strong YoY growth (75% surge in contract review AI use to 14%, 37% now deploying pre-execution AI vs. 19% prior). Production deployments show concrete ROI (Qwen 3 fine-tuning: 95% accuracy, 80% time savings, €380K annual savings; Axiom field testing: 60% efficiency gains). However, critical gaps persist: VALs benchmarking reveals three leading tools (Harvey, Vincent AI, Oliver) failed to identify standard MFN clauses; 63% cite data security barriers; 70% of WorldCC respondents require human review. Adoption-accuracy gap widens: mainstream deployments increase while reliability constraints prevent broader autonomous decision-making without human oversight.
  • 2025-Q2: Vendor product GAs accelerate (LinkSquares Risk Scoring Agent with named Fortune 500 adoption; Dioptra Wilson Sonsini validation: 95% first-party, 92% third-party accuracy). Broader organizational adoption: 56% of legal teams use GenAI, 42% adopt CLM, with 2/3 maintaining dedicated legal tech budgets. However, adoption-trust gap persists: 60% of in-house legal professionals cite lack of trust/quality as top implementation barrier. Tool usability barriers emerge: current solutions show markups without reasoning, interrupting workflow confidence. Practice remains in production triage use with unresolved explainability and tier-advancement blocks.
  • 2025-Q3: Autonomous assessment adoption consolidates into production use but implementation challenges become explicit. Financial services case studies highlight the success path (40% cost reduction, weeks-to-hours cycle time) but require strategic discipline; tactical deployments risk failure. Industry analysis reveals systemic production issues: 80% of tools fail operationally despite achieving 39% cycle-time and 35% accuracy improvements in controlled settings. Practitioner consensus hardens: experienced attorneys must remain involved due to accuracy risks on edge cases and liability exposure. Executive confidence in autonomous systems rises (81% trust for critical operations) while governance infrastructure lags deployment pace. Practice consolidates in triage and first-pass filtering with measurable ROI, but autonomous decision-making barriers (accuracy-on-complexity, explainability, liability) prevent broader advancement.
  • 2025-Q4: Autonomous assessment enters mainstream production deployment with quantified ROI and organizational governance maturity. GenAI adoption in legal leaps to 52% (from 23% in 2024), with 64% of in-house counsel expecting reduced outside counsel spend. Ecosystem consolidation accelerates: Icertis acquires Dioptra (40% MoM adoption growth), integrating autonomous review and scoring into flagship CLM; LinkSquares reports 1,300+ teams, 13M contracts, 800k+ hours saved. Governance infrastructure strengthens: 85% of law departments establish dedicated AI management. However, contractual barriers emerge as tier-defining constraint: NDAs and engagement letters from 2023-2024 restrict client data use in autonomous assessment, forcing reliance on generic models. Practitioner consensus consolidates: autonomous assessment succeeds in high-volume routine screening with proven efficiency (40-60% cost reduction, weeks-to-hours cycles), but scope remains limited by accuracy ceiling (~80% realistic), explainability gaps, and liability concerns for complex/contested agreements. Advancement to adoption-tier blocked by accuracy-on-complexity limits and contractual/governance barriers.
  • 2026-Jan: Autonomous assessment deployment accelerates with adoption reaching 52% of in-house teams (LegalOn survey, Jan 2026) and active usage quadrupling since 2024; named Fortune 500 deployments (Commvault 50% time savings, Softonic 40% cost reduction, Uber/Shopify/Atlassian via Ivo) confirm production momentum. However, critical research surfaces algorithmic bias: law review study documents that autonomous scoring systems favor corporations over individuals, exposing a fairness vulnerability blocking further tier advancement. Vendor consolidation continues (Icertis/Dioptra, Agiloft leadership in Gartner MQ) with CLM integration standard; enterprises report 180k+ annual staff hours saved. Scope remains bounded: triage and pre-execution screening, not autonomous decision-making on complex agreements due to accuracy ceiling, bias risks, and liability concerns.
  • 2026-Feb: Autonomous assessment deployment accelerates with documented accuracy breakthroughs and methodological maturity: Concord achieves 98% accuracy with 26-second review cycle; Orangetheory reduces turnaround to 30 minutes (80% time savings); LegalOn 2026 report confirms market shift toward mainstream operationalization. Structured risk-scoring methodologies emerge (Pactly, BAZU) featuring weighted clause analysis and automated triage rules. However, contractual use restrictions, algorithmic bias risks, and accuracy-on-complexity ceiling remain tier-advancement barriers despite sustained production adoption and enterprise deployment momentum.
  • 2026-Apr: Mainstream adoption deepens: Wolters Kluwer's global survey (810 lawyers, 10 countries) shows 92% using AI daily with 62% reporting 6-20% time savings and 61% confident in AI-driven workflows; 87% of general counsel now use AI (up 44% YoY). Production accuracy benchmarks strengthen — Inkvex's independent study of 327 real contracts validated 94% catch rate on high-severity flags (99% on auto-renewal, 95% on liability caps); Concord achieves 94% autonomous risk-spotting accuracy vs. 85% for experienced lawyers, processing 10k+ contracts monthly at 300-450% ROI. Critical hallucination research intensified: Brittney Ball documents 1,200+ AI hallucination incidents in legal proceedings globally (roughly 10 per day by March 2026), with $145K Q1 2026 court sanctions and indefinite attorney suspension for 57 fabricated citations — hardening the case against fully autonomous assessment without human review. Thomson Reuters analysis echoes Gartner's projection that over 40% of agentic AI projects will be discontinued by 2027, and governance gaps prevent 78% of agent pilots from reaching production. Practice remains bounded at triage and first-pass screening with proven 40-60% efficiency gains; algorithmic bias, accuracy-on-complexity, and contractual data-access barriers continue to block autonomous decision-making on complex or contested agreements.