Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Contract review — clause extraction & risk assessment

GOOD PRACTICE

TRAJECTORY

Advancing

AI that extracts, classifies, and flags risky clauses in contracts for human review. Includes automated redline generation and clause-level risk scoring; distinct from autonomous contract assessment which scores entire agreements without human review.
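The clause-level workflow described above can be sketched in miniature: detect candidate clauses, attach a per-clause risk score, and flag anything above a threshold for human review. All names, patterns, and thresholds below are hypothetical illustrations; production systems use trained extraction models rather than keyword rules.

```python
import re
from dataclasses import dataclass

# Hypothetical clause patterns and base risk weights -- illustrative only;
# real platforms use trained extraction models, not keyword matching.
CLAUSE_PATTERNS = {
    "auto_renewal": (r"automatically\s+renew", 0.7),
    "indemnification": (r"\bindemnif(y|ies|ication)\b", 0.8),
    "non_compete": (r"\bnon-?compete\b", 0.6),
}

@dataclass
class ClauseFlag:
    clause_type: str
    text: str
    risk_score: float
    needs_review: bool  # True routes the clause to a human reviewer

def extract_and_flag(contract_text: str, review_threshold: float = 0.5) -> list[ClauseFlag]:
    """Split a contract into sentences, classify each against known
    clause types, and flag higher-risk clauses for human review."""
    flags = []
    for sentence in re.split(r"(?<=[.;])\s+", contract_text):
        for clause_type, (pattern, weight) in CLAUSE_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                flags.append(ClauseFlag(
                    clause_type=clause_type,
                    text=sentence.strip(),
                    risk_score=weight,
                    needs_review=weight >= review_threshold,
                ))
    return flags

contract = ("This agreement shall automatically renew for successive one-year terms. "
            "Supplier shall indemnify Customer against third-party claims.")
for flag in extract_and_flag(contract):
    print(flag.clause_type, flag.risk_score, flag.needs_review)
```

The human-review flag is the point of the pattern: the system narrows attention rather than making the final call, which is what distinguishes this practice from autonomous contract assessment.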

OVERVIEW

AI-driven clause extraction and risk scoring has crossed from early adoption into proven practice. The technology works: production deployments now achieve 98% accuracy on diverse contract portfolios, with empirical validation showing 94% true-positive rates on high-severity clause flags (auto-renewal 99%, indemnification 96%, non-compete 93%). Adoption among in-house legal teams has nearly quadrupled since 2024, with 63% of large corporations explicitly using AI for clause identification. Major CLM vendors bundle clause extraction as a default module, and the market is projected to grow from $2.1B (2025) to $3.9B (2030) at 17.3% annual growth.

The practice occupies a well-defined layer between raw document understanding and higher-level contract governance: it identifies, classifies, and flags risky clauses for human review, including automated redline generation and clause-level risk scoring. The remaining challenge is not whether the technology delivers value but how organisations overcome governance and accuracy-validation barriers to scale past pilots.

Hallucinations remain the defining limitation: AI systems hallucinate on legal questions at an 18.7% rate (vs. 0.7% on basic summarization), and courts have admonished lawyers for filing briefs containing fabricated case citations. Independent research in April 2026 documented 800+ U.S. legal decisions marred by AI-generated hallucinations. Integration complexity, data-quality concerns, and unresolved liability questions around autonomous decision-making mean that most deployments still cluster in high-volume, lower-stakes workflows. For high-stakes M&A and regulatory work, purpose-built extraction tools with human-in-the-loop review remain the default; generative AI alternatives are gaining ground in cost-conscious environments but carry accuracy and liability trade-offs that demand explicit guardrails and rigorous human review.

CURRENT LANDSCAPE

The vendor ecosystem has bifurcated along a clear risk-tolerance line. Purpose-built platforms — Kira Systems (84% penetration among top M&A firms), Luminance (1,000+ customers reporting 90% time savings), and Legartis (>90% F1 scores) — dominate institutional and high-stakes corporate work. Generative AI alternatives like Zuva Analyze, IntelAgree, and ClauseoAI serve cost-conscious in-house teams and SMBs where speed matters more than precision guarantees. Icertis sustains $250M+ ARR with Fortune 100 penetration and recently acquired Dioptra; Agiloft, Icertis, and Ironclad all now ship clause extraction as a bundled default. Recent Q1-Q2 2026 adoption data shows 52% of in-house legal teams actively using or evaluating AI for contract review (up 4x since 2024), with 87% of corporate general counsel now reporting some form of AI use, up from 44% the prior year.

Current real-world deployments demonstrate production-grade maturity. Concord's production platform achieved 98% accuracy across 11 months and thousands of contracts, with task-specific variance (technology agreements 99%, healthcare 94%, construction 96%, financial services 97%) and processing speed improvement from 92 minutes to 26 seconds per contract. ContractIQ's SME deployment compressed a 200-contract acquisition due diligence from 160 hours to 90 minutes with explicit zero-hallucination guardrails. Kalaam Telecom Group's multi-country Luminance deployment, Trench Group's 80% autonomous handling, and Arvato's DPA review acceleration (45-60 minutes to under ten) all confirm maturity across geographies and use cases.

However, deployment and scaling barriers remain structural. Consilio's survey of 800+ legal professionals found 73% cite hallucinated outputs as their top concern; 58% identify accuracy and trust as the primary blocker to broader adoption; and only 7% have documented AI governance frameworks despite widespread implementation. April 2026 evidence reveals the depth of this concern: an independent tracking database documents 800+ U.S. legal decisions marred by AI hallucinations (the Fourth Circuit admonished a lawyer for briefs citing nonexistent cases), and independent research shows 18.7% hallucination rates on legal questions—far exceeding the 0.7% baseline on simpler tasks. Leading-edge deployments address this through explicit guardrails (ContractIQ's zero-hallucination policy flags "not stated" rather than fabricating values), but these add manual review overhead. FTI Consulting's rigorous survey of 224 $100M+ corporations found only 39% report enterprise-level business impact from AI investments. Ironclad's framework for AI contract metrics documents that AI ROI typically takes 2-4 years to materialize, longer than most organizations are prepared to wait. Stanford and Caltech research published in early 2026 documented fundamental LLM reasoning limitations in formal contract logic. The EU AI Act, reaching full applicability in August 2026, adds regulatory complexity, though most clause extraction features fall into limited- or minimal-risk categories and vendors are adapting accordingly.

TIER HISTORY

Research        Jan-2018 → Jan-2018
Bleeding Edge   Jan-2018 → Jan-2019
Leading Edge    Jan-2019 → Apr-2024
Good Practice   Apr-2024 → present

EVIDENCE (113)

— Empirical study of 327 real contracts (Jan-Apr 2026) with attorney validation showing 94% true-positive rate on high-severity flags; clause-specific accuracy: auto-renewal 99%, indemnification 96%, non-compete 93%—confirming production accuracy benchmarks.

— Independent comparison of five enterprise contract review tools establishing that 95–99% accuracy on risk identification is standard for leading platforms in controlled studies; caveat: unusual jurisdictions and bespoke structures remain challenging.

THE LEGAL TECH AI VISIBILITY INDEX 2026: Adoption Metrics

— Market research ranking shows Kira Systems used by ~70 of top 100 global law firms and >80% of top 25 M&A practices, documenting vendor consolidation and institutional adoption at elite tier.

— Hallucination benchmark report showing 18.7% hallucination rate on legal questions (vs. 0.7% on basic summarization), quantifying fundamental AI limitation for contract analysis and high-stakes legal work.

— Fourth Circuit court admonishment of attorney filing briefs with hallucinated case citations; independent tracking database documenting 800+ U.S. legal decisions marred by AI hallucinations, establishing critical limitation in production contract review.

— Aggregated adoption metrics from Thomson Reuters and Gartner showing specific time-savings deployments (Agristo 2hr→15min, ECS 8hr→few hrs, Duvel 1day→20min); reports 53% of organizations seeing ROI and 75% median time reduction in contract review.

— Government contractor deployment using Icertis for compliance-driven clause extraction and risk flagging in federal contracting context (FAR/DFARS compliance), demonstrating public-sector production use.

— Implementation case study documenting production contract review deployments at major law firms using playbook-based clause extraction, risk flagging, and governance frameworks for multi-step analysis workflows.
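Several evidence items above report true-positive rates and F1 scores against attorney-validated labels. A minimal sketch of how such figures are derived from a labelled validation set, using hypothetical data:

```python
def clause_metrics(predictions: list[bool], labels: list[bool]) -> dict[str, float]:
    """Compute true-positive rate (recall), precision, and F1 for one
    clause type, given per-contract flag predictions and attorney labels."""
    tp = sum(p and l for p, l in zip(predictions, labels))      # correctly flagged
    fp = sum(p and not l for p, l in zip(predictions, labels))  # flagged in error
    fn = sum(l and not p for p, l in zip(predictions, labels))  # missed clauses
    recall = tp / (tp + fn) if tp + fn else 0.0       # true-positive rate
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"tpr": recall, "precision": precision, "f1": f1}

# Hypothetical attorney-validated sample: did each contract contain the
# clause (label), and did the system flag it (prediction)?
labels      = [True, True, True, True, False, False, True, False]
predictions = [True, True, True, False, False, True, True, False]
print(clause_metrics(predictions, labels))
```

Note that a headline true-positive rate says nothing about false positives; the per-clause breakdowns cited above (auto-renewal 99%, indemnification 96%, non-compete 93%) matter precisely because aggregate figures can hide weak clause types.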

HISTORY

  • 2018: LawGeex study established AI superiority over lawyers on contract risk identification (94% vs 85% accuracy). Luminance secured law firm deployments in Europe; Kira Systems integrated clause extraction into NetDocuments. Multiple vendors demonstrated working products with customer traction; productivity gains driving adoption over hype cycle skepticism.
  • 2019: Major law firms BCLP and Cassels deployed Kira Systems at scale for global high-volume contract work. Industry adoption survey found only 20% of law firms using AI/ML, highlighting persistent organizational resistance despite technical maturity. Critical assessments emerged questioning whether traditional NLP/ML extraction could fully address legal sector needs.
  • 2020: Luminance added Word document integration for in-platform remediation; Kira introduced Q&A interfaces for extracted data and differential privacy protections for shared models. LawGeex deployments expanded to dozens of Global 2000 companies. Platforms demonstrated capability breadth (Kira deployed on police contracts for reform advocacy), but industry adoption remained stalled at 20% of law firms—technical maturity had not overcome organizational friction around process redesign and staff retraining.
  • 2021: Market consolidation began with Litera's acquisition of Kira Systems. Evidence of real-world deployments accumulating: Lander & Rogers and BP showed significant time/effort reductions; Kalexius independent testing confirmed efficiency gains for junior lawyers. Luminance achieved 300+ customer base with 40% YoY growth. However, practitioner feedback revealed persistent implementation barriers (deployment timelines, vendor support, model maintenance costs) and critical assessments questioned extraction platform flexibility and semantic depth for complex legal work.
  • 2022-H1: Continued vendor platform maturation with open-source research models achieving SOTA results on clause extraction benchmarks (46.6% AUPR on CUAD dataset). Deployments expanded internationally: French law firm Lerins & BCW implemented Luminance for shareholder and M&A contracts; Brightleaf enabled railroad company extraction of domain-specific attributes from legacy contracts. Critical perspectives emerged questioning viability of certain AI approaches (record-based markup) while vendors emphasized human-AI collaboration maturity with 96-97% accuracy. Systematic review of 72 peer-reviewed sources synthesized findings on extraction adoption, identifying interoperability and integration costs as remaining barriers despite platform capability advances.
  • 2022-H2: Enterprise and government deployment acceleration: IDEXX (20K contracts in 20 minutes with Luminance), Arvato (45-60 min DPA review cut to 10 min with Legartis), IRS (6-hour clause review reduced to 6 minutes), DHS procurement AI integration. Luminance customer base grew tenfold with Fortune 500 signings. Advanced research (ConReader, EMNLP 2022) pushed implicit-relation modeling for clause extraction. Gartner 2022 Hype Cycle positioned Advanced Contract Analytics in trough of disillusionment, signaling reality-hype gap. Emerging generative AI (Spellbook/GPT-3) demonstrated complementary clause explanation and Q&A capabilities, indicating the start of a shift toward hybrid extraction-generation workflows.
  • 2023-H1: Multi-vendor enterprise adoption deepened with IDEXX/Luminance sanctions screening (20K contracts, 20 min), Deloitte/Luminance contract standardization (4.5K docs, 50% time savings), and Swedish law firm Moll Wendén/Luminance M&A deployment. Icertis reported 50% YoY AI adoption increase with named customers (Cigna, HERE). Regulatory uncertainty emerged: Japanese legal analysis questioned AI review's legality under unauthorized-practice-of-law (UPL) rules. Generative AI (GPT-4) tested for contract review showed hallucinations and missed clauses, reinforcing value of purpose-built extraction platforms. Market consolidation continued (Litera platform spanning negotiation-to-analytics); adoption barriers shifted from technology to organizational integration (70% of organizations still lacked fully automated CLM).
  • 2024-Q1: Generative AI entered mainstream contract review discourse; Icertis scaled AI copilots to $250M+ ARR with Fortune 100 penetration. Dioptra and other vendors demonstrated high-accuracy AI agents in production deployments (95%+ accuracy at law firms). Market bifurcated: purpose-built extraction tools (mature, proven, low risk) competed with generative AI alternatives (fast, but demanding organizational caution on liability, IP, compliance). Adoption intent remained strong (75% of in-house teams wanted AI), but organizational barriers persisted (63% waiting for expertise, IP risk concerns, data privacy questions). Critical legal analyses emerged questioning liability frameworks and regulatory compliance when using generative AI for sensitive contract work.
  • 2024-Q2: Adoption acceleration became visible as in-house legal teams rapidly embraced generative AI for contract review: Ironclad's survey of 800 lawyers showed 90% in-house adoption for flagging risky clauses; Juro survey found 85.7% of in-house lawyers globally now use GenAI (up from 55%). Icertis demonstrated enterprise ROI: one customer realized $30M cash benefit from optimized contract terms using AI risk assessment. Purpose-built extraction platforms remained dominant, but practitioner analyses highlighted persistent limitations: reliance on historical training data, inability to understand language nuances, context-dependent complexity, and irreducible need for human negotiation oversight. Gap between adoption intent and full-scale deployment persisted as organizations balanced capability gains against accuracy concerns, IP risks, and liability frameworks.
  • 2024-Q3: Market maturation accelerated with new entrant Zuva Analyze (spun from original Kira team) achieving 2-3x speed improvement in beta testing, while Kira maintained market dominance with 84% penetration in top M&A law firms and 64% of Am Law 100. Litera expanded Kira with Rapid Clause Analysis features for bulk clause extraction and comparison. LegalOn survey showed only 8% of legal professionals currently using AI despite 70% considering it, with time burdens (3+ hours per contract) driving adoption interest. Critical assessments highlighted persistent limitations: hallucination rates of 3-10%, IP ownership ambiguity, autonomous contracting consent issues, and governance gaps—signaling organizational caution despite platform maturity. Market bifurcation solidified: purpose-built extraction tools maintained dominance for high-stakes work, while generative AI alternatives accelerated adoption in resource-constrained environments.
  • 2024-Q4: Mainstream deployment phase reached with organizational momentum: Harbor's year-end law department survey documented majority prioritizing AI for workload management and cost control. IDC research predicted 69% of legal professionals would increase generative AI use over next two years, reflecting sustained adoption trajectory. Purpose-built extraction platforms (Kira, Luminance, eBrevia) consolidated market dominance for high-stakes M&A and corporate work with proven accuracy and liability credibility. Zuva Analyze launched with 2-3x speed improvements for in-house legal teams; Icertis sustained $250M+ ARR with Fortune 100 penetration. However, critical assessments hardened: vendor cost escalation risks, hallucination rates, IP ownership ambiguity, and governance deficits remained persistent organizational friction points. Market bifurcation reflected risk calculus—purpose-built tools dominated institutional deployments for liability protection, generative AI alternatives accelerated in resource-constrained in-house environments where speed outweighed precision concerns.
  • 2025-Q1: Vendor consolidation and platform maturation accelerated with Litera launching unified Litera One platform (March 2025) integrating drafting, review, and knowledge management; Icertis released Vera Analytics for GenAI-powered clause extraction. SMB market expansion: ClauseoAI entered with 500+ users at sub-dollar pricing. Adoption growth continued: LegalOn survey reported 17% of large companies using AI contract review (75% YoY growth) with 44% of all organizations using AI for contracting workflows. Barriers remained persistent: 55% cited data quality concerns, governance gaps, vendor cost escalation, and IP ownership ambiguity continued to shape organizational risk calculus despite sustained momentum toward AI-assisted contract review.
  • 2025-Q2: Ecosystem integration accelerated with Icertis integrating Harvey's legal models (April) and Luminance deploying Azure OpenAI across 600+ organizations (April). Research benchmarks emerged: Harvey's study revealed out-of-the-box LLMs achieved 65-70% accuracy in deal point extraction vs. human experts, establishing baselines for generative AI clause understanding. Adoption metrics confirmed dual demand: SpotDraft survey found clause analysis the top priority for 74% of in-house legal professionals; Counselwell benchmarking showed contract work as leading AI use case at 64% of users. However, practitioner assessments documented persistent limitations (ClauseBase: GenAI "spotty at best" for legal analysis) alongside trust barriers (60% lack confidence in AI outputs). Market bifurcation solidified: purpose-built platforms dominated high-stakes institutional work, generative AI alternatives accelerated in cost-sensitive and speed-prioritized environments.
  • 2025-Q3: Vendor platform maturation accelerated with Litera releasing generative AI integration into Kira Experience (July) for instant clause extraction in any language; Conga launched Redline AI for real-time risk analysis during negotiations. Market research confirmed adoption scale: AI contract review software market valued at $1.88B in 2024, projected to reach $7.5B by 2035 at 13.4% CAGR, with key players including Kira, Luminance, Icertis, and emerging vendors. Organizational adoption indicators: 38% of in-house legal teams actively using AI (50% exploring), clause analysis identified as top priority for 74% of professionals. However, critical adoption friction persisted: MIT-related research found 95% of generative AI pilots failing to deliver measurable ROI, revealing persistent gap between pilot success and production scaling. Integration barriers remained structural (59% cite complexity), alongside organizational hesitation on governance and autonomous decision-making. Purpose-built extraction platforms consolidated institutional dominance for high-stakes work; generative AI alternatives accelerated in cost-conscious environments where pilot-to-production scaling challenges were traded for speed.
  • 2025-Q4: Market bifurcation solidified with purpose-built platforms (Luminance 1,000+ customers, 90% time savings; Legartis >90% F1 scores; Dioptra acquired by Icertis at 40% MoM growth) dominating institutional work and generative AI alternatives capturing cost-conscious segment. Real-world deployments confirmed production maturity: Trench Group achieved 80% autonomous handling and 80% time reduction with Luminance; Arvato cut DPA review from 45-60 min to <10 min with Legartis. However, scaling barriers hardened: 95% of GenAI pilots still failing ROI targets, data quality concerns (55%), integration complexity (59%), and governance gaps persisting despite platform maturity. Market projected to reach $7.5B by 2035, but organizational hesitation around liability, accuracy, and scaling remained the primary constraint on broader adoption beyond leading-edge deployments.
  • 2026-Jan: Mainstream adoption acceleration confirmed: LegalOn survey found AI adoption for contract review nearly quadrupled since 2024, with contract review now foundation of legal AI enablement. Luminance launched new Legal-Grade AI platform with institutional memory; 75% annual growth in AI contract review pilots; Agiloft, Icertis, and Ironclad bundled clause extraction as default modules. Thomson Reuters and industry surveys confirmed persistent barriers: reliability skepticism (55% data quality concerns), integration complexity (59%), governance gaps despite vendor maturity. Bifurcated market dynamics continued: purpose-built platforms dominated institutional M&A work; generative alternatives captured SMB and cost-conscious segments.
  • 2026-Feb: Continued deployment momentum with Kalaam Telecom Group (Bahrain, Saudi Arabia, Kuwait, UAE, Jordan, Egypt, UK) adopting Luminance for centralized contract review and reduced turnaround times. Icertis and World Commerce & Contracting survey of 500+ practitioners confirmed shift from experimentation to measurable impact phase. Deloitte Global CPO Survey: 41.27% of procurement leaders identified contract extraction as top GenAI use case, though only 4% achieved large-scale deployment against 49% pilot activity. Critical research (Stanford/Caltech, TMLR 2026) documented fundamental LLM reasoning limitations for contract logic, highlighting ongoing tension between vendor ecosystem expansion and architectural AI constraints. EU AI Act enforcement timeline (full applicability August 2026) beginning to shape vendor compliance strategies, with most CLM extraction features classified as limited/minimal risk.
  • 2026-Apr: Mainstream adoption confirmed across institutional and government tiers: 87% of general counsel now use AI (up from 44% prior year) with 63% specifically using AI for clause identification; Kira Systems entrenched in 70 of top 100 global law firms and over 80% of top 25 M&A practices; Astrion selected Icertis for federal contracting compliance, advancing public-sector deployment. Production accuracy benchmarks strengthened — Concord reached 98% accuracy across 11 months of live contracts, compressing review from 92 minutes to 26 seconds; Inkvex's independent study of 327 real contracts (attorney-validated) confirmed 94% true-positive rates on high-severity flags with clause-specific accuracy of auto-renewal 99%, indemnification 96%, non-compete 93%; independent comparison of five enterprise platforms established 95-99% accuracy as standard for leading tools in controlled settings. Hallucination research quantified the practice's structural ceiling: 18.7% hallucination rate on legal questions (vs. 0.7% on basic tasks), and 800+ documented U.S. legal decisions marred by AI-generated false citations — including a Fourth Circuit admonishment — with only 7% of deploying organisations having documented AI governance frameworks, reinforcing that scaling beyond pilots remains blocked by accuracy validation and governance readiness rather than vendor capability.

TOOLS