The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that extracts, classifies, and flags risky clauses in contracts for human review. Includes automated redline generation and clause-level risk scoring; distinct from autonomous contract assessment which scores entire agreements without human review.
AI-driven clause extraction and risk scoring has crossed from early adoption into proven practice. The technology works: production deployments now achieve 98% accuracy on diverse contract portfolios, and empirical validation shows 94% true-positive rates on high-severity clause flags (auto-renewal 99%, indemnification 96%, non-compete 93%). Adoption among in-house legal teams has nearly quadrupled since 2024, with 63% of large corporations explicitly using AI for clause identification. Major CLM vendors bundle clause extraction as a default module, and the market is projected to grow from $2.1B (2025) to $3.9B (2030) at 17.3% annual growth.

The practice occupies a well-defined layer between raw document understanding and higher-level contract governance: it identifies, classifies, and flags risky clauses for human review, including automated redline generation and clause-level risk scoring. The remaining challenge is not whether the technology delivers value but how organisations overcome governance and accuracy-validation barriers to scale past pilots.

Hallucinations remain the defining limitation. AI systems hallucinate on legal questions at an 18.7% rate (vs. 0.7% on basic summarization), and courts have admonished lawyers for filing briefs containing fabricated case citations; independent research in April 2026 documented 800+ U.S. legal decisions marred by AI-generated hallucinations. Integration complexity, data-quality concerns, and unresolved liability questions around autonomous decision-making mean that most deployments still cluster in high-volume, lower-stakes workflows. For high-stakes M&A and regulatory work, purpose-built extraction tools with human-in-the-loop review remain the default; generative AI alternatives are gaining ground in cost-conscious environments but carry accuracy and liability trade-offs that demand explicit guardrails and rigorous human review.
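The extract-classify-flag layer described above can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's method: the clause taxonomy, regex patterns, and severity weights are all hypothetical stand-ins for the statistical or LLM-based classifiers real platforms use, but the pipeline shape (segment into clauses, classify each, emit flags for human review) is the same.

```python
import re
from dataclasses import dataclass

# Hypothetical clause-type patterns and severities -- illustrative only,
# not a production taxonomy. Real systems use trained classifiers here.
CLAUSE_PATTERNS = {
    "auto_renewal": (re.compile(r"automatically renew", re.I), "high"),
    "indemnification": (re.compile(r"indemnif(y|ies|ication)", re.I), "high"),
    "non_compete": (re.compile(r"non-?compete|shall not compete", re.I), "medium"),
}

@dataclass
class Flag:
    clause_id: int
    clause_type: str
    severity: str
    excerpt: str

def review_contract(text: str) -> list[Flag]:
    """Split a contract into clauses, classify each against known risk
    patterns, and flag matches for human review."""
    clauses = [c.strip() for c in re.split(r"\n\s*\n", text) if c.strip()]
    flags = []
    for i, clause in enumerate(clauses):
        for clause_type, (pattern, severity) in CLAUSE_PATTERNS.items():
            if pattern.search(clause):
                flags.append(Flag(i, clause_type, severity, clause[:80]))
    return flags

contract = """This Agreement shall automatically renew for successive one-year terms.

Supplier shall indemnify Customer against all third-party claims."""

for f in review_contract(contract):
    print(f.clause_type, f.severity)  # auto_renewal high / indemnification high
```

The key design point this sketch preserves is that the system never decides anything: every match becomes a `Flag` routed to a human, which is what distinguishes this practice from autonomous contract assessment.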
The vendor ecosystem has bifurcated along a clear risk-tolerance line. Purpose-built platforms — Kira Systems (84% penetration among top M&A firms), Luminance (1,000+ customers reporting 90% time savings), and Legartis (>90% F1 scores) — dominate institutional and high-stakes corporate work. Generative AI alternatives like Zuva Analyze, IntelAgree, and ClauseoAI serve cost-conscious in-house teams and SMBs where speed matters more than precision guarantees. Icertis sustains $250M+ ARR with Fortune 100 penetration and recently acquired Dioptra; Agiloft, Icertis, and Ironclad all now ship clause extraction as a bundled default. Q1-Q2 2026 adoption data shows 52% of in-house legal teams actively using or evaluating AI for contract review (up 4x since 2024), with 87% of corporate general counsel now reporting some form of AI use, up from 44% the prior year.
Current real-world deployments demonstrate production-grade maturity. Concord's production platform achieved 98% accuracy across 11 months and thousands of contracts, with task-specific variance (technology agreements 99%, healthcare 94%, construction 96%, financial services 97%) and a processing-time reduction from 92 minutes to 26 seconds per contract. ContractIQ's SME deployment compressed a 200-contract acquisition due diligence from 160 hours to 90 minutes with explicit zero-hallucination guardrails. Kalaam Telecom Group's multi-country Luminance deployment, Trench Group's 80% autonomous handling, and Arvato's DPA review acceleration (from 45-60 minutes to under ten) all confirm maturity across geographies and use cases.
However, deployment and scaling barriers remain structural. Consilio's survey of 800+ legal professionals found 73% cite hallucinated outputs as their top concern; 58% identify accuracy and trust as the primary blocker to broader adoption; only 7% have documented AI governance frameworks despite widespread implementation. April 2026 evidence shows the depth of this concern: a tracking database documents 800+ U.S. legal decisions marred by AI hallucinations (the Fourth Circuit admonished a lawyer for briefs citing nonexistent cases), and independent research shows 18.7% hallucination rates on legal questions — far exceeding the 0.7% baseline on simpler tasks. Leading-edge deployments address this through explicit guardrails (ContractIQ's zero-hallucination policy flags "not stated" rather than fabricating values), but this adds manual review overhead. FTI Consulting's rigorous survey of 224 $100M+ corporations found only 39% report enterprise-level business impact from AI investments. Ironclad's framework for AI contract metrics documents that AI ROI typically requires 2-4 years, longer than organisational patience. Stanford and Caltech research published in early 2026 documented fundamental LLM reasoning limitations in formal contract logic. The EU AI Act, reaching full applicability in August 2026, adds regulatory complexity, though most clause extraction features fall into limited- or minimal-risk categories and vendors are adapting accordingly.
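The "flag rather than fabricate" guardrail mentioned above reduces to two rules: only report a value that appears verbatim in the source, and reject any downstream (e.g. LLM-generated) answer that cannot be grounded in the source text. A minimal sketch, with illustrative function names and patterns not drawn from any vendor's actual implementation:

```python
import re

NOT_STATED = "not stated"

def extract_field(clause_text: str, pattern: str) -> str:
    """Return a contract field only if it appears verbatim in the source;
    never synthesise a value when the pattern finds nothing."""
    m = re.search(pattern, clause_text, re.I)
    return m.group(1) if m else NOT_STATED

def verify_grounded(source: str, extracted: str) -> str:
    """Second-pass check: reject any extracted value (e.g. from an LLM)
    that cannot be located in the source text."""
    if extracted != NOT_STATED and extracted.lower() not in source.lower():
        return NOT_STATED
    return extracted

clause = "Either party may terminate with ninety (90) days written notice."

print(extract_field(clause, r"terminate with (.+?) written notice"))
print(extract_field(clause, r"governing law of (\w+)"))  # prints "not stated"
print(verify_grounded(clause, "thirty days"))            # prints "not stated"
```

The manual-review overhead the text mentions follows directly from this design: every `not stated` result is a gap a human must resolve, which is the deliberate cost of never letting the system guess.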
— Empirical study of 327 real contracts (Jan-Apr 2026) with attorney validation showing 94% true-positive rate on high-severity flags; clause-specific accuracy: auto-renewal 99%, indemnification 96%, non-compete 93%—confirming production accuracy benchmarks.
— Independent comparison of five enterprise contract review tools establishing that 95–99% accuracy on risk identification is standard for leading platforms in controlled studies; caveat: unusual jurisdictions and bespoke structures remain challenging.
— Market research ranking shows Kira Systems used by ~70 of top 100 global law firms and >80% of top 25 M&A practices, documenting vendor consolidation and institutional adoption at elite tier.
— Hallucination benchmark report showing 18.7% hallucination rate on legal questions (vs. 0.7% on basic summarization), quantifying fundamental AI limitation for contract analysis and high-stakes legal work.
— Fourth Circuit court admonishment of attorney filing briefs with hallucinated case citations; independent tracking database documenting 800+ U.S. legal decisions marred by AI hallucinations, establishing critical limitation in production contract review.
— Aggregated adoption metrics from Thomson Reuters and Gartner showing specific time-savings deployments (Agristo 2hr→15min, ECS 8hr→few hrs, Duvel 1day→20min); reports 53% of organizations seeing ROI and 75% median time reduction in contract review.
— Government contractor deployment using Icertis for compliance-driven clause extraction and risk flagging in federal contracting context (FAR/DFARS compliance), demonstrating public-sector production use.
— Implementation case study documenting production contract review deployments at major law firms using playbook-based clause extraction, risk flagging, and governance frameworks for multi-step analysis workflows.
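The accuracy figures the evidence above reports (true-positive rate on high-severity flags, per-clause-type accuracy) come from comparing system flags against attorney-validated labels. A minimal sketch of that validation harness, with illustrative input shapes (lists of `(contract_id, clause_type)` flags) rather than any study's actual methodology:

```python
from collections import defaultdict

def per_clause_tpr(predictions, gold):
    """True-positive rate per clause type: of the attorney-labelled flags,
    what fraction did the system also raise?"""
    pred = set(predictions)
    by_type = defaultdict(lambda: [0, 0])  # clause_type -> [hits, total]
    for contract_id, clause_type in gold:
        by_type[clause_type][1] += 1
        if (contract_id, clause_type) in pred:
            by_type[clause_type][0] += 1
    return {t: hits / total for t, (hits, total) in by_type.items()}

# Toy example: attorney-validated gold labels vs. system flags.
gold = [(1, "auto_renewal"), (2, "auto_renewal"),
        (2, "indemnification"), (3, "non_compete")]
predictions = [(1, "auto_renewal"), (2, "auto_renewal"),
               (2, "indemnification")]

print(per_clause_tpr(predictions, gold))
```

Note this measures recall only; a full benchmark would also track false positives per clause type, since over-flagging is what erodes the time savings the case studies report.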