The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that performs legal research, analyses case law, and assists with e-discovery document review and classification. Includes case law semantic search and predictive coding for document review; distinct from litigation prediction which forecasts outcomes rather than finding relevant materials.
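To make "predictive coding" concrete: the core idea is to rank unreviewed documents by similarity to examples attorneys have already tagged relevant, so human review effort goes to the likeliest matches first. The sketch below is a toy illustration under simplifying assumptions (a bag-of-words TF-IDF model, cosine similarity to a seed-set centroid, and an invented `rank_for_review` helper); production systems use trained classifiers with iterative sampling and validation.

```python
# Toy predictive-coding sketch: rank unreviewed documents by TF-IDF
# cosine similarity to the centroid of attorney-labelled relevant docs.
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight dicts for a list of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_for_review(seed_relevant, unreviewed):
    """Rank unreviewed docs by similarity to the relevant seed set."""
    all_docs = seed_relevant + unreviewed
    vecs = tfidf_vectors([d.lower().split() for d in all_docs])
    seed_vecs = vecs[:len(seed_relevant)]
    cand_vecs = vecs[len(seed_relevant):]
    centroid = Counter()                         # average of seed vectors
    for v in seed_vecs:
        for t, w in v.items():
            centroid[t] += w / len(seed_vecs)
    return sorted(zip(unreviewed, (cosine(v, centroid) for v in cand_vecs)),
                  key=lambda p: -p[1])

# Hypothetical three-document pool against two labelled seed documents.
seeds = ["merger agreement indemnification clause",
         "indemnification obligations under the merger"]
pool = ["lunch menu for friday",
        "draft indemnification clause for the merger agreement",
        "quarterly parking pass renewal"]
for doc, score in rank_for_review(seeds, pool):
    print(f"{score:.3f}  {doc}")
```

The responsive draft clause rises to the top while unrelated documents score zero; at matter scale this is what lets teams cut the review population rather than read linearly.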
AI-driven legal research and e-discovery have a mature, proven ecosystem: the operational question is deployment strategy, not viability. Technology-assisted review earned judicial acceptance over a decade ago, and cloud platforms from Relativity and Everlaw long ago commodified the tooling for multimillion-document matters. Generative AI now extends these capabilities into legal research, privilege review, and case strategy, with generally available products from every major vendor and documented productivity gains of 50-70% on large matters. The defining tension today is not whether to adopt but how to manage reliability: hallucination rates in legal research tools remain material, creating a sharp split between high-confidence use cases (privilege logging, document review with human verification) and lower-confidence ones (open-ended legal research, where fabricated citations carry sanctions risk). Legal AI adoption has reached an inflection point: 63% of mid-sized firms have formally adopted GenAI tools, and 87% of corporate general counsel report active use. Yet only 17.7% of e-discovery professionals deploy AI on most cases, reflecting justified caution about liability, validation gaps, and governance overhead. Organisations that match use case to verification tolerance are extracting real value. Those waiting for zero-defect AI research are waiting for a capability the current generation does not offer.
Thomson Reuters CoCounsel has reached one million users across 107 countries, covering 80% of the Am Law 100, with users reporting 2.6x faster legal research and document review. Corporate legal departments are moving in step: a survey of 657 global legal professionals found GenAI adoption doubled to 52% among U.S. in-house teams, with 64% expecting reduced reliance on outside counsel. Broader legal-profession adoption reached 69% across all types of AI tools and 42% for legal-specific tools, with legal research leading use cases at 58% adoption. April 2026 global data confirms the acceleration: 92% of 810 lawyers across the US, China, and 9 EU countries report using AI tools daily, with 80% of GenAI-adopting lawyers relying on AI for legal research and 62% reporting 6–20% time savings. E-discovery firms have moved from pilot to active deployment at scale: 64% now report active AI integration, though 33% cite accuracy as the primary barrier. Vendors continue pushing upmarket. Relativity’s aiR for Case Strategy, generally available since early 2026 with 50+ customers, automates fact extraction and witness summaries, extending AI from document review into litigation intelligence. One case study showed 32 deposition transcripts summarised in minutes rather than hours, a 70% time reduction. A UK law firm deployed Relativity aiR mid-project on a 100K-document matter and achieved a 75% review-population reduction and £50K in cost savings within one week.
These gains coexist with persistent reliability and governance failures. Courts have documented 280+ filings containing AI-fabricated citations since 2023, with a sevenfold surge in 2025 and Q1 2026 sanctions totalling $145K. Hallucination rates in specialised legal research tools remain at 17-34% (Lexis+ AI 17%, Westlaw AI 34%), with independent researchers documenting 1,227+ hallucination cases globally and 486+ cases before US courts as of April 2026. Platform-specific redaction defects have triggered court sanctions exceeding $2.5M. Even Am Law 100 firms with comprehensive AI policies have failed verification: Sullivan & Cromwell submitted a bankruptcy brief with ~40 fabricated citations despite formal controls and training, demonstrating that the verification burden exceeds organisational discipline. A critical survey of 19 EDRM power-user practitioners found that evaluation of GenAI document review effectiveness remains ad hoc, with no statistical validation frameworks and no disclosures to courts. Regulatory pressure is tightening. The EU AI Act Article 50 transparency requirements take effect in August 2026, classifying AI document review systems as high-risk and carrying fines up to €35M or 7% of global revenue, with conformity assessment and audit logging required. Courts are shifting liability from user error to tool architecture: Q1 2026 rulings now scrutinise whether generative tools are "architecturally sufficient for work requiring verified citations." Only 17.7% of e-discovery professionals deploy generative AI on most cases, and 81% of mid-sized firms report internal reliability concerns even as 60% acknowledge capability: practitioners know the tools, but liability exposure, validation gaps, and governance overhead keep deployment selective.
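The "no statistical validation frameworks" finding is worth unpacking, because the standard remedy is well understood: an elusion test. After review, a random sample is drawn from the discard pile; the rate of relevant documents found in that sample bounds how much the review missed, which in turn bounds recall. The sketch below illustrates the arithmetic under stated assumptions (a Wilson score interval at 95% confidence, invented helper names `wilson_upper` and `recall_lower_bound`, and entirely hypothetical matter numbers); real protocols negotiate sample sizes and confidence levels per matter.

```python
# Elusion-test sketch: bound review recall by sampling the discard pile.
import math

def wilson_upper(k, n, z=1.96):
    """Upper bound of the Wilson score interval for a binomial proportion."""
    if n == 0:
        return 1.0
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return min(1.0, (centre + margin) / denom)

def recall_lower_bound(relevant_found, null_set_size, sample_size, sample_hits):
    """Conservative recall estimate from an elusion sample of the null set."""
    elusion_ub = wilson_upper(sample_hits, sample_size)
    missed_ub = elusion_ub * null_set_size   # worst-case relevant docs missed
    return relevant_found / (relevant_found + missed_ub)

# Hypothetical matter: 40,000 docs tagged relevant, 360,000 discarded;
# a 2,000-doc random sample of the discard pile surfaces 6 relevant docs.
lb = recall_lower_bound(relevant_found=40_000,
                        null_set_size=360_000,
                        sample_size=2_000,
                        sample_hits=6)
print(f"Recall lower bound (95% confidence): {lb:.1%}")
```

A court-defensible workflow records exactly these numbers: sample design, hits found, and the resulting recall bound, which is what the surveyed practitioners are currently not disclosing.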
— Global survey of 810 lawyers across the US, China, and 9 EU countries shows 92% using AI tools daily, 62% reporting 6–20% time savings, and 60% expecting increased investment.
— Ethicore analysis of the HEC Paris hallucination database documents 1,227+ cases, academic studies showing 17% error rates for Lexis+ AI and 34% for Westlaw, and an escalating judicial-response framework.
— Vendor critical analysis: 486 documented hallucination cases in courts, 80% of firms not measuring ROI, attorney-client privilege lost when GenAI is used with no confidentiality guarantee, and EU AI Act fines of up to €35M taking effect in 4 months.
— EDRM-supported survey reporting 64% of e-discovery firms actively deploying AI, with new governance tracking moving the conversation from adoption velocity to adoption accountability.
— Am Law 100 firm advising OpenAI filed a brief with ~40 fabricated citations despite comprehensive AI policies and training, demonstrating that the verification burden persists even at elite firms.
— Thomson Reuters survey of 1,500 professionals shows 80% of GenAI-using lawyers rely on AI for legal research, with 82% weekly usage, indicating AI is embedding into core workflows.
— Analysis of Q1 2026 sanctions ($145K across ~12 cases) shows courts shifting the liability focus from user error to tool design, questioning whether tools are architecturally sufficient for verified-citation work. The emerging standard: using generatively trained models for legal research carries intrinsic liability.
— Comprehensive documentation of platform-specific redaction defects across Relativity, Everlaw, and GoldFynch, with court sanctions ($2.5M+) and failure-mode analysis, a critical negative signal on privilege-protection reliability in production deployments.