The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that predicts litigation outcomes based on case characteristics, jurisdiction, judge history, and precedent analysis. Includes settlement value estimation and risk scoring; distinct from legal research which finds precedents rather than predicting outcomes.
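To make the definition concrete, here is a purely illustrative sketch of the kind of inputs and output such a predictor works with. The feature names, weights, and logistic form are invented for illustration and do not reflect any vendor's actual method.

```python
import math

def predict_motion_grant(jurisdiction_base_rate, judge_grant_rate, precedent_alignment):
    """Score a motion's grant probability from three hypothetical features.
    All weights are invented for illustration, not fitted to real case data."""
    # Start from the jurisdiction's historical grant rate, expressed as log-odds.
    intercept = math.log(jurisdiction_base_rate / (1 - jurisdiction_base_rate))
    # Shift the log-odds by how the assigned judge deviates from a 50% grant
    # rate and by how well precedent aligns with the movant (-1 to 1).
    z = intercept + 2.5 * (judge_grant_rate - 0.5) + 1.0 * precedent_alignment
    return 1 / (1 + math.exp(-z))
```

In this toy model, a motion before a historically grant-friendly judge with supportive precedent scores higher than the same motion before a grant-averse judge, which is the core intuition behind judge-analytics products.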
Litigation outcome prediction has mature tooling and real deployments but remains stuck at the leading edge: adopted by elite law firms, insurers, and litigation funders while most of the profession watches from the sidelines. Platforms like Lex Machina and Pre/Dicta now cover millions of federal and state cases, offer API ecosystems, and claim 85%+ accuracy on motion prediction. Quinn Emanuel has embedded these tools into litigation strategy. Insurers report measurable loss reductions. The Canadian Department of Justice runs forecasting across 43,000+ active files. Yet the gap between vendor promises and practitioner experience is widening, not closing: only 37% of legal professionals report actual workflow improvements from AI tools, and peer-reviewed research consistently flags low classification performance and data scarcity as unresolved problems. Hallucination rates of 58-88% in legal AI tasks compound the credibility challenge. The practice delivers selective value for well-resourced institutional buyers, but reliability doubts, bias concerns, and regulatory scrutiny are hardening the barriers to broader adoption rather than dissolving them.
The vendor ecosystem continues to mature and expand globally. Lex Machina's database spans 27M+ civil cases across all 94 federal districts and 1,300+ state courts, with a public developer API launched in late 2025 enabling third-party integrations. Pre/Dicta operates across 20M+ cases with multi-stage outcome analytics (JudicialIQ, Prediction, Counsel Compare, Venue Strategist) and appellate coverage. New entrants are competing: UniCourt's DART platform launched judgment and attorney comparison analytics in March 2026; Canotera (Haifa-based startup) commercializes outcome prediction with claimed 85% accuracy across insurance, employment, and personal injury cases. Both Lex Machina and Pre/Dicta are available in educational institutions, signaling expansion beyond elite AmLaw buyers. Global deployment is documented: Canada's Department of Justice manages 43,000+ litigation files through forecasting; international arbitral institutions (CIArb, SVAMC, SCCAI, VIAC) now recognize outcome prediction as a "conservative AI application" with institutional guidance frameworks. At the algorithmic level, general-purpose LLMs have surpassed specialized prediction algorithms: Claude-based Supreme Court predictions exceed FantasySCOTUS's prior ~70% accuracy ceiling.
Institutional deployment evidence continues to accumulate. Casualty insurers using Lex Machina report 2-5% incurred loss reductions. Major law firms (Hogan Lovells, Fields Han Cunniff) integrate Lex Machina for class action risk assessment and forecasting. Employment litigation shows vertical growth (disability accommodation claims +42% YoY in 2025). Patent litigation professionals standardize outcome forecasting for Hatch-Waxman disputes. An Opus2/Ari Kaplan survey found 87% of AmLaw partners view AI case strategy tools as a competitive advantage. The MTMP 2026 conference revealed divergent adoption patterns: firms embedding AI directly into intake workflows report measurable outcomes, while ad-hoc deployment remains inconsistent. These signals describe consolidation and process maturation at the vanguard, not mainstream expansion.
The barriers are hardening. Research documents persistent generalization failures: models achieving 75% historical accuracy degrade to 58-68% on future cases, with judge identity and systemic bias accounting for much of the predictive power. Hallucination rates of 33-88% in legal AI tasks (Stanford 2024 benchmark: 33% for Westlaw AI-Assisted Research, 17% for Lexis+ AI) compound credibility doubts, with over 600 documented incidents globally and at least 128 lawyers implicated. Courts now hold attorneys personally liable for AI-generated errors (Johnson v. Dunn precedent). The EU AI Act explicitly classifies outcome prediction tools as high-risk, effective August 2026, requiring conformity assessment, registration, documentation, and human oversight, with penalties of €15M or 3% of global turnover. The Colorado AI Act takes effect June 2026. Regulatory hardening is global, not regional. A Bloomberg Law survey found only 37% of practitioners reporting actual workflow improvements despite 75% predicting them, a credibility gap that tempers enthusiasm even among willing adopters.
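The historical-to-future degradation pattern can be reproduced in miniature: a toy model that memorizes judge tendencies scores well retrospectively but collapses once behavior on the bench drifts. All data and numbers below are synthetic, chosen only to mirror the shape of the finding.

```python
from collections import Counter, defaultdict

def judge_majority_model(training_cases):
    """Fit a toy predictor that memorizes each judge's majority outcome in
    the training era; returns a predict function."""
    votes = defaultdict(Counter)
    for case in training_cases:
        votes[case["judge"]][case["outcome"]] += 1
    return lambda case: votes[case["judge"]].most_common(1)[0][0]

def accuracy(predict, cases):
    return sum(predict(c) == c["outcome"] for c in cases) / len(cases)

# Synthetic docket: the same judge granted 8 of 10 motions in the training
# era but, after a behavioral shift, grants only 4 of 10 afterwards.
past = [{"judge": "J1", "outcome": i < 8} for i in range(10)]
future = [{"judge": "J1", "outcome": i < 4} for i in range(10)]

model = judge_majority_model(past)
hist_acc = accuracy(model, past)      # high on the era it memorized
future_acc = accuracy(model, future)  # drops once the judge's behavior drifts
```

A model whose signal is largely judge identity evaluates impressively on a random historical split but has no mechanism to track drift, which is why temporal (train-on-past, test-on-future) evaluation produces the lower numbers reported in the research.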
— Stanford 2024 benchmark documents 33% and 17% hallucination rates in legal AI; real case shows attorneys fined for ChatGPT-generated fictitious citations; establishes systemic reliability barrier constraining outcome prediction adoption.
— MTMP 2026 conference analysis shows firms splitting into embedded vs. ad-hoc AI adoption; purpose-built platforms generate 800-900 validation checks per document vs. general LLMs one-at-a-time, enabling quantifiable confidence metrics on outcome predictions.
— Lewis Silkin survey identifies outcome prediction as a recognized 'conservative AI application' in international arbitration; documents institutional guidance from CIArb, SVAMC, SCCAI, and VIAC emphasizing human oversight and verification.
— Documents global deployment of outcome prediction systems (Solution Explorer BC Canada, Xiao Fa China, Victor Brazil, AAA arbitration) while identifying critical limitations: AI cannot explain reasoning or replicate contextual human judgment.
— EU AI Act explicitly classifies litigation outcome prediction tools as high-risk, effective August 2026, imposing conformity assessment, registration, documentation, and human oversight requirements; penalties €15M or 3% of global turnover.
— Hogan Lovells and Fields Han Cunniff attorneys use Lex Machina for risk assessment and forecasting; testimonials evidence active deployment for outcome prediction across major law firms in class action litigation.
— Canotera (Haifa-based startup) commercializes litigation outcome prediction with 85% claimed accuracy using LLMs and geometric machine learning across insurance, employment, and personal injury cases.
— Comprehensive technical guide for IP professionals on systematic litigation outcome prediction across Hatch-Waxman patent disputes; establishes outcome forecasting as standard methodology in pharmaceutical litigation.
2019: Commercial tools (Westlaw Edge, LexisNexis Context, Lex Machina, CourtQuant) emerged for judge analytics and case outcome scoring; Wolters Kluwer launched budget prediction module. France criminalized judicial analytics. Academic research questioned model reliability and bias risks.
2020: Lex Machina expanded into Federal Torts and state court analytics (California, Texas, New York). LexisNexis study reported 70% adoption among surveyed firms (up from 38% in 2017). Theo AI and other startups entered market. However, broader practitioner adoption remained slow (8% of lawyers, per ABA survey); accuracy and reliability concerns persisted.
2021: Major vendors consolidated platforms (LexisNexis launched analytics on Lexis+, Lex Machina expanded specialty modules). Elite firm Lenczner Slaght built proprietary ML prediction system, signaling confidence among sophisticated adopters. Vendor market segmentation matured (Lawptimize repositioned as probabilistic-analysis tool, not prediction). Practitioner demand remained strong but adoption barriers persisted; market growth driven by narrow segments rather than universal adoption.
2022-H1: Academic research on legal judgment prediction matured with comprehensive surveys and empirical validation of prediction accuracy; 68% of law firms reported using legal analytics (up 7% YoY), with litigation finance as key driver. However, satisfaction declined to 37% among practitioners, signaling integration challenges despite widespread tool adoption.
2022-H2: Platform consolidation accelerated: Lex Machina expanded with appellate analytics (400K+ circuit cases) and state motion metrics; Thomson Reuters added company-specific litigation analytics to Westlaw. Academic validation increased (IJCAI survey, PLOS ONE Brazilian study showing deep learning outperforms human experts). Market projections indicated 16% CAGR for legal analytics to 2031. Integration barriers persisted as the adoption constraint.
2023-H1: New entrants emerged with competitive accuracy claims (Pre/Dicta 85% on motions, Rain Intelligence 250% ROI); Pre/Dicta acquired Gavelytics to expand state court coverage (25+ states). Mainstream adoption markers were reached: 68% firm adoption and 52% using predictive litigation tools. SCOTUS_AI research (AUC 0.8087) strengthened academic validation. Market growth accelerated to 31% CAGR projection through 2032. However, satisfaction and integration remained binding constraints on profession-wide deployment.
2023-H2: Pre/Dicta expanded motion prediction to summary judgment and class certification (85% accuracy claimed); Tilleke & Gibbins industry report documented deployment across US, Europe, and China but raised regulatory and bias concerns. Critical peer-reviewed survey (ACL Anthology, Dec 2023) found only 7% of 150+ LJP papers effectively predict outcomes, revealing field-wide methodological weaknesses and poor explainability. Market maintained 68% firm adoption; binding constraint shifted from tooling availability to model credibility amid evidence of research-level limitations.
2024-Q1: Lex Machina released Litigation Footprint (27M cases, 94 federal districts + 1,300+ state courts); firm adoption held steady at 68%. Academic research advanced on Indian legal datasets and civil law prediction frameworks, but Stanford study (Jan 2024) revealed 69-88% LLM hallucination rates on legal reasoning. Practitioner confidence in underlying model accuracy remained the binding constraint on broader adoption.
2024-Q2: Quinn Emanuel integrated Pre/Dicta tool for outcome prediction (May 2024), treating AI forecasting as core litigation strategy. Pre/Dicta expanded to California state courts (June 2024), first major vendor entry into state courts. Lex Machina released vertical-specific industry reports (insurance, antitrust) demonstrating mature institutional deployment. PILOT and PredEx papers advanced methodological foundations for common law and expert-annotated outcome prediction. Model validation gaps and LLM hallucination concerns persisted as credibility constraints on broader adoption.
2024-Q3: Lex Machina continued vertical specialization (trade secret report, September 2024). Academic research advanced methodological constraints via Legal Fact Prediction framework (EMNLP 2025 paper) addressing judges' unavailable fact determinations. UNESCO survey showed 44% of global judicial operators using AI for legal tasks with only 9% institutional training. Practitioner survey revealed 77% of in-house legal teams experienced failed tech implementations, exposing integration as the binding adoption constraint beyond tooling and model reliability.
2024-Q4: EvenUp raised $135M Series D (>$1B valuation) with four new AI products for personal injury settlement prediction, demonstrating strong commercial momentum in outcome-adjacent litigation finance. Lex Machina completed geographic expansion to all 94 federal districts and 1,300+ state courts (3.7M+ case database). Frontiers peer-reviewed study from Italian courts warned that predictive AI misaligns with legal deliberation and reproduces past bias; Wolters Kluwer survey of 712 professionals documented 41-42% reliability doubts among outcome prediction users. UK practitioner analysis identified outcome prediction as Wave 2 adoption with unresolved regulatory, bias, and explainability barriers. Consolidation (platform data completeness), product maturity (settlement prediction for personal injury), and institutional skepticism (doubts about accuracy) marked the quarter's signal balance.
2025-Q1: Quinn Emanuel integrated Pre/Dicta for outcome prediction as core litigation strategy; Lex Machina released 2025 damage awards analysis demonstrating sustained institutional analytics deployment. Adoption broadened to 79% of law firm professionals (315% growth from 2023-2024). Platforms achieved stable scale: Lex Machina 27M+ cases (all 94 federal districts and 1,300+ state courts), Pre/Dicta 20+ years federal data with 85% accuracy on motions. Academic research confirmed high accuracy benchmarks (92% French Supreme Court, 71.9% SCOTUS, 79% ECHR) but identified persistent methodological limitations. Practitioner and judicial communities remained cautious: law firm analysis emphasized professional liability risks from over-reliance; bias and lack of explainability remained binding constraints on judicial deployment. The quarter reflected mature commercial platforms with broad professional adoption but continued skepticism about reliability and potential for misuse.
2025-Q2: Lex Machina integrated Protégé AI assistant (April 2025) for natural-language litigation outcome queries, signaling focus on user experience and accessibility. Pre/Dicta maintained 85% accuracy claims with 13M+ decision analysis; adoption remained concentrated among AmLaw and institutional buyers. SAGE Open systematic review identified low classification performance and data scarcity as core unresolved challenges despite vendor accuracy claims. Gartner analysis projected 30% of successful AI pilots abandoned before production due to integration barriers, confirming that tool availability was no longer the binding constraint on adoption. Practitioner skepticism persisted despite broad platform adoption: questions about explainability, bias, and return on investment continued to limit expansion beyond elite institutional buyers.
2025-Q3: Pre/Dicta expanded platform with appellate forecasting and biographical intelligence tools (August 2025); Bloomberg Law survey revealed widening expectation-reality gap—75% of practitioners predicted AI workflow gains but only 37% reported improvements; majority experienced no change. MIT study (July 2025) documented 95% AI pilot failure rate, with legal sector projects among those delivering no measurable ROI. Insurer adoption showed positive signals (2-5% loss reductions reported by casualty carriers), but industry analyst coverage flagged hallucination concerns (Stanford HAI: 1-in-6 fabrication rate) and questioned accuracy despite vendor claims. Government deployment evidence emerged: Canadian Department of Justice audit documented production-stage litigation forecasting with 43,000 ongoing files. Signal balance reflected product maturity with selective institutional success (insurers, elite firms, government) alongside pervasive integration failures and expectation gaps, suggesting leading-edge plateau rather than mass-market acceleration.
2025-Q4: Lex Machina launched public developer API (December 2025), signaling ecosystem maturity and third-party integration capability; Pre/Dicta maintained 85% motion prediction accuracy with appellate expansion to 15M+ federal cases. Analyst survey (Opus2/Ari Kaplan Advisors) found 87% of AmLaw partners agree AI case strategy tech is competitive advantage, 81% believe required for litigation competitiveness. However, critical barriers hardened: SAGE Open peer-reviewed survey (April 2025) identified low classification performance and data scarcity as unresolved challenges; UK legal analysts warned of bias/discrimination risks citing Post Office Horizon precedent; US Legal Support survey of 2,011 professionals showed 77% expect increased AI use in next 5 years but only 26% prioritize AI/ML for 2026 (tempered from earlier enthusiasm). Platform ecosystem matured (APIs, ecosystem integration) but ROI realization gaps and reliability doubts widened, suggesting consolidation phase with selective institutional deployment rather than expansion.
2026-Jan: Lex Machina released 2026 Trade Secret Litigation Report documenting record 1,500+ federal trade secret filings in 2025 (all-time high), with 65% settlement rate and median trial duration of 1,124 days; signaling sustained institutional analytics deployment. Simultaneously, critical legal liability concerns emerged: over 600 documented AI hallucination cases in legal work implicating 128 lawyers, with Johnson v. Dunn federal court disqualification case establishing precedent that courts hold attorneys personally liable for AI-generated errors. The convergence reflects market maturity (platform scale, specialty analytics, API ecosystem) alongside unresolved liability and accuracy risks hardening adoption barriers.
2026-Feb: Platform ecosystem matured through educational integration (Lex Machina now available at Texas Tech Law Library) and vendor research advancement (Pre/Dicta released judicial behavior studies analyzing 670K+ appellate decisions). However, accuracy and adoption barriers hardened: hallucination analysis documented 58-88% error rates in legal AI tasks with 600+ global incidents; judicial operators globally (44% of 96 countries) faced regulatory scrutiny (EU AI Act classifies justice systems as high-risk); and legal AI tool benchmarking revealed widespread testing failures with competitive systems fabricating facts. Market remained concentrated among elite AmLaw and institutional buyers with skepticism about ROI realization and tool reliability blocking mainstream expansion.
2026-Q1/April: Vendor ecosystem continued maturation. UniCourt's DART platform launched judgment and attorney comparison analytics (March 2026), signaling new competitive entry into outcome prediction. Lex Machina expanded employment litigation analytics (disability accommodation claims +42% YoY). Pre/Dicta confirmed platform scale at 20M+ cases with multi-module analytics (JudicialIQ, Prediction, Counsel Compare, Venue Strategist) spanning the full litigation lifecycle. General-purpose AI systems surpassed specialized outcome prediction algorithms: Claude-based Supreme Court predictions outperformed FantasySCOTUS's prior accuracy ceiling (~70%), indicating significant capability advancement. Peer-reviewed research (Goodman-Delahunty, 481 US attorneys) documented lawyers' systematic overconfidence bias, reinforcing demand for objective prediction tools. However, critical research highlighted persistent limitations: ECHR outcome prediction models degraded from 75% historical to 58-68% future accuracy, with judge identity and systemic bias accounting for much of the predictive power. Industry benchmarks (Blott 2026) documented 17-34% error rates across leading legal AI tools, while regulatory hardening continued (EU AI Act August 2026, Colorado AI Act June 2026). Algorithmic advances in outcome prediction thus coincided with evidence of generalization failure and with hardening regulatory and liability constraints on adoption.
The MTMP Spring 2026 conference revealed a deepening split between firms embedding prediction tools directly into intake workflows (purpose-built platforms running 800-900 validation checks per document for quantifiable confidence metrics) and ad-hoc deployments that remain inconsistent; international arbitral institutions (CIArb, SVAMC, SCCAI, VIAC) now formally recognize outcome prediction as a "conservative AI application" with institutional guidance frameworks emphasizing human oversight, while hallucination research (Stanford 33% rate) continued to constrain adoption among firms unable to sustain robust verification protocols.