Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement across one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ↔ ESTABLISHED

Legal research & e-discovery

GOOD PRACTICE

TRAJECTORY

Stalled

AI that performs legal research, analyses case law, and assists with e-discovery document review and classification. Includes case law semantic search and predictive coding for document review; distinct from litigation prediction which forecasts outcomes rather than finding relevant materials.

OVERVIEW

AI-driven legal research and e-discovery have a mature, proven ecosystem: the operational question is deployment strategy, not viability. Technology-assisted review earned judicial acceptance over a decade ago, and cloud platforms from Relativity and Everlaw long ago commodified the tooling for multimillion-document matters. Generative AI now extends these capabilities into legal research, privilege review, and case strategy, with GA products from every major vendor and documented productivity gains of 50-70% on large matters. The defining tension today is not whether to adopt but how to manage reliability: hallucination rates in legal research tools remain material, creating a sharp split between high-confidence use cases (privilege logging, document review with human verification) and lower-confidence ones (open-ended legal research where fabricated citations carry sanctions risk). Legal AI adoption has reached an inflection point—63% of mid-sized firms have formally adopted GenAI tools while 87% of corporate general counsel report active use. Yet only 17.7% of e-discovery professionals deploy AI on most cases, reflecting justified caution about liability, validation gaps, and governance overhead. Organisations that match use case to verification tolerance are extracting real value. Those waiting for zero-defect AI research are waiting for a capability the current generation does not offer.

CURRENT LANDSCAPE

Thomson Reuters CoCounsel has reached one million users across 107 countries, covering 80% of the Am Law 100, with users reporting 2.6x faster legal research and document review. Corporate legal departments are moving in step: a survey of 657 global legal professionals found GenAI adoption doubled to 52% among U.S. in-house teams, with 64% expecting reduced outside counsel reliance. Broader legal profession adoption reached 69% across all types of AI tools and 42% for legal-specific tools, with legal research leading use cases at 58% adoption. April 2026 global data confirms acceleration: 92% of 810 lawyers across the US, China, and 9 EU countries report using AI tools daily, with 80% of GenAI-adopting lawyers relying on AI for legal research and 62% reporting 6–20% time savings. E-discovery firms have moved from pilot to active deployment at scale—64% now report active AI integration, though 33% cite accuracy as the primary barrier. Vendors continue pushing upmarket. Relativity’s aiR for Case Strategy, launched in general availability in early 2026 with 50+ customers, automates fact extraction and witness summaries, extending AI from document review into litigation intelligence. One case study showed 32 deposition transcripts summarised in minutes rather than hours, a 70% time reduction. A UK law firm deployed Relativity aiR mid-project on a 100K document matter and achieved a 75% review population reduction and £50K in cost savings in one week.

These gains coexist with persistent reliability and governance failures. Courts have documented 280+ filings containing AI-fabricated citations since 2023, with a sevenfold surge in 2025 and Q1 2026 sanctions totaling $145K. Hallucination rates in specialized legal research tools remain at 17-34% (Lexis+ AI 17%, Westlaw AI 34%), with independent researchers documenting 1,227+ hallucination cases globally and 486+ cases before US courts as of April 2026. Platform-specific redaction defects have triggered court sanctions exceeding $2.5M. Even Am Law 100 firms with comprehensive AI policies have failed verification: Sullivan & Cromwell submitted a bankruptcy brief with ~40 fabricated citations despite formal controls and training, demonstrating that the verification burden exceeds organizational discipline. A critical survey of 19 EDRM power-user practitioners found that evaluation of GenAI document review effectiveness remains ad hoc, with no statistical validation frameworks and no disclosures to courts. Regulatory pressure is tightening. The EU AI Act's Article 50 transparency requirements take effect in August 2026, and AI document review systems are classified as high-risk under Annex III, carrying fines of up to €35M or 7% of global revenue and requiring conformity assessment and audit logging. Courts are shifting liability from user error to tool architecture: Q1 2026 rulings now scrutinize whether generative tools are "architecturally sufficient for work requiring verified citations." Only 17.7% of e-discovery professionals deploy generative AI on most cases, and 81% of mid-sized firms report internal reliability concerns even as 60% acknowledge capability, a gap reflecting justified caution about liability, validation gaps, and governance overhead rather than ignorance of the tools.
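The validation gap the EDRM survey flags is concrete: a defensible review needs sample-based estimates of effectiveness with stated uncertainty, not anecdote. As a minimal sketch of what such a framework involves (the counts below are hypothetical, and real protocols also estimate precision and elusion), here is a Wilson score confidence interval for recall computed from a hand-coded control sample:

```python
import math

def wilson_interval(hits: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion (hits out of n)."""
    if n == 0:
        return (0.0, 1.0)
    p = hits / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Hypothetical validation sample: reviewers hand-code a random draw of
# documents, then check how many of the truly relevant ones the AI review
# tool actually flagged as responsive.
relevant_flagged_by_tool = 230  # relevant sample docs the tool caught
relevant_total = 250            # all relevant docs in the hand-coded sample

recall_point = relevant_flagged_by_tool / relevant_total
lo, hi = wilson_interval(relevant_flagged_by_tool, relevant_total)
print(f"recall ≈ {recall_point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

With these hypothetical counts the point estimate is 92% recall, but the interval spans roughly 88-95% — exactly the kind of uncertainty statement a court-disclosed validation protocol would report rather than a bare percentage.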

TIER HISTORY

Research: Jan-2015 → Jan-2015
Bleeding Edge: Jan-2015 → Jan-2016
Leading Edge: Jan-2016 → Jan-2019
Good Practice: Jan-2019 → present

EVIDENCE (140)

— Global survey of 810 lawyers across US, China, and 9 EU countries shows 92% using AI tools daily, 62% reporting 6–20% time savings, 60% expecting increased investment.

— Ethicore analysis of the HEC Paris hallucination database documents 1,227+ cases, cites academic studies showing Lexis+ AI 17% and Westlaw 34% error rates, and outlines an escalating judicial response framework.

— Vendor critical analysis: 486 documented hallucination cases in courts, 80% of firms not measuring ROI, attorney-client privilege potentially lost when using GenAI tools that offer no confidentiality guarantee, and EU AI Act fines of up to €35M taking effect within four months.

— EDRM-supported survey reporting 64% of e-discovery firms actively deploying AI, with new governance tracking moving conversation from adoption velocity to adoption accountability.

— Am Law 100 firm advising OpenAI filed brief with ~40 fabricated citations despite comprehensive AI policies and training—demonstrates verification burden persists even at elite firms.

— Thomson Reuters survey of 1,500 professionals shows 80% of GenAI-using lawyers rely on AI for legal research with 82% weekly usage, indicating embedding into core workflows.

— Analysis of Q1 2026 sanctions ($145K across ~12 cases) shows courts shifting liability focus from user error to tool design, questioning whether tools are architecturally sufficient for verified citation work. Emerging standard: using generatively-trained models for legal research carries intrinsic liability.

— Comprehensive documentation of platform-specific redaction defects across Relativity, Everlaw, and GoldFynch with court sanctions ($2.5M+) and failure mode analysis—critical negative signal on privilege protection reliability in production deployments.

HISTORY

  • 2015: Courts endorsed technology-assisted review; Relativity dominated market with 300K+ users; Casetext launched free legal research platform; FTI Consulting documented broad Fortune 1000 adoption of TAR; predictive coding gained acceptance as cost-control mechanism for large-scale document review.
  • 2016: UK courts approved predictive coding (Pyrrho case); GM ignition switch MDL demonstrated massive-scale TAR deployment (2.5M documents, 31 law firms); Everlaw raised $8.1M Series A from A16z and partnered with legal services firms; predictive coding methodology advanced to version 4.0; e-discovery transitioned from emerging to established practice.
  • 2017: Expert consensus confirmed mainstream adoption—mass uptake by law firms and NewLaw entities; vendor ecosystem expanded with CS DISCO launching AI-powered platforms (85-95% accuracy); cloud delivery became standard; critical research revealed significant algorithmic variance in legal research (only 7% overlap across six databases), tempering optimism about search reliability; adoption remained uneven by firm size (54.6% among 1000+ lawyer firms, lower for mid-market), with focus shifting to implementation challenges and realistic ROI over hype.
  • 2018: TAR achieved full judicial acceptance (City of Rockford affirmed both keyword and TAR methods); UK litigation validated TAR at trial (David Brown v. BCA); vendor platforms advanced with performance metrics and visualization (Everlaw); legal research tools expanded conference presence (Casetext CARA, Ross EVA); however, United Airlines case exposed critical implementation failures, demonstrating that TAR success depends on rigorous training, validation protocols, and human oversight rather than algorithm sophistication alone.
  • 2019: UK courts expanded predictive coding approval to cases exceeding 3M documents; market data confirmed $7.87B e-discovery review spend with projected growth to $12.15B; adoption surveys showed 86% using active learning and 91% deploying TAR across multiple areas; however, only 23% of legal professionals actively using AI/ML, only ~20% of firms with active AI projects, and critical assessments emerged regarding structural barriers (partnership incentives, billable hours) and realistic 3-5 year ROI timelines, revealing gap between technical maturity and organizational adoption.
  • 2020: Everlaw raised $62M Series C validating market confidence; RelativityOne adoption accelerated 80% YoY as firms consolidated tool stacks and prioritized cost reduction; predictive coding expanded accessibility to mid-market and small firms (500%+ ROI demonstrated on tens-of-thousands-document cases); however, adoption remained low—ABA survey found only 7% of firms using AI tools, Oxford research confirmed only 27% using AI for legal research and 12% for TAR, quantifying persistent gap between technical readiness and organizational uptake despite established judicial acceptance and cost benefits.
  • 2021: RelativityOne customer base doubled, with 159 Am Law 200 firms adopting cloud platform; RelativityOne Redact and Automated Workflows drove feature adoption (66% of customers using workflows); Everlaw achieved FedRAMP and ISO 27001 certifications enabling government deployment; industry survey showed 95% of professionals using TAR across multiple areas and 40%+ using predictive coding in majority of workflows, with 84% of law firms increasing tech budgets; platforms matured toward SaaS consolidation as firms prioritized cost reduction over point-solution stacks.
  • 2022-H1: Everlaw platform data confirmed continued scale growth: 250% increase in A/V file transcription and 44% growth in in-platform collaboration features reflecting production deployment expansion; case studies documented cost savings of $2M and £1M+ from TAR deployments at Consilio and Simmons & Simmons; legal research platforms (Casetext) expanded accessibility to smaller firms competing with BigLaw; however, broader adoption remained constrained by organizational barriers—only 37% of law firm attorneys satisfied with firm tech infrastructure; emerging complexity from fragmented workplace tools (88+ apps per org) created new compliance challenges in discovery workflows requiring platform evolution.
  • 2022-H2: Cloud-based e-discovery adoption accelerated sharply, with 48% of legal professionals reporting cloud as standard (up 66% YoY), and 63% of corporate legal teams considering cloud the norm; platform vendors consolidated with Everlaw named IDC MarketScape leader serving 91 Am Law 200 firms; AI/ML importance rose to 22% as driver of legal tech change; however, industry sentiment cooled with budgetary constraints and increasing data types cited as challenges, and experts warned of emerging "cloud divide" with laggards facing technical and compliance risks.
  • 2023-H1: Generative AI disrupted the landscape: Fisher Phillips deployed Casetext's CoCounsel (GPT-4), signaling first major law firm adoption of generative AI for legal research; however, critical hallucination risks emerged immediately—lawyer Steven Schwartz faced sanctions for citing six fictitious cases from ChatGPT, highlighting acute legal and liability risks; e-discovery vendors (Everlaw) published cautious frameworks for generative AI integration addressing hallucinations, explainability, and privacy concerns; traditional TAR adoption remained surprisingly low at 19.2% of lawyers using predictive coding (up from 12% in 2018), revealing persistent organizational barriers despite decade-long judicial acceptance and proven ROI; the practice transitioned from mature TAR tooling to nascent, high-risk generative AI experimentation.
  • 2023-H2: Generative AI for legal research moved to active early-stage deployment with severe safety concerns: Relativity launched aiR for Review (October, limited availability by year-end), signaling major vendor commitment; surveys showed 40% of legal professionals using or planning generative AI adoption, yet 72% believed industry unprepared. Critical research emerged: Stanford preprint (December) found Lexis+ AI and Westlaw hallucinate on 17-34% of queries with false citations, demonstrating fundamental reliability failures. Law firms grappled with adoption friction—practitioners at Relativity Fest (November) warned against using ChatGPT for legal research and highlighted cost barriers and unclear ROI. eDiscovery firms reported 30% integration/deployment rate but cited accuracy (31%) and compliance (23%) as major challenges. Traditional TAR remained proven and stable; generative AI for legal research remained experimental and high-risk, creating a fractured landscape where vendors pushed new capabilities but practitioners remained cautious about unverified hallucinations.
  • 2024-Q1: Generative AI for legal research and e-discovery entered early production deployment: Relativity aiR achieved limited availability (targeting summer 2024 GA), Everlaw released AI Assistant beta, and CoCounsel expanded to 45+ large law firms with 9,000+ trained lawyers. However, critical reliability concerns remained acute—Stanford research found 69-88% hallucination rates in LLMs for legal tasks, and real-world deployment failures emerged (Vancouver lawyer faced Law Society investigation after ChatGPT hallucinated cases in court submission). Market data confirmed sustained investment: $3.02B spent on e-discovery processing in 2023, projected to grow to $4.53B by 2028. Traditional TAR remained stable; generative AI solutions remained high-risk early-stage deployments with severe accuracy concerns preventing enterprise-scale adoption.
  • 2024-Q2: Generative AI moved into limited production deployment with mixed signals. Relativity aiR for Review achieved GA (Q3 2024), with 40 customer deployments and documented ROI ($3M savings, 20-week-to-2-week review acceleration). Thomson Reuters committed to ecosystem-wide CoCounsel rollout across legal, tax, risk, and media products. However, critical reliability barriers persisted: Stanford/Yale preregistered study (May 2024) found Lexis+ AI and Thomson Reuters AI tools hallucinate 17-33% of the time, contradicting vendor claims. Legal professional surveys showed 27% using generative AI for work (primarily legal research and document review) with 79% citing accuracy concerns. Adoption gap remained structural: only 19.2% of lawyers used predictive coding, less than 30% of matters deployed TAR despite decade-long judicial acceptance. Traditional TAR remained proven and reliable; generative AI remained high-risk early-stage requiring extensive human verification.
  • 2024-Q3: Generative AI moved to commercial production release with documented deployments and unresolved reliability barriers. Relativity aiR for Review achieved GA (September 2024) with 50+ customers across 170+ workspaces; case studies showed 50-60% review time/cost reductions with 96% responsive document detection. Everlaw AI Assistant reached GA after beta with 125 orgs, achieving precision/recall (0.77/0.82) surpassing human review. Thomson Reuters released CoCounsel 2.0 (3x faster) with High Throughput beta for million-document reviews. Adoption metrics: 34% of lawyers using generative AI (17.5% in production on live matters); 61% expect it will become standard within two years. Market analysis projected e-discovery spending growth to $13.59B by 2028 with AI gradually reducing review-task share from 65% to 60%. However, critical reliability barriers persisted: Stanford/Yale study confirmed 17-33% hallucination rates; practitioners warned tools "not necessarily great at legal research yet"; 79% of adopters cited accuracy concerns. Traditional TAR remained stable; generative AI remained high-risk requiring extensive human verification and appropriate use case boundaries.
  • 2024-Q4: Generative AI reached full commercial maturity with documented production deployments yet persistent reliability barriers. Relativity aiR for Privilege achieved GA (November) with >99% recall, >70% precision, 80% time savings (20-week review reduced to 2 weeks). In-house legal teams showed rapid adoption: ACC survey of 475 CLOs found 25% reporting cost savings from GenAI, 58% expecting reduced outside counsel reliance (up from 25% in 2023), 23% using GenAI daily. Thomson Reuters expanded CoCounsel 2.0 integration with new features (Mischaracterization Identification, AI Jurisdictional Surveys). However, Sedona Conference experts (Grossman, Cormack, Baron) warned that LLMs lack proven validation protocols and raise grave hallucination concerns; practitioners questioned whether tools can replace traditional TAR. JND eDiscovery deployed aiR for Review on class-action matter with 96% recall and 71% precision, demonstrating real-world production use. Structural adoption barriers remained: generative AI required extensive human verification and appropriate use case boundaries; traditional TAR proved stable and reliable for organizations unable to manage hallucination risks.
  • 2025-Q1: Generative AI matured into production release with expanded enterprise deployments yet hardening reliability barriers limited mainstream adoption. Thomson Reuters reported CoCounsel reaching 1 million users integrated into Westlaw/Practical Law (Feb 2025). Relativity announced RelativityOne Government with FedRAMP authorization, signaling government-sector deployment expansion. Blank Rome (Am Law 100) deployed Everlaw AI Assistant on 126,000-document government investigation in one day with 70% time reduction (March 2025). Corporate legal adoption accelerated: ACC/Everlaw survey of 475 CLOs showed 58% expecting reduced outside counsel reliance due to GenAI, 25% already reporting cost savings. Privilege logging emerged as highest-confidence use case with practitioners predicting eventual TAR displacement for responsiveness review. However, reliability barriers hardened: multiple lawyer sanctions emerged for AI-generated fictitious citations (Morgan & Morgan, Michael Cohen case, Minnesota expert); federal court rejected AI expert report in Kohls v. Ellison (Feb 2025) due to ChatGPT hallucinations. Survey of 551 e-discovery professionals showed accuracy, confidentiality, transparency, and hallucination concerns blocking mainstream adoption; hybrid TAR+GenAI approaches emerging. Practice bifurcated: GenAI proven for high-verification-tolerance use cases (privilege, coding) but mainstream legal research and document review deployment faced severe adoption friction despite GA tooling and documented case-study ROI.
  • 2025-Q2: Generative AI achieved sustained commercial momentum with broad product availability and expanded adoption, yet systematic reliability barriers hardened into structural blockades. Relativity aiR for Review expanded to 150+ customers across government sector via RelativityOne Government FedRAMP authorization (April). Thomson Reuters CoCounsel reached 1 million users across Westlaw/Practical Law integration. Everlaw AI Assistant deployed on large matters: unnamed Am Law 100 firm deployed on 126,000-document government investigation (April 2025) with 50-67% time reduction and 90%+ accuracy. Market-wide adoption accelerated: Thomson Reuters survey (April) found 26% of legal orgs actively using GenAI (up from 14%), with document review (77%) as top use case; ACEDS survey (April) found 34% using GenAI for legal research with 74% expecting AI-enabled jobs within 12 months. Privilege logging confirmed as highest-confidence use case; hybrid TAR+GenAI approaches dominated adoption strategy. However, hallucination risks remained systematically uncontrolled: comprehensive tracker documented 31+ UK cases of false AI-generated citations with additional cases globally, revealing unresolved liability exposure. Adoption barriers persisted: ACEDS and Thomson Reuters surveys found 79% of adopters cited accuracy concerns, 56% cited confidentiality, 31% cited hallucination risks; only 41% of firms reported AI policies; 71% of corporate legal departments unaware if outside counsel deployed GenAI. Practice remained sharply bifurcated: GenAI matured for high-verification-tolerance cases (privilege with extensive review, controlled coding) and government investigations, but mainstream legal research and document review faced severe friction from liability, hallucination rates, and verification burden.
  • 2025-Q3: Generative AI consolidated from experimentation into sustained commercial deployment with expanding product availability and documented production use, yet hallucination failures intensified as a structural blockade to mainstream adoption. Relativity aiR for Review expanded to 150+ government-sector customers with case studies showing 30-70% cost savings and 99.5% recall rates; Complete Discovery Source reported 750+ hours saved on million-document reviews in three weeks and an 80% reduction in privilege review time. Thomson Reuters CoCounsel expanded to 50,000+ lawyers across 45+ large firms and launched CoCounsel Legal with agentic workflows (August 2025). Market adoption accelerated: Everlaw's 2025 survey found 37% of legal professionals using generative AI with 42% saving 260+ hours annually. However, hallucination failures remained uncontrolled: Thomson Reuters documented 22 hallucinated citations in July 2025 alone; Jones Walker analysis documented 300+ hallucination cases since mid-2023 (200+ in 2025). Pricing models remained unsettled, with 37% reporting per-document billing and high market uncertainty. Practice remained sharply bifurcated: GenAI proved valuable for privilege logging and government investigations but mainstream legal research and document review faced severe friction from liability, hallucination rates, and verification burden.
  • 2025-Q4: Generative AI sustained commercial deployment with mixed signals on tool retention and market stability. Relativity aiR for Review continued serving 150+ government-sector customers with documented case studies. Thomson Reuters CoCounsel maintained 50,000+ lawyers across 45+ large firms with CoCounsel Legal launched (August); however, early BigLaw adopters McGuireWoods and Addleshaw Goddard discontinued for competing tools (Harvey, Legora), citing superior interfaces and noting tool capabilities lagged launch hype. Public sector adoption scaled: Miami-Dade Public Defender deployed CoCounsel at 100+ licenses (400-staff, 15,000-case operation, operational since June 2023) for research and evidence review. Corporate legal adoption accelerated: ACC/Everlaw survey of 657 in-house counsel found GenAI active use jumped to 52% (double 2024), with 64% expecting reduced outside counsel reliance and 20% viewing AI as transformative. eDiscovery sector shifted from pilot to production: ComplexDiscovery survey found 64% integrating/deploying GenAI, though accuracy concerns (33%) remained primary challenge. Hallucination failures persisted: AI Hallucination Cases database documented 486 cases globally (324 U.S., 128 lawyers/2 judges implicated) including 12 of 19 fabricated citations in Social Security appeal and $10,000 sanctions in Sylvia Noland v. Land of the Free (September 2025). Tool satisfaction gaps emerged, signaling GenAI legal tools remained in flux; only 25% of early BigLaw adopters fully deployed CoCounsel. Practice remained sharply bifurcated: GenAI valuable for high-verification-tolerance cases (privilege, government) but mainstream legal research and document review faced severe friction from hallucination, tool satisfaction gaps, and liability frameworks.
  • 2026-Jan: Generative AI expanded product capabilities with product maturity deepening while regulatory and compliance pressures intensified adoption barriers. Relativity launched aiR for Case Strategy in production (January 12) as fact extraction and chronology tool with 50+ early customers achieving 70% faster deposition transcript analysis. Regulatory compliance burdens emerged sharply: $1.5B Bartz v. Anthropic settlement signaled legal reckoning for AI training practices; EU AI Act Article 50 transparency requirements (effective August 2026) mandated granular training data disclosure with €15M+ fines, creating new vendor compliance costs. Hallucination failures continued rising: Bloomberg Law and Illinois courts documented 280+ U.S. court filings with AI-fabricated citations (sevenfold rise in 2025), with filings showing "AI slop"—polished formatting with real citations but AI-generated reasoning lacking genuine attorney analysis. Courts issued implementation guidance (Illinois, January 28), acknowledging both capability and persistent quality risks. Practice remained bifurcated: GenAI valuable for high-stakes cases (privilege, government, case strategy) with continued production deployment, but mainstream legal research faced unresolved hallucination risks, verification burden, and evolving regulatory compliance costs.
  • 2026-Feb: Generative AI consolidated commercial deployment with CoCounsel reaching 1 million users across 107 countries (80% Am Law 100), Thomson Reuters reporting 2.6x speed improvements in legal research and document review. Corporate legal GenAI adoption doubled to 52% among U.S. legal departments (Everlaw/ACC survey, 657 respondents). However, adoption remained sharply bifurcated: 60.7% of e-discovery professionals expected GenAI transformative by end-2026 but only 17.7% deployed at scale, exposing gap between capability and operational adoption. Relativity released aiR for Case Strategy (GA, February 26) automating fact extraction and witness summaries. Regulatory and institutional recognition of maturity challenges intensified: National Center for State Courts published official guidance on hallucination risks and verification protocols; courts continued sanctioning attorneys for AI-fabricated citations (E.D. Pa., January 2026, $4,000 sanctions for eight false citations). Practice remained bifurcated: high-stakes cases (privilege, government, strategy) with documented deployment and cost savings; mainstream legal research facing unresolved hallucination risks, verification burden, and evolving EU AI Act compliance costs (€15M+ fines, effective August 2026).
  • 2026-Apr: Courts compounded adoption friction: Q1 2026 produced $145K in AI-hallucination sanctions and a landmark finding that tool architecture—not just user error—now determines liability, while platform-specific redaction defects across Relativity, Everlaw, and GoldFynch triggered $2.5M+ in sanctions. AI document review systems (aiR for Review, Everlaw AI, DISCO) were formally classified as high-risk under EU AI Act Annex III, with conformity assessment requirements ahead of August 2026 enforcement. Array UK's Relativity aiR deployment (75% review reduction, £50K savings) and the FTI finding that 87% of general counsel now use GenAI (up from 44%) confirmed sustained enterprise adoption growth in parallel with hardening regulatory and liability barriers. A Wolters Kluwer global survey of 810 lawyers (April 28) found 92% using AI tools daily—with 80% relying on AI for legal research—representing the highest documented adoption signal to date, while independent analysis simultaneously catalogued 1,227+ hallucination cases globally (Lexis+ AI 17%, Westlaw 34% error rates) and Sullivan & Cromwell filed a bankruptcy brief with ~40 fabricated citations despite comprehensive AI policies, demonstrating that verification failure remains structurally unsolved even at elite Am Law 100 firms.

TOOLS