The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that analyses incoming whistleblower reports, triages them by severity and credibility, and routes them for investigation. Includes automated classification and priority scoring; distinct from general ticket triage which handles customer rather than compliance reports.
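The triage layer described above can be sketched in miniature. This is an illustrative toy, not any vendor's pipeline: the keyword weights, credibility heuristics, and routing thresholds are all invented for the example, and production systems would use trained classification models rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical severity keywords and weights -- illustrative only,
# not any vendor's actual classification model.
SEVERITY_KEYWORDS = {
    "fraud": 3, "safety": 3, "retaliation": 3,
    "harassment": 2, "discrimination": 2,
    "policy": 1, "expenses": 1,
}

@dataclass
class TriageResult:
    severity: int        # 0 (low) to 3 (critical)
    credibility: float   # 0.0 to 1.0
    route: str           # investigator queue

def triage(report_text: str, has_evidence: bool) -> TriageResult:
    text = report_text.lower()
    # Severity: highest-weighted keyword present, default 0.
    severity = max((w for kw, w in SEVERITY_KEYWORDS.items() if kw in text),
                   default=0)
    # Crude credibility proxy: attached evidence and specific detail raise it.
    credibility = 0.3 + (0.4 if has_evidence else 0.0) \
                      + (0.2 if len(text) > 200 else 0.0)
    credibility = min(credibility, 1.0)
    # Routing combines severity and credibility, as the triage layer does.
    if severity >= 3 and credibility >= 0.5:
        route = "senior-investigator"
    elif severity >= 2:
        route = "compliance-team"
    else:
        route = "standard-queue"
    return TriageResult(severity, credibility, route)
```

A report mentioning supplier fraud with evidence attached would route straight to a senior investigator, while a routine expenses query lands in the standard queue; real systems make the same three moves (classify, score, route) with far richer models.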
AI-powered whistleblower report triage has crossed from experimental to production-proven, but deployment is surfacing hidden costs. A handful of major platforms process millions of reports annually, automating severity classification, credibility scoring, and investigator routing at scale. The technology works at the triage layer. Yet, paradoxically, NAVEX's 2026 data shows case closure times lengthening, possibly because integrating AI tools adds procedural overhead that slows rather than accelerates investigation workflows. The defining tension has shifted: it is no longer whether the technology works -- production deployments at Control Risks and others confirm capability maturity -- but whether AI-assisted triage produces net-positive investigation outcomes once procedural complexity is counted. Investigation capacity, organisational maturity (only 61% of organisations maintain a reporting channel), and now AI integration architecture emerge as the binding constraints. A research gap persists as well: AI classification systems systematically diverge from human judgment on rule violations, and base-rate effects generate thousands of false positives per million communications, straining already-thin investigation teams.
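The base-rate effect is plain arithmetic: when genuine misconduct is rare, even an accurate detector produces alert volumes dominated by false positives. A back-of-the-envelope sketch, where the 0.1% misconduct rate and the 99% sensitivity and specificity figures are assumptions chosen for illustration, not measured performance:

```python
def alert_breakdown(n_messages: int, misconduct_rate: float,
                    sensitivity: float, specificity: float):
    """Expected true and false alerts from screening n_messages."""
    true_cases = n_messages * misconduct_rate
    benign = n_messages - true_cases
    true_positives = true_cases * sensitivity          # caught misconduct
    false_positives = benign * (1 - specificity)       # benign flagged anyway
    precision = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, precision

# 1 million communications, 0.1% genuine misconduct, and a detector
# that is 99% sensitive and 99% specific (assumed figures).
tp, fp, precision = alert_breakdown(1_000_000, 0.001, 0.99, 0.99)
print(f"{tp:.0f} true alerts, {fp:.0f} false alerts, precision {precision:.1%}")
# -> 990 true alerts, 9990 false alerts, precision 9.0%
```

Roughly ten false alerts for every true one, from a detector that is right 99% of the time in both directions: this is why thin investigation teams feel the strain regardless of model quality.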
The vendor ecosystem has consolidated around a small number of integrated GRC platforms. NAVEX leads with 2.15 million reports across 4,000+ organisations; Diligent (which acquired Vault Platform) and EQS Integrity Line (14,000+ global customers as of April 2026) compete on AI-assisted classification, anonymisation, and multi-channel intake. Consolidation continues: Case IQ acquired WhistleBlower Security in late 2025, and specialised entrants like LegalIntel target law firms with AI case intelligence. NAVEX's February 2026 launch of Quick Insights and incident benchmarking reflects a market shifting from basic triage automation toward comparative analytics and programme-level performance measurement. Production-scale deployments demonstrate capability: Control Risks used Relativity aiR to analyse 275,000 multilingual whistleblower documents in parallel, finishing in one week against a two-week deadline — language-barrier elimination and accelerated document triage at scale. SAI360's deployment at ABB shows multinational enterprise adoption of AI translation across 30+ languages and automatic case routing with zero-IP-tracking anonymity. Document analysis at scale is now proven: consulting firms deploying LLM-powered platforms achieve 80% time reductions on qualitative document processing and risk-theme identification.
Regulatory pressure is accelerating demand. Japan criminalised whistleblower retaliation in 2025; the UAE and Netherlands expanded protections; California's Transparency in Frontier AI Act now mandates whistleblower safeguards for frontier AI companies. The DOJ's updated Evaluation of Corporate Compliance Programs explicitly instructs prosecutors to assess whistleblower protection and anonymity safeguards, formalising what was previously best practice into a compliance expectation. These drivers explain rising reporting volumes -- Europe's rate jumped from 0.49 to 0.67 per 100 employees -- but they are surfacing AI integration trade-offs. NAVEX's 2026 analysis reveals that case closure times are lengthening, potentially because AI tool integration adds procedural overhead. The compliance field is beginning to recognise that orchestrating AI into investigation workflows is not a pure acceleration; it can introduce friction. 70% of US workers express comfort with AI-driven reporting tools, yet only 32% of organisations have formal AI governance programmes in place. Board-level governance visibility (tracking disclosure types, triage times, and systemic issues) remains rare despite regulatory expectations. The false-positive problem persists: base-rate effects in misconduct detection generate thousands of alerts per million communications, straining already-thin investigation teams. A complementary risk emerges: whistleblowers using mainstream consumer LLM tools face identity verification barriers and data-sharing risks that undermine anonymity protection — a distinct concern from enterprise deployment overhead. The critical question is no longer adoption feasibility but integration architecture: when to automate, when to preserve human judgment, and how to avoid both AI-added overhead and whistleblower exposure from inadvertent mainstream AI tool use.
— ALSP deployed AI to triage 200+ discrimination complaints in Q3 2024 with severity assessment and legal exposure flagging. Intake triage methodology (automated analysis, priority scoring, intelligent routing) directly applicable to whistleblower hotline report triage workflows.
— elsai AI platform automated risk theme identification and chronology sequencing from investigation documents for a global advisory firm, reducing manual timeline work from 15-20 days to minutes while maintaining audit-trail evidence linking.
— Simform deployed an LLM-powered document analysis platform using GPT-4o and RAG on Azure for a fintech consultancy. 80% time reduction, 5x faster insight generation, 100% data coverage from unstructured document volumes — methodology directly transferable to whistleblower report triage workflows.
— Critical assessment of privacy and identity risks when whistleblowers use mainstream LLM tools, highlighting adoption barriers: mainstream AI providers require identity verification, creating unacceptable risk for anonymous reporters, and corporate data sharing introduces uncompensated whistleblower exposure.
— SAI360 GRC platform with AI translation (30+ languages), automatic case creation and routing, and zero IP-tracking anonymity architecture. Named customer ABB reports deployment across multinational operations managing retaliation risk.
— Compliance counsel (Debevoise & Plimpton) citing NAVEX 2026 data documenting that case closure times are lengthening, potentially due to growing integration of AI tools into case management adding procedural steps. Recommends policy updates for agentic AI risks in whistleblower contexts.
— EQS Integrity Line serving 14,000+ organisations with AI-powered case routing, automated transcription, anonymisation, and multi-language support. Human oversight built into every AI decision; positioned as eliminating low-value tasks to preserve expert investigation time.
— AICD governance guide citing ASIC's examination of whistleblower processes across 134 companies. Establishes board-level governance benchmarks: tracking disclosure volume and types, average triage time, escalation paths, and de-identified systemic issue analysis.
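The LLM-powered document-analysis workflows cited above (Simform's RAG deployment, Control Risks' multilingual review) share a retrieve-then-summarise shape: rank documents against a risk-theme query, then pass the top hits to a model for analysis. A minimal, self-contained sketch of the retrieval step, where bag-of-words cosine similarity stands in for the embedding models a production RAG pipeline would actually use, and the sample documents are invented:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    # Stand-in for an embedding model: term-frequency bag of words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_documents(query: str, documents: list[str], k: int = 3) -> list[str]:
    # Rank documents by similarity to the risk-theme query; in a full RAG
    # pipeline the top-k hits would be fed to an LLM for summarisation.
    qv = vectorise(query)
    ranked = sorted(documents, key=lambda d: cosine(qv, vectorise(d)),
                    reverse=True)
    return ranked[:k]

docs = [
    "Email chain discussing invoice discrepancies with supplier",
    "HR note on scheduling a team offsite",
    "Memo flagging possible kickbacks on the supplier contract",
]
hits = top_documents("supplier payment irregularities kickbacks", docs, k=2)
```

Here the kickbacks memo ranks first and the offsite note is filtered out entirely; the same shape, with real embeddings and an LLM summarisation step, is what delivers the reported time reductions on qualitative document volumes.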