The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in one or two domains, delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI-augmented detection and prevention of sensitive data exfiltration across endpoints, network, and cloud services. Includes context-aware DLP that understands document meaning; distinct from phishing detection which targets inbound threats rather than outbound data.
DLP sits at a leading-edge inflection point marked by growing architectural bifurcation: forward-leaning enterprises have deployed it broadly, yet the practice is undergoing forced reinvention as GenAI and agentic workloads expose the limits of traditional policy-based detection. Adoption reached 60% of enterprises by 2023, and market projections remain strong ($2.58B→$12.29B by 2033 at 18.9% CAGR), yet operational reality tells a darker story. False-positive rates remain catastrophic—surveys show 35-90% of alerts are noise, analysts report burnout from false positive triage, and 78% of organisations find DLP administration a significant challenge. The inflection is architectural: traditional regex-and-policy DLP has reached structural limits against AI-native workloads. Independent evidence documented critical failures—Microsoft Copilot bypassed Purview DLP and sensitivity labels for 28 days undetected (CW1226324), prompt injection attacks defeated four defense layers (CVE-2025-32711, CVSS 9.3)—while shadow AI continues unchecked: 77% of employees paste company data into personal AI accounts. Vendors are responding with AI-augmented classifiers, DSPM integration, and consolidated platforms, but most organisations still run traditional tooling unable to see cloud, SaaS, or generative-AI traffic. The category has proven value; delivering that value without architectural obsolescence and overwhelming operational burden remains unsolved.
By Q2 2026, the vendor ecosystem had matured markedly, with major platform vendors shipping production-grade AI-augmented DLP. Microsoft extended Copilot DLP to prompt-level SIT detection with Bing search blocking and Recall snapshot protection (April 2026); Palo Alto Networks added ML-augmented pattern detection and SQL-like incident filtering for false-positive isolation (April 2026); and Cloudflare deployed AI context analysis via vector embeddings to adjust detection confidence (April 2026, GA). Hybrid regex-plus-ML approaches from independent vendors are now standard. Real-world exposure metrics from Concentric AI document the scale of the problem: in Copilot deployments, 16% of business-critical data is overshared, with an average of 802k at-risk files per organization. The shadow AI governance gap persists: IBM survey data shows only 37% of organizations have shadow AI policies, 97% of organizations reporting AI-related breaches lacked proper AI access controls, and each such incident adds an average $670k in breach costs.
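The hybrid regex-plus-ML pattern can be sketched in a few lines. This is an illustrative toy, not any vendor's actual API: the regex supplies a candidate hit, and a context score (standing in for an embedding-based classifier) adjusts confidence up or down before an alert fires. All names, word lists, and thresholds below are assumptions for the sketch.

```python
import re

# Hypothetical hybrid regex+ML DLP check: a pattern match alone does not
# raise an alert; a context score adjusts confidence first.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

# Toy "context model": words that make an SSN match more or less plausible.
# A production system would use an embedding classifier here.
SENSITIVE_CONTEXT = {"ssn", "social", "security", "payroll", "employee"}
BENIGN_CONTEXT = {"invoice", "order", "tracking", "part", "sku"}

def score_match(text: str) -> float:
    """Return a confidence in [0, 1] that the text leaks a real SSN."""
    if not SSN_PATTERN.search(text):
        return 0.0
    words = {w.strip(".,:#").lower() for w in text.split()}
    confidence = 0.5  # base confidence from the regex hit alone
    confidence += 0.4 * len(words & SENSITIVE_CONTEXT) / len(SENSITIVE_CONTEXT)
    confidence -= 0.4 * len(words & BENIGN_CONTEXT) / len(BENIGN_CONTEXT)
    return max(0.0, min(1.0, confidence))

def should_alert(text: str, threshold: float = 0.55) -> bool:
    return score_match(text) >= threshold

leak = "Employee payroll record, SSN: 123-45-6789"
noise = "Order tracking part number 123-45-6789 for invoice #9"
print(should_alert(leak), should_alert(noise))  # the second hit is suppressed
```

Both strings trip the regex; only the one whose surrounding context looks sensitive raises an alert, which is the mechanism behind the false-positive reductions the vendors claim.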
Yet deployment reality contradicts vendor growth narratives. Critical policy-boundary failures expose AI-DLP integration gaps: Microsoft's CW1226324 (April 2026, analyzed 4/11) showed Copilot Chat processing sensitivity-labeled emails despite DLP policies configured to block them, a fundamental breach of trust between policy intent and AI system behavior. Architectural assessments from consulting firms identify three specific DLP failure modes for agent-based AI: (1) permission-based access at scale (agents inheriting user permissions rather than making discrete data decisions), (2) summarization and insight extraction (meaningful intelligence bypassing pattern matching), and (3) context leakage through conversational explanations. These are not configuration gaps; they are design limitations of traditional DLP when deployed against AI agents performing continuous background access. Market bifurcation reflects this architectural reality: AI-native vendors demonstrate 80% resource reduction and near-elimination of false positives, but adoption remains concentrated in advanced security teams. Most organizations still run traditional regex-based tooling unable to detect LLM interactions, prompt injection, or agentic data-access patterns.
The barriers blocking broader adoption remain unchanged: 78% of security leaders find DLP administration challenging; false-positive fatigue persists as operational friction despite vendor innovations; and policy complexity plus data-governance gaps cause 60% of DLP implementations to fail. Yet organizational urgency is accelerating: with GenAI-related DLP incidents doubling to 14% of all incidents (Palo Alto, 7,051 enterprises), shadow AI data leakage quantified at $670k per breach, and 82% of organizations planning GenAI integration, platform consolidation toward AI-augmented DLP remains inevitable despite implementation friction.
— Critical analysis of DLP policy failures in regulated sectors: the Change Healthcare breach (192M affected), bank data leaking to personal apps, and the DOJ/Pentagon MOVEit compromise. Positions traditional perimeter-based DLP as obsolete and advocates privacy-first governance using homomorphic encryption and confidential computing.
— Proofpoint released Nexus Language Model (embedded prompt detection), Secure Agent Gateway (MCP monitoring), and Satori AI Agent suite (auto-triage). Case study: Tokyu Real Estate Holdings achieved zero external data exfiltration post-deployment, demonstrating DLP control effectiveness for AI agent workloads.
— GitGuardian 2026 State of Secrets Sprawl: 4.7M secrets found in AI tool logs (340% increase YoY), 68% of companies with AI-related exposure, 147-day average discovery time. Incident analysis of the Lovable and Passions platforms documents DLP blind spots in LLM context windows and WebSocket uploads.
— Palo Alto Networks released May 2026 Enterprise DLP updates: ML-augmentation for predefined patterns (address classification, healthcare provider data, SWIFT/BIC codes), 123 new app integrations (S3, Cloudamize, DealCloud), enhanced archive inspection (8 nesting levels, 1024 sub-files), expanded OCR (20MB images).
— Microsoft Purview DLP for Copilot prompts now GA with unified DSPM agent observability (May 2026). Shifts DLP focus from 'Can user open file?' to 'What can Copilot infer from all accessible data?' Addresses data-in-use risks in agentic AI contexts.
— DEF CON research disclosure of CVE-2026-24299: comprehensive vulnerability chain in Copilot enabling data exfiltration via HTML preview CSS, CSP bypass, delayed tool invocation, and memory hijacking. Demonstrates fundamental DLP limitation when AI assistants have broad access and process untrusted content.
— CrowdStrike GA release of purpose-built DLP platform for agentic AI with real-time data-in-motion protection, AI-powered classification, and runtime cloud visibility across endpoints, SaaS, and AI workflows.
— OpenAI released Privacy Filter (April 22, 2026): a 1.5B-parameter local PII-masking model (96% F1) that masks 8 PII categories before data leaves the machine. Addresses GDPR Article 5 data minimization for GenAI workloads.
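The client-side masking pattern behind items like the one above can be sketched as follows. This is a toy: a regex pass stands in for the local masking model, and the category names and patterns are assumptions for illustration, not the product's actual categories.

```python
import re

# Minimal sketch of client-side PII masking before text is sent to a
# remote GenAI service. A regex pass stands in for a local masking model.

MASKERS = [
    ("EMAIL", re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")),
    ("PHONE", re.compile(r"\b\+?\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")),
    ("SSN",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
]

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a category placeholder."""
    for label, pattern in MASKERS:
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email jane.doe@example.com or call 555-867-5309 about SSN 123-45-6789."
print(mask_pii(prompt))
# → Email [EMAIL] or call [PHONE] about SSN [SSN].
```

The point of doing this locally is the data-minimization claim: the remote service only ever receives placeholders, so no raw identifiers cross the trust boundary.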