The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
Governance frameworks for managing data used in AI training and fine-tuning, including provenance, consent, data rights, and opt-out management. Includes training data documentation and deletion-from-model workflows; distinct from general data privacy, which governs operational rather than AI-specific data.
Data governance for AI sits in a precarious split: the infrastructure half has matured while the hardest technical problem remains unsolved. Governance platforms now provide production-grade lineage, access control, and documentation capabilities, and regulatory mandates like the EU AI Act and U.S. federal procurement standards have made these table stakes for regulated deployment. That side of the practice works. The other side, verifiable deletion of training data from models, does not. Peer-reviewed research continues to show that machine unlearning methods suppress rather than truly remove learned information, and no scalable proof-of-deletion mechanism exists. This bifurcation defines the domain's bleeding-edge status: organisations can govern what goes into training pipelines, but they cannot yet prove data has been removed once a model has learned from it. The gap between regulatory expectation and technical capability is the defining tension, and it is not closing.
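To make the suppression-versus-removal distinction concrete, here is a minimal probe sketch in Python. Everything in it is hypothetical: `generate` stands in for whatever inference call a given stack exposes, and the three probes are toy examples. The shape of the test is the point: a model that refuses the direct question but answers paraphrased or multi-hop variants has suppressed the fact, not removed it.

```python
"""Minimal sketch of a residual-knowledge probe for an 'unlearned' model.

Illustrative only: `generate` is a stand-in for a real inference call,
and the probe set is a toy. A real audit would use a large, adversarially
generated probe set and a retrained-from-scratch baseline for comparison.
"""

from typing import Callable

# Hypothetical target of a deletion request.
TARGET = "jane@example.com"

# Direct, paraphrased, and multi-hop probes for the same fact.
PROBES = [
    "What is Jane Doe's email address?",                 # direct
    "How would I contact Jane Doe electronically?",      # paraphrase
    "Jane Doe authored the Q3 report. Where do I send "
    "feedback to that report's author?",                 # multi-hop
]


def residual_knowledge_rate(generate: Callable[[str], str]) -> float:
    """Fraction of probes whose completion still surfaces the target fact."""
    hits = sum(TARGET in generate(p).lower() for p in PROBES)
    return hits / len(PROBES)


if __name__ == "__main__":
    # Stub model: refuses the direct question but leaks via indirection,
    # the signature pattern of suppression rather than removal.
    def stub_generate(prompt: str) -> str:
        if "email address" in prompt.lower():
            return "I can't share that."
        return "You can reach them at jane@example.com."

    print(f"residual knowledge rate: {residual_knowledge_rate(stub_generate):.2f}")
```

A rate above zero on indirect probes is exactly the failure mode the unlearning literature keeps documenting: the direct retrieval path is blocked while the underlying knowledge persists.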
Governance infrastructure has reached production maturity and market saturation. Databricks, Microsoft Azure, AWS, and specialist vendors (Collibra, Immuta, Informatica) now offer GA platforms for lineage tracking, access governance, and automated compliance workflows. Collibra launched dedicated AI Governance capabilities in March 2026, unifying use-case, model, and agent registries. Immuta followed in April 2026 with "Agentic Data Access", which treats AI agents as governed data users with zero standing privileges and instant audit trails, closing a critical gap now that 73% of enterprises run AI agents in production. Governance has shifted from niche compliance function to table-stakes competitive requirement: 51% of CDOs prioritise data governance, 65% invest in AI-specific frameworks, and financial services firms contractually require training data provenance documentation. Yet governance adoption significantly lags AI deployment: a 2026 LexisNexis survey found that 80% of Fortune 500 firms have deployed GenAI but fewer than 40% have adequate governance, creating liability and accountability blind spots.
Regulatory enforcement and compliance pressure intensified through April 2026. Cumulative GDPR fines reached EUR 5 billion, and the EU AI Act's August 2, 2026 compliance deadline for high-risk systems (penalties of EUR 35M or 7% of revenue) is driving urgent governance adoption. Analysis of 19 regulatory guidelines across jurisdictions reveals enforcement divergence masked by surface consensus: Italy fined OpenAI EUR 15 million for inadequate legal basis and transparency, while Brazil's ANPD suspended Meta's AI training (July 2024, lifted after compliance). The core deletion problem remains technically unresolved. March 2026 research from the European Data Protection Supervisor (EDPS) assessed unlearning as a governance mechanism, documenting realistic scenarios where deletion causes unintended model degradation and showing that verification processes remain fragmented. The European Data Protection Board's Opinion 28/2024 (December 2024) clarified that AI models trained on personal data fall under GDPR unless a case-by-case assessment shows the likelihood of extracting that data is insignificant, turning the question "is this model anonymised?" into a forensic necessity. March 2026 research demonstrated that supposedly unlearned information resurfaces under multi-hop queries, that adversarial attacks can drive information-leakage surges of 1,150x, and that quantization (ubiquitous in production) can undo standard unlearning, requiring new approaches. The EDPS assessment and a March 2026 EACL-published auditing framework (Partial Information Decomposition) both show that residual knowledge persists after unlearning despite claimed success. Organisations now face a three-part compliance paradox: regulators demand deletion rights enforceable with penalties; governance infrastructure exists to control what data goes into training; but proof of verifiable deletion from trained models does not exist, and the research surfacing each quarter raises the technical bar further.
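The quantization finding is easy to see in miniature. The toy sketch below (ours, not the cited papers' method) builds a one-layer "model", applies a minimal unlearning edit that just barely flips the forgotten prediction, then int8-quantizes the edited weights; in roughly half of the random trials the rounding erases the edit and the forgotten behavior returns.

```python
"""Toy numerical sketch of why quantization can undo unlearning.

Not the cited papers' method; it only illustrates the mechanism: when an
unlearning edit shifts weights by less than int8 rounding error, quantizing
the 'unlearned' weights can snap the model back to its original behavior.
"""

import numpy as np

rng = np.random.default_rng(0)


def quantize_int8(w: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor int8 quantization, then dequantize."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8).astype(np.float32) * scale


restored, trials = 0, 1000
for _ in range(trials):
    # Toy "model": one linear layer mapping an 8-d input to 4 token logits.
    W = rng.normal(0.0, 1.0, size=(4, 8)).astype(np.float32)
    x = rng.normal(0.0, 1.0, size=8).astype(np.float32)

    logits = W @ x
    forget = int(np.argmax(logits))        # the behavior to "unlearn"
    runner_up = float(np.sort(logits)[-2])

    # Minimal unlearning edit: lower the forgotten token's logit to just
    # below the runner-up (a caricature of gradient-ascent unlearning).
    drop = float(logits[forget]) - runner_up + 1e-3
    W_unlearned = W.copy()
    W_unlearned[forget] -= drop * x / float(x @ x)
    assert int(np.argmax(W_unlearned @ x)) != forget  # unlearning "worked"

    # Quantize the unlearned weights, as production deployments routinely do.
    if int(np.argmax(quantize_int8(W_unlearned) @ x)) == forget:
        restored += 1  # rounding erased the edit; the model "remembers"

print(f"forgotten behavior restored by int8 rounding: {restored}/{trials}")
```

Real unlearning updates are not this minimal, but the same scale mismatch applies: edits small relative to weight magnitude live below the quantization grid, which is why verification must be run on the artifact actually deployed, not the full-precision checkpoint.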
— Case study: a customer support AI agent was deployed successfully until it encountered Social Security numbers in tickets; its ungoverned access to that data revealed the governance failure, concrete evidence of production governance gaps in a real deployment.
— Pebblous 2026 analysis: the OpenMetadata metadata governance platform reached #1 on GitHub Trending with 13,535 stars, driven by AI governance features for semantic data governance and agent integration.
— Practitioner analysis of data governance complexity explosion when feeding proprietary data to LLMs: training data provenance, output ownership, bias propagation, and cross-border flows remain unresolved.
— ICLR 2026: MU-Mis method achieves practical unlearning without remaining-data access (0.07 gap to retrained model vs 0.14-0.47 for baselines), reducing enterprise operational burden for rights management.
— ICLR 2026: first data-centric metric for verifying unlearning via watermarking, with R²~0.99 calibration; directly addresses the governance verification gap of proving deletion compliance without retraining (a toy sketch of the canary idea follows this list).
— iManage 2026 benchmark: 85% of firms are at some stage of AI adoption, but 36% have experienced policy violations; governance gaps are emerging in access controls and auditability for production data governance.
— NeurIPS 2025: a framework showing that unlearning evaluations overestimate effectiveness when knowledge is inferentially correlated, exposing a verification gap; implicit knowledge persists through related facts even after claimed deletion.
— Immuta April 2026 GA capability: governed data access for AI agents with policy-driven provisioning and zero standing privileges; addresses governance gap as 80% of Fortune 500 deploy GenAI but <40% have adequate governance.
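As flagged in the watermark-verification item above, here is a hedged sketch of the canary idea behind watermark-based deletion auditing. This is not the ICLR paper's metric (which calibrates against retrained models); every name in it (make_canary, DeletionAudit, the tolerance threshold) is hypothetical. The idea: plant unique strings alongside records subject to deletion, then after unlearning compare the model's likelihood of those canaries against canaries it never saw; if planted canaries still score like trained data, deletion is unproven.

```python
"""Hedged sketch of canary-based deletion verification.

All names and thresholds are illustrative, not a real library's API:
plant unguessable watermark strings next to protected records, then
after an unlearning request compare the model's scores on planted
versus never-trained canaries.
"""

from dataclasses import dataclass
from typing import Callable, Sequence
import statistics
import uuid


def make_canary() -> str:
    """A unique, unguessable string planted next to protected records."""
    return f"wm-{uuid.uuid4().hex}"


@dataclass
class DeletionAudit:
    planted: Sequence[str]   # canaries that accompanied the deleted data
    holdout: Sequence[str]   # canaries never shown to the model

    def verify(self, log_likelihood: Callable[[str], float],
               tolerance: float = 0.5) -> bool:
        """Pass only if planted canaries score no closer to 'memorized'
        than holdout ones, within tolerance (log-likelihood units)."""
        planted_ll = statistics.mean(map(log_likelihood, self.planted))
        holdout_ll = statistics.mean(map(log_likelihood, self.holdout))
        return planted_ll - holdout_ll <= tolerance


if __name__ == "__main__":
    planted = [make_canary() for _ in range(5)]
    holdout = [make_canary() for _ in range(5)]

    # Stub scorer: pretend the model still assigns high likelihood to
    # planted canaries after "unlearning", so the audit should fail.
    def stub_ll(s: str) -> float:
        return -2.0 if s in planted else -9.0

    audit = DeletionAudit(planted, holdout)
    print("deletion verified:", audit.verify(stub_ll))
```

The appeal of this family of methods for the governance gap described above is that the audit runs against the deployed model alone, with no retrained baseline required, which is precisely what makes deletion compliance checkable at enterprise scale.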