Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Data governance & rights management for AI

BLEEDING EDGE

TRAJECTORY

Stalled

Governance frameworks for managing data used in AI training and fine-tuning, including provenance, consent, data rights, and opt-out management. Includes training data documentation and deletion-from-model workflows; distinct from general data privacy, which governs operational rather than AI-specific data.

OVERVIEW

Data governance for AI sits in a precarious split: the infrastructure half has matured while the hardest technical problem remains unsolved. Governance platforms now provide production-grade lineage, access control, and documentation capabilities, and regulatory mandates like the EU AI Act and U.S. federal procurement standards have made these capabilities table stakes for regulated deployment. That side of the practice works. The other side, verifiable deletion of training data from models, does not. Peer-reviewed research continues to show that machine unlearning methods suppress rather than truly remove learned information, and no scalable proof-of-deletion mechanism exists. This bifurcation defines the bleeding-edge status: organisations can govern what goes into training pipelines, but they cannot yet prove data has been removed once a model has learned from it. The gap between regulatory expectation and technical capability is the defining tension, and it is not closing.
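Concretely, the input-side half that works reduces to an admission gate on training data. The sketch below is a minimal illustration with invented field names and thresholds, not any vendor's schema: each dataset carries a provenance record, and a single check enforces licence, consent basis, and opt-out freshness before anything enters the training pipeline. Note what it cannot do: nothing here removes data from a model that has already learned from it.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProvenanceRecord:
    """Hypothetical per-dataset provenance entry; all field names are illustrative."""
    dataset_id: str
    source: str                      # where the data was collected
    licence: str                     # e.g. "CC-BY-4.0", "proprietary-with-consent"
    consent_basis: str               # e.g. "contract", "legitimate-interest", "none"
    opt_out_honoured_as_of: date     # last reconciliation against opt-out lists
    contains_personal_data: bool

ALLOWED_LICENCES = {"CC-BY-4.0", "CC0-1.0", "proprietary-with-consent"}

def admissible_for_training(rec: ProvenanceRecord, opt_out_cutoff: date) -> bool:
    """Input-side gate: licence, consent, and opt-out freshness.
    Governs what goes in; proves nothing about removal afterwards."""
    if rec.licence not in ALLOWED_LICENCES:
        return False
    if rec.contains_personal_data and rec.consent_basis == "none":
        return False
    return rec.opt_out_honoured_as_of >= opt_out_cutoff

corpus = [
    ProvenanceRecord("tickets-2025", "support CRM", "proprietary-with-consent",
                     "contract", date(2026, 4, 1), True),
    ProvenanceRecord("webcrawl-07", "public crawl", "unknown",
                     "none", date(2024, 1, 1), True),
]
admitted = [r.dataset_id for r in corpus
            if admissible_for_training(r, date(2026, 1, 1))]
print(admitted)  # -> ['tickets-2025']
```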

CURRENT LANDSCAPE

Governance infrastructure has reached production maturity and market saturation. Databricks, Microsoft Azure, AWS, and specialist vendors (Collibra, Immuta, Informatica) now offer GA platforms for lineage tracking, access governance, and automated compliance workflows. Collibra launched dedicated AI Governance capabilities in March 2026, unifying use-case, model, and agent registries. Immuta expanded in April 2026 with "Agentic Data Access", treating AI agents as governed data users with zero standing privileges and instant audit trails, addressing a critical governance gap as 73% of enterprises now run AI agents in production. Governance has shifted from a niche compliance function to a table-stakes competitive requirement: 51% of CDOs prioritise data governance, 65% invest in AI-specific frameworks, and financial services firms contractually require training data provenance documentation. Yet governance adoption significantly lags deployment: a 2026 LexisNexis survey found 80% of Fortune 500 firms have deployed GenAI but fewer than 40% have adequate governance, creating liability and accountability blind spots.
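The zero-standing-privileges pattern behind these agent-governance products is simple to state in code. The sketch below is a generic illustration, not Immuta's or Collibra's API: the agent holds no credentials of its own, every field access goes through a default-deny policy decision point that issues a short-lived grant, and every decision, allow or deny, lands in an audit trail.

```python
import time
import uuid

# Policy table mapping (agent, field classification) to a decision.
# Names and classifications are invented for illustration.
POLICIES = {
    ("support-agent", "ticket_body"): "allow",
    ("support-agent", "ssn"): "deny",   # the failure mode in the SSN case study
}

AUDIT_LOG = []

def request_access(agent_id: str, field: str, ttl_s: int = 60):
    """Policy decision point: the agent has no standing privileges, so every
    read requires a fresh, short-lived grant; every decision is audited."""
    decision = POLICIES.get((agent_id, field), "deny")  # default-deny
    grant = ({"token": str(uuid.uuid4()), "expires": time.time() + ttl_s}
             if decision == "allow" else None)
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                      "field": field, "decision": decision})
    return grant

assert request_access("support-agent", "ticket_body") is not None
assert request_access("support-agent", "ssn") is None  # denied and audited
print(AUDIT_LOG)
```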

Regulatory enforcement and compliance barriers intensified through April 2026. GDPR enforcement reached EUR 5 billion in cumulative fines, and the EU AI Act's August 2, 2026 compliance deadline for high-risk systems (penalties of up to EUR 35M or 7% of revenue) is driving urgent governance adoption. Analysis of 19 regulatory guidelines across jurisdictions reveals enforcement divergence masked by surface consensus: Italy fined OpenAI EUR 15 million for inadequate legal basis and transparency, and Brazil's ANPD suspended Meta's AI training in July 2024 (lifted after compliance).

The core deletion problem remains technically unresolved. March 2026 research from the European Data Protection Supervisor (EDPS) assessed unlearning as a governance mechanism, documenting realistic scenarios in which deletion causes unintended model degradation and finding that verification processes remain fragmented. The European Data Protection Board's Opinion 28/2024 (December 2024) clarified that AI models trained on personal data are subject to GDPR unless a case-by-case assessment shows the likelihood of extracting personal data is insignificant, rendering the "is this model anonymised?" question a forensic necessity. March 2026 research demonstrated that supposedly unlearned information resurfaces under multi-hop queries, that adversarial attacks cause information leakage surges of 1,150x, and that quantization (universal in production) reverses standard unlearning, requiring new approaches. The EDPS February 2026 assessment and the March 2026 EACL-published auditing framework (Partial Information Decomposition) show that residual knowledge persists post-unlearning despite claimed success. Organisations now face a three-part compliance paradox: regulators demand deletion rights enforceable with penalties; governance infrastructure exists to control what data enters training; but no mechanism exists to prove verifiable deletion from a trained model, and new research surfacing each quarter raises the technical bar further.
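The verification gap is easiest to see in miniature. The toy audit below is invented for illustration (it is not the EDPS or EACL methodology): it compares a candidate model whose vendor claims to have unlearned a forget set against the gold standard, a model retrained from scratch without that data. A persistent confidence gap on the forget set is the residual-memorisation signature the cited research keeps finding.

```python
# Toy unlearning audit: candidate "unlearned" model vs retrain-from-scratch
# reference. Purely illustrative; real audits are far more sophisticated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 120, 60
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)
forget = np.arange(30)                    # records under a deletion request
y[forget] = rng.integers(0, 2, size=30)   # labels only memorisation can fit
keep = np.setdiff1d(np.arange(n), forget)

# Worst-case stand-in for a vendor's "unlearned" model: the original model,
# i.e. an unlearning procedure that silently did nothing.
candidate = LogisticRegression(max_iter=2000).fit(X, y)
# Gold standard: a model that never saw the forget set.
reference = LogisticRegression(max_iter=2000).fit(X[keep], y[keep])

def confidence(model, Xs, ys):
    """Mean probability the model assigns to the forget-set labels."""
    p = model.predict_proba(Xs)
    return float(p[np.arange(len(ys)), ys].mean())

gap = (confidence(candidate, X[forget], y[forget])
       - confidence(reference, X[forget], y[forget]))
print(f"forget-set confidence gap vs retrain: {gap:+.3f}")
# Near zero is consistent with deletion; a large positive gap flags
# residual memorisation the deletion claim cannot explain away.
```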

TIER HISTORY

Research: Jan-2023 → Apr-2024
Bleeding Edge: Apr-2024 → present

EVIDENCE (77)

— Case study: a customer support AI agent deployed successfully until it encountered SSNs in tickets; ungoverned access revealed a data governance failure; concrete evidence of production governance gaps in a real deployment.

— Pebblous 2026 analysis: OpenMetadata metadata governance platform reached GitHub Trending #1 with 13,535 stars, driven by AI governance features for semantic data governance and agent integration.

— Practitioner analysis of data governance complexity explosion when feeding proprietary data to LLMs: training data provenance, output ownership, bias propagation, and cross-border flows remain unresolved.

— ICLR 2026: MU-Mis method achieves practical unlearning without remaining-data access (0.07 gap to retrained model vs 0.14-0.47 for baselines), reducing enterprise operational burden for rights management.

— ICLR 2026: First data-centric metric for verifying unlearning via watermarking with R²~0.99 calibration; directly addresses the governance verification gap for proving deletion compliance without retraining (a toy sketch of the general idea follows this list).

— iManage 2026 benchmark: 85% at some stage of AI adoption but 36% experienced policy violations; governance gaps emerging in access controls and auditability for data governance in production.

— NeurIPS 2025: Framework shows unlearning overestimates effectiveness when knowledge is inferentially correlated; exposes verification gap—implicit knowledge persists through related facts even after deletion claims.

— Immuta April 2026 GA capability: governed data access for AI agents with policy-driven provisioning and zero standing privileges; addresses governance gap as 80% of Fortune 500 deploy GenAI but <40% have adequate governance.
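The watermarking result cited above points at a mechanically simple verification primitive. The toy below sketches the general idea under invented details (it is not the ICLR 2026 method, whose R²~0.99 calibration is far more involved): a data owner plants watermark records with randomised labels before contributing data; after a claimed deletion, the model's recall of those labels should fall back toward chance.

```python
# Toy watermark check for deletion claims; invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, d, m = 200, 100, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)

# Owner plants m watermark records with random labels before contributing.
Xw = rng.normal(size=(m, d))
yw = rng.integers(0, 2, size=m)

before = LogisticRegression(max_iter=2000).fit(
    np.vstack([X, Xw]), np.concatenate([y, yw]))     # owner's data in the model
after = LogisticRegression(max_iter=2000).fit(X, y)  # honest deletion: full retrain

def watermark_recall(model) -> float:
    """Fraction of planted labels the model still reproduces."""
    return float((model.predict(Xw) == yw).mean())

print("recall with data present:   ", watermark_recall(before))  # well above chance
print("recall after honest retrain:", watermark_recall(after))   # near 0.5
```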

HISTORY

  • 2023-H1: Data governance for AI emerged as urgent industry priority post-ChatGPT. Databricks acquired Okera to add AI-specific governance; TDWI published governance frameworks for ML assets. Unlearning research validated feasibility of data deletion from models.
  • 2023-H2: Regulatory enforcement accelerated: Italy suspended ChatGPT, and Canada, France, and Spain opened investigations. Unlearning research advanced (EMNLP, NeurIPS competitions) but critical limitations emerged: methods may not achieve true data removal, and utility trade-offs remain unsolved. Copyright opt-out mechanisms proved ineffective without platform transparency. The gap widened between regulatory expectations (right to be forgotten) and technical reality.
  • 2024-Q1: Unlearning research advanced on efficiency and multimodal models, with partial amnesiac approaches reducing fine-tuning overhead. Data Provenance Initiative documented 1,800 curated datasets. Databricks Unity Catalog expanded into financial services for EU AI Act compliance. Enterprise surveys showed 36% identified AI governance as GenAI adoption barrier. Analyst predictions: 80% of governance initiatives will fail by 2027. Regulatory gap widened: EU AI Act exempted open-source models from dataset transparency requirements.
  • 2024-Q2: EU AI Act finalized with explicit copyright opt-out and data governance mandates (€35M/7% penalties, 24-month compliance window). Vendors (Databricks) accelerated platform adoption for production GenAI deployments. IDC research showed governance maturity as a key driver of AI initiative success (20% fail without infrastructure). U.S. state-level regulations emerged (Colorado CAIA, Utah AI Policy Act). A critical gap remained: opt-out implementation infrastructure and practical deletion-from-model workflows were still lacking at scale. Governance became table-stakes for regulated deployment, but organizations struggled with training pipeline integration.
  • 2024-Q3: Vendor governance platforms matured (Microsoft/Azure Databricks best practices published). Gartner forecast 30% GenAI project abandonment by 2025 due to poor data quality and governance gaps. Critical limitations in unlearning emerged: Google/Princeton research exposed adversarial vulnerabilities (model accuracy degraded to 3.6%); MUSE benchmark found most algorithms fail privacy/utility simultaneously; Oxford/MIT survey concluded unlearning cannot reliably enable deletion-from-model workflows. Opt-out infrastructure and verification mechanisms remained absent. Governance platforms adopted for lineage and access control; deletion-from-model compliance mechanisms still immature.
  • 2024-Q4: AWS launched SageMaker Data and AI Governance GA, signaling broad vendor platform maturity for governance infrastructure. Research revealed severe unlearning vulnerabilities: reconstruction attacks recovered deleted data despite unlearning, emphasizing differential privacy as mitigation necessity. Industry surveys documented widespread governance adoption barriers—80% of AI projects fail (RAND/Gartner), with 62% citing lack of governance and only 12% of organizations reporting sufficient data quality for AI. Governance became recognized adoption blocker and competitive differentiator.
  • 2025-Q1: Unlearning research advanced with new evaluation metrics and parameter-efficient frameworks (ICLR 2025 papers), but critical vulnerability assessments revealed state-of-the-art methods fail at scale—they degrade model quality or merely modify classifiers without truly removing training data influence. Governance platform maturity continued (Databricks DAGF v1.0 framework released), but enterprise adoption surveys showed 21% of organizations still lack governance frameworks, 33% cite leadership misalignment, and 60%+ cite data quality barriers. EU AI Act compliance deadline (April 2025) approached with deletion-from-model mechanisms still unproven, widening gap between regulatory mandate and technical feasibility.
  • 2025-Q2: April 2025 EU AI Act compliance deadline arrived without reliable unlearning solutions. New research exposed verification gaps: arXiv survey on unlearning verification (June 2025) found behavioral and parametric approaches remain fragmented with no unified standard; CMU peer-reviewed analysis (April 2025) showed benchmark structures systematically overestimate unlearning effectiveness; comprehensive auditing frameworks (May 2025) found six algorithms fail to demonstrate true knowledge removal. CSA assessed right-to-be-forgotten as unresolved with no proven scalable solutions. Financial services drove governance adoption, treating data provenance documentation as contractual requirement. Core tension remained: governance platforms advanced for transparency/lineage, but deletion-from-model verification stayed unproven at scale.
  • 2025-Q3: Enterprise governance deployment stalled; only 30% of organizations advanced beyond experimentation to production, with just 13% managing multiple deployments and 48% failing to monitor production systems. Federal government cited data governance and security as critical AI adoption barriers, despite regulatory mandates. The quarter revealed persistent infrastructure gaps: enterprises struggled with governance platform integration, data quality remained a blocker for 60%+ of organizations, and no new breakthroughs in deletion-from-model verification emerged. Governance remained a recognized adoption blocker and competitive requirement, but deployment maturity plateaued.
  • 2025-Q4: Unlearning research advanced with new frameworks (OBLIVIATE, LUNE) addressing efficiency and deletion quality, but no resolution emerged for verification gaps or scalable proof-of-deletion. Governance platform deployments remained operational for lineage and access control (Databricks, Azure, AWS), yet financial sector contracts still relied on documentation and provenance rather than technical deletion guarantees. Federal agencies continued struggling with governance infrastructure adoption. The year ended with governance platforms mature and research active, but the core tension—between regulatory deletion mandate and technical verification inability—unresolved at production scale.
  • 2026-Jan: EU AI Act and OMB M-25-22 enforcement drove governance from emerging practice to market license. Vendor governance frameworks matured (Databricks, Azure, AWS); strategic analysis from data leaders confirmed governance as 2026 priority and enablement layer for scaling AI. Simultaneously, peer-reviewed research published critical assessments: Columbia Law Review analyzed unlearning's policy limitations, new economic audit models exposed verification challenges, and GhostDrift analysis identified accountability evaporation risks in static compliance frameworks. Governance infrastructure and documentation standardized; deletion-from-model verification remained unsolved at scale.
  • 2026-Feb: Regulatory enforcement and compliance barriers continued to intensify. New peer-reviewed research (February arXiv papers) exposed fundamental verification gaps in unlearning: representation-level analysis questioned whether methods truly delete vs. suppress training information; perfect retraining attacks revealed deletion claims may inadvertently expose undeleted elements. OpenAI case study documented immense technical challenges of purging user data from complex ML pipelines. GDPR enforcement reached €5B cumulative fines with 20 US states enacting comprehensive privacy laws. Persistent gap between opt-out expectations and technical reality in training data governance underscored compliance obstacles. Governance infrastructure commoditized; deletion-from-model verification and audit methodologies remained fragmented and unproven.
  • 2026-Apr: Enterprise governance platforms advanced with Collibra launching dedicated AI Governance covering use cases, models, and agents, and Immuta treating AI agents as first-class governed data users with zero standing privileges — addressing a critical surface as 80% of Fortune 500 firms deploy GenAI but fewer than 40% have adequate governance. OpenMetadata reached GitHub Trending #1 (13,535 stars) driven by AI governance and semantic data features, while a production case study of an ungoverned customer support agent encountering SSNs in tickets illustrated the real costs of governance gaps. Unlearning remained practically unreliable: ICLR 2026 research showed adversarial prefix attacks cause 1,150x information leakage surges, EACL 2026 auditing frameworks revealed residual knowledge persists post-unlearning, and production quantization masks standard unlearning methods — while the EDPS TechSonar assessment confirmed GDPR-aligned deletion mechanisms remain unverifiable at scale.