Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ↔ ESTABLISHED

📊 Data & Analytics

AI for turning raw data into queryable, analyzable, actionable insight. Streaming analytics, MLOps, and feature engineering rank as good practice, with proven deployments at scale. The bulk sits at leading edge, held back not by tooling but by data quality and governance gaps: 60% of AI projects stall on data readiness. Nearly every practice we track shows a stalled trajectory.

16 practices: 5 good practice, 10 leading edge, 1 bleeding edge

Data & Analytics — Biweekly Brief

The headline: AI analytics tooling is mature and widely available, but 74% of its measurable value accrues to just 20% of companies -- the ones that invested in data foundations first.

The Picture

Most organizations now have access to AI-powered analytics -- natural-language querying (asking questions of data in plain English), automated dashboards, anomaly detection (flagging unusual patterns), and predictive modeling. The tools work. The problem is that most companies cannot use them effectively because their underlying data is not ready: poorly cataloged, inconsistently governed, siloed across departments. PwC surveyed 1,217 executives and found the gap between AI leaders and the rest is not about which AI they bought -- it is about whether their data infrastructure can support it. Only 7% of enterprises report fully AI-ready data systems. The rest face a widening competitive gap as leaders pull further ahead, compounding their advantage with every quarter of clean, governed data they accumulate.
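To make one of those capabilities concrete: anomaly detection at its simplest is a statistical check against recent history, not exotic AI. A minimal sketch in Python, assuming a plain list of daily metric values (the function and threshold here are illustrative, not any vendor's implementation):

```python
from statistics import mean, stdev

def flag_anomalies(values, window=7, threshold=3.0):
    """Flag points that deviate more than `threshold` standard
    deviations from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Only flag when the history has spread and the new point is far out.
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

daily_orders = [100, 102, 98, 101, 99, 103, 100, 250, 101, 97]
print(flag_anomalies(daily_orders))  # [7] -- the 250 spike
```

Production tools replace the rolling z-score with learned models, but the shape of the problem is the same — which is also why dirty or siloed input data degrades them so quickly.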

This Fortnight

  • Synthetic data earned its first regulatory endorsements. The UK's Financial Conduct Authority (FCA) published the results of a project deploying fully synthetic datasets for anti-money laundering testing, and the EU's Data Protection Supervisor issued formal governance guidance. Regulators are not just permitting synthetic data (artificially generated datasets that mimic real data patterns) -- they are building with it. If your compliance or QA teams have not evaluated synthetic data for testing workflows in regulated domains, the window of early-mover advantage is closing.

  • Snowflake's AI usage surged to 9,100+ weekly active accounts. That is 200% growth in AI-related workloads, with half of Snowflake's customer base now using its AI features. Natural-language data querying is crossing from experiment to default workflow at the platform level, which means procurement and IT teams should expect AI-assisted analytics as a standard feature in their next contract renewal, not an add-on.

  • Microsoft made Power BI Copilot's narrative mode mandatory for licensed users. At the same time, independent benchmarking across 40+ AI models found 15-18% error rates when generating narratives in medical and legal contexts. Finance teams testing AI-generated commentary should treat every output as a draft requiring human sign-off -- particularly for board-facing or regulatory materials.

  • Uber published three production case studies quantifying operational AI at scale. Its AthenaX system processes over a trillion daily messages; its D3 drift detection system catches data quality incidents 5x faster than manual monitoring; and its Bayesian forecasting system decomposes uncertainty into actionable categories. These are not proofs of concept -- they are engineering patterns that set the bar for what mature data operations look like.

Coming Up

  • EU AI Act data lineage mandates take effect for high-risk systems. Organizations deploying AI in hiring, credit, healthcare, or law enforcement will need to demonstrate end-to-end data lineage (tracking where data came from and how it was transformed). Most enterprises still track data dependencies in spreadsheets. Audit your current lineage tooling against the Act's requirements now -- remediation takes 6-12 months for a typical implementation.

  • First-generation AI analytics products are being retired. AWS Lookout for Equipment shuts down in October 2026, following Azure Anomaly Detector's end-of-life. Standalone AI analytics tools are being absorbed into broader platforms (Databricks, Snowflake, Microsoft Fabric). If your team relies on a standalone AI analytics vendor, assess platform migration risk before renewal conversations.

  • Agentic analytics (AI that explores data autonomously) is approaching enterprise pilots. Databricks Genie, Meta's internal deployment, and several startups are pushing AI systems that generate their own queries and surface insights without prompting. Current reliability sits at roughly 45% on real-world data. Watch this space, but do not bet production decisions on it yet -- the failure modes are subtle and the cost overruns are documented (96% of organizations deploying generative AI report unexpected costs).
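The lineage mandate above is easier to reason about with a concrete record in hand. A minimal sketch of what end-to-end lineage capture might look like, assuming a home-grown approach — the dataclass, field names, and example datasets here are illustrative, since the Act specifies outcomes, not a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageStep:
    """One hop in a dataset's history: where the data came from
    and what transformation produced it."""
    source: str          # upstream dataset or system
    transformation: str  # what was done (join, filter, model scoring, ...)
    owner: str           # accountable team or person
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A toy lineage chain for a hypothetical credit-scoring feature table.
lineage = [
    LineageStep("crm.customers", "deduplicated on customer_id", "data-eng"),
    LineageStep("payments.ledger", "aggregated to monthly balances", "data-eng"),
    LineageStep("features.credit_v2", "joined and scored by model v2.3", "ml-platform"),
]
for step in lineage:
    print(f"{step.source} -> {step.transformation} ({step.owner})")
```

The point of the sketch: a spreadsheet can hold these rows, but it cannot guarantee they are captured at transformation time — which is the gap audit tooling has to close.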

What's Hard About This

  • Data readiness is the bottleneck, not AI capability. Gartner confirms 60% of enterprise AI projects are abandoned due to data quality failures. High-performing organizations spend 60% of their AI budgets on data foundations -- quality, governance, integration -- before touching analytics tools. Most organizations invert this ratio, buying dashboards before building pipelines.

  • Privacy defenses are losing ground to AI-powered attacks. New PII redaction tools achieve 96-97% accuracy, but research shows AI-based deanonymization (re-identifying individuals from anonymized data) now works at 68% accuracy for $1-4 per profile. Enterprise telemetry finds nearly half of all secrets and a third of financial data leak through AI tools. Privacy automation is necessary but no longer sufficient on its own.

  • Benchmark performance does not predict production results. Text-to-SQL (converting questions to database queries) scores 87% on academic tests but drops to 10% on real enterprise databases. 62% of causal inference models underperform simple baselines on real-world data. Organizations deploying AI analytics based on vendor benchmark claims face painful recalibration in production.
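The privacy point above is easy to see in miniature: pattern-based redaction catches well-formed identifiers but says nothing about re-identification from what remains. A minimal sketch, assuming regex-only redaction (real tools layer ML entity recognition, checksums, and context rules on top of patterns like these):

```python
import re

# Illustrative patterns only; production redactors use many more detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Reach Jane at jane.doe@example.com, SSN 123-45-6789, ZIP 02139."
print(redact(msg))
# The email and SSN are masked, but the name and ZIP survive --
# exactly the quasi-identifiers that deanonymization attacks exploit.
```

That residue is why 96-97% redaction accuracy and effective deanonymization can both be true at once.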


Go deeper: the full Data & Analytics briefing -- the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.