The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
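The index does not publish its scoring formula, but the idea of a "weighted maturity" per domain can be sketched in a few lines. In this illustrative sketch, every practice name, maturity score, and evidence weight is hypothetical; the weighting scheme (an evidence-weighted mean) is one plausible choice, not the index's actual method.

```python
# Hypothetical sketch of a weighted domain-maturity score.
# Practice names, scores, and weights are illustrative only.

# Each practice: (name, maturity score on a 0-1 scale, evidence weight,
# e.g. number of documented production deployments).
practices = [
    ("dashboard generation", 0.85, 12),
    ("pipeline automation", 0.70, 8),
    ("narrative generation", 0.55, 3),
]

def domain_maturity(practices):
    """Evidence-weighted mean of per-practice maturity scores."""
    total_weight = sum(w for _, _, w in practices)
    return sum(score * w for _, score, w in practices) / total_weight

print(round(domain_maturity(practices), 3))  # one dot's position on the chart
```

Weighting by evidence rather than averaging raw scores means a practice with one flashy demo moves the dot less than one with a dozen documented production deployments.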
AI that generates data pipeline configurations and creates dashboards and reports from data sources. Includes automated ETL generation and chart/report creation from natural language; distinct from narrative generation which produces written explanations rather than visual outputs or pipeline code.
AI-assisted dashboard and pipeline generation has reached maturity on technical capability but remains constrained by organisational readiness. The tools demonstrably work: Meta processes 4 petabytes daily with AI-driven autoscaling, Walmart realised $5.6M in annual savings through self-service analytics, and major enterprises (Capital One with 16k+ users, Daiso with 200 users across 40 dashboards) run production dashboards via natural-language queries and semantic layers. Yet this technical success masks a persistent adoption plateau. An NBER survey of 6,000 executives found that 80% of AI-adopting firms report zero measurable productivity impact, and Gartner 2026 data shows only 1 in 5 AI investments delivering measurable ROI despite 44% of organisations having implemented semantic layers. The barrier is not tooling but readiness: high-performing organisations allocate 60% of AI spend to data foundations (quality, governance, integration) rather than to dashboarding tools. Logitech's experience exemplifies the paradox: a fully instrumented AI analytics programme for 6,000 employees revealed usage metrics uncorrelated with business impact, forcing a shift to deletion-based measurement. Dashboard generation has reached feature parity; pipeline automation is gaining traction at scale (Trek Bikes, 80% ETL acceleration; Nissan, 50% timeline reduction). The defining tension remains whether organisations can sustain the governance and data-quality discipline required for production AI pipelines. This is a leading-edge practice whose technical capability has measurably outrun organisational adoption readiness.
Dashboard and pipeline generation platforms reached technical production parity in Q2 2026. AWS rolled out the Amazon QuickSight Generate Analysis feature (May GA), which generates multi-sheet, production-ready dashboards from natural language in minutes; Databricks shipped AI/BI to production GA with Genie spaces (a natural-language interface) and scheduled insights (automated recurring reports), complemented by the Lakeflow Pipelines Editor with agentic code generation for ETL; and GoodData announced production-ready dbt integration for automatic semantic-layer generation from dbt models, signalling ecosystem consolidation around dbt as the transformation standard. Real-world deployments demonstrate measurable production gains: AWS's internal TARA analytics system (deployed production-wide) achieved a 48% query-accuracy improvement and a 90% latency reduction (2–3 minutes to 10 seconds) using semantic layers with conversational AI; Matillion's agentic pipeline generation (Maia) reduced ETL development from 40–50 hours to minutes for complex multi-source pipelines, with named deployments at the consulting firm LTM; and healthcare and financial services deployments showed 2.8x–10x productivity gains from LLM-assisted ETL and compliance automation.
Yet structural adoption barriers persist and widen. The Gartner Data & Analytics Summit 2026 (248 data management leaders) found that organisations will abandon 60% of AI projects through 2026 due to insufficient data readiness: 85% of failures cite data quality, and only 12% of organisations possess data of sufficient quality for AI. Fivetran's 2026 benchmark (500 senior data leaders) revealed that 97% report pipeline failures delaying AI initiatives, 53% of data-team time is spent on maintenance, the average enterprise runs 328 pipelines, and 73% report unmet ROI. The broader adoption picture is similar: only 7% of enterprises (Cloudera/HBR survey of 1,574 IT leaders) report completely AI-ready data, with 60% of initiatives abandoned due to foundational infrastructure gaps. A critical governance gap has also emerged: semantic layers validate schema and freshness but miss distributional shifts (e.g., upstream defaults silently invalidating filter logic), exposing data-quality vulnerabilities in production. The practice remains technically leading-edge but operationally constrained by the data readiness, integration complexity, and governance discipline required for adoption at production scale.
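The governance gap above, where schema and freshness checks pass while the data's distribution silently shifts, can be caught with a simple drift monitor. A minimal sketch, assuming a categorical column and an illustrative 10-point alert threshold (column values and threshold are hypothetical, not from any named deployment):

```python
# Minimal categorical-drift check: schema-valid, fresh data can still
# shift distribution (e.g. an upstream change defaulting a column).
from collections import Counter

def frequency_drift(baseline: list, current: list) -> dict:
    """Absolute change in relative frequency per category."""
    def freqs(values):
        n = len(values)
        return {k: v / n for k, v in Counter(values).items()}
    b, c = freqs(baseline), freqs(current)
    return {k: abs(c.get(k, 0.0) - b.get(k, 0.0)) for k in set(b) | set(c)}

# Hypothetical scenario: upstream starts defaulting `channel` to "unknown".
# Schema and freshness checks still pass; the distribution moves sharply.
baseline = ["web"] * 70 + ["store"] * 25 + ["unknown"] * 5
current  = ["web"] * 30 + ["store"] * 10 + ["unknown"] * 60

drift = frequency_drift(baseline, current)
alerts = {k for k, d in drift.items() if d > 0.10}  # flag >10-point moves
print(sorted(alerts))
```

A check like this sits alongside, not instead of, schema validation: it is exactly the class of failure that the "upstream defaults silently invalidating filter logic" example describes.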
Pipeline automation shows accelerating deployment, with real-world examples: Walmart ($5.6M annual savings, 90% faster analysis), Trek Bikes (80% ETL acceleration), Nissan (50% timeline reduction), Meta (4 petabytes/day with AI autoscaling), and GroupBWT's production ETL serving 30+ cities from 7 heterogeneous sources with source isolation and a 50% developer-productivity gain. Yet critical blockers persist. Data fragmentation, not tooling, remains the dominant one: a Fortune 500 retailer faced 47 data silos requiring 23 manual exports before it could deploy AI. Industry-wide, 88% of AI agents fail to reach production, 60% of AI projects are abandoned due to data quality issues, and 42% of US enterprises abandon projects before they reach production. The Fivetran benchmark reveals the operational overhead: 53% of data-team time spent on maintenance, an average of 328 pipelines per enterprise, and 73% reporting unmet ROI. Logitech's analytics infrastructure revealed that usage metrics (prompts, tokens, MAU) are uncorrelated with business value, requiring a measurement discipline most organisations lack. High-performing firms allocate 60% of AI spend to data foundations (quality, governance, integration) rather than to dashboarding tools, a maturity threshold most organisations have not reached. The vendor ecosystem ships continuously; the constraint is whether enterprise data estates possess sufficient governance, integration maturity, and measurement discipline to operationalise what vendors build.
— AWS announces GA of Generate Analysis feature: multi-sheet production-ready dashboards from natural language in minutes rather than hours, available across all AWS regions for Enterprise/Pro users.
— Major BI platform announces production-ready dbt integration: automatic logical data model and metrics generation from dbt models with Apache Arrow caching—ecosystem signal of semantic layer maturity.
— Databricks AI/BI production GA: Genie spaces (natural language interface), scheduled insights (recurring automated reports), unified pipeline and BI generation via Lakeflow Pipelines Editor.
— AWS internal case study: TARA technical analytics system deployed production-wide; 48% query accuracy improvement, near-zero failures, 90% latency reduction (2-3 min → 10 sec) using semantic layer + conversational AI.
— Production case study from consulting firm LTM: Matillion Maia AI reduces ETL pipeline development from 40-50 hours (10 tables) to minutes via metadata-driven framework and agentic generation.
— Cloudera/HBR survey of 1,574 IT leaders: only 7% report completely AI-ready data; 60% of AI projects abandoned due to foundational infrastructure gaps—quantifies readiness barrier beyond tool capability.
— Gartner Data & Analytics Summit 2026: 60% of AI projects will be abandoned through 2026 due to data quality; 85% of failures cite data infrastructure; only 12% have sufficient quality—critical negative signal.
— Pharma deployment: regulated RAG system (21 CFR Part 11, GDPR) built in 90 days with compliance agents; regulatory response 10 days → <2 hours; demonstrates AI-ready pipeline maturity for governed environments.