The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that analyses product usage data and surfaces actionable insights about feature adoption, retention drivers, and user behaviour. Includes automated insight generation and metric explanation; distinct from automated EDA, which analyses any data rather than product metrics specifically.
AI-powered product analytics interpretation has outrun the organisations it aims to serve. Vendors now ship autonomous agents that generate hypotheses, investigate anomalies, and propose experiments from raw usage data: capabilities that were research-grade three years ago. The tooling works. The problem is that almost no one can use it effectively: surveys consistently find the vast majority of enterprises reporting zero measurable return on generative AI investments, and product analytics is no exception. The binding constraint has shifted from technical capability to organisational execution: data governance, cross-functional alignment, and the discipline to act on insights rather than simply surface them. This gap between what platforms can do and what teams actually achieve defines the practice's bleeding-edge status: genuinely powerful, demonstrably risky, and still far from routine. By May 2026, autonomous product analytics had reached platform ubiquity (Google Analytics Generated Insights, Amplitude Global Agent, GoodData's agentic anomaly detection suite), with named production deployments at scale (Mercado Libre, NTT DOCOMO, and Rappi, which reported +10% order growth). Yet governance gaps remain binding: Gartner reports that 73% of data leaders cite data quality as the primary AI barrier, and surveys document the paradox of organisations scaling to 10+ AI tools while making material decisions on demonstrably bad data. Only data infrastructure maturity and vetting discipline distinguish successful deployments from expensive pilots.
By May 2026, autonomous product analytics had achieved platform ubiquity and documented production momentum at scale. Amplitude released four production GA products (Global Agent for continuous behavioral understanding, Specialized Agents for async tracking and monitoring, AI Assistant for in-product support, Agent Analytics bridging product analytics with LLM observability) with Q1 ARR growth to $374M (17% YoY). GoodData shipped agentic anomaly detection skills integrated into its AI Assistant, moving from dashboards to conversational insight generation. Practitioner frameworks emerged (Johnny Mai's L6 framework for AI product metrics with task completion, hallucination rate, time-to-accuracy, and cost-per-valid-output thresholds) documenting how traditional metrics fail for AI-powered products. Named production deployments show measurable ROI: Rappi's Amplitude-powered analytics achieved +10% first-time orders and -30% customer acquisition cost reduction through AI-driven audience segmentation; Shopify Protect's real-time anomaly detection analysed 10B+ transactions for 99.7% approval rate and $350M annual savings.
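The outcome-focused metrics named above can be made concrete. Below is a minimal sketch of two L6-style metrics, hallucination rate and cost per valid output, computed from a hypothetical event log; the `events` schema and field names are illustrative assumptions for this example, not part of the framework or any vendor API.

```python
# Illustrative computation of two outcome-focused AI product metrics.
# The event schema below is hypothetical -- adapt field names to your
# own analytics export.

events = [
    {"task_completed": True,  "hallucinated": False, "cost_usd": 0.012},
    {"task_completed": True,  "hallucinated": True,  "cost_usd": 0.015},
    {"task_completed": False, "hallucinated": False, "cost_usd": 0.009},
    {"task_completed": True,  "hallucinated": False, "cost_usd": 0.011},
]

total = len(events)
hallucination_rate = sum(e["hallucinated"] for e in events) / total

# A "valid output" here means the task completed without a hallucination.
valid = [e for e in events if e["task_completed"] and not e["hallucinated"]]
total_cost = sum(e["cost_usd"] for e in events)
cost_per_valid_output = total_cost / len(valid) if valid else float("inf")

print(f"hallucination rate: {hallucination_rate:.0%}")         # 25%
print(f"cost per valid output: ${cost_per_valid_output:.4f}")  # $0.0235
```

Note how cost per valid output penalises both failed tasks and hallucinated ones: all spend counts in the numerator, but only trustworthy completions count in the denominator.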
Yet infrastructure barriers remain structural and binding. Gartner research documents that 73% of enterprise data leaders rank data quality as the primary AI barrier (above models or compute), with 60% reporting zero ROI on AI investments; the root cause is foundational data infrastructure gaps, not technology. Sisense's survey of 267 product leaders reveals the execution paradox: 48% trust AI insights, yet teams spend 40% of their time validating them before acting; 29% of AI initiatives remain stuck in pilot; 69% lack easily accessible analytics. Most critically, OneStream's survey of 350+ executives documents a governance-trust inversion at the moment of AI scaling: 47% admit making material decisions on bad data in the past 12 months, 72% report bad data costing $500K+, and executives scaling 10+ AI tools are 4x more likely to use demonstrably bad data. Amplitude's production learnings from 4,500+ customers identify the hard problem: insight verification is difficult, most organizations lack specialized context and observability infrastructure, and data governance at scale has not been solved. For the majority, the capability-infrastructure gap leaves analytics AI in pilot purgatory: vendors have shipped autonomous agents operating at scale, but the market has not yet built the data governance and verification discipline needed to move from pilots to sustained operational use.
— OneStream survey of 350+ executives: 47% made material decisions on bad data in past 12 months; 72% report bad data costs $500K+; executives scaling 10+ AI tools 4x more likely to use bad data—governance-trust paradox undermines analytics AI effectiveness.
— Amplitude's Q1 2026 GA launches: Global Agent (continuous behavioral understanding), Specialized Agents (async tracking, monitoring, sentiment), AI Assistant (in-product support), and Agent Analytics (bridging product analytics with LLM observability for AI quality measurement at scale).
— Survey of 267 product leaders: 48% trust AI insights but teams spend 40% of time validating them; 29% of AI initiatives stuck in pilot; 69% lack accessible analytics—operationalization and trust gaps remain binding constraints.
— Senior practitioner framework (FAANG-sourced) for outcome-focused metrics for AI products: L6 framework covers task completion, hallucination rate, time-to-accuracy, human-in-the-loop frequency, retention delta, and cost per valid output with multi-tier validation model.
— Gartner research: 73% of data leaders rank data quality as primary AI barrier (not models or compute); 60% report zero ROI; CRM data averages 25% critical error rate—foundational infrastructure gap constrains product analytics AI adoption.
— Shopify Protect case study: analyzed 10B+ transactions to achieve 99.7% approval rate, cut fraud chargebacks by 75%, and saved $350M annually via real-time anomaly detection—validates autonomous interpretation as production-critical infrastructure.
— 54% of enterprises deployed AI agents in core operations (up from 11% in 2024); 80% report measurable economic impact; 'agentic analytics' emerging as specific use case—documents mid-2026 production deployment momentum.
— Technical analysis: 60%+ of AI production failures trace to data quality, not models—null propagation, schema drift, stale indices, and inconsistent definitions compound reliability failures across multi-step workflows.
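The failure modes in that last point (null propagation, schema drift, stale data) are cheap to screen for before rows reach an AI agent. A minimal sketch follows; the `EXPECTED_SCHEMA` contents and the 24-hour freshness window are illustrative assumptions for the example, not drawn from any tool cited above.

```python
from datetime import datetime, timedelta, timezone

# Illustrative pre-flight checks for three data-quality failure modes.
# EXPECTED_SCHEMA and MAX_STALENESS are assumptions for this sketch.
EXPECTED_SCHEMA = {"user_id": str, "event": str, "ts": datetime}
MAX_STALENESS = timedelta(hours=24)

def check_rows(rows, now):
    issues = []
    for i, row in enumerate(rows):
        # Null propagation: missing values silently skew downstream metrics.
        nulls = [k for k in EXPECTED_SCHEMA if row.get(k) is None]
        if nulls:
            issues.append((i, f"nulls in {nulls}"))
        # Schema drift: fields present but with unexpected types.
        drifted = [k for k, t in EXPECTED_SCHEMA.items()
                   if row.get(k) is not None and not isinstance(row[k], t)]
        if drifted:
            issues.append((i, f"type drift in {drifted}"))
        # Staleness: data older than the freshness window.
        ts = row.get("ts")
        if isinstance(ts, datetime) and now - ts > MAX_STALENESS:
            issues.append((i, "stale row"))
    return issues

now = datetime(2026, 5, 1, tzinfo=timezone.utc)
rows = [
    {"user_id": "u1", "event": "signup", "ts": now - timedelta(hours=1)},
    {"user_id": None, "event": "click", "ts": now - timedelta(hours=2)},  # null
    {"user_id": "u3", "event": 42, "ts": now - timedelta(days=3)},  # drift + stale
]
for i, msg in check_rows(rows, now):
    print(f"row {i}: {msg}")
```

Gating agent runs on checks like these targets the upstream failures the analysis identifies, rather than trying to detect them after an insight has already been generated.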
2022-H1: Early tools (Mixpanel, Amplitude) faced pricing and data throttling concerns; PostHog emerged as an alternative enabling company-wide adoption and unthrottled ingestion. Modern data stack (Snowplow, dbt, BigQuery) established as custom path. Practitioner analysis revealed tool misalignment with recurring revenue models; Netflix's analytics failure and failed Parable product highlighted interpretation risks and limits of sophisticated metrics without context.
2022-H2: Major platform deployments confirmed (G2 scaled Amplitude across 100% of product managers) as enterprise adoption solidified around Mixpanel and Amplitude. Simultaneously, critical discourse emerged: analytics tools faced fundamental reliability challenges (GA4's ML predictions, GDPR compliance issues) and practitioner warnings about the gap between data hype and analytical rigor, emphasizing interpretation discipline and organizational maturity as limiting factors over data quantity.
2023-H1: Vendors accelerated AI-powered insight generation features (Microsoft Adoption Score GA, Amplitude AI enhancements). Research advanced with peer-reviewed LLM-based frameworks for extracting structured insights from feedback. However, consulting data documented that 95% of company-wide AI projects failed to deliver measurable results, highlighting execution barriers. Practitioner reports revealed persistent gap between data collection and actionable insight—teams possessed tools but struggled with translation and organizational adoption of insights.
2023-H2: Amplitude launched Ask Amplitude and Data Assistant as GA, advancing LLM-powered insight generation from beta to production. Named deployments (QuillBot, 35M MAUs) confirmed organizations moving beyond pilots into operational analytics workflows. However, adoption maturity remained constrained: only 10% of product leaders could validate all decisions with data; 10% of organizations had deployed GenAI to production. Practitioner analysis emphasized execution barriers—platforms could generate insights, but organizations lacked vetting discipline, cross-functional alignment, and maturity to translate insights into action.
2024-Q1: Amplitude advanced Session Replay as GA (Feb 2024), enabling integrated qualitative-quantitative insight generation at enterprise scale. Mixpanel released Benchmarks 2024 covering 7,500+ companies, signaling maturation of industry-wide analytics standards and interpretation baselines. PostHog demonstrated sustainable growth (6x YoY revenue, 5-day CAC payback) through integrated product insight platform. However, critical analysis documented the persistent insights-to-actions gap: customers generate insights but fail to execute, limiting perceived value and vendor pricing power—execution discipline, not tool capability, remained the constraint on ROI.
2024-Q2: Amplitude launched Snowflake-native analytics (June 2024), signaling ecosystem consolidation and data governance maturity. Mixpanel published benchmarks across 7,700+ customers and 11.7T events, establishing industry-wide analytics baselines and competitive reference points. HostAI achieved 50% improvement in LLM evaluation scores using integrated PostHog analytics and LangFuse, demonstrating real deployment of analytics interpretation within AI products. However, mid-2024 surveys revealed persistent adoption barriers: only 25% of planned AI projects fully implemented; 42% reported no significant benefits; 65% of executives not seeing value from AI investments—confirming that technical capability had outpaced organizational execution and vetting discipline.
2024-Q3: Platform vendors accelerated AI-powered simplification: Amplitude released "Amplitude Made Easy" with one-line setup and AI query engine, signaling response to usability barriers. MIT SMR survey showed 67% of leaders actively using GenAI for analytics with 48% expecting 100% ROI in 3 years, indicating mainstream adoption momentum. Mixpanel customers reported 35.4% time savings and 79% faster decision-making from self-serve analytics. However, practitioner analysis revealed continued limitations: language models cannot reliably perform math and require subject matter expertise for validation, necessitating human oversight. Platform pricing remained a constraint: event-based models create unpredictable costs at scale. Despite mainstream awareness, the practice remained bottlenecked by execution discipline and organizational maturity rather than technology—organizations knew how to measure but struggled to translate insights into action and maintain vetting discipline across teams.
2024-Q4: Platform vendors continued ecosystem maturation: Mixpanel launched Revenue Analytics integrating financial metrics with product analytics (22 trillion events/year processed), and Canal+ achieved 3x conversion improvement and 20M subscriber scale using Amplitude—demonstrating sustained deployment momentum. However, broad enterprise surveys confirmed persistent adoption barriers: 80% of AI projects failed with data quality and infrastructure as primary causes; only 2% of U.S./UK businesses achieved GenAI production deployment with 48% citing data security/privacy and 33% citing data readiness as blockers. The gap between platform capability and organizational execution widened, with enterprises struggling to move from pilots to sustained deployment despite year-over-year vendor innovation in AI-powered insight generation and ease-of-use.
2025-Q2: Vendors advanced autonomous product analytics capabilities: Amplitude launched AI Agents integrated with Amazon Bedrock for real-time friction detection and autonomous optimization; Mixpanel expanded with AI-powered insights and metric trees. Yet adoption fatigue accelerated despite innovation: 42% of companies abandoned most AI pilots (up from 17% in 2024), with 46% average abandonment across all initiatives and 45% burnout among frequent AI users. Field studies documented structural limitations: Danish research across 25k workers showed ChatGPT saved 3% of the workday but had zero wage impact; a FullStory survey revealed 87% of product teams collect behavioral data but only 25% act on it, with just 13% describing AI adoption as extensive. Organizational barriers dominated: BCG consultancy analysis found 70% of AI failures stem from people, process, and change management, not technology. Product analytics remained caught between vendor capability (autonomous insight generation) and organizational execution (inability to translate insights into consistent action), with enterprises treating AI as an experimental tool rather than operational infrastructure.
2025-Q3: Platform vendors accelerated AI capabilities: Mixpanel GA'd Spark, enabling natural language querying with transparent AI reasoning (July), continuing the shift toward simplified interpretation interfaces. Amplitude maintained momentum with 14% YoY revenue growth and 634 enterprise customers above $100K, despite broad ROI challenges. Yet the adoption divide deepened: MIT Project NANDA published a meta-analysis showing 95% of organizations report zero business return on GenAI investments despite $30-40B in spending; only 5% of custom AI tools reached production. Critical research from FERZ documented fundamental technical limitations: probabilistic AI systems cannot meet deterministic reliability requirements (stacking RAG and LLM reliabilities yields ≤77% reliable output), undermining trust in autonomous insight generation for compliance-sensitive domains. Forrester analysis revealed enterprise vendors embedding AI agents to deepen lock-in, pushing high-margin products rather than solving adoption barriers. The landscape bifurcated: technology advanced (natural language analytics, autonomous agents, integrated workflows), but organizational translation of insights into action remained the constraint. Product analytics interpretation had evolved from research-stage exploration (2021) through production tooling (2023-24) into a bifurcated market: vendors offered sophisticated capabilities and competed on ease of use, while enterprises remained unable to move from pilots to sustained deployment and ROI realization, held back by gaps in organizational maturity, data governance, and execution discipline.
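The FERZ point about stacked reliabilities is ordinary compound probability: if pipeline stages fail independently, their reliabilities multiply, so chaining two individually strong stages yields a noticeably weaker whole. A short sketch; the 0.90 and 0.85 per-stage figures are illustrative assumptions, not numbers from the FERZ research.

```python
from math import prod

def pipeline_reliability(stage_reliabilities):
    """Compound reliability of independent stages: every stage must succeed."""
    return prod(stage_reliabilities)

# Assumed figures for a two-stage RAG pipeline: retrieval surfaces the
# right context 90% of the time, and the LLM answers faithfully from
# that context 85% of the time.
r = pipeline_reliability([0.90, 0.85])
print(f"end-to-end reliability: {r:.1%}")  # ~76.5% despite strong stages
```

Each additional stage (reranking, summarisation, chart generation) multiplies in another factor below 1.0, which is why longer autonomous workflows degrade faster than any single-stage benchmark suggests.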
2025-Q4: Vendor capability reached peak autonomous sophistication: Amplitude launched AI Agents for fully autonomous hypothesis generation, anomaly investigation, and experiment design (December 2025); Mixpanel continued ecosystem maturation with advanced AI-powered insights. Yet adoption stalled further: MIT reported 95% of organizations saw zero return from GenAI spending; McKinsey found only 39% of companies achieved EBIT improvement; only 9.7% of U.S. firms deployed production AI by mid-year. Critical research revealed the capability-insight gap: AI achieved 91% factual accuracy in data synthesis but only 67% strategic insight capture, requiring human validation that many organizations lacked. Organizational barriers dominated—74% of companies struggled to scale beyond pilots due to people/process/change management, not technology. Product analytics interpretation remained caught between vendor autonomy innovation and enterprise execution paralysis, with the discipline bifurcating into sophisticated tooling (bleeding-edge capability) serving a constrained base of mature, well-governed enterprises while 95% of organizations abandoned pilots before ROI realization.
2026-Jan: Major vendors continued AI sophistication launches (Amplitude AI platform, Mixpanel metric trees with natural language querying), positioning autonomous product analytics as mainstream capability. Real-world deployments emerged: Yum! Brands deployed Amplitude AI Agents for 24/7 autonomous analytics cycles; Dun & Bradstreet built a multilayered data resilience framework for AI trust. However, critical barriers persisted: only 11% of AI agents reach production, with data fragmentation and integration complexity cited as primary blockers; 54% of organizations back up <40% of AI data, creating reliability risks. PostHog expanded qualitative-quantitative fusion, integrating customer feedback with LLM traces. The gap between vendor capability and organizational capability widened: platforms achieved autonomous insight generation while enterprises struggled with the data infrastructure maturity and verification discipline required for production deployment.
2026-Feb: Vendors demonstrated sustained autonomous agent deployments and performance benchmarking: media company Complex deployed Amplitude AI agents for real-time customer behavior analysis, and Amplitude published a Global Agent evaluation showing 76% overall accuracy, a 7x improvement over six months. Mixpanel's 2026 benchmarks quantified the adoption shift (26% YoY device growth but declining event volume), signaling market maturation from exploration to operational execution. Industry positioning shifted focus to agentic systems and ROI as the AI value concentration point. Yet adoption barriers remained structural: 95% of organizations reported zero ROI on generative AI spending, with data readiness and workflow integration as primary constraints. Exposed vendor trade-offs (PostHog: powerful features vs. high engineering overhead) highlighted that capability differentiation had plateaued: market advantage shifted to organizational maturity and data infrastructure readiness rather than technology advancement.
2026-Mar: Autonomous product analytics reached platform ubiquity. Google Analytics' February 2026 Generated Insights feature brought automatic anomaly detection and plain-language trend summaries to the world's most-used analytics tool; Amplitude GA'd its AI Agents (76% accuracy) with production deployments at NTT DOCOMO and Mercado Libre reporting improved decision velocity and reduced customer acquisition costs. Benchmarks across autonomous data agents (Energent 94.4%, Tableau Pulse, Power BI Copilot) show analysts saving 3+ hours daily on manual extraction. Yet governance gaps remain the binding constraint: multi-expert analysis documents that AI cannot fix underlying data quality gaps, 80% of enterprise data stays unstructured, and organizations remain stuck in pilot purgatory despite ubiquitous tooling—with proprietary context identified as both the critical differentiator and the hardest capability to implement.
2026-Apr: Practitioner deployments accelerated alongside critical barriers documentation. DeFacto achieved 4x faster experimentation and 2% revenue increase using Amplitude; Pipp cut reporting from 3 weeks to 30 minutes and achieved 25% churn reduction using Querio AI analytics, validating ROI for well-governed teams. PostHog reached $58M ARR (112% YoY) with 176K platform companies and strategic vision for autonomous AI agents in feedback loops. However, comprehensive data (Mixpanel benchmarking 3.7T events across 12K companies; Snowflake/Omdia survey of data readiness; HouseofMVPs AI failure analysis) documented that practitioners face fundamental barriers: data governance gaps (52% of orgs cite data quality as primary AI blocker, surpassing technical talent and budget concerns for first time); predictive analytics AI projects fail 64% of the time with only 15% true success; 79% of organizations face data-centric AI challenges despite 92% already using data for LLMs. Product analytics interpretation remains bifurcated—sophisticated deployment delivering measurable ROI for mature, governance-ready teams, while the majority remain constrained by data infrastructure and interpretation discipline required to act on AI-generated insights reliably.
2026-Q2 (late Apr): Mid-year evidence documents sustained production momentum against persistent infrastructure barriers. dbt Labs 2026 survey (363 analytics professionals) shows 72% prioritize AI-assisted workflows and 71% cite hallucinated outputs as top concern, with trust in data jumping 66%→83% YoY—adoption acceleration without equivalent governance maturity. Ampcome mid-year analysis documents 54% of enterprises deployed AI agents in core operations (up from 11% two years prior), with 80% reporting economic impact; agentic analytics emerging as operational use case. Yet Amplitude's production learnings from 4,500+ enterprise customers reveal analytics is harder than coding for autonomous AI because output verification is difficult and most orgs lack specialized context/observability infrastructure. Cloudera's Data Readiness Index (1,270 IT leaders) shows infrastructure paradox: 96% integrated AI but 80% constrained by data access, only 18% fully govern data. Technical analysis confirms 60%+ of AI failures trace to upstream data quality (null propagation, schema drift, stale indices) rather than models. Denodo survey (850 executives) documents adoption barriers: 66% require real-time data for trust, 63% struggle finding context, 80% face data access constraints. The discipline remains bifurcated: vendors have achieved autonomous agent deployment at scale (platform ubiquity); infrastructure and governance maturity remain the binding constraints on broad operationalization.
2026-May: Amplitude GA'd four autonomous analytics products (Global Agent, Specialized Agents, AI Assistant, Agent Analytics) with Q1 ARR at $374M and LLM observability bridging now a shipping feature. Governance-trust paradox sharpened: Sisense survey of 267 product leaders found 48% trust AI insights but teams spend 40% of time validating them before action, while OneStream's 350+ executive survey showed organisations scaling to 10+ AI tools are 4x more likely to act on demonstrably bad data—confirming that tooling proliferation compounds rather than resolves data quality risk.