Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ↔ ESTABLISHED

Feature prioritisation & roadmap support

BLEEDING EDGE

TRAJECTORY

Stalled

AI that helps prioritise features by synthesising customer signals, business impact, and engineering effort estimates. Includes RICE/ICE scoring assistance and roadmap scenario modelling; distinct from backlog management, which organises work rather than prioritising outcomes.

OVERVIEW

Feature prioritisation is the practice of systematically ranking product features and roadmap initiatives based on customer signals, business impact, and effort estimates. Rather than manually juggling priorities or defaulting to loudest-voice decision-making, teams use frameworks like RICE (Reach, Impact, Confidence, Effort) or ICE (Impact, Confidence, Effort) to make explicit trade-offs. AI is emerging as a tool to accelerate this process—synthesising customer feedback into prioritisation signals, modelling roadmap scenarios, and suggesting effort estimates based on historical data. The challenge is that AI-assisted decision-making requires high-quality inputs and rigorous validation: bad feedback data produces bad priority rankings, and model error can lead teams to over-commit on low-value work.
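
Concretely, RICE reduces to one arithmetic rule: score = (Reach × Impact × Confidence) / Effort. Below is a minimal Python sketch of how an AI-assisted tool might apply it; the feature names and numbers are illustrative, not drawn from any vendor's product:

    # Minimal RICE scoring sketch. All values are illustrative; in an
    # AI-assisted workflow, reach/impact/confidence would be synthesised
    # from customer feedback and effort from historical delivery data.
    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        reach: float       # users affected per quarter
        impact: float      # 0.25 (minimal) to 3.0 (massive)
        confidence: float  # 0.0 to 1.0
        effort: float      # person-months

        def rice(self) -> float:
            # RICE = (Reach x Impact x Confidence) / Effort
            return (self.reach * self.impact * self.confidence) / self.effort

    features = [
        Feature("SSO", reach=4000, impact=2.0, confidence=0.8, effort=6),
        Feature("Public API", reach=2500, impact=3.0, confidence=0.5, effort=8),
        Feature("Reporting", reach=6000, impact=1.0, confidence=0.9, effort=3),
    ]
    for f in sorted(features, key=lambda x: x.rice(), reverse=True):
        print(f"{f.name}: RICE = {f.rice():.0f}")

ICE drops the Reach term and is otherwise analogous. Either way, the ranking is only as good as its inputs, which is precisely the failure mode the rest of this entry documents.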

CURRENT LANDSCAPE

By May 2026, feature prioritisation had entered a phase in which sophisticated deployment evidence coexisted with a persistent execution crisis. On the vendor side, agentic capability had matured beyond assisted scoring into autonomous workflow systems. Productboard case studies documented Principal PMs at Amplitude deploying AI agents that automate weekly product briefs via analytics integrations, analyse metric changes with root-cause hypothesis generation, and run continuous session-replay analysis, demonstrating agent-driven discovery at scale. Productboard (6,000+ customers, including Microsoft, Zoom, and Salesforce) and ecosystem competitors (ServiceNow RICE/WSJF scoring, Koji research-to-prioritisation conversion, Atlassian Intelligence/Jira Rovo velocity-based scoring) offered production-grade AI-assisted prioritisation.

Real-world deployments showed quantified outcomes: CloudSync compressed prioritisation cycles from 3-hour debates to 15-minute decisions using AI-synthesised customer research and ARR-based ranking ($513K SSO > $490K API > $287K Reporting); text analytics deployments delivered a 35% backlog reduction and 12-point NPS gains, and prevented a $50M product recall; leading product teams cut discovery workflows from 50+ steps to 18. Adoption breadth expanded as well: IdeaPlan's survey of 1,200+ PMs found 73% using AI weekly (up from 45% in 2024), 31% using it for roadmap narratives, and savings of 5-8 hours per week; FAANG data showed 57% of PMs at Meta, Airbnb, and Dropbox using RICE, and 22% throughput gains with WSJF at Spotify and Amazon.

Yet adoption paradoxically masked stalled strategic execution. Mustafa Kapadia's April 2026 benchmark found that despite 73% weekly AI use, leverage of AI for core product work (roadmap prioritisation, strategic planning) remained below 10%, unchanged from six months prior, indicating that adoption breadth had not translated into decision quality. Critically, only 11.5% of PMs reported confident prioritisation decisions despite widespread tool access.

The execution gap persisted on three fronts:

  • Framework inadequacy for AI features. RICE broke when applied to AI projects; IdeaPlan's RICE-A added an AI Complexity dimension (data readiness 40%, model maturity 35%, operational overhead 25%), yet 80%+ of AI projects failed, with fewer than 20% scaling to production within 18 months (see the sketch after this list).
  • Organisational barriers remained primary, with governance emerging as the blocker, not technology. Deloitte's April survey of 3,235 leaders revealed that 88% use AI but only 20% achieve revenue growth; governance, infrastructure, data, and talent readiness all declined despite rising adoption. By May, senior practitioners had documented the deeper constraint: 94% of product managers use AI tools, yet 95% of GenAI pilots fail to deliver ROI, and the root cause is operating-model and process readiness, not tool capability. Frameworks promised objectivity but failed under pressure: ITONICS documented RICE scores becoming political (confidence inflated by optimism, effort set to what was acceptable rather than what was real), MoSCoW becoming politicised, and frameworks being bypassed entirely for politically significant decisions.
  • Systemic "research breakage" persisted: research findings disappeared into organisational fog (no clear owner) or were silently abandoned mid-roadmap. Wire's analysis showed AI tools accessing only 1 of 5 required context dimensions (strategic, user, technical, competitive, organisational), causing failure modes such as keyword frequency ranking SSO above onboarding without contract context.
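
The RICE-A sub-weights above are the only part of the extension these sources specify; how the composite AI Complexity factor enters the final score is not stated, so the discounting step in this sketch is an assumption for illustration only:

    # Hedged RICE-A-style sketch. The sub-weights (data readiness 40%,
    # model maturity 35%, operational overhead 25%) come from the text;
    # discounting the RICE score by the composite is an illustrative
    # assumption, not IdeaPlan's published formula.
    def ai_complexity(data_readiness: float, model_maturity: float,
                      operational_overhead: float) -> float:
        # Each input is a 0-1 risk score, where 1 means hardest.
        return (0.40 * data_readiness
                + 0.35 * model_maturity
                + 0.25 * operational_overhead)

    def rice_a(reach: float, impact: float, confidence: float,
               effort: float, complexity: float) -> float:
        # Plain RICE, discounted toward zero as AI complexity approaches 1.
        return (reach * impact * confidence) / effort * (1.0 - complexity)

    # An AI feature with strong RICE numbers but poor data readiness:
    c = ai_complexity(data_readiness=0.9, model_maturity=0.4,
                      operational_overhead=0.3)
    print(rice_a(reach=4000, impact=2.0, confidence=0.8, effort=6,
                 complexity=c))  # ~453, roughly half the undiscounted 1067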

The fundamental constraint remained unchanged since 2024: organisations lacked not technology but governance discipline, data quality, strategic clarity, and organisational alignment. Vendor lock-in fears (94% of IT leaders) and wildly varying deployment complexity (ServiceNow 4-8 weeks vs Canny 1-2 hours) compounded adoption barriers. Automation without human reflection risked accelerating feature factories rather than enabling genuine prioritisation discipline. New deployment methodologies emerged (outcome-first KPI scoring, RICE-A framework extensions, text analytics-driven input) but implementation remained constrained by the same organisational execution barriers that had blocked progress since 2024.

TIER HISTORY

Research: Jun-2023 → Jun-2023
Bleeding Edge: Jun-2023 → present

EVIDENCE (68)

— 18-year product leader documents critical gap: 94% PM AI adoption reported but 95% of GenAI pilots fail ROI; root cause is operating model readiness and organizational governance, not technology maturity—negative signal balancing positive vendor maturity evidence.

— Principal PMs at Amplitude (Frank Lee) and Productboard (Chris Patton) deploy AI agents for automated discovery, metric analysis, and opportunity detection; demonstrates sophisticated production-ready AI-assisted prioritization at scale.

— Front CPO (9k+ customers, $100M ARR) details how AI shifts feature prioritization from effort/impact to adoption outcomes and go-to-market clarity; discovery and delivery workflows are collapsing into a continuous cycle.

— Practical use case scoring framework (Value, Feasibility, Time-to-impact, Risk) for prioritizing AI initiatives with outcome-first KPI models; directly applicable to AI-informed feature roadmap prioritization methodology (a sketch of this style of scoring follows the evidence list).

— FAANG adoption evidence: 57% at Meta/Airbnb/Dropbox use RICE; 41% of scrum teams use MoSCoW; WSJF adoption at Spotify/Amazon reports 22% higher throughput—demonstrates framework uptake and comparative effectiveness signals.

— Three named deployments using AI text analytics for prioritization: a SaaS firm reduced backlog 35% and gained 12 NPS points; a MedTech firm automated 60% of compliance docs; a consumer electronics maker prevented a $50M recall—quantified outcomes from real-world prioritization adoption.

— Critical analysis of RICE, MoSCoW, Kano, and Value-vs-Effort: frameworks promise objectivity but fail under organizational pressure; RICE scores become political (confidence=optimism, effort=acceptable not real), frameworks bypassed for politically significant decisions; proposes cross-functional scoring and evidence validation.

— Follow-up to the 2025 AI Empowered Product Team Benchmark (54 CPO interviews): leveraging AI for core product work, including roadmap prioritization, accounts for <10% of use; no organic shift toward strategic work six months later—negative signal showing adoption breadth has not translated to decision quality.
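
The Value / Feasibility / Time-to-impact / Risk framework cited in the evidence above names its dimensions but not a combination rule; the equal weights and 1-5 scale in this sketch are therefore assumptions for illustration:

    # Hypothetical weighted-sum scoring over the four cited dimensions.
    # Equal weights and a 1-5 scale are assumptions; the source names
    # the dimensions but does not publish a combination rule.
    WEIGHTS = {"value": 0.25, "feasibility": 0.25,
               "time_to_impact": 0.25, "risk": 0.25}

    def score_initiative(ratings: dict[str, float]) -> float:
        # ratings: 1-5 per dimension; rate risk so that 5 means lowest
        # risk, keeping "higher is better" consistent across dimensions.
        return sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS)

    print(score_initiative({"value": 5, "feasibility": 3,
                            "time_to_impact": 4, "risk": 2}))  # 3.5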

HISTORY

  • 2023-H1: Productboard and other vendors launch AI capabilities for feedback analysis and prioritisation. Path & Planning case study documents real deployment using Productboard for OKR-aligned roadmap centralisation and improved forecasting. Broader survey data shows widespread caution about AI project ROI and high failure rates in enterprise AI initiatives.
  • 2024-Q1: Vendor ecosystem maturation: Productboard Spark (weighted scoring) enters beta; Strive and competitors expand. A Société Générale case study shows structured deployment with 100+ use cases and closed-loop value tracking, but enterprise adoption remains cautious—surveys show ~70% of generative AI projects fail to deliver value. Product strategists emphasise that framework effectiveness depends on strategic clarity and disciplined implementation, not AI alone.
  • 2024-Q2: Adoption accelerating to mainstream: 61% of PMs now report using AI/ML in their workflows. Zefi AI launches as dedicated VoC platform with roadmap prioritisation as explicit use case. Documented efficiency gains show 25-30% improvement in product development cycle speed. Cautionary data persists: enterprise deployments continue to struggle with data quality and ROI validation; priority framework effectiveness tied to strategic clarity rather than tool capability.
  • 2024-Q3: Vendor consolidation and cautionary signals: Productboard re-architects platform and scales for enterprise deployment (Salesforce, Zoom, Pitney Bowes confirmed as users). However, Gartner forecasts 30% of GenAI projects will be abandoned by EOY 2025 due to poor data quality, cost, and unclear ROI. Generative AI adoption reaches 39% of U.S. workforce (Harvard Kennedy School survey, Aug 2024), but real-world deployment of AI-assisted prioritisation continues to be hampered by data quality issues and integration complexity. The gap between vendor capability and reliable enterprise implementation widens.
  • 2024-Q4: Gap between vendor maturity and production readiness widens: Productboard launches Pulse AI for Voice of Customer integration. Enterprise AI spending surges to $13.8B (6x from 2023); 85% of enterprises are testing GenAI. Yet deployment stalls: only 22% are confident in their IT architecture; 60% of UK enterprises are not in production; AI project ROI declined to 47.3% from 56.7% in 2021; data quality and governance are cited as the leading obstacles. Workforce sentiment cools (excitement drops 47%→41%), with 48% of workers uncomfortable admitting AI use. Feature prioritisation remains trapped between vendor maturity and operational complexity.
  • 2025-Q1: Vendor momentum continues but the deployment crisis deepens. Productboard releases the Spark AI suite with agentic prioritisation capabilities; its CEO emphasises strategic integration over AI-first hype at SaaStr Summit. Simultaneously, S&P Global reports failure rates surging to 42% (up from 17%), with 46% of AI pilots failing to reach production. Industry analysis reveals 60-95% of AI initiatives stalled in "Pilot Purgatory"; tech leaders cite reliability concerns (45%) and integration challenges as top barriers. The market reached an inflection point: vendor tooling matured while enterprise execution deteriorated, widening the proof-of-concept-to-production gap.
  • 2025-Q2: Failure acceleration and ROI crisis materialise. 70% of product leaders report investing in AI/ML, and 75% recognise AI/data fluency as a critical PM competency, yet execution collapses simultaneously: 42% of companies scrapped most AI initiatives (vs 17% a year prior), and 46% of POCs were abandoned. MIT research shows 95% of AI pilots fail to scale; 70% of all AI initiatives never escape the pilot phase. Only 4% of companies achieve significant AI returns; average ROI is 3.7x, but 66% struggle to reach positive ROI. The practice reaches a critical juncture: vendor maturity is proven (Productboard, Airfocus, Craft.io firmly established) and feature prioritisation frameworks are understood, but enterprise execution remains fundamentally constrained by data quality, integration complexity, and measurement discipline, not technology capability.
  • 2025-Q4: Inflection reached. Deloitte survey of 1,854 execs: only 6% hit satisfactory AI ROI within a year. Productboard survey: 99% of PMs are experimenting with AI but only 8% say it is core to prioritisation. High-profile failures documented: Volkswagen Cariad ($7.5B loss), Taco Bell drive-thru (viral failures). UserIntuition analysis: 64% of delivered features miss adoption targets because frameworks amplify bad input rather than correcting it. The paradox crystallises: tooling matured, but organisations remained trapped by the same constraint, the quality of data and strategic clarity rather than technology capability.
  • 2026-Jan: Vendor production readiness confirmed; adoption barriers harden. Productboard publishes a case study of Pulse AI (processing 200k-1M feedback items) and its Spark agentic system in production, but UC Berkeley research finds that only 5% of enterprises see P&L impact from GenAI and that AI tooling can increase task completion time by 19%. Data quality, governance gaps, and organisational misalignment emerge as the primary adoption blockers, not technology maturity.
  • 2026-Feb: Deployment evidence and ROI reality collide. A P&G field experiment shows AI-enabled teams are 3x more likely to produce top-tier ideas, with 13-16% faster ideation cycles, confirming that the capability exists. Yet KPMG's survey of 2,500 executives reveals that only 24% achieve ROI across multiple AI use cases, with high performers at 4.5x ROI and the majority struggling. Leading product discovery teams demonstrate workflow compression (50+ steps to 18), but strategic misalignment persists: feature prioritisation remains constrained by vendor lock-in fears (94% of IT leaders), misconceptions about roadmap optimisation, and the fundamental tension between tool maturity and organisational execution discipline.
  • 2026-Mar: Vendor ecosystem expansion and deployment validation. ServiceNow embeds RICE/WSJF scoring, Koji launches AI-moderated research conversion, and a CloudSync case study shows ARR-based prioritisation ($513K SSO > $490K API). IdeaPlan survey of 1,200+ PMs: 73% weekly AI use, 31% for roadmap narratives, 5-8 hrs/week savings. Yet organisational readiness gaps widen: Deloitte survey (3,235 leaders) shows 88% use AI but only 20% achieve revenue growth, with governance and data readiness declining. Wire analysis reveals a critical flaw: AI tools access only 1 of 5 context dimensions, causing failure modes such as keyword frequency ranking SSO above onboarding without contract context. The paradox persists: capability maturity is confirmed, but organisational execution barriers (data quality, strategic clarity, cross-functional alignment) remain unchanged since 2024.
  • 2026-Apr: Framework inadequacy, governance failures, and vendor confidence peaks. IdeaPlan documented that standard RICE frameworks fail for AI features—proposing RICE-A with an AI Complexity dimension covering data readiness, model maturity, and operational overhead—amid evidence that 80%+ of AI projects fail and fewer than 20% scale to production within 18 months. Product-Led Alliance's 2026 PM survey found only 11.5% report confident prioritisation decisions despite 73% weekly AI use, confirming that adoption breadth has not translated into decision quality. Mustafa Kapadia's April benchmark follow-up revealed that core product work (roadmap prioritisation, strategic planning) remains <10% of AI use despite 73% weekly adoption—negative signal of stalled strategic execution. Systemic "research breakage" documented as structural governance failure: findings disappear through organisational fog or silent mid-roadmap abandonment. ITONICS analysis showed frameworks fail under organisational pressure (RICE scores become political, MoSCoW politicised, Value/Effort suffers from political feasibility bias). Productboard announced 30% workforce reduction and shift to "AI-only" operating model, signaling vendor conviction in production maturity. MetaCTO consulting documented that 88% of AI POCs never reach production, traditional 12-36 month roadmaps are obsolete, and 80.3% of AI projects fail to deliver intended value (RAND 2025)—critical negative signals revealing execution barriers persist despite vendor maturity.
  • 2026-May: Agentic deployment evidence, operating model crisis confirmed. Productboard published case studies of Principal PMs at Amplitude and Productboard deploying AI agents for autonomous discovery briefs, metric analysis, and opportunity detection—demonstrating sophisticated production-ready workflows. New prioritisation methodologies emerged: outcome-first KPI scoring frameworks for AI initiatives, text analytics deployments delivered quantified outcomes (35% backlog reduction, 12 NPS gains, $50M recall prevention). Yet senior practitioners documented the fundamental gap: 94% of PMs use AI tools but 95% of GenAI pilots fail ROI; root cause is organisational execution readiness, not technology maturity. FAANG framework adoption persisted (57% RICE at Meta/Airbnb/Dropbox, 22% throughput gains with WSJF at Spotify/Amazon). The paradox hardened—agentic capability matured, deployments delivered outcomes, but strategic leverage remained <10% of AI use. Organisational execution barriers (governance, data quality, strategic clarity, cross-functional alignment) remained the constraint.

TOOLS