The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that helps prioritise features by synthesising customer signals, business impact, and engineering effort estimates. Includes RICE/ICE scoring assistance and roadmap scenario modelling; distinct from backlog management, which organises work rather than prioritising outcomes.
Feature prioritisation is the practice of systematically ranking product features and roadmap initiatives based on customer signals, business impact, and effort estimates. Rather than manually juggling priorities or defaulting to loudest-voice decision-making, teams use frameworks like RICE (Reach, Impact, Confidence, Effort) or ICE (Impact, Confidence, Effort) to make explicit trade-offs. AI is emerging as a tool to accelerate this process—synthesising customer feedback into prioritisation signals, modelling roadmap scenarios, and suggesting effort estimates based on historical data. The challenge is that AI-assisted decision-making requires high-quality inputs and rigorous validation: bad feedback data produces bad priority rankings, and model error can lead teams to over-commit on low-value work.
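For concreteness, here is a minimal sketch of the two frameworks as they are conventionally computed: RICE divides Reach × Impact × Confidence by Effort, and ICE is read here per the (Impact, Confidence, Effort) framing above, though some teams score Ease instead of dividing by Effort. The Feature fields and example numbers are illustrative, not drawn from the index.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    reach: float       # e.g. users affected per quarter
    impact: float      # 0.25 (minimal) to 3 (massive), Intercom-style scale
    confidence: float  # 0.0-1.0
    effort: float      # person-months

def rice(f: Feature) -> float:
    # RICE = (Reach * Impact * Confidence) / Effort
    return f.reach * f.impact * f.confidence / f.effort

def ice(f: Feature) -> float:
    # ICE per the (Impact, Confidence, Effort) reading above; some teams
    # multiply by Ease instead of dividing by Effort.
    return f.impact * f.confidence / f.effort

backlog = [
    Feature("SSO", reach=800, impact=2.0, confidence=0.8, effort=4),
    Feature("Reporting", reach=1500, impact=1.0, confidence=0.5, effort=3),
]
for f in sorted(backlog, key=rice, reverse=True):
    print(f"{f.name}: RICE={rice(f):.0f}  ICE={ice(f):.2f}")
```

The explicit Confidence and Effort terms are what AI assistance targets: synthesised feedback feeds the Reach and Impact inputs, while historical delivery data informs Effort, which is also where bad inputs propagate directly into bad rankings.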
By May 2026, feature prioritisation paired sophisticated deployment evidence with a persistent execution crisis. On the vendor side, agentic capability had matured beyond assisted scoring into autonomous workflow systems. Productboard case studies documented Principal PMs at Amplitude deploying AI agents that automate weekly product briefs via analytics integrations, analyse metric changes with root-cause hypothesis generation, and perform continuous session replay analysis, demonstrating agent-driven discovery at scale. Productboard (6,000+ customers, including Microsoft, Zoom, and Salesforce) and ecosystem competitors (ServiceNow RICE/WSJF scoring, Koji research-to-prioritisation, Atlassian Intelligence/Jira Rovo velocity-based scoring) offered production-grade AI-assisted prioritisation.

Real-world deployments showed quantified outcomes: CloudSync compressed prioritisation cycles from 3-hour debates to 15-minute decisions (AI-synthesised customer research plus ARR ranking: $513K SSO > $490K API > $287K Reporting); text analytics deployments delivered a 35% backlog reduction and a 12-point NPS gain, and prevented a $50M product recall; leading product teams reduced discovery workflows from 50+ steps to 18. Adoption breadth expanded: IdeaPlan's survey of 1,200+ PMs found 73% using AI weekly (up from 45% in 2024), 31% using it for roadmap narratives, and savings of 5-8 hours/week; FAANG data showed 57% at Meta/Airbnb/Dropbox using RICE and 22% throughput gains with WSJF at Spotify/Amazon.

Yet adoption breadth masked stalled strategic execution. Mustafa Kapadia's April 2026 benchmark found that despite 73% weekly AI use, AI leverage for core product work (roadmap prioritisation, strategic planning) remained below 10%, unchanged from six months prior: adoption breadth had not translated into decision quality. Critically, only 11.5% of PMs reported confident prioritisation decisions despite widespread tool access.
The execution gap persisted on three fronts. (1) Framework inadequacy for AI features: RICE broke when applied to AI projects. IdeaPlan's RICE-A extension added an AI Complexity dimension (data readiness 40%, model maturity 35%, operational overhead 25%; a sketch of the weighting follows below), yet 80%+ of AI projects still failed, with fewer than 20% scaling to production within 18 months. (2) Organisational barriers remained primary: governance, not technology, emerged as the blocker. Deloitte's April survey of 3,235 leaders found 88% using AI but only 20% achieving revenue growth, while governance, infrastructure, data, and talent readiness declined despite rising adoption. By May, senior practitioners had documented the deeper constraint: 94% of product managers use AI tools, yet 95% of GenAI pilots fail to deliver ROI, and the root cause is operating model and process readiness, not tool capability. Frameworks promised objectivity but failed under organisational pressure: ITONICS documented RICE scores turning political (confidence inflated into optimism, effort set to what was acceptable rather than real), MoSCoW categories being politicised, and frameworks being bypassed outright for politically significant decisions. (3) Systemic "research breakage" persisted: research findings disappeared into organisational fog with no clear owner, or were silently abandoned mid-roadmap. Wire's analysis showed AI tools accessing only one of the five required context dimensions (strategic, user, technical, competitive, organisational), producing predictable failure modes such as keyword frequency ranking SSO above onboarding because the tool lacked contract context.
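As a concrete reading of the RICE-A extension, the sketch below composes a plain RICE score with the cited sub-factor weights (40/35/25). The source gives only those weights; how IdeaPlan actually folds the composite into the final score is not stated here, so treating it as a multiplicative discount, along with the 0-1 input scales, is an assumption.

```python
def ai_complexity(data_readiness: float, model_maturity: float,
                  operational_overhead: float) -> float:
    """Weighted AI Complexity composite using the RICE-A weights cited
    above (40/35/25). Inputs are scored 0.0 (worst) to 1.0 (best);
    operational_overhead is scored as manageability, so higher is better."""
    return (0.40 * data_readiness
            + 0.35 * model_maturity
            + 0.25 * operational_overhead)

def rice_a(reach: float, impact: float, confidence: float, effort: float,
           data_readiness: float, model_maturity: float,
           operational_overhead: float) -> float:
    # ASSUMPTION: the composite is applied as a multiplicative discount on
    # the plain RICE score; IdeaPlan's actual composition is not specified
    # in the source and may differ.
    base = reach * impact * confidence / effort
    return base * ai_complexity(data_readiness, model_maturity,
                                operational_overhead)

# An AI feature with strong data but an immature model is discounted
# relative to its plain RICE score (320 here):
print(rice_a(800, 2.0, 0.8, 4,
             data_readiness=0.9, model_maturity=0.4, operational_overhead=0.6))
```

Whatever the exact composition, the point of the extension survives the assumption: an AI feature's rank should fall as data readiness and model maturity fall, which plain RICE cannot express.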
The fundamental constraint had remained unchanged since 2024: organisations lacked not technology but governance discipline, data quality, strategic clarity, and organisational alignment. Vendor lock-in fears (cited by 94% of IT leaders) and wildly varying deployment complexity (ServiceNow 4-8 weeks vs. Canny 1-2 hours) compounded adoption barriers. Automation without human reflection risked accelerating feature factories rather than enabling genuine prioritisation discipline. New deployment methodologies emerged (outcome-first KPI scoring, RICE-A framework extensions, text analytics-driven inputs), but implementation remained constrained by the same organisational execution barriers that had blocked progress since 2024.
— An 18-year product leader documents the critical gap: 94% PM AI adoption is reported, yet 95% of GenAI pilots fail to deliver ROI; the root cause is operating model readiness and organisational governance, not technology maturity. A negative signal balancing the positive vendor-maturity evidence.
— Principal PMs at Amplitude (Frank Lee) and Productboard (Chris Patton) deploy AI agents for automated discovery, metric analysis, and opportunity detection; demonstrates sophisticated, production-ready AI-assisted prioritisation at scale.
— Front's CPO (9k+ customers, $100M ARR) details how AI shifts feature prioritisation from effort/impact to adoption outcomes and go-to-market clarity; discovery and delivery workflows are collapsing into a continuous cycle.
— Practical use case scoring framework (Value, Feasibility, Time-to-impact, Risk) for prioritising AI initiatives with outcome-first KPI models; directly applicable to AI-informed feature roadmap prioritisation methodology (a sketch follows this list).
— FAANG adoption evidence: 57% of PMs at Meta/Airbnb/Dropbox use RICE; 41% of scrum teams use MoSCoW; teams using WSJF at Spotify/Amazon report 22% higher throughput. Demonstrates framework uptake and comparative-effectiveness signals.
— Three named deployments using AI text analytics for prioritisation: a SaaS vendor reduced its backlog 35% and gained 12 NPS points; a MedTech firm automated 60% of compliance documentation; a consumer electronics maker prevented a $50M recall. Quantified outcomes from real-world prioritisation adoption.
— Critical analysis of RICE, MoSCoW, Kano, and Value-vs-Effort: frameworks promise objectivity but fail under organisational pressure; RICE scores become political (confidence inflates to optimism, effort is set to what's acceptable rather than real) and frameworks are bypassed for politically significant decisions; proposes cross-functional scoring and evidence validation.
— Follow-up to the 2025 AI Empowered Product Team Benchmark (54 CPO interviews): AI leverage for core product work, including roadmap prioritisation, accounts for <10% of use, with no organic shift toward strategic work six months later. A negative signal showing adoption breadth has not translated into decision quality.
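To ground the Value/Feasibility/Time-to-impact/Risk framework cited in the list above, here is a hypothetical composition. The source names the four dimensions and the outcome-first KPI framing, but not the weights or the combining rule, so the weighted sum, the 0-1 scales, and the example candidates below are all assumptions.

```python
# Hypothetical weights for the (Value, Feasibility, Time-to-impact, Risk)
# use case scoring framework; the source does not specify them.
WEIGHTS = {"value": 0.4, "feasibility": 0.25, "time_to_impact": 0.2, "risk": 0.15}

def use_case_score(value: float, feasibility: float,
                   time_to_impact: float, risk: float) -> float:
    """All inputs 0.0-1.0; time_to_impact and risk are inverted so that
    faster payoff and lower risk raise the score."""
    return (WEIGHTS["value"] * value
            + WEIGHTS["feasibility"] * feasibility
            + WEIGHTS["time_to_impact"] * (1 - time_to_impact)
            + WEIGHTS["risk"] * (1 - risk))

# Outcome-first framing: tie each candidate to the KPI it should move.
candidates = {
    "AI ticket triage (KPI: first-response time)": use_case_score(0.8, 0.9, 0.2, 0.3),
    "AI roadmap narrative (KPI: planning cycle time)": use_case_score(0.6, 0.7, 0.5, 0.2),
}
for name, score in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {name}")
```

The design choice worth noting is the outcome-first framing: each candidate carries the KPI it is supposed to move, so the score ranks expected movement on a named metric rather than feature output for its own sake.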