Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

[Chart: each domain plotted on a scale from BLEEDING EDGE to ESTABLISHED]

🎯 Product & Design

AI applied from user research through to shipped product experience. Wide maturity spread: A/B testing and analytics are established, prototyping and design systems are good practice, but over half the domain sits at the leading or bleeding edge: generative UI, autonomous UX research, and AI-native product frameworks remain experimental. Most practices are stalled, with more energy in tooling announcements than production adoption.

13 practices: 2 established, 4 good practice, 4 leading edge, 3 bleeding edge

Product & Design -- Biweekly Brief

The headline: Most organizations using AI in product and design are getting faster without getting better -- and three independent studies now confirm only 5% achieve meaningful financial returns.

The Picture

AI adoption in product and design is near-universal: over 70% of designers and 73% of product managers use AI tools weekly. The tooling is mature and revenue-generating -- Adobe's AI creative tools now produce over $250M in annual recurring revenue, and prompt-to-product platforms like Lovable have crossed $400M. But the organizations pulling real value from these tools are a small minority. McKinsey reports its best clients earn $3 back for every $1 spent on AI, but only when they focus narrowly and invest in measurement infrastructure first. For everyone else, three research syntheses this cycle landed on the same number: 95% of AI pilots deliver zero impact on the bottom line. The gap between leaders and the pack is not closing -- it is hardening into a structural divide defined by organizational readiness, not tool selection.

This Fortnight

  • Design system tooling crossed a maturity threshold. Organizations using machine-readable design systems (component metadata encoded as structured data, not just visual tokens) are reporting roughly 10x throughput on feature work. Salesforce now measures AI design success by verification cost, not speed. This matters because design systems are becoming the interface layer that AI agents (software that acts on its own without being prompted) depend on -- without one, generated output is unreliable.

  • The European Accessibility Act started issuing fines. Six EU member states have fined organizations between 5,000 and 40,000 euros for non-compliance, and automated tools catch only 25-30% of accessibility issues. Meanwhile, web accessibility failure rates rose for the first time in six years -- to 95.9% of the top million sites -- with AI-generated code identified as a contributing factor. Any organization shipping AI-generated interfaces into European markets faces a compliance cost that most have not budgeted for.

  • Klarna publicly reversed its AI copy strategy. After claiming AI handled 80% of copy and saved $10M per year, Klarna's CEO acknowledged that "too much efficiency focus damaged quality." This is the highest-profile admission yet that AI throughput without quality governance (safety rules meant to stop AI doing the wrong thing) erodes brand value, and it validates what practitioners across the domain have been documenting for months.

  • Figma's stock fell 55% from its IPO high. AI-native design tools -- Claude Design from Anthropic, v0 with 4 million users, Lovable -- are generating complete interfaces from plain-language descriptions, bypassing traditional design software. But Lovable's security breach exposed over a million projects' source code, and practitioner assessments show Figma's own AI features are uneven: some deliver production value, others do not. The competitive picture is messy, not settled.

  • ROI accountability hit a tipping point. Three independent analyses (Terminal X, ViviScape, KPMG) converged: 65% of organizations report difficulty scaling AI use cases (double the prior year), and 62% cite skills gaps as the primary barrier. Measurement priorities are shifting from productivity metrics toward financial impact -- boards are asking for P&L proof, not usage dashboards.
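The "machine-readable design system" idea from the first bullet above can be sketched concretely: component metadata encoded as structured data gives an AI agent (and a verification gate) something to check generated output against. Everything below is illustrative; the component names, props, and schema are assumptions, not any vendor's actual format:

```typescript
// Hypothetical component metadata: names, props, and allowed values are
// illustrative, not taken from any real design system.
interface ComponentSpec {
  name: string;
  props: Record<string, string[]>; // prop name -> allowed values
}

const designSystem: ComponentSpec[] = [
  { name: "Button", props: { variant: ["primary", "secondary"], size: ["sm", "md", "lg"] } },
  { name: "Card", props: { elevation: ["0", "1", "2"] } },
];

interface GeneratedNode {
  component: string;
  props: Record<string, string>;
}

// Check an AI-generated UI node against the design system before it ships.
// Returns a list of violations; an empty list means the node passed.
function verifyNode(node: GeneratedNode): string[] {
  const spec = designSystem.find((s) => s.name === node.component);
  if (!spec) return [`unknown component: ${node.component}`]; // a "phantom" element
  const violations: string[] = [];
  for (const [prop, value] of Object.entries(node.props)) {
    const allowed = spec.props[prop];
    if (!allowed) {
      violations.push(`unknown prop "${prop}" on ${node.component}`);
    } else if (!allowed.includes(value)) {
      violations.push(`invalid ${prop}="${value}" on ${node.component}`);
    }
  }
  return violations;
}
```

A gate like this is cheap to run on every generated screen, which is the sense in which success gets measured by verification cost rather than raw throughput: without the structured metadata, there is nothing mechanical to verify against.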

Coming Up

  • European Accessibility Act enforcement will escalate. Fines are currently modest (under 40,000 euros), but the regulation allows penalties up to 5% of annual turnover. Organizations shipping digital products into EU markets should audit AI-generated interfaces for WCAG 2.2 compliance now, before enforcement scales.

  • Design-to-code and prompt-to-product are splitting into separate markets. The first requires developer expertise and component discipline; the second prioritizes speed over maintainability. Lovable's security breach is an early signal that the prompt-to-product category carries risks most organizations have not assessed. Evaluate which category your team actually needs before committing to a platform.

  • AI governance tooling is becoming a budget line item. Adobe, Figma, and Amplitude are all positioning governance infrastructure -- brand enforcement, design system contracts, verification workflows -- as the product. Organizations that have not allocated budget for AI verification and quality management will find it harder to extract value as the 95% pilot-failure rate hardens into institutional skepticism.
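The accessibility audit recommended above typically starts with automated static checks. The sketch below shows the flavor of what such tooling catches; real auditors like axe-core cover far more rules, and even then only an estimated 25-30% of issues. The node shape and the three rules here are simplified assumptions for illustration:

```typescript
// Simplified UI node; real audits walk a full DOM or accessibility tree.
interface UINode {
  tag: string;
  attrs: Record<string, string>;
  text?: string;
}

// A few representative WCAG-style checks. Rule coverage here is a tiny,
// illustrative subset of what automated accessibility tooling actually runs.
function auditAccessibility(nodes: UINode[]): string[] {
  const issues: string[] = [];
  for (const n of nodes) {
    if (n.tag === "img" && n.attrs["alt"] === undefined) {
      issues.push("img missing alt text");
    }
    if (n.tag === "button" && !n.text && !n.attrs["aria-label"]) {
      issues.push("button has no accessible name");
    }
    if (n.tag === "input" && !n.attrs["aria-label"] && !n.attrs["aria-labelledby"]) {
      issues.push("input lacks an associated label");
    }
  }
  return issues;
}
```

Wiring a check like this into CI flags regressions on every AI-generated change; given that automated coverage is partial, human review still has to handle the rest.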

What's Hard About This

  • Organizational readiness, not tool capability, determines outcomes. The 5% ROI success rate is not a technology gap -- it correlates with measurement infrastructure, data quality (only 44% of organizations report adequate data for AI), and cross-functional governance. Most organizations cannot close this gap by purchasing better tools.

  • Speed creates new risks that speed cannot fix. AI compresses research synthesis from 11 days to 4 hours and increases experiment volume 4.7x. But 66% of employees trust AI outputs without verification, and organizations running 10+ AI tools are four times more likely to act on bad data. The throughput gains are real; the verification infrastructure to make them safe largely does not exist.

  • AI-generated output is actively degrading quality baselines. Web accessibility failures rose for the first time in six years. AI-generated code fails in production 43% of the time. Design hallucinations (when an AI tool confidently makes things up) -- phantom buttons, ghost UI elements -- occur 3-5% of the time. These are not edge cases; they are baseline rates that compound at enterprise scale without human review at every stage.


Go deeper: the full Product & Design briefing -- the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.