Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

Pick a domain below to explore practices

BLEEDING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

LEADING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

GOOD PRACTICE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

ESTABLISHED

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💹 FINANCE & ACCOUNTING
👥 PEOPLE & TALENT

🎧 Customer Operations

AI for supporting, retaining, and understanding customers after the sale. This domain carries the highest concentration of good-practice tiers: chatbots, ticket routing, sentiment analysis, and voice-of-customer analytics are deployed at scale in most industries. Bleeding-edge frontiers include autonomous resolution without human escalation and real-time emotion detection. Momentum is steady, but churn prediction and proactive outreach remain stalled.

18 practices: 11 good practice, 3 leading edge, 4 bleeding edge

Where AI Stands in Customer Operations

Customer operations is the most mature AI domain in enterprise technology — and the one where the gap between what works and what organisations actually achieve has become the defining story. Across eighteen practices spanning chatbots, agent assist, quality monitoring, voice AI, and analytics, the tooling is overwhelmingly production-ready. Zendesk, Salesforce, Intercom, Microsoft, and AWS all ship GA capabilities across the full stack. The economics are documented and repeatable: auto-draft with human review delivers 620% average ROI within 18 months, ticket routing cuts cost-per-resolution from $15-22 to $2, and call summarisation reduces after-call work by 25-40%. Eleven of the eighteen practices tracked sit at the broadest adoption level, with proven deployments across industries and geographies.

Yet the headline numbers mask a persistent execution crisis. Across virtually every practice, the same pattern recurs: 80-90% of organisations have invested in AI tooling, but only 10-12% report mature, fully optimised deployments. Intercom's 2026 survey of 2,400 support professionals captures the ratio precisely — 82% had invested in AI, 10% called their deployment mature. Forrester predicts 2026 will be a year of "gritty foundational work" rather than transformation, with service quality dipping as organisations redesign workflows. The constraint is no longer technology. It is governance, change management, data quality, and the organisational discipline to move from pilot to production. This is a domain where the vanguard — Klarna, Salesforce Agentforce, Bank of America, TeamSystem — operates at a different altitude from the median enterprise.

The structural shape of the domain has stabilised. Human-in-the-loop architectures have won the design argument: auto-draft with human review, response suggestion, and quality monitoring all deliver their strongest results when agents retain final authority. Fully autonomous approaches — autonomous send, LLM-powered conversational chatbots — remain experimental or carry material risk. Stanford-CMU research showing hybrid human-AI teams outperforming autonomous systems by 68.7% has given the augmentation thesis empirical backing. Meanwhile, the legacy layer (scripted chatbots) is in managed decline, with Zendesk sunsetting its scripted bot builder by August 2026. The domain's centre of gravity sits firmly in the augmentation middle: proven, scalable, and limited primarily by how well organisations implement it.

What's New, 2026-04-19 to 2026-05-03

This scan cycle produced no tier or trend changes across the eighteen practices — a signal of structural stability rather than stagnation. The evidence base deepened considerably, with new data points across every practice reinforcing existing positions.

The most significant new evidence came from three directions. First, autonomous resolution economics crystallised: Salesforce Agentforce documented 380,000+ support interactions at 84% autonomous resolution with only 2% escalation, while enterprise benchmarking across 150+ data points confirmed a median deflection rate of 41.2% with cost-per-resolution at $0.62 versus $7.40 for human agents. These numbers give the practice its clearest unit economics to date. Second, the reality of voice AI infrastructure sharpened: Intuit deployed Amazon Connect across 11 countries handling 275M+ annual interactions, but Haptik's analysis of 10M+ production calls documented a critical pilot-to-production gap — latency climbing from 380ms to 900ms+ at scale, CSAT dropping 4 points, and escalation rates tripling. Third, the VoC practice faces a structural headwind: Forrester analysts argued that VoC platforms risk commoditisation as AI agents displace the customer interactions that generate feedback, while survey response rates collapsed from 30% to 18% in six months.
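The deflection economics above can be sanity-checked with a simple blended-cost model. This is a minimal illustrative sketch, not part of the benchmarking methodology: it assumes each deflected contact costs the AI rate, every other contact falls through to a human agent at the human rate, and it ignores escalation handling and tooling overhead.

```python
# Blended cost per contact under partial AI deflection.
# Figures from this scan cycle: 41.2% median deflection,
# $0.62 AI cost-per-resolution, $7.40 human cost-per-resolution.
# The linear blend is an illustrative simplification.

def blended_cost(deflection_rate: float, ai_cost: float, human_cost: float) -> float:
    """Expected cost per contact when a share of contacts is resolved by AI."""
    return deflection_rate * ai_cost + (1 - deflection_rate) * human_cost

cost = blended_cost(0.412, 0.62, 7.40)
savings = 1 - cost / 7.40

print(f"blended cost per contact: ${cost:.2f}")  # ~$4.61
print(f"saving vs. all-human:     {savings:.1%}")  # ~37.7%
```

Even at the median deflection rate, the model implies a roughly one-third reduction in cost per contact; the outsized per-resolution gap ($0.62 vs. $7.40) only translates into outsized total savings as deflection climbs toward the 84% figures the vanguard reports.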

Other notable evidence includes Zendesk's GA of email agents with multi-step procedure execution (extending autonomous resolution from chat to email), insurance claims automation reaching enterprise scale (70-90% straight-through processing across major carriers), and peer-reviewed research confirming knowledge base semantic quality improves LLM accuracy by 17-23 percentage points — evidence that governance matters more than model selection.

Key Tensions

  • The 10% maturity wall. Across agent assist, chatbots, routing, quality monitoring, and proactive engagement, the same figure surfaces: roughly 10% of organisations reach mature, optimised deployment despite 80%+ investment rates. The causes are consistent — data fragmentation, absent governance, insufficient change management, and misaligned success metrics (optimising for deflection rather than resolution). This is not a technology problem. It is an organisational design problem, and it explains why vendor revenue can grow at 150% year-over-year while customer outcomes remain bifurcated. Zendesk's AI ARR hit $500M in early 2026, but only 25% of its customer base has fully integrated AI capabilities.

  • Augmentation vs. autonomy: the design argument is settled, the market argument is not. Human-in-the-loop architectures (auto-draft, response suggestion, quality coaching) consistently outperform autonomous alternatives on reliability, CSAT, and governance metrics. Hiver's survey of 700+ leaders found 90% uncomfortable with AI representing their brand directly to customers. Yet vendor roadmaps and pricing models push toward autonomy — Zendesk's outcome-based pricing at $1.99/resolution and Salesforce Agentforce's per-conversation model create economic incentives to remove humans from the loop. The tension between what works (augmentation) and what vendors sell (autonomy) will define purchasing decisions through 2027.

  • Autonomous resolution's bimodal ROI distribution. Analysis of 600+ deployments shows only 12% clearing 300%+ ROI while 88% operate at or below break-even. The dividing line is not vendor selection or model quality — it is deployment discipline: knowledge base maturity, escalation logic, system integration depth, and continuous governance. In regulated European markets, AI workflows outnumber autonomous agents 5:1, with 78% citing EU AI Act compliance as the primary barrier. Organisations considering autonomous resolution face a binary outcome set with little middle ground.

  • Voice AI's pilot-to-production chasm. Two-thirds of Fortune 500 companies report production voice AI deployments, and Intuit's 275M-interaction deployment proves enterprise scale is achievable. But Haptik's analysis of 10M+ calls reveals that latency climbs from 380ms in pilots to 900ms+ in production, CSAT drops, and escalation rates triple. Taco Bell rolled back its 500-location drive-thru pilot after edge-case fragility overwhelmed the system. The industry median latency of 1.4-1.7 seconds (per Chatarmin's 4M-call analysis) sits far above the 300ms expectation. Voice AI works in high-volume, structured scenarios with mature operations teams; it fails where scope is broad, edge cases are frequent, or integration testing is inadequate.

  • Consumer trust trails organisational confidence by a structural margin. Gartner's survey of 5,728 customers found that 64% prefer that companies not use AI for customer service; UC Berkeley research documented 53-77% negative experience rates; and 73% of consumers switch brands after a poor AI interaction. This gap is not closing. Business leaders operate at 82-91% enthusiasm while consumers remain sceptical, creating a structural risk that faster deployment without quality improvement will accelerate brand damage rather than reduce cost. The 4x failure rate of AI customer service compared to other AI applications (Qualtrics, 20,000+ respondents) underscores that this domain carries higher consumer sensitivity than most.

Top 10 Evidence Items

  1. Scaling Voice AI for Large Enterprises: What Changes After 10 Million Calls (opinion) — The single most important evidence item for the voice AI pilot-to-production chasm: Haptik's production analysis documents latency climbing from 380ms to 900ms+, CSAT dropping 4 points, and escalation rates tripling at scale — precisely the mechanism behind why two-thirds of voice AI deployments disappoint despite impressive pilots. https://www.haptik.ai/blog/scaling-voice-ai-for-large-enterprises

  2. Chatbot Frustration is Real: Hidden Costs and Best Practices (research-paper) — Peer-reviewed UC Berkeley/CMR research quantifying the consumer trust gap: 53-77% negative experience rates and 64% preference against AI service — empirical grounding for the summary's claim that consumer sentiment structurally lags organisational confidence and that faster deployment without quality improvement accelerates brand damage. https://cmr.berkeley.edu/2026/04/chatbot-frustration-is-real-hidden-costs-and-best-practices/

  3. AI Agent Failure Rate: Why 70-95% Fail in Production (industry-report) — Fiddler AI's empirical analysis documenting 70-95% production failure rates for autonomous agents grounds the bimodal ROI distribution narrative — the 88% of deployments operating at or below break-even are experiencing exactly these failure modes, not model inadequacy. https://www.fiddler.ai/blog/ai-agent-failure-rate

  4. Intuit Improves Customer Experience with Amazon Connect (case-study) — Proof that enterprise-scale voice AI is achievable: 275M+ annual interactions across 11 countries deployed in 2 weeks — establishing the upper bound of what mature operations can deliver and making the contrast with Haptik's failure analysis sharper. https://aws.amazon.com/solutions/case-studies/intuit-contact-center-case-study/

  5. What's new in Zendesk: May 2026 (product-ga) — Zendesk's GA of email agents with multi-step procedure execution extends autonomous resolution from chat to email channels, illustrating how vendor capability continues advancing even as most organisations struggle to optimise existing deployments — widening the vanguard/median gap. https://support.zendesk.com/hc/en-us/articles/10609395164442-What-s-new-in-Zendesk-May-2026

  6. AI Workflows Over Autonomous Agents: Utrecht's 2026 Enterprise Strategy (case-study) — Hard evidence for the regulatory barrier in European markets: AI workflows outnumber autonomous agents 5:1, with 78% of enterprises citing EU AI Act compliance as the primary constraint — directly quantifying why the augmentation vs. autonomy tension plays out differently by geography. https://aetherlink.ai/en/blog/ai-workflows-over-autonomous-agents-utrecht-s-2026-enterprise-strategy-utrecht

  7. Why Service AI Keeps Failing — and How to Fix It (case-study) — Diginomica's production deployment analysis names the execution failures behind the 10% maturity wall: knowledge base gaps, poor escalation logic, and integration deficits rather than model quality — the clearest articulation of why the constraint is organisational design, not technology. https://diginomica.com/why-service-ai-keeps-failing-and-how-fix-it

  8. AI Orchestration in Customer Service: Why 81% Still Fail? (adoption-metric) — Moveo AI finding that 81% of customer service teams run AI as disconnected tools with only 1 in 5 reporting integrated systems — the structural mechanism behind the 10% maturity wall, showing fragmented tooling is the most common root cause. https://moveo.ai/blog/ai-orchestration-customer-service

  9. Insight Was Never The Point: Arise, Systems of Action (opinion) — Forrester analyst argument that VoC platforms face structural commoditisation as AI agents displace the customer interactions that generate feedback — the most consequential long-term threat to the VoC practice that the summary's "structural headwind" language points to. https://www.forrester.com/blogs/insight-was-never-the-point-arise-systems-of-action/

  10. Grant Thornton: Insurers See AI Gains but Face Governance Gap (adoption-metric) — Insurance sector data showing 52% revenue growth and 70-90% straight-through claims processing alongside a 40%+ governance gap — the claims automation success story and its shadow in one survey, illustrating that even the domain's highest-performing practice carries unresolved compliance risk. https://www.insurancejournal.com/news/national/2026/04/30/867821.htm