Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain, plotted on an axis running from Bleeding Edge to Established.

Agent assist — response suggestion

GOOD PRACTICE

TRAJECTORY

Advancing

AI that suggests pre-written or generated responses for support agents to select manually during customer interactions. Includes canned-response recommendation and knowledge-article surfacing; distinct from auto-draft, which generates full responses rather than suggesting options.

OVERVIEW

Agent-assist response suggestion has settled into the infrastructure layer of modern customer operations. The capability (AI that surfaces pre-written or generated reply options for a human agent to review, edit, and send) is now a standard feature across every major contact center and CRM platform. The question facing operations leaders is not whether to adopt it, but how to extract consistent value at scale.

The practice works because it preserves human judgment while accelerating throughput. Agents keep final control over tone and context; the system handles retrieval and drafting. That human-in-the-loop design has proven more reliable in production than fully autonomous alternatives, which continue to show high failure rates on complex interactions. Response suggestion occupies the pragmatic middle ground where efficiency gains are real and measurable (documented deployments report 20-74% faster response times and meaningful reductions in agent burnout) without the trust risks of full automation.
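The human-in-the-loop pattern described above can be reduced to a small workflow: rank candidate replies, surface them to the agent, and let the agent accept, edit, or reject before anything reaches the customer. The sketch below is illustrative only; the `CannedResponse` type, the keyword-overlap scoring, and the logging shape are assumptions for this example, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class CannedResponse:
    id: str
    keywords: set      # trigger terms for this template
    text: str

# Toy response library; production systems draw on ticket history and KB articles.
LIBRARY = [
    CannedResponse("refund", {"refund", "money", "charge"},
                   "I can help with that refund. Could you confirm the order number?"),
    CannedResponse("shipping", {"shipping", "delivery", "track"},
                   "Let me check your shipment status for you."),
    CannedResponse("password", {"password", "login", "reset"},
                   "You can reset your password from the account settings page."),
]

def suggest(message: str, k: int = 2) -> list:
    """Rank canned responses by keyword overlap with the customer message."""
    tokens = set(message.lower().split())
    scored = [(len(r.keywords & tokens), r) for r in LIBRARY]
    scored = [pair for pair in scored if pair[0] > 0]   # drop non-matches
    scored.sort(key=lambda pair: -pair[0])
    return [r for _, r in scored[:k]]

def handle(message, agent_review):
    """Human-in-the-loop step: the agent callback reviews, edits, or rejects.

    Returns the reply the agent chose plus an audit record of what was
    suggested and what was accepted (for acceptance-rate tracking).
    """
    candidates = suggest(message)
    reply, accepted_id = agent_review(message, candidates)
    audit = {"suggested": [r.id for r in candidates], "accepted": accepted_id}
    return reply, audit
```

The key design property is that `handle` never sends anything itself: the agent callback sits between suggestion and delivery, which is the checkpoint the paragraph above credits for avoiding autonomous-agent failure modes.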

The remaining constraint is organizational, not technical. Governance frameworks, data quality, and implementation discipline determine whether a deployment delivers its projected ROI or stalls at pilot stage.

CURRENT LANDSCAPE

Every major platform now ships response suggestion as a core feature. Salesforce offers Einstein Service Replies and Reply Recommendations, Zendesk provides Copilot with auto-assist, Genesys has consolidated onto Agent Copilot (retiring its legacy token-based Agent Assist in June 2025), AWS delivers Amazon Q in Connect, and Cisco launched Webex Contact Center AI Assistant with real-time response recommendations in early 2026. The vendor story is one of convergence: platforms are competing on integration depth and governance tooling, not on whether the capability exists.

Deployment evidence supports the confidence with concrete metrics. Telus cut 40 minutes per interaction through Google Cloud's agent-assist tooling; Danfoss automated 80% of transactional decisions; Zendesk customers like Esusu reached 64% email automation with a 10-point CSAT increase; AssemblyAI-based systems achieve 27% AHT reduction and 7.7% concurrency gains. A Five9 survey found 94% of business leaders already using AI to support agents during live interactions. Gartner projects 70% of customer service agents will use AI-assist tools by end of 2026. Comprehensive ROI analysis shows 20-30% AHT reduction, 8-15% FCR improvement, and 30-50% ramp acceleration in production deployments—outcomes concentrated in organizations with sufficient ticket history and governance infrastructure.

Those headline numbers mask wide variance in practice and rising implementation barriers. Baseline automation rates range from 50% to 86% depending on tuning effort, and Princeton research shows that 18 months of model capability gains produced zero reliability improvement in production agent systems. Gartner warns that 40% of agentic AI projects will be cancelled by 2027, but response suggestion, with its human-in-the-loop design, has consistently been identified as one of the patterns that works. Zendesk's April 2026 deprecation of its autonomous "AI agents—Essential" tier signals that broader agent automation bundles face ROI challenges despite organizational pressure to deploy.

A Qualtrics survey of 20,000+ consumers in early 2026 documents AI customer service failing at 4x the rate of other AI applications, with 19% reporting zero benefit and widespread context-loss and hallucination failures. Response suggestion's human-approval checkpoint avoids these failure modes. However, April 2026 reporting reveals governance gaps: enterprises are deploying response suggestion and agent coaching at scale but lack control frameworks for hallucination detection, bias auditing, and data leakage, creating trust debt that constrains adoption in regulated industries.

Cold-start problems (a minimum of roughly 1,000 tickets before suggestions work) and tiered licensing ($50/agent add-ons) further concentrate benefits in large organizations. The gap is not capability but operational discipline: governance, data quality, observability, and change management remain the binding constraints on scaling.

March 2026 surveys document the organizational readiness challenge. A Gartner study of 321 service leaders found 91% under pressure from executive leadership to implement AI, and 84% planning to reshape agent roles and add new skills, signaling a realistic understanding that tool deployment without organizational redesign fails. An Intercom survey of 2,470 customer service professionals found 82% invested in AI in 2025 and 87% planning further investment in 2026, but only 10% self-report mature deployments where systems work at scale. Among those mature teams, success is measured by first-contact resolution, repeat-contact rates, and agent confidence, not by tool adoption metrics. This concentration of value in mature, governance-disciplined organizations validates response suggestion's good-practice tier: proven and widely available, but requiring organizational competence to realize its benefits.
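The outcome metrics that mature teams track (first-contact resolution, repeat-contact rate) and the tool metric they deliberately de-emphasize (suggestion acceptance) are all simple ratios over ticket and event logs. A hedged sketch, assuming a minimal log schema invented for this example:

```python
def first_contact_resolution(tickets):
    """Share of tickets resolved on the first contact (FCR)."""
    resolved_first = sum(1 for t in tickets if t["contacts"] == 1 and t["resolved"])
    return resolved_first / len(tickets)

def repeat_contact_rate(tickets):
    """Share of tickets that required more than one customer contact."""
    return sum(1 for t in tickets if t["contacts"] > 1) / len(tickets)

def suggestion_acceptance_rate(events):
    """Accepted suggestions over suggestions shown.

    This is a tool-adoption metric, useful for tuning the suggester,
    but it is not an outcome metric: a high acceptance rate can coexist
    with poor FCR if the accepted replies do not actually resolve issues.
    """
    shown = sum(1 for e in events if e["type"] == "suggestion_shown")
    accepted = sum(1 for e in events if e["type"] == "suggestion_accepted")
    return accepted / shown if shown else 0.0
```

Reporting FCR and repeat-contact rate alongside acceptance rate keeps the distinction the surveys draw, between deploying a tool and improving service outcomes, visible in the dashboard itself.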

TIER HISTORY

Research: Jan 2019 → Jan 2019
Bleeding Edge: Jan 2019 → Jan 2022
Leading Edge: Jan 2022 → Jul 2024
Good Practice: Jul 2024 → present

EVIDENCE (90)

— Comprehensive ROI framework for real-time agent assist shows 20-30% AHT reduction, 8-15% FCR improvement, 30-50% ramp acceleration, 90%+ compliance miss elimination, distinguishing response suggestion from autonomous agents.

— Critical analysis identifies response suggestion adoption barriers: cold-start problem (1,000 tickets minimum), $50/agent tiered licensing, slow intent model updates (2 weeks per intent), limiting small-enterprise adoption despite positive ROI case studies.

— Zendesk auto-assist feature expansion to employee service use cases with event logging for audit and acceptance tracking, showing mature production deployment patterns and governance maturity.

— Real-time agent assistance with live knowledge article surfacing and response suggestions achieves 27% AHT reduction and 7.7% increase in concurrent conversations per agent in production deployments.

— Vector-based response suggestion system deployed in production achieving one-click agent confirmation with feedback loops; specific metrics: 30% error rate in manual template selection reduced through contextual generation pipeline.

— Enterprises deploying agent-assist and coaching at scale lack governance frameworks for hallucination, bias, and data leakage; Cisco VP emphasizes 'trust is architectural' as control-failure barrier to mature adoption.

Agent AI Release Notes (Kore.ai Docs, Product Launches)

— Kore.ai documents Agentic Next Best Action suggestion enhancements with agent feedback mechanisms and Salesforce/Five9 integrations, showing iterative investment in response suggestion as core capability.

— Arahi guide cites Forrester metrics: teams using agent assist resolve tickets 34% faster with 22% higher CSAT; compares 8 platforms spanning response suggestions, NBA, and autonomous agents; positions response suggestion as adoption accelerant.
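The evidence above mentions vector-based suggestion systems with one-click agent confirmation and feedback loops. The sketch below shows how such a loop can work in principle: similarity retrieval over templates, with each confirmation boosting a template's future rank. Everything here is an illustrative assumption; the `FeedbackRanker` class, the bag-of-words vectors standing in for a real embedding model, and the Laplace-smoothed acceptance boost are this example's inventions, not a description of any cited system.

```python
import math
from collections import Counter, defaultdict

class FeedbackRanker:
    """Rank reply templates by text similarity, boosted by past acceptance."""

    def __init__(self, templates):
        self.templates = templates            # template id -> template text
        self.shown = defaultdict(int)         # how often each template was suggested
        self.accepted = defaultdict(int)      # how often the agent confirmed it

    @staticmethod
    def _vec(text):
        # Toy bag-of-words vector; production systems use trained embeddings.
        return Counter(text.lower().split())

    @staticmethod
    def _cos(a, b):
        dot = sum(a[w] * b.get(w, 0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rank(self, message, k=3):
        """Return up to k template ids, best first, and record impressions."""
        q = self._vec(message)

        def score(tid):
            sim = self._cos(q, self._vec(self.templates[tid]))
            # Laplace-smoothed acceptance rate so unseen templates start at 0.5.
            acc = (self.accepted[tid] + 1) / (self.shown[tid] + 2)
            return sim * acc

        ranked = sorted(self.templates, key=score, reverse=True)[:k]
        for tid in ranked:
            self.shown[tid] += 1
        return ranked

    def confirm(self, tid):
        """One-click agent confirmation: feeds back into future ranking."""
        self.accepted[tid] += 1
```

The feedback term is what distinguishes this pattern from static template matching: frequently rejected suggestions sink over time without any model retraining, which is one plausible reading of the error-rate reductions the evidence reports.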

HISTORY

  • 2019: Salesforce Einstein reply suggestions launched within Service Cloud (March). Canned response best practices documented by Crisp and others; agent experience constraints identified as primary adoption blocker (Zendesk/Gartner survey: 66% of agents report negative tool experience). Next Best Action recommendations emerging as differentiation in major CRM platforms.
  • 2020: Microsoft Dynamics 365 Customer Service achieved general availability for agent-suggestion features (October); IBM published research on conversational agent assist (CAIRAA) combining response and document recommendations. Adoption metrics showed AI chatbot usage doubling but organizational barriers (workflow fit, audit logging, configuration control) limiting real-world deployment velocity.
  • 2021: Agent-assist response suggestion became standard across all major platforms (Salesforce, Microsoft, Zendesk, Genesys). Google Cloud added Dialogflow Agent Assist SDKs and AWS released reference implementations via live call analytics. Emerging vendors (Haptik) launched competitive products, shifting focus from capability maturity to organizational integration and workflow alignment. By year-end, response suggestion was table stakes rather than differentiator.
  • 2022-H1: Salesforce published engineering details on Einstein Reply Recommendations production deployment using TOD-BERT, a transformer model trained on 60+ task-oriented dialogue domains (April). Vendor ecosystem consolidated around standard agent-assist capabilities including real-time response suggestion, with Sprinklr and others documenting mature NLP-based implementations claiming FCR and CSAT gains. Response suggestion had fully transitioned from emerging capability to foundational feature across support platforms.
  • 2022-H2: Vendor ecosystem matured further with documentation and active development across major platforms: Genesys formalized Agent Assist with real-time knowledge suggestions and confidence-rated recommendations; Google Cloud continued active development of agent-assist-integrations open-source tooling; emerging vendors like Konverso documented 3-7% productivity gains from deployed installations. By year-end 2022, agent-assist response suggestion remained a foundational capability across support platforms with focus shifting toward operational metrics and organizational change management.
  • 2023-H2: Agent-assist response suggestion continues as a standard feature across major platforms, with production deployments reported in contact centers (Genesys community practitioners reporting real-world use in multi-queue environments with known technical gaps). Ecosystem maturation accelerates as vendors consolidate integrations: Genesys retires Google CCAI Agent Assist integration (EOL August 2024), shifting customers to native platforms and third-party replacements. The capability remains proven and widely adopted, with evolution focused on reliability, workflow integration, and organizational scaling rather than foundational capability development.
  • 2024-Q1: Vendor ecosystem consolidates further with Genesys formally deprecating Google CCAI integration in favor of native capabilities, reflecting competitive differentiation and platform autonomy. Production deployments face reliability challenges, as evidenced by Google Cloud global outages affecting Agent Assist across multiple regions. Market sentiment shows strategic priority (53% of contact center leaders prioritize AI for CX automation) but persistent implementation gaps: only 41% are satisfied with current AI solutions and 46% with third-party integrations. Salesforce, Microsoft, Cognigy, and emerging vendors continue active investment. The capability remains foundational, with evolution focused on reliability, ecosystem integration depth, and organizational change management rather than new foundational capability development.
  • 2024-Q3: AWS launches Amazon Q in Connect (July GA) providing real-time suggested responses and step-by-step guides for agents. Salesforce, Zendesk, and Genesys continue expanding mature response suggestion capabilities with named customer deployments (Catapult Sports, Rotho, telecommunications and e-commerce verticals) reporting 3-82% productivity gains. IBM State of Salesforce survey (1,191 customers) shows 69% adoption of native Salesforce AI capabilities. Market consolidation continues with Genesys completing Google CCAI deprecation (EOL August 2024). Persistent adoption barriers include integration complexity, configuration overhead, organizational change management, and AI system brittleness. The practice remains proven and widely deployed but evolution focused on reliability and operational maturity rather than new capability development.
  • 2024-Q4: Vendor ecosystem consolidates with Genesys deprecating Agent Assist via tokens in favor of unified Agent Copilot (December 2024). Enterprise customer deployments demonstrate measurable Q4 gains: Zendesk (Esusu 64% email automation, 10-point CSAT increase; Rotho tripled productivity); Salesforce internal pilot of Einstein Copilot expands from 100 to thousands of users with 80% query success rate; Google Cloud ecosystem partner agents (Bain for SEB) report 15% efficiency gains. However, BCG research shows 74% of companies struggle to move AI pilots to production value. Practitioner assessments document 15-20% error rate thresholds as intolerable in production, highlighting reliability and validation requirements over autonomous operation. Practice remains proven and widely adopted but limited by implementation and organizational scaling barriers rather than capability maturity.
  • 2025-Q1: New customer deployments confirm continued adoption momentum: Google Cloud's TTEC deployment achieved 40% escalation reduction and 11% AHT improvement; Zendesk's Freedom Furniture case showed 92% faster resolution and 17% CSAT gain. Salesforce released Spring '25 GA enhancements for Service Replies customization. However, industry data reveals persistent scaling barriers: MIT 2025 report shows only 5% of GenAI task-specific tools reach production; Wittify data shows 88% of organizations use AI but only 30% scale effectively, with just 10% of organizations deploying agents at scale in any single function. Practitioner analysis highlights hidden deployment costs (full-time maintenance burden for simple automation) defeating ROI expectations. Practice tier remains good-practice; widespread platform availability and deployment evidence support tier stability, but adoption velocity constrained by organizational scaling and ROI realization barriers rather than technical capability gaps.
  • 2025-Q2: Enterprise deployment and adoption metrics accelerate: Five9 survey reports 94% of business leaders use AI to support agents live during customer interactions, signaling mainstream penetration. Zendesk Copilot updated product page (June 2025) documents continued capability investment with customer metrics showing up to 120 tickets per agent shift. However, reliability concerns persist: survey data documents 73% of AI agent deployments failing to meet reliability expectations due to infrastructure gaps (vector database failures, embedding drift, observability challenges). Critical consulting analyses highlight persistent ROI disconnect in agentic AI, citing inflated expectations, masked costs, data readiness hurdles, and infrastructure complexity as primary barriers to value realization. Practice tier remains good-practice with broad platform availability and strong adoption metrics, but implementation velocity constrained by infrastructure maturity, reliability requirements, and ROI realization barriers.
  • 2025-Q3: Vendor consolidation continues with Genesys deprecating Agent Assist via tokens (EOL June 2025) in favor of Agent Copilot. However, scaling barriers harden: Gartner warns 40% of agentic AI projects will be cancelled by 2027; PwC data shows 79% of organizations deploying AI agents but most pilots fail at scale; research documents 70% of AI agents struggle with standard tasks. Industry shift toward organizational readiness frameworks: Google playbooks and consulting analyses identify governance, automation, and access as critical success factors. Practice remains table-stakes and universally deployed, but organizational scaling and infrastructure readiness increasingly recognized as fundamental constraints on value realization rather than capability gaps.
  • 2025-Q4: Adoption momentum accelerates with industry metrics projecting AI handling 95% of support interactions by end of 2026. Deployment case studies confirm sustained efficiency gains: Telus saves 40 minutes per interaction, Danfoss automates 80% of transactional decisions. Zendesk metrics framework (December 2025) formalizes adoption measurement (acceptance rates, satisfaction tracking). Practitioner analyses identify response suggestion and human-in-the-loop systems as successful patterns despite broader agentic AI scaling challenges. However, baseline effectiveness varies significantly (50-86% resolution depending on tuning), and success factors remain governance, data quality, and organizational readiness rather than capability maturity. Practice tier remains good-practice; capability stability confirmed but value realization constrained by organizational and infrastructure factors rather than technical development needs.
  • 2026-Jan: Zendesk released January 2026 product enhancements including auto-assist procedure version tracking with detailed performance metrics and group-level permissions for AI features. AgentOps case studies show Klarna handling 2.3M conversations (the equivalent of 700 FTEs) with 2-minute resolution versus the prior 11 minutes, though quality concerns led to later workforce retraining. Deployment momentum continues amid persistent implementation barriers: Gartner forecasts 70% of agents will use AI-assist tools by year-end 2026, but 40% of agentic AI projects are predicted to fail or be cancelled by 2027, and reliability gaps (95%+ uptime targets remain unachievable in practice) persist as production risks. Practitioner assessments identify response suggestion and human-in-the-loop systems as proven patterns despite broader agentic autonomy failures.
  • 2026-Feb: Vendor ecosystem advances with Cisco Webex Contact Center launching AI Assistant Real-Time Assist for voice and digital channels. Zendesk continues feature investment with copilot guides and Salesforce releases Einstein Service Replies email enhancements. However, critical research surfaces reliability plateaus: Princeton data shows 18 months of AI model capability gains yielded zero reliability improvements in production agents, and analyst predictions confirm 40% of agentic AI projects will be cancelled by 2027 due to ROI challenges. Response suggestion systems with human-in-the-loop approval remain identified as pragmatic, proven pattern despite broader agentic AI scaling limitations.
  • 2026-Mar to Apr: Market maturity and organizational readiness signals document the practice's operational reality. Gartner survey of 321 service leaders (March 2026) confirms 91% under pressure to implement AI but 84% planning to reshape agent roles and add new skills—realistic organizational redesign beyond tool adoption. Intercom survey of 2,470 professionals (March 2026) finds 82% invested in AI in 2025 and 87% planning 2026 investment, but only 10% achieving mature deployments; among mature teams, success metrics are FCR, repeat contact, and agent confidence, not tool usage. Forrester Wave Q1 2026 names customer deployments with specific outcomes: Big Bus Tours 20% resolution-time improvement, Satair 40% ticket-handling reduction. Conectys documents market growth to $7.08B by 2030 (23.8% CAGR), positioning response suggestions within broader agent-assist trend. Critical consumer research (Qualtrics 20,000+ respondent survey, March 2026) shows AI customer service fails at 4x rate of other AI applications; 19% of consumers saw zero benefit; context-loss and hallucination are primary failure modes. Zendesk's April 2026 deprecation of autonomous "AI agents—Essential" tier (removal December 2026) confirms market pullback from overpromising autonomous capabilities and narrows platform investment focus toward augmentation-first patterns like response suggestion. Response suggestion tier remains stable as good-practice: universally available, proven in deployments, but value realization constrained by organizational readiness, governance, and infrastructure maturity rather than technological capability.
  • 2026-May: ROI metrics for response suggestion consolidate with greater precision: Balto's production analysis across enterprise deployments confirms 20-30% AHT reduction, 8-15% FCR improvement, and 30-50% new-hire ramp acceleration as repeatable outcomes. AssemblyAI-based real-time assist deployments document 27% AHT reduction and 7.7% increase in concurrent conversations per agent. Vector-based suggestion systems in production reduce manual template selection errors by 30%. Implementation barriers remain documented: Zendesk's cold-start problem (1,000 ticket minimum before suggestions activate), tiered licensing ($50/agent), and 2-week intent model update cycles concentrate benefits at larger organizations. Governance gaps are emerging as a structural risk: NoJitter reporting notes enterprises deploying response suggestion at scale while lacking hallucination detection, bias audit, and data leakage controls—a trust debt accumulating fastest in regulated sectors.