The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that suggests pre-written or generated responses for support agents to select manually during customer interactions. Includes canned-response recommendation and knowledge-article surfacing; distinct from auto-draft, which generates full responses rather than suggesting options.
Agent-assist response suggestion has settled into the infrastructure layer of modern customer operations. The capability (AI that surfaces pre-written or generated reply options for a human agent to review, edit, and send) is now a standard feature across every major contact center and CRM platform. The question facing operations leaders is not whether to adopt it, but how to extract consistent value at scale.
The practice works because it preserves human judgment while accelerating throughput. Agents keep final control over tone and context; the system handles retrieval and drafting. That human-in-the-loop design has proven more reliable in production than fully autonomous alternatives, which continue to show high failure rates on complex interactions. Response suggestion occupies the pragmatic middle ground: efficiency gains are real and measurable (documented deployments report 20-74% faster response times and meaningful reductions in agent burnout), without the trust risks of full automation.
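The human-in-the-loop flow described above, retrieve candidates and let the agent approve or edit, can be sketched in a few lines. Everything here (the toy knowledge base, the bag-of-words ranking, the function names) is an illustrative assumption, not any vendor's API:

```python
from collections import Counter
import math

CANNED_RESPONSES = [
    "You can reset your password from the account settings page.",
    "Refunds are processed within 5 business days of approval.",
    "Our support hours are 9am to 5pm, Monday through Friday.",
]

def _vec(text):
    """Toy bag-of-words vector; production systems use learned embeddings."""
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest(customer_message, k=2):
    """Rank canned responses by similarity to the customer's message."""
    q = _vec(customer_message)
    return sorted(CANNED_RESPONSES, key=lambda r: _cosine(q, _vec(r)), reverse=True)[:k]

def agent_send(suggestions, choice, edit=None):
    """Human checkpoint: the agent picks (and may rewrite) the reply; nothing auto-sends."""
    return edit if edit is not None else suggestions[choice]

top = suggest("how do I reset my password")
print(agent_send(top, 0))  # agent accepts the top suggestion as-is
```

The design point is the last function: the system only proposes; the agent's explicit selection (or rewrite) is what reaches the customer.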
The remaining constraint is organizational, not technical. Governance frameworks, data quality, and implementation discipline determine whether a deployment delivers its projected ROI or stalls at pilot stage.
Every major platform now ships response suggestion as a core feature. Salesforce offers Einstein Service Replies and Reply Recommendations, Zendesk provides Copilot with auto-assist, Genesys has consolidated onto Agent Copilot (retiring its legacy token-based Agent Assist in June 2025), AWS delivers Amazon Q in Connect, and Cisco launched Webex Contact Center AI Assistant with real-time response recommendations in early 2026. The vendor story is one of convergence: platforms are competing on integration depth and governance tooling, not on whether the capability exists.
Deployment evidence backs this confidence with concrete metrics. Telus cut 40 minutes per interaction using Google Cloud's agent-assist tooling; Danfoss automated 80% of transactional decisions; Zendesk customers like Esusu reached 64% email automation with a 10-point CSAT increase; AssemblyAI-based systems achieve 27% AHT reduction and 7.7% concurrency gains. A Five9 survey found 94% of business leaders already using AI to support agents during live interactions. Gartner projects 70% of customer service agents will use AI-assist tools by end of 2026. Comprehensive ROI analysis shows 20-30% AHT reduction, 8-15% FCR improvement, and 30-50% ramp acceleration in production deployments—outcomes concentrated in organizations with sufficient ticket history and governance infrastructure.
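To make the 20-30% AHT-reduction range concrete, projected agent-hour savings are simple arithmetic over contact volume. The volumes below are illustrative, not drawn from the deployments cited above:

```python
def aht_savings_hours(contacts_per_month, baseline_aht_min, aht_reduction_pct):
    """Agent-hours saved per month for a given AHT (average handle time) reduction."""
    saved_min_per_contact = baseline_aht_min * aht_reduction_pct / 100
    return contacts_per_month * saved_min_per_contact / 60

# A hypothetical 50,000-contact/month operation with a 6-minute baseline AHT:
print(aht_savings_hours(50_000, 6.0, 20))  # ~1,000 agent-hours/month at 20% reduction
print(aht_savings_hours(50_000, 6.0, 30))  # ~1,500 agent-hours/month at 30% reduction
```

Even at the low end of the published range, the savings are on the order of several full-time agents' monthly hours, which is why the ROI case closes quickly at scale.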
Those headline numbers mask wide variance in practice and rising implementation barriers. Baseline automation rates range from 50% to 86% depending on tuning effort, and Princeton research shows that 18 months of model capability gains produced zero reliability improvement in production agent systems. Gartner warns that 40% of agentic AI projects will be cancelled by 2027, but response suggestion, with its human-in-the-loop design, has consistently been identified as one of the patterns that works. Zendesk's April 2026 deprecation of its autonomous "AI agents—Essential" tier signals that broader agent-automation bundles face ROI challenges despite organizational pressure to deploy. A Qualtrics survey of 20,000+ consumers in early 2026 documents AI customer service failing at 4x the rate of other AI applications, with 19% reporting zero benefit and widespread context-loss and hallucination failures; response suggestion's human-approval checkpoint avoids these failure modes.

However, April 2026 reporting reveals governance gaps: enterprises are deploying response suggestion and agent coaching at scale but lack control frameworks for hallucination detection, bias auditing, and data leakage, creating trust debt that constrains adoption in regulated industries. Cold-start problems (a 1,000-ticket minimum before suggestions work) and tiered licensing ($50/agent add-ons) further concentrate benefits in large organizations. The gap is not capability but operational discipline: governance, data quality, observability, and change management remain the binding constraints on scaling.
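The cold-start constraint above amounts to a readiness gate: suggestions stay disabled until the ticket history clears a minimum. The 1,000-ticket floor comes from the reporting cited here; the per-intent minimum is an added assumption for illustration:

```python
MIN_TICKET_HISTORY = 1_000      # floor cited in the adoption-barrier reporting
MIN_EXAMPLES_PER_INTENT = 25    # hypothetical per-intent minimum, not from the source

def suggestions_ready(ticket_count, examples_by_intent):
    """Gate response suggestion behind sufficient training history."""
    if ticket_count < MIN_TICKET_HISTORY:
        return False
    # Every intent we plan to serve also needs its own base of labelled examples.
    return all(n >= MIN_EXAMPLES_PER_INTENT for n in examples_by_intent.values())

print(suggestions_ready(500, {"billing": 100}))                   # False: too little history
print(suggestions_ready(2_000, {"billing": 100, "returns": 10}))  # False: thin intent
print(suggestions_ready(2_000, {"billing": 100, "returns": 40}))  # True
```

A gate like this is why benefits concentrate in large organizations: smaller teams may take months to accumulate enough history for suggestions to switch on at all.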
March 2026 surveys document the organizational readiness challenge. A Gartner study of 321 service leaders found 91% under pressure from executive leadership to implement AI, and 84% planning to reshape agent roles and add new skills, signaling a realistic understanding that tool deployment without organizational redesign fails. An Intercom survey of 2,470 customer service professionals found 82% invested in AI in 2025 and 87% planning further investment in 2026, but only 10% self-report mature deployments where systems work at scale. Among those mature teams, success is measured by first-contact resolution, repeat-contact rates, and agent confidence, not by tool-adoption metrics. This concentration of value in mature, governance-disciplined organizations validates response suggestion as a good-practice tier: proven and widely available, but requiring organizational competence to realize its benefits.
— Comprehensive ROI framework for real-time agent assist shows 20-30% AHT reduction, 8-15% FCR improvement, 30-50% ramp acceleration, 90%+ compliance miss elimination, distinguishing response suggestion from autonomous agents.
— Critical analysis identifies response suggestion adoption barriers: cold-start problem (1,000 tickets minimum), $50/agent tiered licensing, slow intent model updates (2 weeks per intent), limiting small-enterprise adoption despite positive ROI case studies.
— Zendesk auto-assist feature expansion to employee service use cases with event logging for audit and acceptance tracking, showing mature production deployment patterns and governance maturity.
— Real-time agent assistance with live knowledge article surfacing and response suggestions achieves 27% AHT reduction and 7.7% increase in concurrent conversations per agent in production deployments.
— Vector-based response suggestion system deployed in production achieves one-click agent confirmation with feedback loops; its contextual generation pipeline reduced the 30% error rate observed with manual template selection.
— Enterprises deploying agent-assist and coaching at scale lack governance frameworks for hallucination, bias, and data leakage; Cisco VP emphasizes 'trust is architectural' as control-failure barrier to mature adoption.
— Kore.ai documents Agentic Next Best Action suggestion enhancements with agent feedback mechanisms and Salesforce/Five9 integrations, showing iterative investment in response suggestion as core capability.
— Arahi guide cites Forrester metrics: teams using agent assist resolve tickets 34% faster with 22% higher CSAT; compares 8 platforms spanning response suggestions, NBA, and autonomous agents; positions response suggestion as adoption accelerant.
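The one-click confirmation and feedback-loop pattern noted in the entries above can be sketched as a retrieval score nudged by logged acceptances. The template set, the word-overlap scoring, and the feedback weight are all illustrative assumptions, not any vendor's implementation:

```python
from collections import Counter

TEMPLATES = {
    "reset": "You can reset your password from account settings.",
    "refund": "We will refund your payment within 5 business days.",
}
acceptances = Counter()  # acceptance log, doubling as an audit trail

def _sim(a, b):
    """Toy word-overlap (Jaccard) similarity; real systems use vector embeddings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def best_template(message, feedback_weight=0.01):
    """Rank templates by similarity plus a small boost for historically accepted ones."""
    return max(TEMPLATES, key=lambda key: _sim(message, TEMPLATES[key])
               + feedback_weight * acceptances[key])

def confirm(template_key):
    """One-click agent confirmation: log the acceptance to inform future ranking."""
    acceptances[template_key] += 1

choice = best_template("please refund my payment")
confirm(choice)
print(choice, acceptances[choice])  # refund 1
```

Keeping the feedback weight small preserves relevance as the primary signal while still letting agent behavior steer ranking over time, and the same acceptance log serves the audit and acceptance-tracking needs the governance reporting calls for.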