The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that automatically drafts full responses for agents to review, edit, and send during customer interactions. Includes tone-matched response generation and policy-aware drafting; distinct from response suggestion, which offers options rather than complete drafts.
Auto-draft with human review has become the proven pattern for AI in customer support. The approach, in which the AI generates a full, tone-matched response draft that the agent edits and sends, is now a GA feature across tier-1 platforms, with documented ROI at enterprise scale. The question for most organisations is how to roll it out effectively, not whether it works.
What makes auto-draft durable is what it chose not to automate. Fully autonomous AI agents face high failure rates and mounting governance concerns; auto-draft sidesteps these by keeping the human in the approval loop. That architectural choice, once seen as a concession, has proven to be the practice's competitive advantage. Deployments that preserve agent judgment deliver measurable gains in handle time, resolution rate, and satisfaction. Those that skip the review gate face spiralling incident rates and stalled scaling.
Zendesk and Intercom ship auto-draft as standard platform infrastructure, not as add-on pilots. Zendesk expanded its AI writing tools (tone controls, expand, simplify) to Professional-tier plans in early 2026, while its AI-generated procedure drafts give agents review-ready content at scale. March 2026 releases introduced auto-assist event logging for full audit trails and pre-approved action workflows, signalling that governance layers are now native to the product rather than bolted on. Intercom publishes a formal automation-rate KPI for its Fin agent, treating draft-and-review throughput as a production metric.
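An audit trail of the kind described here amounts to an append-only record of which draft an agent saw and what they did with it. The schema below is a hypothetical sketch, not Zendesk's or Intercom's actual event format.

```python
import hashlib
import json
from datetime import datetime, timezone


def auto_assist_event(ticket_id: str, action: str, agent_id: str,
                      approved: bool, model_draft: str) -> str:
    """One append-only audit record: which AI draft an agent saw
    and what decision they took on it."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "ticket_id": ticket_id,
        "action": action,      # e.g. "draft_shown", "draft_sent", "draft_rejected"
        "agent_id": agent_id,
        "approved": approved,
        # Store a fingerprint rather than the raw text, so the audit log
        # itself carries no customer content.
        "draft_sha256": hashlib.sha256(model_draft.encode()).hexdigest(),
    })
```

Logging a hash rather than the draft body is one common design choice: it lets auditors prove which draft was approved without duplicating customer data into the log store.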
Deployment is approaching mainstream. Metrigy research across 656 companies (August-September 2025) shows 55% have deployed agent assist and a further 39% are planning or evaluating it (94% combined engagement), with two-thirds reporting improvements in agent quality and 59% noting increased sales. Real-world ROI evidence strengthens the case where implementation is mature. Nucleus Research documents measurable impact across 30+ Zendesk customers in production, showing resolution-performance and effort-reduction gains with human-supervised workflows at scale. Google's Agent Assist deployment guide documents 10-15% handle-time reductions; named enterprise customers report results ranging from 64% email automation with CSAT gains to 92% faster resolution.
However, deployment-execution gaps widen at scale. Intercom's 2026 survey of 2,400 service professionals found that 82% had invested in AI for customer service, yet only 10% rated their deployment as mature, and the gap in quality outcomes between mature and early-stage teams was stark: 87% vs. 43% reporting improvements. Qualtrics research across 20,000+ consumers in 14 countries finds AI customer service failing at four times the rate of other AI applications, with context loss and hallucination the common failure modes. This disparity validates auto-draft's human-review gate. Gravitee's survey of 900+ executives shows 81% had deployed AI agents but only 14% had full security approval, and 88% reported incidents; the practice's governance advantage is structural, because human review prevents the silent failures that plague autonomous systems. The scaling challenge is now operational: change management, agent training, workflow integration, and governance rigour, not proof of concept.
— Agent assist ROI framework quantifies direct cost savings (20% AHT reduction), quality improvements, and throughput gains; positioned as proven ROI category.
— Customer service automation leads with 620% average ROI within 18 months; AI agents handling Tier 1/2 requests resolve 78% without escalation in production.
— Survey of 700+ leaders: 90% uncomfortable with AI representing brand directly to customers, validating human review gate as essential control for adoption.
— Hybrid human-AI escalation model achieves 4.25/5 CSAT (vs. 4.1 for pure AI), within 0.05 points of the human-only benchmark of 4.3; validates the auto-draft architecture.
— Sinch deployment: AI Copilot for agents combined with autonomous agents achieved 47% faster resolution and nearly doubled self-service automation, from 17% to 32%.
— LiveAgent's 'draft & approve' mode demonstrates standard GA implementation: AI generates response as private note, agent reviews and edits before sending.
— Customer service agents achieve 9x cost reduction per task, 8.7 hours saved weekly, 4.2x productivity multiplier, 4.1-month payback period across deployments.
— 40-person SaaS team expected 60% automation but achieved 23% after six months; knowledge-base indexing limits and intent classification gaps identified as root causes, illustrating steep configuration and tuning cliff in auto-draft deployments.
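The per-agent economics quoted in the evidence above (8.7 hours saved weekly, 4.1-month payback) can be reconstructed with a back-of-envelope calculation. The hourly cost, licence price, and rollout cost below are illustrative assumptions, not figures from any of the cited studies.

```python
# Figures from the evidence list above:
HOURS_SAVED_PER_WEEK = 8.7

# ASSUMPTIONS for illustration only:
LOADED_HOURLY_COST = 35.0        # fully loaded agent cost, USD/hour
LICENSE_PER_AGENT_MONTH = 50.0   # per-seat AI add-on price, USD
ROLLOUT_COST_PER_AGENT = 5000.0  # one-off training + integration, USD

WEEKS_PER_MONTH = 52 / 12
monthly_savings = HOURS_SAVED_PER_WEEK * WEEKS_PER_MONTH * LOADED_HOURLY_COST
net_monthly = monthly_savings - LICENSE_PER_AGENT_MONTH

# Months to recover the one-off rollout cost per agent
payback_months = ROLLOUT_COST_PER_AGENT / net_monthly
```

Under these assumed inputs the payback lands in the ~4-month range; the real lesson of the 23%-vs-60% case above is that the hours-saved input, not the cost side, is where deployments miss their model.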