The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that automatically summarises support calls and generates disposition codes and structured notes for CRM entry. Includes after-call work automation and key moment extraction; distinct from call transcription in sales, which focuses on sales conversations rather than support calls.
Call summarisation and disposition has graduated from experimental feature to proven capability. Every major contact centre platform now ships AI-generated post-call summaries and disposition codes as GA functionality, and early adopters report 25-40% reductions in handle time once they invest in tuning. The practice replaces the manual after-call work agents perform on every interaction — typing summary notes, selecting disposition codes, updating CRM records — with models that extract the customer's issue, resolution, action items, and classification codes automatically. The question facing most organisations is no longer whether the technology works but how much customisation it demands. Out-of-the-box accuracy sits well below production-grade thresholds, and bridging that gap requires structured validation workflows, domain-specific fine-tuning, and ongoing human review. Organisations that make that investment see real returns; those expecting plug-and-play results do not.
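The extraction step described above can be made concrete as a contract between the model and the CRM. The sketch below is illustrative only: the schema fields, the disposition taxonomy, and the function names are assumptions for demonstration, not any vendor's API. The key idea is that the model is asked for machine-parseable output, and that output is validated before anything is written to a customer record.

```python
import json
from dataclasses import dataclass

# Hypothetical disposition taxonomy -- a real deployment maps to its CRM's own codes.
DISPOSITION_CODES = {"BILLING_DISPUTE", "TECH_ISSUE_RESOLVED", "CANCELLATION_SAVE", "ESCALATED"}

@dataclass
class CallSummary:
    issue: str                # what the customer called about
    resolution: str           # how the call ended
    action_items: list        # follow-ups extracted from the conversation
    disposition_code: str     # classification code for CRM entry

def build_prompt(transcript: str) -> str:
    """Ask the model for a structured summary rather than free text."""
    return (
        "Summarise this support call as JSON with keys issue, resolution, "
        "action_items (list of strings), and disposition_code "
        f"(one of {sorted(DISPOSITION_CODES)}).\n\nTranscript:\n{transcript}"
    )

def parse_summary(raw: str) -> CallSummary:
    """Validate the model's output before it touches the CRM.

    Rejects unknown disposition codes instead of silently writing them,
    which is where much of the 'customisation' effort the text describes goes.
    """
    data = json.loads(raw)
    summary = CallSummary(**data)
    if summary.disposition_code not in DISPOSITION_CODES:
        raise ValueError(f"unknown disposition: {summary.disposition_code}")
    return summary
```

The validation step is deliberately strict: a summary that fails to parse, or that invents a disposition code, is exactly the kind of output that should fall back to manual after-call work rather than enter the CRM unreviewed.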
The vendor ecosystem has reached full feature parity. Microsoft, AWS, Zendesk, ServiceNow, Oracle, Talkdesk, Webex, CloudTalk, and Dialpad all offer GA summarisation capabilities, often bundled into platform pricing rather than sold as add-ons. Recent releases reflect refinement rather than novelty: Microsoft extended Copilot with row-level summarisation in Dynamics 365 Customer Service, Cisco Webex added mid-call transfer summaries with API access, and CloudTalk shipped AI tagging with direct CRM auto-entry. Industry estimates put contact centre adoption above 60%, with documented cost reductions of 35% at deployed sites.
Those headline numbers obscure a persistent gap between feature availability and production-grade accuracy. Microsoft's own Azure AI documentation now explicitly flags dialectal variance, abstractive hallucination, and degraded performance on under-represented conversation types as known limitations. Independent testing tells a similar story: raw AI summaries achieve 63-89% accuracy, while deployments with structured human-review workflows reach 94-96%. Speaker diarisation accuracy drops nearly 30 percentage points on hybrid calls, and domain jargon remains a blind spot without custom vocabulary tuning. Context reconstruction on escalated tickets still costs an estimated $200-500 per incident. The pattern is clear: organisations willing to invest in validation protocols and fine-tuning unlock genuine efficiency gains, but the out-of-the-box experience remains insufficient for unsupervised use.
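One common shape for the structured human-review workflows mentioned above is confidence-gated routing: summaries the system scores highly are committed automatically, while the rest are queued for agent verification. The sketch below is a minimal illustration under assumed names and an assumed threshold value; real deployments derive the confidence score from the model and tune the threshold against their own validated sample.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # illustrative; tune against your own validation set

@dataclass
class DraftSummary:
    call_id: str
    text: str
    confidence: float  # model- or heuristic-derived quality score in [0, 1]

def route(draft: DraftSummary) -> str:
    """Gate low-confidence drafts into the human-review queue."""
    if draft.confidence >= REVIEW_THRESHOLD:
        return "auto_commit"   # written straight to the CRM
    return "human_review"      # an agent verifies before CRM entry

def review_load(drafts: list) -> float:
    """Fraction of calls still needing an agent's eyes --
    the ongoing cost side of the validation workflow."""
    flagged = sum(1 for d in drafts if route(d) == "human_review")
    return flagged / len(drafts)
```

The trade-off the text describes lives in that one threshold: lowering it moves accuracy toward the raw-model figure, raising it moves cost toward full manual review.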
— Mid-size European bank case study: 47,000 calls/quarter with only 26% captured in CRM summaries; the 35,000 unanalysed calls contained 2,800 upsell signals, 1,400 churn warnings, and 340 compliance gaps, revealing a material gap between adoption and implementation.
— Independent third-party testing of 10 call-summary platforms across 400+ real test calls; demonstrates ecosystem maturity, broad vendor feature parity, and adoption breadth across major contact centre platforms.
— AWS Transcribe Call Analytics product page: tier-1 platform GA'd generative AI call summarization combined with call categorization/disposition; confirms market-leading vendor capability maturity and feature consolidation.
— Technical analysis of AI meeting summarization pipeline with specific error rates: ASR 3-35% WER depending on conditions, diarization 11-13% error, LLM hallucination measurable; directly applicable error modes and failure patterns to call summarization deployments.
— Deepgram releases a domain-specific language model for contact centre call summarisation, fine-tuned on 200K conversations, with a quantified wrap-up-time reduction use case demonstrating vendor-specific optimisation for the summarisation practice.
— Cisco Webex Contact Center official feature documentation: GA'd AI conversation summarization across multiple scenarios (dropped calls, AI transfers, consults); confirms tier-1 platform capability with CSAT improvement outcomes.
— Real-world call center automation guide including Telefónica Germany deployment case with specific ACW reduction and operational efficiency metrics, positioning summarization as core automation lever in modern contact center stacks.
— NIST AI 600-1 governance framework requiring pre-deployment TEVV for confabulation testing in regulated domains; directly applicable to call summarization quality assurance requirements in financial services, healthcare, and compliance-sensitive operations.