The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that monitors agent interactions for quality and compliance while providing real-time sentiment and tone coaching. Includes automated QA scoring and in-call coaching prompts; distinct from agent assist, which drafts responses rather than evaluating agent performance.
AI-driven quality monitoring and coaching is a proven capability with a mature vendor ecosystem, GA tooling, and documented ROI — yet a persistent gap between deployment and value extraction keeps the practice from reaching universal status. The technology itself works: auto-scoring accuracy exceeds 99%, 100% interaction coverage has replaced manual sampling at forward-leaning organisations, and real-time coaching delivers measurable gains in handle time, attrition, and compliance. The question facing most contact centres is no longer whether to adopt, but how to move past fragmented pilots into strategic integration. That transition is where most stall. Only 12% of organisations with AI in their contact centres report fully optimised value, and change management failures — agent distrust, leadership gaps in empathy training, disconnects between operational metrics and business outcomes — remain the binding constraint. The tooling is ready; the organisational maturity is not.
Calabrio, Observe.AI, NICE, and Omind all ship GA products offering 100% interaction coverage, automated scoring, and real-time agent coaching. Named deployments back the value claims: Calabrio's QM platform delivers 90% reductions in manual QA time at production scale, while a healthcare deployment through its CareAI programme automated quality evaluation for 53% of patient inquiries with measurable improvements in time to care. Observe.AI, serving over 400 enterprise customers, reports consistent 20% reductions in average handle time (AHT) and 25% CSAT improvements from real-time coaching; Calabrio documents 25% lower agent attrition at GE Appliances and a $2.7M revenue increase at Peckham.
These results, however, come from the organisations that have pushed past initial deployment. A USAN survey found 98% of contact centres have adopted some form of AI, but only 12% have reached full strategic optimisation — an 86-point gap that defines the practice's current ceiling. The barriers are primarily human, not technical. Only 35% of agents understand how AI tools are being used in their workflow, more than half fear job automation, and 64% of leaders neglect empathy training despite agents rating it a core strength. Bias in scoring models — accent, sentiment, gender, and script-adherence patterns — remains documented across a majority of deployed systems, and privacy litigation under statutes like the California Invasion of Privacy Act (CIPA) adds legal friction. The technology has arrived; closing the implementation gap is now the work.
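The core mechanic behind the shift from manual sampling to 100% coverage is simple: score every transcript against a rubric, then route low scores to coaching rather than reviewing a random few percent. A minimal sketch of that idea — the `Interaction` shape, the keyword rubric, and the weights are all hypothetical illustrations, not any vendor's implementation (production systems use tuned language models, not keyword checks):

```python
"""Toy sketch of automated QA scoring at 100% interaction coverage."""
from dataclasses import dataclass

@dataclass
class Interaction:
    agent_id: str
    transcript: str

# Hypothetical rubric: each criterion is a keyword check with a weight.
RUBRIC = {
    "greeting":   (lambda t: "thank you for calling" in t, 0.2),
    "disclosure": (lambda t: "this call may be recorded" in t, 0.3),
    "resolution": (lambda t: "is there anything else" in t, 0.5),
}

def score_interaction(interaction: Interaction) -> float:
    """Return a weighted 0-1 quality score for a single transcript."""
    text = interaction.transcript.lower()
    return sum(weight for check, weight in RUBRIC.values() if check(text))

def score_all(interactions):
    """Score every interaction (no sampling) and flag low scores for
    coaching follow-up instead of leaving them unreviewed."""
    results = [(i.agent_id, score_interaction(i)) for i in interactions]
    flagged = [(agent, s) for agent, s in results if s < 0.5]
    return results, flagged
```

Scoring everything rather than a sample is what makes the downstream coaching continuous: the flagged list feeds coaching queues as calls complete, instead of surfacing issues weeks later from a 2-5% manual review.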
— Liveops survey of 815 enterprise executives shows 65% remain in hybrid Walk/Run stages requiring quality management infrastructure for human-AI workflows; only 14% reach full optimisation.
— AmplifAI recognised as a leading provider in the 2026 CMP Research Prism for Automated QA/QM; analyst validation that coaching integration and 100% coverage are table stakes.
— Critical analysis exposing coaching quality gaps and attrition drivers: agents leave when QA feels punitive, feedback is delayed, and coaching is sampled rather than continuous.
— Microsoft launches Quality Assurance Agent for real-time and post-interaction evaluation across AI and human interactions, addressing the shift away from sampling.
— McKinsey finding that AI-driven QA achieves 90%+ accuracy versus 70% for manual scoring while cutting costs in half; SQM Group documents $286K in annual savings per 1% first-call resolution (FCR) improvement.
— Palomarr's analyst ranking of 94 quality monitoring vendors, scored on transcription, real-time analytics, AI tunability, and coaching automation, identifies LevelAI, Cresta, and Observe.AI as leaders.
— Expert framework distinguishing AI agent monitoring from traditional QA, requiring 100% observability with metrics for resolution, accuracy, escalation, and compliance.
— Systematic QA process with issue-centric lifecycle tracking, annotation workflows, and eval suite as primary quality infrastructure for production AI systems.
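The monitoring frameworks above converge on four headline rates — resolution, accuracy, escalation, and compliance — computed over every interaction rather than a sample. A minimal sketch of that aggregation; the record shape and field names are assumptions for illustration, not any product's schema:

```python
from collections import Counter

def monitoring_metrics(outcomes):
    """Aggregate per-interaction outcome records into the four
    headline rates. Each record is a dict of booleans such as
    {"resolved": True, "accurate": True, "escalated": False,
     "compliant": True} — an assumed shape, not a vendor schema."""
    n = len(outcomes)
    counts = Counter()
    for outcome in outcomes:
        for key in ("resolved", "accurate", "escalated", "compliant"):
            counts[key] += outcome.get(key, False)  # True counts as 1
    return {
        "resolution_rate": counts["resolved"] / n,
        "accuracy_rate":   counts["accurate"] / n,
        "escalation_rate": counts["escalated"] / n,
        "compliance_rate": counts["compliant"] / n,
    }
```

Because the denominator is every interaction, a drift in any rate is visible immediately — the "100% observability" the framework calls for — rather than emerging weeks later from a reviewed subset.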