Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

Agent quality monitoring & coaching

GOOD PRACTICE

TRAJECTORY

Stalled

AI that monitors agent interactions for quality and compliance while providing real-time sentiment and tone coaching. Includes automated QA scoring and in-call coaching prompts; distinct from agent assist, which drafts responses rather than evaluating agent performance.
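
To make the capability concrete, the sketch below shows in minimal Python what automated QA scoring plus an in-call coaching prompt might look like. The rubric, weights, thresholds, and names (Turn, QAResult, score_interaction) are illustrative assumptions for this index, not any vendor's schema or API.

    # Hypothetical sketch: score one interaction and emit an in-call coaching prompt.
    from dataclasses import dataclass, field

    REQUIRED_PHRASES = ["this call may be recorded", "is there anything else"]

    @dataclass
    class Turn:
        speaker: str       # "agent" or "customer"
        text: str
        sentiment: float   # -1.0 negative .. 1.0 positive, from an upstream model

    @dataclass
    class QAResult:
        score: float                              # 0..100 weighted quality score
        compliance_gaps: list[str] = field(default_factory=list)
        coaching_prompt: str | None = None

    def score_interaction(turns: list[Turn]) -> QAResult:
        agent_text = " ".join(t.text.lower() for t in turns if t.speaker == "agent")
        gaps = [p for p in REQUIRED_PHRASES if p not in agent_text]

        customer = [t.sentiment for t in turns if t.speaker == "customer"]
        avg_sentiment = sum(customer) / len(customer) if customer else 0.0

        # Weighted rubric: compliance carries most of the score, sentiment the rest.
        compliance_score = 100 * (1 - len(gaps) / len(REQUIRED_PHRASES))
        sentiment_score = 50 * (avg_sentiment + 1.0)   # map -1..1 onto 0..100
        score = 0.6 * compliance_score + 0.4 * sentiment_score

        # In-call coaching: prompt on a sharp sentiment drop or a missing disclosure.
        prompt = None
        if customer and customer[-1] < -0.3:
            prompt = "Customer sentiment is falling: acknowledge the frustration before proceeding."
        elif gaps:
            prompt = "Missing required phrasing: " + gaps[0]

        return QAResult(round(score, 1), gaps, prompt)

In production the sentiment values would come from an upstream speech or text model and the rubric would be far richer; the point is only the shape of the evaluation, namely score every interaction, surface compliance gaps, and prompt the agent while the conversation is still live.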

OVERVIEW

AI-driven quality monitoring and coaching is a proven capability with a mature vendor ecosystem, GA tooling, and documented ROI — yet a persistent gap between deployment and value extraction keeps the practice from reaching universal status. The technology itself works: auto-scoring accuracy exceeds 99%, 100% interaction coverage has replaced manual sampling at forward-leaning organisations, and real-time coaching delivers measurable gains in handle time, attrition, and compliance. The question facing most contact centres is no longer whether to adopt, but how to move past fragmented pilots into strategic integration. That transition is where most stall. Only 12% of organisations with AI in their contact centres report fully optimised value, and change management failures — agent distrust, leadership gaps in empathy training, disconnects between operational metrics and business outcomes — remain the binding constraint. The tooling is ready; the organisational maturity is not.

CURRENT LANDSCAPE

Calabrio, Observe.AI, NICE, and Omind all ship GA products offering 100% interaction coverage, automated scoring, and real-time agent coaching. Named deployments back the value claims: Calabrio's QM platform delivers 90% reductions in manual QA time at production scale, while a healthcare deployment through its CareAI programme automated quality evaluation for 53% of patient inquiries with measurable improvements in time to care. Observe.AI, serving over 400 enterprise customers, reports consistent 20% AHT reductions and 25% CSAT improvement from real-time coaching; Calabrio documents 25% lower agent attrition at GE Appliances and a $2.7M revenue increase at Peckham.

These results, however, come from the organisations that have pushed past initial deployment. A USAN survey found 98% of contact centres have adopted some form of AI, but only 12% have reached full strategic optimisation — an 86-point gap that defines the practice's current ceiling. The barriers are primarily human, not technical. Only 35% of agents understand how AI tools are being used in their workflow, more than half fear job automation, and 64% of leaders neglect empathy training despite agents rating it a core strength. Bias in scoring models — accent, sentiment, gender, and script-adherence patterns — remains documented across a majority of deployed systems, and privacy litigation under statutes like CIPA adds legal friction. The technology has arrived; closing the implementation gap is now the work.

TIER HISTORY

Research         Jan-2020 → Jan-2020
Bleeding Edge    Jan-2020 → Jan-2022
Leading Edge     Jan-2022 → Jan-2025
Good Practice    Jan-2025 → present

EVIDENCE (102)

— Liveops survey of 815 enterprise executives shows 65% remain in hybrid Walk/Run stages requiring quality management infrastructure for human-AI workflows; only 14% reach full optimization.

— AmplifAI recognized as leading provider in 2026 CMP Research Prism for Automated QA/QM; analyst validation that coaching integration and 100% coverage are table-stakes.

— Critical analysis exposing coaching quality gaps and attrition drivers: agents leave when QA feels punitive, feedback is delayed, and coaching is sampled rather than continuous.

— Microsoft launches Quality Assurance Agent for real-time and post-interaction evaluation across AI and human interactions, addressing shift away from sampling.

— McKinsey finding that AI-driven QA achieves 90%+ accuracy vs 70% manual scoring while cutting costs in half; SQM Group documents $286K annual savings per 1% FCR improvement.

— Palomarr analyst ranking of 94 quality monitoring vendors by transcription, real-time analytics, AI tunability, and coaching automation identifies LevelAI, Cresta, and Observe.AI as leaders.

— Expert framework distinguishing AI agent monitoring from traditional QA, requiring 100% observability with metrics for resolution, accuracy, escalation, and compliance.

— Systematic QA process with issue-centric lifecycle tracking, annotation workflows, and eval suite as primary quality infrastructure for production AI systems.
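
A minimal sketch of the 100%-coverage metric set the two items above describe: every interaction is scored and rolled into resolution, escalation, and compliance rates rather than a sampled subset. The record shape and the 60-point coaching threshold are assumptions made for the example, not a framework or product schema.

    # Illustrative 100%-coverage monitoring report over every recorded interaction.
    from dataclasses import dataclass

    @dataclass
    class InteractionRecord:
        resolved: bool        # contact ended with the issue resolved
        escalated: bool       # handed off to a supervisor or human tier
        compliance_ok: bool   # required disclosures present in the transcript
        qa_score: float       # 0..100 auto-score from the evaluation model

    def coverage_report(records: list[InteractionRecord]) -> dict[str, float]:
        # Every record is evaluated; nothing is sampled out.
        n = len(records)
        if n == 0:
            return {}
        return {
            "interactions_evaluated": float(n),
            "resolution_rate": sum(r.resolved for r in records) / n,
            "escalation_rate": sum(r.escalated for r in records) / n,
            "compliance_rate": sum(r.compliance_ok for r in records) / n,
            "mean_qa_score": sum(r.qa_score for r in records) / n,
            "coaching_queue_share": sum(r.qa_score < 60 for r in records) / n,  # assumed threshold
        }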

HISTORY

  • 2020: Observe.AI and Calabrio establish AI-powered agent quality monitoring as a distinct capability; Observe.AI secures $80M in funding and lands partnerships with HCL and 3CLogic, enabling 100% call coverage for sentiment and compliance scoring.
  • 2021: Observe.AI reaches 160 customers with 20,000+ agent licenses; launches AI-powered coaching product suite (4X coaching session increase); integrates with Microsoft Azure. Calabrio expands to 100% omnichannel interaction analytics. Cloud adoption accelerates (68% of contact centers), validating infrastructure readiness. Implementation challenges emerge: nonlinear effort to achieve quality thresholds and risk of agent demotivation from automated scoring.
  • 2022-H1: Observe.AI reports 150% ARR growth and 40% enterprise customer increase; launches Auto QA for adaptive automation (up to 1,000x coaching insights increase). Calabrio earns G2 Leader recognition and integrates with Talkdesk. Independent survey shows 67% of contact centers still manual but CI adopters 10x more confident; ecosystem maturity advances via platform integrations and product innovation.
  • 2022-H2: Vendor ecosystem accelerates real-time coaching capabilities. Calabrio and Verint present advanced coaching and quality management features at industry conferences. Market adoption survey shows 78% of contact centers plan AI deployment within 3 years, with quality management as a top priority use case. Implementation focus shifts from manual sampling to 100% automated interaction evaluation.
  • 2023-H1: Observe.AI launches Real-Time AI product suite adding live guidance and supervisor coaching; Calabrio maintains leadership with mature Auto QM evaluation forms. Third-party adoption metrics show significant gaps: 62% of contact center managers cannot analyze enough calls for accurate performance evaluation (Invoca). Vendor innovation focuses on real-time coaching ROI with measurable behavioral impacts (5-6% win rate lifts). Two-tier market emerges between cloud-native leaders and traditional centers.
  • 2023-H2: Critical research from SQM Group reveals persistent adoption barriers: only 19% of managers believe QA programs improve CSAT, and 83% of agents don't believe QA helps their performance. Despite mature product capabilities and vendor innovation in ROI tooling, fundamental user skepticism remains a deployment barrier. Quality monitoring reaches mainstream commercial stage with 100% interaction evaluation becoming standard, but adoption unevenness persists between cloud-native and traditional contact centers.
  • 2024-Q1: Calabrio acquires Wysdom.AI to expand bot QA analytics; deployment evidence shows Awaken achieving 56% reduction in difficult calls and 10% sales uplift. ISG analyst research projects two-thirds of contact centers will increase training/coaching budgets by 2026. Level AI survey finds 100% of leaders considering AI adoption with 23% higher satisfaction among agents using real-time AI tools. Community skepticism persists about AI agent reliability in live customer interactions, highlighting deployment risks alongside vendor momentum.
  • 2024-Q2: NICE releases Real-Time Interaction Guidance for AI-driven agent coaching with contextual compliance prompts, confirming category-wide focus on real-time coaching as table-stakes capability. Calabrio continues product innovation with Bot Analytics tools. Vendor ecosystem shows steady product maturation in real-time guidance and evaluation, though no major new deployment case studies emerge in this quarter.
  • 2024-Q3: Market adoption metrics show 39% of CX leaders using AI-driven scoring for employee and customer evaluation (CallMiner survey, 700 leaders). AWS ecosystem integration advances with real-time sentiment analysis templates for contact center deployment. Research from Salesforce reveals fundamental vulnerabilities in AI evaluation systems: LLMs vulnerable to deceptive feedback with 50%+ performance degradation. Analysis of 47 failed enterprise AI deployments ($127M sunk) identifies testing gaps and insufficient human oversight as key adoption barriers. Industry data shows 74% of contact centers still rely on random sampling, with AI achieving 100% coverage—adoption bifurcation persists between cloud-native and traditional centers.
  • 2024-Q4: NICE releases AI for Agents with 100% conversation evaluation and real-time coaching, confirming vendor commitment to quality monitoring as table-stakes. Adoption momentum continues: 33% of contact centers actively using emotion recognition for sentiment analysis; Frost & Sullivan projects two-thirds of CX operations plan AI-driven coaching within 3-5 years. However, critical legal barriers emerge: class-action lawsuits under privacy statutes (CIPA) challenge automated quality management when customer consent absent. Market bifurcation persists—cloud-native leaders deploying 100% AI-automated evaluation while traditional centers remain largely manual; execution gaps widen between vendor innovation and customer deployment capability.
  • 2025-Q1: Named deployments confirm measurable ROI: AAA Northeast reduced AHT by 14 seconds via AI analytics (equivalent to 1 FTE); Australian energy provider cut QA scoring inconsistencies by 35%, improving FCR/CSAT; GE Appliances/Delta Dental show cost reduction and attrition/defect improvement. AWS Marketplace integration of Observe.AI signals cloud ecosystem maturity. McKinsey/Gartner data shows 30% CSAT improvement and 25% productivity gains from AI monitoring. Practitioner analysis emphasizes hybrid human+AI model necessity—pure automation risks employee pushback, judgment gaps, and legal compliance issues. Market remains bifurcated between cloud-native leaders and traditional centers.
  • 2025-Q2: Calabrio's survey reveals near-universal AI adoption (98%) but persistent implementation challenges: 61% of centers report more difficult conversations since AI deployment, 32% cite agent distrust as critical barrier. Calabrio releases 70+ new features (Auto QM, Trending Topics, Interaction Summary), confirming ecosystem maturity. Observe.AI documents 350+ enterprise deployments with 60% efficiency gains and 75% QA time reduction. Critical shift in evidence landscape: practitioner analysis exposes measurement gaps (e.g., telecom provider showed 12% AHT improvement but 7% revenue decline), revealing tension between operational metrics and business outcomes. Legal/compliance barriers persist; market bifurcation between cloud-native leaders and traditional centers widens.
  • 2025-Q3: Vendor ecosystem innovation continues: Omind launches AI QMS platform with 100% automation, 30% cost reduction, 95% compliance accuracy, 20% CSAT gains, and up to 59-second AHT improvement. Observe.AI case studies document RealDefense (103% quota lift, 13% revenue boost) and Nations Info Corp (50% save rate improvement, 43% AHT reduction). However, critical deployment risks crystallize: 75-80% of enterprises deploying AI QA grading; documented bias manifestations in scoring (accent, sentiment, gender, script-adherence bias) affecting agent reviews and coaching. Organizational failure rates spike: Gartner forecasts 85% of AI projects fail; S&P Global 2025 data shows 42% of companies abandoned most AI initiatives. Fundamental gaps persist—legacy system integration, change management, causation modeling between metrics and business outcomes, and compliance under CIPA privacy constraints. Market bifurcation widens: cloud-native leaders achieve strong ROI, traditional centers struggle with execution. Technology maturity exceeds implementation maturity.
  • 2025-Q4: Technical standardization emerges: Deepgram and UseScore publish production guidelines for speech-to-text sentiment analysis (5-10% WER) and scaling from 3-5% manual sampling to 100% coverage (70-90% workload reduction achieved 8-12 weeks post-deployment). Industry benefit data solidifies: 20-40% CSAT, 15-25% repeat-call reduction, 30-50% escalation gains consistently reported. Observe.AI validated as IDC MarketScape Leader in Workforce Engagement Management. However, deployment fundamentals remain unchanged: 61% of centers report conversation quality degradation post-deployment; 32% cite agent distrust; organizational failure rates sustained at 42% abandonment; bias risks (accent, sentiment, gender, script-adherence) persist across 75-80% deployed systems. No tier-advancement signals emerge; market bifurcation between cloud-native leaders (with strong ROI) and traditional centers (struggling with execution) persists unchanged.
  • 2026-Jan: Calabrio launches Omni Agent Intelligence for unified human+AI agent monitoring, confirming market evolution toward hybrid deployment frameworks. New product capabilities extend to quality measurement across autonomous and human agents. However, Gartner forecasts 40% of agentic AI projects will be canceled by 2027 due to cost, unclear ROI, inadequate risk controls, and integration friction (70% of developers report integration problems). Technical maturity advances (95-98% call-scoring accuracy achieved) but organizational adoption barriers persist: unoptimized QA processes, inadequate change management, and measurement gaps between operational metrics and business outcomes remain tier-limiting factors.
  • 2026-Feb: Calabrio QM deployment demonstrates advanced technical capability: 99%+ auto-scoring accuracy, 90% manual QM time reduction, 25% agent attrition improvement, 41% ACW reduction; healthcare deployment (CareAI) manages 53% of inquiries via automated quality evaluation. However, strategic optimization analysis reveals critical disconnect: 98% AI adoption across contact centers but only 12% claim fully optimized value; 86% remain in "pilot purgatory." Leadership and cultural barriers intensify—only 35% of agents understand AI tool usage, >50% fear automation, and 64% of leaders neglect empathy training. Deployment maturity remains unchanged with persistent challenges: bias risks in 75-80% of systems, 42% organizational abandonment rates, measurement gaps between operational and revenue metrics. Technology capability advances but implementation/organizational maturity static, preventing tier advancement.
  • 2026-Apr: New deployment evidence confirms scale ROI where adoption is mature: Verint Coaching Bot delivers €67.8M benefit and 20-second AHT reduction at a telco, $70M savings at an insurer, and +39 NPS at a mortgage lender; platform comparison data shows BPO customers scaling QA coverage from 3% to 100% with 5-point CSAT gains. The operationalization gap sharpens as the defining constraint: CMSwire data shows 88% of contact centers deployed AI but only 25% operationalized it into daily workflows, and only 52% allow shared visibility between agents and AI systems. A real failure case (insurance company's knowledge base error affecting 660 calls over 11 days before a customer complaint surfaced it) illustrates why 100% monitoring coverage has practical value beyond efficiency — it catches systematic errors that sampling misses.
  • 2026-May: Major vendor commitments formalize next-generation capabilities. Microsoft launches Quality Assurance Agent within Dynamics 365 Contact Center (GA April 2026), emphasizing shift from sampling to real-time evaluation across both AI and human agents. Palomarr analyst framework ranks 94 vendors on transcription accuracy, real-time analytics, and coaching automation, identifying LevelAI (9.8), Cresta (9.7), and Observe.AI (9.6) as leaders. Independent survey of 815 enterprise executives (Liveops/Peter Ryan Strategic Advisory) identifies continued maturity gap: 65% remain in Walk/Run stages (hybrid human-AI workflows) requiring quality management infrastructure, while only 14% reach Fly stage with continuous real-time optimization. McKinsey data confirms 90%+ AI accuracy vs 70% manual; $286K annual savings per 1% FCR improvement. Critical counterweight emerges: Ender Turing analysis documents coaching quality as attrition driver—agents leave when QA feels punitive, feedback is delayed, or coaching remains sampled. Research emphasizes systematic QA process with issue-centric lifecycle tracking as production necessity for AI agents. Coaching integration and 100% coverage validated as table-stakes by CMP Research Prism analyst framework. Overall signal: ecosystem maturity and deployment scale accelerating, but human-organizational barriers remain unchanged.