The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that generates knowledge-base articles from support history and autonomously maintains, updates, and identifies gaps in existing knowledge. Includes article drafting from resolved tickets and coverage-gap detection; distinct from self-service content, which creates user-facing experiences rather than internal knowledge.
AI-powered knowledge-base generation has reached proven, accessible maturity: every major CX platform ships it as a GA feature, deployments number in the tens of thousands, and the ROI case is well documented. The practice has stalled not because it failed but because it hit an architectural ceiling: autonomous article drafting works, yet fully autonomous maintenance does not. Hallucination research consistently shows that AI amplifies knowledge-quality problems faster than organisations can fix them, which means human review gates remain structurally necessary. For teams evaluating this space, the question is no longer whether to adopt KB-generation tooling but how to build the data-hygiene and governance discipline that makes it reliable. The tooling is commoditised; the operational wrapper around it is not.
Zendesk, ServiceNow, Freshworks, Microsoft, and HubSpot all offer GA knowledge-base generation features, and the market has fully commoditised. Zendesk Knowledge Builder powers over 50,000 active knowledge bases; Freshworks serves 73,000+ customers with Freddy AI; and named deployments show real results: Qualia reached 91% help-centre usage with a 30% ticket reduction, while ServiceNow's internal deployment hit 54% deflection and $5.5M in annual savings. The AI knowledge-management market grew from $5.23B in 2024 to $7.71B in 2025, projected to reach $35.83B by 2029.
That scale, however, has not solved the accuracy problem. Comprehensive April-May 2026 research shows hallucination rates spanning 0.7%-88% depending on model and task (Suprmind benchmark), with data governance as the decisive lever: 52% of enterprise AI responses hallucinate on ungoverned data versus near-zero on governed data with the same model (Atlan). Industry data from 2024 shows that 39% of AI customer-service implementations were rolled back or reworked due to hallucinations, and that 76% required human-in-the-loop review before production. Peer-reviewed research demonstrates that knowledge-base semantic quality improves accuracy by 17-23 percentage points across frontier models (Claude Opus 4.7, Claude Sonnet 4.6, GPT-5.4), confirming that governance is the critical upstream work. Real-world deployments (MBH Architects, Docker, Nokia, OpenAI using Kapa.ai) show that AI-assisted gap detection and maintenance workflows work, but all remain semi-autonomous, with human review gates. Customer sentiment remains cautious: 94% of IT leaders are concerned about vendor lock-in. Vendors have democratised KB features down to standard plans (Zendesk, April 2026), but accuracy and governance constraints, not tooling gaps, remain the binding limitations on full autonomy.
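The review-gate pattern described above can be sketched minimally. This is an illustrative assumption of how such a gate might be structured, not any vendor's API; all class and field names are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class DraftArticle:
    """An AI-drafted KB article, traceable to the resolved tickets it came from."""
    title: str
    body: str
    source_ticket_ids: list
    status: ReviewStatus = ReviewStatus.PENDING

class ReviewGate:
    """Holds AI-drafted articles until a human reviewer approves them.
    Nothing reaches the published KB without an explicit human decision."""

    def __init__(self):
        self.queue = []       # drafts awaiting review
        self.published = []   # human-approved articles only

    def submit(self, draft: DraftArticle) -> None:
        self.queue.append(draft)

    def review(self, draft: DraftArticle, approved: bool) -> None:
        # The human decision is the only path out of the queue.
        draft.status = ReviewStatus.APPROVED if approved else ReviewStatus.REJECTED
        self.queue.remove(draft)
        if approved:
            self.published.append(draft)

gate = ReviewGate()
draft = DraftArticle("Reset 2FA", "Steps drafted from resolved tickets...",
                     ["T-101", "T-102"])
gate.submit(draft)
gate.review(draft, approved=True)  # human decision, never automated
```

The point of the sketch is structural: the publish path only exists inside the review step, which is what "human review gates remain structurally necessary" means in practice.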
— Comprehensive benchmark aggregating hallucination rates (0.7%-88%, depending on model and task) across frontier AI models; documents the endemic hallucination problem that limits autonomous KB deployment without governance.
— AI-assisted KB maintenance deployed by Docker, Nokia, and OpenAI; demonstrates a practical workflow for gap detection, using RAG-powered analysis to identify coverage gaps and guide content creation.
— Adoption barrier data: 39% of AI customer service implementations were rolled back or reworked due to hallucinations in 2024; 76% of enterprises require human-in-the-loop review to catch hallucinations before deployment.
— Zendesk generalises AI KB features to Suite plans (not premium-only), including generative article writing and a unified RAG system for search and answers, signalling platform maturity and the commoditisation of KB AI.
— Peer-reviewed benchmark testing Claude Opus 4.7, Claude Sonnet 4.6, and GPT-5.4 shows that semantic context improves accuracy by 17-23 percentage points; proves KB semantic quality is a structural requirement, not a model-dependent one.
— Industry analysis: 52% of enterprise AI responses contain hallucinations on ungoverned RAG data versus near-zero on governed data with the same model; proves that KB governance and maintenance, not tooling, are the critical lever for reliable AI systems.
— MBH Architects deployed firm-wide KB transformation spanning marketing proposals, practice knowledge, project data, and learning; demonstrates structured knowledge capture, AI-enabled retrieval, and scope-drafting agents with measured time savings.
— KCS is a mature methodology for knowledge generation embedded in customer-support workflows; reps create and refine knowledge in real time during case resolution, with a double-loop Solve/Evolve process for continuous KB improvement.
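The RAG-powered gap detection cited in the evidence above retrieves against embeddings; as a rough, dependency-free sketch of the same idea, bag-of-words cosine similarity can stand in for an embedding model. The function names, sample data, and the 0.3 threshold are all illustrative assumptions:

```python
import math
from collections import Counter

def vectorize(text):
    """Toy stand-in for an embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def find_gaps(tickets, articles, threshold=0.3):
    """Flag tickets whose best-matching KB article scores below the threshold,
    i.e. topics the knowledge base does not yet cover."""
    article_vecs = [vectorize(a) for a in articles]
    gaps = []
    for ticket in tickets:
        tv = vectorize(ticket)
        best = max((cosine(tv, av) for av in article_vecs), default=0.0)
        if best < threshold:
            gaps.append(ticket)
    return gaps

articles = ["how to reset your password", "how to export billing invoices"]
tickets = ["cannot reset password on mobile", "sso login fails with saml error"]
print(find_gaps(tickets, articles))  # → ['sso login fails with saml error']
```

A production pipeline would replace `vectorize` with an embedding model and feed the flagged clusters to a drafting step; consistent with the evidence above, the resulting drafts would still pass through a human review gate before publication.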