Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Enterprise search & RAG

GOOD PRACTICE

TRAJECTORY

Stalled

AI-powered search and retrieval-augmented generation across internal documentation and enterprise systems. Includes cross-system search federation and context-aware answer generation; distinct from domain-specific RAG which targets specialised corpora rather than general enterprise knowledge.

OVERVIEW

Enterprise search and RAG is a proven practice with mature tooling, documented ROI, and broad adoption, yet one where execution discipline, not technology, now determines success or failure. The pattern combines keyword and semantic retrieval with LLM-powered answer generation over proprietary data, grounding generative AI in internal knowledge rather than public corpora. Hybrid retrieval (vector plus BM25) settled as the production standard after vector-only approaches proved unreliable for exact matches, structured data, and multi-hop reasoning. The technology works. The harder problem is everything around it: chunking strategy, document quality, governance, cost control, and evaluation frameworks. Forty-five percent of enterprise AI deployments now incorporate RAG, and adopters report strong economics, but failure rates remain high among organisations that treat it as a plug-and-play capability rather than an operational discipline. The practice's defining tension is this gap between technological maturity and organisational readiness: a gap that has stalled further tier advancement despite a market exceeding $2B and infrastructure that continues to improve.
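The hybrid-retrieval pattern described above can be sketched in a few lines: run a lexical (BM25-style) search and a vector search separately, then fuse the two ranked lists. Reciprocal rank fusion is one common fusion rule; the function name, parameters, and document IDs below are hypothetical — a minimal illustration, not any vendor's implementation.

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal rank fusion: score each document by sum(1 / (k + rank))
    across every ranking it appears in, then sort by total score."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical top-5 lists from the two retrievers over the same corpus:
bm25_top = ["doc_7", "doc_2", "doc_9", "doc_4", "doc_1"]
vector_top = ["doc_2", "doc_5", "doc_7", "doc_3", "doc_8"]

# doc_2 wins: it ranks highly in both lists, which RRF rewards.
fused = rrf_fuse([bm25_top, vector_top])
```

Documents that appear in both rankings accumulate score from each, which is why hybrid fusion recovers exact-match hits that a vector-only ranking buries.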

CURRENT LANDSCAPE

The infrastructure layer is production-grade and still improving. Elasticsearch 9.3 shipped bfloat16 vector compression (halving storage) and GPU-accelerated indexing with 12x throughput gains. Azure AI Search added agentic retrieval with expanded knowledge sources including OneLake and SharePoint. Vector database adoption surged 377% year-over-year. These are not early-adopter tools -- they are GA platform features embedded in mainstream enterprise stacks.
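The bfloat16 compression mentioned above halves storage for a simple reason: bfloat16 keeps a float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, so each vector component occupies 2 bytes instead of 4. A stdlib-only sketch of the truncation — illustrative only, not Elasticsearch's actual implementation:

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Compress a float32 to a 16-bit bfloat16 pattern (round-to-nearest-even
    on the 16 dropped mantissa bits). Storage: 2 bytes instead of 4."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding = 0x7FFF + ((bits >> 16) & 1)  # ties round to even
    return ((bits + rounding) >> 16) & 0xFFFF

def from_bfloat16_bits(b: int) -> float:
    """Expand a bfloat16 pattern back to float32 (low mantissa bits zeroed)."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]

# A typical embedding component survives with roughly 2-3 significant digits,
# which is why bfloat16 works well for similarity search:
approx = from_bfloat16_bits(to_bfloat16_bits(0.1))
```

The preserved exponent range is what makes this safer for embeddings than int8 quantisation: dynamic range is untouched and only fine precision is traded away.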

Adoption metrics tell a confident story on the surface: 92% of RAG adopters report ROI within 12 months, averaging 3.2x return. But the denominator matters. Of roughly 1,000 enterprises that attempted RAG deployments through 2025, only about 200 succeeded, with 51% of enterprise AI failures being RAG implementations. The pattern that emerges is not technology risk but operational neglect -- 70% of deployments lack systematic evaluation frameworks, 30-40% of infrastructure budgets are wasted on poorly observed pipelines, and 87% of enterprises measure answer quality while ignoring data freshness and pipeline reliability.

Document quality remains the most underestimated barrier. Standard chunking destroys the logical structure of technical documents -- tables, cross-references, embedded images -- producing hallucinations even when retrieval is technically correct. Recent production analysis identifies five specific failure modes: irrelevant retrieval from poor ranking, partial answers split across multiple chunks, outdated answers from stale knowledge bases, answer refusal when retrieval fails, and hallucinated sources detached from actual documents. The emerging discipline of RAGOps attempts to address this by treating retrieval pipelines as production systems requiring monitoring, governance, and lifecycle management rather than one-time integrations. Evaluation frameworks (six-layer maturity models evaluating corpus quality, retrieval accuracy, groundedness, task success, latency/cost, and escalation design) are now standard in mature deployments. For organisations willing to invest in that operational discipline, enterprise RAG delivers 12-18% precision improvement through hybrid search, 69% error reduction from contextual compression, and 75% accuracy gains on complex regulatory documents via agentic reasoning. For those expecting turnkey results, the failure rate remains punishing.
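The contextual compression credited above with a 69% error reduction boils down to trimming each retrieved chunk to only the passages relevant to the query before they reach the model, so irrelevant sentences cannot seed hallucinations. A toy word-overlap version follows — purely illustrative, with invented stopword list and sample text; production systems typically use an LLM or a trained reranker as the filter.

```python
import re

# Hypothetical minimal stopword list for the illustration below.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "to",
             "and", "is", "are", "what", "how"}

def content_words(text: str) -> set[str]:
    """Lowercase alphabetic tokens minus stopwords."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def compress_chunk(query: str, chunk: str) -> str:
    """Keep only the sentences of a retrieved chunk that share at least one
    content word with the query."""
    q = content_words(query)
    sentences = re.split(r"(?<=[.!?])\s+", chunk.strip())
    return " ".join(s for s in sentences if content_words(s) & q)

chunk = ("Invoices are archived nightly. "
         "Refund requests must be filed within 30 days. "
         "The office cafeteria reopens in May.")
result = compress_chunk("What is the refund deadline in days?", chunk)
# → "Refund requests must be filed within 30 days."
```

The same keep-or-drop decision, made by a stronger relevance judge, is what shrinks the context window and removes the distractor sentences that otherwise surface as hallucinated detail.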

TIER HISTORY

Research       Mar-2023 → Mar-2023
Bleeding Edge  Mar-2023 → Apr-2024
Leading Edge   Apr-2024 → Oct-2024
Good Practice  Oct-2024 → present

EVIDENCE (95)

What's new in Azure AI Search │ Product Launches

— Azure AI Search April 2026 updates: GA semantic ranker on free tiers, agentic retrieval with reasoning control, document sensitivity labels, advancing platform maturity.

— 70-80% of large enterprises have production RAG; enterprise AI spending exceeds $300B in 2026 with 40%+ on generative AI, confirming mainstream adoption.

— Agentic RAG market projects $3.8B→$165B (2024-2034); named deployments: Morgan Stanley (financial research), PwC (tax/compliance), ServiceNow (task automation).

— RAGAS established as de facto evaluation standard; AWS, Microsoft, Databricks, Moody's running 5M+ monthly evaluations, advancing measurement infrastructure.

— Uber deployed OpenSearch for semantic search on 1.5B items, evaluated multiple platforms, solving ingestion and performance bottlenecks at scale.

— Gartner study: 52% of enterprise AI systems hallucinate on ungoverned RAG data vs near-zero on governed data; IBM: 72% of AI failures stem from inadequate context, not models.

— Critical analysis: 50-90% of LLM responses lack full support and 57% of citations are unfaithful (post-hoc rationalization), documenting a citation-faithfulness gap.

— Detailed benchmark of vector databases (Pinecone, Weaviate, Qdrant, Milvus) with latency, recall, and cost metrics for enterprise RAG.

HISTORY

  • 2023-H1: RAG emerged as standard enterprise AI pattern; major vendors announced production tooling (Databricks, Elastic, Azure) addressing deployment challenges. Academic analysis documented limitations and scenarios requiring RAG. Practitioner critique identified tunnel vision toward vector search; hybrid retrieval approaches gaining attention.
  • 2023-H2: Cloud providers shipped production RAG infrastructure; Azure rebranded to Azure AI Search with vector search GA. Elasticsearch deployed RAG in production (Support Hub). Evaluation frameworks (Ragas) and quality benchmarks proliferated, addressing production measurement gaps. Prototype-to-production gap identified as primary adoption barrier.
  • 2024-Q1: RAG stabilized as production standard with documented deployment scale (10TB+ docs/day, 500M+ embeddings, 100k+ users). Market reached $1.35B with 40%+ projected CAGR. Five critical barriers crystallized: retrieval method selection (hybrid > vector-only), prompt engineering, data quality/chunking, evaluation frameworks (RAGAs now peer-reviewed), and performance scaling. Vector-only approaches widely recognized as insufficient.
  • 2024-Q2: Enterprise RAG deployment accelerated with proven adoption at scale: Azure achieved 88% cost-per-vector reduction; KPMG and AT&T deployed to 40k+ and 80k+ users respectively. Simultaneously, independent research revealed production risks: Stanford study found 17-34% hallucination rates in legal RAG tools, while peer-reviewed industry deployments confirmed RAG effectiveness with proper architecture. Hybrid retrieval and data quality emerged as enforced operational requirements, not options.
  • 2024-Q4: Enterprise search & RAG transitioned to mainstream adoption: Menlo Ventures survey reported 28% adoption across 600 enterprise leaders, with RAG now used by 73% of production LLM systems (McKinsey). Platform maturation accelerated—Azure AI Search launched agentic retrieval GA and enterprise security features; Elastic's internal deployment achieved 75% relevance improvement. Research emphasis shifted from architecture to content design discipline; academic analysis identified data governance and security integration as primary adoption barriers. Market attention moved toward agentic search capabilities as evolution beyond basic RAG.
  • 2025-Q1: Mainstream enterprise RAG adoption revealed implementation headwinds: WRITER survey showed only ~33% ROI despite $1M+ annual investment, with 68% of C-suite reporting organizational friction from AI adoption. Technical barriers persisted: Salesforce's HERB benchmark revealed enterprise RAG struggles with multi-hop reasoning over heterogeneous data (documents, transcripts, messages, code), with best agentic methods achieving only 33% performance and retrieval identified as bottleneck. Security gaps remained critical: 13% of enterprises reported AI breaches with 97% lacking proper access controls, demonstrating unresolved governed RAG implementation despite frameworks existing. Product evolution continued with Azure AI Search expanding agentic capabilities and Elasticsearch maturing observability for RAG deployments. Bifurcation emerging between highly-committed Fortune 500 deployments scaling to 40k+ users and mainstream enterprises struggling with proof-of-value and operational adoption.
  • 2025-Q2: Product maturity accelerated with AWS Bedrock custom metrics GA and Azure production deployments (Japan Digital Design's operational case study). However, real-world deployment studies quantified critical quality failures: RAG systems in banking and insurance achieved only 71% citation accuracy and 23% incorrect answers despite perfect retrieval; domain-specific embedding fine-tuning remained mandatory. Infrastructure failures cited in 87% of failed implementations (Gartner), with vector database misconfigurations, monitoring gaps, and backup procedures unresolved. Market consensus shifted decisively: enterprise RAG's barrier was no longer algorithmic but organizational—governance, security integration, operational discipline, and change management remained blocking factors. Bifurcation deepened between Fortune 500 scaling toward governed agentic RAG and mainstream enterprises stuck in proof-of-concept.
  • 2025-Q3: Market growth continued with RAG market at USD 1.92B and 39.66% CAGR projected to 2030 (Mordor Intelligence); Gen AI adoption among enterprises accelerated to 30% scaling (5x growth from 2023, Capgemini). However, critical execution gaps persisted: Salesforce HERB benchmark revealed enterprise RAG quality failures with agentic RAG achieving only 32.96/100 on heterogeneous data, with retrieval as core bottleneck. Cost sustainability emerged as acute problem—72% of enterprise RAG implementations reported failing within first year due to uncontrolled infrastructure expenses; governance and formal policies remained critically underdeveloped (46% of organizations). Quality assurance challenges documented: deployment failures attributed to poor document quality, lack of evaluation loops, and absence of reranking strategies. Market bifurcation endured: Fortune 500 organizations advanced toward governed agentic retrieval with operational discipline, while mainstream enterprises remained blocked by execution complexity, cost overruns, and ROI realization barriers.
  • 2025-Q4: Enterprise RAG market continued growth ($2.33B in 2025, projected 42.7% CAGR to 2035) amid persistent execution challenges. GenAI enterprise spending accelerated sharply to $37B in 2025 (3.2x from 2024), with 76% of AI use cases purchased rather than built internally. RAG became consolidated as the primary production use case for internal enterprise AI—most companies building internal AI systems used RAG pipelines—yet quality and sustainability barriers mounted. Independent failure analysis documented high attrition: 42% of enterprise AI use cases failed in 2025, with 51% of those failures being RAG implementations (S&P Global), only 200 of 1,000 enterprises successfully deploying RAG. Document quality emerged as primary execution barrier: 40% of RAG implementations failed due to poor OCR, inconsistent formatting, and absence of domain-specific fine-tuning; semantic search failed 15-20% of the time in specialized domains (banking, insurance, legal) despite hype around vector embeddings. Practitioner adoption accelerated (36% of developers learning RAG per Stack Overflow 2025 survey) while quality concerns persisted (75% of developers wanted human validation of AI outputs). By year-end, enterprise RAG had consolidated as established practice with proven technology but unresolved organizational, cost, and operational implementation barriers.
  • 2026-Jan: Enterprise RAG adoption plateaued with infrastructure maturation but persistent execution barriers. GenAI usage reached 71% organizational penetration (up from 65% in 2024), with vector database adoption surging 377% year-over-year supporting RAG workloads; hybrid retrieval (vector + BM25) became industry standard achieving 20-40% better quality vs vector-only. Production platforms matured: Elasticsearch 9.2 shipped AI Agent Builder and DiskBBQ optimization; AWS and Azure continued expanding observability tooling. However, 70% of RAG deployments lacked systematic evaluation frameworks leading to silent degradation, and critical barriers persisted: 30-40% of infrastructure budgets wasted due to cost visibility gaps, data engineering challenges (governance, document quality, fragmentation) overshadowing technology maturity, only 17% of organizations realizing 5%+ earnings from GenAI despite widespread deployment. Bifurcation sharpened between Fortune 500 organizations optimizing adaptive retrieval and agentic capabilities vs mainstream enterprises stuck with POC execution gaps and ROI realization challenges.
  • 2026-Feb: Platform maturity continued: Elasticsearch 9.3 GA introduced bfloat16 vector compression (50% storage reduction) and GPU acceleration (12x vector indexing throughput); Azure AI Search expanded agentic retrieval capabilities with portal support for new knowledge sources and reasoning effort tuning. Market consolidation deepened: 45% of enterprise AI deployments incorporated RAG (up from 15% in 2023), with 92% of adopters reporting ROI within 12 months (3.2x average return), confirming mainstream adoption trajectory. However, measurement blind spots emerged as critical risk: 87% of enterprises focused on answer quality metrics while neglecting infrastructure health (data freshness, governance, pipeline reliability), creating silent failure modes in production systems. Document processing challenges persisted: standard chunking strategies destroyed logical structure in technical documents (tables, images, captions), necessitating semantic chunking and multimodal approaches for reliable enterprise deployments. By month-end, consensus solidified around RAGOps as operational discipline, addressing production reliability gaps and establishing enterprise RAG as technology with proven economics but unresolved execution complexity.
  • 2026-Mar: Enterprise RAG consolidated as production infrastructure at significant scale: enterprise search market valued at $7.76B growing to $16.41B (11.3% CAGR), with RAG now at 30-60% of enterprise AI use cases and 87% of enterprises with AI in production (up from 31% in 2020). Capacity's Azure AI Search deployment achieved 97% accuracy with 4.2x cost reduction; Ruhrkohle AG deployment achieved 40% search time reduction. Slack AI native enterprise search reached GA, connecting 55+ data sources with permission-aware results and federated architecture. A critical structural tension crystallized: industry research found enterprises feel they "cannot live without RAG, yet remain unsatisfied"—architecture proven, execution barriers unresolved, with EU AI Act imposing 15-30% performance reduction risk for BFSI (26% of the market). The core execution challenge remains unchanged: governance, document quality, and evaluation discipline continue to determine whether deployments sustain or degrade.
  • 2026-Apr: Production RAG maturity shifted focus to evaluation infrastructure and failure-mode taxonomy. A six-layer enterprise evaluation framework (corpus quality, retrieval accuracy, groundedness, task success, latency/cost, escalation design) emerged as the practical standard for mature deployments; hybrid search with contextual compression reported 69% error reduction and 12-18% retrieval precision gains over vector-only approaches. Agentic knowledge graph architectures achieved 75% accuracy improvement on complex regulatory corpora (Code of Federal Regulations), signalling a structural split between standard RAG for general enterprise knowledge and graph-augmented RAG for multi-hop reasoning over regulated domains. Vector database benchmarks (April 2026) across Pinecone, Weaviate, Qdrant, and Milvus now provide standardised latency, recall, and cost comparisons for enterprise selection decisions; Meta published a peer-reviewed HUMBR technique reducing hallucinations in enterprise RAG workflows; and regulated pharma enterprises are increasingly building internal RAG pipelines rather than purchasing commercial solutions to meet governance requirements.
  • 2026-May: Evaluation infrastructure matured into production standard, with RAGAS (Retrieval Augmented Generation Assessment) becoming the de facto framework for enterprise RAG quality assurance—AWS, Microsoft, Databricks, and Moody's collectively running 5M+ monthly RAGAS evaluations, signaling enterprise-grade measurement adoption. Uber deployed Amazon OpenSearch at 1.5B-item scale for semantic search, addressing algorithm flexibility and ingestion speed as deployment milestones; Azure AI Search April/May 2026 updates included GA semantic ranker on free tiers and agentic retrieval with reasoning-effort control. Market adoption consolidated around 70-80% of large enterprises running production RAG; enterprise AI spending exceeded $300B globally with 40%+ flowing to generative AI workloads. Critical gaps remain persistent: citation faithfulness (50-90% of LLM responses lack full support; 57% of citations are post-hoc rationalizations), governance (52% of enterprise AI systems hallucinate on ungoverned data vs near-zero on governed systems, per Gartner), and data quality as systemic barriers even as infrastructure matured.

TOOLS