Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

API & schema generation from natural language

BLEEDING EDGE

TRAJECTORY

Advancing

AI generating API endpoints, database schemas, or data models from natural language descriptions of requirements. Includes REST/GraphQL API scaffolding and database schema design; distinct from infrastructure-as-code which targets deployment resources rather than application interfaces.

OVERVIEW

Generating API endpoints and database schemas from natural language remains an experimental capability despite rapid vendor investment. The promise is clear -- describe what you need in plain language and get a working REST endpoint, GraphQL schema, or SQL query -- but production reliability has not caught up. Benchmark accuracy on NL-to-SQL has climbed sharply (the BAR-SQL framework reaches 91.48% on BIRD, up from 40% two years ago), yet real-world deployment tells a fundamentally different story. Text-to-SQL accuracy collapses from 85-90% in abstract benchmarks to 29% on real enterprise-scale queries (BIRD-Interact); production systems fail silently with plausible-looking outputs that mask semantic errors. The reliability gap stems from four distinct failure layers: syntax (mostly solved), schema compliance (partially addressed by constrained decoding), semantic validity (unresolved), and distribution shift (unresolved). Developer surveys document only 3% high trust in AI-generated schema code despite 90% adoption, with two-thirds of outputs requiring substantial modification. Isolated enterprise successes exist, but they depend on heavy guardrails -- semantic metadata layers, domain-specific fine-tuning, and constraint-based decoding -- rather than zero-shot generation. The defining tension for this bleeding-edge practice is that the fundamental bottleneck is semantic understanding, not schema formatting. Until that gap closes, adoption stays confined to prototyping, legacy bridging, and low-stakes query generation.
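
The failure-layer split above can be seen in a few lines. Here is a minimal sketch against a hypothetical two-column SQLite schema (all table and column names are invented for illustration): a generated query clears the syntax and schema-compliance layers, yet silently answers the wrong question.

```python
import sqlite3

# Hypothetical schema where two date columns are easy to confuse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER, signup_date TEXT, last_login TEXT);
    INSERT INTO users VALUES
        (1, '2026-01-05', '2026-05-01'),
        (2, '2026-04-20', '2026-04-21');
""")

intended = "SELECT COUNT(*) FROM users WHERE signup_date >= '2026-04-01'"
# A plausible model output: syntactically valid, schema-compliant,
# but built on the wrong column -- a semantic error.
generated = "SELECT COUNT(*) FROM users WHERE last_login >= '2026-04-01'"

def parses(sql: str) -> bool:
    """Layers 1-2 (syntax + schema compliance), checked via the
    engine's own query planner."""
    try:
        conn.execute(f"EXPLAIN {sql}")
        return True
    except sqlite3.Error:
        return False

print(parses(generated))                     # True: the 'solved' layers pass
print(conn.execute(intended).fetchone()[0])  # 1 (the right answer)
print(conn.execute(generated).fetchone()[0]) # 2 (plausible, silently wrong)
```

No validator that only inspects syntax or schema references can catch the second query; that is the unresolved semantic-validity layer.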

CURRENT LANDSCAPE

May 2026 evidence confirms the bench-to-production gap remains the critical barrier. Bytebase synthesis of four vendor case studies (OpenAI, Google Cloud, Vercel, Hex) documents a consistent pattern: enterprise text-to-SQL success depends not on model capability but on governance infrastructure — context limiting, rigorous evaluation, and deterministic validation layers. "Giving the system everything made it worse. Giving it less, but making that 'less' very clear, made it usable." The core problem persists: raw LLM output for schema/SQL generation requires intensive downstream validation before deployment. Getcollate's analysis of benchmark artifacts shows the gap starkly — 85%+ accuracy on curated Spider 1.0 collapses to 10.1% on real enterprise Spider 2.0 for GPT-4o, demonstrating that semantic context (business rules, metric definitions, documentation) is the limiting factor, not LLM architecture. Structured Output Benchmark research (April 2026) reveals the foundational challenge: LLMs produce syntactically valid JSON schemas with semantically incorrect hallucinated values, and this gap cannot be closed by constrained decoding alone.
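
One way to make the "deterministic validation layer" concrete: SQLite's authorizer hook can reject, at prepare time, anything a generated query should never do. This is a sketch with an invented allowlist and schema, not a production design; a real pipeline would add audit logging and per-user access control on top.

```python
import sqlite3

# Hypothetical allowlist: generated SQL may only read from these tables.
ALLOWED_TABLES = {"orders"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")
conn.execute("CREATE TABLE employees (id INTEGER, salary REAL)")  # off-limits
conn.execute("INSERT INTO orders VALUES (1, 42.0)")

def authorize(action, arg1, arg2, db_name, trigger):
    """Deterministic policy enforced by the engine, not by the prompt."""
    if action in (sqlite3.SQLITE_SELECT, sqlite3.SQLITE_FUNCTION):
        return sqlite3.SQLITE_OK
    if action == sqlite3.SQLITE_READ and arg1 in ALLOWED_TABLES:
        return sqlite3.SQLITE_OK
    return sqlite3.SQLITE_DENY  # writes, DDL, and other tables all refused

conn.set_authorizer(authorize)

def run_generated(sql):
    """Execute model-generated SQL only if the authorizer lets it through."""
    try:
        return conn.execute(sql).fetchall()
    except sqlite3.DatabaseError as exc:
        return f"rejected: {exc}"

print(run_generated("SELECT COUNT(*) FROM orders"))   # [(1,)]
print(run_generated("SELECT salary FROM employees"))  # rejected: ...
print(run_generated("DROP TABLE orders"))             # rejected: ...
```

The policy lives in code the model never touches, which is the pattern the case studies describe: validation as infrastructure rather than prompt engineering.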

Production deployments exist but remain confined to narrow use cases. Microsoft's ISE blog documents metadata enrichment as the critical infrastructure investment — metadata-starved systems fail at scale; custom implementations achieve ~75% accuracy with explicit column descriptions and domain context. Spotify's May 2026 natural language interface for ad campaign management demonstrates production NL-to-API with reported 85% user adoption, but it shipped as a convenience layer that simplifies API calls rather than as autonomous generation. Multi-turn text-to-SQL (Rose-SQL framework) achieves SOTA on SParC/CoSQL benchmarks, and AutoLink achieves 97.4% schema recall at enterprise scale (3000+ columns), signaling that structured reasoning and iterative exploration outperform zero-shot generation. Yet these advances remain contained within research and specialized use cases.
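
A sketch of the metadata-enrichment pattern described above (every table name, column name, and description string here is invented for illustration): render per-column business descriptions for only the relevant tables into the model's context, rather than dumping the whole catalog — the "context limiting" that the case studies credit.

```python
# Hypothetical metadata layer: the per-column business descriptions that
# metadata-starved systems lack.
SCHEMA_METADATA = {
    "ad_spend": {
        "description": "Daily advertiser spend, one row per campaign per day.",
        "columns": {
            "campaign_id": "Internal campaign key; joins to campaigns.id.",
            "spend_usd":   "Gross spend in USD before platform fees.",
            "event_date":  "Calendar date of delivery (UTC), not booking date.",
        },
    },
}

def build_context(tables):
    """Render only the listed tables into the prompt context."""
    lines = []
    for table in tables:
        meta = SCHEMA_METADATA[table]
        lines.append(f"TABLE {table}: {meta['description']}")
        for col, desc in meta["columns"].items():
            lines.append(f"  {col}: {desc}")
    return "\n".join(lines)

prompt = (
    build_context(["ad_spend"])
    + "\n\nQuestion: total spend per campaign last week?"
)
print(prompt)
```

The deterministic part (selecting and rendering metadata) is cheap; the expensive part, per the evidence above, is authoring and maintaining the descriptions themselves.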

Security and governance become operational requirements at scale. dpriver analysis identifies 10 production risks (hallucinated schema, unauthorized access, PII exposure, cost explosions) requiring deterministic validation layers with parse-check-approve-audit pipelines, not prompts. Schema drift emerges as the critical blocking issue: dependency updates change API response formats silently, causing hidden behavioral regression in agents trained on stale schemas. LLM structured-output reliability remains stochastic across providers — some calls succeed, others fail, with no deterministic recovery mechanism. Production deployment remains anchored to non-critical use cases: rapid prototyping (high refinement overhead accepted), legacy API bridging (shallow, stable schemas), and exploratory analytics (queries discarded if wrong).
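
Schema drift of the kind described can be caught with a baseline check: record the expected field-to-type map for an upstream response (field names here are hypothetical) and fail loudly when a dependency update silently changes the shape, instead of letting agents run on stale schemas.

```python
# Baseline recorded when the integration was built; in practice this would be
# generated from the upstream API contract, not hand-written.
BASELINE = {"id": int, "amount": float, "currency": str}

def detect_drift(payload: dict) -> list[str]:
    """Compare a live payload against the baseline; return a list of
    human-readable drift findings (empty means no drift detected)."""
    problems = []
    for field, expected in BASELINE.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected):
            problems.append(
                f"type shift: {field} is {type(payload[field]).__name__}, "
                f"expected {expected.__name__}"
            )
    return problems

print(detect_drift({"id": 1, "amount": 9.5, "currency": "USD"}))  # []
# After a silent upstream change: amount became a string, currency dropped.
print(detect_drift({"id": 1, "amount": "9.50"}))
```

This only covers the two drift modes named above (type shifts and silently removed fields); added fields and semantic redefinitions of an unchanged type need contract tests or schema registries.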

TIER HISTORY

Research: Mar-2023 → Jul-2023
Bleeding Edge: Jul-2023 → present

EVIDENCE (94)

— Synthesis of OpenAI, Google Cloud, Vercel, and Hex case studies documenting that enterprise text-to-SQL success requires governance layers (validation, access control, audit), not better prompts—establishing maturity pattern.

— Microsoft vendor case study on LiveSQLBench: metadata enrichment (column descriptions, domain context) critical to accuracy; ~75% achieved with custom implementation, demonstrating infrastructure requirements for production.

— Spotify deployed production NL interface for advertisers to manage ad campaigns via plain language instead of manual API calls, reports 85% adoption; catalogs 30+ examples from tier-1 product teams showing production NL-to-API maturity.

— Training-free framework using small-scale LRMs for multi-turn text-to-SQL without expensive fine-tuning; SOTA performance on SParC/CoSQL, showing smaller models viable for production with structured reasoning.

— Identifies 10 production security risks (hallucinated schema, unsafe statements, unauthorized access, PII exposure) requiring deterministic validation layer, not prompts—blocking wider production adoption.

— Multi-source benchmark evaluating LLM structured output quality across 7 metrics (JSON compliance, value accuracy, faithfulness); identifies critical gap between syntactic validity and semantic correctness in schema generation.

— Benchmarks reveal 85% accuracy on curated Spider 1.0 vs 10.1% on real-world Spider 2.0 for GPT-4o; semantic context gaps, not LLM capability, determine success—showing benchmark artifacts mask production reliability.

— AAAI 2026 agent framework for enterprise-scale schema linking; 97.4% recall on Bird-Dev, 91.2% on Spider 2.0 while handling 3000+ column schemas, demonstrating production-viable approach to enterprise complexity.

HISTORY

  • 2023-H1: Research advances in schema understanding and text-to-SQL, with foundational benchmarks (BIRD) revealing significant accuracy gaps (40% vs 92% human). Early implementations in academic (DBCopilot) and vendor (Postgres/GPT-3) projects. Deployment limited to research and proof-of-concept stages.
  • 2023-H2: Vendor tooling and patent filings accelerate. GraphQL Editor deploys AI-powered schema generation from natural language (September 2023); Google patents schema-based NL-to-API integration (September 2023). Academic research deepens schema routing approaches for massive databases (DBCopilot arxiv, December 2023). No major production deployments; adoption remains in mockup and experimentation phases.
  • 2024-Q1: Vendor product launches and continued academic research. Neurelo launches Cloud Data API Platform (January 2024) with AI-assisted natural language query generation. Academic research advances GraphQL query generation (IJCAI 2024) and reinforces enterprise limitations (CIDR 2024: NL2SQL "far from resolved"). Practitioner feedback highlights API design quality concerns in AI-generated code. Deployment moves into early production but limited to non-critical schema and query generation tasks.
  • 2024-Q2: Vendor consolidation continues with Neurelo maintaining GA platform status and expanding production use for REST and GraphQL API auto-generation from database models. General AI-assisted development tools (Amazon Q) gain enterprise traction with broad productivity claims, though API/schema generation remains a subset of broader capabilities. Adoption remains constrained by accuracy limitations on complex schemas and quality concerns in AI-generated API design. No breakthrough in enterprise-grade NL-to-schema accuracy; deployment still predominantly in lower-stakes schema prototyping and query generation.
  • 2024-Q3: Research advances in schema linking and text-to-SQL continue (E-SQL achieves 66.29% BIRD accuracy; RoSL improves recall by 25.1% for smaller 8B models). Community adoption of GraphQL remains active but schema-related challenges persist (45K StackOverflow analysis). Open-source NL-to-GraphQL tools emerge (talk-to-graphql). Critical assessments surface recurring reliability concerns: 52% error rate in AI-generated API code, security vulnerabilities, and hallucinations. Neurelo tutorials show iterative schema refinement in production tool. Overall trajectory: incremental improvements on specific benchmarks (BIRD) but no breakthrough in production adoption; accuracy remains constrained by schema complexity, and production deployment limited to non-critical schema/query generation tasks.
  • 2024-Q4: Focused research effort on GraphQL query generation (EMNLP 2024 industry track reports ~50% accuracy on new 10,940-pair dataset from IBM/StepZen; open-source NL2GQL dataset released October 2024). Academic interest in schema generation from requirements specifications continues (November 2024 publications). Neurelo expands operational workflows with custom API endpoint deployment via natural language queries integrated into git-based version control (December 2024). Critical reliability barriers persist: the accuracy gap between LLM-generated and human-authored code remains significant. Industry consensus emerges: custom fine-tuning and domain-specific training data are essential; zero-shot generation inadequate for production schemas. No breakthrough in enterprise adoption; market remains characterized by research intensification and vendor optimization of non-critical use cases (rapid prototyping, mockups, low-stakes query generation).
  • 2025-Q1: Research shifts toward direct schema generation from natural language (SchemaAgent multi-agent framework with 381-pair benchmark; Nixa addresses dynamic schema discovery in multi-tenant SaaS). Vendor ecosystem expands with AI App Builder entering GA schema generation market. Open-source tools mature (GQLPT+APIPT for GraphQL/REST). Developer confidence remains low despite high adoption: Q1 2025 surveys show 90% use but 3% high trust, 66% requiring substantial modifications, accuracy across tools ranges 31–65%. Critical assessment emphasizes technical debt accumulation and systemic reliability barriers. Production deployment unchanged: non-critical experimentation only, no enterprise-grade schema adoption for critical systems.
  • 2025-Q3: GraphQL specification update (September) optimizes for AI/LLM integration with OneOf input objects and Schema Coordinates. User study (September) shows NL2SQL systems achieve 75% accuracy and 10–30% faster query completion vs. traditional SQL, but persistent user frustration with refinement cycles. Security vulnerabilities in production AI code assistants (Amazon Q Developer prompt injection/RCE, August) highlight ongoing risks. Ecosystem consolidation continues; no breakthrough in enterprise adoption. Production constraints unchanged: accuracy gaps, design quality below human baselines, security risks preclude critical system deployment.
  • 2025-Q4: Research advances in schema-aware generation (GenLink multi-model learning achieving 67.34% BIRD accuracy, first systematic normalization-impact study). Oracle releases GA GraphQL schema generation from relational databases. Production case study demonstrates API code generation from natural language with zero-shot success. Vendor ecosystem matures with Oracle and existing platforms. However, critical practitioner analyses identify four blocking issues—schema awareness gaps, accuracy limitations, poor optimization, security risks—alongside production brittleness from schema churn. Enterprise adoption for critical systems remains negligible; deployment limited to non-critical prototyping and low-stakes query generation. Accuracy and production reliability remain below thresholds for enterprise-grade schema/API generation.
  • 2026-Jan: Breakthrough in NL-to-SQL accuracy: BAR-SQL achieves 91.48% on BIRD benchmark, surpassing Claude 4.5 and GPT-5, indicating narrowing of the gap. Production deployments mature: IBM deploys zero-config NLQ-to-SQL at enterprise scale (98.7% success across 17K tables, 3.1s latency). AWS Amazon Q Developer reaches GA with SmugMug case study (100% productivity gain). However, critical barriers persist: LLM planning accuracy collapses to 30-49% with 300+ API endpoints, improving only with semantic metadata and declarative APIs. DevPals demonstrates legacy API bridging in production (60% integration TCO reduction, 90% error reduction). Patent disclosures (IBM, others) focus on semantic data layers and agentic guardrails to prevent hallucination in enterprise NL-to-SQL. Accuracy ceiling in January 2026 remains: zero-shot generation inadequate for heterogeneous schemas; semantic metadata, domain-specific fine-tuning, and constraint-based generation required for production reliability. NL-to-API remains limited to non-critical query generation, rapid prototyping, and legacy system integration.
  • 2026-Feb: Vendor ecosystem expands with AWS Bedrock structured outputs (constrained decoding for schema compliance), Oracle NetSuite N/LLM embedding native schema generation in ERP, and Apollo GraphQL agent skills for automated schema design—but each vendor acknowledgement includes caveats about AI generation quality and reliability. Real-world incident documentation surfaces schema drift patterns and API brittleness (type shifts, silent field changes causing data corruption). Practitioner testing reveals stochastic LLM API failures across Anthropic, Google, and AWS for structured output tasks. Deployment barriers persist: schema evolution causes hidden coupling; zero-shot generation inadequate; LLM reliability not deterministic. Enterprise adoption for critical schemas unchanged; non-critical prototyping and legacy bridging remain primary use cases.
  • 2026-Mar: Product ecosystem accelerates with SharpAPI, Netlify Agent Runners, and expanded Neurelo Series A funding ($5M). Real-world deployments surface: QueryLytic at B2B SaaS (schema compression, validation, multi-database support), MANTA production instances (ChemoMaker pharmacy, Manufacturing BI). Enterprise adoption metrics mature: Bank of America Erica (19.5M+ users, 100M+ requests, 30% call center reduction), Microsoft Power BI, Tableau Ask Data (63% self-service analytics increase). Constrained decoding frameworks proliferate (Guidance, Outlines, XGrammar) but JSONSchemaBench benchmark (10K schemas) reveals significant feature coverage gaps across all frameworks. Critical assessment surfaces: practitioner analysis quantifies nested JSON schema failure rates (15-25% at 3+ nesting levels); controlled research finds zero end-task success even with formal JSON schemas, indicating semantic understanding remains the bottleneck, not schema syntactic compliance. Vendor landscape confirms: production adoption accelerating for non-critical query generation and legacy API bridging, but fundamental reliability barriers persist. Schema optimization (PARSE framework) emerges as research direction, treating schema design itself as a tuning problem rather than static interface contract.
  • 2026-Apr: Bench-to-production gap widens on multiple fronts. SQLStructEval and Omni Analytics (4,602 failed queries) confirm that 81.2% of production SQL errors are semantic rather than syntactic, and GPT-5 drops from 86% on Spider 1.0 to 29% on enterprise-scale BIRD-Interact — establishing that benchmark scores overstate real-world reliability by a wide margin. dbt Labs benchmark validates the semantic layer approach: text-to-SQL at 85-90% accuracy vs 97-100% with structured semantic layer, confirming the bottleneck is schema understanding not LLM capability. AWS production deployment (Amazon Q with PostgreSQL schema generation in database migration) and normalized schema design research (16.8% QA accuracy gain from 3NF schemas) provide positive signals for constrained use cases, while structured output analysis identifies four unresolved failure layers — semantic validity and distribution shift remain outside constrained decoding's reach. Enterprise deployment evidence expanded: Microsoft engineer documented production use of Copilot Chat for database schema generation from natural language in enterprise context; schema drift documented as a critical production failure mode — healthcare case study found 12 of 28 microservices with schema drift causing silent failures until automated validation deployed; xAI shipped structured outputs GA alongside tool-calling failure analysis identifying schema mismatches and context limitations as primary root causes. Production deployment continues anchored to low-stakes use cases; enterprise-grade NL-to-schema for critical systems remains blocked by semantic reliability gaps and schema drift brittleness.
  • 2026-May: Governance patterns solidify as industry maturity signal. Bytebase synthesis of OpenAI, Google Cloud, Vercel, and Hex case studies establishes that production text-to-SQL success requires deterministic governance (context limiting, structured evaluation, validation layers), not better prompts. Spotify's May 2026 natural language interface for campaign management reports 85% adoption, demonstrating production NL-to-API for tier-1 vendor but as convenience layer rather than autonomous generation. Structured Output Benchmark (May 2026) quantifies core reliability challenge: LLMs produce syntactically valid JSON with semantically incorrect hallucinated values. Advanced research (Rose-SQL, AutoLink) achieves SOTA on multi-turn SQL and enterprise-scale schema linking, but remains confined to research/specialized use cases. Microsoft case study on LiveSQLBench documents metadata enrichment as critical infrastructure investment (~75% accuracy with descriptions; near-zero without). Security analysis identifies 10 production risks (hallucinated schema, PII exposure, cost explosions) requiring deterministic validation, not prompts. Semantic context (business rules, documentation, glossaries) confirmed as bottleneck across multiple independent studies. Enterprise adoption for critical systems unchanged; deployment remains anchored to prototyping, legacy API bridging, and exploratory analytics with human-in-loop validation.