Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two featured domains, delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain, plotted per domain on a scale from bleeding edge to established.

Multimodal document understanding

LEADING EDGE
ALSO IN: 🔄 Operations & Process Automation · 👁️ Computer Vision & Sensing

TRAJECTORY

Advancing

AI that understands documents containing mixed content — tables, diagrams, images, and text — extracting meaning from each. Includes chart reading, diagram interpretation, and table extraction; distinct from standard OCR, which recovers text but not mixed visual elements.

OVERVIEW

Multimodal document understanding (AI that extracts meaning from documents combining tables, diagrams, charts, and text) has proven its value at forward-leaning organisations but remains far from mainstream adoption. The technology works: production deployments at scale show strong ROI, with one major bank cutting invoice processing time by 85% across 50K monthly documents. Agentic specialist vendors now outperform generalist cloud platforms on domain-specific tasks. Yet the binding constraint has shifted from model capability to everything surrounding it. Vendor platforms still suffer recurring reliability failures in production, enterprise document infrastructure is poorly prepared for AI-driven workflows, and a persistent gap separates survey-reported adoption from operational reality: most implementations remain rule-based rather than truly multimodal. The ecosystem has stratified into cloud platforms, foundation models, and specialist document-centric architectures, each competing on a different axis. For organisations with strong document infrastructure and tolerance for integration friction, the returns are real. For most, the practice remains aspirational.

CURRENT LANDSCAPE

As of April 2026, the vendor ecosystem has solidified into three distinct layers, each with complementary strengths and constraints. Cloud platforms (AWS Textract, Azure AI Search, Google Document AI, Oracle) dominate on market reach and feature maturity: Textract serves 1,700+ customers on billions of pages annually, and Azure is newly GA for multimodal search with native handling of embedded diagrams. Specialist agentic platforms (Indico, Convr, LandingAI) are winning on hard document types: Indico processes 1M+ pay stubs daily at 90%+ accuracy, LandingAI scaled a healthcare RCM deployment from 120K to 240K prior-authorization pages daily (60%→90%+ accuracy), and Convr reports 97% accuracy on commercial insurance. Open-source ecosystems are maturing: Docling now hosts 24 models, and community-developed visual inspection tooling (Docling Studio) enables production RAG workflows. Independent benchmarking (BenchLM, April 2026) now treats document understanding (OfficeQA Pro) as an essential capability for enterprise multimodal systems.

However, the adoption-reality gap has widened. A 600-organisation AIIM survey finds 61% still paper-dependent despite 78% claiming operational AI, and 66% of new IDP deals are replacements of systems that failed. Vendor demo accuracy (98% on clean PDFs) drops to 70% on real client documents with handwriting, layout variation, and degraded scans. Practitioner analysis shows the "95% demo, 60% reality" pattern holds across industries: building document understanding requires composing specialised components (classification, extraction, table handling, validation), not deploying a single monolithic tool.

Production constraints are shifting from model capability to infrastructure. Manual exception-handling costs (~$4.83/page) dominate API costs; multimodal token costs (2-5× text) drive architectural choices toward hybrid OCR+vision pipelines; and SOTA models still score below 50% on complex element parsing and layout perception. Organisational readiness remains critical: 61% of organisations still rely on paper despite automation claims, and only 38% rate their document data as suitable for AI use.
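The hybrid OCR+vision pattern mentioned above can be sketched as a per-page router: pages that conventional OCR handles confidently stay on the cheap text path, and only visually complex pages (low OCR confidence, detected charts or diagrams) are escalated to the multimodal model that costs 2-5× as much per token. A minimal sketch in Python; the `Page` fields, the confidence threshold, and the route names are illustrative inventions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Page:
    number: int
    ocr_text: str          # text recovered by a conventional OCR pass
    ocr_confidence: float  # mean word-level confidence, 0.0-1.0
    has_figures: bool      # layout detector flagged charts/diagrams

def route_page(page: Page, confidence_floor: float = 0.9) -> str:
    """Decide the processing path for one page: cheap text pipeline when
    OCR is confident and the page is text-only; escalate to the more
    expensive vision model otherwise."""
    if page.has_figures or page.ocr_confidence < confidence_floor:
        return "vision"
    return "ocr"

def process(pages: list[Page]) -> dict[str, list[int]]:
    routes: dict[str, list[int]] = {"ocr": [], "vision": []}
    for page in pages:
        routes[route_page(page)].append(page.number)
    return routes

pages = [
    Page(1, "Invoice total: $1,240.00", 0.97, False),  # clean text page
    Page(2, "Q3 revnue ch@rt", 0.62, True),            # degraded scan with chart
    Page(3, "Terms and conditions ...", 0.95, False),
]
print(process(pages))  # pages 1 and 3 stay on OCR; page 2 escalates
```

The routing heuristic is where the real engineering lives: set the floor too low and vision costs balloon; too high and degraded scans silently produce the 70%-accuracy failures described above.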

TIER HISTORY

Research: Jan 2024 → Jan 2024
Bleeding Edge: Jan 2024 → Jul 2024
Leading Edge: Jul 2024 → present

EVIDENCE (90)

Docling (Notable Repositories)

— Major open-source document conversion toolkit (55.8k GitHub stars) using a vision-language model (Granite-Docling-258M) to convert PDFs, DOCX, and images into structured data, handling tables, formulas, reading order, and OCR; core multimodal document understanding infrastructure.

— Amazon Science benchmark for evaluating vision LLMs on long-context document understanding; tests ability to maintain grounded understanding across large document contexts beyond short OCR snippets; signals maturity of evaluation infrastructure.

Docling at Red Hat Summit 2026 (Conference Talks)

— Conference presence at Red Hat Summit (May 11–13, 2026) with four sessions covering production multimodal document understanding use cases (aviation maintenance manuals, banking/insurance claims at scale, RAG on Kubernetes, Ray Data pipelines).

— Amazon Science research paper proposing schema optimization to address extraction unreliability. Problem: treating JSON schemas as static contracts leads to suboptimal extraction, hallucinations, and unreliable agent behavior. Solution: dynamic schema optimization for improved extraction fidelity.
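The failure mode this paper targets (a static schema silently accepting bad extractions) can be made concrete with a minimal field-level validator that rejects a payload instead of passing it downstream. This illustrates the problem the paper motivates, not its optimization method; the schema and field names are invented for the example.

```python
# Minimal field-level schema check for extracted document data.
# Illustrative only: the schema below is a toy, not from the paper.
SCHEMA = {
    "invoice_number": str,
    "total": float,
    "currency": str,
}

def validate(extracted: dict) -> list[str]:
    """Return a list of field-level problems; an empty list means valid."""
    problems = []
    for field, expected_type in SCHEMA.items():
        if field not in extracted:
            problems.append(f"missing: {field}")
        elif not isinstance(extracted[field], expected_type):
            problems.append(f"wrong type: {field} "
                            f"({type(extracted[field]).__name__})")
    return problems

# A hallucination-prone extractor may return a string where a number
# belongs, or drop a field entirely; both should route the document to
# review or re-extraction rather than being accepted wholesale.
good = {"invoice_number": "INV-001", "total": 1240.0, "currency": "EUR"}
bad = {"invoice_number": "INV-002", "total": "1,240.00"}
print(validate(good))  # []
print(validate(bad))   # ['wrong type: total (str)', 'missing: currency']
```

Treating the schema as a live checkpoint rather than a static contract is the simplest version of the reliability discipline the paper argues for.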

— Research-backed analysis introducing InduOCRBench, a benchmark specifically designed to evaluate OCR robustness in RAG systems. Demonstrates OCR paradox: high character-level accuracy (CER/WER) doesn't guarantee effective document understanding, showing semantic/structural understanding gap that multimodal approaches address.
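The OCR paradox the benchmark demonstrates is easy to reproduce: character error rate (CER) can be tiny while the document's meaning is destroyed. A toy illustration; the sentences and the plain dynamic-programming edit distance are ours, not taken from the benchmark.

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: edit distance / reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

reference  = "Net revenue for FY2025 was $1,384,000 (up 4% year on year)."
hypothesis = "Net revenue for FY2025 was $7,384,000 (up 4% year on year)."
# → 0.017: under 2% CER, yet the revenue figure is off by $6M
print(round(cer(reference, hypothesis), 3))
```

This is the semantic/structural gap in one line: character-level metrics reward the hypothesis above, while any downstream financial analysis built on it is wrong.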

— Production case study: Sun Finance deployed AWS Textract + Amazon Bedrock + Rekognition for identity verification. Measured outcomes: accuracy 79.7%→90.8%, cost reduction 91%, processing time 20 hours→<5 seconds. Went live January 22, 2026. Demonstrates hybrid OCR+LLM+vector-search pattern at production scale.

— Production research on industrial KYC document extraction using VLMs; multistage pipeline with page-level retrieval achieves 87.27% accuracy on 120 production documents across 3000+ pages; demonstrates page retrieval as dominant factor in multimodal extraction.
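The study's finding that page-level retrieval dominates end-to-end accuracy can be sketched with a toy retriever: locate the most relevant page first, then send only that page to the expensive multimodal extractor. Term-overlap scoring here stands in for the dense retrieval a production pipeline would use; all names and page texts are illustrative.

```python
def score(query_terms: set[str], page_text: str) -> int:
    """Overlap between query terms and a page's tokens (toy retriever;
    production systems use dense embeddings rather than term overlap)."""
    tokens = {t.strip(".,:").lower() for t in page_text.split()}
    return len(query_terms & tokens)

def retrieve_page(query: str, pages: list[str], top_k: int = 1) -> list[int]:
    """Return the indices of the top_k pages most relevant to the query."""
    terms = {t.lower() for t in query.split()}
    ranked = sorted(range(len(pages)),
                    key=lambda i: score(terms, pages[i]),
                    reverse=True)
    return ranked[:top_k]

pages = [
    "Cover letter and table of contents.",
    "Director register: John Smith, appointed 2021, passport AB123456.",
    "Financial statements for the year ended 31 December.",
]
# Only the retrieved page is sent to the multimodal extractor.
print(retrieve_page("director passport number", pages))  # → [1]
```

The economics follow directly: on a 3,000-page corpus, extracting from one retrieved page instead of every page cuts multimodal token spend by orders of magnitude, which is why retrieval quality, not extraction quality, becomes the dominant accuracy factor.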

— Production deployment case study: global investment firm reduced portfolio analysis from one week to hours using Claude multimodal agents to extract images from 100+ page PDFs and interpret visual content (maps, topography).

HISTORY

  • 2024-Q1: AWS Textract and Adobe announce major platform expansions with production deployments and GA launches. Multimodal document understanding moves beyond OCR into vision-language model territory. Academic benchmarks proliferate (MMDocIR, M-LongDoc), signaling research maturity. Cloud provider SDK reliability issues emerge as a deployment bottleneck (Azure 404 errors, latency). Market adoption reported at 1,700+ companies using Textract; survey data suggests 78% of organizations planning multimodal document processing implementation within the year.

  • 2024-Q2: Production deployments accelerate across vendors with named organizational wins (NYSE, Ryanair, Netsmart on Bedrock; UiPath CoE reporting 70K+ hours freed). Academic research shifts from capability benchmarks to rigorous limitation assessment: WONDERBREAD (Stanford) reveals validation gaps in workflow documentation; ICLR 2024 confirms hallucinations and compositionality flaws in open-source LMMs. Industry R&D (JPMorgan DocLLM) explores alternative architectures to improve scaling and reliability. Azure Document Intelligence surfaces architectural scaling limits (rate-capping at 15 TPS POST) and regional reliability issues, signaling vendor platform maturity challenges.

  • 2024-Q3: Named enterprise deployments continue (Deltek's multimodal RAG on AWS, Stride's document automation on UiPath), but operational friction increases. Open-source ecosystem gains traction (Docling 55.8k stars). Academic research identifies domain-specific limitations: GPT-4V fails on medical documents despite apparent expertise. UiPath on-premises deployments encounter cascading failures with dataset scale; Azure Document Intelligence exhibits latency (5-6 seconds per document) and API version instability. Market shows adoption expanding beyond finance/insurance into education and healthcare, but production reliability remains the primary constraint.

  • 2024-Q4: Bloomberg and UNC researchers demonstrate scalable multimodal RAG handling 40K pages with sub-2-second latency, confirming technical capability advancement in financial sector R&D. Baidu releases PP-DocBee with SOTA benchmark results, signaling international vendor competition. Federal sector deployments expand (Precise Software Solutions with 4x productivity gains on Textract). However, Azure Document Intelligence surfaces operational maturity constraints: intermittent validation errors (InvalidContentDimensions) in production, SDK versioning confusion delaying GA adoption, and unresolved developer experience friction. ACL 2024 research establishes consensus on enterprise adoption barriers: data limitations, model calibration issues, and evaluation gaps in field-level performance. Market maturity increases but production reliability and integration friction remain primary adoption constraints.

  • 2025-Q1: Research ecosystem accelerates with BigDocs (7.5M multimodal samples, 15.14% benchmark gains), Docopilot (native document VLM outperforming Gemini 1.5-Pro), and Document Haystack (long-context needle-in-haystack benchmarks). Academic surveys (ACL 2025 Multimodal RAG) signal community maturity. Production deployments show strong ROI: major international bank achieves 85% invoice processing time reduction and $2.1M annual savings with 50K monthly volume. However, Azure Document Intelligence exhibits new reliability issues (intermittent freezes in production); field evidence confirms model limitations in layout sensitivity and hallucinations persist. Market splits between generalist foundation models and specialist document-focused architectures.

  • 2025-Q2: AWS Textract announces June updates for improved low-resolution and complex document handling; vendor momentum sustained across platform improvements. Base64.ai reports multimodal deployments achieving 99.7% accuracy in insurance/pharma applications, confirming enterprise adoption ROI. Academic research intensifies: IJCAI 2025 tutorial formalizes MLLM-driven document understanding curriculum; PLOS ONE and independent RMIT evaluation advance both performance benchmarks (0.88 NDCG multimodal retrieval) and critical assessment of real-world limitations (vendor name/date extraction failures in expense processing). However, Azure Document Intelligence suffers major production outage in June (US East region, InternalServerError on multiple models), reinforcing vendor platform maturity as primary adoption constraint. Specialist models (PP-DocBee-2) demonstrate significant performance gains (11.4% improvement, 73% latency reduction) on domain-specific document types. Adoption velocity driven less by capability advancement and more by vendor operational reliability and domain-specific model selection.

  • 2025-Q3: Ecosystem expansion with Oracle Cloud releasing Document Understanding 2.0 (multilingual support, Label Studio integration), signaling third-tier vendor commitment. Research consensus solidifies: two authoritative surveys (arXiv, ACL 2025) formalize MLLM-based document understanding field maturity and identify efficiency, generalizability, robustness as advancement frontiers. Adoption paradox deepens: SER Group survey reports 78% of organizations claiming document processing implementation, yet critical assessment reveals 61% still rely on paper, 48% expect paper volumes to increase, and most remain rule-based (not true AI). Azure Document Intelligence reliability deteriorates further: September outages in East US (30+ minute processing delays) and West Europe (indefinite hanging on prebuilt models, 100% failure rates in healthcare applications), reinforcing vendor platform maturity as critical bottleneck. Production reliability and organizational integration friction remain primary adoption constraints, not capability.

  • 2025-Q4: Agentic automation emerges as market differentiator: Indico Data processes over 1M pay stubs daily with 90%+ accuracy, replacing hyperscaler solutions at 70%, demonstrating vertical specialization advantage. However, infrastructure readiness gap widens: Apryse survey (December, 465 organizations) reveals 64.5% have AI in production but only 38.1% rate document data as excellent, with 76.6% storing 25-75% of data in documents and 82.8% planning document automation investment. Azure Document Intelligence continues reliability failures: November 2025 production outage with hanging errors on custom models, reinforcing vendor platform constraints. Implementation friction persists: 78% survey adoption claims conflict with reality (61% still paper-dependent, 48% expecting paper volume growth). Market maturity reflected in three-tier vendor ecosystem and specialist model gains, but adoption velocity constrained by infrastructure investment gaps and vendor platform reliability, not capability advancement.

  • 2026-Jan: Deployment momentum accelerates with named enterprise wins and international vendor innovation: Allegis Global Solutions achieves 80-90% success rates using UiPath Document Understanding with agentic automation for constantly changing invoice formats; Oracle integrates Document Understanding into Oracle Integration Cloud with support for invoices, receipts, passports, and custom documents; Document Force releases enhanced models with 200%+ GPQA improvement and 1/3 cost reduction. Independent benchmarking (AIMultiple, 300-document study) confirms Azure Document Intelligence at 96% accuracy on printed text, validating SOTA multimodal LLMs as OCR alternative. Academic research continues validation: SANER 2026 conference paper from Fujitsu demonstrates multimodal LLM success on structural recognition but identifies persistent challenges in semantic interpretation of complex diagrams. However, vendor reliability remains critical constraint: Azure Document Intelligence continues January 2026 production failures with hanging extraction requests causing application downtime, signaling unresolved platform maturity issues despite ecosystem expansion. Market maturity reflected in deployment velocity and international vendor entry, but organizational infrastructure readiness and vendor platform reliability remain primary adoption bottlenecks over capability.

  • 2026-Feb: Research consensus on maturity combined with practical deployment acceleration: Two new comprehensive academic surveys (arXiv) and an industry report document the MLLM-driven document understanding landscape, synthesis of VDR methodologies, and agentic automation market expansion (LlamaIndex 500M+ documents, 90%+ automation; Convr 97% accuracy on insurance). UiPath advances product maturity with Field Groups and Document Understanding API v2; Textract demonstrates production ROI (Associa: 48M documents, unknown-document accuracy 50%→85%). However, February brings continued evidence of Azure platform brittleness: HTTP 500 failures on copied custom models affecting multi-environment deployments; concurrent evidence of capability limitations (French financial documents show 34-62% chart accuracy, multi-turn dialogue failures at 50%, mismatched decoder problem constraining modality understanding). Market bifurcates: agentic specialist vendors (Convr, LlamaIndex, Indico) with vertical focus outperforming generalist platforms; architectural and deployment reliability remain primary adoption constraints over raw model capability.

  • 2026-Mar: Deployment velocity continues with strong ROI validation and methodological advances, while capability limitations become increasingly explicit. New evidence: Fullerton Health (9-market healthcare deployment, 87% field accuracy, 300× efficiency); industry ROI analysis confirms 60-80% cost reduction and 6-18 month payback across financial services and lending (mortgage processing $8.5-28K monthly savings). Academic peer-reviewed benchmarks (CVPR, EACL, IJCAI tracks) formalise maturity: SEA-Vision benchmark reveals critical adoption barrier in low-resource languages (11 languages, pronounced degradation); VAREX (1,777 docs, 20 models) shows schema compliance is below-4B bottleneck; EACL study validates image-only MLLMs match OCR+MLLM pipelines on business documents, reducing infrastructure requirements. However, quality constraints sharpen: FREAK and Reading-Not-Thinking papers document severe hallucination issues and text-as-pixels modality gap (60+ point degradation on math tasks, fixed with self-distillation). ViG-LLM (Amazon Science) addresses privacy-constrained extraction without external OCR. LandingAI achieves 99.16% accuracy on DocVQA with agentic visual grounding. Market signal: agentic architectures winning on complex documents; image-only MLLM pipelines viable for business use; multilingual adoption limited by language resource gaps; infrastructure readiness and vendor reliability remain primary constraints over raw capability.

  • 2026-Apr: Adoption-reality gap widens despite platform maturity. New evidence: AIIM survey (600 enterprises) shows 61% still paper-dependent despite 78% operational AI, 66% IDP tool replacement rate reflecting prior failures. Multi-industry case studies (Quantiva) confirm "95% demo, 60% reality" pattern across financial/media/music—document AI requires composed architecture (classification, extraction, tables, validation) not monolithic tools. Independent benchmarking (BenchLM, April 2026) elevates document understanding (OfficeQA Pro) to essential enterprise capability with Qwen3.6-35B leading at 89.9%, signalling evaluation maturity. Ecosystem standardisation signals emerge: Abbyy's VLM-focused strategy and the DocLang working group (IBM, Red Hat under the Linux Foundation) indicate consolidation toward agentic architectures. Real-world financial-document testing (120 documents, three frontier models) shows 87-89% headline accuracy masking field-level error patterns and confidence calibration failures, confirming production deployment requires model-specific tuning. Platform expansion: Azure AI Search GA for multimodal documents, LandingAI scales healthcare 120K→240K pages daily (60%→90%+ accuracy). Production constraints sharpening: multimodal LLM costs 2-5× text tokens; hybrid OCR+vision pipelines and vision-guided chunking (84.4% retrieval accuracy) emerging as architecture response. Specialist vendors and open-source (Docling 24 models, Studio tooling) outperforming cloud platforms on hard document types; infrastructure readiness and organisational data quality remain primary adoption constraints.

  • 2026-May: Evaluation maturity deepens and deployment accelerates alongside explicit acknowledgment of limitations. New evidence: (1) Docling Linux Foundation donation (March 2026) and Red Hat Summit presence (May 11–13) demonstrate production adoption in aviation, banking, insurance at scale; IBM's 258M-parameter Granite-Docling-258M model signals specialized architecture preference. (2) Academic research confirms production reality: peer-reviewed KYC study (arXiv) shows multistage pipeline achieving 87.27% on 120 real financial documents (3000+ pages), while Amazon Science's Document Haystack benchmark formalizes long-context evaluation gap. (3) Practitioner assessment solidifies: critical analysis documents why LLM-only extraction fails (hallucinations, layout collapse, inconsistent results) and advocates explicit OCR-first/LLM-second hybrid pattern—evidence of architectural consensus post-pilot. (4) Concrete deployment wins: Proxet case study shows global investment firm reduced week-long portfolio analysis to hours using Claude agentic multimodal agents; Sun Finance (live January 2026) achieved 79.7%→90.8% accuracy, 91% cost reduction ($4/doc→$0.36/doc), 20 hours→<5 seconds via Textract+Bedrock+Rekognition hybrid. (5) Schema optimization (Amazon Science PARSE paper) addresses extraction reliability bottleneck—structured extraction still insufficient without semantic validation. Market signal: deployed ROI now proven in financial services and loan origination, evaluation infrastructure mature (Document Haystack, BenchLM benchmarks), but production adoption still constrained by hybrid architecture complexity, schema design burden, and organizational data readiness rather than model capability alone.