Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
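The page doesn't publish the formula behind each dot, but the idea of a "weighted maturity" can be sketched as mapping each practice's tier to a number and weighting by how much evidence backs it. The tier values, the evidence-count weighting, and the `domain_maturity` helper below are all illustrative assumptions, not the index's actual methodology.

```python
# Hypothetical sketch: place a domain's dot on a 0.0 (research) to 1.0
# (established) axis. Tier values and evidence-count weighting are
# assumptions for illustration only.
TIER_VALUE = {"Research": 0.0, "Bleeding Edge": 0.33, "Established": 1.0}

def domain_maturity(practices):
    """practices: list of (tier_name, evidence_count) pairs.

    Returns the evidence-weighted mean of the tier values, so heavily
    evidenced practices pull the dot harder than thinly evidenced ones.
    """
    total_evidence = sum(count for _, count in practices)
    weighted = sum(TIER_VALUE[tier] * count for tier, count in practices)
    return weighted / total_evidence

# A domain with one well-evidenced bleeding-edge practice and one
# research-stage practice sits near the bleeding-edge end of the axis.
pos = domain_maturity([("Bleeding Edge", 73), ("Research", 12)])
```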

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED


AI procurement & vendor risk assessment

BLEEDING EDGE

TRAJECTORY

Stalled

Standards, criteria, and risk assessment frameworks for evaluating, procuring, and monitoring third-party AI tools and services. Includes vendor evaluation rubrics and ongoing risk monitoring; distinct from general procurement, which doesn't address AI-specific risks.

OVERVIEW

AI procurement and vendor risk assessment is the practice of establishing standards, evaluation criteria, and ongoing monitoring frameworks to manage the risks of deploying third-party AI tools and services. As enterprises rapidly adopt generative AI, they face a new category of risk: the vendor itself may be unproven, opaque about its training data, misaligned with governance requirements, or operationally unstable. This practice sits at the intersection of security, compliance, and procurement — applying the vendor risk discipline (common in regulated industries like finance and healthcare) to the novel domain of AI tooling. The core tension is between adoption velocity and risk tolerance: enterprises want to move fast, but vendor risks in AI are still poorly understood.
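A vendor evaluation rubric of the kind described above can be made concrete as a weighted scorecard. The dimensions, weights, and 1-5 rating scale below are hypothetical, a minimal sketch of the pattern rather than any specific framework named in this index.

```python
# Hypothetical vendor risk rubric: dimension names and weights are
# illustrative assumptions, not a published standard.
WEIGHTS = {
    "training_data_transparency": 0.25,
    "governance_alignment": 0.25,
    "operational_stability": 0.30,
    "security_posture": 0.20,
}

def vendor_risk_score(ratings):
    """Combine 1-5 analyst ratings (5 = lowest risk) into a weighted score.

    Fails loudly on incomplete questionnaires so a missing dimension
    can't silently skew the result.
    """
    missing = WEIGHTS.keys() - ratings.keys()
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    return sum(WEIGHTS[dim] * ratings[dim] for dim in WEIGHTS)

score = vendor_risk_score({
    "training_data_transparency": 2,  # vendor opaque about training data
    "governance_alignment": 4,
    "operational_stability": 3,
    "security_posture": 5,
})
```

A real rubric would feed scores like this into gating thresholds (e.g. below some cutoff, escalate to the CISO sign-off the Datasite evidence item describes), but the thresholds themselves are policy choices, not something the sketch can supply.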

CURRENT LANDSCAPE

By mid-April 2026, AI procurement and vendor risk assessment had matured into an operationalized discipline with formalized policy, market consolidation, and production deployments demonstrating real efficiency gains—yet enterprise execution capability remained fragmented and strategic uncertainty about vendor viability persisted.

Tooling maturity reached clear GA status: OneTrust's AI-Ready Governance Platform offered AI-powered third-party risk assessment automation with 70%+ acceleration in review cycles; Panorays earned Forrester Wave Leader recognition (Q2 2026) for agentic AI capabilities in cybersecurity risk assessment; market sizing showed VRM software reached $12.3B in 2025, projected at 15% CAGR to $39B by 2033.

Production deployments confirmed real value: Pima Community College deployed FortifyData's AI Auditor to reduce vendor compliance review from 6-8 hours per analyst to 1-2 hours per vendor (75% reduction); organizations using AI-driven procurement tools showed 3.7x greater resilience to market disruptions (Keelvar survey, March 2026); global procurement adoption jumped to 73% piloting or actively scaling (CompanionLink, April 2026), with 43% of organizations actively deploying and 12% at large-scale implementation (Hackett Group, March 2026).

However, critical market signals revealed hidden fragility: an Ncontracts survey of financial services professionals found AI vendor risk now tied with cybersecurity as the top third-party concern, yet 72% reported only partial awareness of how to manage it—exposing a yawning capability gap between adoption and governance readiness. Practitioner-documented failures highlighted real deployment risks: OpenAI's June 2025 8-hour outage halted operations across thousands of enterprises; Builder.ai's insolvency ($1.3B) locked customers out; Azure GPT-4 regional deprecation forced emergency migrations.

Federal procurement policy crystallized with binding vendor obligations: GSA's draft AI procurement clause (GSAR 552.239-7001, April 2026) imposed new disclosure, use rights, and American AI system requirements on federal contractors; California Executive Order N-5-26 (March 2026) mandated state agencies establish AI vendor certification and governance requirements by late July 2026.

Yet the regulatory turn revealed structural problems: industry groups warned of unworkable sourcing requirements and conflicting safety policy mandates; data readiness emerged as the critical failure point, with 74% of procurement leaders deploying AI despite acknowledging their data was not AI-ready (SpecLens, March 2026). The core tension persisted and sharpened: procurement scaled AI adoption rapidly and governance frameworks matured, but vendor lock-in risks materialized, data readiness gaps widened, and regulatory requirements imposed conflicting mandates. Vendor viability assurance, integrated governance execution, and proof of measured ROI remained binding constraints on tier advancement.

TIER HISTORY

Research         Jun-2024 → Oct-2024
Bleeding Edge    Oct-2024 → present

EVIDENCE (73)

— Framework for managing foundation model dependencies in vendor supply chains. Maps NIST AI 600-1 requirements to vendor questionnaire sections and contract controls for AI-specific risk.

— Analysis of real-world data movements across GenAI SaaS: 82% of the top 100 most-used tools classified as medium/high/critical risk; 39.7% of data flowing into AI tools involves sensitive data. Critical evidence for vendor risk assessment decision-making.

— Datasite case study: Three-group sign-off (deal, CISO, compliance) now required; ISO 42001 AI management certification competitive advantage; red flags include single-vendor lock-in without data isolation.

— Documented incidents of vendor access terminations (Anthropic, OpenAI, Windsurf) with business impact. Demonstrates real vendor risk failure modes and SLA liability gap in AI procurement.

— Model performance remains 'jagged', with evaluation gaps versus real-world use. Contract implications: vendor capability claims become actionable warranties; high-risk systems require technical docs, testing environments, explainability, and audit logs.

— GA vendor risk platform with automated AI document analysis, real-time scanning, and risk scoring under 60 seconds. Demonstrates vendor tooling maturity for AI-driven assessment automation.

— Specialized AI agent automating vendor risk assessment across SOC2, questionnaires, and control mapping with 90% time savings (3-5 days → 2-4 hours per vendor). Production evidence of AI agents deployed in procurement workflows.

— KPMG survey of 2,110 C-suite leaders: 95% have AI strategy, 8% achieve measurable ROI. Governance and vendor assessment cited as barriers; weak vendor assessment increases compliance gaps and vendor lock-in risk.

HISTORY

  • 2024-Q2: Early vendor risk assessment frameworks emerging in healthcare; GRC platforms beginning to integrate AI-specific third-party risk intelligence; federal procurement struggling with pace-of-change and vendor transparency gaps.
  • 2024-Q3: Structured third-party AI assessment guidance formalized by IAPP and enterprise vendors; government procurement pilots showing early productivity gains; widening evidence of vendor tool quality gaps and customer dissatisfaction highlighting real risks in vendor selection.
  • 2024-Q4: AI procurement platforms reaching production scale with major enterprise deployments (Fairmarkit, Globality, Beroe); 94% adoption across procurement teams but only 35% reporting high impact; vendor risk management frameworks published by major firms (Debevoise, Aon); regulatory landscape solidifying (EU AI Act enforcement) but vendor transparency and standardized assessment criteria remain fragmentary.
  • 2025-Q1: Industry standardization accelerates with Data & Trusted AI Alliance VAF framework providing shared language for vendor risk and value assessment; dedicated vendor risk tooling expands (OneTrust document scanning, OneTrust AI platform); critical evaluation frameworks and skepticism emerge over ROI sustainability and vendor transparency challenges.
  • 2025-Q2: Vendor risk tooling matures with OneTrust spring release and proliferating practitioner frameworks (FS-ISAC, AIGL, ETA); federal policy shifts pro-innovation stance (M-25-21/22); Builder.ai collapse ($1.3B vendor insolvency) demonstrates supply-chain fragility; Deloitte survey shows early-stage adoption with hybrid approaches; critical analyses document hype-cycle downsides and adoption barriers (integration costs, legacy systems, expertise gaps); gap widens between framework standardization and enterprise implementation capability.
  • 2025-Q3: Procurement AI adoption accelerates with Conduent deploying Fairmarkit; 50% of procurement teams using AI but 95% of pilots fail production (Gartner); large-firm AI adoption declines 14%→12% amid ROI challenges; US DOJ revamps procurement with cross-functional vendor vetting; Builder.ai fraud documented ($450M); recalibration evident as organizations struggle with integration complexity, vendor viability, and measured returns.
  • 2025-Q4: Adoption breadth masks maturity gap: 100% of procurement leaders implemented AI but only 6% achieved advanced maturity (ProcureAbility); 80% saw no material GenAI ROI contribution (McKinsey); governance gap widens—81% lacking central control over vendor/AI tools. McKinsey survey of 300+ leaders highlights potential 25-40% efficiency gains but Gartner data shows 30% projects abandoned post-PoC. Government and enterprise frameworks mature (VAF, FS-ISAC, NIST AI RMF) but traditional procurement methods fail for probabilistic AI systems—new vendor assessment approaches emerging (Optiv, OMB M-25-15). Core tension sharpens: adoption velocity vs. governance capability and vendor viability assurance.
  • 2026-Feb: Vendor risk assessment practice matures as formalized discipline with GA tooling (OneTrust AI-Ready Governance Platform + Fall 2025 Third-Party Risk Agent, enterprise frameworks); government procurement signals new vendor risk standards (DoW AI model parity mandate, GSA/Anthropic de-risking); production deployments confirm tooling value (Pima Community College 75% efficiency gain). However, NBER survey reveals critical ROI gap: 80%+ firms report zero measurable AI impact despite 69% adoption, undermining vendor value claims. Procurement shift accelerates: AI becomes top-3 strategic priority (Hackett Group), but only 11% of organizations report deployment readiness (ProcureAbility). Fundamental tension sharpens: vendor viability verification, governance execution capability, and proof of ROI remain binding constraints.
  • 2026-Apr: Vendor risk assessment tooling reaches clear market maturity with new agentic capabilities. VRM market sizing $12.3B (2025)→$39B (2033); Panorays earns Forrester Wave Leader (agentic AI scores); UpGuard and V7 Go launch GA agentic vendor risk agents (90% time savings on assessments). Adoption accelerates to 73% piloting or scaling, 43% actively deploying. Production deployments scale: Pima CC 75% efficiency gain, procurement AI teams 3.7x more resilient to disruptions. Vendor selection criteria shift fundamentally: Stanford HAI 2026 AI Index shows capability parity across frontier models (all meet 95%+ of business requirements), collapsing performance-based differentiation; model transparency index fell from 58→40 in one year. Cyberhaven analysis finds 82% of top 100 most-used GenAI SaaS classified as medium/high/critical risk; 39.7% of data flows involve sensitive data. Supply-chain risk frameworks formalize: NIST AI 600-1 requirements now mapped to vendor questionnaire sections and contract controls for foundation model dependency management. Procurement evaluation gap identified: traditional RFP checklists and uptime SLAs fail for probabilistic systems; enterprises shift to 'bring-your-own-eval' methodologies with distributional scoring and model-change notification contracts. Enterprise procurement standards crystallize: three-group sign-off (deal, CISO, compliance) now required; ISO 42001 AI management certification table-stakes; data isolation and governance documentation first-round gating criteria. Policy accelerates: GSA draft clause GSAR 552.239-7001 imposes binding vendor obligations; California EO N-5-26 mandates state AI vendor certification; EU AI Act (Aug 2026) enforcement begins; export control enforcement escalates (Applied Materials $252M settlement, Super Micro indictment $2.5B). 
    Vendor viability risk documented: Anthropic, OpenAI, Windsurf cases show unilateral access terminations with no appeals process and zero liability for downstream business losses. Critical capability gap persists: Ncontracts survey finds AI vendor risk parity with cybersecurity as top concern, yet 72% report only partial governance readiness; KPMG finds 95% have AI strategy but only 8% achieve measurable ROI; Forrester/Hackett show 69% confident in AI vision vs. 31% in execution. Binding constraints remain: vendor viability assurance, integrated governance capability across deal lifecycle, measured ROI proof, and data readiness (74% deploy despite acknowledging data unreadiness).
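The 'bring-your-own-eval' shift with distributional scoring mentioned in the 2026-Apr entry can be sketched simply: because vendor model outputs are probabilistic, a single pass/fail check misleads, so each test case is run repeatedly and the pass-rate distribution is scored. Everything below is an illustrative assumption; `call_vendor_model` stands in for whatever API the vendor under evaluation actually exposes.

```python
import statistics

def distributional_score(call_vendor_model, cases, n_trials=20):
    """Score a probabilistic vendor model for an RFP scorecard.

    cases: list of (prompt, accept_fn) pairs, where accept_fn returns
    True if an output is acceptable. Each case is run n_trials times
    and scored by pass rate rather than a single deterministic check.
    """
    pass_rates = []
    for prompt, accept in cases:
        passes = sum(accept(call_vendor_model(prompt)) for _ in range(n_trials))
        pass_rates.append(passes / n_trials)
    return {
        "per_case": pass_rates,
        "mean": statistics.mean(pass_rates),
        "worst_case": min(pass_rates),  # gate on worst case, not the average
    }

# Toy deterministic stand-in so the sketch runs end to end.
cases = [
    ("2+2", lambda out: out == "4"),
    ("capital of France", lambda out: "Paris" in out),
]
fake_model = lambda prompt: {"2+2": "4", "capital of France": "Paris"}[prompt]
report = distributional_score(fake_model, cases, n_trials=5)
```

Gating on `worst_case` rather than `mean` reflects the procurement concern in the text: an average that looks fine can hide a test case the vendor fails most of the time. Model-change notification contracts then trigger a re-run of the same suite.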