The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
Standards, criteria, and risk assessment frameworks for evaluating, procuring, and monitoring third-party AI tools and services. Includes vendor evaluation rubrics and ongoing risk monitoring; distinct from general procurement, which doesn't address AI-specific risks.
AI procurement and vendor risk assessment is the practice of establishing standards, evaluation criteria, and ongoing monitoring frameworks to manage the risks of deploying third-party AI tools and services. As enterprises rapidly adopt generative AI, they face a new category of risk: the vendor itself may be unproven, opaque about its training data, misaligned with governance requirements, or operationally unstable. This practice sits at the intersection of security, compliance, and procurement — applying the vendor risk discipline (common in regulated industries like finance and healthcare) to the novel domain of AI tooling. The core tension is between adoption velocity and risk tolerance: enterprises want to move fast, but vendor risks in AI are still poorly understood.
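The evaluation-criteria side of this practice is often operationalized as a weighted scoring rubric across risk dimensions like data transparency and operational stability. A minimal sketch of that idea follows; the criteria names, weights, and tier cutoffs are illustrative assumptions, not an industry standard:

```python
# Illustrative AI vendor risk rubric. Criteria, weights, and tier
# thresholds are hypothetical examples for demonstration only.
CRITERIA = {
    "training_data_transparency": 0.25,  # does the vendor disclose data provenance?
    "governance_alignment":       0.25,  # fit with internal AI governance requirements
    "operational_stability":      0.30,  # financial viability, SLAs, outage history
    "security_posture":           0.20,  # SOC 2 status, access controls, pen tests
}

def vendor_risk_score(ratings: dict[str, int]) -> float:
    """Weighted risk score from per-criterion ratings (1 = low risk, 5 = high risk)."""
    assert set(ratings) == set(CRITERIA), "every criterion must be rated"
    return sum(CRITERIA[c] * r for c, r in ratings.items())

def risk_tier(score: float) -> str:
    """Bucket the 1-5 weighted score into review tiers (cutoffs are illustrative)."""
    if score >= 3.5:
        return "critical: block pending remediation"
    if score >= 2.5:
        return "high: require contract controls and ongoing monitoring"
    return "acceptable: standard annual review"

ratings = {
    "training_data_transparency": 4,
    "governance_alignment": 3,
    "operational_stability": 2,
    "security_posture": 2,
}
score = vendor_risk_score(ratings)
print(round(score, 2), "->", risk_tier(score))  # 2.75 -> high tier
```

The weighting encodes the velocity-versus-risk-tolerance tradeoff directly: shifting weight toward operational stability, for instance, reflects the vendor viability concerns this practice exists to manage.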
By mid-April 2026, AI procurement and vendor risk assessment had matured into an operationalized discipline with formalized policy, market consolidation, and production deployments demonstrating real efficiency gains. Yet enterprise execution capability remained fragmented, and strategic uncertainty about vendor viability persisted.

Tooling reached clear GA status: OneTrust's AI-Ready Governance Platform offered AI-powered third-party risk assessment automation with 70%+ acceleration in review cycles; Panorays earned Forrester Wave Leader recognition (Q2 2026) for agentic AI capabilities in cybersecurity risk assessment; and market sizing showed VRM software reaching $12.3B in 2025, projected at a 15% CAGR to $39B by 2033.

Production deployments confirmed real value. Pima Community College deployed FortifyData's AI Auditor to cut vendor compliance review from 6-8 hours of analyst time to 1-2 hours per vendor (a 75% reduction); organizations using AI-driven procurement tools showed 3.7x greater resilience to market disruptions (Keelvar survey, March 2026); and global procurement adoption jumped to 73% of organizations piloting or actively scaling (CompanionLink, April 2026), with 43% actively deploying and 12% at large-scale implementation (Hackett Group, March 2026).

Critical market signals, however, revealed hidden fragility. An Ncontracts survey of financial services professionals found AI vendor risk now tied with cybersecurity as the top third-party concern, yet 72% reported only partial awareness of how to manage it, exposing a yawning capability gap between adoption and governance readiness. Practitioner-documented failures underscored real deployment risks: OpenAI's June 2025 eight-hour outage halted operations across thousands of enterprises; the Builder.ai insolvency ($1.3B) locked customers out; and Azure's regional GPT-4 deprecation forced emergency migrations.
Federal procurement policy crystallized with binding vendor obligations: GSA's draft AI procurement clause (GSAR 552.239-7001, April 2026) imposed new disclosure, use-rights, and American AI system requirements on federal contractors, and California Executive Order N-5-26 (March 2026) mandated that state agencies establish AI vendor certification and governance requirements by late July 2026. Yet the regulatory turn revealed structural problems: industry groups warned of unworkable sourcing requirements and conflicting safety-policy mandates, and data readiness emerged as the critical failure point, with 74% of procurement leaders deploying AI despite acknowledging their data was not AI-ready (SpecLens, March 2026).

The core tension persisted and sharpened: procurement scaled AI adoption rapidly and governance frameworks matured, but vendor lock-in risks materialized, data readiness gaps widened, and regulatory requirements imposed conflicting mandates. Vendor viability assurance, integrated governance execution, and proof of measured ROI remained the binding constraints on tier advancement.
— Framework for managing foundation model dependencies in vendor supply chains. Maps NIST AI 600-1 requirements to vendor questionnaire sections and contract controls for AI-specific risk.
— Analysis of real-world data movements across GenAI SaaS: 82% of the top 100 most-used tools classified as medium, high, or critical risk; 39.7% of data flowing into AI tools involves sensitive data. Critical evidence for vendor risk assessment decision-making.
— Datasite case study: three-group sign-off (deal, CISO, compliance) now required; ISO 42001 AI management certification cited as a competitive advantage; red flags include single-vendor lock-in without data isolation.
— Documented incidents of vendor access terminations (Anthropic, OpenAI, Windsurf) with business impact. Demonstrates real vendor risk failure modes and SLA liability gap in AI procurement.
— Performance described as 'jagged', with evaluation gaps versus real-world use. Contract implications: vendor capability claims become actionable warranties; high-risk systems require technical documentation, testing environments, explainability, and audit logs.
— GA vendor risk platform with automated AI document analysis, real-time scanning, and risk scoring in under 60 seconds. Demonstrates vendor tooling maturity for AI-driven assessment automation.
— Specialized AI agent automating vendor risk assessment across SOC 2, questionnaires, and control mapping with 90% time savings (3-5 days → 2-4 hours per vendor). Production evidence of AI agents deployed for procurement workflows.
— KPMG survey of 2,110 C-suite leaders: 95% have an AI strategy, but only 8% achieve measurable ROI. Governance and vendor assessment cited as barriers; weak vendor assessment increases compliance gaps and vendor lock-in risk.
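Several of the sources above (the NIST AI 600-1 mapping and the assessment-automation agents) share a common data shape: framework requirements mapped to vendor questionnaire items and contract controls, with gaps surfaced automatically. A minimal sketch of that shape, using hypothetical requirement IDs and questions rather than actual NIST AI 600-1 clause numbers:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    # IDs, questions, and controls below are illustrative placeholders,
    # not actual NIST AI 600-1 clauses or a real questionnaire.
    req_id: str
    summary: str
    questionnaire_items: list[str] = field(default_factory=list)
    contract_controls: list[str] = field(default_factory=list)

REQUIREMENTS = [
    Requirement(
        req_id="GV-EX-1",
        summary="Vendor discloses training-data provenance",
        questionnaire_items=["Q3.1: Describe the data sources used to train the model."],
        contract_controls=["Warranty: capability and data-provenance claims are accurate."],
    ),
    Requirement(
        req_id="MS-EX-2",
        summary="Vendor supports incident notification",
        questionnaire_items=["Q5.4: What is your outage/incident notification SLA?"],
        contract_controls=["SLA: notify within 24h of a model or service incident."],
    ),
]

def coverage_gaps(answered: set[str]) -> list[str]:
    """Return IDs of requirements whose questionnaire items were not all answered."""
    return [
        r.req_id
        for r in REQUIREMENTS
        if not all(q in answered for q in r.questionnaire_items)
    ]

answered = {"Q3.1: Describe the data sources used to train the model."}
print(coverage_gaps(answered))  # ['MS-EX-2']
```

Keeping the requirement-to-control mapping as structured data is what lets agents automate the tedious part (matching vendor evidence to questionnaire items) while leaving the risk judgment on flagged gaps to human reviewers.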