Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

AI acceptable use policy development

GOOD PRACTICE

TRAJECTORY

Advancing

Development of organisational policies governing acceptable use of AI tools and systems by employees and contractors. Includes policy template development and use-case approval frameworks; distinct from AI regulatory compliance, which targets external rather than internal governance.
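A use-case approval framework of the kind described above can be sketched as a small decision table. This is a hedged illustration only: the categories, data-sensitivity labels, and the default-to-review rule are assumptions for the sketch, not drawn from any named standard or vendor product.

```python
# Hypothetical use-case approval table: maps (category, data sensitivity)
# to a decision. All entries are illustrative assumptions.
APPROVAL_RULES = {
    ("drafting", "public"): "approved",
    ("drafting", "confidential"): "needs-review",
    ("code-generation", "public"): "approved",
    ("hiring-decisions", "personal"): "prohibited",
}

def decide(category: str, sensitivity: str) -> str:
    """Look up a proposed use case; anything not explicitly listed
    defaults to human review rather than silent approval."""
    return APPROVAL_RULES.get((category, sensitivity), "needs-review")

print(decide("drafting", "public"))            # approved
print(decide("hiring-decisions", "personal"))  # prohibited
print(decide("image-generation", "public"))    # needs-review (unlisted)
```

The deny-by-default lookup reflects the enforcement gap the index keeps flagging: a policy that silently approves anything unlisted is the "memo, not a policy" failure mode.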

OVERVIEW

AI acceptable use policy development has crossed from experimental into proven territory. The practice — defining what employees and contractors may and may not do with AI tools — now has GA tooling, international standards (ISO 42001), dedicated vendor platforms, and regulatory mandates in multiple jurisdictions. The question facing most organisations is no longer whether to create an AUP but how to make one that actually works in practice. That distinction matters, because a persistent governance-to-deployment gap defines the field: organisations adopt AI far faster than they operationalise the policies meant to govern it. Gartner projects 80% of organisations will have formalised AI policies by the end of 2026, yet surveys consistently show fewer than half back those policies with functioning risk frameworks or enforcement mechanisms. The structural challenge has shifted from policy creation to policy integration — standalone AUPs designed for ChatGPT-era tools are becoming obsolete as AI embeds into enterprise platforms. Organisations that treat AUP development as a one-time compliance exercise rather than a continuous governance function risk both regulatory exposure and unchecked shadow AI use.

CURRENT LANDSCAPE

U.S. state AI laws that took effect on 1 January 2026 — California's Transparency in Frontier AI Act, Texas's Responsible AI Governance Act, and Illinois employment-discrimination rules — have turned AUP development from a best-practice recommendation into a compliance obligation for many organisations. The EU AI Act timeline, accelerating toward August 2026, adds further pressure. Yet regulation is outpacing implementation. Gallagher's 2026 survey found that 63% of organisations have operationalised AI systems, but fewer than 47% have formal risk management frameworks, incident response plans, or ethical impact assessments. Shadow AI compounds the problem: UpGuard data shows 80% of workers using unapproved tools, rendering paper policies ineffective.

The vendor ecosystem has matured accordingly. Credo AI leads Forrester's Wave for AI governance with production deployments across Fortune 100 firms; Microsoft holds ISO 42001 certification for Copilot; and the Credo AI-Carahsoft partnership has opened governance tooling to U.S. public-sector procurement. Gartner projects the governance-tooling market will grow from $309M to $4.8B by 2034. But tooling alone does not close the gap. In the public sector, 70% of civil servants use AI while only 18% consider their government's governance effective. Enterprise governance roles remain scarce at smaller firms — 36% have them, compared with 59% at large enterprises. The result is a sharply bifurcated landscape: well-resourced organisations operationalising standards-aligned governance with measurable efficiency gains, while mainstream firms remain caught between policy awareness and operational enforcement.

TIER HISTORY

Research: Mar-2023 → Mar-2023
Bleeding Edge: Mar-2023 → Jan-2025
Leading Edge: Jan-2025 → Jul-2025
Good Practice: Jul-2025 → present

EVIDENCE (85)

— Cyberhaven Labs analyzed billions of data movements across GenAI tools showing top 1% early adopters using 300+ tools vs cautious enterprises using <15—revealing extreme adoption divergence.

— iManage Knowledge Work Benchmark: 85% of organizations at some stage of AI adoption, but maturity splits sharply—only 27% fully integrated, with 36% experiencing policy violations.

— iSHIR assessment: 70% report piloting AI but fewer than 20% scaled to enterprise—policy positioned as the critical blocking issue between experimentation and production deployment.


— U.S. General Services Administration published comprehensive AI strategies and compliance plan, establishing federal procurement and governance expectations for contractor AI deployment.

— Stanford HAI 2026 AI Index: policy adoption improved (11% with no policy vs 24% prior), but incidents rose to 362 in 2025. ISO 42001 cited by 36% of organizations as governance influence.

— ProGEO.ai AIMM Index of 112 marketing professionals shows 76.8% have corporate AUP but only 43.8% enforce with technical controls—documenting critical policy-enforcement gap.

— Keep Aware analysis: 75% of knowledge workers use AI daily but most organizations have policies without enforcement—'what exists is not a true policy but a memo.' Cites visibility and control gaps.

— Airbnb implemented platform-wide AUP banning AI-generated evidence in response to documented fraud case (Manhattan superhost with fabricated damage claims), demonstrating real-world policy enforcement at scale across 12M+ listings.

HISTORY

  • 2023-H1: Emergence of corporate AI acceptable use policies driven by ChatGPT adoption and data privacy concerns. Legal firms publish policy templates; educational institutions lead sectoral adoption. Fewer than 50% of organisations have formal AI governance; surveys show strong demand for policy guidance and human oversight mechanisms. Multiple governance frameworks (ISO 42001, NIST AI RMF) published to support policy development. Real-world deployment failures (cybersecurity, IP leakage, accuracy issues) underscore policy necessity.

  • 2023-H2: Critical adoption-policy gap widens: 56-88% of employees use GenAI but only 8-28% of organisations have formal policies, with educational sector most immature (8% adoption). ISO/IEC 42001 published as first international AI management standard; however, awareness and implementation capacity remain limited. Security risks (misinformation, unauthorised use, IP leakage) drive urgency. Policy development remains concentrated in well-resourced and regulated sectors; mainstream adoption blocked by skills gaps and uncertainty about enforcement mechanisms.

  • 2024-Q1: Regulatory pressure increases with DOJ mandate to assess AI risks in corporate compliance programmes, signalling enforcement implications. Sectoral gaps persist: 79% of K-12 schools still lack clear policies. Board-level engagement remains weak (12% have held substantive AI discussions). Vendor ecosystem matures with Credo AI GRC platform launch; practitioner frameworks evolve linking NIST standards to policy development. Critical assessments of governance gaps (Amnesty International) highlight human rights blindspots. Overall adoption trajectory remains slow despite regulatory momentum.

  • 2024-Q2: Federal government mandates AI governance across agencies (OMB M-24-10), creating procurement leverage for policy adoption including chief AI officers and use-case inventories. Vendor ecosystem consolidates towards sector-specific controls frameworks (IBM Financial Services) and practical guidance (Google Cloud). Practitioner frameworks standardise around governance: ITIF publishes policy-response taxonomy; IAPP/FTI release maturity assessments. Industry surveys document persistent governance gaps—adoption outpaces implementation—despite regulatory acceleration and maturing vendor tooling. Sectoral disparities persist; adoption-policy misalignment remains structural barrier to maturity.

  • 2024-Q3: ISO 42001 ecosystem matures with 15 accredited certification bodies by August, signalling standardization progress. Vendor tools consolidate with Credo AI governance advisory services and Google/Microsoft content filtering suites. Higher education adoption remains weak: 31% of college students uncertain about AI policies, only 16% cite institutional guidance. Critical enforcement gap emerges: 80% of organisations deploy guidelines but 60% of employees bypass them, highlighting need for compliance mechanisms beyond policy documents. Practitioner guidance advances with sector-specific frameworks (PRSA PR, compliance consultancies), but shadow AI and non-compliance remain systemic challenges.

  • 2024-Q4: Regulatory enforcement deepens: DOJ updates compliance guidance to explicitly require AI risk assessment and governance integration, creating concrete compliance program implications. C-suite policy adoption accelerates: 44% of executives report organisational GenAI policies (4.4x growth from 2023). Functional area fragmentation emerges: only 60% of HR departments have AUPs despite 94% of HR professionals using AI. Enterprise governance gaps persist: Deloitte survey shows 58% deployment of GenAI but 21–41% have zero controls; only 47% confident in governance adaptation. Critical practitioner assessment: academics highlight platform-integration challenge—AI policies designed for standalone tools become obsolete as AI embeds into existing platforms (Adobe, Google, Microsoft), requiring architectural shift in policy design. Standards ecosystem matures with ISO 42001 certification availability, but adoption remains concentrated in regulated sectors.

  • 2025-Q1: Policy adoption accelerates globally with board-level engagement rising (Glass Lewis: 40% of European large-caps have formal policies), but implementation deficits widen. Pacific AI survey quantifies the gap: 75% report having policies but only 59% have governance roles; Boardspan shows boards self-grade AI oversight at C-, lowest-scoring governance topic. Negative-signal evidence dominates: 63% lack adequate governance frameworks, with major failures averaging $4.2M in costs. Large enterprises operationalize governance at scale (IBM case study: 58–62% reduction in data clearance times across 1000+ datasets). Platform-integration challenge continues as policies designed for standalone tools become obsolete amid AI embedding into enterprise platforms (Adobe, Google Workspace, Microsoft 365). Landscape bifurcates: regulated sectors and well-resourced enterprises advance maturity; mainstream organizations trapped in policy-implementation gap.

  • 2025-Q2: Vendor ecosystem matures with standards alignment: Microsoft achieves ISO/IEC 42001 certification for Copilot (April); Credo AI/IBM integrate policy automation into watsonx.governance (April). Functional area fragmentation worsens: marketing teams show 63% lack policies; only 27% of organizations review AI-generated outputs before use. Governance process maturity remains low: only 23% have standardized AI intake processes, 36% use manual spreadsheets. Pacific AI survey (June) reconfirms gap: 75% have policies but 59% lack governance roles, 54% lack incident response. Adoption barriers persist (Deloitte): compliance and regulatory requirements impede AI deployment despite policy availability. Platform-integration challenge continues: standalone-tool policies becoming obsolete as AI embeds into enterprise platforms. Bifurcated landscape deepens: enterprises with standards-aligned governance advance; mainstream organizations lag in implementation despite high policy awareness.

  • 2025-Q3: Governance tool ecosystem accelerates: Credo AI named Forrester Wave leader with highest marks in AI policy management and compliance workflows; enterprise adoption spreads across Fortune 100 financial services, restaurant, and MedTech sectors. Healthcare and specialized vendors operationalize production-level AUPs (John Snow Labs: risk-based prohibited uses with cross-functional governance). Comprehensive synthesis of Q3 studies documents persistent maturity crisis: 93% of companies use AI but only 7% have fully embedded governance frameworks; 72% lack company-wide responsible use policies; 62% lack documented governance plans. Governance execution remains severely constrained: only 30% have production AI systems deployed, 48% lack monitoring, pressure to move fast remains top barrier. Small enterprises drastically lag large firms in governance capability (36% have governance roles vs. 59%+ at large enterprises). Negative-signal dominance continues: governance tool adoption and policy automation accelerating in sophisticated enterprises while mainstream organizations remain trapped in policy-implementation gap, with deep chasm between stated policies and operational capacity.

  • 2025-Q4: Enterprise AI adoption accelerates to 80% while governance implementation lags sharply: 78% use AI but only 25% have fully implemented governance programs. Board-level engagement remains structurally weak (32% have AI committees, 12% have risk frameworks). Compliance function under acute time pressure: 47% of compliance leaders cite time as barrier despite regulatory mandates. Vendor ecosystem signals strong market growth: Gartner projects $309M (2025) to $4.8B (2034) market expansion; Credo AI reports 150% customer growth with 70% faster use-case reviews. Negative evidence dominates: 97% of orgs with AI breaches lacked access controls; 63% lack formal AUPs despite deployment; critical governance failures documented in ethics research. Process maturity remains low: only 23% have standardized intake processes, 36% use manual spreadsheets. Capability gap widens: governance role adoption 36% at SMEs vs. 59%+ at large firms. Bifurcated landscape deepens into year-end: well-resourced enterprises operationalizing standards-aligned governance with measurable ROI; mainstream organizations with nominal policies but sparse operational enforcement.

  • 2026-Jan: U.S. state AI laws become effective January 1 (CA Transparency Act, TX Responsible AI Act, IL employment discrimination rules), creating immediate AUP compliance drivers. Regulatory environment converges toward 'governance beyond principles to practice' with Hiroshima Global Forum signaling shift to operational standards (AI Safety Institutes as coordination nodes, continuous governance for agentic systems). ISO 42001 ecosystem reports surge in organizational interest; Schellman audit firm documents steady stream of AUP preparation questions from enterprises building formal governance programs. Public sector adoption expands: Credo AI-Carahsoft partnership makes governance tooling available to federal/state/local agencies via procurement vehicles. Industry analysis confirms pilot-to-production gap stems from governance confidence, not model limitations, driving rapid enterprise spending shift to GRC capabilities and dedicated roles. IE University framework standardizes seven core AUP components (inventory, risk classification, ownership, lifecycle controls, documentation, monitoring, auditability), providing operational blueprint for organizations in preparation phase. Structural bifurcation persists: sophisticated enterprises operationalizing standards-aligned governance with measurable efficiency gains; mainstream and SME organizations remain constrained by policy-implementation gaps and sparse enforcement capacity.

  • 2026-Feb: Regulatory environment and policy formalization accelerate: Gartner projects 80% of organizations will formalize AI policies by 2026; Insentra reports steady policy adoption momentum. Gallagher survey documents paradox—63% operationalized AI but less than 47% have formal risk frameworks, incident response, or ethical assessments. Public sector governance gaps widen: 70% of civil servants use AI but only 18% rate government governance effective, indicating enforcement failures despite adoption. Shadow AI remains endemic at scale: 80% of workers use unapproved tools. Governance-innovation gap persists as critical barrier: 83% of AI leaders express major concern about governance, but only 26% advance beyond pilot stage, pointing to confidence deficits rather than capability constraints. Platform-integration challenge continues as AUPs become obsolete amid AI embedding into enterprise systems (Adobe, Google Workspace, Microsoft). Bifurcated landscape deepens: well-resourced and regulated sectors operationalize ISO-aligned governance with dedicated roles and measurable efficiency; mainstream and SME organizations trapped in policy-awareness-to-implementation gap with weak enforcement mechanisms.

  • 2026-Apr: Sector expansion accelerates: Wisconsin's Department of Public Instruction published detailed K-12 AUP frameworks and Airbnb operationalized a platform-wide AUP banning AI-generated damage-claim evidence across 12M+ listings, demonstrating enforcement at scale. Irish financial-services firms reported governance framework adoption doubled year-over-year to 16%; Harvard Law School issued board-level guidance integrating AUP development into compliance obligations for regulated sectors. The adoption-implementation gap persists: a multi-country survey of 6,500+ respondents shows 65% AI adoption but 58% lack security/privacy training, 57% of US SMEs use AI while 77% lack a formal AUP, and iManage's global benchmark of 85% organizational AI adoption found 36% experiencing policy violations — confirming that policy existence continues to lag far behind enforcement capacity.