Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

Pick a role above to explore practices

BLEEDING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

LEADING EDGE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

GOOD PRACTICE

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🔬 RESEARCH & KNOWLEDGE
⚖️ LEGAL, COMPLIANCE & RISK
🎧 CUSTOMER OPERATIONS
🏛️ AI GOVERNANCE & SAFETY
📊 DATA & ANALYTICS
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💼 SALES & REVENUE
🎬 CREATIVE & GENERATIVE MEDIA
👁️ COMPUTER VISION & SENSING
💹 FINANCE & ACCOUNTING
🔄 OPERATIONS & PROCESS AUTOMATION
👥 PEOPLE & TALENT
🚗 AUTONOMOUS SYSTEMS & VEHICLES
🦾 PHYSICAL AI & ROBOTICS
🎓 EDUCATION & LEARNING
PERSONAL EFFECTIVENESS

ESTABLISHED

⌨️ SOFTWARE ENGINEERING
✍️ CONTENT & MARKETING
🛡️ IT OPERATIONS & SECURITY
🎯 PRODUCT & DESIGN
💹 FINANCE & ACCOUNTING
👥 PEOPLE & TALENT

🏛️ AI Governance & Safety

Practices for evaluating, governing, and ensuring the responsible deployment of AI systems. Deeply polarised: model evaluation and bias auditing are good practice, but more than a third of the domain is bleeding edge — alignment research, interpretability, and AI safety benchmarking lack production-grade tooling. Regulatory pressure is accelerating adoption of the mature practices while the frontier remains largely academic.

22 practices: 6 good practice, 8 leading edge, 8 bleeding edge

Where AI Stands in AI Governance & Safety

AI governance is now defined by a single uncomfortable fact: enforcement has arrived before readiness. The EU AI Act's high-risk deadline lands in August 2026 with penalties reaching 7% of global revenue. The first US federal AI law (AI Accountability Act, passed Senate 67-33 in March 2026) imposes 4% revenue penalties on systems affecting more than 10,000 people annually. Finland has activated full market surveillance. Italy's AGCM imposed interim measures on Meta. Colorado's enforcement begins June 2026. These are no longer theoretical obligations -- they carry quantified financial exposure running to tens of billions for the largest technology firms.

Yet the organisations subject to these mandates remain structurally unprepared. Only 25% of enterprises have implemented AI governance frameworks. Eighty-three percent lack AI system inventories. Fifty-eight percent of compliance professionals operate at basic or dependent maturity levels, relying on manual spreadsheets. Stanford's 2026 AI Index documents a transparency collapse: the Foundation Model Transparency Index fell from 58 to 40 out of 100 year-over-year, with 80 of 95 foundation models released in 2025 lacking training code or parameter disclosure. The gap between regulatory expectation and organisational capability is not closing -- it is widening as the deadlines approach.

The domain's maturity profile splits cleanly into two tiers. Foundational governance practices -- acceptable use policies, human oversight mechanisms, model interpretability, production drift monitoring -- have crossed into proven territory where tooling is commoditised, vendor support is universal, and the question is operational execution rather than technical feasibility. The majority of practices, however, remain at the frontier: regulatory compliance, adversarial testing, procurement risk assessment, data governance, incident tracking, and responsible AI training all have capable tooling but negligible organisational adoption. The binding constraint across the entire domain is not technology but institutional capacity -- the ability to inventory, classify, govern, and continuously monitor AI systems at the pace they are being deployed. Organisations that treated governance as a 2027 problem discovered in April 2026 that it is already a 2026 problem.
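
What an AI system inventory actually has to capture is less exotic than the adoption numbers suggest. The sketch below is a minimal, illustrative schema — the field names, risk tiers, and review threshold are assumptions made for this example, not taken from the EU AI Act text or any vendor product — but it shows the shape of the record an organisation needs for every system before classification, governance, or continuous monitoring can even begin.

    # Illustrative only: field names and risk tiers are assumptions for this sketch,
    # not drawn from the EU AI Act text or any specific governance product.
    from dataclasses import dataclass
    from datetime import date
    from enum import Enum
    from typing import Optional

    class RiskTier(Enum):
        UNCLASSIFIED = "unclassified"
        MINIMAL = "minimal"
        LIMITED = "limited"
        HIGH = "high"
        PROHIBITED = "prohibited"

    @dataclass
    class AISystemRecord:
        name: str
        owner: str                    # accountable business owner
        purpose: str                  # intended use, in plain language
        vendor: Optional[str]         # None for systems built in-house
        risk_tier: RiskTier
        in_production: bool
        last_review: Optional[date]   # most recent governance review, if any

    def needs_attention(record: AISystemRecord, today: date, max_age_days: int = 180) -> bool:
        """Flag records that cannot yet be governed: unclassified systems,
        or production systems whose last review is missing or stale."""
        if record.risk_tier is RiskTier.UNCLASSIFIED:
            return True
        if record.in_production:
            if record.last_review is None:
                return True
            if (today - record.last_review).days > max_age_days:
                return True
        return False

Even at this level of simplicity, the hard part is institutional rather than technical: populating and maintaining such a register across hundreds of deployed systems is exactly the capacity the 83% of enterprises without inventories report lacking.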

What's New, 2026-04-15 to 2026-04-29

This scan cycle brought substantial new evidence across the domain, with 181 new data points confirming and extending the structural dynamics already in play. No practice changed tier or trend -- the domain's shape is stable -- but the evidence base has sharpened considerably on several fronts.

The most significant signal is the acceleration of enforcement economics. The RegTech market surpassed $19 billion at 23% CAGR, with AI-powered compliance tools now delivering quantified ROI: 30-50% cost reductions, 60%+ faster onboarding, and leading banks cutting compliance review time in half. Compliance is no longer just a cost centre -- it has crossed the economic viability threshold where tooling pays for itself. Simultaneously, the cost of non-compliance has become concrete: a European deal was repriced down by EUR 7 million for documentation gaps, a EUR 90 million HR carve-out was withdrawn entirely for non-compliance, and a EUR 35 million minority stake commanded a 1.5-2x revenue premium on the strength of its governance. Compliance posture now moves deal valuations.

Frontier lab governance decisions provided the cycle's most notable corporate signal. Anthropic withheld Claude Mythos from general release after red-teaming identified thousands of vulnerabilities, restricting access to 40 vetted partners under Project Glasswing. This represents the Responsible Scaling Policy being operationalised at genuine commercial cost -- a meaningful precedent. However, critical analysis from independent observers noted that across the industry, governance policies frequently do not gate deployments, red teams report findings post-shipping, and risk registers do not block releases. The gap between governance theatre and governance substance remains wide.

The hallucination enforcement pipeline continues to escalate. A tracked database now documents 1,227 AI hallucination incidents globally (811 in the US), with 5-6 new cases surfacing daily and $145,000+ in Rule 11 sanctions in Q1 2026 alone. Stanford's preregistered study confirmed 17-33% hallucination rates across major legal AI platforms (Lexis+ AI, Westlaw AI, Ask Practical Law AI). Courts are sanctioning AI-assisted filings with increasing severity -- the Obi v. Cook County decision imposed $5,000 for 13 hallucinated citations in a single motion.

Key Tensions

  • Enforcement timelines versus organisational readiness. The EU AI Act high-risk deadline (August 2026), Colorado SB 205 (June 2026), and the US AI Accountability Act (September 2027) create stacked compliance obligations. Yet only 8 of 27 EU Member States have designated enforcement authorities, technical harmonised standards will not be ready until end-2026, and 83% of enterprises lack the AI inventories needed to even begin classification. The result is a regulatory environment where obligations are clear, penalties are quantified, and the infrastructure to comply does not exist at the organisational level. Evidence-production cost for post-hoc compliance runs 6-12 weeks of forensic engineering per system.

  • Adoption velocity versus governance capacity. Eighty-one percent of technology firms are scaling agentic AI, yet only 21% have mature governance for agent deployments. The top 1% of early adopters use 300+ AI tools while cautious enterprises use fewer than 15 -- an extreme divergence that makes uniform governance frameworks structurally impractical. Seventy percent of organisations report piloting AI, but fewer than 20% have scaled to enterprise production, with policy positioned as the critical blocking issue. The organisations deploying fastest are accumulating the most governance debt.

  • Tooling maturity versus measurement credibility. Governance platforms, compliance tools, and evaluation frameworks have reached commodity status. But independent testing consistently undermines confidence in their outputs: guardrails are bypassed at 78%+ success rates, benchmark accuracy does not predict production performance, and the Foundation Model Transparency Index has collapsed. The tools work; the question is whether what they measure is meaningful. Stanford documented that benchmark improvements do not translate to regulatory readiness -- a finding that calls into question the entire compliance-through-tooling thesis.

  • Frontier lab safety posture versus industry practice. Anthropic's decision to withhold Claude Mythos at commercial cost -- restricting to 40 partners after identifying thousands of red-team vulnerabilities -- demonstrates that responsible scaling can be operationalised. But critical analysis shows this remains exceptional: across the broader industry, governance policies do not consistently gate deployments, and 19% of AI medical queries produce harmful outputs. The gap between what frontier labs demonstrate is possible and what the average enterprise practises creates a false sense of industry-wide progress.

  • Compliance as value creation versus compliance as cost. New evidence this cycle quantified both sides of the ledger. Organisations with mature governance deliver 3.4x effectiveness improvements and see 42% versus 21% ROI on AI investments. Deal valuations now reflect compliance posture (1.5-2x revenue premiums for strong governance, EUR 7 million repricing for documentation gaps). Yet 56% of CEOs report zero revenue or cost impact from AI overall, and only 15% of decision-makers see measurable EBITDA lift. The organisations capturing value from AI and those capturing value from governance are increasingly the same set -- but they remain a small minority.

Top 10 Evidence Items

  1. EU AI Act Compliance: Inside the 2026 Deal Room (case-study) — Three real M&A transactions with named figures (EUR 180M deal repriced EUR 7M down, EUR 90M carve-out withdrawn entirely, EUR 35M stake commanding 1.5-2x revenue premium) make the compliance-as-value-creation thesis concrete and verifiable rather than theoretical. https://www.theindustrylens.blog/post/eu-ai-act-compliance-governance-gap

  2. Stanford's 2026 AI Index: Frontier Model Transparency Scores Collapsed 31% in One Year (research-paper) — The sharpest available counter-signal to the "tooling is maturing" narrative: major labs simultaneously withdrew disclosures, dropping the industry average from 58/100 to 40.69/100, directly undermining compliance-through-tooling claims. https://groundy.com/articles/stanfords-2026-ai-index-frontier-model-transparency-scores-collapsed-31-in-one/

  3. Proofpoint Research Reveals Half of Global Organizations Experienced AI Incidents (adoption-metric) — With 42% experiencing incidents despite having controls in place and 52% lacking confidence those controls would detect compromise, this large-sample survey (1,400+ professionals, 12 countries) quantifies exactly how the detection infrastructure gap plays out in production. https://www.globenewswire.com/news-release/2026/04/28/3282300/0/en/proofpoint-research-reveals-half-of-global-organizations-experienced-ai-incidents-despite-having-ai-security-controls-in-place.html

  4. How compliance teams are tackling the RegTech surge (adoption-metric) — The 58% of compliance professionals still at basic/spreadsheet-driven maturity with only 16% at advanced, mapped against a $19B RegTech market growing at 23% CAGR, illustrates why enforcement penalties are landing on structurally underprepared organisations rather than merely unprepared ones. https://fintech.global/2026/04/27/how-compliance-teams-are-tackling-the-regtech-surge/

  5. Berkshire, Chubb, and Travelers Are Removing AI Coverage (adoption-metric) — The insurance industry's withdrawal from AI liability (80% state regulatory approval for exclusions across CGL, D&O, E&O, EPLI) is the private sector's definitive verdict on AI risk quantifiability, with direct consequences for enterprises that assumed coverage would absorb governance failures. https://insuranceintel.substack.com/p/berkshire-chubb-and-travelers-are

  6. Your Insurance Policy Just Changed. Your AI Deployment Didn't (opinion) — The mechanistic detail here matters: ISO Form CG 40 47 01 26 is now in 82% of policies, W.R. Berkley issued an absolute AI exclusion, and four specific coverage gaps (CGL, D&O, E&O, EPLI) opened without explicit policyholder notification -- a structural trap for organisations that have not reviewed their 2026 policy schedules. https://lawandkoffee.substack.com/p/your-insurance-policy-just-changed

  7. 2026 AI Adoption & Risk Report: Data Governance Gaps Widening (adoption-metric) — The finding that top 1% early adopters use 300+ tools while cautious enterprises use fewer than 15 -- with 39.7% of data movements into AI tools containing sensitive data -- is the empirical basis for the summary's claim that uniform governance frameworks are structurally impractical at current adoption divergence levels. https://www.cyberhaven.com/press-releases/cyberhaven-2026-ai-adoption-risk-report

  8. Agencies Issue Revised Model Risk Guidance (FDIC, OCC, Federal Reserve) (product-ga) — The April 2026 joint replacement of the 2011 SR 11-7 guidance with lifecycle-oriented, AI-specific requirements represents the most consequential US domestic regulatory move of this scan cycle for financial services organisations, establishing enforceable expectations around model inventory and vendor governance. https://www.fdic.gov/news/press-releases/2026/agencies-issue-revised-model-risk-guidance

  9. Deloitte's 2026 AI Enterprise Survey: 21% Governance Maturity Gap for Agentic AI (adoption-metric) — The specific 74%-to-21% gap (enterprises expecting agentic AI deployment vs those with mature governance for it), drawn from 3,235 leaders across 24 countries, is the most precisely quantified expression of the adoption-velocity-versus-governance-capacity tension in this cycle's evidence. https://www.libertify.com/interactive-library/state-of-ai-enterprise-2026-deloitte-survey/

  10. GPT-5.5 Tops Every AI Benchmark. It Also Hallucinates More Than Any Competitor. (opinion) — The paradox of a model simultaneously achieving the highest benchmark accuracy and the highest hallucination rate (86%) exposes the structural flaw in compliance-through-benchmarking: accuracy-only evaluations reward confident wrong answers, meaning governance frameworks built on benchmark performance are measuring the wrong thing. https://liveinthefuture.org/stories/gpt55-hallucination-benchmark-incentive-paradox
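
To make the incentive paradox in item 10 concrete, here is a toy calculation -- the probabilities and penalty are invented for illustration, not figures from the cited piece. Under accuracy-only scoring, a model that is unsure still maximises its expected score by guessing, so leaderboards end up rewarding exactly the confident wrong answers that courts are now sanctioning; only a scoring rule that penalises wrong answers (or credits abstention) flips that incentive.

    # Toy numbers, invented for illustration: why accuracy-only benchmarks
    # reward guessing over abstaining, and what a wrong-answer penalty changes.

    def expected_score(p_correct: float, answers: bool, wrong_penalty: float = 0.0) -> float:
        """Expected score per question.

        p_correct     -- probability the model's best guess is right
        answers       -- True if the model answers, False if it abstains
        wrong_penalty -- points deducted per wrong answer (0 under accuracy-only scoring)
        """
        if not answers:
            return 0.0  # abstention scores zero under either rule
        return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

    p = 0.25  # the model is unsure: only a 25% chance its answer is right

    # Accuracy-only: guessing (0.25) beats abstaining (0.0), so model selection
    # and training both drift toward confident answers regardless of reliability.
    print(expected_score(p, answers=True))                     # 0.25
    print(expected_score(p, answers=False))                    # 0.0

    # With a penalty for wrong answers, abstention wins below the break-even
    # confidence, which is the behaviour hallucination-sensitive domains need.
    print(expected_score(p, answers=True, wrong_penalty=0.5))  # -0.125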