Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement across one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Stakeholder communication & AI literacy programmes

LEADING EDGE

TRAJECTORY

Advancing

Programmes to educate employees, customers, and stakeholders about AI capabilities, limitations, and responsible use. Includes AI literacy training and stakeholder briefing materials; distinct from L&D content generation, which produces general training content rather than AI-governance-specific material.

OVERVIEW

AI literacy programmes have reached deployment scale at governments, vendors, and leading institutions globally, but adoption remains highly uneven. The U.S. Department of Labor's AI Literacy Framework (February 2026) and the EU AI Act Article 4 mandate (February 2025) create regulatory requirements, yet only 1% of U.S. higher education institutions have institution-wide programmes despite 88% of workers expecting AI use by 2028. Government-led initiatives show genuine commitment—India's YUVA AI for All, Dubai's K-12 partnership, Google's free training for all 6 million U.S. educators—but enforcement and measurement remain inconsistent. The core tension has shifted from access to effectiveness: the challenge is no longer getting programmes launched but ensuring behavioural change persists. Microsoft's data on 300,000 employees shows adoption enthusiasm peaks week three and collapses without sustained coaching. Most organisations still distribute tools far faster than they build governance, training, or capability around them, creating organisational debt. Success requires addressing the adoption-to-proficiency gap through role-based, integrated learning, not checkbox training.

CURRENT LANDSCAPE

The regulatory push is now global and binding. The EU AI Act Article 4 (effective February 2025) mandates organisational AI literacy with penalties up to €35M or 7% of global turnover; the U.S. Department of Labor published a national AI Literacy Framework (February 2026) with five content areas and seven delivery principles. However, enforcement and institutional responsiveness remain inconsistent: only 8 of 27 EU member states designated enforcement contacts by March 2026, and only ~1% of U.S. higher education institutions have prioritised institution-wide AI literacy despite market demand signals.

Government-led and vendor-scale deployments represent the current frontier. India's YUVA AI for All campaign, Dubai's KHDA-DP World-MIT partnership targeting 80,500 students (Feb 2026), Google's commitment to train all 6 million U.S. K-12 and higher education faculty (3-year initiative, launching May 2026), and the U.S. Department of Labor's $243 million AI apprenticeship initiative (April 2026, 191% growth in AI apprenticeships since 2020) signal ecosystem-wide recognition of urgency. Measured outcomes exist: Day of AI Australia shows understanding gains from 20% to 64% post-programme; IIT Madras and Google partnered to train Indian government officials on responsible AI governance; multilevel analysis across 2.3 million students validates that institutional AI readiness directly predicts student literacy through teacher capability mechanisms. Ireland's ADAPT centre reports 96% of trained teachers demonstrate improved AI and ethics understanding.

Yet implementation gaps remain severe and widening. Adoption outpaces readiness: 83% of organisations use AI but only 25% have governance frameworks; 88% use AI in at least one business function yet only 20% achieve revenue ROI. A Deloitte survey of 3,235 leaders identifies a 'readiness scissors' pattern: adoption metrics are rising while talent readiness (20%) declines. Talent development lags tool deployment: 59% of enterprise leaders report AI skills gaps; only 36% of workers receive any training despite 66% expressing enthusiasm. Enterprise programmes often underdeliver: DataCamp's 2026 survey of 500+ leaders found that organisations with structured upskilling capture twice the ROI of unstructured approaches (42% vs 21%), yet most organisations default to self-directed learning without structure or measurement. The critical bottleneck is not knowledge but sustained capability development and governance integration—most programmes emphasise tool adoption over critical evaluation and institutional change.

TIER HISTORY

Research │ Jun-2023 → Jul-2023
Bleeding Edge │ Jul-2023 → Jul-2025
Leading Edge │ Jul-2025 → present

EVIDENCE (91)

— U.S. Department of Labor invests $243M to embed AI literacy into registered apprenticeships across construction, manufacturing, healthcare, and IT; AI apprenticeships grew 191% (2020-2022), with workers earning a 56% wage premium on AI skills.

— Global policy analysis: EU AI Act Article 4 legally mandates AI literacy (Feb 2025) but only 8 of 27 member states designated enforcement contacts by March 2026; Beijing and Tokyo operationalized their mandates effectively while most national efforts remain aspirational.

— Global AI literacy initiative reaching 2M+ students across 175 countries via Day of AI curriculum with 70,000 trained teachers; PATH Initiative (2026) scales AI skilling through post-secondary institutions and workforce development.

— 3-year initiative launching May 2026 to train all 6M K-12 and higher education faculty in US; concrete educator use cases in social studies, ELA, financial literacy; 74M students targeted via educator capacity-building and micro-credentialing.

— Government-scale AI literacy programme via SMS delivery reaching underserved populations; addresses 59% enterprise AI skills gap with accessibility-first stakeholder communication strategy covering five competency areas.

— Indian government official capacity-building on responsible AI procurement and deployment; multi-sector partnership (academia, vendor, research) positions stakeholder engagement as prerequisite for 'trust and inclusion' in policy-relevant AI systems.

— Peer-reviewed multilevel analysis across 1,007 institutions, 156,125 teachers, 2.3M+ students shows school AI readiness directly predicts student literacy through teacher capability mechanisms, validating institutional investment in training programmes.

— Critical adoption gap: only ~1% of US higher ed institutions prioritize AI literacy institution-wide despite 88% of employees expecting AI use by 2028 and $4.4B market opportunity; reveals implementation gap despite regulatory/market signals.

HISTORY

  • 2023-H1: Major cloud providers launched competitive AI education platforms; organizational adoption of AI literacy training remained fragmented and often vendor-driven rather than governance-led. Large-scale employee sentiment research showed widespread anxiety and knowledge gaps about AI use in work. Educational institutions began formalizing AI literacy curricula with mixed results on equity. Communications and HR departments lacked standardized frameworks for stakeholder education about AI risks and opportunities.

  • 2023-H2: IBM and Microsoft expanded AI training commitments to scale; University of Florida launched campus-wide AI Across the Curriculum program demonstrating institutional-level deployment. Research revealed significant limitations: short-term training interventions failed to reduce over-reliance on incorrect AI outputs, and most initiatives focused on K-12 rather than diverse stakeholder needs. The U.S. AI Literacy Act was proposed but not implemented. Institutional adoption remained nascent—only 8% of higher-ed institutions had formal AI policies by mid-2023.

  • 2024-Q1: Vendor-led training scaled further (Microsoft 6M learners, sector-specific programmes in telecom); adoption frameworks matured (AI Literacy Heptagon published). Yet implementation gaps persisted: only 38% of US executives supported employee AI literacy despite 80% of workers wanting to learn, and 66% of K-12 teachers were not using AI tools due to time, knowledge, and integrity concerns. A peer-reviewed success case (Monash's 36-student programme) showed structured curriculum could deliver gains in understanding and ethics. Critical analyses highlighted that rapid deployment often bypassed stakeholder engagement and communication—the core governance lever for sustainable adoption.

  • 2024-Q2: Regional expansion accelerated (Microsoft's 2.5M-person ASEAN commitment, UNSW's funded K-12 scaling). Measurement frameworks matured with peer-reviewed synthesis of assessment tools emphasizing multidimensional literacy beyond technical skills. Large-scale adoption metrics revealed paradox: 75% of global knowledge workers using generative AI despite organizational leaders lacking coherent literacy strategies, and 53K-student UK survey documented classroom adoption patterns. Educational and workplace programmes proliferated, but implementation remained challenged by resource constraints, structural barriers, and misalignment between rapid AI deployment and meaningful stakeholder engagement.

  • 2024-Q3: Institutional and government deployments accelerated: University of South Carolina achieved 64% faculty/staff completion rates in IBM AI training; Sri Lanka launched government-led AI Clubs reaching 300+ students; systematic review identified 24 AI literacy assessment instruments enabling standardized measurement; Microsoft expanded Coursera specialization to 10,400+ learners. EU AI Act Article 4 (effective February 2025) created regulatory mandate for organizational AI literacy, expected to drive compliance-driven programme development. Critical research documented limitations: students using AI tutoring learned less than non-AI peers, and only 23% of employees felt adequately trained despite programme proliferation. Gender and age gaps in training effectiveness remained unaddressed, highlighting persistent equity challenges in stakeholder communication approaches.

  • 2024-Q4: Vendor and regulatory commitments scaled: Google committed $15M for public sector AI training; Microsoft and LinkedIn expanded AI Skills Navigator portal; IBM's research documented 50% AI talent gap fueling corporate upskilling demand. K-12 education and workplace training programmes continued scaling, but implementation gaps persisted and widened. EdWeek Research survey showed 58% of teachers still lacked any AI training despite 14% increase in trained educators, with trained cohorts showing no increase in classroom tool adoption—evidence that programme proliferation does not automatically drive behavioral change. Regulatory mandate (EU AI Act Article 4 effective Feb 2025) expected to accelerate compliance-driven enterprise programme development in 2025. Persistent challenge: training completion rates disconnected from organizational readiness and sustainable impact; isolated vendor-led initiatives and single-session training continued to fall short of transforming stakeholder understanding and deployment practices.

  • 2025-Q1: EU AI Act Article 4 entered force (Feb 2025), creating legal requirements for organizational AI literacy and catalyzing compliance-driven programme development across European enterprises. Fortune 100 companies deployed cross-functional literacy programmes emphasizing trust-building and organizational participation. Vendor platforms expanded (Microsoft AI Skills Fest, LinkedIn Career Essentials, Google and IBM training grants). Research validated measurement frameworks (Hong Kong study: structured 18-hour programmes increased AI empowerment and narrowed gender gaps). However, empirical evidence revealed persistent limitations: short-term interventions did not reduce over-reliance on incorrect AI outputs; survey data showed 62% of leaders recognized skill gaps but only 25% implemented organization-wide training; Gartner data confirmed adoption paradox (79% call AI critical but only 20% use daily). Core challenge: programme proliferation driven by regulatory mandate and vendor investment was not translating to sustained behavioral change or organizational readiness.

  • 2025-Q2: Regulatory and policy momentum accelerated: the European Commission and OECD launched a draft AI literacy framework (May 2025) defining 22 competences for K-12 education, developed with input from 1,000 stakeholders; the U.S. White House issued an executive order (April 2025) creating the Task Force on AI Education and the Presidential AI Challenge. Deployment evidence strengthened: UNSW-Day of AI Australia expanded to 100,000+ students with Google.org backing; a 342-company study documented 28.4% productivity gains and 23% higher innovation from structured programmes. A large-scale survey (KPMG, 48,000+ respondents) confirmed that AI literacy lagged adoption globally. Yet critical barriers persisted: SRI Education identified teacher preparation gaps as the primary bottleneck; expert analysis of the U.S. executive order highlighted equity concerns and assessment framework gaps; critical voices warned that uncritical AI literacy initiatives risk degrading rather than advancing critical thinking. Programme proliferation continued alongside policy standardization, but behavioral change and organizational readiness gaps remained the core adoption challenge.

  • 2025-Q3: National deployments at scale: Taiwan launched "AI Literacy for All" initiative (August 2025) targeting 300,000 students and 4,400 teachers, informed by survey data showing literacy-usage gap; Google consolidated AI Literacy hub (September 2025) reaching 1.7M young people with documented learning impact; Ireland deployed two-phase national initiative for older adults with co-creation methodology and 60,000+ participant target. Vendor ecosystem matured: Microsoft launched fee-based Foundational AI Bootcamp (July 2025) via Training Services Partners in 40+ countries. Yet critical research documented adoption barriers: peer-reviewed findings that AI knowledge alone does not drive adoption due to trust and psychological concerns; measurement frameworks identified as underdeveloped (55% manager-employee confidence gap); educational research confirmed that uncritical AI use correlated with lower exam performance. Core tension intensified: programme scaling was advancing rapidly, but evidence on behavioral change, measurement maturity, and mitigation of adoption risks remained limited. Success required solving harder problems of assessment, equity, and designing programmes addressing institutional barriers rather than merely expanding training access.

  • 2025-Q4: Governance maturity consolidated yet quality gaps persisted. Regulatory frameworks finalized: EU AI Act Article 4 and OECD AI Literacy Framework established standardized international competence models, driving organizational compliance-based deployment. National programmes operationalized: ADAPT Ireland expanded teacher training from 340 to 500+ teachers (96% demonstrated improved AI/ethics understanding); government-led initiatives continued (Philippines UNESCO training, regulatory guidance in Netherlands). Vendor consolidation accelerated: Microsoft, Google, LinkedIn standardized enterprise AI literacy product offerings across 40+ countries. Yet empirical evidence revealed systemic governance challenges: 500-leader compliance survey showed 88% reported governance gaps in AI-assisted communications; Dutch DPA survey found only 54% of organizations prioritizing AI literacy, 11% with no focus. Board-level literacy underdeveloped: empirical research on 250 EU firms identified board AI competence as prerequisite for stakeholder trust. Critical instruction quality problems emerged: analyses confirmed tool-centric programmes underperformed, and educational research documented that uncritical AI adoption correlated with worse student outcomes. Core challenge remained unresolved: programme scale and regulatory drivers had accelerated dramatically, but evidence on sustained behavioral change, instruction effectiveness, and risk mitigation lagged deployment.

  • 2026-Jan: Government-led rollout accelerated across emerging markets: India launched YUVA AI for All national campaign (MeitY) for structured stakeholder education; Philippines began AGAP.AI phased deployment under DepEd; Rajasthan government enacted AI-ML Policy 2026 integrating National AI Literacy Programme. Empirical adoption data from enterprise sector revealed persistent implementation gaps: Compliance Week-konaAI survey showed 83% AI adoption but only 25% with governance frameworks; Deloitte's 3,200+ leader study documented access-to-use behavioral gap with tools widely distributed but inconsistently deployed. Critical behavioral insight emerged: Microsoft's tracking of 300,000 employees showed AI adoption enthusiasm peaks week 3 then sharply drops without management skills training, confirming that literacy and behavioural change require sustained coaching beyond initial onboarding. Regulatory momentum continued: policy-driven programmes in India, Philippines, and Rajasthan signaled shift from vendor-led to government-directed stakeholder communication strategy. Yet core tension persisted: deployment scale now global and policy-driven, but measurement of sustained behavioural change and critical instruction quality remained the limiting adoption factor.

  • 2026-Feb: Federal and regional policy frameworks matured, but critical implementation gaps persisted. U.S. Department of Labor published national AI Literacy Framework (Feb 13) with five foundational content areas and seven delivery principles, establishing federal guidance for organizational program design. Regional government deployments continued scaling: Dubai launched major K-12 initiative (Feb 5) via KHDA-DP World-MIT partnership targeting 80,500 students across Grades 6-8 with 3,600 teachers through Feb 2030. Large-scale vendor initiatives accelerated: Google announced unprecedented educator training (Feb 27) partnering with ISTE+ASCD to reach all 6M K-12 and higher ed faculty with free Gemini training for 74M students. Enterprise survey data revealed persistent implementation friction: People Managing People survey showed 66% of workers were positive about AI but only 36% had received training; HR professionals rated 42% of employees minimally prepared or unprepared. DataCamp survey (Feb 27) validated the ROI of structured programs: organizations with systematic upskilling experienced 2× the ROI of unstructured approaches (42% vs 21%), yet 72% of leaders reported significant capability gaps despite high expectations. Critical assessment emerged: governance practitioners warned that the DOL framework enabled organizational "governance theater", where the appearance of compliance masked behavioral change deficits—underscoring the persistent challenge that literacy program proliferation had not yet translated to institutional readiness or sustained adoption behavior change.

  • 2026-Apr: Government investment scaled sharply: U.S. DOL committed $243M to embed AI literacy into registered apprenticeships (191% growth since 2020), and DOL/NSF's TechAccess initiative added $369M across 56 state hubs with a standardized five-competency AI Literacy Framework. MIT RAISE reached 2M+ students across 175 countries with 70K trained teachers, and Google launched a 3-year commitment to train all 6M U.S. K-12 and higher education faculty by 2029. However, the EU AI Act Article 4 enforcement gap widened: only 8 of 27 member states designated enforcement contacts by March 2026, and only ~1% of U.S. higher education institutions prioritize institution-wide AI literacy despite regulatory and market signals. A peer-reviewed multilevel study of 2.3M students confirmed that institutional readiness—not programme existence—is the primary predictor of literacy outcomes, reinforcing that scale of deployment alone does not close the behavioral adoption gap.