The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
Practices for disclosing AI involvement in content generation, decision-making, and customer interactions. Includes automated disclosure insertion and transparency reporting; distinct from content provenance which uses technical rather than disclosure-based approaches.
AI disclosure and labelling has reached the point where the infrastructure exists but the impact remains unproven. Binding mandates now span a dozen-plus jurisdictions, 72% of S&P 500 companies disclose AI risks, and platforms like Meta operate content-labelling systems at scale. The compliance machinery works. What it has not yet demonstrated is that disclosure actually builds trust or changes behaviour. Peer-reviewed research consistently finds that AI labels reduce perceived authenticity and trigger trust drops of 16-20% across contexts, while having little effect on sharing or decision-making. In biomedical publishing, only 5.7% of manuscripts disclose AI use despite surveys estimating 28-76% actual usage. This gap between regulatory ambition and demonstrated effectiveness defines the practice's leading-edge status: forward-leaning organisations and regulators have committed, but the field has yet to resolve whether current disclosure approaches deliver on their stated purpose.
As of April 2026, disclosure enforcement has consolidated globally, with binding mandates and mature operational infrastructure, yet evidence of real-world effectiveness has paradoxically deteriorated. Regulatory deadlines are converging and imminent. EU AI Act Article 50 enforcement (August 2, 2026) binds all 27 member states to machine-readable and human-visible marking of AI-generated content, with penalties of €15 million or 3% of global turnover. India's IT Amendment Rules (effective February 20, 2026) require continuous, clearly visible labels on AI-generated content throughout its duration, tightened from "prominent visibility" after MeitY documented only ~30% compliance across YouTube, Instagram, and X. US state-level adoption accelerated, with 25 laws passed in 2026 alone (vs. 6 prior), including New York's Synthetic Performer Law (effective June 9). Binding disclosure now spans 13+ jurisdictions across the EU, US states, India, China (September 2025), and South Korea (January 2026). This represents enforcement convergence rather than fragmentation.
Platform deployment reached operational scale by Q2 2026. Google Ads deployed mandatory AI-generated labels across all formats (Search, Display, YouTube, Performance Max), with enforcement from March 5, 2026 and a categorical deepfake prohibition. Meta expanded Advantage+ disclosure in April 2026, closing the exemption for "cosmetic transformations" and mandating labeling of all substantially AI-generated variants, with C2PA watermarking and phased enforcement. Instagram tightened policy on April 30, requiring AI disclosure for substantially AI-assisted Reels, not just wholly AI-generated content. Brand adoption, however, showed persistent gaps: the World Federation of Advertisers (April 2026) found that 78% of multinationals deploy AI-generated content and 67% have policies, but only 40% conduct formal audits and 80% lack technical implementation. Platform interoperability remains a critical blocker: IPTC standards analysis revealed inconsistent adoption (Instagram uses IPTC, LinkedIn uses C2PA, and only Pinterest attempts both, incompletely), undercutting claims of C2PA universality.
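The interoperability gap above can be made concrete with a small sketch. The IPTC digital source type URIs below are real controlled-vocabulary terms, but the metadata layout, the simplified stand-in for a C2PA manifest, and the helper functions are illustrative assumptions, not any platform's actual implementation:

```python
# Illustrative sketch: a platform that inspects only one metadata standard
# misses labels written in the other, which is the Instagram/LinkedIn split.

AI_SOURCE_TYPES = {
    # IPTC Digital Source Type vocabulary terms for synthetic media
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def labelled_via_iptc(metadata: dict) -> bool:
    """True if the IPTC DigitalSourceType field marks the asset as AI-generated."""
    return metadata.get("iptc", {}).get("DigitalSourceType") in AI_SOURCE_TYPES

def labelled_via_c2pa(metadata: dict) -> bool:
    """True if a (simplified) C2PA-style manifest records an AI-generation action."""
    manifest = metadata.get("c2pa_manifest", {})
    return any(
        action.get("digitalSourceType", "").endswith("trainedAlgorithmicMedia")
        for action in manifest.get("actions", [])
    )

def is_disclosed_ai(metadata: dict) -> bool:
    # Checking both standards is what "Pinterest attempts both" amounts to;
    # checking only one reproduces the single-standard blind spot.
    return labelled_via_iptc(metadata) or labelled_via_c2pa(metadata)

# An asset labelled only via a C2PA manifest is invisible to an IPTC-only check:
asset = {"c2pa_manifest": {"actions": [
    {"action": "c2pa.created",
     "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"},
]}}
assert not labelled_via_iptc(asset)
assert is_disclosed_ai(asset)
```

The point of the sketch is the final two assertions: the same asset reads as unlabelled or labelled depending solely on which standard the checking platform supports.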
Disclosure effectiveness, however, entered a critically contested phase. Peer-reviewed research documented paradoxes that directly undermine policy objectives. NYU Stern and Emory University field experiments show AI-generated ads outperforming human ads by 19%, yet disclosure reduces click-through by 31.5%, a significant adoption friction point. TBWA\Australia and Ideally research documents a "synthetic authorship penalty": AI disclosure worsens consumer trust rather than building confidence, contradicting core policy assumptions. Stanford's Foundation Model Transparency Index (April 2026) shows vendor transparency collapsed from 58/100 (2024) to 40.69/100 (2025), with major labs (OpenAI, Google, Anthropic, Meta) simultaneously withdrawing disclosures. In high-stakes domains, policy-practice decoupling persists: a PNAS study of 5.2 million academic papers found that only 0.1% disclose AI use despite 70% having official policies; biomedical publishing shows 5.7% actual disclosure against 28-76% estimated usage. By April 2026, the practice had achieved regulatory binding and operational maturity (the infrastructure works, the compliance machinery functions), but the fundamental goal of building trust and enabling informed choice remained demonstrably unachieved. This tension between regulatory acceleration and contested effectiveness defines the leading-edge status.
— Meta's April 2026 Advantage+ expansion closes exemption for cosmetic transformations, mandates labeling for all substantially AI-generated variants with C2PA watermarking and phased global enforcement.
— World Federation of Advertisers found 78% of multinationals deploy AI-generated content; 67% have policies, but only 40% conduct audits, 80% lack technical implementation—confirming adoption-compliance gap.
— TBWA\Australia and Ideally research documents 'synthetic authorship penalty': AI disclosure worsens consumer trust, contradicting policy assumption that transparency builds confidence.
— India's MeitY tightened IT Rules disclosure requirements due to compliance failures: only ~30% of AI-generated test posts correctly labeled across YouTube, Instagram, X; mandatory continuous visibility now required.
— EU AI Act Article 50 (effective Aug 2, 2026) mandates machine-readable marking and human-visible disclosure for AI-generated audio, video, images, text with €15M or 3% turnover penalties for non-compliance.
— Foundation Model Transparency Index shows major AI labs (OpenAI, Google, Anthropic, Meta) simultaneously withdrew disclosures; industry average collapsed from 58/100 (2024) to 40.69/100 (2025). Critical negative signal.
— Google Ads deployed mandatory AI-generated label requirement across all formats (Search, Display, YouTube, Performance Max) with March 5, 2026 enforcement and categorical deepfake prohibition.
— Legal guidance specifying EU AI Act Article 50 operational compliance: provider marking (metadata, invisible watermarks, C2PA), deployer disclosure, governance structures, August 2 binding deadline.
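The provider-marking and deployer-disclosure obligations in the items above pair a machine-readable marking with a human-visible label. A minimal sketch of that pairing follows; the label wording, field names, and `disclosure_standard` value are placeholders of ours, since Article 50 mandates that disclosure be machine-readable and visible, not any particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class DisclosedContent:
    body: str                                      # human-visible content, label included
    metadata: dict = field(default_factory=dict)   # machine-readable marking

# Hypothetical label text, not regulatory language.
VISIBLE_LABEL = "[AI-generated content]"

def attach_disclosure(generated_text: str, model_id: str) -> DisclosedContent:
    """Pair a visible label with machine-readable provenance fields."""
    return DisclosedContent(
        body=f"{VISIBLE_LABEL} {generated_text}",
        metadata={
            "ai_generated": True,                          # machine-readable flag
            "generator": model_id,                         # which system produced it
            "disclosure_standard": "example-internal-v1",  # placeholder, not a real standard
        },
    )

out = attach_disclosure("Quarterly summary ...", model_id="example-model")
assert out.body.startswith(VISIBLE_LABEL)
assert out.metadata["ai_generated"] is True
```

Keeping the two channels in one return value reflects the compliance logic: a visible label alone fails the machine-readable requirement, and metadata alone fails the human-visibility requirement.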
2023-H1: AI disclosure practices emerged as a regulatory and consumer-expectation focal point. 86% of consumers expected AI-generated content to be disclosed; regulators began formalizing expectations (FTC, NAAG conferences, federal guidance). However, implementation was nascent and vendor resistance to transparency was documented. Disclosure mechanisms (labels, automated insertion) were still experimental.
2023-H2: Disclosure requirements moved from guidance to early deployment. EU AI Act finalized transparency provisions; US Senate introduced mandatory disclosure legislation; judicial orders began requiring AI disclosure in legal filings. Platforms (Kickstarter, Instagram, YouTube) launched AI labeling policies. Microsoft implemented disclosure features in Copilot. However, transparency gaps persisted: Microsoft Copilot audit vulnerabilities, low data transparency across 25 models, and inconsistent judicial adoption revealed implementation challenges.
2024-Q1: Disclosure transitioned into operational implementation with emerging maturity gaps. Meta deployed AI-generated image labeling at scale (February) using C2PA/IPTC standards with enforcement; US Executive Order 90-day assessment showed 90% completion on transparency requirements; India issued AI content labeling advisory. However, independent research documented significant challenges: peer-reviewed study of 14 CE-certified medical AI products found 29.1% median transparency score; Mozilla assessment rated human-facing labels as "poor"; PwC survey showed only 33% business adoption of disclosure vs. 67% stakeholder demand. Regulatory mandates had not yet driven meaningful organizational maturity.
2024-Q2: Platform labeling and vendor transparency reporting accelerated. Meta's April announcement of AI content labeling on Facebook, Instagram, and Threads reflected maturing platform policy; Microsoft published inaugural Responsible AI Transparency Report (May) and filed EU Code of Practice reports documenting C2PA deployment on LinkedIn. SEC escalated regulatory enforcement: first AI-washing cases against Delphia and Global Predictions (March), followed by SEC Enforcement Director warnings on individual liability for disclosure failures (April). However, critical assessments persisted: Harvard analysis warned transparency requires sustained long-term effort, not quick fixes; Stanford Foundation Model Transparency Index showed major vendors scoring low (40% reporting, 20% risk); CE-certified medical products maintained 29.1% median transparency. Organizational adoption gap remained: 33% business disclosure rate vs. 67% stakeholder demand. By end of Q2 2024, platforms and vendors had operationalized disclosure mechanisms, but organizational maturity, disclosure effectiveness, and independent assessment scores indicated the practice remained in early deployment phase with significant implementation challenges.
2024-Q3: Disclosure practices shifted from guidance to binding mandates across multiple jurisdictions. The EU AI Act entered into force August 1, with transparency obligations for chatbots and AI-generated content labeling; the White House issued federal AI inventory guidance (due Dec 16) requiring transparency and disclosure of use cases; three US states enacted binding disclosure laws (Utah in May, Illinois in August, Colorado enacted with a February 2026 effective date). Organizational deployment matured: Clifford Chance demonstrated 60%+ daily adoption of AI with transparency frameworks and mandatory AI Principles training; 46% of Fortune 100 companies disclosed AI risks in SEC filings. However, effectiveness concerns mounted: a consumer study showed AI labels reduce purchase intent (a negative signal for disclosure); CE-certified medical products maintained a 29.1% median transparency score; major vendors remained low on transparency indices. Regulatory adoption accelerated while trust outcomes and implementation depth remained contested, suggesting compliance-driven disclosure had outpaced effectiveness.
2024-Q4: Disclosure matured into operational deployment phase with emerging tensions between regulatory mandates and real-world effectiveness. Vendor tooling advanced: Microsoft released "AI reports" feature (November) enabling developers to embed disclosure documentation in development workflows; DOJ compliance guidance (September) extended disclosure requirements to organizational risk assessments. Critical assessments intensified: Stanford Foundation Model Transparency Index showed average transparency scores declined 58→40 across major vendors; Partnership on AI documented practitioner barriers including user perception misalignment and platform fragmentation; Data Innovation analysis critiqued mandatory labeling as impractical, recommending voluntary C2PA standards. Shareholder activism demonstrated investor demand for disclosure: Open MIC campaign achieved 21-53% support for AI risk disclosure resolutions. By December 31, binding regulatory mandates and vendor deployment mechanisms had become widespread, but independent assessments revealed declining vendor transparency, unresolved disclosure effectiveness, and structural implementation barriers—suggesting the practice had transitioned from early adoption to a contested plateau where compliance pressure and technical capability outpaced demonstrated trust outcomes.
2025-Q1: Disclosure implementation accelerated at platform and regulatory levels, but fundamental research revealed critical limitations of disclosure as a trust mechanism. Meta expanded AI labeling to advertising products (February); regulatory enforcement intensified with SEC comments to 56 companies on disclosure accuracy and balance (January). However, peer-reviewed research published in early 2025 documented that AI disclosure paradoxically reduces trust and that current explanations fail to help users calibrate accuracy perception. Government disclosure gaps persisted: UK Department for Work and Pensions operated production AI system processing 25,000 daily claims without transparency to data subjects, revealing compliance failures even under binding regulatory frameworks. This pattern—platforms deploying labeling at scale while research demonstrates disclosure ineffectiveness—suggests the practice had reached a critical inflection point where infrastructure adoption had outpaced demonstrated trust and effectiveness outcomes.
2025-Q2: Disclosure entered a phase of contested evidence, with robust empirical research documenting paradoxes and limitations while regulatory mandates and platform deployment continued at scale. Peer-reviewed studies revealed fundamental challenges: labeling reduces belief in claims but has little impact on sharing behavior (n=7,579, PNAS Nexus); AI labels reduce perceived accuracy of news yet do not shift policy support (n=3,861); disclosing AI use triggers significant trust drops (16-20% across grading, advertising, design contexts); and excessive transparency paradoxically reduces adoption via cognitive overload. Regulatory mandates accelerated globally: China finalized mandatory AI labeling framework (effective Sept 2025) requiring explicit and implicit labels for all AI-generated content with three-tier classification; EU General-Purpose AI Code of Practice finalized with transparency commitments for providers. Platform deployment matured: Meta and Microsoft continued operationalizing labels across products. By June 30, the practice had become characterized by a fundamental tension: robust evidence of disclosure limitations was mounting while regulatory adoption and platform implementation continued expanding, suggesting the field was grappling with the gap between transparency mandates and demonstrated effectiveness.
2025-Q3: Disclosure reached maturity as a regulated practice while critical research challenged its effectiveness. China's mandatory AI labeling standard (GB45438-2025) took effect September 1, requiring explicit and implicit labels for all AI-generated content, with technical enforcement and platform responsibility. Academic publishing integrated disclosure frameworks: journal editors adopted templates and submission-system integration, though qualitative research revealed persistent confusion over disclosure thresholds and sufficiency standards (blurred boundaries between necessary and excessive transparency). The medical AI domain emphasized transparency urgency: UW researchers documented model failures (a COVID prediction model relying on image artifacts) and advocated transparency and explainability as risk mitigations. Critically, economic research showed disclosure has fundamental tradeoffs: an arXiv study found that mandatory disclosure is optimal only under intermediate conditions, and that enforcement reduces creator surplus and suppresses high-quality AI content. By September 30, regulatory deployment had reached global scale (China and EU frameworks operational) and domain-specific adoption was growing (academic publishing, medical AI), but peer-reviewed evidence of disclosure limitations and economic and trust paradoxes remained unresolved, confirming the practice as operationally mature but fundamentally contested on effectiveness.
2025-Q4: Disclosure entered a phase of regulatory consolidation with increasing tension between mandated transparency frameworks and real-world practice adjustments. The EU formalized Code of Practice governance for AI-generated content marking (November 2025) with working groups developing provider and deployer obligations; simultaneously, federal regulatory momentum in the U.S. shifted: an Executive Order in December sought to preempt state AI laws including disclosure requirements, signaling federal pullback. Critical research on vendor transparency continued: Stanford Foundation Model Transparency Index 2025 reported further decline to 40/100 average, with major companies (xAI, Midjourney) scoring 14/100 and withholding training data and societal impact information. Corporate governance adoption accelerated: Fortune 100 board-level AI risk oversight rose to 48% (triple from prior year), with 44% of companies mentioning AI in director qualifications. However, real-world deployment showed contrary signals: Microsoft reduced default AI disclaimers in Copilot Chat (November) in response to user feedback, demonstrating practical tension between regulatory mandates and product design preferences. India advanced draft AI content labeling rules (November) requiring permanent watermarks on synthetic content, but industry raised concerns about compliance costs, technical feasibility, and broad applicability. By December 31, disclosure governance had expanded globally (EU Code of Practice, India rules, SEC enforcement focus) and corporate adoption metrics rose, but vendor transparency declined further, platforms adjusted disclosure downward, federal regulatory momentum paused, and effectiveness research remained contested—confirming the practice at a critical inflection point where compliance infrastructure outpaced demonstrated trust outcomes and real-world deployment preferences.
2026-Jan: Disclosure entered a phase of regulatory institutionalization with evidence mounting of implementation gaps and organizational maturity deficits. The EU Code of Practice first draft (January 28, 2026) completed Article 50 transparency framework with two working groups and August 2, 2026 compliance deadline, signaling binding enforcement across member states. US regulatory mandates accelerated: California's Assembly Bill 2013 (effective January 1, 2026) made training data transparency legally binding, requiring developers publish sources, purposes, and copyright status. Corporate adoption surged: 72% of S&P 500 companies disclosed AI risks in 2025 (sixfold increase from 12% in 2023), confirming disclosure as mainstream governance practice. However, organizational and research evidence revealed persistent disclosure paradoxes. Academic research documented "AI disclosure penalty": labels reduce perceived authenticity in advertisements and creative work. Only 70% of knowledge workers consistently disclosed AI use despite widespread adoption. High-stakes domains showed critical gaps: biomedical research disclosed AI in only 5.7% of 25,114 manuscripts despite surveys showing 28-76% actual usage, revealing accountability and integrity risks. Platform deployment remained at scale—Meta and YouTube continued operationalizing AI ad labels with 83% of ad executives deploying AI creatively—but consumer perception remained skeptical (only 45% positive sentiment), with disclosure narrowing rather than closing gaps. By January 31, 2026, disclosure had become legally binding in multiple jurisdictions and operationally embedded in platform workflows, but the evidence base documented organizational compliance gaps and fundamental questions about disclosure's effectiveness at building trust or enabling informed decision-making, suggesting the practice remained caught between regulatory acceleration and demonstrated limitations.
2026-Feb: Disclosure entered full regulatory and corporate operationalization with accelerated adoption but deepening evidence of implementation gaps. Regulatory enforcement consolidated: EU Code of Practice first draft (Jan 28) set August 2 compliance deadline; California's AB 2013 made training data transparency binding (Jan 1); 12+ jurisdictions moved toward enforcement clustering in Q1-Q2 2026. Corporate adoption surged to 72% S&P 500 disclosure (sixfold increase from 2023), with named companies embedding AI governance frameworks. However, critical research revealed persistent paradoxes: vendor transparency declined sharply (Stanford Index 58→40), with major developers withholding training data; label design failures risked banner blindness and habituation; and organizational disclosure gaps persisted despite mandates (70% knowledge worker compliance). By end of February, disclosure had become universally expected at platform and corporate levels but was fundamentally contested on effectiveness and implementation quality, with mounting evidence that regulatory acceleration had outpaced demonstrable trust outcomes.
2026-Apr: Disclosure reached an inflection point between binding regulatory enforcement and persistent implementation and effectiveness gaps. Regulatory acceleration completed: US state-level adoption exploded with 25 laws passed in 2026 (vs. 6 prior), including 19 new statutes in March-April across 13+ jurisdictions with explicit disclosure requirements; EU AI Act Article 50 becomes binding August 2, 2026, with penalties of €15 million or 3% of global turnover; India's IT Rules took effect February 20 with a 3-hour enforcement window; New York's Synthetic Performer Law takes effect June 9, 2026; multi-jurisdictional convergence spans the EU, US, India, China, and South Korea. Platform deployment matured: Meta (18M labeled ads), TikTok (94.7% synthetic face detection), YouTube (realistic vs. non-realistic classification), and Google Ads (AI Generated labels, March 5, 2026) are all operational. However, critical implementation gaps widened: IPTC standards analysis revealed that platforms use inconsistent metadata standards (Instagram uses IPTC, LinkedIn uses C2PA, only Pinterest checks both), preventing interoperability despite C2PA universality claims; brand adoption showed 78% use but only 40% audit compliance, with 80% lacking technical implementation. Most critically, peer-reviewed evidence documented disclosure limitations at scale: a March 2026 JCOM study found a "truth-falsity crossover effect" (labels reduce the credibility of true content while boosting false claims), directly contradicting the policy goal; a PNAS study of 5.2M papers found 0.1% disclosure compliance despite 70% having official policies, revealing policy-reality decoupling. Vendor transparency continued declining (Stanford Index at 40/100, with major developers scoring 14/100). By April 15, disclosure governance was globally binding and operationally embedded, but effectiveness, implementation quality, and trust outcomes remained contested, confirming the practice at a critical tension between mandatory compliance infrastructure and unproven impact.