Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Content authenticity — deepfake detection & provenance

LEADING EDGE

TRAJECTORY

Stalled

AI that detects deepfakes, authenticates content origin, and applies provenance metadata and watermarks to verify media integrity. Includes C2PA standard implementation and synthetic media detection; distinct from content safety which filters harmful outputs rather than verifying authenticity.

OVERVIEW

Content authenticity encompasses two distinct tracks — deepfake detection and content provenance — that have matured along divergent paths. Detection tools identify manipulated video, audio, and images; provenance systems attach cryptographic metadata at creation time to prove origin and edit history. Both are operationally deployed at forward-leaning organisations, but neither has crossed into broad adoption.

The defining tension is structural. Detection is locked in an arms race with synthesis: commercial tools achieve 83-96% accuracy in controlled benchmarks but degrade sharply in real-world conditions (73% live, 63% video), and humans perform little better than chance (0.07/1.0 on recent surveys). Provenance via the C2PA standard has accelerated from experimental to production deployment: hardware and infrastructure integration now extends beyond professional cameras to consumer devices (Snapdragon, Google Pixel 10) and enterprise platforms (DigiCert, Canon, Qualcomm), with regulatory catalysts (EU AI Act Article 50, enforceable August 2026; California SB 942, in force) driving adoption across newsrooms and commercial workflows. Yet metadata stripping, platform fragmentation, and selective manufacturer implementation persist as barriers to durability. Detection faces structural headwinds of its own: a decade of research was optimised for face-swap election interference that never materialised at scale, leaving the actual harms — NCII, voice fraud, biometric attacks — under-defended, and cyber insurance now excludes deepfake fraud coverage (post-1 January 2026), forcing enterprises to treat detection as forensic support rather than a primary defence. The field's centre of gravity continues to shift from "detect fakes" towards "prove authenticity at the source" — but that transition depends on coordination that most of the ecosystem has not yet undertaken.

CURRENT LANDSCAPE

Detection has hit a documented performance ceiling. Independent benchmarking of 30 commercial detection solutions shows 92.5% visual and 96% audio accuracy under lab conditions, but live detection drops to 73% and video detection to 63% — a 45-50% accuracy collapse under real-world conditions. University of Edinburgh research (March 2026) demonstrates that fingerprinting-based detectors, a major detection paradigm, are defeated by adversarial attacks in over 80% of cases with full attacker knowledge, and in over 50% of cases with simple techniques such as JPEG compression. Red-team assessments find employees identify deepfakes only 38% of the time, confirming that training humans to spot fakes is an ineffective strategy. The gap between perceived and actual readiness is stark: 99% of security leaders report confidence in their defences, while only 8.4% scored above 80% in simulated exercises. Detection vendors such as Reality Defender have expanded into hiring-fraud prevention, financial services, and voice authentication (partnering with ValidSoft for biometric verification), and have launched public APIs (April 2026) that make detection available as a commodity developer capability. Yet the consensus among researchers is clear: pixel-level analysis cannot keep pace with synthesis. Deepfakes online grew from 500,000 in 2023 to 8 million by 2025. Detection remains operationally deployed in fraud-prevention workflows, but within acknowledged structural constraints.

Provenance tells a more encouraging story, though barriers persist. The C2PA standard now underpins a $1.63 billion market (2025) projected to reach $5.12 billion by 2030. Hardware integration has arrived: Qualcomm embedded C2PA-compliant signing in Snapdragon processors; Google Pixel 10 signs all photos by default; and Sony, Nikon, Leica, and Canon professional cameras support C2PA. Germany's ARD deployed provenance-verified video-on-demand across its broadcast infrastructure. Adobe, OpenAI, Google, and TikTok have implemented C2PA support, and the coalition counts over 6,000 members. The IPTC Media Provenance Summit (April 16, 2026, Toronto) convenes institutional leadership from broadcasters (BBC, CBC, France Télévisions), camera manufacturers (Sony), compliance bodies, and privacy organisations — signalling the transition from experimental to production-stage newsroom and broadcast adoption. Yet critical assessments document persistent barriers: metadata is routinely stripped during distribution (platforms such as Instagram, LinkedIn, and YouTube strip credentials on upload), manufacturer implementation remains selective and incomplete, and creator-platform circular dependencies slow uptake. An architectural divergence is also emerging: ETH Zurich researchers have proposed sensor-level cryptographic signing (March 2026) as an alternative to C2PA's processor-level approach, addressing intercept vulnerabilities. Provenance infrastructure is production-ready; making it durable across the entire content lifecycle and distribution chain remains the unsolved problem.

TIER HISTORY

Research: Jan-2019 → Jan-2019
Bleeding Edge: Jan-2019 → Apr-2024
Leading Edge: Apr-2024 → present

EVIDENCE (127)

— Critical position paper: detection research optimised for a face-swap election-interference threat that didn't materialise; actual harms (NCII, voice scams, fraud) remain under-defended.

— Major camera manufacturer (Canon) launches production C2PA-compliant provenance system for news organizations with validation from Reuters; covers full provenance chain from capture through publication.

— Major structural business model change: cyber insurance excludes deepfake fraud coverage post-Jan 1, 2026; detection accuracy collapses 95%→50-65% on real-world media.

— Strategic analysis of C2PA ecosystem consolidation, regulatory drivers (EU AI Act Article 50 enforcement August 2026, California SB 942 January 2026), and hardware-level adoption signals with specific market size estimates.

— Tier-1 analyst report (Everest Group) assessing 14 deepfake detection providers. Structured benchmarking across platform integration, accuracy, explainability, modality coverage, and compliance. Signal of enterprise maturity in trust & safety.

— Production platform launch (May 5, 2026) combining C2PA provenance with AI detection, deployed by six founding organizations including Journalism Trust Initiative, UncovAI, and Sciences Po MediaLab. Supported by French Ministry of Culture. Shows ecosystem adoption of standards-based provenance + detection integration.

— Critical forensic assessment of C2PA's actual deployment limitations, showing barriers to effectiveness including credential stripping, limited platform adoption, and narrow current use case (AI disclosure only).

— Major infrastructure vendor (DigiCert: 100K+ organizations, 90% Fortune 500) launches production C2PA signing platform as managed service, with IDC analyst validation, addressing enterprise provenance infrastructure gap.

HISTORY

  • 2019: Deepfake detection emerged as an active research area with multiple datasets (Celeb-DF, VidTIMIT) and detection papers; first commercial deployment by Truepic in financial underwriting; industry standards initiatives launched (C2PA, Content Authenticity Initiative) signaling ecosystem mobilization, though platform adoption barriers remained unresolved.
  • 2020: Deepfake detection research revealed persistent limitations—Equal Error Rates of 15-30% on high-quality, second-generation videos despite 90%+ accuracy on controlled datasets. Content provenance standards matured: C2PA v1.4 implementation guidance released, CAI white paper published with multi-vendor backing (Adobe, BBC, Microsoft, NYT), and first open-source C2PA tooling (PyC2PA) launched. Industry commitment to provenance infrastructure solidified, but platform adoption remained uncertain.
  • 2021: Detection research documented fundamental vulnerabilities: adversarial attacks defeated detectors at 99% success rates, while analysis revealed dataset oversampling problems limiting real-world generalization. Technical improvements continued (LRNet, Face-Cutout augmentation, explainability work), but interpretability gaps persisted. C2PA 1.0 specification released June 2021 as formal open standard, with Twitter joining steering committee and Truepic raising $26M in Series B funding (led by Microsoft M12, with Adobe backing), signaling market-driven investment in verification infrastructure despite unresolved platform integration challenges.
  • 2022-H1: C2PA v1.0 formally released as industry standard (January), with Sony joining steering committee. Detection platforms expanded with government/media partnerships (Reality Defender with DHS, DoD, ABC, Washington Post). Provenance tooling matured: Adobe released open-source SDKs (JS, CLI, Rust) for C2PA implementation (June). Real-world deployment broadened beyond financial services: Old Republic Insurance adopted Truepic Vision for automated warranty inspections. Dual trajectory clear: provenance infrastructure solidifying (standards, tools, early deployments) while detection methods remained challenged by adversarial vulnerability and poor real-world generalization.
  • 2022-H2: Major vendor market entry continued (Intel's FakeCatcher with claimed 96% accuracy), but research deepened concerns about detection reliability: bias in demographics reduced fairness, humans achieved only 62% accuracy on synthetic images, and academic surveys documented persistent transferability and robustness gaps with 'lack of reliable evidence in real-life usages.' Provenance standards gained traction in journalism: BBC and CBC demonstrated C2PA workflows, while Nikon Z9 and Leica M11 announced C2PA camera support. Detection and provenance tracks further diverged in maturity: provenance infrastructure consolidating around standards and hardware integration while detection remained mired in fundamental accuracy and generalization challenges.
  • 2023-H1: Detection research intensified focus on fundamental generalization gaps: ICCV 2023 showed performance dropping to 51% AUC across different synthesis methods, and new frameworks systematized real-world evaluation limitations. C2PA v1.3 (April) extended generative AI transparency capabilities. Real-world provenance deployments multiplied across financial verticals (PCMI claims automation, 12th Tech floor plan auditing), while ecosystem debate widened to cover text-generated content provenance. Expert consensus shifted to framing detection as an unwinnable "arms race" requiring mitigation rather than elimination.
  • 2023-H2: Detection limitations solidified across modalities: UCL research (August) showed humans detect only 73% of deepfake speech; IEEE Access survey (December) documented persistent gaps in real-time and generalizable solutions. Real-world detector performance fell 45-50% below benchmarks; facial detection systems unreliable due to unseen generators and preprocessing artifacts. Provenance infrastructure matured: C2PA v1.4 released (November) with 1,500 CAI members; Truepic extended C2PA to AI-generated images via Hugging Face integration (October), moving provenance from capture to creation time. Reality Defender enterprise deployments expanded (Visa, NATO, NBCUniversal), but positioned as mitigation rather than elimination. Detection and provenance tracks fully diverged: provenance on path to ecosystem standardization with hardware integration; detection locked in arms-race dynamic with no credible enterprise adoption scenario.
  • 2024-Q1: Detection research continued documenting structural constraints: WACV 2024 showed demographic bias in detectors (Black men misclassified at 39% vs 16% for white women), while Deepfake-Eval-2024 confirmed real-world performance collapse (45-50% AUC drop from benchmarks). Comprehensive ACM surveys synthesized technical maturity alongside fundamental limitations, and Reality Defender deployed for 2024 election monitoring. Provenance ecosystem consolidated sharply: Google joined C2PA steering committee (February), committing major platform resources; Truepic and SmartFrame deployed C2PA-secured image system for Six Nations Rugby and Manchester City F.C., expanding provenance beyond finance into brand protection. Detection remained operationally deployed but constrained by real-world generalization; provenance infrastructure solidified toward multi-sector standardization.
  • 2024-Q2: Detection threat landscape intensified: Sumsub reported 245% YoY deepfake surge globally with election-nation spikes of 500-1625% YoY, driving demand for detection but critical assessments (Reuters Institute/WITNESS) found commercial tools unreliable and recommended use only alongside manual OSINT—fundamental constraint on detection-alone adoption. Provenance infrastructure reached production scale: Sinclair Inc. deployed C2PA across 185 U.S. TV stations via AWS (April); Ballotpedia authenticated 8,000+ political candidates (cumulative) using Truepic's system; C2PA v2.3 specification released (April) with major vendor adoption (Adobe, Google, OpenAI, Meta). Technical clarification emerged: C2PA provides "non-repudiable attribution" but cannot guarantee claim veracity; platform metadata-stripping limits durability—defining realistic scope of provenance technology. Provenance solidifying toward multi-sector production deployment; detection locked in arms-race requiring augmentation with manual analysis.
  • 2024-Q3: Detection continued bifurcated trajectory: Singapore government launched multi-pronged strategy with S$20M investment, Online Criminal Harms Act enforcement, and Centre for Advanced Technologies in Online Safety (CATOS) for detection research and industry collaboration (July). Research findings deepened consensus on tool constraints: CHI 2024 study of journalists found emerging deepfake detection software produces inaccurate results with unreliability limiting real-world adoption in high-stakes verification workflows; peer-reviewed multimodal survey showed state-of-the-art detectors fail to generalize to content from unseen generators. Major cybersecurity vendor Trend Micro added deepfake detection to enterprise Vision One platform (July), signaling adoption by established security players but not addressing fundamental generalization gaps. On provenance, open-source tooling matured with c2patool (Rust) active development continuing through September, demonstrating ecosystem readiness for implementation. Detection remained operationally deployed but constrained by tool unreliability and poor generalization to real-world content; provenance track continued its separate path toward ecosystem standardization and developer tooling maturity.
  • 2024-Q4: Detection track reached empirical ceiling: peer-reviewed meta-analysis of 56 papers (86,155 participants) found human deepfake detection at chance level (55.54%), while AI support improved to 65.14%—formalizing fundamental human vulnerability; Reality Defender and other vendors announced expanded platform integrations (web conferencing, call center) and Accenture invested strategically in RD for enterprise adoption, yet CEO acknowledged core limitations (watermarking platform-dependent, inference unreliable without diverse training data). Provenance infrastructure consolidated firmly: Content Authenticity Initiative reached 3,700+ members with adoption by TikTok, OpenAI, Meta, LinkedIn, Amazon, Sony, and U.S. DoD (October); Partnership on AI released cross-vendor case studies revealing real-world barriers (label fatigue, user confusion, metadata stripping), Fortune reported C2PA implementation challenges with few cameras/tools applying credentials by default, and practitioner analysis documented security vulnerabilities in early deployments. Detection remained operationally deployed for high-stakes scenarios (election monitoring, forensic support, fraud prevention) but with acknowledged tool limitations and platform-dependent architecture; provenance infrastructure solidified toward production multi-sector deployment with bounded scope (attribution without veracity guarantee) and persistent platform-fragmentation challenges limiting durability.
  • 2025-Q1: Detection research and deployment continued along established constraints: CSIRO study of 16 leading detectors found none reliably identify real-world deepfakes, with performance varying by synthesis type and training data coverage—confirming generalization as structural barrier. Consumer surveys documented extreme human vulnerability (iProov: only 0.1% could distinguish real/fake stimuli) alongside trust erosion (Deloitte: 59% struggle to identify AI-generated content). Threat escalation drove enterprise adoption: Reality Defender expanded voice deepfake detection in banking and financial services, with 2025 threat data predicting $40B in fraud losses. However, platform-level failures emerged: Meta/Facebook failed to label AI-generated content with C2PA metadata despite infrastructure support, and commercial detectors overstated accuracy claims—signaling persistent implementation gaps between capability and deployment effectiveness. Provenance track saw tooling expansion: enterprise C2PA signing platforms (Capture) launched targeting news and creative sectors, broadening ecosystem beyond prior financial and sports verticals. Detection and provenance remained bifurcated: detection locked in arms-race dynamic with acknowledged technical ceiling, deployed as forensic and fraud-prevention augmentation; provenance infrastructure maturing toward multi-sector tooling and standard consolidation, but with restricted real-world effectiveness due to metadata stripping and platform fragmentation.
  • 2025-Q2: Detection security vulnerabilities emerged as a new constraint layer: CVPR 2025 research revealed backdoor attacks via poisoned training data can compromise detector reliability, expanding the threat surface beyond content synthesis to infrastructure itself. UC Berkeley generalization studies confirmed zero-shot transfer remains unachieved, affirming the arms-race dynamic without a solution. Reality Defender expanded production deployments across banking, media, and government. The provenance ecosystem matured operationally with strategic infrastructure alignment: ONVIF partnered with C2PA to integrate provenance into surveillance systems (June 2025); Adobe launched the Content Authenticity public beta with LinkedIn integration (April), reaching creator-level adoption; and C2PA membership reached 250+ companies, signaling vendor consolidation. However, real-world barriers hardened: manufacturer adoption remained delayed and selective (Samsung applies metadata only to AI edits), circular dependencies between creators and platforms emerged, and documented platform implementation failures persisted (Meta declined to apply C2PA despite support). Detection and provenance were both confirmed as operationally mature within structural constraints, with no clear path to resolving their fundamental technical limitations.
  • 2025-Q3: Detection deployments expanded with quantified threat data: Reality Defender integrated into hiring workflows preventing fraud (CrowdStrike: 320+ remote job fraud incidents); Gartner predicts 1 in 4 job candidates globally could be fake by 2028. Regula survey found 33% of companies report deepfake fraud as top-three threat (fintech 38.6%, aviation 37%, banking 33%), driving adoption urgency without resolving underlying technical limitations. Critical assessment documented CSIRO finding that none of 16 leading detectors consistently identify real-world deepfakes, prompting shift from detection to authentication for forensic contexts. Provenance infrastructure reached hardware-level integration milestone: Qualcomm embedded Truepic's C2PA-compliant secure media library in Snapdragon 8 Elite Gen 5, enabling native signing/verification at device capture for billions of devices. Truepic Risk Network (September) consolidated provenance into cross-institutional fraud signal sharing. However, adoption barriers hardened: manufacturer fragmentation persisted, creator-platform circular dependencies unresolved, and World Privacy Forum raised privacy concerns about C2PA credential metadata and trust list equity risks.
  • 2025-Q4: Detection track formalized performance constraints through independent benchmarking: Wavestone analysis of 30 commercial solutions (November) confirmed 92.5% visual and 96% audio accuracy with live detection only 73% accurate; Ceartas benchmark documented commercial tools averaging 83% but video detection dropping to 63% (50% performance loss). Critical gap between confidence and capability emerged: 99% of leaders reported confidence but only 8.4% scored above 80% in simulated tests; tools claiming 96% lab accuracy delivered 50-65% real-world results. Provenance track expanded to broadcast infrastructure: ARD (German public broadcaster) deployed C2PA-signed VOD with frame-by-frame verification on AWS (November), representing production-scale adoption. Critical analysis documented adoption barriers: implementation inconsistencies, metadata stripping, low internet adoption despite 4,500+ C2PA members. Both tracks operationally mature within acknowledged structural constraints: detection deployed in hiring/financial services with quantified performance ceiling; provenance hardware-integrated (Snapdragon) and broadcast-deployed but with persistent platform fragmentation and implementation challenges.
  • 2026-Jan: Detection research explicitly acknowledged arms-race dynamic as unsustainable: Siwei Lyu (SUNY) published analysis showing deepfakes scaled 16x from 2023 to 2025 (500K to 8M), arguing pixel-level detection inadequate and advocating shift to infrastructure-level defenses via cryptographic provenance. Empirical evidence solidified human limitations: Breacher.ai red team assessments across 300 targets found employees identify deepfakes only 38% of the time, confirming visual detection training failure and supporting strategic pivot toward verification-based defenses. Voice deepfake response consolidated: Reality Defender and ValidSoft partnership combined voice biometrics with detection, indicating industry recognition that detection alone insufficient for audio. Provenance market expansion accelerated with C2PA forecasts showing $1.63B (2025) → $2.06B (2026) → $5.12B (2030) at 25.9% CAGR, but independent academic analysis (University of Zurich) documented persistent technical barriers: metadata stripping, weak UI, incomplete hardware support despite growing tool ecosystem (Sony/Nikon/Leica cameras, Adobe/TikTok/OpenAI platforms). Both tracks remain operationally deployed within structural constraints: detection positioned as fraud-prevention augmentation for high-stakes workflows; provenance ready for production but effectiveness constrained by platform fragmentation and creator dependencies.
  • 2026-Apr: Detection commoditised further as Reality Defender launched a public API with multi-language SDKs via Y Combinator (April 2026), signalling the shift from enterprise appliance to developer commodity; simultaneously, University of Edinburgh research (IEEE SaTML 2026) confirmed fingerprinting-based detectors are defeated 80%+ of the time with full attacker knowledge and 50%+ with basic techniques, a Microsoft study concluded no single authentication method prevents digital deception, and INTERPOL's global threat assessment documented a tenfold surge in deepfake fraud — with Meta's Oversight Board ruling its detection "not robust or comprehensive enough" after a fake Israel video spread during the Iran conflict, exposing real-time detection failure in crisis conditions. The NTIRE 2026 Robust Deepfake Detection Challenge (337 participants, 57 final submissions) confirmed that robust detection remains unsolved despite foundation models and ensemble approaches, with spatial/temporal degradation causing systematic detector failure; threat volume data quantified the scale — 3,165 deepfake incidents in March 2026 alone (up from 4 in January 2020, a 791x increase), and 62% of enterprises report exposure. YouTube expanded AI likeness detection to all verified public figures (April 2026), demonstrating platform-scale production deployment. 
    Provenance architecture diverged: ETH Zurich proposed sensor-level cryptographic signing as a more tamper-resistant alternative to C2PA's processor-level approach; the C2PA Conformance Programme launched with two assurance levels, with Google Pixel 10 becoming the first device to achieve Level 2 (hardware-backed) certification; AFP successfully validated C2PA-signed photo authentication during the US elections; yet a formal-methods security analysis of C2PA (12-author team, April 2026) found the specifications fail to achieve their claimed security goals and recommended against relying on C2PA for high-stakes uses (financial, journalism, legal); SSL.com became the first publicly trusted CA to issue C2PA-conformant certificates, enabling provenance signing at organisational scale via standard PKI; and the IPTC Media Provenance Summit (Toronto, April 16, 100+ experts from 67 organisations including BBC, CBC, Reuters, Getty, Adobe) advanced newsroom workflow frameworks and formalised the CAWG identity layer, with a documented empirical signal that provenance increases audience trust — though less than 1% of global news content carries C2PA credentials in practice.
  • 2026-May: Research critique sharpened the structural misalignment in detection: a position paper documented that a decade of deepfake detection research was optimised for face-swap election interference that never materialised at scale, while actual harms (NCII, voice scams, biometric fraud) remain under-defended — a systemic research misdirection finding. Operational barriers crystallised simultaneously: cyber insurance exclusions for deepfake fraud (effective January 2026) confirmed enterprises now treat detection as forensic support rather than primary defence, and independent benchmarks show real-world accuracy collapsing from 95% (lab) to 50-65% (production). Provenance reached a hardware milestone: Canon launched a C2PA-compliant authenticity imaging system validated by Reuters for news organisations, covering the full provenance chain from capture through publication — the first major camera manufacturer to deliver newsroom-ready end-to-end provenance.