The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that detects deepfakes, authenticates content origin, and applies provenance metadata and watermarks to verify media integrity. Includes C2PA standard implementation and synthetic media detection; distinct from content safety which filters harmful outputs rather than verifying authenticity.
Content authenticity encompasses two distinct tracks, deepfake detection and content provenance, that have matured along divergent paths. Detection tools identify manipulated video, audio, and images; provenance systems attach cryptographic metadata at creation time to prove origin and edit history. Both are operationally deployed at forward-leaning organisations, but neither has crossed into broad adoption.
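In outline, a provenance system binds a hash of the content and its capture metadata into a signed manifest at creation time, so any later edit breaks the binding. The sketch below illustrates that idea with Python's standard library only; the HMAC is a deliberately simplified stand-in for the asymmetric, certificate-backed signatures real C2PA manifests use, and all names and values are illustrative, not C2PA API calls.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"illustrative-device-secret"  # stand-in for a camera's private signing key


def sign_manifest(content: bytes, metadata: dict) -> dict:
    """Bind a content hash and capture metadata into a signed claim."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    # Real C2PA uses COSE/X.509 signatures; HMAC is a simplified stand-in.
    claim["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return claim


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check the signature, and that the content still matches the signed hash."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(content).hexdigest())


photo = b"\x89PNG...raw image bytes..."
manifest = sign_manifest(photo, {"device": "example-camera", "captured": "2026-05-01"})
assert verify_manifest(photo, manifest)                 # untouched content verifies
assert not verify_manifest(photo + b"edit", manifest)   # any edit breaks the binding
```

The design point this captures is why provenance sidesteps the detection arms race: it makes no judgement about whether content looks synthetic, only whether it still matches what a trusted device signed at capture.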
The defining tension is structural. Detection is locked in an arms race with synthesis: commercial tools achieve 83-96% accuracy in controlled benchmarks but degrade sharply in real-world conditions (73% live, 63% video), and humans perform little better than chance (0.07/1.0 in recent surveys). Provenance via the C2PA standard has accelerated from experimental to production deployment: hardware integration now extends beyond professional cameras (Snapdragon, Google Pixel 10) into enterprise infrastructure (DigiCert, Canon, Qualcomm), with regulatory catalysts (EU AI Act Article 50 from August 2026, California SB 942 in force) driving adoption across newsrooms and commercial workflows. Yet metadata stripping, platform fragmentation, and selective manufacturer implementation persist as barriers to durability. Detection also faces structural headwinds. Research effort is misaligned with actual harms: a decade of work optimised for the face-swap election interference that never materialised, while the harms that did (NCII, voice fraud, biometric attacks) remain under-defended. And cyber insurers now exclude deepfake fraud coverage (post-1 January 2026), forcing enterprises to treat detection as forensic support rather than a primary defence. The field's centre of gravity is shifting from "detect fakes" toward "prove authenticity at the source", but that transition depends on coordination that most of the ecosystem has not yet undertaken.
Detection has hit a documented performance ceiling. Independent benchmarking of 30 commercial detection solutions shows 92.5% visual and 96% audio accuracy under lab conditions, but live detection drops to 73% and video detection to 63%: a steep collapse under real-world conditions. University of Edinburgh research (March 2026) demonstrates that fingerprinting-based detectors, a major detection paradigm, are defeated by adversarial attacks in over 80% of cases when the attacker has full knowledge of the detector, and in over 50% of cases with techniques as simple as JPEG compression. Red-team assessments find employees identify deepfakes only 38% of the time, confirming that training humans to spot fakes is an ineffective strategy. The gap between perceived and actual readiness is stark: 99% of security leaders report confidence in their defences, while only 8.4% scored above 80% in simulated exercises. Detection vendors like Reality Defender have expanded into hiring fraud prevention, financial services, and voice authentication (partnering with ValidSoft for biometric verification), and launched public APIs (April 2026) that make detection a commodity developer capability. Yet the consensus among researchers is clear: pixel-level analysis cannot keep pace with synthesis. Deepfakes online grew from 500,000 in 2023 to 8 million by 2025. Detection remains operationally deployed in fraud-prevention workflows, but within acknowledged structural constraints.
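The fragility of fingerprint-style detection under re-encoding can be seen in a toy model. Such detectors key on subtle low-order artefacts a generator leaves behind; lossy compression quantises exactly those bits away. The sketch below is a deliberately crude illustration of that mechanism, not any vendor's actual detector, and every function here is hypothetical.

```python
import hashlib


def generator_fingerprint(pixels: list[int]) -> str:
    """Toy fingerprint: hash the low-order bits of each pixel, where the
    subtle artefacts such detectors rely on tend to live."""
    residual = bytes(p & 0x0F for p in pixels)
    return hashlib.sha256(residual).hexdigest()


def jpeg_like_quantise(pixels: list[int], step: int = 16) -> list[int]:
    """Crude stand-in for lossy compression: snap values to a coarser grid,
    destroying low-order detail while leaving the image visually similar."""
    return [(p // step) * step for p in pixels]


fake = [137, 80, 78, 71, 203, 12, 99, 250]   # illustrative "synthetic" pixel values
known = generator_fingerprint(fake)          # fingerprint the detector was trained on

laundered = jpeg_like_quantise(fake)         # one cheap re-encode...
assert generator_fingerprint(laundered) != known   # ...and the fingerprint no longer matches
```

Real detectors are statistical rather than exact-match, but the same dynamic applies: the signal lives in precisely the detail that routine transcoding discards, which is why simple compression defeats them in over half of the Edinburgh test cases.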
Provenance tells a more encouraging story, though barriers persist. The C2PA standard now underpins a $1.63 billion market (2025) projected to reach $5.12 billion by 2030. Hardware integration has arrived: Qualcomm embedded C2PA-compliant signing in Snapdragon processors; the Google Pixel 10 signs all photos by default; Sony, Nikon, Leica, and Canon professional cameras support C2PA. Germany's ARD deployed provenance-verified video-on-demand across its broadcast infrastructure. Adobe, OpenAI, Google, and TikTok have implemented C2PA support, and the coalition counts over 6,000 members. The IPTC Media Provenance Summit (April 16, 2026, Toronto) convenes institutional leadership from broadcasters (BBC, CBC, France Télévisions), camera manufacturers (Sony), compliance bodies, and privacy organisations, signalling the transition from experimental to production-stage newsroom and broadcast adoption. Yet critical assessments document persistent barriers: metadata is routinely stripped during distribution (Instagram, LinkedIn, and YouTube all strip credentials on upload), manufacturer implementation remains selective and incomplete, and creator-platform circular dependencies slow uptake. An architectural divergence is also emerging: ETH Zurich researchers propose sensor-level cryptographic signing (March 2026) as an alternative to C2PA's processor-level approach, addressing intercept vulnerabilities. Provenance infrastructure is production-ready; making it durable across the entire content lifecycle and distribution chain remains the unsolved problem.
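The stripping problem has a specific shape worth spelling out: because credentials travel as metadata alongside the file, a platform that re-encodes uploads silently discards them, and the content lands in an "unknown" state rather than a "failed" one. Absence of a credential is not evidence of fakery. A minimal sketch of that three-way trust decision, assuming a hypothetical asset dictionary (none of these field names come from the C2PA spec):

```python
from enum import Enum


class ProvenanceStatus(Enum):
    VERIFIED = "verified"   # manifest present and content still matches it
    TAMPERED = "tampered"   # manifest present but content no longer matches
    UNKNOWN = "unknown"     # no manifest at all: the common post-upload state


def assess(asset: dict) -> ProvenanceStatus:
    """Classify an asset's provenance; a stripped credential downgrades to
    UNKNOWN, it does not (and must not) flag the content as fake."""
    manifest = asset.get("c2pa_manifest")
    if manifest is None:
        # Re-encoding pipelines on many platforms discard embedded
        # credentials, so most redistributed content lands here.
        return ProvenanceStatus.UNKNOWN
    if manifest["content_hash"] != asset["content_hash"]:
        return ProvenanceStatus.TAMPERED
    return ProvenanceStatus.VERIFIED


signed = {"content_hash": "abc123", "c2pa_manifest": {"content_hash": "abc123"}}
reuploaded = {"content_hash": "abc123"}  # same pixels, credentials stripped on upload

assert assess(signed) is ProvenanceStatus.VERIFIED
assert assess(reuploaded) is ProvenanceStatus.UNKNOWN
```

This is why durability across the distribution chain, not signing at capture, is the binding constraint: until platforms preserve credentials end to end, most content any consumer actually sees resolves to UNKNOWN regardless of how it was created.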
— Critical position paper: detection research optimized for face-swap election interference threat that didn't materialize; actual harms (NCII, voice scams, fraud) under-defended.
— Major camera manufacturer (Canon) launches production C2PA-compliant provenance system for news organizations with validation from Reuters; covers full provenance chain from capture through publication.
— Major structural business model change: cyber insurance excludes deepfake fraud coverage post-Jan 1, 2026; detection accuracy collapses 95%→50-65% on real-world media.
— Strategic analysis of C2PA ecosystem consolidation, regulatory drivers (EU AI Act Article 50 enforcement August 2026, California SB 942 January 2026), and hardware-level adoption signals with specific market size estimates.
— Tier-1 analyst report (Everest Group) assessing 14 deepfake detection providers. Structured benchmarking across platform integration, accuracy, explainability, modality coverage, and compliance. Signal of enterprise maturity in trust & safety.
— Production platform launch (May 5, 2026) combining C2PA provenance with AI detection, deployed by six founding organizations including Journalism Trust Initiative, UncovAI, and Sciences Po MediaLab. Supported by French Ministry of Culture. Shows ecosystem adoption of standards-based provenance + detection integration.
— Critical forensic assessment of C2PA's actual deployment limitations, showing barriers to effectiveness including credential stripping, limited platform adoption, and narrow current use case (AI disclosure only).
— Major infrastructure vendor (DigiCert: 100K+ organizations, 90% Fortune 500) launches production C2PA signing platform as managed service, with IDC analyst validation, addressing enterprise provenance infrastructure gap.