The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
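As a concrete, purely illustrative reading of "weighted maturity": the sketch below averages ordinal maturity levels, weighted by the strength of the adoption evidence behind each practice. The scale, field names (`maturity`, `evidence_weight`), and weighting scheme are assumptions for illustration, not the index's published methodology.

```python
# Illustrative sketch only: maps hypothetical maturity labels onto an
# ordinal 1-4 scale and computes an evidence-weighted average per domain.
MATURITY_SCALE = {"experimental": 1, "emerging": 2, "established": 3, "table_stakes": 4}

def weighted_maturity(practices: list[dict]) -> float:
    """Weighted average of practice maturity levels within a domain.

    Each practice dict carries a 'maturity' label and an 'evidence_weight'
    (both hypothetical field names).
    """
    total_weight = sum(p["evidence_weight"] for p in practices)
    if total_weight == 0:
        return 0.0
    return sum(
        MATURITY_SCALE[p["maturity"]] * p["evidence_weight"] for p in practices
    ) / total_weight

# Example: a heavily evidenced established practice plus a lightly
# evidenced experiment skews the domain toward the mature end.
domain = [
    {"maturity": "established", "evidence_weight": 0.8},
    {"maturity": "experimental", "evidence_weight": 0.2},
]
print(weighted_maturity(domain))  # 2.6 on the 1-4 scale
```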
AI that monitors and moderates user-generated or AI-generated content to ensure brand safety and policy compliance. Includes automated content filtering and brand safety scoring; distinct from content safety in AI governance, which governs AI outputs rather than published content.
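To make the two-layer structure concrete, here is a minimal sketch separating categorical filtering (binary policy violations, blocked outright) from contextual brand suitability scoring against a per-brand risk profile. The categories, thresholds, and classifier outputs are illustrative assumptions; this is not any vendor's actual API.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    BLOCK = "block"          # categorical violation, never monetized
    UNSAFE = "unsafe"        # exceeds this brand's risk tolerance
    SUITABLE = "suitable"    # eligible for ad placement

# Hypothetical categorical policies: violations here are binary, which is
# why automated tools handle them reliably (see the analysis below).
CATEGORICAL_VIOLATIONS = {"csam", "copyright_infringement"}

@dataclass
class BrandProfile:
    # Hypothetical per-category risk ceilings on a 0-1 scale; a news brand
    # might tolerate more "political" content than a toy brand would.
    risk_ceilings: dict[str, float]

def moderate(content_labels: set[str],
             risk_scores: dict[str, float],
             brand: BrandProfile) -> Verdict:
    """Two-stage decision: categorical filter first, suitability second.

    `content_labels` and `risk_scores` stand in for the output of an
    upstream classifier, which is where the hard problems (sarcasm,
    cultural nuance, political speech) actually live.
    """
    if content_labels & CATEGORICAL_VIOLATIONS:
        return Verdict.BLOCK
    for category, score in risk_scores.items():
        if score > brand.risk_ceilings.get(category, 1.0):
            return Verdict.UNSAFE
    return Verdict.SUITABLE

# Example: the same article is suitable for one brand, unsafe for another.
article = ({"news"}, {"political": 0.6, "violence": 0.1})
news_brand = BrandProfile(risk_ceilings={"political": 0.9, "violence": 0.3})
toy_brand = BrandProfile(risk_ceilings={"political": 0.2, "violence": 0.1})
print(moderate(*article, news_brand))  # Verdict.SUITABLE
print(moderate(*article, toy_brand))   # Verdict.UNSAFE
```

The design point is that the second stage is brand-relative: the same content can be suitable for one advertiser and unsafe for another, which is exactly the shift from rigid blocklists to suitability frameworks described below.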
Content moderation and brand safety is standard infrastructure for digital advertising and platform governance. Every major advertiser deploys automated content classification, and not doing so requires justification to stakeholders, regulators, and brand partners alike. The practice is established, but it is also stalled. The core tension that defined this field a decade ago persists: automated tools handle categorical content (copyright, CSAM) reliably, yet consistently fail on contextual judgment: sarcasm, cultural nuance, political speech. Vendors like DoubleVerify and Integral Ad Science have built multi-hundred-million-dollar businesses on classification at scale, and the market continues to grow. But repeated investigations have exposed systemic accuracy gaps, and the industry is shifting from rigid blocklists toward contextual AI and brand suitability frameworks. The arrival of generative AI content has compounded the challenge, introducing novel threat categories that legacy classification was never designed to handle. Emerging evidence of political bias in LLM-based moderation and systematic language coverage gaps (98% of African languages invisible to training data) reveal hard maturity ceilings. Moderation works. It also demonstrably does not work well enough, and that paradox now defines the field.
Deployment metrics confirm operational maturity at unprecedented scale. April 2026 platform enforcement data documented 2.0-2.5M moderation actions/day across 8 Very Large Online Platforms (VLOPs), with regulatory coordination driven by EU DSA compliance. TikTok removed 538,000+ unauthorized AI-generated videos in April 2026 alone, demonstrating platform-scale detection of synthetic content threats. Q4 2025 data showed 175M videos removed globally with 99.1% proactive detection. DoubleVerify achieved MRC accreditation for TikTok viewability and SIVT detection in April 2026, the first independent third-party validation of platform-specific brand safety measurement and a signal of vendor ecosystem maturity. DoubleVerify's 2025 revenue of $748.3M (14% YoY growth) and Novacap's $1.9B acquisition of Integral Ad Science in September 2025 demonstrate sustained investor confidence. The brand safety verification market is consolidated and mandatory: IAS and DoubleVerify now measure across Meta Threads, TikTok Pangle, LinkedIn CTV, and all major social and streaming platforms.
Regulatory enforcement is reshaping the landscape at unprecedented speed. The U.S. TAKE IT DOWN Act (May 19, 2026 deadline) mandates that platforms deploy AI-driven detection and removal systems for nonconsensual AI-generated intimate images, with a 48-hour removal requirement, creating a structural compliance gap between major platforms with existing infrastructure and the thousands of smaller platforms lacking the technical capability. The EU DSA moved from policy to enforcement: Meta faced its first major DSA fine for election disinformation, with specific findings of 40% higher organic reach for unverified false claims versus corrections and only 62% accuracy in minority-language moderation, findings that directly triggered mandates for algorithmic auditing and real-time moderation transparency.

Yet credibility pressures intensify while systemic gaps widen. An FTC investigation alleges IAS engaged in advertiser-driven platform boycotts. A shareholder lawsuit accuses DoubleVerify of overbilling for bot impressions and misrepresenting tool capabilities. Critical assessments now dominate: Singapore's regulator (IMDA) documented that platforms fail to proactively detect CSAM and terrorism content despite policy commitments. A Global Voices investigation revealed that only 42 of 2,000+ African languages appear meaningfully in LLM training data; roughly 98% of African languages are "essentially invisible to moderation systems," even as TikTok's removals of content from Kenya climbed from 450K (Q1 2025) to 592K (Q2 2025). Meta's platform-scale AI cleanup deleted millions of accounts for bot/spam activity in May 2026, with documented false positives indicating system limitations.
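The Act's 48-hour window is the kind of hard deadline that ends up encoded directly in a platform's reporting pipeline. A minimal sketch of deadline tracking under that reading follows; the field names and queue structure are illustrative assumptions, not drawn from the statute or any platform's system.

```python
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the TAKE IT DOWN Act's removal requirement

def removal_deadline(reported_at: datetime) -> datetime:
    """Deadline by which a validly reported item must be removed."""
    return reported_at + REMOVAL_WINDOW

def overdue(reports: list[dict], now: datetime) -> list[dict]:
    """Reports past their removal deadline and still live.

    Each report dict uses hypothetical fields: 'reported_at' and 'removed'.
    """
    return [
        r for r in reports
        if not r["removed"] and now > removal_deadline(r["reported_at"])
    ]

# Example: a report filed 50 hours ago is overdue; one filed 10 hours ago
# is still inside the window.
now = datetime.now(timezone.utc)
queue = [
    {"id": 1, "reported_at": now - timedelta(hours=50), "removed": False},
    {"id": 2, "reported_at": now - timedelta(hours=10), "removed": False},
]
print([r["id"] for r in overdue(queue, now)])  # [1]
```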
Generative AI and platform policy shifts pose an unresolved systemic challenge. Meta/Instagram rolled out mandatory AI-content labeling on Reels (April 30, 2026), closing loopholes in synthetic content detection. DoubleVerify launched "AI SlopStopper" in April 2026 to detect low-quality AI-generated content across social platforms, a sign of vendor innovation in response to an emerging threat landscape. Yet real-time detection and enforcement remain unproven at scale, and regulatory fragmentation (the EU DSA, the US TAKE IT DOWN Act, China's ex-ante content mandates) creates compliance uncertainty. The field's paradox now sharpens: moderation is operationalized at billions of daily decisions, with measurable fraud reduction and vendor scale, yet credibility erodes amid evidence of political bias in LLM systems, systematic under-coverage of non-Western languages, documented regulatory enforcement failures against leading platforms, and continued failures against adversarial synthetic media tactics.
— Comprehensive global policy enforcement actions on content moderation, including UK AI-CSAM criminalization, Turkey age restrictions, EU Meta underage-access enforcement, and cross-jurisdiction hate speech assessments.
— Analysis of DSA enforcement accountability challenges, European Ombudsman finding of Commission maladministration in X risk-assessment transparency, and Meta preliminary breach findings on child protection.
— Empirical platform enforcement data showing 2.0-2.5M moderation actions/day across 8 VLOPs with category distribution and cross-platform coordination signals indicating regulatory-driven enforcement alignment.
— Platform-scale deployment of AI moderation tool removing millions of accounts for bot/spam activity, with documented outcomes and reported false positives indicating system limitations.
— First major DSA enforcement case against Meta for systemic content moderation failures. Includes specific metrics: 40% higher organic reach for unverified false claims vs. corrections; 62% accuracy in minority-language moderation. EU-mandated algorithmic auditing and real-time moderation transparency represent material shifts in platform accountability.
— Critical regulatory mandate requiring all platforms to deploy AI-driven detection and removal systems for nonconsensual AI-generated intimate images by May 19, 2026, with detailed analysis of infrastructure challenges.
— Real-time regulatory compliance monitoring showing active DSA enforcement signals including France's marketplace product safety removals and Commission investigations into platform design and illegal goods.
— eMarketer analysis: board-level brand safety prioritization in 2026; AI-generated 'slop' content creating novel moderation classification challenges for advertisers.