Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED
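How a weighted domain score of this kind might be computed is straightforward to sketch. The tier scale, weights, and example figures below are illustrative assumptions, not the index's published methodology:

    # Illustrative weighted-maturity score for a domain dot.
    # Tier scale and weights are assumptions, not the index's methodology.
    TIER_SCORE = {
        "Research": 0,
        "Bleeding Edge": 1,
        "Leading Edge": 2,
        "Good Practice": 3,
        "Established": 4,
    }

    def domain_maturity(practices):
        """Weighted mean of per-practice tier scores.

        practices: list of (tier_name, weight) pairs, where a weight
        might reflect adoption breadth or evidence volume.
        """
        total = sum(weight for _, weight in practices)
        return sum(TIER_SCORE[tier] * weight
                   for tier, weight in practices) / total

    # A domain dominated by established practices, with one bleeding-edge
    # outlier, plots near the "Established" end of the axis:
    print(domain_maturity([("Established", 0.7),
                           ("Good Practice", 0.2),
                           ("Bleeding Edge", 0.1)]))  # 3.5 on a 0-4 scale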

Content moderation & brand safety

ESTABLISHED

TRAJECTORY

Stalled

AI that monitors and moderates user-generated or AI-generated content to ensure brand safety and policy compliance. Includes automated content filtering and brand safety scoring; distinct from content safety in AI governance, which governs AI outputs rather than published content.
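At its core, the practice reduces to scoring content against a risk taxonomy and thresholding the result. A minimal sketch of that shape follows; the categories, the keyword-count scorer standing in for a trained model, and the thresholds are all assumptions for illustration, not any vendor's actual pipeline:

    from dataclasses import dataclass

    # GARM-style risk categories; real taxonomies are larger and versioned.
    CATEGORIES = ["adult", "hate", "violence", "misinformation"]

    @dataclass
    class SafetyVerdict:
        scores: dict   # per-category risk, 0.0 (safe) to 1.0 (unsafe)
        blocked: bool

    def score_content(text):
        # Stand-in for a trained classifier: a real system would call a
        # moderation model here, not count keywords.
        lowered = text.lower()
        return {c: min(1.0, lowered.count(c) * 0.5) for c in CATEGORIES}

    def moderate(text, thresholds=None):
        thresholds = thresholds or {}
        scores = score_content(text)
        blocked = any(scores[c] >= thresholds.get(c, 0.8)
                      for c in CATEGORIES)
        return SafetyVerdict(scores, blocked)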

OVERVIEW

Content moderation and brand safety is standard infrastructure for digital advertising and platform governance. Every major advertiser deploys automated content classification, and not doing so requires justification to stakeholders, regulators, and brand partners alike. The practice is established, but it is also stalled. The core tension that defined this field a decade ago persists: automated tools handle categorical content (copyright, CSAM) reliably yet consistently fail on contextual judgment (sarcasm, cultural nuance, political speech). Vendors like DoubleVerify and Integral Ad Science have built multi-hundred-million-dollar businesses on classification at scale, and the market continues to grow. But repeated investigations have exposed systemic accuracy gaps, and the industry is shifting from rigid blocklists toward contextual AI and brand suitability frameworks. The arrival of generative AI content has compounded the challenge, introducing novel threat categories that legacy classification was never designed to handle. Emerging evidence of political bias in LLM-based moderation and systematic language coverage gaps (98% of African languages invisible to training data) reveal hard maturity ceilings. Moderation works. It also demonstrably does not work well enough, and that paradox now defines the field.
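That shift from blocklists to contextual classification is concrete enough to sketch. Below, a hypothetical three-term blocklist is contrasted with an off-the-shelf zero-shot classifier standing in for contextual AI; neither resembles what DoubleVerify or IAS actually ship, and the model choice, labels, and threshold are assumptions:

    from transformers import pipeline

    # Rigid blocklist: flags "shooting" in a film-production story as
    # readily as in a violence report, the over-blocking pattern that
    # 2020-era publisher studies documented.
    BLOCKLIST = {"shooting", "attack", "death"}

    def blocklist_blocks(text):
        return any(term in text.lower() for term in BLOCKLIST)

    # Contextual suitability: score the passage against risk categories
    # instead of matching keywords. Model choice is illustrative.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    def suitability_blocks(text, threshold=0.7):
        labels = ["graphic violence", "hate speech", "neutral reporting"]
        result = classifier(text, candidate_labels=labels)
        risk = max(score for label, score
                   in zip(result["labels"], result["scores"])
                   if label != "neutral reporting")
        return risk >= threshold

    article = "The director confirmed the shooting of the final scene wrapped."
    print(blocklist_blocks(article))    # True: keyword over-block
    print(suitability_blocks(article))  # typically False: benign context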

CURRENT LANDSCAPE

Deployment metrics confirm operational maturity at unprecedented scale. April 2026 platform enforcement data documented 2.0-2.5M moderation actions/day across 8 Very Large Online Platforms (VLOPs), with regulatory coordination driven by EU DSA compliance. TikTok removed 538,000+ unauthorized AI-generated videos in April 2026 alone, demonstrating platform-scale detection of synthetic content threats. TikTok's Q4 2025 transparency data showed 175M videos removed globally with 99.1% proactive detection. DoubleVerify achieved MRC accreditation for TikTok viewability and SIVT detection in April 2026—the first independent third-party validation for platform-specific brand safety measurement—signaling vendor ecosystem maturity. DoubleVerify's 2025 revenue of $748.3M (14% YoY growth) and Novacap's $1.9B acquisition of Integral Ad Science in September 2025 demonstrate sustained investor confidence. The brand safety verification market is consolidated and mandatory—IAS and DoubleVerify now measure across Meta Threads, TikTok Pangle, LinkedIn CTV, and all major social and streaming platforms.

Regulatory enforcement is reshaping the landscape at unprecedented speed. The U.S. TAKE IT DOWN Act (May 19, 2026 deadline) mandates platforms deploy AI-driven detection and removal systems for nonconsensual AI-generated intimate images with 48-hour removal requirements, creating a structural compliance gap between major platforms with existing infrastructure and thousands of smaller platforms lacking technical capability. The EU DSA moved from policy to enforcement: Meta faced its first major DSA fine for election disinformation, with specific findings showing 40% higher organic reach for unverified false claims versus corrections and only 62% accuracy in minority-language moderation—directly triggering mandates for algorithmic auditing and real-time moderation transparency. Yet credibility pressures intensify while systemic gaps widen. An FTC investigation alleges IAS engaged in advertiser-driven platform boycotts. A shareholder lawsuit accuses DoubleVerify of overbilling for bot impressions and misrepresenting tool capabilities. Critical assessments now dominate: Singapore's regulator (IMDA) documented that platforms fail to proactively detect CSAM and terrorism content despite policy commitments. A Global Voices investigation revealed that only 42 of 2000+ African languages appear meaningfully in LLM training—approximately 98% of African languages are "essentially invisible to moderation systems," while TikTok's content removals in Kenya climbed from 450K (Q1 2025) to 592K (Q2 2025). Meta's platform-scale AI cleanup deleted millions of accounts for bot/spam activity in May 2026, with documented false positives indicating system limitations.

Generative AI and platform policy shifts pose an unresolved systemic challenge. Meta/Instagram rolled out mandatory AI-content labeling on Reels (April 30, 2026), closing loopholes in synthetic content detection. DoubleVerify launched "AI SlopStopper" in April 2026 to detect low-quality AI-generated content across social platforms, showing vendor innovation in response to the emerging threat landscape. Yet real-time detection and enforcement remain unproven at scale, and regulatory fragmentation (EU DSA, US TAKE IT DOWN Act, China ex-ante content mandates) creates compliance uncertainty. The field's paradox now sharpens: moderation is operationalized at billions of daily decisions with measurable fraud reduction and vendor scale, yet credibility erodes amid evidence of political bias in LLM systems, systematic under-coverage of non-Western languages, documented regulatory enforcement failures against leading platforms, and continued failures against adversarial synthetic media tactics.
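The labeling mandates above imply an enforcement ladder of roughly the shape sketched here; the metadata fields, threshold, and action names are illustrative assumptions, not Meta's or TikTok's actual policy:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        declared_ai: bool      # creator's self-disclosure at upload
        detector_score: float  # synthetic-content detector output, 0 to 1

    def labeling_action(post, detect_threshold=0.85):
        # Self-disclosed AI content gets the label directly.
        if post.declared_ai:
            return "apply_ai_label"
        # Undisclosed but detector-flagged content is labeled and queued
        # for human review, since detectors carry known false-positive risk.
        if post.detector_score >= detect_threshold:
            return "apply_ai_label_and_review"
        return "no_action"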

TIER HISTORY

Research         Jan-2017 → Jan-2017
Bleeding Edge    Jan-2017 → Jan-2020
Leading Edge     Jan-2020 → Jan-2024
Good Practice    Jan-2024 → Jul-2024
Established      Jul-2024 → present

EVIDENCE (146)

— Comprehensive global policy enforcement actions on content moderation including UK AI-CSAM criminalization, Turkey age restrictions, EU Meta underage-access enforcement, and cross-jurisdiction hate speech assessments.

— Analysis of DSA enforcement accountability challenges, European Ombudsman finding of Commission maladministration in X risk-assessment transparency, and Meta preliminary breach findings on child protection.

— Empirical platform enforcement data showing 2.0-2.5M moderation actions/day across 8 VLOPs with category distribution and cross-platform coordination signals indicating regulatory-driven enforcement alignment.

— Platform-scale deployment of AI moderation tool removing millions of accounts for bot/spam activity, with documented outcomes and reported false positives indicating system limitations.

— First major DSA enforcement case against Meta for systemic content moderation failures. Includes specific metrics: 40% higher organic reach for unverified false claims vs. corrections; 62% accuracy in minority-language moderation. EU-mandated algorithmic auditing and real-time moderation transparency represent material shifts in platform accountability.

— Critical regulatory mandate requiring all platforms to deploy AI-driven detection and removal systems for nonconsensual AI-generated intimate images by May 19, 2026, with detailed analysis of infrastructure challenges.

— Real-time regulatory compliance monitoring showing active DSA enforcement signals including France's marketplace product safety removals and Commission investigations into platform design and illegal goods.

— eMarketer analysis: board-level brand safety prioritization in 2026; AI-generated 'slop' content creating novel moderation classification challenges for advertisers.

HISTORY

  • 2017: Early AI-driven brand safety tools (OpenSlate) launched in response to platform moderation crises (YouTube, Facebook). Platforms acknowledged the inadequacy of their own moderation; algorithmic solutions were promised but years from readiness. Third-party auditing emerged as an interim solution.
  • 2018: Platforms scaled hybrid AI-plus-human moderation; Facebook disclosed 583M banned accounts in Q1, demonstrating operational investment. Simultaneously, leaked guidelines and contractor lawsuits exposed infrastructure limitations and human costs. Expert consensus shifted toward skepticism that fully automated solutions would ever mature.
  • 2019: Third-party brand safety vendors (DoubleVerify, Integral Ad Science) secured platform partnerships and expanded filtering criteria to 75+ categories. Year-end assessments documented persistent moderation failures despite AI deployment; critical voices argued algorithmic solutions could not solve content moderation at scale.
  • 2020: Brand safety vendor ecosystem matured with sustained capital investment (DoubleVerify $350M funding) and global deployments at scale (Taboola 1.4B users, Facebook/YouTube partnerships). Simultaneously, empirical studies revealed high false-positive rates—21-53% of major news publishers' articles over-blocked, vendor disagreement exceeding 40%. Industry pivoted from "brand safety" blocklists to "brand suitability" contextual AI, acknowledging that rule-based automation caused collateral damage but seeking more granular categorization.
  • 2021: Vendor ecosystem showed strong financial metrics (DoubleVerify 44% YoY revenue, 112% ABS growth; TikTok/IAS integration; Discord/Sentropy acquisition). UK Government and independent analysts confirmed persistent algorithmic limitations: poor contextual interpretation, language bias, and over-blocking that cost the industry $898M in missed monetization. Industry consensus solidified that AI moderation was necessary but insufficient—deployed at scale despite known limitations.
  • 2022-H1: Brand safety vendors continued expansion with DoubleVerify reporting 43% YoY growth in H1 2022 and enterprise adoption from major advertisers; DoubleVerify's market analysis across 80 countries showed 93% advertiser adoption of brand safety controls, though fraud schemes rose 70% YoY. Academic research questioned whether current moderation approaches could build legitimacy: a CSCW 2022 study found expert panels more trusted than algorithms, while an SoK paper argued for a shift to collaborative human-AI systems. Critical assessments mounted: News Media Alliance documented how brand safety tools mislabeled reputable publishers, and researchers highlighted that AI-driven tools remained unable to deliver on promises of sophisticated contextual judgment.
  • 2022-H2: Vendor maturity confirmed with live deployments across gaming, programmatic, and podcast advertising. Spectrum Labs (Riot, Wildlife Studios) processed 5B requests daily; Teads+IAS achieved 99% brand safety ratio. Academic and vendor research showed dual progress and limitations: CM-Refinery framework reduced annotation by 92.54% while improving accuracy; Twitter field experiments validated engagement gains from moderation (+13%/week). Consumer demand remained strong (88% support safe placements). Critical evidence persisted: AEI report documented AI struggles with context and subjectivity; independent analysis showed tool limitations despite scale. By year-end, industry consensus solidified that moderation required hybrid human-AI systems deployed at enterprise scale, yet algorithmic solutions remained insufficient for nuanced judgment.
  • 2023-H1: Regulatory scrutiny intensified with EU Digital Services Act requiring platform transparency on moderation accuracy metrics. Vendor deployments expanded with DoubleVerify supporting major advertisers (Merck in 60 markets, Airbnb in LatAm, Amazon Prime Video), and IAS achieving 25x brand safety gains at Guardian Health & Beauty and enhancing YouTube integration with 30+ language support. Academic critical assessment deepened: Oxford ethics framework argued AI moderation must include user appeals and transparency but acknowledged current systems fail these standards; hybrid systems remained necessary despite acknowledged limitations. Tech Mahindra's scaled deployment of 60,000 moderators with AI support evidenced continued reliance on human judgment for context and nuance.
  • 2023-H2: Vendor ecosystem showed robust growth with IAS reporting 13% YoY revenue growth ($113.7M Q2) and expanding brand safety measurement to TikTok (30+ markets), Facebook/Instagram Reels, and YouTube Shorts; Google announced 99% brand safety effectiveness for YouTube with CPMs 80% higher using exclusions, signaling platform-level deployment maturity. Critical evidence mounted: peer-reviewed Reddit study documented that moderators lack foolproof AIGC detection tools and rely on heuristics, with 20% of popular subreddits already restricting AI-generated content—highlighting practical limitations despite vendor scale. Industry focus shifted to combating AI-generated 'made-for-advertising' content (21% of programmatic spend) with DoubleVerify and IAS introducing AI-driven detection tools. The period confirmed that moderation remained a hybrid, operationalized practice at scale despite unresolved detection and contextual judgment challenges.
  • 2024-Q1: Vendor innovation focused on AI-generated content threats: DoubleVerify launched first-to-market pre-bid MFA tiered categories combining AI and human auditing to classify AI-made sites as High/Medium/Low risk, directly addressing the explosion of generative AI content used in ad fraud. Simultaneously, critical assessments documented persistent limitations: NTU research identified dataset biases and censorship risks in AI filtering systems; a Northeastern study found 80%+ of US/UK/Canada respondents worried AI chatbots lack context and understanding; investigative reporting detailed moderation effectiveness (80% repeat-violation blocking) alongside human moderator limitations and trauma. Consumer demand remained strong (92% of UK surveyed said appropriate ad adjacencies important), and Microsoft's launch of Community Sift for gaming showed continued platform expansion of AI moderation. The quarter confirmed that moderation in 2024 remained a paradox: vendors deployed increasingly sophisticated AI tools and demand stayed high, yet human judgment, with all its limitations, remained central to practice.
  • 2024-Q2: Vendor platforms continued expansion with IAS extending brand safety to TikTok (Category Exclusion, Vertical Sensitivity segments), Pinterest (39 countries, 40 languages), and introducing misinformation tracking aligned with GARM—reflecting demand for stricter controls and regulatory alignment. DoubleVerify achieved MRC accreditation for CTV brand safety measurement, a third-party validation signal. However, critical evidence mounted: DoubleVerify's own brand safety scores on X/Twitter displayed incorrectly for 4.5 months, a documented operational failure of vendor tools; UK regulator Ofcom began testing platform AI classification accuracy on sensitive material, signaling government scrutiny of tool limitations. By quarter-end, practice maturity was clear: vendors achieved operational scale across major platforms with enterprise adoption, yet regulatory bodies and independent assessments continued documenting accuracy gaps and over-reliance on algorithmic categorization.
  • 2024-Q3: Vendor ecosystem continued maturation with Zefr expanding misinformation category measurement on YouTube and Microsoft announcing Azure AI Content Safety as successor to deprecated Content Moderator, signaling platform-level evolution. Market demand remained strong: WARC survey found 60% of 100 programmatic experts cite brand safety as top concern, and Adobe consumer research showed 94% of US respondents concerned about election misinformation. However, critical evidence dominated Q3: Adalytics investigation uncovered major brand ads on unsafe user-generated content pages (Fandom, Tumblr) rated brand-safe by Integral Ad Science and DoubleVerify despite containing racial slurs and hate speech, exposing systemic classification failures; TaskUs practitioner analysis documented that AI continues to struggle with sarcasm and linguistic nuance in moderation. By quarter-end, consensus solidified that brand safety tools had achieved operational deployment at scale despite acknowledged limitations in contextual judgment.
  • 2024-Q4: Vendor consolidation accelerated with DoubleVerify capturing 70% of displaced Moat advertiser RFPs (P&G, Google, BlackRock) following Oracle's exit, signaling market power concentration. Research advances showed multimodal LLMs achieving F1 scores of 0.91 for brand safety classification with superior performance over traditional methods. However, critical evidence mounted: a December Adalytics investigation alleged Fortune 500 brands' ads appeared next to pornography/racist content despite vendor brand-safe classifications, raising systemic effectiveness questions. Industry debate shifted toward brand suitability frameworks over blocklists, with practitioners arguing research supports contextual relevance over over-blocking. GARM closure in August created regulatory uncertainty but standards persisted in vendor tools.
  • 2025-Q1: Vendor innovation continued with Scope3 launching AI-agent-based competitor to DoubleVerify/IAS, while research advances (ICCV 2025) documented multimodal LLM effectiveness. However, critical signals dominated the quarter: Adalytics report found major brands' ads on CSAM-hosting sites despite vendor protections, triggering U.S. Senator inquiries and forcing DoubleVerify into rapid remediation. Meta's rollback of fact-checking and hate speech moderation shifted responsibility to advertisers, with Forrester research showing 59% of executives believe consumers care less about brand safety. Brand Safety Institute analysis documented that 69% of marketers view brand safety protocols as overapplied. By March 2025, practice maturity was clear—vendors achieved enterprise scale and platform integration—but regulatory scrutiny intensified following deployment failures and platform policy shifts reduced industry confidence in automated moderation as a reliable solution.
  • 2025-Q2: Vendor expansion continued with IAS launching pre-bid brand safety on Nextdoor with multimodal AI analysis, demonstrating platform ecosystem growth despite mounting evidence of tool limitations. Peer-reviewed research (Hertie School) documented systematic over- and under-moderation in OpenAI/Google/Amazon APIs with bias against marginalized communities, while analyst assessments quantified $2.8B annual publisher revenue loss from aggressive keyword blocklists. Industry rhetoric shifted toward "brand smartness" and performance optimization, with vendors reframing tools as campaign-planning inputs rather than content filters—implicitly acknowledging that static classification had reached practical limits. DoubleVerify faced legal threats from watchdog groups over tool efficacy claims, signaling escalating vendor-critic tensions. By June 2025, moderation remained operationalized at enterprise scale but with unresolved tensions between vendor innovation and documented deployment harms.
  • 2025-Q3: Market expansion confirmed with global AI content moderation market valued at $2.69B (2024), projected 12.4% CAGR to $9.8B by 2035. Vendor consolidation deepened via Novacap's $1.9B IAS acquisition (September) signaling strategic value of independent measurement. Transparency backlash intensified: advertisers and industry experts demanded detailed disclosure of classification accuracy from DoubleVerify and IAS following sustained criticism of over-blocking and tool limitations. Generative AI emerged as systemic brand safety threat: 100% of industry professionals acknowledged AI brand safety/misinformation risks, with 88.7% calling the risk moderate to significant. By September 2025, practice maturity was unambiguous (mandatory enterprise-scale deployment with measurable fraud reduction), yet legitimacy remained contested due to systematic over-blocking, bias against marginalized content, and failures against novel threats.
  • 2025-Q4: Platform expansion accelerated with IAS and DoubleVerify launching brand safety measurement on Meta Threads (400M monthly active users) and IAS integrating with TikTok Pangle (2.9B daily active users across 380,000 global apps), confirming multi-platform vendor ecosystem maturity. Large-scale advertiser survey (22,000 consumers, 1,970 marketers) documented 65% of advertisers expressing brand suitability concerns in walled gardens, with 57% of consumers reporting AI-generated content exposure on social media. Critical assessments intensified: Brand Safety Institute identified traditional blocklists as deprecated and fraud detection inadequate against AI agents; Mantis case study found contextual AI reducing over-blocking from 64% to 31%, doubling premium inventory access; shareholder lawsuit against DoubleVerify alleged bot detection failures and misled investor claims. By end-2025, moderation remained mandatory enterprise-scale practice with clear platform coverage and market-documented adoption, yet vendor credibility eroded amid contested effectiveness and mounting evidence that static blocklist categorization had reached practical limits.
  • 2026-Jan: Vendor ecosystem evolution continued with DoubleVerify launching AI-driven Authentic Streaming TV product (targeting $1B quarterly waste in CTV programmatic), while academic research exposed systemic inconsistencies—4,352-article study found significant classification discrepancies among DoubleVerify, IAS, and Oracle. Industry adoption remained strong (87% of media experts cite brand safety essential), but critical tensions mounted: IAS survey found 53% concerned about AI-generated content adjacency, New America think tank documented that automated tools struggle with contextual nuance and dataset bias, and news publishers reported 40-60% of inventory over-flagged as unsafe despite IAS research showing 70% of keyword blocks were unnecessary (though contextual AI trials achieved 98% accuracy). Market expansion confirmed with the moderation market at $1.5B (2024) and an 18.6% projected CAGR, yet regulatory enforcement (EU DSA fining Platform X €120M) and practitioner audits revealed persistent accuracy limitations and vendor credibility challenges.
  • 2026-Feb: Vendor consolidation and platform expansion continued with DoubleVerify reporting 14% YoY revenue growth ($748.3M) and 60% YoY acceleration in social activation, while launching CTV measurement for LinkedIn—signaling strong enterprise adoption despite mounting credibility challenges. FTC investigation into IAS for alleged advertiser boycotts and shareholder lawsuit against DoubleVerify alleging overbilling and false capability claims exposed regulatory and ethical risks. Peer-reviewed research advanced AI efficacy (GPT-4 F1 scores of 66.46-77.09 for sensitive content), yet independent research (New America) and publisher adoption of alternative vendors documented persistent limitations of legacy blocklist systems, with $4B in annual CTV ad spend misplaced due to brand safety gaps. By month-end, moderation remained mandatory at enterprise scale with clear market growth, yet vendor legitimacy faced compounding pressures from regulatory scrutiny, credible overbilling allegations, and evidence that static categorization had reached practical limits.
  • 2026-Apr: Platform-scale enforcement confirmed at new highs: TikTok's Q4 2025 transparency report documented 175M videos removed globally with 99.1% proactive detection and 93.4% removal within 24 hours, while AWS Rekognition Content Moderation reached GA with multi-customer deployments processing millions of assets daily (a minimal sketch of this API follows the list). A structural gap in brand safety tooling was exposed by DoubleVerify's AutoBait investigation, which uncovered a 200+ domain AI-generated made-for-advertising network evading detection at scale—demonstrating that moderation systems built for traditional content remain unprepared for synthetic media adversarial tactics. Cross-platform AI content labeling requirements from Meta, Google, and TikTok took effect in 2026, adding a new compliance layer that legacy classification pipelines were not designed to enforce.
  • 2026-May: Vendor credibility and systemic limitations under renewed scrutiny. DoubleVerify achieved first MRC accreditation for TikTok SIVT detection (April 2026), signaling independent validation of measurement accuracy at platform scale. However, critical assessments intensified: Global Voices investigation documented that only 42 of 2000+ African languages appear meaningfully in LLM training (~98% essentially invisible); TikTok enforcement in Kenya climbed from 450K removals (Q1 2025) to 592K (Q2 2025). University of Queensland peer-reviewed research found consistent political bias in LLM-based content moderation independent of overall accuracy. Singapore regulator (IMDA) documented that platforms fail to proactively detect CSAM and terrorism despite commitments. Meta/Instagram mandatory AI-content labeling on Reels (April 30, 2026) and DoubleVerify's new "AI SlopStopper" capability show vendor and platform innovation, yet regulatory fragmentation and adversarial synthetic media tactics remain unresolved. Regulatory enforcement intensified further: April 2026 VLOP enforcement data documented 2.0-2.5M moderation actions/day across 8 platforms driven by DSA compliance, the EU fined Meta for election disinformation (40% higher organic reach for unverified claims, 62% minority-language accuracy), and the European Ombudsman found Commission maladministration in X risk-assessment transparency. The U.S. TAKE IT DOWN Act deepfake compliance deadline (May 19, 2026) passed, requiring AI-driven detection infrastructure across platforms including thousands without existing capability. Meta's AI cleanup deleted millions of Instagram accounts for bot/spam activity, with documented false positives exposing system-level limitations at scale.
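For reference, the managed moderation service noted in the 2026-Apr entry exposes an API of roughly this shape. A minimal boto3 sketch; the bucket, key, and confidence threshold are placeholders:

    import boto3

    rekognition = boto3.client("rekognition")

    def moderation_labels(bucket, key, min_confidence=60.0):
        """Return (label, parent category, confidence) triples for an
        image in S3, via AWS Rekognition Content Moderation."""
        response = rekognition.detect_moderation_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            MinConfidence=min_confidence,
        )
        # Each returned label carries a name, a parent category, and a
        # confidence score; only labels above min_confidence come back.
        return [(label["Name"], label["ParentName"], label["Confidence"])
                for label in response["ModerationLabels"]]

    # Usage (placeholder bucket and key):
    # for name, parent, conf in moderation_labels("my-bucket", "upload.jpg"):
    #     print(f"{parent}/{name}: {conf:.1f}")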