Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain, plotted on a scale from Bleeding Edge to Established; hover for a brief summary, click for more detail.

Advertising creative generation & testing

TIER

Good Practice

TRAJECTORY

Stalled

AI that generates ad creative variants and tests them for performance, iterating toward higher-performing combinations. Includes automated A/B creative testing and variant generation; distinct from image generation in creative media, which produces general imagery rather than performance-optimised ads.
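
The mechanism behind this loop is easiest to see in miniature. Below is a minimal sketch of creative variant testing as a Thompson-sampling bandit, assuming a simple clicks-over-impressions reward; the class name, priors, and simulated CTRs are illustrative only and do not reflect any platform's actual implementation.

```python
import random

class CreativeVariantTester:
    """Thompson-sampling allocation across ad creative variants.

    Each variant's click-through rate is modelled with a Beta posterior;
    impressions are routed to the variant whose sampled CTR is highest,
    so traffic shifts toward better performers as evidence accumulates.
    """

    def __init__(self, variant_ids):
        # Beta(1, 1) prior: no clicks, no impressions observed yet.
        self.stats = {v: {"clicks": 0, "impressions": 0} for v in variant_ids}

    def choose_variant(self):
        # Sample a plausible CTR for each variant from its posterior
        # and serve the variant with the highest sample.
        def sample(s):
            return random.betavariate(1 + s["clicks"],
                                      1 + s["impressions"] - s["clicks"])
        return max(self.stats, key=lambda v: sample(self.stats[v]))

    def record(self, variant_id, clicked):
        s = self.stats[variant_id]
        s["impressions"] += 1
        s["clicks"] += int(clicked)

# Simulated run: variant "b" has the higher true CTR, so it should
# end up receiving most of the traffic.
true_ctr = {"a": 0.019, "b": 0.024}  # CTRs echo the EUR 150k test cited below
tester = CreativeVariantTester(true_ctr)
for _ in range(50_000):
    v = tester.choose_variant()
    tester.record(v, random.random() < true_ctr[v])
print({v: s["impressions"] for v, s in tester.stats.items()})
```

Production systems are far more elaborate, but the core cycle (generate, allocate, measure, reallocate) is the same one this entry describes.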

OVERVIEW

Advertising creative generation and testing reached institutional deployment maturity across major platforms by February 2026, yet effectiveness remains fragmented and contingent on implementation excellence. Platform-native solutions (Google Performance Max and Meta Advantage+) dominate: Performance Max accounts for 43% of Google Ads budgets, and Advantage+ reaches 4M advertisers. Generative AI has expanded the scope from variant selection to full creative production (video, copy, image generation); AI-generated video now appears in 57% of online ads, and small brands lead adoption (45% plan AI-built creatives by 2026 vs. 36% of enterprises). The fundamental tension persists: platforms deliver strong ROAS signals and real deployment wins (a 623% purchase increase documented in January 2026, a 40% ROAS lift validated in February 2026), yet 61% of advertisers report no meaningful results from AI creative tools, exposing a massive execution gap. Context-dependent performance boundaries sharpen the picture: Advantage+ outperforms at high budgets ($3K+, 23% higher ROAS) but underperforms at low budgets (31% worse), revealing where automation succeeds and fails. Peer-reviewed research confirms AI matches average human creativity but cannot surpass elite creative talent. The practice is now an operational necessity (nearly all mainstream advertisers run automated creative testing), but predictable business value remains elusive. Adoption barriers are sharpening: only 29.7% of organizations have moved AI creative to production; data quality and system integration challenges block 42% and 41% of organizations respectively; default platform guidance leads new advertisers to failure rather than success; and consumer perception risks around AI-generated advertising mount. Success increasingly depends on advertiser organizational maturity, budget scale, and implementation capability rather than on platform capability alone.

CURRENT LANDSCAPE

By February 2026, automated creative testing and AI-powered ad generation have achieved institutional deployment across major platforms, with clear leadership consolidation and widening performance divergence. Platform metrics remain strong but bounded: Google Performance Max, at 43% of all Google Ads budgets, continues to deliver 27% average conversion increases and is now used by 71% of advertisers (up from 60% in 2024), with the platforms delivering 51B impressions and 19.9% spend growth. Real-world case studies from February 2026 document continued wins: Joybird achieved 40% higher ROAS, a 95% revenue lift, and 52% more clicks vs. Smart Shopping; digital agencies report 623% purchase increases and 7.69 ROAS on Performance Max deployments. Peer-reviewed research (Université de Montréal, January 2026) confirms that AI models (GPT-4, Claude, Gemini) exceed average human creative performance on linguistic tasks, though elite human creativity still leads. Production efficiency accelerates: hybrid AI-scripted creative workflows achieve 280% output increases and 65% cost reductions at 2.8% CTR. However, critical execution gaps are now documented: a FreeWheel survey shows 61% of advertisers have not seen meaningful results from AI creative tools despite 41% adoption; a Meta Advantage+ analysis of 312 DTC campaigns reveals 23% ROAS outperformance at $3K+ budgets but 31% underperformance below $1K, exposing context-dependent automation; and default platform guidance produces failures for new advertisers (reduced control, wasted spend, poor conversion tracking) rather than successes. Marketer sentiment also shows friction: 33% adoption of AI for creative development vs. 43% for other AI applications, indicating fragmented uptake. February 2026 data confirms a dual-track marketplace in which advanced practitioners extract genuine incremental revenue while the majority struggle with execution basics, budget thresholds, data quality, and consumer perception risks around AI-generated advertising. The investment thesis remains sound, but advertiser organizational maturity, budget scale, and implementation excellence, not platform capability, determine the probability of success.

TIER HISTORY

Research: Jan-2023 → Jan-2023
Bleeding Edge: Jan-2023 → Jul-2023
Leading Edge: Jul-2023 → Apr-2024
Good Practice: Apr-2024 → present

EVIDENCE (93)

— RedClaw benchmarks from 200+ accounts with $50M+ managed spend show 15-25% CTR improvements from AI creative adoption; creative quality identified as primary competitive lever.

— Comprehensive comparison of 10 AI creative testing platforms using synthetic audience methodology; benchmarked 80-95% accuracy, replacing $25k-$100k pretests with 30-minute testing cycles.

— EUR 150,000 budget test comparing AI vs human creative shows AI wins on CTR/CPC (2.4% vs 1.9%, 22% savings) but human wins on conversion (3.8% vs 2.9%); outcomes are sector-dependent (a sketch of how CTR gaps like these are significance-tested follows this list).

— Veo generative video integration into Ad Studio; generates 10-second videos from images; voiceover rollout for Performance Max; Demand Gen video enhancements show 16% conversion lift.

— Synthesis of two peer-reviewed studies (500M+ impressions, 105K field tests) showing AI ads achieve higher CTR (0.76% vs 0.65%), but hybrid ads perform worst; disclosure reduces effectiveness by 31.5%.

— Independent academic research on AI-generated vs human-created ad creative. Quasi-experimental study of 4,633 sibling ads (16B impressions) finds AI images outperform when not perceived as AI.

— Google extended AI Max to Shopping campaigns (April 30, 2026) with text customization generating ad copy tailored to shopper queries; expands automation layer across Google ad formats.

— Critical adoption paradox: 91% use AI but only 41% can prove ROI (down from 49% in 2025); only 7% have embedded AI for measurable outcomes. Data infrastructure is identified as the dividing line.
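
Several items above turn on small CTR gaps measured at scale (2.4% vs. 1.9% in the EUR 150,000 test; 0.76% vs. 0.65% across 500M+ impressions). As a rough illustration of how such gaps are checked for significance, here is a minimal two-proportion z-test sketch; the impression counts are hypothetical, chosen only to show that modest CTR differences become decisive at ad-platform volumes.

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference in CTR between two ad variants."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical impression counts; the CTRs mirror the EUR 150k test above.
z, p = two_proportion_z_test(clicks_a=2400, n_a=100_000,   # AI arm, 2.4% CTR
                             clicks_b=1900, n_b=100_000)   # human arm, 1.9% CTR
print(f"z = {z:.2f}, p = {p:.3g}")  # a gap this size at this volume is decisive
```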

HISTORY

  • 2023-H1: Google Performance Max and Meta Advantage+ establish mainstream automated creative testing and variant selection. Early deployments show strong ROAS improvements (KEH Camera +76.5%, Culligan franchises with sustained gains), yet adoption remains concentrated among performance-focused advertisers and agencies. Privacy restrictions and cultural resistance to algorithmic creative control remain barriers.
  • 2023-H2: Generative AI enters ad creative ecosystem via Meta's AI Sandbox and AWS SageMaker, expanding creative testing from variant selection to full creative generation. Mainstream adoption accelerates (83% of creative professionals using ML tools by Nov 2023). Performance Max faces real-world criticism for performance mixing and algorithmic opacity, whilst practitioners develop systematic diagnostic frameworks for troubleshooting campaigns. Creative testing becomes institutionalized as structured methodology across platforms.
  • 2024-Q1: Performance Max and Advantage+ move to production-at-scale with exceptional deployment metrics across consumer verticals (839% ROAS, +32% ROAS improvements documented). Industry adoption broadens: 78.2% of 3,000+ retail campaigns use AI-driven Target ROAS bidding. Both platforms commit to full automation (Meta targeting fully automated AI-driven ads by 2026). Critical assessment emerges: practitioners document persistent data transparency and creative-quality evaluation limitations alongside real business wins, sharpening the distinction between algorithmic spend optimization and genuine creative improvement.
  • 2024-Q2: Google launches AI-powered creative asset generation, brand guideline integration, and image editing for Performance Max (5x faster production reported). Agency adoption reaches 91% (Forrester), with 49% using generative AI for dynamic asset optimization. Meta commits to full creative automation by 2026 at Performance Marketing Summit. However, critical field assessments surface persistent problems: hotel campaigns show 95.7% brand keyword cannibalization despite algorithmic promises, and invalid bot traffic affects 5-15% of search clicks. Tension sharpens between genuine creative improvement and algorithmic spend reallocation.
  • 2024-Q3: Google expands creative asset generation and reporting to App and Display campaigns (July 2024), moving beyond Search-only. Performance Max deployments continue across recruitment and consumer verticals. Standalone AI creative tools (AdCreative.ai) mature as accessible alternatives to platform-native solutions. Creative testing methodology solidifies as standard practice rather than bleeding-edge innovation. Underlying tension persists: platforms deliver measurable ROAS but field evidence continues to show data-quality dependency and algorithmic brittleness in complex scenarios.
  • 2024-Q4: Meta's Andromeda AI system reaches production scale, driving 22% ROAS gains for Advantage+ adopters; Meta aggressively pushes platform-default automation, reducing advertiser control. Real deployments continue (Arta Tradiției: 251% conversion lift; Trellis: 60% CTR gains with AI image generation). However, critical December 2024 neuroscientific research (NielsenIQ BASES) reveals AI-generated ads elicit weak memory activation and negative consumer perception—a counterweight to optimization gains. Optmyzr's 9,199-account study shows strategy-dependent performance: configuration and data quality determine success; 55% of accounts don't meet conversion thresholds. Creative automation is now standard operational practice, but effectiveness is constrained by consumer perception risks, data infrastructure dependency, and platform-driven loss of advertiser autonomy.
  • 2025-Q1: Performance Max reaches 43% of all Google Ads spend (up from 22% in 2023), signaling platform shift toward mandatory AI-driven creative optimization. Independent validation (Kantar LINK AI) confirms 80% prediction accuracy and 15x sales lift for top-performing ads. Yet adoption remains fragmented: only 30% of enterprises have fully integrated AI in campaign lifecycles (IAB survey). Marketer adoption sentiment shows friction: 38% uncomfortable with AI-generated creative in production (Marketing Dive), citing consumer perception risks highlighted by neuroscientific research. Practice crosses production maturity threshold with measurable business returns, but deployment challenges sharpen around data quality, platform lock-in, and consumer backlash to AI-generated ads.
  • 2025-Q2: Adoption infrastructure reaches scale: Meta Advantage+ 4M advertisers globally (70% YoY growth); Advantage+ $4.52 ROAS (22% higher than traditional). KPMG survey shows 93% leadership confidence in GenAI ROI, with AI-agent piloting accelerating to 65%. Yet independent research reveals deployment reality: only 4% of orgs achieve consistent GenAI value, 68% moved less than a third of experiments to production, 11% at scale adoption (UC Berkeley CMR). Optmyzr study of 24.7K Performance Max campaigns confirms widespread use (82% run alongside other channels) but performance variance due to setup/creative quality. Kantar validates hybrid testing with Unilever case showing 57% asset coverage with AI. Deployment maturity increases but strategic fragmentation widens between high-performing deployments and majority struggling with basics.
  • 2025-Q3: Meta unifies Advantage+ API (May 2025, rollout through Q1 2026) to streamline campaign creation via automation. Fortune 500 adoption reaches 78% in marketing/comms (matched with IT/engineering). Video creative generation accelerates—30% of digital video ads powered by GenAI in 2024, projected 39% by 2026; small brands lead with 45% AI-built creatives expected by 2026. Yet critical failures emerge: growth marketing veteran documents pulling all clients from Performance Max after incrementality tests show <10% incremental revenue vs. 80-90% for non-branded search, challenging platform ROAS claims. BCG analysis reveals limited enterprise deployment of GenAI in creative development despite momentum, citing glitches, crashes, and process rethinking requirements. Practice stabilizes at operational maturity but reveals deployment fragmentation: headline adoption metrics mask real-world incrementality gaps and consumer perception risks around AI-generated advertising.
  • 2025-Q4: Meta's GEM and Lattice AI systems achieve production-scale deployment: 5% conversion lift (Instagram), 3% (Facebook), 12% ad-quality improvement. Google Performance Max maintains 43% spend share; FULLBEAUTY Brands achieves 45% ROAS lift with AI-generated creative variations. Yet platform-contradiction surfaces sharply: Wicked Reports analysis of 55,661 campaigns shows Advantage+ new customer acquisition cost doubled ($257→$528) between May 2024-2025. IAB survey reveals 58% plan increased AI creative investment, but 70%+ already encountered incidents (hallucinations, bias, off-brand); only 35% plan governance investment. Practice reaches institutional maturity—mandatory infrastructure for competitive positioning—but Q4 data exposes widening gap between platform ROAS claims and real-world incremental performance, with advertiser success contingent on implementation excellence and data maturity.
  • 2026-Jan: Peer-reviewed research (Université de Montréal) validates AI exceeds average human creativity but cannot match elite creators. Platform evolution accelerates: Meta launches 2026 feature updates with Predictive Budget Allocation (8-15% ROAS gains) and video generation (40% cost reduction); Google adds A/B testing features for Performance Max. Real deployments continue (digital agencies document 623% purchase increases, 7.69 ROAS). Yet the execution gap crystallizes: a FreeWheel survey reveals 61% of advertisers report no meaningful results from AI creative tools despite 41% adoption. Only 29.7% have moved AI creative to production; data quality and integration barriers affect 42% and 41% of organizations respectively. AI-generated video adoption accelerates to 57% of online ads. Practice consolidates at mandatory operational status but reveals severe organizational capability gaps and uneven value delivery.
  • 2026-Feb: New real-world deployments and adoption data confirm platform maturity with critical execution boundaries emerging. Joybird achieved 40% ROAS lift and 95% revenue increase with Performance Max in controlled testing (Feb 2026). Adoption continues: 71% of advertisers use Performance Max (up from 60% in 2024), with platforms delivering 51B impressions and 19.9% spend growth. However, nuanced analysis reveals automation works within bounds: Advantage+ outperforms manual at high budgets ($3K+, 23% higher ROAS) but underperforms at low budgets (31% worse), exposing deployment threshold constraints. Negative evidence surfaces: Performance Max guidance fails new advertisers due to reduced control and data-quality dependency. Efficiency gains quantified: hybrid AI-scripted creative production achieves 280% output increase and 65% cost reduction, but critical assessments document AI creative limitations including lack of originality and quality concerns. Practice reaches inflection point where adoption scales but effectiveness becomes increasingly contingent on advertiser sophistication and proper configuration.
  • 2026-Mar: Governance and quality control frameworks emerge as core practice infrastructure. Advertising Association publishes 8-principle best-practice guide (transparency, bias prevention, human oversight, brand safety); Google and Meta enforce AI content disclosure policies. Google AI Max text generation reaches general availability for all advertisers with a 14% conversion lift. Critical independent assessments deepen: Kantar's database shows GenAI-created ads score in the 54th percentile vs. the 65th for non-AI ads on average, documenting a fundamental creative quality challenge; Digital Applied's 50K+ variation dataset confirms a -8% to -14% conversion penalty on high-AOV products and 17% lower purchase intent when ads are identified as AI-made. A peer-reviewed study in Advanced Science finds unguided AI significantly underperforms human artists on creativity tasks.
  • 2026-Apr: Google PMax shifts to asset-level creative review (replacing campaign-level enforcement), enabling rapid testing cycles with 0-60s text processing and 2-4h image/video review. Advantage+ ROAS benchmarks confirmed at 4.52x (22% above manual) across 50K+ campaign dataset. Practice consolidates at institutional maturity with proven value for performance-optimized verticals (impulse, low-consideration), but quality constraints and consumer perception barriers sharpen boundaries around brand-building and premium categories.
  • 2026-May: Google AI Max expands with AI Brief (natural-language creative guidance), video voiceover generation, and Shopping campaign coverage, demonstrating continued platform capability enhancement. WFA research shows 78% of multinationals deploying AI-generated creative (87% images, 80% copy), with 61% citing gaps in transparency guidance. Deployment reality sharpens: independent agency audits of AI Max at general availability reveal wide variance (SMEC +13% revenue but +16% CPA; Monks 99% zero conversions), indicating performance contingent on implementation. Amazon creative testing benchmarks: AI-generated creatives deliver 22% higher CTR and 34% better ROAS across 61% of campaigns. Google's Veo integration into Ad Studio enables 10-second video generation from images with a 16% conversion lift on Demand Gen; AI synthetic-audience testing platforms now benchmark 80-95% accuracy, compressing $25K-$100K pretests into 30-minute cycles. A controlled test with a €150K budget confirms AI wins on CTR/CPC (2.4% vs 1.9%, 22% cost savings) but loses on conversion (2.9% vs 3.8% for human creative), with outcomes sector-dependent. An industry framework crystallizes: Meta creative now determines 50-60% of auction outcomes (up from 47% in 2017); the 3-3-3 testing methodology delivers 30% YoY CTR improvement. Critical barriers sharpen: consumer research documents AI ads as poor at emotional storytelling; Coca-Cola faced backlash despite strong metrics; disclosure concerns and authenticity perception gaps persist. Practice consolidates as operational infrastructure with measured capability gains, but deployment effectiveness remains highly dependent on advertiser execution maturity, budget scale, and brand category fit.

TOOLS