Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains, delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

Campaign optimisation & performance prediction

ESTABLISHED

TRAJECTORY

Stalled

AI that optimises programmatic ad targeting, predicts campaign performance, and recommends budget allocation. Includes predictive audience modelling and real-time bid optimisation; distinct from marketing analytics, which analyses historical performance rather than optimising future performance.

OVERVIEW

AI-driven campaign optimisation (automated bidding, budget allocation, and predictive audience targeting) is commodity infrastructure at a critical inflection point. Performance Max crossed 80% account adoption in Q2 2026, marking the transition from "should we use this?" to "how do we control it?" Smart Bidding runs 78% of Google Ads spend, and Kokai operates 85% of Trade Desk client budgets. The practice is established: platforms ship optimisation as the default, opting out requires active justification, and enterprise capabilities are mature. Real-world deployment evidence, however, exposes fundamental limitations. Independent audits show vendor metrics overstate true incrementality by 2–5x; holding companies (Publicis, Omnicom) are auditing and rejecting black-box platforms, shifting $26B annually into private deals; and 73% of advertisers report campaigns stuck in learning mode with 40–60% CPA inflation. The defining tension has crystallised: adoption metrics indicate maturity, but practitioner outcomes show that "set and forget" automation fails without active human oversight. Full-automation narratives collapse at scale (>$100K/month) and for segments without sufficient conversion volume. Campaign optimisation is how digital advertising now operates; the open question is whether platforms will deliver better control and transparency, or whether the market will bifurcate toward hybrid human-oversight workflows.

CURRENT LANDSCAPE

Adoption metrics signal market maturity: Performance Max reaches 4M advertiser accounts (4x growth in 12 months), 80% penetration among active advertisers, and 45% of Google Ads conversions. Smart Bidding operates 78% of Google Ads spend; Kokai, 85% of Trade Desk client spend. StackAdapt's 2026 survey (484 senior marketers, 6,000+ advertisers) reported that 75% expect budget growth and 84% report stronger year-over-year performance. Independent benchmarks show structural advantages for the $10K–$50K/month spend tier, with local business segments reporting 10–20% performance lift from PMax over search-only campaigns. However, adoption metrics mask deployment reality and measurement opacity.

Independent audits reveal fundamental credibility gaps. Cassandra's MMM analysis (253 models, $383M spend) found Performance Max delivered 4.64x incremental ROAS, but platform attribution overstates results by 2–5x due to organic intercept and retargeted-user conversions, establishing that vendor metrics systematically misrepresent true impact. Real-world practitioner evidence: 73% of advertisers report campaigns stuck in learning mode with 40–60% CPA inflation; Performance Max fails above $100K/month spend, over-indexing on remarketing and losing channel control; and new-advertiser segments with insufficient conversion volume face systematic ROI failures. Digital Applied's audit of AI Max (Google's new campaign type) found wide variance between the claimed 7% lift and independent tests: SMEC +13% revenue and Brainlabs +40% success rate, but Monks documented 99% zero-conversion impressions, evidence of optimisation failure patterns alongside wins.
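The audit findings above reduce to a simple correction model: a geo-holdout test measures true incremental conversions, and the ratio of platform-attributed to incremental conversions is the overstatement factor. A minimal sketch with hypothetical numbers (none of these figures come from the Cassandra audit):

```python
# Minimal geo-holdout incrementality sketch (illustrative numbers only).
# Treatment regions run the campaign; holdout regions do not. The lift of
# treatment conversions over the holdout baseline is the causal effect;
# comparing it with platform-attributed conversions yields the overstatement.

def incrementality(treatment_conv, holdout_conv, attributed_conv):
    """Return (incremental conversions, platform overstatement factor)."""
    incremental = treatment_conv - holdout_conv   # true causal lift
    factor = attributed_conv / incremental        # how far attribution overstates
    return incremental, factor

# Hypothetical campaign: platform claims 1,000 conversions, but the holdout
# shows most would have happened anyway ("organic intercept").
inc, factor = incrementality(treatment_conv=1_400,
                             holdout_conv=1_100,
                             attributed_conv=1_000)
print(inc, round(factor, 2))   # 300 incremental; platform overstates ~3.33x
```

An overstatement factor in the 2–5 range, as the audits report, means platform-reported ROAS should be divided by that factor before comparing channels.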

Market structural shift: major holding companies (Publicis, Omnicom) are actively auditing and rejecting black-box AI optimisation platforms, with $26B in annual inefficiencies identified; 90% of spend is now concentrated in private marketplaces rather than the open programmatic exchange. This signals loss of confidence in vendor optimisation despite commodity adoption. The result is a market bifurcating by sophistication: enterprise teams with data infrastructure and optimisation expertise continue extracting value; mid-market and SMB segments without sufficient conversion volume or expertise face persistent ROI barriers that full automation cannot solve, pushing the industry toward hybrid human-oversight workflows.

TIER HISTORY

Research        Jan-2016 → Jan-2016
Bleeding Edge   Jan-2016 → Jan-2018
Leading Edge    Jan-2018 → Jan-2019
Good Practice   Jan-2019 → Jul-2024
Established     Jul-2024 → present

EVIDENCE (133)

— Analysed €100K–€5M monthly accounts: predictive LTV lookalikes outperform demographic targeting by 22–40%; value-based bidding lifts repeat rates 8–14 points; creative volume drives 18–25% CPA reduction. Real-world stack performance across independent verticals.

— Google announced three production features: journey-aware bidding (learns from full lead-to-sales path), Smart Bidding Exploration expansion (27% more unique converting users), demand-led pacing (auto-adjusts daily spend by predicted demand). Demonstrates ecosystem maturity for automated performance prediction.

— Maps four automation layers with honest maturity assessments: bidding (high; 12–32% lower CPC), audience (medium-high), creative (medium), reporting (low-medium). Documents a hard threshold: below $5K/month spend, the system lacks sufficient data for a clean learning-phase exit. Reveals automation blindspots.

— Q1 2026 benchmark (21,000+ accounts): CTR +21% YoY but CPA +4.41%; reveals that optimisation trades cost efficiency for volume rather than delivering improvement. Shows the maturity challenge: engagement is rising via volume expansion, not efficiency gains (a structural barrier rather than a scalability win).

— Large-scale benchmark ($4B+ spend): PMax 67% of product ad budget; achieved ROAS parity with standard Shopping for first time; YouTube share of PMax impressions tripled YoY. Confirms Performance Max maturation and channel expansion.

— Reframes Performance Max as signal architecture rather than campaign problem. Identifies five signal layers (first-party data, conversion value, audience signals, creative, measurement) that determine outcomes. Optimization failures are signal failures, not algorithm failures.

— PubMatic AgenticOS: 30 fully autonomous agentic campaigns running globally; the Butler/Till Geloso Clubtails case study delivered 5x fee reduction, 40% more impressions, and 30% lower CPM. Establishes that agentic AI in campaign optimisation has moved from pilot to production.

— GrowthSpree managing $60M+ spend across 300+ companies: predictive audiences cut CPL 21% average vs. standard targeting; industrial automation SaaS achieved 35% CPL reduction in 8 weeks. Confirms channel-specific predictive performance gains.
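The data-floor findings in the evidence above ($5K/month learning-phase floor; other entries in this index cite a $15K/month recommended minimum) can be expressed as a pre-flight readiness check. A hypothetical sketch (the 30-conversion floor is an illustrative placeholder, not a figure from this index):

```python
# Pre-flight check for automated-bidding readiness. Thresholds are drawn
# from figures quoted in this index, except the conversion floor, which is
# a hypothetical placeholder; none of these are official platform limits.

MIN_MONTHLY_SPEND = 5_000    # below this, the learning phase rarely exits cleanly
RECOMMENDED_SPEND = 15_000   # spend level cited for reliable Smart Bidding
MIN_CONVERSIONS = 30         # illustrative data floor, not a sourced figure

def bidding_readiness(monthly_spend: float, monthly_conversions: int) -> str:
    """Classify an account's readiness for hands-off automated bidding."""
    if monthly_spend < MIN_MONTHLY_SPEND or monthly_conversions < MIN_CONVERSIONS:
        return "manual"      # insufficient data: manual bidding tends to win
    if monthly_spend < RECOMMENDED_SPEND:
        return "hybrid"      # automate, but with active human oversight
    return "automated"       # enough volume for a clean learning-phase exit

print(bidding_readiness(3_000, 50))    # manual
print(bidding_readiness(8_000, 120))   # hybrid
print(bidding_readiness(40_000, 600))  # automated
```

The three-way split mirrors the hybrid-workflow trajectory this entry describes: full automation only where data volume supports it, human oversight everywhere else.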

HISTORY

  • 2016: Google announced Smart Bidding (July), introducing machine learning-driven bid optimisation to AdWords and DoubleClick Search. The Trade Desk advanced programmatic AI capabilities in its IPO period (June). Market adoption concentrated among Google ecosystem advertisers; mid-market hesitation remained high due to transparency and data volume constraints.
  • 2017: Google released Maximize Conversions (May/June), with Trex reporting 73% conversion volume lift. Criteo's Predictive Search achieved 47% ROAS gains in retail tests. Trade Desk scaled to 9M requests/second. However, industry skepticism grew: 47% of senior marketers viewed AI as overhyped, and Gartner noted AI passed "peak of inflated expectations." Adoption remained concentrated among large advertisers with sufficient data volume.
  • 2018: Google expanded AI-driven campaign optimisation with responsive search ads, Smart Shopping campaigns (GittiGidiyor: 28% ROAS, 4% sales increase), and refined Smart Bidding algorithms; Harmoney and FirstPoint case studies confirmed 2–3x conversion lifts. Trade Desk launched Koa, an AI forecasting tool processing 9M+ queries/second. Academic research (arXiv, KDD) validated bidding optimisation algorithms. However, critical blockers surfaced: data accuracy remained unsolved in programmatic pipelines, 73% of practitioners struggled to extract reliable insights from scattered vendor data, and organisational resistance to automation persisted despite demonstrated ROI.
  • 2019: Campaign optimisation achieved mainstream platform status. Google expanded Smart Bidding to optimise for offline store visits and reported 70% advertiser adoption; Trade Desk matured Koa and unified ID infrastructure with partners (PubMatic) reporting efficiency gains. Academic research continued validating algorithmic advances in RTB optimisation and bid shading with production DSP deployments. Data accuracy and organisational resistance remained critical constraints on broader adoption.
  • 2020: Platform maturation and capability expansion continued alongside emerging measurement barriers. Google advanced Smart Bidding with 90-day predictive forecasting incorporating seasonality (November 2020), with case studies showing 20% conversion increases (iProspect) and 112% revenue growth (Japan Experience). Academic research advanced RTB algorithms for budget allocation between RTB and direct channels, deployed at scale. Predictive attention models (Lumen) entered production programmatic bidding. However, Forrester research revealed a critical adoption gap: while 94% of marketing executives scrutinize digital spend, only 33% can accurately demonstrate programmatic ROI. Practitioners documented specific deployment barriers: insufficient conversion volume, over-reliance on automation without testing, and unrealistic expectations. Regulatory headwinds (CCPA, GDPR) and walled gardens further constrained data availability, particularly for smaller advertisers below viable Smart Bidding thresholds.
  • 2021: Commodity platform status solidified. Google rolled out Performance Max to all advertisers and consolidated Smart Bidding API strategies (September 2021), signaling full platform automation as default. User-acquisition optimization using predictive LTV modeling became standard practice. However, critical tension emerged: Adalytics' mid-year research showed mixed targeting effectiveness across advertiser cohorts, contradicting vendor success stories and suggesting optimization value varied significantly by sophistication and data volume. Privacy regulation enforcement (GDPR, CCPA, Apple ATT) and walled gardens significantly constrained third-party data availability, creating headwinds for data-dependent optimization techniques.
  • 2022-H1: Performance Max expansion delivered measurable results: named case studies showed Dime Beauty 64% sales growth and Luxury Escapes 45% revenue growth. Predictive capabilities matured across platforms—academic research validated unified frameworks for campaign forecasting; GA4 predictive audiences entered production with documented 60% loss reduction results. Vendors adapted to privacy: Outbrain launched cookieless optimization tool. However, maturity gaps persisted: practitioner analysis noted algorithms fail during market volatility; industry roundtables emphasized need for human oversight despite platform push toward full automation.
  • 2022-H2: Campaign optimization achieved commodity status with full automation as default. Google simplified Display campaigns with AI features rolled into all campaigns (August); Performance Max added transparency features (optimization score, seasonality, data exclusions). Trade Desk-Disney partnership (July) and expanded mobile measurement demonstrated enterprise adoption of post-cookie optimization. Lytics launched Predictive Audiences with AI lookalike modeling. However, deployment reality revealed significant gaps: practitioner analysis identified widespread misconception about Target ROAS causing systematic campaign failures; balanced assessments confirmed real ROI but highlighted martech complexity, startup risk, and AI talent scarcity as persistent barriers. Platform push toward full automation continued despite evidence of incomplete value realization without human oversight.
  • 2023-H1: Campaign optimization platforms continued expanding beyond Google. Amazon DSP deployed advanced ML models for bidding and pacing optimization (April); Performance Max reported 18% conversion lift over baseline (February). However, practitioner skepticism deepened: agencies reported TikTok's Smart Performance Campaigns prioritized engagement over conversions; Performance Max campaigns relied on geo-testing to verify incrementality; trust deficits persisted around black-box optimization. Industry forecasts predicted AI would impact 50% of advertising revenue by end-2023, but deployment reality remained fragmented: real wins (Caraway success stories via Performance Max) coexisted with systematic failures from misconfiguration and overreliance on automation without strategic oversight. Data quality and organizational maturity gaps continued to differentiate successful from failed deployments.
  • 2023-H2: Campaign optimization continued commodity platform status with performance evidence accumulating across verticals. Kueez achieved 20% ROAS growth via Google Ads API and Performance Max (July); sports and entertainment venues saw double-digit ROAS on Performance Max (Paciolan case studies, October). Trade Desk published internal case study showing multi-channel optimization value analysis outperforms single-metric CPA optimization (November). GA4 Predictive Audiences deployment expanded for remarketing optimization (September). However, critical limitations surfaced: practitioner analysis documented black-box optimization design flaws (mixing brand/non-brand search to obscure underperformance, December); PPC practitioners documented systematic campaign failures requiring active intervention despite automation promises (August). Platform push toward full hands-off automation continued, but evidence demonstrated deployment still required strategic oversight and systematic problem diagnosis to realize value.
  • 2024-Q1: Campaign optimization vendor ecosystem continued expanding in response to third-party cookie deprecation. Google expanded Performance Max to hotel advertising with new Target ROAS and AI-driven bidding for travel goals (February). Microsoft launched global Performance Max GA, extending vendor-agnostic competition in automated campaign optimization (March). Real-world deployments continued: DeVry University achieved 13% conversion lift and 11% CPA reduction via AI fraud detection and invalid traffic filtering; German aid charity optimized Meta campaigns with broad national targeting driving 76% of donor acquisition results. GA4 Predictive Audiences adoption expanded among mid-market agencies reporting above-average results on likely-purchaser targeting. Research-industry collaboration advanced: Dollar General, Oracle, and Fidelity-affiliated researchers published supervised ML models for campaign conversion prediction. However, persistent challenges remained: platform transparency deficits and the tension between AI automation promises and required human strategic oversight continued to define practitioner experience.
  • 2024-Q2: Campaign optimization platforms advanced predictive capabilities while deployment challenges persisted. Google announced predictive conversion values beta (May), leveraging Vertex AI to forecast conversion values in real-time—addressing a fundamental gap for SMBs without sufficient historical data. Trade Desk's Kokai platform matured with documented seed-based audience targeting and relevance scoring features (June), with adoption reaching 75% of client spend by Q3, showing 20-34% CPA improvements. Real-world practitioner case studies highlighted Smart Bidding value: consolidation-based deployments achieved +345% conversion value and +22% ROAS increases across multiple production accounts. However, critical analysis exposed ongoing algorithm limitations: Performance Max campaigns exhibited 95.7% unintended brand keyword overlap despite optimization, with 22.87% average brand clicks persisting even with explicit exclusions. DataFeedWatch documentation identified four common Performance Max failures requiring manual intervention: data inflation from brand mixing, spending opacity, channel attribution gaps, and AI optimization blind spots. The window demonstrated both capability advancement (predictive conversion values, Kokai adoption at scale) and persistent transparency/control gaps that continued to require active human oversight despite vendor automation promises.
  • 2024-Q3: Campaign optimization platforms continued commodity status with real-world performance evidence accumulating despite persistent black-box concerns. Google Smart Bidding deployments (Teknosa, August) demonstrated ongoing adoption among mid-market retailers seeking ROAS improvements. Optmyzr's analysis of 14,584 international accounts (September) found Max Conversion Value most efficient Smart Bidding strategy but no clear winner between AI and manual approaches. WARC's programmatic survey (July) confirmed brand safety and fraud concerns as top blockers, with 76% of marketers implementing first-party data strategies for post-cookie optimization. Agency experience showed Performance Max delivering measurable efficiency gains (188% more conversions, 42% lower CPA in one case) but continued complaints about transparency—over 50% of companies reported Performance Max as challenging to manage. Practitioner analysis documented persistent deployment gaps: common mistakes in audience signals and data feeds, algorithmic inability to exclude unwanted brand keywords despite explicit exclusions, and ongoing requirement for human strategic oversight despite platform automation promises. The window demonstrated capability maturation with measured case-study evidence alongside substantial deployment barriers requiring active operator intervention.
  • 2024-Q4: Campaign optimization platforms reached inflection point: vendor capabilities continued advancing (Trade Desk Kokai scaling to 34% average CPA reductions, Google Performance Max 17% ROAS advantage over social), but practitioner sentiment sharply diverged from vendor claims. Optmyzr's large-scale study (9,199 accounts, 24,702 campaigns) revealed structural optimization trade-offs rather than clear winners. Spork Marketing achieved 22% ROAS improvements through manual campaign restructuring, confirming optimization still required operator expertise. However, simultaneous practitioner backlash surfaced widespread skepticism: agency executives reported persistent transparency failures in AI optimizations, concerns about incrementality of conversions, and severe negative outcomes from over-automation (90% CPA increases on Meta, 75% spend reductions on Microsoft after platform algorithm changes). The quarter demonstrated the practice at a critical juncture: commodity platform status with documented capability and real deployment gains, but mounting evidence that platform-driven full automation was failing without human strategic oversight. The divergence between vendor success stories and practitioner reality failure modes became the defining characteristic of Q4 deployment experience.
  • 2025-Q1: Campaign optimization reached an inflection point between platform capability advancement and practitioner confidence erosion. Google's documentation showed continued Performance Max success: de Bijenkorf achieved 2x conversion increases and 68% ROAS uplift with Web-to-App integration; aggregate data showed 25% average conversion value lift for advertisers upgrading from Standard Shopping. Performance Max adoption accelerated to 43% of total Google Ads spend by 2025, up from 22% in 2023. Summit Media's case study demonstrated that Google Smart Bidding matched or outperformed 12-year-old proprietary bid management platforms for named clients (The Range, Joules, H.Samuel), confirming platform optimization maturity. Google clarified API-based placement exclusion functionality after Optmyzr validation, addressing persistent advertiser control concerns. However, by quarter-end, practitioner sentiment diverged sharply from vendor claims. Digiday reported advertisers cutting Performance Max budgets by 50% and moving to open web channels, citing lack of transparency, high CPM volatility, and diminishing returns. Critical analysis surfaced AI hype concerns: over-automation risks, 70% of professionals feeling adoption pressure, and red flags around 'custom AI' without explanation. The quarter demonstrated the practice in transition: established commodity status with documented case-study evidence and rapid adoption metrics, but emerging practitioner skepticism and real-world evidence of adoption barriers, transparency deficits, and performance plateaus when relying on full automation without strategic oversight.
  • 2025-Q3: Campaign optimization platforms demonstrated real-world deployment wins alongside mounting evidence of incrementality and effectiveness challenges. Yanolja (Korean travel app) deployed GA4 predictive modeling to optimize audience discovery and campaign efficiency, showing continued adoption of AI-driven optimization. Trade Desk's Kokai platform achieved significant results: a U.S. food/drink brand realized 103% ROAS increase, McDonald's Canada achieved 40% CPA decrease, with 70%+ of Trade Desk client spend migrating to Kokai by Q2 2025, signaling rapid platform consolidation. Nielsen's July 2025 survey found 59% of global marketers rank AI-driven campaign optimization as the most impactful trend. However, practitioners documented serious effectiveness concerns contradicting vendor success claims. A growth marketing veteran pulled all clients from Performance Max after incrementality testing revealed less than 10% incremental revenue; broader analysis showed 75% of incrementality tests across their book of business produced poor results. AdPulse's September analysis distinguished real wins (Amazon Creative Assistant, Netflix AI-embedded ads, Dynamic Creative Optimization) from persistent limitations: full automation produces generic output, privacy gaps persist, tool fatigue from constant adoption pressure, and cost-value misalignment. By quarter-end, the practice showed established status with real deployment evidence and adoption scale, but widening divergence between vendor metrics and practitioner-measured outcomes, with incrementality questions emerging as the core blocker to broader deployment confidence.
  • 2025-Q4: Campaign optimization entered its most critical inflection point: highest vendor adoption metrics alongside lowest practitioner confidence. The Trade Desk's Kokai adoption accelerated to 85% of clients by November 2025, delivering industry-leading efficiency gains (26% average CPA reduction, 58% cost-per-reach improvement, 94% CTR uplift), confirming platform maturity and network effects from consolidation. Google AI Max adoption (released May 2025) showed strong early traction with 14-27% conversion lift pilots; market growth accelerated—U.S. retail media networks exceeded $62B in Q4 spending, with predictive targeting and next-best-action modeling deployed in production across channels. However, the quarter exposed fundamental implementation barriers to full automation. Kokai's forced migration from the legacy Solimar interface caused production outages (bugs, campaign launch failures, missing audiences, broken integrations), with critical deployments missing holiday-season windows—an inflection point revealing the brittleness of automation platforms despite adoption gains. Simultaneously, practitioner assessments documented that 84% marketer AI adoption coincided with systematic campaign failures, with signal loss from privacy initiatives, bot-traffic phantom conversions, algorithmic drift, and measurement misalignment as root causes. The practitioner consensus crystallized around a critical finding: full hands-off automation consistently delivered worse results than strategic human oversight. By quarter-end, the practice demonstrated established status with proven deployment at scale, but the evidence clearly showed that vendor claims of "set and forget" optimization were definitively untrue. The tier remained established, but the trajectory suggested the next phase would require either (1) vendors meaningfully improving algorithmic transparency and human control, or (2) market bifurcation toward hybrid "assisted optimization" workflows.
  • 2026-Jan: Campaign optimization platforms demonstrated continued commodity adoption with mixed ROI signals. Trade Desk Kokai reached 85% default usage across clients with measured performance gains (26% CPA reduction, 58% cost-per-reach improvement); named deployments (Georgia Aquarium omnichannel optimization, Specsavers 43% CPA reduction, Danone 1/3 conversion uplift) confirmed production scalability. However, January 2026 brought stark evidence of ROI realization gaps: eMarketer survey showed 82% adoption rate but 61% of advertisers report no meaningful results; CFO Dive analysis found only 12% of CEOs report both cost and revenue benefits with 56% seeing no significant financial gain. Critical practitioner analysis documented Smart Bidding limitations—manual bidding outperforms in low-data and tightly-controlled scenarios, revealing platform automation blindspots. The window crystallized the established tier status: platform capabilities demonstrably deliver results in specific deployment contexts (large-scale campaigns with sufficient data, omnichannel optimization use cases), but broader ROI realization required either algorithmic transparency improvements or hybrid human-oversight workflows.
  • 2026-Feb: Campaign optimization platforms faced intensifying credibility pressure as adoption rhetoric diverged sharply from deployment reality. StackAdapt industry report (484 marketers, 6000+ advertiser sample) showed 75% expect 2026 budget growth and 84% report stronger YoY performance, indicating continued commitment despite quality concerns. Performance Max adoption deepened—Digital Applied data showed it drives 45% of all Google Ads conversions with 18% CPA reductions. However, February produced mounting negative evidence that contradicted vendor success narratives. Five Nine Strategy's critical analysis exposed Performance Max's fundamental design flaw: it over-indexes on retargeting and treats all conversions equally, with a case study showing 1250% reported ROAS was actually 80% view-through conversions (true click-only ROAS: 250%)—confirming the practice remained plagued by measurement blindspots and inflated performance claims. Search Engine Land documented real deployment failure: a small chocolatier following Google's Performance Max advice spent $3000 for one purchase with $50 CPCs and nonexistent ROI. Industry assessment revealed the practice was fundamentally unsuitable for new advertisers or campaigns below scale thresholds. Most critically, a Wellows agency analysis citing Fortune research showed 95% of enterprise AI pilots fail to deliver significant ROI, and New Media and Marketing's Comscore-based analysis found programmatic spend growth intentions falling from 72% to 58% YoY—evidence that practitioner confidence in campaign optimization platforms was eroding despite continued commodity adoption. By February 2026, the practice remained at established tier with proven capability at scale, but mounting evidence of broad deployment failure in standard use cases intensified questions about whether "automation" was actually delivering ROI or simply moving spend across channels with phantom improvements.
  • 2026-Q2: Campaign optimization reached critical transparency and measurement inflection point. Independent benchmarking from Cassandra (253 MMM models across 59 advertisers, $383M spend) revealed Performance Max delivered 4.64x incremental ROAS but platform attribution overstates results by 2–5x due to organic intercept and retargeted-user conversions, establishing that published vendor metrics systematically misrepresent true campaign impact. Microsoft Advertising achieved Performance Max feature parity (April 2026) with NCA goal import from Google; ADAC Car Insurance reported 600% ROAS on new customer acquisition, signaling ecosystem convergence and cross-platform maturity. However, Q2 exposed mounting credibility crises: Trade Desk's Kokai faced material securities litigation (filed April 17, 2026) for undisclosed rollout failures and performance defects that contradicted public adoption claims; practitioner evidence (PPC Land, April 2026) documented specific Performance Max scale failures above $100K/month spend with over-indexing on remarketing and loss of channel control despite claimed automation. Google addressed practitioner misconceptions through official product guidance (March 2026) on Smart Bidding best practices, acknowledging that learning periods run 4–6 weeks (not 2–4), requiring $15K+/month minimum spend, and producing 22% higher CPA than optimized manual campaigns despite 18% conversion lift (WordStream analysis of 30,000+ accounts). Real deployment evidence continued: programmatic audio optimization (Ad Results Media, Q1 2026) delivered third-party validated brand lift (+4.1% awareness, +2.4% recall, +1.5% intent), confirming optimization impact at tactical level. eMarketer's analyst assessment (April 2026) documented 60% adoption rate but cited 62% of buyers struggling with setup complexity, data security, and transparency concerns. 
    May 2026 evidence adds adoption milestones and backlash: Performance Max crossed 80% account adoption (Pace Ad Spend Index), marking a paradigm shift from adoption debate to control and steering questions; 78% of Google Ads spend now runs Smart Bidding with cross-industry CVR improving to 4.40%; but major holding companies (Publicis, Omnicom) are auditing and rejecting black-box AI optimization platforms, identifying $26B annual inefficiencies and concentrating 90% of spend in private deals—signalling loss of confidence in automated optimization at enterprise scale. By quarter-end, the practice demonstrated established tier status with proven point-solution wins, but the evidence base conclusively showed that vendor-claimed "set and forget" automation failed at scale and required active human strategic oversight. The trajectory indicated the next phase hinged on whether platforms would improve transparency or the market would bifurcate toward hybrid workflows.
  • 2026-May: Google Marketing Live 2026 shipped three production bidding features: journey-aware bidding (learns from full lead-to-sales path), Smart Bidding Exploration (reaching 27% more unique converting users), and demand-led pacing (auto-adjusts daily spend against predicted demand). Real-world performance stack analysis across €100K–€5M/month accounts confirms structural advantages: predictive LTV lookalikes outperform demographic targeting by 22–40%, value-based bidding lifts repeat rates 8–14 points, and creative volume drives 18–25% CPA reduction—but the $5K/month data floor remains a hard threshold below which automated systems cannot exit the learning phase cleanly.
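The view-through decomposition from the February 2026 entry above is worth making explicit: when 80% of reported conversion value is view-through, click-only ROAS is the remaining 20% of the headline figure. A quick check of the case-study arithmetic:

```python
# Decomposing reported ROAS by conversion type, using the Five Nine
# case-study figures quoted in the Feb-2026 entry (1250% reported ROAS,
# 80% view-through). Assumes the view-through share applies to conversion
# value, as the case study's 250% click-only figure implies.

reported_roas = 12.50        # 1250% expressed as a multiple of spend
view_through_share = 0.80    # share of reported value from view-through

click_only_roas = reported_roas * (1 - view_through_share)
print(f"click-only ROAS: {click_only_roas:.0%}")  # click-only ROAS: 250%
```

The same one-line decomposition applies to any platform report that mixes view-through and click conversions without separating them.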

TOOLS