Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
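The index doesn't spell out how the "weighted maturity" behind each dot is computed. A minimal sketch of one plausible scheme, assuming each practice carries a numeric tier score weighted by its evidence count — all names and values here are illustrative assumptions, not the index's actual methodology:

```python
# Hypothetical weighted-maturity score for a domain's dot position.
# Tier scores and evidence-count weights are illustrative assumptions,
# not the index's documented method.

TIER_SCORE = {
    "Research": 0.0,
    "Bleeding Edge": 0.33,
    "Leading Edge": 0.66,
    "Established": 1.0,
}

def domain_maturity(practices):
    """practices: list of (tier_name, evidence_count) pairs.
    Returns a 0..1 position: 0 = bleeding-edge end, 1 = established."""
    total = sum(weight for _, weight in practices)
    if total == 0:
        return 0.0
    return sum(TIER_SCORE[tier] * weight for tier, weight in practices) / total

# A domain dominated by leading-edge practices sits right of centre:
score = domain_maturity([("Leading Edge", 88), ("Bleeding Edge", 12)])
```

Under this sketch, heavier evidence for a mature practice pulls the dot toward the established end of the axis; the real index may weight by recency or outcome quality instead.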

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Content planning & repurposing

LEADING EDGE

TRAJECTORY

Stalled

AI that generates content calendars, identifies topic opportunities, and repurposes existing content across formats and channels. Includes trend-driven editorial planning and automated content adaptation; distinct from autonomous content production which handles end-to-end publishing.

OVERVIEW

Content planning and repurposing has crystallized into a bifurcated market: enterprise vendors deliver production-scale tool maturity (HubSpot, Jasper, ClickUp, CoSchedule) while adoption outcomes remain structurally constrained by governance, measurement, and organizational readiness. The technical capability is proven — Jasper deployments achieve 1-day campaign turnarounds and 9x organic growth; Panasonic scaled Content Remix across thousands of pages with 3.5x lead generation improvement; SmartPubTools reached 112K monthly impressions via AI-assisted publishing; Averi's repurposing framework demonstrates 3.7x engagement lift and 65% production time reduction. Yet the April 2026 enterprise survey shows only 29% of organizations achieve significant ROI from generative AI; governance barriers jumped from 8% to 27% as the single biggest adoption blocker; 94% of B2B teams that scaled AI content volume saw performance decline, not improvement. The practice's maturity ceiling is organizational: brands that implement human-supervised workflows, quality gates, and measurable frameworks extract real value; most others generate polished mediocrity at scale. Knowlee's 2026 brief-automation architecture and Mangold's cost analysis reveal the underlying constraint—planning and review infrastructure require 3.5+ hours per piece and must be budgeted from inception, not treated as post-generation cleanup. The practice remains on leading edge for organizations with sophisticated planning discipline and human-in-the-loop governance.

CURRENT LANDSCAPE

Vendor platform maturity reached peak feature parity by Q2 2026. Jasper (1.8M+ active users, $180M ARR) reached 20% Fortune 500 penetration; HubSpot Breeze Content Agent automates multi-channel repurposing with 200-250% content volume gains and 68% time savings in production deployments; CoSchedule Mia handles ideation and brand-voice training; ClickUp's content calendar doubled Cartoon Network's output. Specific deployment outcomes: Cushman & Wakefield saved 10,000 hours annually; SmartPubTools reached 112K monthly impressions through AI publishing with revenue gates; Panasonic scaled Content Remix across 5 regions, achieving a 3.5x lead generation improvement. May 2026 ecosystem analysis shows critical structural tensions: 72% of B2B marketers view repurposing as critical to strategy, but only 61% actively use AI for cross-format repurposing, signaling a significant awareness-action gap. Market consolidation is visible: Chief Martech reports the Content Marketing category experienced 176 product removals in 2026 as AI labs absorbed functionality and incumbent platforms embedded capabilities. A strategic shift is emerging: brands are moving from volume-focused production to human storytelling and structured taxonomy for generative search visibility.

The maturity ceiling is organizational and infrastructural. WRITER's April 2026 enterprise survey (2,400 respondents) found 79% face adoption challenges and only 29% achieve significant ROI from generative AI. Governance barriers emerged as the dominant obstacle (27% of teams, up from 8% in 2025), preventing realization of efficiency gains. May 2026 data extends this finding: governance adoption lags production capabilities by 54 percentage points (37% detection vs 91% AI copy production); McKinsey research confirms only 1-in-10 organizations realize meaningful value from AI deployments. Human review remains the persistent constraint: 86% of marketers edit AI-generated content before publishing; editing averages 20-45 minutes per piece (3.5 hours total per article when including fact-check, E-E-A-T, SEO, and voice work) and represents a $150-400 final cost per piece — a planning infrastructure requirement, not optional post-generation cleanup. Negative adoption signals reinforce the picture: Lily Ray's analysis of 220+ AI content domains documents a boom-bust cycle where 54% lose 30%+ of peak traffic and 39% lose 50%+ within 12 months of initial gains; EyeSift's 600K-page study confirms editorial oversight is the determining variable in SEO ROI (edited content gains 30-80%, unedited loses 40-90%). AuthorityTech's synthesis of 8 independent research sources reveals the adoption paradox: 94% of B2B teams scaled content volume, but only 6% improved performance. Successful deployments implemented revenue-based measurement, quality gates before publishing, and human verification frameworks. The practice remains on leading edge for organizations with sophisticated planning discipline and realistic human-in-the-loop governance; mainstream adoption is constrained by governance infrastructure gaps, measurement failures, and hidden review-cycle costs that scale faster than planning discipline can manage.

TIER HISTORY

Research: Jan-2023 → Jan-2023
Bleeding Edge: Jan-2023 → Jan-2025
Leading Edge: Jan-2025 → present

EVIDENCE (88)

— Direct case study of HubSpot Breeze Content Agent and Content Remix deployment: 4-person team scaled blog 4→12 posts/month, email 2→6/month, social 8→28/week with 68% time reduction; productivity multiplier on standard tiers ($890/mo).

— Rigorous analysis of AI content repurposing boom-bust cycles: rapid scaling followed by steep decline; consistent pattern across industries; Google's Helpful Content Update targets automated content strategies, limiting sustained ROI.

— Critical analysis of State of Martech data: governance infrastructure adoption lags production capabilities by 54 percentage points; McKinsey finding of 80-point gap between deployment and value realization documents structural adoption ceiling.

— Spring 2026 CMO Survey: AI adoption, measured in ROI terms, is outpacing organizational readiness; talent gaps, system integration challenges, and measurement gaps identified as adoption barriers; leading orgs treat AI as an operating-model shift, not tool adoption.

— Structural 2026 trends reshaping content planning: multimodal generation collapsing workflows; hyper-personalization at the content layer driving 34% engagement lift; autonomous agents handling repetitive planning tasks; Answer Engine Optimization favoring structured, fact-dense content.

— HubSpot 2026 State of Marketing analysis: AI commoditized content production, driving strategic shift from volume-focused repurposing to human storytelling and structured taxonomy for generative search visibility.

— Chief Martech analysis shows Content Marketing category experienced largest outflow (176 product removals) as AI labs absorbed functionality and incumbent platforms embedded capabilities; documents consolidation and product-market fit barriers.

— Data-backed SEO analysis: editorial oversight is the determining variable in repurposing ROI; sites with substantively edited AI content gain 30-80% traffic while uncontrolled scaling triggers algorithmic penalties within 6-12 months.

HISTORY

  • 2023-H1: Early adoption in specialized vendors (Jasper 80K+ users) and integration into major platforms (HubSpot beta release). Content creation emerged as top AI use case in PR/comms (32% frequent use) and broader marketing (69% experimenting). Deployment examples: ProBlogger's 18-month CoSchedule integration. Significant quality and brand-voice concerns emerged as limiting factors for broader adoption.
  • 2023-H2: Product maturity accelerated with CoSchedule's dual AI-powered calendar release (Aug 2023) and consumer adoption expanding (31% Gen Z, 20% millennials experimenting). However, critical limitations became visible: Gannett paused AI-written articles due to quality failures; Cognizant and agency practice reports highlighted copyright risks, generic outputs, and ineffectiveness for strategic planning. Mixed outcome case studies (Tomorrow Sleep 10K% traffic vs. CNET factual errors) showed adoption heavily dependent on use case and human oversight. Human-in-the-loop remained essential.
  • 2024-Q1: Adoption accelerated toward mainstream: IBM data showed 42% of large enterprises deployed AI (38% on generative AI), while industry surveys reported 64% of marketers using generative AI tools and 84% reporting content creation efficiency gains (3 hours saved per piece). Dedicated repurposing tools matured (RepurposeMate 12K+ users). However, critical barriers remained: trust erosion from undisclosed AI content, regulatory pressure for transparency labeling, and public overconfidence in their ability to evaluate AI-generated text (3.26/5 survey score). Adoption remained heavily dependent on human oversight and clear governance frameworks.
  • 2024-Q2: Production deployments at scale achieved quantified results—IBM generated 200+ marketing assets with 1000+ variations using Firefly, achieving 26x higher engagement; Pfizer reported 15-20% engagement improvements. HubSpot renewed Content Hub (April 2024) with Content Remix feature for automated repurposing across channels. Simultaneously, adoption barriers hardened: DLA Piper found 48% of companies paused or rolled back AI projects due to privacy, integration, and regulatory concerns. Only 25% of executives successfully connected AI to CX goals (Adobe research). Brand safety failures escalated—Instacart, Selkie, Under Armour AI missteps highlighted quality and transparency risks. Over 70% of marketers invested in content calendars with increasing AI integration, but execution remained constrained by governance and quality assurance requirements. Practice consolidated around human-in-the-loop workflows: AI effective for content adaptation and variation generation, but strategic planning and editorial judgment remained essential.
  • 2024-Q3: Mainstream adoption confirmed through independent surveys: 90% of marketers use generative AI monthly (70% weekly) with 67% leveraging it for content creation (Basis Technologies); 75% of businesses deployed AI for text/content creation as their top use case (Pipedrive). Platform maturity advanced with HubSpot's Content Remix expansion and new Content Agent capabilities at INBOUND 2024; AI Writing Assistant category showed 170% growth 2022-2023 (G2). However, adoption barriers solidified into structural challenges: Gartner forecast 30% of GenAI projects will be abandoned post-POC by end 2025; Pipedrive found 48% of businesses cite knowledge gaps as primary blocker (with trust, privacy, and security concerns following). Implementation failures persisted—Vectara documented specific deployments that failed (Air Canada legal loss, 75% medication errors from ChatGPT). The practice remained on the bleeding edge for organizations executing at scale, but enterprises increasingly recognized that mainstream adoption required robust governance, transparency labeling, and technical infrastructure—not just technical capability.
  • 2024-Q4: Adoption metrics plateaued while quality and sustainability barriers became increasingly visible. Platform evolution continued (HubSpot, CoSchedule, Adobe) but failed to address core governance and quality assurance challenges. Vendor tools excelled at format adaptation and content variation generation, but organizations reported systematic failures in fully automated pipelines—projects halted due to quality inconsistency, sustainability concerns, and brand safety risks. Quality control failures escalated: Google Gemini's historically inaccurate and offensive outputs, and multiple brand missteps from inadequately reviewed AI-generated content, demonstrated that vendor tool maturity had not solved content accuracy and brand safety problems. Adoption barriers persisted at scale: 48% of businesses cited knowledge gaps as primary blocker; trust (40%), data privacy (27%), and security (26%) concerns remained structural obstacles. Gartner's 30% project abandonment forecast (by end 2025) gained credibility as organizations moved beyond pilots and encountered comprehensive governance, QA, and skills requirements that platform vendors alone could not address. The practice reached a bifurcation: early-adopter enterprises with sophisticated infrastructure and governance achieved quantified ROI (IBM 26x engagement, Pfizer 15-20% improvements); mainstream organizations struggled with quality assurance, governance scaling, and sustainability—accelerating a shift toward human-in-the-loop, supervised workflows over autonomous content production.
  • 2025-Q1: Mainstream adoption broadened (85% of marketers using AI for content creation, 90% planning increased integration), but user fatigue and project failure rates hardened as structural constraints. New vendor deployments achieved quantified ROI: ClickUp's AI calendar saw Cartoon Network double output, Finastra gain 40% efficiency, Vida Health increase productivity 50%; Hootsuite achieved 150% traffic growth via repurposing; Webs translated 200+ pages in 48 hours with 80% time savings. However, critical sustainability signals emerged: EY documented user fatigue with predictable, over-polished AI content; project failure rates remained severe (80% of AI projects fail, 48% never reach production); Gartner's 30% GenAI abandonment forecast tracked toward end-2025 deadline. The bifurcation persisted: well-resourced enterprises continued achieving measurable ROI through human-supervised workflows and comprehensive governance; mainstream organizations faced persistent barriers in data readiness, skills, and clear value alignment. Platform maturity had not solved organizational readiness or quality sustainability. The core repurposing capability—format adaptation and variation generation—delivered tangible efficiency gains, but could not substitute for strategic planning, editorial governance, or content quality assurance.
  • 2025-Q2: Vendor feature maturity continued accelerating—HubSpot Content Hub demonstrated 60% production time reduction with real-world deployments (40% faster publishing, 2X conversion lift); CoSchedule expanded Mia assistant capabilities; ReelMind introduced AI-driven predictive scheduling. Adoption broadened to 92% of large teams using AI-generated content with use-case segmentation: 62% for topic brainstorming, 53% for summarization, 44% for drafting, 32% for repurposing; practitioner adoption reached 88% with 70% planning expansion. However, project abandonment evidence hardened: 42% of companies abandoned AI projects due to strategic misalignment, cost miscalculation, and ROI measurement failures. Critical finding: human-written content generated 5.44x more traffic than purely AI-generated, reinforcing essential role of human-AI collaboration. Platform maturity had not solved structural barriers—vendor tool feature expansion masked persistent project failure patterns and organizational readiness gaps that constrained mainstream adoption at scale.
  • 2025-Q3: Market maturation became visible amid persistent project failure crisis. Content calendar software market valued at $2.53B in 2024, projected to grow to $8B by 2035 (11% CAGR), confirming sustained mainstream adoption and vendor ecosystem health. Jasper AI reached $80-120M annual revenue (from $45M in 2021), demonstrating sustained vendor growth. Practitioner workflows stabilized: AI-powered repurposing (converting single blog posts into emails, social posts, video scripts) became routine. However, critical reality check emerged: MIT's GenAI Divide study (155 executives, 350 employees, 300 projects) documented that 95% of AI pilot projects deliver no financial value or profit uplift—a systemic failure rate that reinforced previous cycles' patterns. Tool ecosystem maturity and adoption breadth continued, but organizational transformation barriers (skills, governance, ROI measurement) remained unresolved. The bifurcation persisted: leading-edge enterprises with sophisticated infrastructure achieving measurable ROI through human-supervised workflows; mainstream organizations struggling with sustainability and value realization despite vendor platform maturity.
  • 2025-Q4: Vendor platform maturity reached new peak while structural adoption ceilings crystallized. HubSpot launched Breeze Content Agent with multi-channel Content Remix automation; Jasper achieved 1.8M+ active users and $180M annual revenue with Forrester-validated 342% ROI and $2.2M annual savings; CoSchedule earned G2 High Performer recognition signaling customer validation. Feature expansion at scale demonstrated category-level ecosystem health and leading-edge deployment success. However, critical reality persisted: Vasco Consult (Dec 2025) synthesized MIT and McKinsey findings showing 95% of generative AI pilots still failed to deliver measurable business value, with workflow redesign gaps and content quality issues identified as fundamental blockers. The bifurcation hardened into a permanent market structure: leading-edge enterprises achieving quantified ROI through sophisticated human-supervised workflows, comprehensive governance, and workflow redesign; mainstream organizations remained constrained by organizational readiness gaps despite vendor tool maturity reaching near-peak feature parity. The practice had stabilized at leading-edge tier with durable organizational and operational barriers preventing broader mainstream promotion.
  • 2026-Jan: Adoption breadth reached saturation while strategic limitations and quality barriers became explicit. Practitioner adoption metrics remained strong: 94% of marketers plan to use AI for content creation in 2026, with 88% already using daily. However, critical barriers hardened into structural constraints: RAND/Gartner data confirmed 88% failure rate from pilot to production for AI agents (only 11% deployed at scale); AI detection tools triggered "Authenticity Tax" search penalties and content rejections, documented through case study showing 60% organic traffic loss from unedited AI content. Strategic shift visible among B2B leaders: moving from volume-focused production (traditional repurposing) to human storytelling and structured taxonomy optimized for generative search visibility. Vendor platform maturity (HubSpot Breeze, Jasper 1.8M+ users, CoSchedule Mia) continued supporting leading-edge deployments with measurable ROI, but mainstream organizations faced compounding barriers: implementation complexity, quality control requirements, emerging algorithmic penalties for low-fidelity content, and skills gaps in human-supervised workflows. The practice remained on leading edge—feature-rich and enabling category-level adoption among early-adopter cohorts—but increasingly constrained by quality, governance, and strategic planning requirements that purely AI-assisted workflows could not satisfy without substantial human direction and editorial oversight.
  • 2026-Feb: Market maturation reached crisis point—adoption breadth persisted but transformation barriers crystallized into quantified failure evidence. Vendor platform maturity continued: CoSchedule released Brand Profiles for consistent brand voice training, MindStudio documented teams reducing 20-hour production cycles to 2-hour review cycles (10-15 hours/week savings). However, independent research surfaced systemic adoption failure: Pertama Partners analysis (Feb 2026) synthesized RAND, MIT, McKinsey, and Deloitte data showing 80.3% overall AI project failure rate (33.8% abandoned, 28.4% deliver no value); 95% of custom AI pilots failed to deliver P&L impact; average failed project cost $4.2M-$8.4M. Whitehat SEO research documented the adoption paradox: 79% of organizations adopted generative AI (up from 33% in 2023), but only 6% were "high performers" extracting real business value—confirming that tool availability did not translate to organizational transformation. Critically, 56% of CEOs reported no revenue gains or cost savings from AI automation, citing data quality and skills gaps as primary blockers. Practitioner assessment reinforced barriers: SEONIB analysis found that automated content systems generated "perfectly readable" content achieving "absolutely nothing" in organic growth or brand authority, with strategic misalignment and "entropy of relevance" identified as core failure drivers. Quality and compliance barriers persisted: Fluid.ai assessment documented a Cloudera survey showing only 31% of enterprise AI in full production, with hallucination error rates (3-27%), data privacy risks, and governance gaps identified as deployment blockers. By end of February 2026, the bifurcation had hardened into a structural market condition: leading-edge enterprises with sophisticated governance, workflow redesign, and human-supervised infrastructure continued achieving quantified ROI; mainstream organizations remained locked by implementation complexity, quality assurance requirements, and organizational readiness barriers that vendor platform maturity alone could not address. The practice had reached stable leading-edge maturity with durable organizational adoption ceilings.
  • 2026-Apr: The adoption-value gap crystallised into hard numbers. AuthorityTech's synthesis of eight independent research sources found 94% of B2B teams scaled content volume with AI, yet only 6% reported improved performance, with SEO decline (31.4%) and content saturation cited as the dominant outcomes; 60% of Google searches now return zero-click results, further eroding the value of volume strategies. WRITER's survey of 2,400 respondents found only 29% achieve significant ROI from generative AI, and governance barriers jumped from 8% to 27% as the single biggest adoption blocker. Against this backdrop, disciplined deployments continued to demonstrate positive outcomes: SmartPubTools documented 514 AI-generated pages reaching 112,000 monthly impressions in 90 days via revenue-gated quality gates, and Jasper's 100,000+ business customer base (nearly 20% Fortune 500) confirmed production-scale vendor maturity.
  • 2026-May: New deployment evidence reinforced the bifurcation: Averi documented 3.7x engagement lift and 65% production time reduction from AI-powered repurposing of single assets into 20+ touchpoints, while a 48-hour content engine case study reported 1,247% LinkedIn engagement lift and 120% organic traffic growth in 6 months — both contingent on structured planning and human review. Cost infrastructure analysis quantified the review bottleneck precisely: raw AI drafts cost $5–15 but final edited pieces reach $150–400 (3.5 hours of fact-check, SEO, voice, and expertise work), framing human oversight as a mandatory budget line rather than optional cleanup. Industry data confirmed awareness-action gaps persist: 72% of B2B marketers view repurposing as critical but only 61% actively use AI for cross-format repurposing. HubSpot Breeze Content Agent deployments demonstrated 200-250% volume increases with 68% time reduction in production settings, while Lily Ray's analysis of 220+ AI content domains found 54% lost 30%+ of peak traffic and 39% lost 50%+ within 12 months — a boom-bust pattern directly attributable to Google's Helpful Content Update targeting automated strategies. Governance lag remained the structural ceiling: State of Martech data showed only 37% of teams have AI detection capabilities despite 91% producing AI copy, and McKinsey confirmed only 1-in-10 organizations realize meaningful value from AI deployments.
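The per-piece cost figures cited above (raw draft $5–15, final piece $150–400, roughly 3.5 hours of review) imply a simple budget model. A minimal sketch; the hourly rate below is back-derived from those ranges and is not a figure stated in the source:

```python
# Per-piece content budget implied by the figures above: raw AI draft
# $5-15, ~3.5 hours of human fact-check/SEO/voice/expertise work,
# $150-400 final cost. The hourly rate is back-derived from those
# ranges, not a number stated in the source.

REVIEW_HOURS = 3.5

def implied_hourly_rate(draft_cost: float, final_cost: float,
                        hours: float = REVIEW_HOURS) -> float:
    """Human-review rate implied by the gap between draft and final cost."""
    return (final_cost - draft_cost) / hours

def piece_cost(draft_cost: float, hourly_rate: float,
               hours: float = REVIEW_HOURS) -> float:
    """Total budgeted cost per published piece, review included."""
    return draft_cost + hourly_rate * hours

low = implied_hourly_rate(15, 150)   # cheap end of both ranges
high = implied_hourly_rate(5, 400)   # expensive end of both ranges
```

The point the analysis makes is visible in the model: review hours dominate the total, so `piece_cost` must be budgeted from inception rather than treating the $5–15 draft as the unit of cost.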

TOOLS