Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains, delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

UX copy generation & voice enforcement

LEADING EDGE

TRAJECTORY

Advancing

AI that generates UX microcopy and enforces brand voice and tone guidelines across product interfaces. Includes context-aware microcopy creation and tone consistency checking; distinct from brand-voice workflows in marketing which target external content rather than product UI.

OVERVIEW

AI-generated UX microcopy and voice enforcement tooling have crossed into production at forward-leaning organisations, but most product teams have not yet operationalised them. That gap defines the practice's leading-edge position: the technology works, vendor tooling is generally available, and hybrid AI-plus-human teams report meaningful speed and consistency gains, yet scaling stalls on organisational readiness rather than capability. The dominant deployment model treats AI as a co-pilot for drafting microcopy and generating copy variations, with mandatory human review for emotional tone, cultural sensitivity, and factual accuracy. Tooling is no longer the bottleneck; formalised voice governance (codified brand guidelines, terminology enforcement, approval workflows) is. Teams that have built that scaffolding are seeing real returns; teams that skip it join the large majority of AI pilots that fail to deliver ROI. The practice is advancing, but the binding constraint has shifted from "can the tools do this" to "can the organisation sustain it."

CURRENT LANDSCAPE

A handful of vendors now offer GA voice enforcement for product interfaces. Frontitude ships an AI-powered UX Writing Assistant with voice governance controls; Oration AI provides Brand Voice with terminology enforcement and multilingual support; Copy.ai has built brand voice features into a platform serving 17M users. Figma's own ecosystem guide lists tools like Jasper for design copywriting with real-time tone and style variations, signalling that AI-assisted UX copy is becoming a default workflow assumption rather than an add-on.

Where organisations have invested in governance scaffolding, the results are concrete. OmniClarity reports 89% voice consistency improvement and 67% faster revision cycles. Lenovo's deployment of AI-powered brand compliance automation achieved $16M in annual cost savings through systematic review and asset management. Hybrid AI-plus-human teams in documented deployments show 42% ROI improvement and 5x speed gains. A UX Tools survey finds 75.2% of designer AI usage centres on writing and content tasks, though adoption skews toward leadership (32.2%) over individual contributors (19.9%), suggesting top-down rollout patterns. Emerging signal: pillar-organized content with consistent voice achieves 3.2x higher AI citation rates (41% vs 12%) in search systems, introducing a new competitive dimension for voice enforcement.

The failure modes are equally well documented. Production-scale evidence includes Coca-Cola's 2024-2025 failed AI holiday advertising campaigns, criticized as 'soulless' and 'creepy' despite substantial budget and decades of brand equity, demonstrating that tooling maturity does not guarantee output quality. Practitioner analysis finds 77% of companies struggle with brand voice consistency in AI output, and 85% of generated copy requires human editing before publication. Voice profile extraction research reveals a critical limitation: profiles capture style (sentence length, vocabulary) but miss deeper voice signals (reasoning patterns, perspective); optimal profiles are under 400 words, with diminishing returns beyond that threshold due to model context limits. AI-generated microcopy still lacks emotional nuance and can fabricate facts, risks that demand rigorous editorial oversight. The industry has converged on systematic voice frameworks (personality traits, tone ladders, approved phrase libraries, QA rubrics) as the prerequisite for safe scaling, but most teams have not yet built them.
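The "voice as executable rules" pattern this convergence points at can be illustrated with a minimal sketch. Everything below (the VoiceRules class, its rule names, and the 20-word threshold) is a hypothetical illustration of terminology enforcement and phrase-library checks, not any vendor's actual product:

```python
import re
from dataclasses import dataclass

@dataclass
class VoiceRules:
    """A brand 'voice constitution' expressed as executable rules,
    not adjectives. All rule content here is invented for illustration."""
    terminology: dict            # banned term -> approved replacement
    banned_phrases: list         # phrases the brand never uses
    max_sentence_words: int = 20 # microcopy should stay short

    def check(self, copy: str) -> list:
        """Return a list of (rule, detail) violations for a piece of copy."""
        violations = []
        lowered = copy.lower()
        for banned, approved in self.terminology.items():
            if re.search(r"\b" + re.escape(banned) + r"\b", lowered):
                violations.append(("terminology", f"use '{approved}' instead of '{banned}'"))
        for phrase in self.banned_phrases:
            if phrase.lower() in lowered:
                violations.append(("banned_phrase", f"avoid '{phrase}'"))
        for sentence in re.split(r"[.!?]+", copy):
            words = sentence.split()
            if len(words) > self.max_sentence_words:
                violations.append(("length", f"sentence has {len(words)} words"))
        return violations

rules = VoiceRules(
    terminology={"e-mail": "email", "log-in": "sign in"},
    banned_phrases=["please be advised"],
)
# Flags both terminology violations and the banned phrase:
print(rules.check("Please be advised: check your e-mail to log-in."))
```

In practice such checks would run as a gate in the copy-approval workflow, with violations routed to human reviewers rather than auto-corrected, consistent with the mandatory-human-review model described above.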

TIER HISTORY

Research       Mar-2023 → Mar-2023
Bleeding Edge  Mar-2023 → Apr-2025
Leading Edge   Apr-2025 → present

EVIDENCE (96)

— Copy.ai framework for on-brand generation: 81% of companies struggle with off-brand output; solution requires brand examples in prompts, tool selection for voice capabilities, and unified brand knowledge traveling with each handoff across teams.

Magician - AI design assistant plugin (Product Launches)

— Figma-native plugin auto-generating realistic UI microcopy (replacing Lorem Ipsum); lacks explicit brand voice enforcement or tone consistency features; represents partial solution addressing copy generation but not voice governance.

— Klarna case study: AI handled 80% of copywriting, saved ~$10M/year, then reversed course citing quality decline. McKinsey shows only 6% of AI users achieve high-performer status; identifies workflow redesign, not tooling, as the binding constraint.

— Microcopy positioned as critical trust lever in AI interfaces; provides NIST AI RMF-grounded patterns and ready-to-use library; documents ROI connection from UI copy decisions to task completion and feedback signal rates.

— Adobe research: 81% of enterprises produce off-brand content despite guidelines; 33% revenue impact documented. Five-pillar brand intelligence system proposed to shift from static guidelines to governance-as-system; identifies AI as amplifier of existing control failures.

— Bridge Marketplace case: RAG-powered voice enforcement achieved 12.5x ROI and 10x pipeline growth in 90 days; demonstrates production-scale multi-agent architecture with brand voice reviewer agent preventing hallucination and terminology misuse.

— French-language practical guide: five-step workflow with four copyable prompts delivers 70-80% time savings on microcopy production; notes AI respects voice 70-85% with few-shot examples, recommends human review for security-critical copy.

— Professional guide for designers: positions AI-assisted microcopy adapted to brand tone as core workflow. Estimates 40-60% productivity gain while noting designer skills (research, vision, system thinking) remain irreplaceable; documents 1.5-2x project throughput gain.

HISTORY

  • 2023-H1: Initial evidence of emerging tooling (Figma plugins, guardrail startups) and academic validation that consumers accept AI-disclosed UX copy. Research shows no trust penalty for AI generation when disclosed; practitioner surveys indicate broad interest in AI-assisted UX writing to accelerate output and reduce manual review burden. Consumer adoption of GenAI broadly at 50%+; adoption of UX-specific tooling still nascent.
  • 2023-H2: Market tooling advances with Frontitude Team Guidelines private beta and voice enforcement platforms maturing. Consumer trust in AI-generated content stable (73%). Significant countervailing evidence emerges: independent research documents ChatGPT limitations in UX advice (19% useful, 72% useless), and critical assessments expose AI writing failures (fabricated legal citations, medical misinformation in published articles). Quality and legal risks become clear; human review remains mandatory. Adoption hindered by organizational maturity (lack of formalized voice guidelines) and necessity of human oversight.
  • 2024-Q1: Enterprise adoption accelerates with BrandGuard reaching multiple Fortune 500 companies and global agencies in production use. Guardrail frameworks for tone drift prevention mature with operational models (voice profiles, phrase libraries, compliance rules) now in deployment. Expert analysis confirms AI's role as scale-enabling assistant—creating content and measuring tone objectively—but human oversight remains essential. Documented failure cases (fake AI-authored articles, hollow auto-generated content) demonstrate that speed without quality review erodes brand trust. Organizational readiness (formalized voice guidelines and human-in-the-loop discipline) becomes the deciding factor for adoption.
  • 2024-Q2: Organizational adoption enters piloting and implementation phase across broader experience design functions. Major platform ecosystem (TikTok, Meta, Google) launches brand voice and tone generation tools in trial/rollout, signaling vendor competition intensification. Negative evidence surfaces: production pilots expose integration brittleness and data inconsistency challenges. Critical assessments document AI limitations: lack of emotional intelligence, cultural sensitivity, and autonomy risks. The tension sharpens: while organizational readiness improves and ecosystem tooling expands, deployment maturity challenges and autonomous AI governance risks become more salient.
  • 2024-Q3: Ecosystem tooling expansion and industry adoption metrics confirm broader market momentum. Goldcast launches Brand Voice feature GA for content repurposing. Microsoft publishes official UX guidance for copilot voice design, signaling platform-level standardization. WFA survey shows 63% of major brands already deploying GenAI with significant copy generation use cases, but 80% report governance concerns around legal and reputational risk. Practical pilots (e.g., Dext) reveal operational maturity is achievable but non-trivial—organizations building multilingual copy SSoT face integration and consistency challenges. Practitioner assessment remains cautionary: AI lacks emotional intelligence and cultural sensitivity for nuanced voice work; organizational readiness (formalized guidelines + human review discipline) is the binding constraint on safe adoption at scale.
  • 2024-Q4: Product tooling maturation continues with Copy.ai demonstrating GA custom brand voice capabilities across multi-language environments. Practitioner skill development accelerates—UX Writing Hub identifies "Writing for and with AI" as the year's top trend, reflecting normalized AI collaboration in UX workflows. Industry guidance emphasizes human-in-the-loop strategy: IMPACT podcast highlights shifting hiring patterns toward hybrid skillsets pairing writing with AI fluency. Critical assessments persist: practitioners continue to question whether AI can authentically capture nuanced brand voice without 'soulless' automation, underscoring that organizational discipline and human judgment remain non-negotiable. Evidence of broader adoption momentum vs. persistent authenticity and governance concerns suggests the practice is normalizing within product teams but scaling barriers remain structural rather than technical.
  • 2025-Q1: Ecosystem tooling accelerates with Frontitude announcing 4x workflow acceleration and 73% reduction in manual post-editing; product vendors expanding multi-language voice capabilities across platforms. Industry adoption reaches critical mass: 67% of B2B organizations deploying GenAI for content creation, with 41% volume increases and 33% cost reductions (Gartner). Governance frameworks maturing—MIT Sloan research shows 67% lower inconsistency with formal governance, and industry guidance shifts to exception-based approvals. However, adoption fragility surfacing: 42% of AI initiatives scrapped by March 2025 (up from 17% in September 2024), revealing execution and data quality gaps. Academic research identifies organizational barriers as binding constraint: 24 UX practitioners studied show lack of formal company GenAI policies and individual-rather-than-team usage patterns. Practice transitioning from capability validation to deployment maturity, but organizational readiness gap persists as critical barrier.
  • 2025-Q2: Enterprise deployments confirm production readiness with OmniClarity achieving 89% voice consistency improvement and 67% faster revision cycles; Contents platform reaches $8M ARR serving Dolce & Gabbana, Sainsbury's, Accenture with AI-powered content at scale. Ecosystem vendors standardize: Oration AI launches Brand Voice GA for enterprise agents with terminology enforcement and multilingual support. Copy.ai surpasses 17M users with brand voice features. Emerging signal: consistent voice becomes a ranking factor for AI search visibility (ChatGPT, Perplexity), driving competitive adoption. However, scaling barriers remain: 42% of businesses still scrapping AI initiatives due to data quality and implementation costs; most teams lack formalized governance despite proven tooling effectiveness.
  • 2025-Q3: Tooling maturation continues with Frontitude releasing updates to AI-powered UX Writing Assistant and expanded vendor support for voice enforcement. Hybrid deployment model validation: AI+human teams achieve 42% ROI improvement, 50% cost reduction, 5x speed gains. Critical finding emerges from MIT summer research: 95% of AI pilots fail to deliver ROI, highlighting organizational execution barriers (change management, data quality, governance discipline) as binding constraint rather than technical limitations. Practitioner assessment sharpens: AI effective for co-pilot-assisted microcopy generation but insufficient for autonomous brand voice capture; emotional depth, cultural sensitivity, and factual accuracy remain human-enforced requirements. Practice shows mature production deployments but persistent organizational readiness gaps blocking broader scaling.
  • 2025-Q4: Design team adoption reaches near-universality: Nielsen reports 75% of design teams using AI for text-based tasks (ChatGPT, Writer, Jasper), with senior designers outputting 3-person-squad equivalent volume. Vendor standardization deepens: Frontitude continues releases, Oration maintains Brand Voice GA, Copy.ai sustains 17M user base. McKinsey data confirms broad enterprise adoption (88% use AI in ≥1 function) but persistent scaling barrier: only 38% beyond pilots. MIT's 95% pilot failure rate continues through Q4, reinforcing organizational execution as the binding constraint. Industry guidance converges: AI functions as co-pilot for microcopy generation and variation, not autonomous voice generation; mandatory human review, formal governance (codified brand guidelines, terminology enforcement, approval workflows), and data quality discipline remain essential. Competitive signal emerges: consistent voice increases AI search visibility by 41%. Practice enters stable state at leading-edge maturity: normalized tool adoption, proven ROI in hybrid teams, but scaling constrained by organizational readiness rather than technical capability.
  • 2026-Jan: Platform ecosystem maturity continues with Frontitude releasing Voice Center beta for systematic brand voice governance. Designer survey data (200+ practitioners) shows measured optimism about AI in design workflows. Critical practitioner analysis documents persistent friction: 77% of companies struggle with brand voice consistency, 85% of AI copy requires human editing. Industry converges on systematic voice frameworks (personality traits, tone ladders, approved phrase libraries, QA rubrics) and LLM integration workflows, positioning 2026 as the year of methodical voice governance implementation rather than platform advancement.
  • 2026-Feb: Designer adoption metrics confirm mainstream integration: UX Tools survey shows 75.2% of designer AI usage focused on writing and content generation, with 32.2% adoption among leadership vs. 19.9% among individual contributors. Designlab survey of 200+ practitioners documents practical AI application across research, ideation, and content work. Parallel negative signals emerge: critical assessments highlight AI vendor economics under pressure (OpenAI losses reported at $12B/quarter) and persistent copy quality failures (fabrication, lack of emotional depth). February evidence signals maturation and normalized adoption coexisting with structural economic and quality constraints that continue to limit scaling velocity.
  • 2026-Mar: Vendor ecosystem consolidation and practitioner consolidation of governance best practices. Frontitude ships character limit enforcement, automated review workflows from Figma, and AI Writing Assistant improvements—evidence of active market competition and product-market fit validation. Practitioner frameworks proliferate: WriteRush, EverWorker, and Yugasa publish systematized approaches to voice governance (structured prompts, voice DNA definitions, RLHF fine-tuning, multi-phase implementation). Emerging negative signal: specialized copywriting platforms (Jasper, Copy.ai, Writer.com) face market pressure—users migrating to general-purpose models (ChatGPT, Claude) due to economics and functionality parity; industry documents voice homogenization challenge (75% marketer adoption, but human content 5.44x more traffic, 83% consumer detection). Practice consolidates around hybrid human-plus-AI governance model; tooling ecosystem proves viable but adoption acceleration limited by content quality and organizational discipline barriers.
  • 2026-Apr: Ecosystem consolidation accelerates with major platform releases and deployment evidence. Figma Config 2025 ships native on-the-fly copy generation (write, rewrite, translate) and Figma Buzz template-locking for brand-enforced asset scaling; LogRocket's comprehensive Figma AI guide confirms production tools mature (Replace, Shorten, Rewrite, Text Suggestions) but notes most outputs still require human review for accessibility and semantics. Adobe announces Brand Intelligence system shifting from reactive post-creation review to preventive shaping-during-production validation. Adoption metrics: Humbl Design reports 31% of designers use AI, identifying the '60% problem'—AI reaches acceptable output fast but fails at voice distinctiveness and brand understanding, sustaining voice enforcement as human-critical. Critical failure mode surfaces: Sagum documents algorithmic brand drift where a retailer achieved 40% ROAS improvement but suffered declining brand awareness and NPS due to AI systematically removing signature colors and distinctive voice—solution requires 'brand constitutions' as hard guardrails. Governance operationalization advances: frameworks converge on defining voice as executable rules (not adjectives), systematizing terminology, implementing structured approval workflows, and embedding voice validation into content generation pipelines. Practitioner analysis (MarTech) confirms adjective-based voice guidelines fail in AI workflows; effectiveness requires operational scaffolding turning guidelines into repeatable systems—voice governance without execution rules consistently produces generic output regardless of tool quality. Enterprise production ROI validated: Lenovo's $16M/year cost savings in hybrid human-in-the-loop model remains clearest quantified outcome. Competitive dimension matures: brand voice now a citation signal in AI search (41% citation rate vs 12% for off-brand content). Technical limitation confirmed: voice profiles capture style but miss deeper voice signals (reasoning patterns, perspective), with 400-word optimal profile length and documented diminishing returns—reinforcing that governance scaffolding alone cannot substitute for human editorial judgment on voice authenticity.
  • 2026-May: Practitioner backlash and workflow consolidation dominate May signal. Real-world deployment outcomes underscore scaling reality: Klarna's high-profile reversal (CEO shifted from "AI handles 80% of copy, saved $10M/year" to "too much efficiency focus damaged quality") demonstrates mature teams recognizing that throughput without quality governance damages brand. McKinsey State of AI 2025 data shows only 6% of AI users achieve high-performer status; gap is workflow redesign, not tooling. Bridge Marketplace case study (Yolando) demonstrates production-ready RAG-powered architecture with multi-agent voice enforcement, achieving 12.5x ROI and 10x pipeline growth in 90 days—proof that structured voice governance systems deliver measurable outcomes at scale. Adobe research quantifies the problem: 81% of enterprises produce off-brand content despite guidelines; solution proposed is shift from static guidelines-as-document to brand-intelligence-as-system with enforcement at every workflow stage. On the tooling side, Magician (Figma-native plugin) reached GA for auto-generating realistic UI microcopy in place of Lorem Ipsum, though it addresses copy generation without voice governance — a partial solution that highlights the ongoing gap between generation speed and brand consistency enforcement. Practitioner analysis (Meghan Downs) documents ongoing pattern: clients scrapping AI website copy due to generic output, wrong audience attraction, SEO damage—root cause consistently identified as unclear brand voice definition. Emerging operational consensus: voice effectiveness requires defining voice as executable rules (not adjectives), systematizing terminology, structured approval workflows, and embedding validation into generation pipelines. Industry guidance reaffirms: AI functions as copilot for microcopy variants and scale, not autonomous voice generation. Human review mandatory; governance scaffolding non-negotiable. Practice at leading-edge plateau: normalized tool adoption with proven ROI in hybrid teams, but scaling limited by organizational discipline rather than technical capability.
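The voice-profile limitation documented above (surface style captured; reasoning patterns and perspective missed) is easy to see in a minimal sketch of what such a profile can actually measure. The function and features below are illustrative assumptions for this sketch, not a reconstruction of any cited research method:

```python
import re
from collections import Counter

def style_profile(corpus: str) -> dict:
    """Extract the kind of surface-level style features a voice profile
    can capture: sentence length, vocabulary richness, frequent words.
    Deeper voice signals (reasoning, perspective) are not measurable
    this way, which is the limitation the research documents."""
    sentences = [s for s in re.split(r"[.!?]+", corpus) if s.strip()]
    words = re.findall(r"[a-z']+", corpus.lower())
    return {
        "avg_sentence_words": round(len(words) / max(len(sentences), 1), 1),
        "type_token_ratio": round(len(set(words)) / max(len(words), 1), 2),
        "top_words": [w for w, _ in Counter(words).most_common(5)],
    }

sample = (
    "Save your changes before leaving. Changes sync automatically. "
    "You can undo any change from the history panel."
)
profile = style_profile(sample)
print(profile)  # short sentences, high vocabulary variety, 'changes' most frequent
```

A profile like this can keep generated copy statistically on-style, but two brands with identical numbers can still sound nothing alike, which is why human editorial judgment on voice authenticity remains in the loop.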