Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

[Chart: each domain plotted on a maturity axis running from BLEEDING EDGE to ESTABLISHED]
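The "weighted maturity" behind each dot can be pictured as an evidence-weighted average over a domain's practice tiers. Below is a minimal Python sketch of that idea; the tier-to-score mapping and the evidence-count weighting are illustrative assumptions, not the index's published method.

```python
# Hypothetical sketch of a domain's dot position: each practice carries a
# maturity tier and an evidence count, and the dot sits at the
# evidence-weighted mean of tier scores. Tier names mirror the index's own
# labels; the numeric scores are assumptions for illustration.

TIER_SCORE = {
    "Research": 0.0,
    "Bleeding Edge": 0.33,
    "Leading Edge": 0.66,
    "Established": 1.0,
}

def domain_maturity(practices):
    """practices: list of (tier, evidence_count) pairs; returns a 0..1 position."""
    total = sum(count for _, count in practices)
    if total == 0:
        return 0.0
    return sum(TIER_SCORE[tier] * count for tier, count in practices) / total

# A domain with one Leading Edge practice (84 evidence items) and one
# Bleeding Edge practice (20 items) lands between the two tiers:
score = domain_maturity([("Leading Edge", 84), ("Bleeding Edge", 20)])
```

Under this weighting, a practice with a large evidence base pulls the domain's dot toward its own tier, which matches the chart's framing of maturity as a property of the evidence, not just the tier label.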

Communication style adaptation

LEADING EDGE

TRAJECTORY

Stalled

AI that adapts writing tone and style for different audiences — executives, peers, clients — from a single draft. Includes audience-aware rewriting and formality adjustment; distinct from brand-voice workflows which enforce brand rather than personal communication style.

OVERVIEW

AI-driven communication style adaptation — rewriting a single draft for different audiences, adjusting formality, assertiveness, or technical depth — works well in tightly governed deployments but has stalled short of broad organisational adoption. Named enterprises (Databricks, Zoom, Emplifi, OneSource Virtual) report strong measurable results; Grammarly's 50,000+ organisational deployments and 3,000+ educational institutions document real gains. Yet these represent the vanguard, not the field.

The core tension is authenticity. Current models default to high-probability, tonally neutral phrasing when style signals conflict — a failure mode practitioners call "tone drift." Personalisation features in Grammarly and Jasper exist, but producing genuinely voice-consistent output demands detailed style guides, curated examples, and significant human oversight. Real-world testing shows that 62% of audiences detect AI-generated content within 30 seconds by its tone patterns, and practitioners report measurable voice erosion from AI-assisted writing: essays from heavy LLM users are 69% more neutral with 50% fewer pronouns, and roughly 40% of unsolicited query letters now exhibit identifiable flatness. For routine business communication the tools deliver value; for voice-dependent writing where authenticity matters, the gap between marketed capability and deployed reality is structural and unresolved. Scaling beyond isolated use cases has proven difficult, and adoption remains concentrated in high-governance contexts where organisations invest significant integration effort.

TIER HISTORY

Research: Jan-2023 → Apr-2024
Bleeding Edge: Apr-2024 → Jan-2025
Leading Edge: Jan-2025 → present

EVIDENCE (84)

— Practitioner analysis of 2026 marketing showing 75% AI tool adoption yet human-generated content receives 5.44x more traffic; documents algorithmic bias toward mediocrity where default patterns converge outputs toward professional-but-passionless sameness despite distinct brand intentions.

— CHI 2026 peer-reviewed study on ultra-personalized AI reveals core failure modes: logging speech changes behavior (self-censorship), trained models struggle in fast-moving social contexts, and practice requires high contextual granularity to avoid erasing privacy and autonomy.

— Practitioner technical analysis documenting systematic multilingual failures: register collapse (German du/Sie, Japanese keigo ignored), terminology drift, morphological hallucinations; confirms practice maturity is limited in non-English languages, with English-centric training as a foundational constraint.

— Peer-reviewed study documenting creators perform significant hidden labor (epistemic verification, linguistic naturalization, narrative restructuring) to hide AI assistance and maintain authentic voice; reveals trust vulnerabilities and unequal adoption capacity across demographics.

— Independent business analysis documenting broad professional adoption of Grammarly across industries via organic growth and freemium conversion; validates that communication style and writing tools achieve genuine product-market fit with professionals perceiving significant value in tone detection.

— Editorial analysis documenting critical finding: 62% of audiences detect AI-generated content within 30 seconds through tone/style patterns; identifies successful 2026 playbook limits AI prose use and positions AI as research/structure support rather than voice replacement.

— Practitioner analysis of AI-assisted writing showing ~40% of unsolicited query letters now exhibit identifiable flatness from ChatGPT polishing; documents two failure modes (voice-replacement producing generic competent text, voice-imitation failing to capture underlying generative principles) confirming systematic authenticity gap.

— Grammarly's newest GA features: Reader Reactions (set target reader for tone feedback) and Humanizer (adapt AI text to sound natural and personal); vendor messaging: 'help turn your thoughts into impact by making sure they're clear, resonate with your audience, and sound like you'; latest product GA in ongoing capability expansion.

HISTORY

  • 2023-H2: Grammarly announced personalized voice detection for GrammarlyGO, automatically learning user writing style and enabling rewriting text in that style. Feature was in preview for business subscribers by late 2023, marking the first major public capability specifically targeting communication style adaptation.
  • 2024-Q2: Grammarly's voice detection moved into production across named enterprises (Databricks: $1.4M ROI, 71% of communications improved) and educational institutions (Chapman University campus-wide rollout). Research confirmed AI-driven style adaptation increases consumer satisfaction in service recovery contexts. Adoption surveys revealed bifurcated views: business value recognized but academic sector skeptical due to bias and authenticity concerns. High-profile failure case illustrated policy misalignment in academic settings.
  • 2024-Q3: University of Illinois Chicago pilot validated communication style features (tone detection) with 92.9% user satisfaction on clarity and confidence; research confirmed 25-40% writing efficiency gains and tone/empathy benefits. However, analyst predictions of 30% GenAI project abandonment and practitioner analysis of five systemic ROI barriers highlighted cost and scaling challenges limiting broader deployment despite successful proof-of-concept pilots.
  • 2024-Q4: Additional enterprise case studies (OneSource Virtual 27x ROI, 90%+ adoption; Zoom $210K time savings) solidified proof of concept; however, critical analyses emerged documenting abandoned pilot deployments and scaling failures. Broader adoption metrics revealed wide tool distribution (24% of workers) but minimal work-time integration (0.5%-3.5%), indicating availability without deep adoption. Market reassessment shifted from hype toward realistic assessment of implementation barriers: cost justification challenges, team-level scaling failures, workflow integration friction, and ROI disconnect between marketed benefits and actual deployment outcomes.
  • 2025-Q1: Two new case studies (Emplifi 19x ROI, Iterable 93% improvement) continued demonstrating strong isolated deployments; peer-reviewed research documented 22% adoption among academic researchers. However, industry data revealed critical maturity gap: only 11% of AI POCs reach production despite 89% of enterprises exploring AI, with adoption sentiment contradictory (88-97% report benefits yet 42% of executives say adoption is "tearing company apart"). Named deployments continued delivering strong ROI while broader enterprise scaling remained constrained by cost justification, workflow integration, and organizational maturity barriers.
  • 2025-Q2: Jasper launched Audiences feature for audience-specific tone automation; enterprise analysis documented RAG-inspired deployment patterns. However, user sentiment deteriorated: 45% of authors adopted but 84% of non-adopters cited ethical concerns about voice authenticity, with practitioners reporting "bland" tone output. Critical assessment revealed widening ROI gap: true first-year costs ($16.7K) 30% higher than advertised; 2-3 month ROI delays from workflow disruption; tone accuracy limitations requiring significant human review. Adoption remained concentrated in specific use cases despite expanded platform capabilities.
  • 2025-Q3: MIT's State of AI in Business 2025 report documented a credibility crisis: 95% of generative AI pilots fail to deliver measurable P&L impact, with 42% of companies abandoning AI initiatives entirely (up from 17% prior year). Practitioner analysis revealed current tools produce "AI slop"—synthetic output lacking authentic voice—a fundamental failure for communication style adaptation. Successful deployments clustered in narrow use cases requiring workflow integration and domain specificity; broader scaling remained blocked by ROI disconnect and voice authenticity gaps. The practice bifurcation that began in 2024 solidified: isolated deployments delivering value, systemic adoption attempts mostly abandoned.
  • 2025-Q4: Platform adoption metrics showed continued growth (Jasper reached 1.8M monthly active users; Grammarly's customer support deployments achieved 25% efficiency gains), yet practitioner skepticism intensified. Survey of 928 creators showed 89% concerned AI would copy their writing style; only 21% had adopted generative AI despite messaging about personalization. MIT-affiliated research documented the core limitation: LLMs default to corporate tone in training data, overriding authentic voice even when personalization features are enabled. Independent reviews confirmed tone adaptation utility for routine business communication but noted output as "bland" or "technically correct but creatively limited." Adoption barriers remained structural—not technical—rooted in authenticity and voice erosion gaps showing no signs of resolution.
  • 2026-Jan: Educational institutions scaled Grammarly deployments to 3,000+ institutions with specific integrity and efficiency gains (96% reduction in violations, 146K hours saved); enterprise marketing adoption reached 91% across vendors but ROI confidence declined to 41% as scaling barriers and governance challenges intensified. Platform maturation visible in Jasper's audience-specific tone features and Grammarly's institutional controls; however, tone detection limitations remained unresolved (false positives on hedging language, compassionate tone). Market metrics showed 78% business adoption and $2.74B market size, yet ROI disconnect between adoption intentions and proven productivity persisted—a continuation of the bifurcated pattern where named deployments deliver measurable value but organizational-scale implementations remain constrained by governance and ROI verification barriers.
  • 2026-Feb: Research and practitioner evidence converged on authenticity bottleneck: peer-reviewed study documented measurable impact of communication style on user outcomes in controlled AI interaction; yet practitioner guides revealed that AI tools required extensive manual brand curation (detailed voice guides, curated examples, rich prompts) to achieve style adaptation, suggesting limited out-of-the-box adoption. Technical analysis identified tone drift as core failure mode—AI defaulting to formal, neutral phrasing when style signals conflict—a documented limitation undermining voice preservation. Enterprise case studies showed continued strong ROI (aggregated metrics: $1.4M+ savings, 283% ROI, 72% communication improvement); however, industry analysis citing MIT research showed 95% of enterprise AI pilots delivering zero P&L impact. Longitudinal data from bid/proposal community (2024-2026) showed progress (20% exceeding expectations) but persistent barriers (40% room for improvement in knowledge management and workflow integration).
  • 2026-Mar: Grammarly's official tone detection feature moved fully GA (grammarly.com/tone), explicitly messaging audience-aware rewriting. Contrary Research detailed Jasper's evolution to 100,000+ active teams with Brand Voice and tone adaptation as core competitive feature. Peer-reviewed research (Google DeepMind) quantified voice authenticity erosion: essays from heavy LLM users 69% more neutral, 50% fewer pronouns, confirming that models default to formal/corporate tone despite personalization features. Practitioner frameworks emerged (identity.txt) proposing portable voice profiles across tools to address fragmentation. Writer.com's CMO identified "Will everything sound the same?" as the third core fear blocking AI adoption, validating authenticity concerns as structural barrier. Practitioner comparisons showed divergent architectures: Grammarly's structured tone presets (Formal, Confident, Empathetic) vs. Wordtune's custom tone library (VoiceLock™). Three-year Jasper user reported real deployment challenges; Grammarly's AI Humanizer feature demonstrated in-market implementation of style adaptation. Overall pattern persists: named deployments deliver measurable ROI, but organizational adoption blocked by authenticity concerns and homogenization risks.
  • 2026-Apr: Grammarly shipped two new GA features — Reader Reactions (audience-aware tone feedback) and Humanizer (adapting AI text to sound personal and natural) — while simultaneously facing a US$5M federal class-action lawsuit over its Expert Review feature, which used named authors' identities without consent and hallucinated their voices; the feature was withdrawn March 11, 2026. Grammarly Brand Tones also reached GA with documented enterprise deployment at Databricks across six global offices, and Markup AI (formerly Acrolinx) shipped 8 GA tone presets plus custom style-guide extraction, signalling continued product maturation in the enterprise segment. ACL 2026 peer-reviewed research quantified a dual-use tradeoff in deployed consumer writing assistants: stylization reduces user-profiling risk while increasing misinformation evasion, introducing a privacy-safety tension previously uncharacterized. Jasper Brand IQ demonstrated 18% conversion improvement in real-world A/B testing — the strongest empirical signal yet that brand voice training delivers measurable business impact — while practitioner critiques of Grammarly's style engine documented systematic failures (nominalizations, agency stripping, limited adaptive learning) that reinforce the gap between product capability and reliable voice preservation.
  • 2026-May: Evidence converged on the homogenization ceiling as the practice's defining structural constraint. Marketing analysis documented that human-generated content receives 5.44x more traffic than AI-generated alternatives despite 75% tool adoption — confirming that default AI output patterns undermine style differentiation at scale. CHI 2026 peer-reviewed research on ultra-personalized voice training revealed core failure modes: the act of logging speech changes behavior (self-censorship), and trained models struggle in fast-moving social contexts requiring high contextual granularity. Multilingual deployments documented systematic failures — German du/Sie register collapse, Japanese keigo ignored, morphological hallucinations — confirming English-centric training as a foundational constraint. A peer-reviewed study of creator economies documented the hidden cost of style adaptation: creators perform significant downstream repair labor (epistemic verification, linguistic naturalization, narrative restructuring) to pass AI-assisted content as authentic — revealing that productivity gains are redistributed, not eliminated. At the same time, 62% of audiences now detect AI-generated content within 30 seconds via tone patterns, and 40% of literary query letters exhibit identifiable AI-induced flatness, reinforcing that authenticity erosion is a deployment-scale outcome rather than an edge case.

TOOLS