Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two domains, delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

Brainstorming & ideation support

LEADING EDGE

TRAJECTORY

Stalled

AI that helps individuals generate ideas, explore possibilities, and think through problems from multiple angles. Includes structured ideation and creative thinking prompts; distinct from content calendar planning, which generates topic ideas rather than supporting general brainstorming.

OVERVIEW

AI-assisted brainstorming delivers measurable value for individual ideation — but most organisations have not figured out how to deploy it beyond isolated pilots. Forward-leaning teams at a handful of named companies have demonstrated real gains in creative output and time savings, and consumer adoption continues to climb. The tooling is mature enough that vendors now ship brainstorming as a first-class enterprise feature. Yet a fundamental tension defines this practice's plateau: AI reliably increases idea volume while simultaneously reducing idea diversity, with research showing 94% of AI-assisted concepts converging toward common themes. That architectural constraint, combined with executive risk aversion and weak ROI signals at the organisational level, keeps brainstorming support firmly in leading-edge territory. Individual practitioners benefit; scaled enterprise rollout remains elusive.

CURRENT LANDSCAPE

The ground-level picture is split. On the consumer side, 45% of U.S. workers now use AI at work and 41% apply it to idea generation, while creative professionals report 26% average gains in creative ability. Specialist deployments continue to prove out: Monks agency achieved an 80% CTR improvement and halved design hours using Gemini for creative concepting; ATB Financial sustains 40% daily usage with two-hour weekly time savings; Hikari System's brainstorm-driven marketing lifted customer engagement 27% year-over-year.

Adoption acceleration is evident in marketing: 58% of marketers now use AI for content ideation, with 44% productivity gains and 11 hours saved per week; Adobe's April 2026 survey of 800 creative professionals found 94% produce content faster with AI, with a majority reporting 50%+ speed increases and 17 hours of weekly savings. Tool differentiation has hardened: Claude ranks highest for ideation quality in production marketing workflows (structured, authoritative output), ChatGPT leads for volume and speed, and practitioners increasingly adopt multi-model triangulation to improve brainstorming outcomes. Vendors signal maturity through product releases: Adobe's Firefly AI Assistant is now generally available for agentic creative direction across Photoshop, Premiere, and Illustrator; Google shipped Gemini Enterprise's brainstorming capability as a GA feature; and Deloitte publishes guidance on AI-accelerated ideation for product innovation.

Yet capability boundaries are now empirically clear. A landmark University of Montreal peer-reviewed study comparing 100,000+ human participants with major LLMs (GPT-4, Claude, Gemini, others) found that some AI systems now achieve parity with average human creativity on divergent tasks, yet peak human creativity remains firmly beyond all AI models tested. On complex creative work (poetry, storytelling, nuanced ideation), the most skilled human creators consistently outperform AI. Simultaneously, Duke University's peer-reviewed research benchmarking 22 commercial LLMs against 100+ humans documented a core architectural constraint: LLMs produce responses that are "significantly more alike than the answers provided by humans," with homogenization persisting even when models are adjusted for temperature or creativity.

Marketing practitioners in active deployment acknowledge the constraint; strategists at agencies like Zeal and others have developed workarounds—custom prompt engineering, multi-model triangulation using smaller diversity-focused models (such as Flint, which scores 7/10 on novelty vs. larger models at 2.88/10)—but describe the underlying limitation as intractable.

Enterprises remain skeptical. PwC's 2026 survey of 4,454 CEOs found 56% reporting no significant AI ROI, and Forrester projects that a quarter of planned 2026 AI spending will slip into 2027. Organisational adoption remains fragmented: 58% of deployments sit in isolated pilots, with only 19% operationally integrated. Training data bias steers outputs toward generic rather than niche solutions, and sustained use produces diminishing novelty—a limitation researchers confirm is architectural, not prompting-fixable. For diversity-dependent ideation, human ideas still outperform on originality. The pattern is clear: proven where the use case is narrow and quantity-valued (copywriting, product naming, rapid prototyping), blocked where it requires diverse perspectives or deep creative range.
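The multi-model triangulation workaround practitioners describe can be sketched in a few lines: pool candidate ideas from several models, then greedily keep only ideas that do not overlap too heavily with what is already shortlisted. This is a minimal sketch, not any agency's actual pipeline: it uses plain token-overlap (Jaccard) as a stand-in for the semantic similarity measures a production workflow would use, and the model outputs are placeholder strings rather than real API calls.

```python
# Sketch: merge brainstorm candidates from several models, keeping only
# ideas sufficiently dissimilar from those already selected.
# Token-set Jaccard is a crude stand-in for embedding similarity.

def tokens(idea: str) -> set[str]:
    return set(idea.lower().split())

def jaccard(a: str, b: str) -> float:
    """Overlap of two ideas' token sets, 0.0 (disjoint) to 1.0 (identical)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def triangulate(candidate_sets: list[list[str]], max_overlap: float = 0.5) -> list[str]:
    """Round-robin across models so no single model dominates the shortlist;
    drop any candidate too similar to an already-kept idea.
    Note: zip() truncates to the shortest model's list."""
    selected: list[str] = []
    for round_ideas in zip(*candidate_sets):
        for idea in round_ideas:
            if all(jaccard(idea, kept) <= max_overlap for kept in selected):
                selected.append(idea)
    return selected

# Illustrative inputs: two models' shortlists sharing one idea.
model_a = ["loyalty program for repeat customers",
           "referral rewards for existing users"]
model_b = ["pop-up retail partnerships",
           "loyalty program for repeat customers"]
shortlist = triangulate([model_a, model_b])  # duplicate idea kept only once
```

The greedy filter is the design choice doing the work here: it privileges early, diverse picks over raw volume, which mirrors why practitioners pair a high-volume model with a smaller diversity-focused one rather than simply generating more ideas from a single model.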

TIER HISTORY

Research: Nov-2022 → Nov-2022
Bleeding Edge: Nov-2022 → Apr-2024
Leading Edge: Apr-2024 → present

EVIDENCE (100)

— Psychology Today synthesis of 2026 peer studies: LLMs homogenize outputs more than humans mimic each other; standardization affects not just content but cognitive style and thinking patterns themselves.

— Named solopreneur (Kristin Ginn) deployed free AI for systematic ideation using persona-based prompting; refined business model from strategy feedback, landed customers in 60 days—real individual adoption outcome.

— Peer-reviewed study directly addressing homogenization problem with experimental validation: diverse personas in prompting preserved story diversity vs. human-only baseline, offering architectural design solution.

— MIT Sloan analysis: GenAI commoditized ideation itself; competitive advantage shifted to 'Question Zero'—problem reframing—indicating practice maturity evolution and strategy implications for deployers.

— Field experiment at IG fintech with Harvard/Stanford: GenAI eliminated performance gaps in conceptualization (brainstorming) across expertise levels but failed at execution; demonstrates brainstorming as domain where AI closes expertise gaps.

— FAccT 2026 study of 54 participants in team brainstorming: AI improved ideation on general tasks (48% more ideas, higher quality) but minimal gains on specialized high-stakes work; design guidance for effective AI intervention.

— Negative signal: 'idea inflation' from AI brainstorming exceeds team execution capacity; erodes team cohesion and meaning-making; Gallup research shows employee engagement at decade-low amid AI-enabled productivity paradox.

— Active marketing deployments of ChatGPT and Claude document homogeneity barrier; practitioners developed workarounds: custom prompts, multi-model triangulation, smaller diversity-focused models (Flint 7/10 vs. Llama 2.88/10 on novelty).
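The persona-based prompting mitigation cited in the evidence above can be illustrated as a small template fan-out: the same brief is reframed from deliberately contrasting vantage points before the responses are pooled. A minimal sketch; the personas and prompt wording below are hypothetical examples for illustration, not the prompts used in the cited research.

```python
# Fan one brainstorming brief out across contrasting personas, so pooled
# responses start from deliberately different perspectives. Personas and
# template are illustrative placeholders, not the studies' actual prompts.

PERSONAS = [
    "a cost-conscious operations manager",
    "a first-time customer encountering the product cold",
    "a regulator focused on worst-case failure modes",
    "an artist with no industry background",
]

def persona_prompts(brief: str, n_ideas: int = 5) -> list[str]:
    """Build one prompt per persona, each requesting ideas from that viewpoint."""
    return [
        f"You are {persona}. From that perspective only, "
        f"propose {n_ideas} distinct ideas for the following brief: {brief}"
        for persona in PERSONAS
    ]

# Each prompt would then be sent as a separate model call and the
# responses merged, trading some volume for broader coverage.
prompts = persona_prompts("reduce checkout abandonment", n_ideas=3)
```

The point of the pattern is that diversity is injected at the prompt layer rather than expected from the model's sampling temperature, which is consistent with the research finding that homogenization persists even when temperature is adjusted.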

HISTORY

  • 2022-H2: ChatGPT's November 2022 release triggered rapid experimentation with generative AI for brainstorming and ideation. AskBrian's Brainstorm feature launched in July and became the most popular GPT-powered skill. Academic research showed ChatGPT could increase idea quantity but required expert validation. Early enthusiasm tempered by declining sentiment, reliability concerns, and creative professionals' skepticism about AI's limitations for true creative ideation.
  • 2023-H1: Mass adoption accelerated with 1 in 3 Americans using AI tools, 54% specifically for brainstorming and idea generation. Academic validation continued: Marketing Science study confirmed AI's 44% efficiency gain in idea screening; design research confirmed generative AI's impact on concept ideation. Real-world pilot achieved 13-point uplift in creative effectiveness. However, enterprise deployment remained constrained by accuracy, bias, and free-riding risks in collaborative settings.
  • 2023-H2: Brainstorming support reached mainstream adoption among US workers. The Conference Board survey (1,100 workers) and Betterworks study (1,000+ employees across 20 industries) both confirmed over half of workers using generative AI for brainstorming, with adoption rates at 56–60%. However, real-world deployment continued to reveal practical limitations: AutoGPT failures in brainstorming loops illustrated reliability constraints in AI agents, while organizational hesitancy persisted despite high employee usage, signaling a gap between individual adoption and enterprise readiness.
  • 2024-Q1: Major vendors competed to position brainstorming as a core AI capability—Google launched Gemini Advanced as a creative partner with Workspace integration, while Slack reported writing assistance as a top-valued AI feature. Organizational experiments with structured adoption programs showed measurable gains (150% uptake in active use, 2-hour weekly time savings), but academic research continued documenting dual effects (improved idea quantity but increased overreliance). Consumer-facing tools failed to sustain scale; enterprise adoption faced "ChatGPT Trap" (rushed, unintegrated deployments).
  • 2024-Q2: Enterprise-scale deployments demonstrated brainstorming as a sustained, production-ready practice—Moderna rolled out ChatGPT Enterprise company-wide (750 custom GPTs, 120 conversations/user/week), while legal professionals reached 27% adoption with brainstorming as the top use case. Specialist deployments (BotsCrew internal tool: 30% daily usage, 4.5/5 satisfaction) and vendor experiments (Google's 4,500-variation ad generation) showed real value. However, scaling barriers intensified: research revealed 80% pilot-to-production efficacy loss and 95% overall AI pilot failure rate, underscoring that success required strategic integration and operational discipline rather than tool availability.
  • 2024-Q3: Adoption matured into profession-specific concentration with journalists and marketing professionals at 79% ChatGPT use, yet industry confidence deteriorated. Peer-reviewed research documented AI's core trade-off: boosts individual idea output but homogenizes collective creativity. Gartner projected 30% project abandonment by Q4 2025; Fortune 500 companies unable to move pilots to production due to accuracy/security concerns. Google launched experimental 'Brainstorm with Gemini' on YouTube. Tension shifted from "can it work?" to "why don't enterprise deployments scale?" despite strong proof-of-concept evidence.
  • 2024-Q4: Deployment evidence consolidated with multiple named organizations achieving production scale (Incubeta: 50% ROI gains; Adore Me: 35-hour reduction in copywriting cycles; Pepperdine: faculty research brainstorming). Consumer adoption accelerated to 40% of US adults. However, enterprise scaling barriers crystallized: BCG reported 74% of companies unable to achieve AI value at scale, Fortune documented shift from hype to ROI skepticism, and MIT found persistent skill gaps. The practice transitioned from technology-readiness to organizational-readiness bottleneck, requiring governance discipline rather than tool innovation.
  • 2025-Q1: Early 2025 data confirmed continued consumer adoption growth (47% of consumers likely to use generative AI for research, up 6 points year-over-year) alongside emerging fundamental constraints. Harvard Business School research in March found AI systems generate convincing but inaccurate evaluations of creative ideas, causing humans to defer to incorrect decisions—a critical limitation for brainstorming in evaluation workflows. Quanta Magazine documented compositional reasoning limits: GPT-4 achieves 0% success on multi-constraint logic puzzles, revealing fundamental bounds in transformer architecture affecting complex problem ideation. Simultaneously, the Monks agency demonstrated production-ready deployment with Google Gemini for creative concepting, achieving 80% improved campaign CTR and 50% fewer design hours. Enterprise scaling remained bottlenecked: analysis found 88% of organizations use AI but only 30% successfully move beyond pilots. The practice's maturity profile had stabilized: capability sufficient for specialist domains and guided workflows, but fundamental reasoning limitations and evaluation accuracy risks constrain applicability in complex multi-constraint ideation scenarios.
  • 2025-Q2: Adoption consolidation and sentiment reversal marked Q2 2025. Creative professional adoption peaked at 83% daily integration with 26% average creative ability gains, but enterprise confidence collapsed: 42% of companies abandoned majority of AI initiatives (doubling from 17% in 2024), and 45% of frequent users reported burnout. Deployment successes continued (ATB Financial: 40% daily usage, 2-hour weekly time savings; Hikari System: 27% YoY customer engagement uplift), but remained isolated in structured, specialist workflows. Peer-reviewed research crystallized a core limitation: Wharton and Wisconsin-Madison studies found AI reduced idea diversity in group brainstorming, with 94% of AI-assisted ideas converging toward common concepts vs. unique human ideas. UC Berkeley synthesis (June) confirmed organizational ROI challenges: only 4% of organizations see consistent value, 74% make little progress, 68% unable to scale beyond pilot phase. The practice shifted from growth narrative to selective deployment focus—production-capable for ideation volume and speed in individual/guided workflows, but facing organizational fatigue, burnout, and fundamental trade-offs in collaborative/diversity-dependent settings.
  • 2025-Q3: Enterprise retrenchment consolidated; consumer adoption remained robust. ChatGPT reached 700 million weekly active users by September, with 'practical guidance' (brainstorming, planning) at 29% of personal use; 49% of students reported brainstorming as a primary AI application. However, enterprise deployment remained stalled: 40% of U.S. workers use AI, but only 22% have organizational clarity on strategy; 73% of AI projects stalled at pilot phase; 42% of companies had abandoned their AI initiatives by Q3. Organizational barriers (data quality, skills gaps, integration challenges, unrealistic expectations) prevented broadscale production rollout despite specialist successes remaining bounded and replicable (Monks agency: 80% CTR improvement; Hikari System sustained 27% YoY uplift). The practice entered Q4 2025 as a stable but bifurcated landscape: mass-market consumer adoption of brainstorming as personal productivity tool, vs. enterprise dysfunction in deployment and scaling.
  • 2025-Q4: Consumer adoption matured; technical constraints and executive risk aversion intensified. Gallup survey (December) confirmed 45% of U.S. workers using AI at work, with 41% applying it to idea generation. Google launched Gemini Enterprise's brainstorming use case in GA, signaling vendor ecosystem maturity for guided content ideation. However, two critical barriers crystallized: sustained-use degradation (case study of Lumina Labs edtech startup documented novelty collapse after first five brainstormed ideas, with prompt quality declining by week three—proving architectural limitation, not prompting failure), and executive risk aversion (Dataiku survey: 60% of data leaders fear career risk from failed AI projects; 59% report past hallucinations causing business losses). Organizational adoption remained fragmented and risk-averse: 58% in isolated pilots, 19% operationally integrated. The practice's maturity profile remained locked: proven for individual and specialist creative workflows, but held back from broader deployment by idea homogenization and organizational readiness barriers.
  • 2026-Jan: Enterprise deployment scaling accelerated in early 2026 alongside ROI skepticism. Gartner reported 78% of US enterprises deployed AI in production (up from 54% in 2024), with 67% using AI search tools for business research; Deloitte's survey of 3,000+ executives showed 60% of workers now equipped with sanctioned AI tools and 34% reporting deep business transformation. Google published official Workspace adoption guidance positioning brainstorming as a key AI use case. However, profitability concerns intensified: PwC's survey of 4,454 CEOs found 56% reporting no significant AI ROI, with only 12% seeing dual cost and revenue benefits; Forrester predicted enterprises would defer 25% of planned 2026 AI spending into 2027. MIT Sloan research warned of risks from outsourcing creativity to AI, noting accuracy gaps in LLM outputs for enterprise knowledge work. By end of January 2026, the practice's bifurcation deepened—broad workforce access and production deployment metrics climbed, yet executive confidence in brainstorming tool ROI remained weak, prolonging the organizational readiness bottleneck despite vendor ecosystem maturity.
  • 2026-Feb: February 2026 solidified the emerging tension between enterprise-scale deployment and quality concerns. Deloitte published guidance on AI's role in accelerating product innovation from ideation through prototyping, signaling enterprise organizations moving beyond pilots into strategy-driven brainstorming integration. Fortune 500 adoption reached 78% with LLM projects deployed, and average productivity gains across content generation and ideation workflows measured 23%. Simultaneously, critical limitations surfaced: research documented that while AI generates higher idea volume, human-generated ideas remain more novel and valuable, with AI-assisted ideas converging toward common concepts (94% overlap in ChatGPT studies). Practitioner analysis revealed structural problems: training data bias drives tools toward generic rather than niche solutions, and LLMs exhibit compound failure modes where systems fabricate evidence to defend initial fabrications—a critical reliability issue for accuracy-dependent brainstorming workflows. By February 2026, the practice had achieved production-scale deployment at major enterprises, yet quality and reliability concerns positioned brainstorming support as a force-multiplier for individual idea volume while raising questions about diversity, authenticity, and trustworthiness in organizational ideation workflows.
  • 2026-Mar: March 2026 evidence confirmed brainstorming quality constraints and tool differentiation in production workflows. Peking University 7-day study (Zhou et al.) documented that ChatGPT users experienced initial creative boost but dropped to baseline by day 7, with homogenization effect persisting 30 days post-removal. MIT research showed LLM-assisted brainstorming reduces brain connectivity and cognitive engagement, with users struggling to recall their own work. Tool-specific testing across marketing professionals showed Claude ranked highest for ideation quality (structured, authoritative output) in production campaigns, ChatGPT fastest for volume/speed, while independent practitioners increasingly adopted multi-model triangulation to improve outcomes. Deployment context confirmed: brainstorming support is operationalized in narrow use cases (copywriting, product naming, rapid prototyping) but systematically trades diversity and cognitive ownership for volume—a tension architecture cannot overcome. The practice remained stalled at leading-edge, with proven individual/specialist adoption but blocked enterprise scaling due to ROI skepticism and fundamental quality trade-offs.
  • 2026-Apr: April 2026 data crystallized cognitive and organizational costs of brainstorming tool proliferation. BCG/Harvard survey of 1,488 employees found 14% report "AI brain fry"—cognitive exhaustion from excessive tool oversight—with 33% more decision fatigue and 39% more major errors when using 4+ tools; affected workers 36% more likely to quit. University of Barcelona peer-reviewed study documented that AI ranks last in independent visual ideation (below non-artists) but improves to non-expert level only when given human ideas embedded in prompts, reframing AI as "sophisticated executor of ideas" not a generator. Harvard/BCG 758-consultant study mapped the "jagged technological frontier"—AI excels at bounded creative tasks (brainstorming, writing) but degrades performance on novel problems with 19% accuracy losses on out-of-frontier work. Psychological research synthesis showed AI brainstorming improves output quality but reduces intrinsic motivation and neural connectivity; MIT EEG studies show users exhibit progressively weaker brain engagement over time, with long-term consequences for independent creative work. Real deployment case (London growth agency) documented Claude compressing weeks of strategy brainstorming workshops into days, proving narrow use-case effectiveness. Marketing deployment analysis: 83% of ad execs deployed AI in creative brainstorming (up from 60% in 2024), yet Coca-Cola and McDonald's AI campaigns failed publicly with consumer perception gap (45% consumers view AI ads negatively vs. 82% exec belief). On the adoption side, 58% of marketers now use AI for content ideation with 44% productivity gains and 11 hours saved per week; Adobe's survey of 800 creative professionals found 94% produce content faster and report 17 hours of weekly savings; Adobe Firefly AI Assistant reached GA for agentic creative direction across Photoshop, Premiere, and Illustrator. Practitioners confronting homogeneity have adopted multi-model triangulation with diversity-focused models (Flint: 7/10 novelty vs. Llama: 2.88/10) as a workaround, though the underlying constraint is acknowledged as architectural. Pattern solidified: brainstorming tools deliver measurable individual productivity in bounded, narrow tasks (copywriting, product naming, rapid iteration) but create cognitive overload, reduce diversity, and sustain organizational skepticism around ROI and consumer perception when deployed at scale.
  • 2026-May: Early May 2026 evidence confirmed the practice's bifurcated maturity profile with new research addressing design solutions. Harvard/Stanford field experiment at IG fintech showed GenAI effectively democratizes conceptualization (brainstorming) across expertise levels—non-experts matched expert output when equipped with AI—demonstrating brainstorming's value in expertise-gap bridging. However, execution remained expertise-dependent, suggesting brainstorming unlocks ideation but not judgment. University of Montreal published peer-reviewed evidence that diverse AI personas can mitigate the homogenization effect in collaborative ideation through prompt engineering; a design solution showing the constraint is not insurmountable. ACM FAccT 2026 peer study of 54 participants confirmed AI improves general-purpose brainstorming (48% more ideas, higher quality across 6/8 metrics) but delivers minimal gains on specialized high-stakes ideation, with design guidance: AI serves best as process facilitator (clustering, structuring) rather than core idea generator. MIT Sloan analysis positioned practice evolution—as commodity ideation tools mature, competitive value shifted upstream to problem framing ("Question Zero"), implying brainstorming support alone insufficient for innovation strategy. Negative signals intensified: Content Marketing Institute documented "idea inflation"—AI brainstorming velocity exceeding execution capacity, eroding team cohesion and meaning-making, with Gallup data showing employee engagement at decade-lows. Named solopreneur adoption case (Kristin Ginn) showed free AI tools enabling systematic ideation workflows via persona-based prompting, landing paying customers in 60 days—individual adoption outcome with strategic value. Peer synthesis noted LLM homogenization extends beyond content to cognitive style itself, standardizing how users think rather than simply what they produce. By May 2026, the practice landscape reflected clear tool maturity for bounded ideation tasks, emerging design patterns for diversity mitigation, and growing organizational fatigue with velocity-without-value deployment models.