The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that helps individuals generate ideas, explore possibilities, and think through problems from multiple angles. Includes structured ideation and creative-thinking prompts; distinct from content calendar planning, which generates topic ideas rather than supporting general brainstorming.
AI-assisted brainstorming delivers measurable value for individual ideation — but most organisations have not figured out how to deploy it beyond isolated pilots. Forward-leaning teams at a handful of named companies have demonstrated real gains in creative output and time savings, and consumer adoption continues to climb. The tooling is mature enough that vendors now ship brainstorming as a first-class enterprise feature. Yet a fundamental tension defines this practice's plateau: AI reliably increases idea volume while simultaneously reducing idea diversity, with research showing 94% of AI-assisted concepts converging toward common themes. That architectural constraint, combined with executive risk aversion and weak ROI signals at the organisational level, keeps brainstorming support firmly in leading-edge territory. Individual practitioners benefit; scaled enterprise rollout remains elusive.
The ground-level picture is split. On the consumer side, 45% of U.S. workers now use AI at work and 41% apply it to idea generation, while creative professionals report 26% average gains in creative ability. Specialist deployments continue to prove out: the agency Monks achieved an 80% CTR improvement and halved design hours using Gemini for creative concepting; ATB Financial sustains 40% daily usage with two hours of weekly time savings; Hikari System's brainstorm-driven marketing lifted customer engagement 27% year over year. Adoption is accelerating in marketing: 58% of marketers now use AI for content ideation, reporting 44% productivity gains and 11 hours saved per week, and Adobe's April 2026 survey of 800 creative professionals found that 94% produce content faster with AI, with a majority reporting 50%+ speed increases and 17 hours of weekly savings. Tool differentiation has hardened: Claude ranks highest for ideation quality in production marketing workflows (structured, authoritative output), ChatGPT leads for volume and speed, and practitioners increasingly adopt multi-model triangulation to improve brainstorming outcomes. Vendors signal maturity through product releases: Adobe's Firefly AI Assistant is now generally available for agentic creative direction across Photoshop, Premiere, and Illustrator; Google shipped Gemini Enterprise's brainstorming capability as a GA feature; and Deloitte publishes guidance on AI-accelerated ideation for product innovation.
Yet capability boundaries are now empirically clear. A landmark peer-reviewed University of Montreal study comparing 100,000+ human participants with major LLMs (GPT-4, Claude, Gemini, others) found that some AI systems now achieve parity with average human creativity on divergent tasks, yet peak human creativity remains firmly beyond every model tested. On complex creative work (poetry, storytelling, nuanced ideation), the most skilled human creators consistently outperform AI. Simultaneously, Duke University's peer-reviewed benchmark of 22 commercial LLMs against 100+ humans documented a core architectural constraint: LLMs produce responses that are "significantly more alike than the answers provided by humans," with homogenization persisting even when models are adjusted for temperature or creativity. Marketing practitioners in active deployment acknowledge the constraint; strategists at agencies such as Zeal have developed workarounds (custom prompt engineering, and multi-model triangulation using smaller diversity-focused models such as Flint, which scores 7/10 on novelty versus 2.88/10 for larger models) but describe the underlying limitation as intractable.

Enterprises remain skeptical. PwC's 2026 survey of 4,454 CEOs found 56% reporting no significant AI ROI, and Forrester projects that a quarter of planned 2026 AI spending will slip into 2027. Organisational adoption remains fragmented: 58% of deployments sit in isolated pilots, with only 19% operationally integrated. Training data bias steers outputs toward generic rather than niche solutions, and sustained use produces diminishing novelty, a limitation researchers confirm is architectural rather than fixable through prompting. For diversity-dependent ideation, human ideas still outperform on originality. The pattern is clear: AI-assisted brainstorming is proven where the use case is narrow and quantity is valued (copywriting, product naming, rapid prototyping), and blocked where it requires diverse perspectives or deep creative range.
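The multi-model triangulation workaround practitioners describe can be sketched as a simple pipeline: pool candidate ideas from several models, then greedily select the subset with the highest mutual novelty. This is an illustrative sketch, not any agency's actual tooling: the `query_model` stub stands in for real vendor API calls, and the word-level Jaccard distance is one deliberately simple novelty score among many possible choices.

```python
def novelty(a: str, b: str) -> float:
    """Jaccard distance between word sets: 1.0 means no shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def query_model(model: str, prompt: str) -> list[str]:
    # Hypothetical stub with canned answers; a real pipeline would
    # call each vendor's API here.
    canned = {
        "model-a": ["loyalty app with gamified rewards",
                    "loyalty app with tiers"],
        "model-b": ["pop-up repair cafe partnership",
                    "loyalty app with tiers"],
        "model-c": ["community tool-lending library"],
    }
    return canned.get(model, [])

def triangulate(models: list[str], prompt: str, k: int = 3) -> list[str]:
    """Pool ideas across models, drop exact duplicates, then greedily
    pick a diverse subset: each new pick maximises its minimum
    novelty against the ideas already selected."""
    pool: list[str] = []
    for m in models:
        for idea in query_model(m, prompt):
            if idea not in pool:
                pool.append(idea)
    selected = [pool[0]]
    while len(selected) < min(k, len(pool)):
        best = max(
            (i for i in pool if i not in selected),
            key=lambda i: min(novelty(i, s) for s in selected),
        )
        selected.append(best)
    return selected

ideas = triangulate(["model-a", "model-b", "model-c"],
                    "customer retention ideas")
```

With the canned responses above, the near-duplicate "loyalty app with tiers" never makes the final cut: the greedy step prefers the pop-up and library ideas, which share no words with the first pick.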
— Psychology Today synthesis of 2026 peer studies: LLMs homogenize outputs more than humans mimic each other; standardization affects not just content but cognitive style and thinking patterns themselves.
— Named solopreneur (Kristin Ginn) deployed free AI for systematic ideation using persona-based prompting; refined business model from strategy feedback, landed customers in 60 days—real individual adoption outcome.
— Peer-reviewed study directly addressing homogenization problem with experimental validation: diverse personas in prompting preserved story diversity vs. human-only baseline, offering architectural design solution.
— MIT Sloan analysis: GenAI commoditized ideation itself; competitive advantage shifted to 'Question Zero'—problem reframing—indicating practice maturity evolution and strategy implications for deployers.
— Field experiment at IG fintech with Harvard/Stanford: GenAI eliminated performance gaps in conceptualization (brainstorming) across expertise levels but failed at execution; demonstrates brainstorming as domain where AI closes expertise gaps.
— FAccT 2026 study of 54 participants in team brainstorming: AI improved ideation on general tasks (48% more impacts, higher quality) but minimal gains on specialized high-stakes work; design guidance for effective AI intervention.
— Negative signal: 'idea inflation' from AI brainstorming exceeds team execution capacity, eroding team cohesion and meaning-making; Gallup research shows employee engagement at a decade low amid an AI-enabled productivity paradox.
— Active marketing deployments of ChatGPT and Claude document homogeneity barrier; practitioners developed workarounds: custom prompts, multi-model triangulation, smaller diversity-focused models (Flint 7/10 vs. Llama 2.88/10 on novelty).
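The persona-based prompting pattern cited in the evidence above (used by the named solopreneur, and experimentally validated as a diversity-preserving design) can be sketched as a small fan-out step: one ideation task becomes several persona-conditioned prompts, each sent as a separate request. The persona list and prompt template here are invented for illustration; the point is the structure, not the wording.

```python
# Illustrative personas; real deployments would tailor these to the task.
PERSONAS = [
    "a risk-averse CFO at a regional bank",
    "a first-time founder bootstrapping a hardware startup",
    "a librarian designing community programmes",
]

def persona_prompts(task: str, personas: list[str]) -> list[str]:
    """Fan one ideation task out into persona-conditioned prompts.
    Each prompt is sent as its own model call; the cited research
    found this preserves idea diversity that repeated plain
    prompting tends to lose."""
    template = (
        "Adopt the perspective of {persona}. "
        "From that viewpoint only, propose three ideas for: {task}"
    )
    return [template.format(persona=p, task=task) for p in personas]

prompts = persona_prompts("reducing customer churn", PERSONAS)
```

Because each persona is queried independently, the homogenization pressure acts within each perspective rather than across the whole idea pool, which is what the experimental study above measured.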