The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that generates integrated multi-format content combining text, images, and layout in a single workflow. Includes newsletter generation and social card creation; distinct from content repurposing which adapts existing content rather than generating multimodal output from scratch.
Multimodal content generation -- AI systems that orchestrate text, image, video, and layout production within a single workflow -- has reached a new maturity milestone with the emergence of agentic orchestration and team-scale deployment infrastructure. Q2 2026 marks a generational shift: Adobe's Firefly AI Assistant (launched April 2026) consolidates Photoshop, Premiere, Lightroom, Illustrator, and 30+ partner models into a single conversational interface that accepts natural-language briefs and orchestrates multi-step workflows autonomously, while Firefly Creator team offerings (May 2026) let teams deploy multimodal workflows at scale without a full Creative Cloud subscription.

Adoption breadth is now mainstream: 78% of multinational brands use AI-generated or AI-enhanced multimodal content (images, copy, backgrounds, synthetic humans), signaling production readiness across enterprises, and competitors (Luma Uni-1, ByteDance Inset, JoyAI) demonstrate rapid capability convergence on spatial reasoning, multi-reference consistency, and culture-aware aesthetics.

Yet hallucination remains a shadow over scale: hallucination rates in AI-generated media doubled from 18% (2024) to 35% (2025); 12,842 AI-generated articles were retracted in Q1 2025 for fabricated content; and consumer trust in AI content dropped from 60% to 26%. The practice's defining tension is now capability maturity versus quality assurance and production economics. Unified multimodal models suffer from architectural limitations: "mismatched decoder" problems constrain non-textual data utility, long-sequence text-image generation degrades, and token costs remain economically unsustainable at scale. Production engineering reveals critical deployment bottlenecks: VRAM constraints (24GB+ for frontier models), NSFW classifier bias (85% false-positive rates, 2–3x demographic bias), and compliance watermarking overhead (10–50ms per image).
Production agents extract measurable value -- 60-75% time reduction, $3 AI versus $150-400 human cost per asset -- but only for organizations with tolerance for governance trade-offs. Creators adopt hybrid workflows combining specialized models (Midjourney for mood, Firefly for brand safety, FLUX for control) rather than single-model dominance, indicating ecosystem maturity. Agentic orchestration signals the beginning of the next adoption wave for production-first organizations; hallucination, reliability, and production infrastructure bottlenecks remain the ceiling for risk-averse sectors.
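As a rough illustration of the economics above, the cited figures ($3 AI versus $150-400 human cost per asset) can be put into a toy per-asset savings model. The governance review cost is a hypothetical parameter, not a sourced number:

```python
# Toy cost model using the per-asset figures cited above ($3 AI vs.
# $150-400 human). The review_cost parameter (human QA per AI asset)
# is a hypothetical assumption for illustration.

def per_asset_savings(ai_cost=3.0, human_cost=150.0, review_cost=0.0):
    """Savings per asset when AI generation replaces human creation."""
    return human_cost - (ai_cost + review_cost)

def campaign_savings(n_assets, ai_cost=3.0, human_cost=150.0, review_cost=0.0):
    """Total savings across a campaign of n_assets."""
    return n_assets * per_asset_savings(ai_cost, human_cost, review_cost)

# Low end of the human-cost range, no review overhead:
print(per_asset_savings())                                      # 147.0
# High end, with a hypothetical $40 QA review per asset:
print(per_asset_savings(human_cost=400.0, review_cost=40.0))    # 357.0
print(campaign_savings(1000, human_cost=400.0, review_cost=40.0))  # 357000.0
```

Even with a generous assumed review overhead, the margin stays large at campaign scale, which is consistent with the "compelling adoption despite quality and trust gaps" framing.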
Agentic orchestration and team-scale deployment represent the Q2 2026 production frontier. Adobe launched Firefly AI Assistant (April 15, 2026), consolidating Photoshop, Premiere Pro, Lightroom, Illustrator, Express, and 30+ partner creative models (Kling 3.0, Veo 3.1, ElevenLabs Multilingual v2) into a single conversational interface. A May 2026 update added Firefly Creator team plans (Pro, Pro Plus, Premium tiers), which let teams without Creative Cloud deploy multimodal workflows free of full-subscription overhead, addressing SMB adoption barriers. The system accepts natural-language briefs, learns creator preferences, and orchestrates multi-step workflows without app switching. This addresses a core production friction point: outcome-centric direction replaces tool-centric workflows.
Competitive multimodal landscape accelerates. Beyond Adobe's dominance: Luma's Uni-1 (May 2026) introduces spatial reasoning and multi-reference consistency with culture-aware aesthetics (photorealistic, manga, webtoon); ByteDance's Inset (May 2026, research) scales interleaved text-image generation with 15M synthetic training samples; JoyAI-Image achieves SOTA on visual understanding and instruction-guided editing. Ecosystem maturity indicator: creators adopt hybrid workflows combining Midjourney (aesthetics), GPT-image-2 (layout/assembly), Ideogram (typography), Firefly (brand-safe production), FLUX (multi-reference control) rather than single-model dominance, signaling use-case specialization over feature parity.
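The hybrid-workflow pattern above amounts to routing each task to the model that specializes in it rather than sending everything to one vendor. A hypothetical sketch -- the model roles come from the text, but the routing table and function are illustrative, not any vendor's real API:

```python
# Illustrative model-routing table for a hybrid multimodal workflow.
# Model-to-role assignments follow the specialization described above;
# the task names and route() helper are hypothetical.

ROUTES = {
    "moodboard":  "Midjourney",   # aesthetics / exploratory ideation
    "layout":     "GPT-image-2",  # layout and assembly
    "typography": "Ideogram",     # text rendering
    "production": "Firefly",      # brand-safe final assets
    "multi_ref":  "FLUX",         # multi-reference control
}

def route(task: str, default: str = "Firefly") -> str:
    """Pick a specialized model for a task; fall back to brand-safe production."""
    return ROUTES.get(task, default)

print(route("typography"))  # Ideogram
print(route("unknown"))     # Firefly (brand-safe default)
```

The design point is the one the section makes: value comes from use-case specialization across models, not from betting on a single model winning every task.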
Multimodal adoption breadth is now enterprise mainstream. The World Federation of Advertisers (April 2026) documents that 78% of multinational brands use AI-generated or AI-enhanced multimodal content in production marketing: product images (87%), copy (80%), backgrounds (77%), altered people (33%), fully synthetic humans (18%). However, only 67% of brands have internal AI policies despite 82% believing transparency essential. The multimodal AI content creation market is projected to reach $80.12B by 2030 (32.5% CAGR from 2025). Adobe maintains market leadership with Firefly at $250M+ ARR (Q1 2026); Midjourney follows at $500M revenue and 19.83M users; specialized platforms (Runway, Pika) drive video adoption. Economic driver: AI-generated content costs $3 per asset versus $150-400 human-created, compelling adoption despite quality and trust gaps.
Production infrastructure and hallucination remain critical constraints. Real-world deployment analysis reveals hardware bottlenecks: SDXL requires 10–12GB VRAM; Flux requires 24GB; compliance watermarking adds 10–50ms of per-image overhead. NSFW classifier bias (85% false-positive rate, 2–3x higher false-positive rate on women) creates content-moderation friction.

Hallucination rates in AI-generated news and editorial content doubled from 18% (August 2024) to 35% (August 2025), and Q1 2025 saw 12,842 AI-generated articles retracted for fabricated quotes and invented sources. Consumer preference for AI-generated content dropped from 60% (2023) to 26% (current), reflecting trust erosion. Architectural reliability constraints remain: long-sequence text-image generation degrades as sequences grow, token costs are economically unsustainable at scale for multi-step pipelines, and unified models struggle with physics consistency (Sora, Runway) and cross-modality hallucination.

Governance fragmentation persists: content authenticity standards (C2PA), EU AI Act Article 50 transparency requirements (effective August 2026), IP indemnification, and bias protocols remain inconsistent across jurisdictions despite enterprise copyright indemnification. Adoption ceiling: production agents deliver 60-75% time savings for risk-tolerant organizations; quality assurance, governance, and production engineering constraints will limit broader enterprise deployment until infrastructure and reliability mature.
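To put the cited watermarking overhead (10–50ms per image) in perspective, a back-of-envelope throughput calculation helps; the 2-second base generation time used here is an assumed figure for illustration, not a sourced benchmark:

```python
# Back-of-envelope impact of per-image compliance-watermarking overhead
# (10-50 ms, per the analysis above). The 2 s base generation time is a
# hypothetical assumption chosen for illustration.

def throughput_loss(base_s: float, overhead_ms: float) -> float:
    """Fraction of pipeline throughput lost when overhead_ms is added per image."""
    total = base_s + overhead_ms / 1000.0
    return 1.0 - base_s / total

for overhead in (10, 50):
    print(f"{overhead} ms on a 2 s generation: "
          f"{throughput_loss(2.0, overhead):.2%} throughput lost")
```

Under these assumptions the per-image cost is small in relative terms (well under 3% even at the 50ms end), which suggests the real friction is operational (building and auditing the watermarking step at all) rather than raw latency.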
— Official Adobe announcement of three new Firefly Creator team pricing tiers (Pro/Pro Plus/Premium) enabling scaled multimodal content production (text-to-image, graphics, video) without full Creative Cloud subscription.
— Product announcement for Luma's Uni-1 multimodal reasoning model featuring scene completion, spatial reasoning, multi-reference generation, and culture-aware visual generation across photorealistic, manga, and webtoon aesthetics.
— Provides cost comparison ($3 AI vs $150-400 human) and performance metrics showing economic drivers pushing AI-generated content adoption despite performance and trust gaps.
— Recent research paper from ByteDance proposing Inset, a unified multimodal generation model that handles complex interleaved text-image instructions. Demonstrates scalable multimodal generation with 15M synthesized samples and extends to multimodal editing.
— Official Adobe release notes aggregator documenting two major 2026 Firefly announcements: AI Assistant beta (April 27) enabling conversational multi-format orchestration (images, design, video) and Adobe Brand Intelligence (April 20) for validating and assembling on-brand multimodal content at scale.
— Production engineering analysis exposing critical deployment bottlenecks in diffusion-based systems: VRAM constraints (24GB for Flux), NSFW classifier bias (85% false-positive, 2–3x demographic bias), watermarking compliance requirements—signals adoption barriers beyond tooling hype.
— Research paper describing JoyAI-Image, a unified multimodal foundation model achieving state-of-the-art performance on visual understanding, text-to-image generation, and instruction-guided image editing tasks.
— Amazon Science paper describing automated pipeline for generating synthetic multimodal datasets (text-image pairs). Directly addresses bottleneck in multimodal content generation—lack of rich conversational multimodal training data.
2023-H1: Adobe integrated Firefly into Express and Creative Cloud apps, enabling unified multi-format content workflows. Multimodal LLMs reached research maturity with emerging applications in education. Systematic failure analysis revealed reliability challenges in orchestrating multiple generative modalities.
2023-H2: Adobe expanded Express with multimodal AI features (Generative Fill, Generate Template, Translate, TikTok integration) serving millions. Text-to-image generation reached 150B+ annual production (Adobe Firefly alone hit 1B assets in 3 months). However, only 10% of organizations achieved production GenAI deployments; infrastructure, compliance, and IP risk remained barriers despite 12% ROI among early adopters in content marketing.
2024-Q1: Enterprise adoption accelerated: 65% of enterprises adopted generative AI (vs. 11% in 2023); Firefly reached 6.5B images; 83% of creative professionals reported using generative AI. Multimodal market projected to reach $19.85B by 2032 (34.4% CAGR). However, governance emerged as primary constraint: academic research and WHO/Microsoft analyses documented hallucination failures, bias risks, and unintended harms requiring transparency, provenance standards, and regulatory frameworks. Data security and IP risk remained top adoption barriers for risk-averse sectors.
2024-Q2: Real-world deployment validation: Midjourney-powered newsletter reached $30K annual revenue; Amorepacific confirmed cost/time efficiency with Firefly for product marketing. Vendor competition intensified with Firefly Image 3 and DALL-E 3 refinements, but quality variance persisted. Governance barriers hardened: academic benchmarks revealed persistent failures in scientific visualization (text, spatial, numeric errors); research documented exacerbated bias in models like CLIP and Stable Diffusion. Content authenticity standards and regulatory clarity remained undefined; smaller vendors (DALL-E 2) discontinued as market consolidated.
2024-Q3: Multimodal content generation transitioned to mainstream production deployment. Adobe expanded Firefly to video generation (announced September 2024) and launched Content Analytics for measuring AI-generated content performance. Enterprise adoption accelerated with 49% of Australian businesses creating social media content with AI (projected 61% by 2026). Cloud platforms published reference architectures normalizing deployment patterns. Governance remained the primary barrier: compliance, content authenticity standards, and IP risk persisted as constraints despite technology maturity.
2024-Q4: Multimodal content generation solidified as production baseline with 79% of marketers using GenAI for content tasks; Firefly reached 13+ billion images generated; video generation entered beta. However, production reliability challenges surfaced: academic research documented persistent hallucinations and object composition failures across multimodal models, while enterprise platform integration issues (DALL-E 3 API errors on Azure) constrained adoption in regulated sectors. Compliance uncertainty remained the primary barrier despite widespread capability maturity.
2025-Q1: Adobe expanded Firefly Services with APIs and Custom Models for enterprise personalized content production at scale (March 2025). Market validation: multimodal AI market reached $1.6B with sustained growth; 89% of AI search queries incorporate visual elements, confirming production-scale adoption. Vendor services consolidation shifted from point tools to enterprise platforms enabling multi-format workflows at scale.
2025-Q2: Multimodal content generation reached 16+ billion generated artifacts as production infrastructure standard. Creator adoption continued advancing with 83% of content creators incorporating AI (up from 79% marketer adoption in Q4 2024). Adobe reported 700M+ monthly active users with Digital Media ARR growth to $4.35B+ (12% YoY). However, production monetization headwinds emerged: Firefly video (beta since September 2024) faced user backlash over aggressive paywall structure, quality issues (temporal artifacts), and free competitor advantages (Pika, Luma). Consumer sentiment remained mixed: 55% uncomfortable with AI-generated media; 33% of creators feared replacement. Governance and IP frameworks remained undefined across jurisdictions.
2025-Q4: Multimodal content generation solidified as critical infrastructure with 20B+ total Firefly generations and 70M+ freemium users (+35% YoY). Adobe achieved $5B+ AI-influenced ARR with Firefly 4 launch (10x faster), custom models enabling $7M+ enterprise revenue per customer, and December 2025 Microsoft integration into ChatGPT. Enterprise adoption reached 82% weekly AI usage with 72% measuring ROI, but monetization headwinds persisted (video paywall backlash) and reliability constraints remained (object composition failures, 76% error rate in multi-object tasks). Competitor emergence: Google Gemini and Veo gained creator preference, though larger orgs stayed on Adobe/Midjourney. Compliance and authenticity standards remained primary adoption barriers despite enterprise copyright indemnification.
2026-Jan: Adobe Firefly Foundry launched with Fortune 100 partnerships (Disney, CAA, B5 Studios) for brand-specific model tuning. Enterprise deployment accelerated: 65–78% of large enterprises testing/deploying multimodal AI, 34M images daily, 72% of companies integrating AI tools into marketing with 30% sales impact. Cloud platforms embedded multimodal as default (AWS Bedrock GA, Google Gemini expansion, ChatGPT integration). However, quality bifurcation widened: Adobe/Midjourney delivered reliable commercial output while DALL-E regressed in artistic capability and composition reliability declined. Consumer trust remained fragile (55% uncomfortable with AI media). Production constraints for scientific, spatial, textual content remained unsolved despite accelerating enterprise adoption.
2026-Feb: Adobe expanded Firefly to unlimited generations, signaling platform maturity for Creator baseline (86% use creative AI daily). Market validation confirmed: AI content production market reached $1.5B (2025) with 17.3% CAGR to $5.4B by 2033; 40% of digital content outputs now AI-generated across advertising and e-commerce. However, architectural limitations emerged: multimodal LLMs suffer from "mismatched decoder" problems where non-textual data is treated as noise (removing 64-71% of its variance improved performance), hallucination surveys documented persistent failures across image/video/audio modalities, and annotation data quality barriers remain critical. Real-world deployment analysis revealed three critical hurdles: token cost explosions (multi-step pipelines unsustainable at scale), latency (6-15+ seconds harming UX), and accuracy bottlenecks (hallucinations create liability in high-stakes applications). Newsletter automation case study (85% cost reduction, 320% revenue gains) validated niche multimodal workflows while broader reliability remained constrained.
2026-Mar: Adobe Q1 FY2026 confirmed the practice at production scale: Firefly ARR exceeded $250M (+75% QoQ), video generative actions grew 8x year-over-year, and audio doubled, while enterprise adoption reached 60% of applications combining two or more modalities. Production-ready multimodal agents now deliver integrated campaigns (copy, image, video, audio from a single brief) at $50-200 per video versus $5-15K agency baseline, with 60-75% production time reduction. Research identified a long-sequence reliability constraint in unified multimodal models — text-image interleaving quality degrades as sequences grow — and multimodal AI market projections put 2026 at $2.83B growing to $8.24B by 2030 at 30.6% CAGR.
2026-Apr: Agentic orchestration milestone: Adobe launched Firefly AI Assistant (April 15, 2026), consolidating Photoshop, Premiere, Lightroom, Illustrator, Express, and 30+ partner models into a single conversational interface that orchestrates multi-step workflows; NBCUniversal deployed it to 2,000+ creatives, compressing brief-to-campaign from 3 weeks to under 10 minutes. WFA study documents 78% of multinational brands using AI-generated multimodal content in production; global AI content generation market reached $26.9B (2026), projected $168.7B by 2034 (25.8% CAGR). However, hallucination doubled year-over-year to 35% (2025) with frontier models spiking to 18.7% on legal queries; 85% of consumers report uncanny-valley reactions to AI-generated content and trust dropped from 60% to 26%, establishing quality assurance as the defining ceiling for the practice's next adoption wave.
2026-May: Platform and model capabilities advanced on two fronts. Adobe expanded Firefly for team-scale SMB deployment with three Creator team tiers (Pro/Pro Plus/Premium) that remove the full Creative Cloud subscription requirement, directly addressing the cost barrier for non-enterprise adopters; Adobe Brand Intelligence (April 20) added on-brand multimodal assembly validation. Competitor models converged rapidly: Luma's Uni-1 introduced spatial reasoning, multi-reference consistency, and culture-aware generation (photorealistic, manga, webtoon); ByteDance's Inset scaled interleaved text-image generation with 15M synthetic training samples; JoyAI-Image reached SOTA on visual understanding and instruction-guided editing. Independent analysis positioned Firefly's strength in approval-workflow production (brand-safe variations, cleanup) rather than exploratory ideation, signaling market segmentation by use case rather than vendor dominance. Production engineering friction remained the structural ceiling: Flux requires 24GB VRAM, NSFW classifiers show 85% false-positive rates with 2–3x demographic bias, and compliance watermarking adds 10–50ms per-image overhead — confirming that deployment infrastructure, not model capability, is the binding constraint for scaled production adoption.