The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI generation of longer narrative videos, explainers, and educational content with coherent storylines. Includes multi-scene generation and narrative consistency; distinct from short-form generation, which produces clips rather than structured narratives.
AI-generated long-form narrative video — explainers, educational content, structured short films — has attracted serious commercial investment but remains fundamentally experimental. Billion-dollar partnerships (Disney-OpenAI, WPP-Google) and platforms like Vidu Q3 with 40M creators signal that studios and brands see strategic potential. The tools, however, have not caught up to the ambition. May 2026 research (A²RD, EduStory, FreeSpec) demonstrates continued progress on coherence and consistency, with A²RD benchmarking 30% consistency gains and 20% narrative coherence improvements across 1-10 minute videos. Yet the underlying constraints remain: MIT-IBM Watson benchmarks show coherence degrading sharply after roughly eight seconds across all major models due to fixed-length context windows — a representational constraint, not merely a tuning problem. Professional-quality segments remain capped at about 60 seconds; outputs beyond that require multi-model orchestration, intensive prompting, and human curation at every stage. Character consistency works in stylized or animated scenarios but breaks down in realistic multi-character narratives with complex physics. New product maturity is evident — industrial-grade engines like BACH now rank in the Artificial Analysis top 10 with character consistency and directorial precision — yet practitioners still report that generative models often "create more work, not less," producing isolated clips rather than connected storylines. Production deployment follows a hybrid pattern: AI accelerates ideation, storyboarding, and B-roll generation, while humans maintain narrative logic, emotional coherence, and quality control. Educational deployments (VideoTutor's 50M views; ZSky AI's 105K creators across 39 countries using synchronized video+audio) show real adoption in constrained vertical markets. Autonomous long-form generation — where AI handles story structure end-to-end — is not yet viable at production quality. The practice sits firmly at the experimental frontier: real money is flowing in, credible demos exist, and adoption is emerging in specific verticals, but the gap between a compelling two-minute clip and a coherent ten-minute narrative remains structurally wide.
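The eight-second coherence ceiling is easiest to see as token arithmetic. The back-of-envelope sketch below uses purely illustrative numbers: the frame rate, tokenization density, and context size are assumptions for demonstration, not any published model's specification.

```python
# Back-of-envelope: why a fixed-length context window caps coherent clip
# length. Every number here is an illustrative assumption, not a measured
# value from any benchmark or model card.

fps = 24                   # assumed output frame rate
tokens_per_frame = 1024    # assumed spatial patches per latent frame
temporal_compression = 4   # assumed: one latent frame covers 4 real frames
context_window = 50_000    # assumed token budget of the generator

tokens_per_second = fps / temporal_compression * tokens_per_frame  # 6,144
max_coherent_seconds = context_window / tokens_per_second

print(f"{tokens_per_second:,.0f} tokens per second of video")
print(f"context window exhausted after ~{max_coherent_seconds:.1f} s")
# ~8.1 s under these assumptions. Beyond that point the earliest frames
# fall out of the window entirely: the model cannot attend to them, so
# character identity and scene state drift regardless of prompting.
```

The specific numbers vary by model, but the shape of the constraint does not: token cost grows linearly with duration while the window stays fixed, which is why the failure is representational rather than something better tuning can solve.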
The market consolidated in March-May 2026. OpenAI shut down Sora on March 25 due to unsustainable unit economics ($15M/day operating costs against $2.1M lifetime revenue, <8% 30-day retention) and completed the Sora API shutdown by late April, though Sora 2 remained available via API into September 2026. The market stabilized around three commercial platforms: Runway, Kling, and Veo. Yet commercial breadth continues to expand — Higgsfield AI's global film competition attracted 8,752 submissions from 139 countries (the broadest adoption signal to date), and China's micro-drama sector shows 90% cost reduction and 41% AI content penetration, evidence of production-scale adoption in specific genres. New entrants emerged in May 2026: Video Rebirth launched BACH, a Tier 5 "Cinematic Director" class engine achieving character consistency for 30-second multi-shot films, ranking #6 on the Artificial Analysis benchmark at debut and entering enterprise pilots across studios and agencies.
Actual deployment evidence shows a bifurcated reality. Educational institutions have begun scaling long-form narrative video: peer-reviewed evidence (NIH/PMC, March 2026) documents successful deployment in medical education with 72-94% cost reduction and improved learning outcomes. VideoTutor demonstrates real adoption at scale with 50M TikTok views, $11M seed funding, and 1,000+ enterprise API inquiries, confirming long-form educational narrative generation is viable in practice. ZSky AI reports 105,000+ creators across 39 countries using AI video for instruction with synchronized audio-video capability. Production ROI tracking shows hybrid workflows (combining Sora and Runway) achieving 93.3% time reduction (6 days to 8 hours) for 60-second product explainers with 340% ROI. These successes are real but contained to narrow use cases (education, explainers, B-roll) with significant human oversight.
Character consistency, however, remains a brick wall for multi-scene narrative work. A March 2026 practitioner case study tested four production-grade tools (Runway, Kling, Seedance, Pika) on a 60-second corporate explainer requiring a single character across 8 scenes, a minimal production requirement. The result: 15-20 regenerations per scene, with the character's hair length, skin tone, and clothing drifting from scene to scene. The project was ultimately abandoned in favor of hiring human talent. Feature-length work is worse: a PhD researcher quantified the barrier as AI's inability to sustain coherence across the 100,000-150,000 frames of a feature film. Fundamental research barriers remain. Atlas Cloud's technical analysis identifies three unsolved obstacles in long-form generation: the VRAM wall (O(n²) attention cost saturates H200 GPUs at 10-second clips), temporal drift (progressive position, color, and lighting shifts), and causal consistency constraints (bidirectional attention requires waiting for the final frame).
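The VRAM wall, at least, is straightforward to quantify. Reusing the illustrative tokenization from the sketch above, and assuming naive full attention with 24 heads and fp16 score matrices (assumptions for demonstration; fused kernels avoid materializing the matrix, but compute still scales quadratically), the blow-up lands almost exactly where the analysis puts it:

```python
# Quadratic attention cost vs. clip length, under the same illustrative
# tokenization as the sketch above (6,144 tokens per second of video).

BYTES_FP16 = 2
H200_HBM_GB = 141            # H200 memory capacity
TOKENS_PER_SECOND = 6_144    # assumed, carried over from above

def attention_matrix_gb(seconds: float, heads: int = 24) -> float:
    """Memory to hold one layer's full fp16 attention matrices, in GB."""
    n = seconds * TOKENS_PER_SECOND
    return heads * n * n * BYTES_FP16 / 1e9

for s in (2, 5, 10, 20):
    gb = attention_matrix_gb(s)
    verdict = "exceeds one H200" if gb > H200_HBM_GB else "fits"
    print(f"{s:>3} s clip -> {gb:>9,.0f} GB per layer ({verdict})")

# Under these assumptions a single layer's attention matrices cross the
# 141 GB mark right around the 10-second clip length cited above, and
# doubling the duration quadruples the cost.
```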
Enterprise adoption metrics (42% of Fortune 500 marketing departments using AI video tools; marketing teams overall at 65%, up from 12% in 2024) mask what's actually happening: deployment clusters around short-form clips (10-25 seconds), concept ideation, and B-roll replacement. Consumer trust barriers persist: 36% say AI video lowers brand perception, 67% cite robotic gestures, 55% flag unnatural voices. For now, every workflow that attempts narrative length above 2-3 minutes depends on human editorial judgment to manage what the models cannot sustain.
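In production, that human dependency takes a concrete shape: chain short generations, condition each shot on the previous clip's final frame, and gate every take on human review. The sketch below shows only the shape of that loop; `generate_clip`, `last_frame`, and `human_approves` are hypothetical stand-ins, not any vendor's API.

```python
# Sketch of the hybrid long-form pattern described above. The three stub
# functions are hypothetical placeholders to be wired to whatever video
# tool and review process a team actually uses.

MAX_TAKES = 20  # case studies above report 15-20 regenerations per scene

def generate_clip(prompt: str, init_frame=None):
    """Hypothetical: call the team's video model of choice."""
    raise NotImplementedError

def last_frame(clip):
    """Hypothetical: extract the final frame for continuity conditioning."""
    raise NotImplementedError

def human_approves(clip) -> bool:
    """Hypothetical QA gate: a human checks identity, lighting, physics."""
    raise NotImplementedError

def assemble_narrative(shot_prompts: list[str]) -> list:
    """Generate a multi-shot sequence, carrying the last frame forward so
    each shot starts where the previous one ended."""
    timeline, prev_frame = [], None
    for prompt in shot_prompts:
        for _ in range(MAX_TAKES):
            clip = generate_clip(prompt, init_frame=prev_frame)
            if human_approves(clip):    # humans own the continuity judgment
                break
        else:
            raise RuntimeError(f"no acceptable take for shot: {prompt!r}")
        timeline.append(clip)
        prev_frame = last_frame(clip)   # chain continuity into the next shot
    return timeline
```

Note what the loop does not do: it never fixes coherence. It only localizes failure to one shot at a time and makes the review gate explicit, which is why costs in the case studies scale with shot count rather than running time.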
— 105,000+ creators across 39 countries using AI video generation for education; platform enables synchronized audio-video for instructional content, demonstrating real adoption in educational narrative workflows.
— Peer-reviewed research framework for multi-shot STEM instructional video with pedagogical consistency tracking; advances knowledge-state tracking and narrative coherence, core long-form generation challenges.
— Agentic autoregressive diffusion framework addresses semantic drift and narrative collapse; benchmarks 1-10 minute videos showing 30% consistency and 20% narrative coherence gains vs baselines.
— Technical engineering analysis of six research approaches to long-form generation; quantifies bleeding-edge barriers (VRAM wall, temporal drift, causal consistency) and consolidates assessment into single practitioner-facing resource.
— Industrial-grade BACH engine achieves character consistency and directorial precision for 30-second multi-shot films; ranked #6 on Artificial Analysis benchmark; entered enterprise pilots across studios and agencies.
— Named organization (VideoTutor) achieves 50M TikTok views, $11M seed funding, 1,000+ API inquiries; real deployment of adaptive instructional video generation with documented enterprise integration interest.
— Ecosystem maturity milestone: Tier 5 'Cinematic Director' APIs now production-ready (multi-shot, physics-aware, audio-sync, scene graphs); major vendors report transition from random generation to directorial control.
— Peer-reviewed 2026 preprint identifying core long-form technical barriers: multi-shot narrative logic, spatiotemporal text-video misalignment, and character consistency failures in current models.
2024-Q2: Research papers and evaluations dominate the landscape. Academic benchmarks reveal AI's struggles with long-form narrative comprehension and temporal reasoning. Product announcements (Runway Gen 3, Open-Sora) focus on short-form generation. Practitioner assessments and critical analyses emphasize generation time, consistency, and cost barriers preventing production deployment. No evidence of commercial adoption for full-length narrative production.
2024-Q3: Technical coherence research accelerates (narrative consistency frameworks, follow-on shot limitations). Industry reports confirm major strides in video generation quality overall, but practitioner and critical analyses deepen understanding of long-form-specific barriers: diffusion models cannot reliably generate follow-on shots without breaking narrative logic; production economics remain prohibitive (300:1+ generation ratios). Early commercial attempts (brand films) remain short-form rather than long-form narratives. Viewer sentiment shows cautious adoption (75% receptive but 90% concerned about accuracy/authenticity). No advancement in long-form commercial production.
2024-Q4: Product maturation accelerates (Veo 2, Sora Turbo, Amazon Nova Reel, open-source Hunyuan) with expanded access and improved quality in short-form generation. However, research analysis of long-form comprehension (HourVideo dataset) reveals AI models at 25-37% accuracy vs. 85% human baseline, indicating fundamental gaps in sustained attention and temporal sequencing. Practitioner assessments document specific narrative failures (semantic misinterpretation, character identity breaks) and identify 20-second clip length as practical ceiling. Media and entertainment industry remains cautiously hesitant despite tool proliferation; no evidence of production-critical long-form narrative generation deployment.
2025-Q1: Academic research intensifies around narrative coherence solutions (Meta's OneStory, StoryAgent multi-agent framework, VideoStudio LLM-guided synthesis), demonstrating continued innovation in character consistency and multi-scene generation. However, real-world production case studies reveal persistent practical barriers: creative agencies report AI's inability to handle realistic human motion and physics, while high-profile deployments (Coca-Cola's campaign) require thousands of iterations with visible continuity issues. Production workflows shift toward multi-agent orchestration and human-in-the-loop curation rather than autonomous generation. Technology remains constrained by 20-second clip ceiling, low-yield generation ratios, and authenticity concerns; no evidence of autonomous long-form narrative production in professional media workflows.
2025-Q2: Major commercial investment accelerates: Runway's $300M Series D funds Runway Studios for long-form AI film production, with Gen-4 character consistency features. Research advances ~60-second narrative generation with character consistency, extending the prior 20-second ceiling. Industry analysts identify character consistency as "the holy grail" problem for long-form adoption. Critical assessments document deployment barriers: strategic oversight requirements, compliance validation gaps, production economics still prohibitive despite cost savings in early-stage ideation. Media studios remain cautious. Practitioner analysis emphasizes 70% cost benefits offset by emotional depth gaps and authenticity concerns. No evidence of autonomous long-form narrative deployment in professional production; hybrid human-AI workflows remain dominant.
2025-Q3: Product releases accelerate narrative capability focus: Runway Gen-3 Alpha introduces cinematic storytelling controls; Sora 2 (end-Q3) improves physical world simulation. Research advances character consistency evaluation frameworks and multi-stage narrative pipelines with explicit stability metrics. Market projections reach $10B by 2027. However, production deployment remains constrained: character consistency improvements apply to controlled scenarios (animation, stylized content) but fail in realistic multi-character narratives. Practitioner reviews of Gen-4 and Sora 2 confirm incremental capability gains but note substantial iteration still required. No evidence of autonomous long-form narrative production; human-AI hybrid workflows with multi-agent orchestration remain industry standard. Technical coherence at scale and production economics continue to block adoption.
2025-Q4: Product maturity accelerates: Sora 2 (Oct 2025) delivers synchronized audio and improved physics; Runway Gen-3 Alpha prioritizes character consistency for cinematic narratives; third-party integrations (SJinn) chain Sora 2 and Veo 3 to push past the 10-second ceiling, enabling minute-long character-consistent storytelling. Kling AI 2.0 reaches 22M users, signaling mass-market adoption of advanced video generation. Practitioner workflow guides document production patterns: shot planning, multi-take generation, QA gates for narrative content. However, deployment evidence confirms character consistency remains limited to stylized scenarios; realistic multi-character narratives with complex physics still require intensive iteration and human curation. Technical limitations (temporal coherence, semantic understanding, hand consistency) persist. Production economics remain prohibitive for autonomous long-form generation. Deployment has shifted from research prototypes toward hybrid human-AI production workflows, particularly in animation, education, and ideation-phase applications. Bleeding-edge capability is present; mainstream production adoption remains constrained by technical barriers and cost-benefit economics.
2026-Jan: Major commercial deployment acceleration: Vidu Q3 launches as the first long-form AI video model with native audio-video generation (16s synchronized output); achieves 40M creator adoption with 500M+ videos generated (70% commercial). CraftStory releases 5-minute image-to-video capability for long-form narratives with human actors and lip-sync alignment. Disney-OpenAI partnership ($1B licensing) and WPP-Google partnership ($400M) signal production-scale adoption by major media and advertising conglomerates. Agentic research frameworks emerge (ScripterAgent, DirectorAgent) for dialogue-to-cinematic generation. However, foundational technical barriers persist: MIT-IBM Watson benchmark analysis documents coherence degradation after ~8 seconds in all major models due to fixed-length context windows. Character consistency remains limited to stylized scenarios. Professional-quality output remains capped at about 60 seconds; deployment assessments confirm 5-10 minute outputs still require multi-model orchestration and intensive human curation. Temporal coherence and semantic narrative understanding remain substantially unsolved. Production workflows continue hybrid human-AI patterns. Technical limitations at scale and production economics continue to constrain autonomous long-form generation deployment.
2026-Feb: Market expansion signals growth (AI video generation market reached $1.8B with 45%+ CAGR). Enterprise adoption metrics document 42% of Fortune 500 marketing departments using tools, 65% of marketing teams (vs. 12% in 2024), 40% of e-commerce brands, 80%+ of social creators under 30. However, critical production deployment barriers emerge across evidence: Sora 2 assessed as "not reliable enough for final ad output" with "long-form and multi-scene control still fragile"; consumer adoption declined sharply (iOS downloads dropped 45% by January 2026); practitioner assessments note generative models "create more work, not less," generating "isolated clips with no narrative continuity." Professional use remains constrained to 10-25 second B-roll replacement. Consumer trust barriers persist: 36% report AI video lowers brand perception, 67% cite robotic gestures, 55% unnatural voices. Deployment evidence confirms market awareness and enterprise adoption acceleration, but production deployment barriers—narrative coherence gaps, consumer trust concerns, poor output reliability—continue to block autonomous long-form narrative generation. Hybrid workflows and ideation-phase use remain dominant applications.
2026-Mar: Major market consolidation and evidence of both scaled deployment success and sustained technical barriers. OpenAI shuts down Sora March 25 due to unsustainable $15M/day operating costs vs. $2.1M lifetime revenue, signaling structural failure of standalone video generation products; consolidation accelerates around Runway, Kling, and Veo. Real-world production evidence shows contrasting patterns: Higgsfield AI competition attracts 8,752 film submissions from 139 countries (broadest adoption signal to date), with China's micro-drama sector showing 90% cost reduction and 41% AI content penetration. Educational deployment case study (peer-reviewed NIH publication) documents successful long-form narrative video deployment for medical training with 72-94% cost reduction and improved learning outcomes. However, practitioner case studies confirm character consistency remains unsolved: a real client project testing four major tools (Runway, Kling, Seedance, Pika) on a 60-second explainer required 15-20 regenerations per scene due to character drift, ultimately abandoning AI generation. A PhD researcher quantifies the feature-length barrier: AI cannot sustain coherence across the 100,000-150,000 frames of a feature film. One successful deployment shows a hybrid Sora+Runway workflow achieving a 93.3% time reduction (6 days to 8 hours) for 60-second explainers with 340% ROI. Technical barriers — character consistency, narrative coherence, frame length — remain fundamentally unsolved despite market breadth signals. Deployment remains constrained to educational, explainer, and B-roll replacement use cases with intensive human editorial oversight.
2026-Apr: Sora's shutdown confirmed the structural failure of standalone video generation as a product category, with Futurum Group documenting enterprise adoption risks and vendor durability concerns as direct consequences; the market consolidated around Runway, Kling, and Veo. Research frontier advanced on multiple fronts: OmniScript addressed multi-scene audio-visual coherence; Stable Video Infinity (ICLR 2026 oral) introduced error-recycling for infinite-length generation; the MuSS benchmark established formal evaluation of multi-shot narrative logic and character consistency barriers; and the Mixture of Contexts paper (ICLR 2026) demonstrated 7x attention-routing speedup enabling minute-long multi-shot generation with maintained subject consistency. Vendor benchmarking confirmed API ecosystem maturity reaching Tier 5 "Cinematic Director" capability — multi-shot, physics-aware, audio-sync, scene-graph APIs now production-ready across Vidu Q3, Kling 3.0, and Veo 3.1. Sora 2 remained available via API until September 2026 with physics-aware rendering and world-state persistence. Against persistent character consistency barriers, a Japanese production firm documented Veo 3.1 deployment in an insurance company explainer campaign (one-third cost compression, half the production time, 20% higher view completion), while a critical analysis found 68% of buyers report vendor homogenization and viewers sense the absence of human storytelling judgment — signaling that trust, not just technical coherence, now constrains adoption for narrative content requiring emotional resonance.
2026-May: Research advances continued on coherence and consistency: A²RD (agentic autoregressive diffusion) benchmarked 30% consistency gains and 20% narrative coherence improvements across 1-10 minute videos, while the EduStory framework addressed pedagogical consistency for multi-shot STEM instructional content. Educational verticals showed real adoption at scale: VideoTutor reached 50M TikTok views, $11M seed funding, and 1,000+ enterprise API inquiries; ZSky AI reported 105,000+ creators across 39 countries generating synchronized audio-video instructional content. New industrial-grade tooling emerged with BACH (Video Rebirth), a Tier 5 Cinematic Director engine ranking #6 on Artificial Analysis benchmarks for character consistency in 30-second multi-shot films, entering enterprise pilots. Fundamental barriers remained documented: technical analysis identified three unresolved structural constraints — VRAM wall (O(n²) attention cost saturating H200 GPUs at 10-second clips), temporal drift, and causal consistency constraints — confirming that coherent autonomous long-form generation above 2-3 minutes remains an unsolved engineering problem, not a tuning gap.