Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

🎬 Creative & Generative Media

AI for generating and editing images, video, audio, 3D assets, and cross-media content. Mostly leading-edge with rapid advancement — image generation, music composition, and voice synthesis are approaching good practice. Video generation and 3D asset creation are progressing fast but quality and controllability gaps persist. The most active domain by momentum: over half the practices are advancing.

21 practices: 4 good practice, 15 leading edge, 2 bleeding edge

Creative & Generative Media -- Biweekly Brief

The headline: AI creative tools are better and cheaper than ever, but consumer trust in AI-generated content has dropped from 60% to 26%, and the practices pulling ahead are the ones deploying AI where customers never notice it.

The Picture

Most large organizations now use AI for some creative production. Adobe Firefly is embedded in 72% of Fortune 500 design teams. Canva has 265 million monthly users. AI-generated product images are standard at ASOS, Zara, and Amazon. Brand asset generation -- creating marketing visuals, resizing ads, producing campaign variations -- works reliably at scale, with documented 50-60% reductions in production time and cost. These are settled practices.

The gap is between these mainstream uses and everything else. AI video, music, 3D assets, and voice cloning all work technically but face legal, trust, or organizational barriers that prevent broad rollout. Half of U.S. consumers now say they prefer brands that avoid AI-generated content. That number is rising, not falling. The organizations gaining advantage are the ones using AI behind the scenes -- in production workflows, background music, product mockups -- rather than putting AI-generated creative in front of customers.

This Fortnight

  • OpenAI's new image model changed how AI generates pictures, not just how well. GPT Image 2 uses a fundamentally different approach (token-based reasoning instead of diffusion), scoring an 80% win rate in head-to-head comparisons and integrating immediately across Figma, Canva, Adobe, and fal. For teams evaluating image generation tools, this architectural shift may matter more than incremental quality gains -- it suggests the next round of improvements will come from reasoning about images, not just pattern-matching.

  • Adobe launched a conversational AI assistant that orchestrates creative workflows. The Firefly AI Assistant (public beta) connects Photoshop, Premiere, Lightroom, Illustrator, and 30-plus partner models into a single interface that takes natural-language briefs and produces multi-step creative output. NBCUniversal deployed it to 2,000 creatives, compressing campaign production from three weeks to seven minutes. Teams invested in Adobe's ecosystem should evaluate this as a workflow transformation, not just another feature.

  • Music distributors started blocking AI-generated tracks from unlicensed platforms. Believe (parent of TuneCore) blocked Suno-generated music while licensing ElevenLabs and Udio, creating the first hard split between licensed and unlicensed AI music tools. This policy is expected to spread to DistroKid and CD Baby within 60-90 days, which will determine which AI music tools remain commercially viable for any team using them for content production.

  • Voice cloning liability expanded in two directions simultaneously. ElevenLabs reached $500M in annual revenue, but seven Pulitzer and Emmy-winning journalists sued the company for unauthorized voice training, and India's Delhi High Court ruled voice is a constitutionally protected right. Any organization using voice cloning or AI-generated speech should confirm its vendor's consent verification framework before these precedents spread.

Coming Up

  • EU AI Act transparency requirements take effect August 2026. Any organization generating or distributing AI-created content in the EU must disclose its synthetic origin. Teams deploying AI video, avatars, music, or voice synthesis should audit their labeling and provenance workflows now -- the compliance infrastructure takes months, not weeks, to build.

  • GEMA's lawsuit against Suno (decision expected June 12, 2026) could make music licensing globally mandatory. If the Munich court finds Suno's outputs constitute copyright infringement through memorization of training data, it will reshape the economics of every AI music tool. Organizations using AI-generated music should confirm their vendor's licensing status and have fallback options ready.

  • Voice actor consent registries are launching. RSL Media 1.0 (backed by Cate Blanchett and Emma Thompson, launching June 2026) will provide a machine-readable public registry for AI use permissions on names, voices, and likenesses. This and Washington state's voice cloning law (effective June 10) signal that consent verification is becoming table stakes for any voice AI deployment.

What's Hard About This

  • Consumer trust is a harder ceiling than technology. Half of U.S. consumers prefer brands that avoid AI content. Listener interest in AI music declined 20 percentage points in six months. Eighty-three percent of viewers detect AI-edited video. Improving AI quality does not appear to solve this -- the resistance is about perception and authenticity, not fidelity. Organizations cannot simply wait for the technology to get good enough; they need a strategy for where AI visibility helps versus hurts.

  • Copyright liability is not settling -- it is escalating. A U.S. court denied Stability AI's motion to dismiss Getty's claims. The UK reversed its AI training exception. Pure AI output is uncopyrightable in three major jurisdictions. Adobe's licensed-data approach ($250M-plus ARR) proves the copyright problem is solvable but expensive. Organizations relying on unlicensed AI tools face growing legal exposure with no clear resolution timeline.

  • The business model for generative AI creative tools is unproven at the frontier. Sora shut down after burning $15M per day against $2.1M in lifetime revenue. Apple Music receives 33% AI-generated uploads but AI tracks account for less than 0.5% of listening. Eighty-five percent of AI music streams are flagged as fraudulent. The practices that work commercially are the ones embedded into existing production workflows (brand assets, product mockups, podcast editing), not the standalone generative platforms that attracted the most investment.


Go deeper: the full Creative & Generative Media briefing -- the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.