Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in one or two rotating domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN │ BLEEDING EDGE ←→ ESTABLISHED
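The weighting behind each dot is not published here; the following is a hypothetical sketch of how per-practice tiers could roll up into a single domain score. Tier values, weights, and function names are illustrative only.

```python
# Hypothetical illustration: a domain's dot position as the weighted mean
# of its practices' maturity tiers. These tier scores are invented, not
# the index's actual (unpublished) weighting scheme.
TIER_SCORES = {
    "Research": 0.0,
    "Bleeding Edge": 0.25,
    "Leading Edge": 0.5,
    "Established": 1.0,
}

def domain_maturity(practices):
    """practices: list of (tier_name, weight) pairs -> score in [0, 1]."""
    total_weight = sum(w for _, w in practices)
    if total_weight == 0:
        return 0.0
    return sum(TIER_SCORES[tier] * w for tier, w in practices) / total_weight

# A domain dominated by leading-edge practices sits left of mid-axis
score = domain_maturity([("Leading Edge", 3.0), ("Established", 1.0)])
```

Any monotone mapping from tiers to the axis would produce the same qualitative picture; the point is only that one dot summarises many practices.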

Image editing — inpainting, outpainting & extension

LEADING EDGE

TRAJECTORY

Stalled

AI-powered image editing for filling in, extending, and modifying images, including background removal and replacement. Covers generative fill and canvas extension; distinct from style transfer, which transforms the entire image rather than editing specific regions.

OVERVIEW

AI-powered inpainting and outpainting have reached a critical inflection point: universal feature availability is decoupled from production maturity. Every major creative platform now ships generative fill and canvas extension; enterprise cloud providers offer them as integrated APIs; Adobe's agentic integration across Creative Cloud and multi-model ecosystem support signal vendor confidence in capability breadth. Yet the practice remains leading-edge because deployment at scale consistently encounters hard boundaries: commercial teams at major retailers (Target, H&M, Amazon) report 60-80% of outputs require manual correction; professional photographers document quality limitations (faces at 60% realism); and systematic access barriers (regional gating, overzealous content filtering) continue constraining production use.

The defining tension is not capability availability but deployment reliability. Feature proliferation has outpaced quality consistency. Forward-leaning teams see productivity gains (40% faster background replacement documented), but only through hybrid human-AI workflows where editorial oversight is not optional but mandatory. Most organisations have integrated these capabilities into pipelines or attempted to; those that maintain production deployments have reintroduced human editors as a critical layer, indicating the maturity gap persists despite continued vendor investment and ecosystem expansion.
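The mandatory-oversight workflow described above can be sketched as a simple quality gate: every AI-edited output is scored, and anything below threshold is routed to a human editor. The scoring function and threshold here are placeholders, not any vendor's API.

```python
# Minimal sketch of a hybrid human-AI pipeline: outputs that fail a
# quality gate go to a human review queue rather than shipping directly.
def route_outputs(outputs, quality_score, threshold=0.8):
    """Split AI outputs into auto-approved and human-review queues."""
    approved, needs_review = [], []
    for item in outputs:
        (approved if quality_score(item) >= threshold else needs_review).append(item)
    return approved, needs_review

# With 60-80% of outputs reportedly needing correction at scale,
# most items land in the review queue under any realistic threshold.
scores = {"img_a": 0.95, "img_b": 0.55, "img_c": 0.40, "img_d": 0.70, "img_e": 0.30}
approved, review = route_outputs(list(scores), scores.get)
```

The design point is that the human layer is structural, not a fallback: the pipeline always produces a review queue, and its size is the operational cost the reports above keep measuring.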

CURRENT LANDSCAPE

Adobe's ecosystem expansion and agentic integration signal capability maturity even as deployment barriers persist. April 2026 marks an inflection toward orchestrated workflows: Firefly AI Assistant enters public beta with conversational task coordination across Creative Cloud apps (Photoshop, Premiere, Lightroom, Illustrator), automating multi-step inpainting/outpainting sequences (image adaptation, cropping, expansion). Photoshop now opens to third-party models (Google Gemini 2.5 Flash, Flux.1 Kontext), letting users compare results across multiple AI engines before choosing a favorite, a shift from walled garden to multi-model ecosystem. Custom Firefly models enable brand-consistent inpainting/outpainting workflows trained on user assets, addressing repeatability and visual-consistency use cases. March-April quarterly Photoshop updates (v27.2–27.5) show active refinement of Firefly models targeting quality consistency. Financial signals remain strong: 30%+ AI-driven ARR, with Generative Credits consumption tripling. However, deployment barriers remain material: regional access blocks, infrastructure complexity (three-layer gating), and overzealous content filtering continue to constrain professional workflows. Quality thresholds are stable but not exceptional: faces remain at 60% realism, and seaming and texture matching in outpainted regions are architecturally unsolved.

Enterprise cloud platforms demonstrate sustained maturity. Azure OpenAI maintains production inpainting support in gpt-image-1 and gpt-image-1.5 (March 2026) across three capability tiers. AWS Bedrock continues Inpaint/Outpaint delivery. Technical research addressing production barriers accelerated: CVPR 2026 papers advance speed-quality tradeoffs (InverFill); April 2026 research (CAMEO, RefineAnything) proposes quality-aware multi-stage editing and region-specific refinement for structural control and local detail preservation. However, critical deployment friction persists at commercial scale: Rewarx (April 2026) documents that 60-80% of Stable Diffusion inpainting outputs require manual correction at Target, H&M, and Amazon due to edge smearing and artifact hallucination, though professionals who switch to commercial tools report substantial ROI. Adoption remains economically displacing: an Association of Photographers survey (January 2026) found 58% of members lost work to generative AI, with average losses up 142% to £34,900 per person. A critical forensic gap widened: CVPRW 2026 papers (DiffusionPrint) document that state-of-the-art forensic detection requires specialized architectures targeting diffusion-inpainted regions, signaling that detection sophistication lags generation-quality advances and exposing deepfake attribution risk. The practice occupies an asymmetric maturity: vendor ecosystem expansion, enterprise cloud delivery, and research sophistication are advancing, yet commercial deployment at scale encounters hard reliability and quality barriers, economic displacement constrains professional adoption, and forensic detection gaps widen.
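The canvas-extension bookkeeping common to outpainting APIs like those above can be sketched as follows: the original image is placed on a larger canvas, and a mask marks which pixels the model must synthesize. Function and field names here are illustrative, not any provider's documented schema.

```python
# Sketch of outpainting geometry: extend the canvas, keep the original
# pixels, and mask everything outside the kept region for generation.
def extend_canvas(width, height, pad_left=0, pad_right=0, pad_top=0, pad_bottom=0):
    """Return the new canvas size and the box the original image occupies."""
    new_w = width + pad_left + pad_right
    new_h = height + pad_top + pad_bottom
    # (x0, y0, x1, y1): pixels outside this box are masked for synthesis
    keep_box = (pad_left, pad_top, pad_left + width, pad_top + height)
    return (new_w, new_h), keep_box

# Extend a 1024x1024 frame 256px to the right for a wider crop
canvas, box = extend_canvas(1024, 1024, pad_right=256)
```

The same bookkeeping underlies both generative fill (mask inside the canvas) and canvas extension (mask outside the original frame); only where the mask sits differs.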

TIER HISTORY

Research: Aug 2022 → Aug 2022
Bleeding Edge: Aug 2022 → Jan 2024
Leading Edge: Jan 2024 → present

EVIDENCE (102)

flux-general/inpainting: Product Launches

— FAL.ai Flux inpainting API with LoRA, ControlNet, and IP-Adapter support; pricing structure ($0.075/megapixel) demonstrates production-ready commercial ecosystem beyond major platforms.

— Firefly AI Assistant (public beta April 27, 2026) orchestrates 60+ professional tools including Generative Fill and Remove Background across Creative Cloud apps; marks transition to agentic inpainting workflows.

— Production API endpoint for ControlNet inpainting with 8 controllable guidance modes (canny, depth, HED, MLSD, normal, openpose, scribble, segmentation); confirms ecosystem-level product-GA maturity.

— NAB 2026 coverage: Firefly Boards demonstrated real production gap-filling—extending LED volume background into clean aerial drone shot, generating usable 4K footage; documents production adoption in film/broadcast.

— Professional photography publication review: Photoshop 2026 GA features include auto-detection of distractions, Firefly 5 detail preservation, and improved reflection removal; documents photographer-focused production workflows.

— Government-funded DAIS report: 86% of young digital creators adopted generative AI for image editing/alterations/upscaling; 49% use daily, 34% use for specific tasks; documents task-level deployment.

— Adobe Photoshop 27.6 release notes document redesigned Generative Fill with multi-model support (Firefly, Gemini), UI refinement, and stability improvements; multiple crash fixes indicate production deployment maturity.

— AWS Bedrock launches Stability AI Image Services (13 specialized tools) including Inpaint, Outpaint, Erase Object with Stable Diffusion 3.5, achieving enterprise cloud-native deployment at scale.
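As an illustration of the guidance-mode surface the ControlNet endpoint listed above exposes, here is a hypothetical request builder. The field names are invented for the sketch, not the documented schema of any specific provider.

```python
# Hypothetical request builder for a ControlNet inpainting endpoint with
# the eight guidance modes cited in the evidence above.
GUIDANCE_MODES = {
    "canny", "depth", "hed", "mlsd",
    "normal", "openpose", "scribble", "segmentation",
}

def build_inpaint_request(image_url, mask_url, prompt, mode):
    """Validate the guidance mode and assemble an (illustrative) payload."""
    if mode not in GUIDANCE_MODES:
        raise ValueError(f"unknown guidance mode: {mode!r}")
    return {
        "init_image": image_url,
        "mask_image": mask_url,
        "prompt": prompt,
        "controlnet_model": mode,
    }

req = build_inpaint_request("https://example.com/in.png",
                            "https://example.com/mask.png",
                            "replace the background with a beach", "depth")
```

Validating the mode client-side mirrors how such APIs fail fast on unsupported guidance types rather than falling back silently.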

HISTORY

  • 2022-H2: DALL-E 2 and Stable Diffusion introduced dedicated inpainting and outpainting features, enabling AI-driven image editing and canvas extension. Early tutorials and community experimentation began documenting these capabilities, marking the foundation of the practice.

  • 2023-H1: Major vendors entered the market and production barriers became visible. Adobe shipped Generative Fill in Photoshop beta (May 2023), while AWS enabled enterprise Stable Diffusion inpainting deployment via SageMaker JumpStart (June 2023). Academic research on inpainting security defenses indicated technology maturity, but widespread bug reports from both open-source and commercial platforms revealed reliability gaps: outpainting producing artifacts, inpainting failing on non-contextual edits, and content filtering limiting workflows. Feature availability accelerated while production-grade stability lagged.

  • 2023-H2: Feature proliferation and mainstream recognition accelerated despite ongoing quality issues. Adobe's Generative Expand reached GA and earned TIME's Best Invention recognition, with adoption scaling to 3+ billion images generated. OpenAI launched DALL-E 3 with improved inpainting, while research publications (ICML, ACM MM) advanced technical capabilities on coherence, multimodality, and resolution. However, production reliability remained constrained: practitioners documented systematic quality issues including resolution limits (1024x1024), unwanted object insertion, color inconsistency, and restrictive content filters. The gap between widespread feature availability and production-grade quality defined this period.

  • 2024-Q1: Platform integration accelerated while reliability concerns surfaced. OpenAI expanded inpainting/outpainting to ChatGPT (March 2024), integrating the practice into a mainstream conversational AI platform. E-commerce applications advanced with Amazon publishing real-time virtual try-on research targeting billions-of-products scale. Technical research addressed performance barriers: 4K-resolution algorithms achieving >60 fps, and mask-aware diffusion improvements (Soft Inpainting) in open-source tools. However, quality regressions emerged: Photoshop users reported systematic failures in subject removal and speculation about training data rollbacks due to consent concerns. The quarter demonstrated continued feature proliferation offset by instability in production deployments, suggesting the practice remains in the painful phase where availability outpaces reliability.

  • 2024-Q2: Vendor iteration and ecosystem maturation accelerated alongside persistent reliability challenges. Adobe released Firefly Image 3 (April 2024) with improved quality and reference-image guidance. Hugging Face standardized outpainting documentation in diffusers library (May 2024). Academic research advanced bias mitigation and transformer-based techniques (CVPRW, arXiv). However, production deployment barriers persisted and expanded: Photoshop users reported continued "Resource not available" connectivity failures, while SDXL inpainting exhibited new noise artifacts at maximum strength. By quarter-end, the practice showed feature abundance with unresolved control and reliability gaps, constraining confident production deployment.

  • 2024-Q3: Research and security focus intensified as maturity concerns surfaced in academic literature. CVPR 2024 showcased instruction-guided editing workflows combining inpainting with language models, demonstrating emerging sophistication in application design. Peer-reviewed research at IJCAI and arXiv advanced understanding of inpainting vulnerabilities: adversarial attacks against Stable Diffusion inpainters, forensic detection methods for identifying tampered regions, and weaknesses in proposed defenses. Meanwhile, capability gaps persisted in enterprise offerings—Azure OpenAI confirmed non-support for DALL-E image editing APIs through mid-2024. The quarter demonstrates a pattern of widening technical sophistication coupled with persistent reliability and deployment challenges: academics advancing attack surfaces and detection, while practitioners encountered continued quality regressions and missing enterprise integrations. The practice remained in the "uneven maturity" phase—research depth accelerating while production confidence stagnated.

  • 2024-Q4: Vendor feature announcements coincided with critical reliability regression. Adobe announced updated Generative Fill and Expand features at MAX 2024 (October), prompting expectations of improved quality; instead, professional users reported dramatic degradation, with success rates dropping from 90-95% to 5-10% post-update, accompanied by crashes and feature failures. Capability gaps persisted: Azure OpenAI still lacked inpainting support (confirmed October). Applied research expanded into domain-specific use cases (cultural heritage restoration), and open-source community development standardized complex multi-step workflows (SDXL outpainting with upscaling via ComfyUI). The quarter exemplified the practice's core maturity gap: sophisticated research and developer tooling advanced while the primary commercial platform regressed in production stability, reinforcing that feature abundance and deployment reliability remain decoupled.

  • 2025-Q1: Academic research matured significantly while production reliability remained constrained. IEEE TPAMI published a comprehensive diffusion-model survey consolidating inpainting/outpainting methodologies; CVPR and ICCV 2025 papers advanced semantic correctness (SAGI dataset of 95k+ images), object addition via inverted inpainting, and forensic detection. Applied deployment expanded into resource-constrained environments with quantization optimization for edge inference on agricultural data augmentation. However, Adobe Generative Fill continued to exhibit quality issues: users reported overzealous content filtering, unwanted object insertion, and degradation compared to beta releases throughout Q1. Enterprise cloud integration remained incomplete: Azure OpenAI still lacked inpainting support. The quarter widened the gap between academic sophistication and production stability—theoretical foundations solidified while commercial reliability and content filtering remained barriers to confident adoption.

  • 2025-Q2: Capability expansion and research advancement coincided with production reliability deterioration. Morphic Studio launched production Video Outpainting extending the practice beyond static images (May 2025). Research progress accelerated with peer-reviewed papers on Vision Transformer efficiencies, color consistency fixes addressing blending artifacts, and human-feedback fine-tuning for medical imaging accuracy. However, production-stage failures accumulated: DALL-E 2 outpainting experienced complete failure (April 2025) affecting commercial users with accumulated credits; Adobe Generative Fill underwent May 2025 regression breaking reference image workflows and disabling text prompting, compounding prior quality issues. The quarter demonstrates persistent bifurcation: academic and domain-specific research advancing sophisticated methodologies and expanding to video, while primary commercial platforms (Photoshop, DALL-E) exhibited deepening reliability regressions. The practice remains constrained by production maturity despite widespread feature availability and theoretical advancement.

  • 2025-Q3: Enterprise cloud integration accelerated alongside continued desktop-platform reliability issues and academic maturity consolidation. AWS Bedrock announced Stability AI Image Services GA with production Inpaint and Outpaint tools (September 2025), while Azure OpenAI expanded support with GPT-image-1 offering Precision Inpainting capabilities. A major peer-reviewed survey from Shenzhen Institute, Adobe, and Apple synthesized 100+ research papers and proposed new evaluation frameworks (September 2025). However, Adobe Photoshop Generative Fill suffered critical July 2025 regression blocking portrait edits with overzealous content filtering, while concurrent technical critiques documented persistent weaknesses in OpenAI's inpainting including texture fidelity issues and contextual misunderstanding. The quarter illustrates asymmetric development: enterprise cloud platforms and research sophistication advancing while primary consumer tools remain reliability-constrained.

  • 2025-Q4: Enterprise cloud consolidation continued alongside persistent desktop-platform service instability and cybersecurity research advancement. Adobe's Q4 FY25 financial reporting confirmed ecosystem momentum with 30%+ AI-driven ARR and Generative Credits consumption tripling, signaling sustained adoption at scale. However, professional users documented widespread Generative Fill access failures and regional availability blocks (November 2025), indicating infrastructure resilience gaps despite public GA status. Research broadened into synthetic image authentication—arXiv studies evaluated detector robustness against inpainting manipulations, signaling maturity concerns about forensic attribution and deepfake implications. Open-source communities sustained active problem-solving through Stable Diffusion inpainting guides and DALL-E outpainting tutorials (October-December 2025), demonstrating continued developer confidence despite primary platform instability. The quarter exemplifies the practice's defining maturity asymmetry: financial adoption metrics and enterprise cloud deployment advancing strongly, theoretical research sophistication expanding into forensics and security, while primary consumer/professional tools (Photoshop, legacy DALL-E APIs) remain fragile, access-restricted, and requiring persistent workarounds from practitioners—indicating that feature ubiquity and reliability remain fundamentally decoupled.

  • 2026-Jan: Feature iteration and quality recovery attempts coincided with persistent reliability challenges and security research advances. Adobe announced upgraded Firefly-powered Generative Fill and Generative Expand (January 27) with 2K resolution, sharper detail, and fewer artifacts, representing vendor response to prior quality concerns. However, deployment barriers remained substantial: technical analysis documented 3-layer gating complexity (identity, infrastructure, intent) explaining grayed-out features, while regional availability blocks persisted through January 2026. Stable Diffusion ecosystem demonstrated continued dominance with 80% market share and 12.59 billion images produced as of 2024. Critical security research (arXiv, January 30) demonstrated state-of-the-art AI-generated image detectors fail dramatically on inpainted content (91%→55% accuracy), exposing detection gaps and deepfake risks. Adoption metrics confirm continued high usage—2/3 Photoshop beta users employ generative AI daily and Generative Fill ranks among five most-used features—yet independent critical assessment documented persistent production limitations including context blindness, texture quality gaps, and hybrid human-AI workflows becoming standard. The month illustrates the practice's core tension: continued feature investment and usage scaling offset by reliability constraints, security vulnerabilities in detection, and production quality gaps forcing reintroduction of human editorial oversight.

  • 2026-Feb: Production deployment matured unevenly across platforms alongside critical research and practitioner assessment of limitations. Adobe's 2026 Generative Fill deployment achieved meaningful adoption with field reporting of 40% productivity gains in background replacement workflows on real client work; however, quality limitations persisted with faces landing at 60% realism threshold, indicating incremental rather than transformative improvement. Research advanced technical solutions: CVPR 2024 workshop papers proposed object-aware background generation reducing object extension artifacts by 3.6x versus prior models, addressing known outpainting limitations through improved model architecture. Critical practitioner assessments highlighted production barriers: independent hands-on testing of 30+ tools documented persistent accuracy and face quality limitations requiring human editorial judgment; systems engineering analysis documented architectural trade-offs between blending quality and performance, emphasizing seaming and texture matching remain unsolved. Significant negative signal emerged from professional communities: January 2026 survey of Association of Photographers members reported 58% lost commissioned work to generative AI with average financial losses up 142% to £34,900 per person, indicating both adoption acceleration and economic displacement constraining professional adoption. The month demonstrates the practice at an inflection point: vendor quality iteration and research advancement continuing while professional adoption remains constrained by reliability gaps, quality limitations, and economic disruption in creator communities—the defining pattern of the leading-edge tier where capability availability exceeds production maturity and professional confidence.

  • 2026-Mar/Apr: Enterprise cloud consolidation and agentic vendor offerings accelerated alongside forensic research exposing deployment reliability gaps. Adobe released major March 2026 updates with Generative Fill and Expand moving to 2K resolution and support for 25+ third-party models; in April, the Firefly AI Assistant (Project Moonlight) entered public beta with agentic orchestration across Creative Cloud apps, automating multi-step inpainting and outpainting sequences and signalling evolution toward AI-coordinated creative pipelines. Late April brought Photoshop v27.6 with redesigned Generative Fill UI, multi-model picker (Firefly, Gemini, third-party), and stability refinements (crash fixes, panel responsiveness) demonstrating production maturity through iterative polish. AWS Bedrock completed Stability AI Image Services GA on April 28, launching 13 specialized inpainting/outpainting tools (Inpaint, Outpaint, Erase Object, Search and Replace) with Stable Diffusion 3.5, achieving enterprise cloud-native inpainting at scale across AWS infrastructure. Quarterly Photoshop updates (v27.2–27.5) showed active model refinement; Azure OpenAI confirmed production inpainting support across model generations (GPT-Image-2, GPT-Image-1.5, GPT-Image-1, GPT-Image-1-Mini), providing enterprise teams stable documented deployment paths. Technical research matured production optimization: CVPR 2026 papers (InverFill) addressed the speed-quality tradeoff through semantic-aware noise initialization; CVPRW2026 (DiffusionPrint) proposed specialized forensic fingerprinting for diffusion-inpainted regions, confirming inpainting has matured enough to require tailored detection architectures; independent study (April 28) documented gpt-image-2 inpainting at production scale for document editing with sub-second latency. 
However, critical commercial deployment barriers persisted and new evidence documented them: professional retouchers identified a 1024×1024 native resolution cap with forced upscaling destroying fabric texture synthesis; technical analysis documented edge artifacts, color bleeding, and texture mismatches constraining professional production workflows. A production case study (April 24) by a Senior Art Director across 10 commercial projects confirmed Photoshop+Firefly production maturity for pixel-perfect product integration with commercial legal indemnification, against a backdrop of 3B+ Firefly images generated; however, it also documented persistent productivity gaps and quality constraints limiting broad professional adoption. Market adoption metrics confirmed mainstream integration: web traffic rankings (February 2026) showed millions of monthly users across multiple outpainting platforms; the AI image-editor market is growing at 15.7% CAGR to $8.9B by 2034, with 99% of creatives using generative AI, yet professional economic displacement persisted (58% of photographers lost work, average losses up 142% to £34,900/person). The period exemplifies leading-edge maturity at an inflection: vendor platform consolidation (Adobe agentic, AWS/Azure enterprise deployment), professional adoption confirmed in specific segments (product photography, commercial campaign workflows), and enterprise cloud delivery advancing, yet quality limitations (resolution, texture synthesis, edge handling), documented professional barriers (60-80% of outputs requiring manual correction at major retailers), and economic displacement in professional photographer communities continue to constrain tier advancement despite universal feature availability.

  • 2026-May: Ecosystem maturation and agentic adoption reinforced leading-edge status without tier advancement signals. Adobe Firefly AI Assistant public beta (April 27) expanded agentic inpainting orchestration across Creative Cloud to mainstream access; Photoshop Remove Tool GA added auto-detection of scene distractions alongside improved reflection removal, signaling refinement of inpainting-adjacent professional workflows. Ecosystem vendor diversity accelerated: FAL.ai launched Flux General Inpainting API with LoRA and ControlNet support; Stable Diffusion API documented ControlNet inpainting endpoint with 8 controllable guidance modes (canny, depth, HED, MLSD, normal, openpose, scribble, segmentation). Ecosystem research confirmed structural advances: Samsung LaMa deep-dive documented inpainting method progression from patch-based to GAN to transformer-based and diffusion approaches, indicating category-level technical maturity. Production case evidence emerged from film/broadcast: NAB 2026 coverage documented Firefly Boards real production gap-filling (aerial drone shot extension from LED volume), demonstrating edge-case deployment. Adoption metrics confirmed momentum: DAIS government-funded report showed 86% of young digital creators adopted image AI for editing/alterations (49% daily, 34% task-specific), confirming task-level embedding in creative workflows. Market expansion confirmed: PixelDojo analysis showed AI art studio market projected to reach $9.85B by 2030 (57.4% CAGR), with inpainting positioned as standard platform feature. However, deployment maturity remained constrained: no evidence of tier-advancing quality breakthroughs, resolution limitations persist (1024×1024 native), and professional economic displacement continued. 
The month demonstrates ecosystem breadth and adoption depth without breakthrough capability advancement—ecosystem vendors expanding API access, professional workflows refining through iteration (Remove Tool, Reflection Removal), agentic orchestration reaching mainstream adoption, yet fundamental quality and reliability constraints remaining static.
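The seaming and edge-artifact issues recurring throughout this history come down to how generated pixels are blended back into the original. A hard 0/1 mask produces a visible seam; a feathered mask ramps the transition. A minimal 1-D sketch, with plain lists standing in for pixel rows (the feathering scheme is illustrative, not any product's implementation):

```python
# Feathered-mask blending: soften a binary inpainting mask so generated
# pixels fade into original ones instead of meeting at a hard seam.
def feather_mask(hard_mask, radius=2):
    """Linearly ramp a binary mask over `radius` pixels outside the masked region."""
    soft = list(map(float, hard_mask))
    for i, v in enumerate(hard_mask):
        if v == 1:
            continue
        # distance to the nearest masked (to-be-generated) pixel
        dists = [abs(i - j) for j, m in enumerate(hard_mask) if m == 1]
        if dists and min(dists) <= radius:
            soft[i] = max(soft[i], 1.0 - min(dists) / (radius + 1))
    return soft

def blend(original, generated, soft_mask):
    """Per-pixel convex blend: 0 keeps the original, 1 takes the generated."""
    return [o * (1 - a) + g * a for o, g, a in zip(original, generated, soft_mask)]

row_orig = [10, 10, 10, 10, 10, 10]   # original pixel row
row_gen = [90, 90, 90, 90, 90, 90]    # model output for the same row
hard = [0, 0, 0, 1, 1, 1]             # right half is the inpainted region
soft = feather_mask(hard, radius=2)
blended = blend(row_orig, row_gen, soft)
```

With the hard mask, the row would jump 10→90 at the boundary; the feathered version steps through intermediate values. Real tools face the harder version of this problem in 2-D with texture, which is why the reports above call seaming architecturally unsolved rather than a blending bug.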

TOOLS