The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that evaluates the risk and blast radius of infrastructure changes and validates disaster recovery readiness. Includes change impact prediction and DR scenario testing; distinct from deployment risk in Software Engineering which focuses on application releases.
The tooling for AI-driven change risk assessment and automated DR validation is technically ready. The organisations using it mostly are not. Platform-native DR automation from AWS, Azure, and third-party vendors now offers automated failover, non-disruptive drills, and ransomware-integrated validation -- capabilities that meet or exceed what enterprises need. AI-augmented change risk assessment has shipped in production platforms like IBM Cloud Pak for AIOps, with topology-based blast-radius detection and geospatial risk visualisation. Yet adoption outside large enterprises with mature governance remains thin, placing this practice firmly at the bleeding edge.
The defining tension is a confidence-reality gap compounded by AI-era complexity. An OpenText survey of 1,773 IT leaders found 95% confident in ransomware recovery readiness, but only 15% of those who experienced an attack recovered successfully. A 2026 Keepit survey deepens the concern: 94% of organisations have added AI scenarios to their DR plans, but only 32% test those plans monthly, and 33% report limited control over autonomous agents. Practitioner reports corroborate the pattern: backup dashboards that signal readiness while masking unvalidated RTO/RPO parameters, corrupt backups discovered only post-emergency, and AI agents causing data loss at scales that invalidate traditional recovery timelines. Over 80% of IT outages stem from planned infrastructure changes rather than unplanned failures, yet 71% of organisations perform no failover testing at all. The bottleneck is not platform capability but organisational readiness: governance integration, audit-function alignment, validation process maturity, and blindness to failure modes hidden beneath passing test results. Until those foundations catch up, the practice will remain bifurcated: proven at well-governed large enterprises, underdeployed everywhere else.
AWS Elastic Disaster Recovery and Azure Site Recovery provide production-grade automated failover and validation workflows, joined by independent platforms like Druva CloudRanger, N2WS, and Cutover. VP Bank's deployment (78 critical workloads protected, with 48% cost savings) demonstrates what committed enterprises can achieve. Cutover's April 2026 launch of AI Create for automated recovery runbook generation addresses a specific organizational bottleneck: teams can now move from unstructured documentation to executable, validated recovery procedures in minutes rather than days, cutting mean time to resolution by 28-50%. AWS and Elastio have integrated ransomware recovery assurance into DRS with 99.999% data-integrity validation accuracy, while compliance mandates (DORA, NYDFS) are pushing automated restore testing into regulated-industry roadmaps. Market growth is substantial: the DRaaS segment is projected to expand from $22.4 billion in 2025 to $28.5 billion by end-2026, driven by ransomware threats and regulatory requirements, and 74% of organizations now plan to adopt DRaaS for ransomware recovery.
The March 2026 AWS Middle East outage (drone strikes on the UAE and Bahrain regions) crystallized a critical lesson: organizations with pre-built, chaos-tested DR infrastructure in secondary regions recovered in 30 minutes; those relying on untested multi-AZ plans lost all data. Real-world evidence now distinguishes tested from untested readiness: Danske Bank scaled DR from 130 services in 10 hours to orchestrating 3,000 automated tasks through AI-driven runbook generation, a 300% resilience efficiency gain. These deployments demonstrate that when organizations invest in validation discipline and AI-augmented orchestration, recovery operations mature significantly.
On the change risk side, IBM Cloud Pak for AIOps 4.10-4.12 introduced topology-based single-point-of-failure detection and geospatial external-risk visualisation, advancing automated blast-radius analysis. But the platforms carry their own risks: IBM Cloud Pak disclosed 69 CVEs including buffer overflows and cryptographic weaknesses, and Azure Site Recovery continues to surface operational failures in Hyper-V replication and VSS consistency, reminders that GA status does not guarantee smooth deployment. A December 2025 AWS incident, in which an autonomous AI agent (Kiro) executed infrastructure changes with elevated privileges and caused a 13-hour outage, underscores the critical need for change-risk assessment gates before autonomous modifications.
On the organisational side, Virima's analysis quantifies what practitioners already know: over 80% of IT outages stem from planned infrastructure changes, yet change approval processes fail to assess blast radius because dependencies remain invisible. Without accurate dependency mapping and tested rollback procedures, changes execute blind to their downstream impacts. The gap is acute: only 62% of organizations conduct regular backup and restoration exercises, and 71% perform no failover testing at all, yet untested DR plans fail at 60% rates in real incidents. Business Continuity Institute research shows that a passing DR test provides confidence only within controlled assumptions; real incidents layer concurrent stressors that tests miss, exposing organizational blindness about failure modes.
Real-world validation gaps remain acute. Practitioner analysis shows 70% of DR plans fail their first genuine test due to environment drift and unvalidated recovery procedures. Quest Software's survey of 650 IT leaders found that 75% do not test disaster recovery within recommended timeframes, and 24% never test at all; yet 79% believe AI can improve testing practices. Veeam's 2026 survey of 900+ security leaders revealed the core confidence-reality gap: 90% feel confident in meeting RTOs, but only 69% say those RTOs align with actual business continuity goals; among ransomware victims, only 28% fully recovered affected data. Configuration mismanagement causes 66-80% of downtime incidents, underscoring that change risk assessment must prevent unvalidated modifications. The fundamental challenge: only 40% of organizations use automation in recovery processes, and 24% lack executable recovery plans. Encryption dependencies, hardware-specific storage architectures, and inadequately documented procedures prevent recovery despite backups existing -- a critical failure mode for any DR validation framework. AI-era data disasters have scaled the risk: a single AI agent can move 16x more data than human users combined, and full instance restores now take 27+ days, extending incidents from hours to months.
Deployment barriers expose a deeper organizational constraint: backup verification testing (validating that recovery actually works, not just that backup transfers succeeded) remains underdeveloped. Organizations discovering corrupt backups, incompatible storage dependencies, and missing recovery procedures only post-emergency demonstrate that validation gaps hide beneath operational dashboards. Practitioners cite data privacy risks, decision opacity, and the need for human oversight as barriers to trusting AI in high-stakes recovery scenarios. Stanford's 2026 AI Index identifies a fundamental reliability constraint: frontier models exhibit capability-reliability divergence, with capability growing 2-3x annually against 1.2-1.5x reliability growth, meaning multi-step autonomous workflows (95% per-step reliability compounds to roughly 60% end-to-end over ten steps) remain inadequate for mission-critical change assessment. Amazon's response to AI-assisted code failures, mandating senior engineer sign-off before production deployment, exemplifies the emerging governance pattern: change risk assessment is shifting from technical analysis to organizational control gates. The ISG forecast that three in four enterprises will adopt continuous data protection by 2027 signals where the market is heading, but governance and trust deficits keep most organisations from arriving there now.
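The per-step versus end-to-end reliability arithmetic above is easy to verify: reliability compounds multiplicatively across steps, so even a highly reliable step erodes quickly over a long workflow. A minimal sketch (the ten-step depth is an assumption consistent with the cited 95%-to-60% figure, not a number from the AI Index itself):

```python
# Compound reliability of a multi-step autonomous workflow.
# Assumes steps are independent and a single failed step fails the run.

def end_to_end_reliability(per_step: float, steps: int) -> float:
    """Probability that all `steps` independent steps succeed."""
    return per_step ** steps

# A 95%-reliable step compounds to roughly 60% over ten steps,
# matching the 95% per-step -> ~60% end-to-end figure cited above.
print(round(end_to_end_reliability(0.95, 10), 2))  # 0.6
```

The same arithmetic explains why marginal per-step gains matter so much: at 99% per step, the same ten-step workflow succeeds about 90% of the time.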
— NetApp-Elastio partnership embeds continuous backup validation (Deep File Inspection) into a ransomware resilience service; Crane WW Logistics confirms that continuous inspection provides recovery confidence, demonstrating production adoption of automated DR data validation.
— Interactive calculator quantifying blast radius (damage magnitude × reversibility × visibility) of AI agent actions; demonstrates adoption of quantified risk methodology for change impact assessment in agent governance.
— Kubernetes-native DR platform with automated failover orchestration, non-disruptive testing, and policy-driven replication; signals maturity of cloud-native DR automation and continuous validation tooling with zero RPO targets.
— Critical incident analysis: April 2026 coordinated Veeam backup platform attacks disabled immutability controls before production ransomware, defeating static DR strategies. Validates need for continuous adversarial validation and monitoring beyond standard operational testing.
— Systematic framework for pre-deployment blast-radius analysis: permission surface audit, risk classification matrix (automatic/async/real-time/hard-disable tiers), enforcement at harness layer—directly applicable to change risk assessment for autonomous infrastructure modifications.
— Consulting firm with deployed customer implementations outlines three-tier DR validation strategy emphasizing continuous testing, automated failover, and adversarial drills; includes customer testimonials demonstrating real-world operationalization of change risk and DR validation practices.
— Prescribes pre-deployment blast-radius inventory artifact (tool-by-tool worst-case effects, reversibility, audit trails, rate limits, composition risks) addressing AI-era change risk assessment; documents incident-response pattern validating framework adoption in mature agent deployments.
— EU's DORA regulation mandates threat-led penetration testing and validates DR testing as compliance requirement; identifies RTO/RPO obsolescence in ransomware era (realistic targets now 24-72 hours, not legacy 4-8 hours) requiring validation against realistic conditions.
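Several of the items above converge on the same quantified-risk shape: blast radius expressed as a product of damage magnitude, reversibility, and visibility, mapped to an enforcement decision. A minimal sketch of that scoring idea (the 1-5 scales, thresholds, and tier names here are illustrative assumptions, not the published calculator's actual model):

```python
# Illustrative blast-radius score for an AI agent action:
# magnitude x reversibility x visibility, each on a 1-5 scale.
# Higher scores mean a wider blast radius and a stricter review tier.

from dataclasses import dataclass

@dataclass
class AgentAction:
    name: str
    magnitude: int      # 1 = cosmetic, 5 = destroys data or infrastructure
    reversibility: int  # 1 = one-click undo, 5 = effectively irreversible
    visibility: int     # 1 = sandbox-only, 5 = customer-facing everywhere

    def blast_radius(self) -> int:
        return self.magnitude * self.reversibility * self.visibility

def review_tier(action: AgentAction) -> str:
    """Map a blast-radius score to an assumed three-tier review policy."""
    score = action.blast_radius()
    if score >= 60:
        return "block: require human change-approval"
    if score >= 20:
        return "gate: async review before execution"
    return "allow: log and proceed"

drop_table = AgentAction("drop prod table", magnitude=5, reversibility=5, visibility=4)
print(review_tier(drop_table))  # block: require human change-approval
```

The multiplicative form captures the intuition in the sources: a highly damaging action that is trivially reversible and invisible to customers scores far lower than the same damage made irreversible and public.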
2020: Early validation of DR testing practices by managed service providers; foundational discussions of risk assessment frameworks and DR validation methodologies; emerging but limited evidence of AI application to infrastructure change risk prediction.
2021: Major cloud platforms (AWS, Azure) released general availability disaster recovery services with automated testing and validation capabilities. AIOps platforms (IBM Watson AIOps) launched machine-learning-based change risk assessment modules. Analyst reports cited 20-70% improvements in incident detection when using AI blast radius analysis; real-world deployment challenges persisted.
2022-H1: AWS DRS and Azure Site Recovery matured with cross-region failback and automated drill validation; customers reported 80-97% gains in recovery time and productivity. December 2021 AWS outage underscored that effective DR validation depends on proper architecture, not just tooling. Change risk assessment via AIOps remained early-stage in enterprise, constrained by business process integration challenges rather than technical capability.
2022-H2: AWS continued platform-native DR automation with automated in-AWS failback and non-disruptive testing capabilities (Nov–Dec). Cloud security practice matured around blast radius assessment and permissions-based risk mitigation. However, no significant evidence of broadened enterprise adoption of AI-driven change risk assessment; focus remained on vendor tooling and cloud platform features rather than comprehensive IT risk orchestration.
2023-H1: Platform-native DR automation entered production mainstream with enterprise-scale deployments (Merck, vertical transportation providers) automating DRS and failover validation. IBM Cloud Pak for AIOps (v4.9+) integrated ServiceNow change risk assessment into production platforms. Organizational constraints remained primary blockers: platforms were mature and validated, but change governance integration and enterprise change management alignment continued limiting broader adoption. DR testing had matured from periodic validation to continuous automation; change risk assessment remained concentrated in large enterprises.
2023-H2: Industry perspective on AI's role in DR and business continuity became more nuanced; practitioner discussions acknowledged AI benefits in planning and validation alongside risks like hallucinations and data-accuracy problems. Security research highlighted inadequate DR plans as a critical vulnerability for ML systems. Academic studies compared AI-driven cloud disaster recovery to traditional methods, showing improvements in downtime and recovery times but noting persistent challenges in model bias and data privacy. Platform maturity remained high, with cloud-native automation continuing to dominate; organizational readiness and AI reliability concerns emerged as limiting factors for broader adoption.
2024-Q1: AWS expanded DRS automation scope with post-launch action framework enabling validation and configuration tasks to execute automatically after recovery (Jan 2024). Independent vendors (Bennudata, N2WS) continued maturing DR automation tooling with AI-assisted discovery, testing, and recovery validation. AWS released prescriptive guidance for automating database-specific DR orchestration using event-driven patterns. Platform ecosystem demonstrated maturity and breadth; focus remained on operational automation of DR validation and failover procedures rather than AI-driven change risk assessment for planned infrastructure changes.
2024-Q2: Platform-native DR validation continued advancing with updated AWS and Azure guidance on testing methodologies (Apr–Jun 2024). NTT demonstrated AI capability to predict infrastructure damage from disasters with 90% accuracy, validating machine learning for proactive risk assessment beyond reactive tools. Security research (NetSPI) revealed critical credential exposure vulnerability in Azure Site Recovery automation, exposing reliability gaps in enterprise DR validation tooling despite platform maturity. Market data indicated sustained demand for cloud DR services driven by cyber threats and compliance requirements. Platform capability remained ahead of organizational adoption; enterprise implementation remained constrained by integration with change governance processes and vulnerability management.
2024-Q3: Platform-native DR automation matured operationally with both AWS and Azure releasing hybrid failover guidance, while independent vendors expanded AI-assisted discovery and testing tooling. Market data showed DR software market projected to reach $50B by 2025 with 15% CAGR through 2033, driven by cyber threats and digital transformation. Practitioner adoption shifted toward continuous validation: enterprises increasingly moved from periodic manual DR drills to automated testing integrated with backup monitoring and replication workflows. However, security research highlighted critical vulnerabilities in automation tooling, and organizational constraints (change governance integration, business process alignment) continued limiting broader enterprise adoption of AI-driven change risk assessment. Platform maturity remained ahead of organizational readiness.
2024-Q4: AWS and Azure released updated failover, failback, and hybrid guidance by December 2024; independent DR platforms (N2WS, Bennudata) continued maturing AI-assisted discovery and testing. Organizational readiness barriers became acute: financial sector research showed sustained enterprise investment in multi-cloud DR (78% single-cloud preference vs. multi-cloud for resilience), while government IT (NASCIO survey) emphasized federated DR models and infrastructure resilience. However, critical research revealed organizational constraints limiting adoption: audit functions lagged AI integration (only 2-4% of audit departments with substantive AI implementation), and operational failures persisted (1 in 5 organizations unable to recover data after cyberattacks, 84% citing tool sprawl as resilience inhibitor). AI-driven change risk assessment remained concentrated in large enterprises; platform capability had reached production maturity, but organizational change management, governance alignment, and validation process integration remained the limiting factors for wider industry adoption.
2025-Q1: AWS expanded automated testing and rollback best practices through updated Well-Architected Framework guidance (Feb 2025); AWS re:Invent 2025 sessions demonstrated emerging AI-powered resilience testing using multi-agent chaos engineering. However, real-world failures emerged: Azure Site Recovery deployment failures exposed external dependency vulnerabilities in automated DR validation (Mar 2025). Market adoption data showed persistent barriers: only 29% of risk professionals using AI for risk assessment, 15% for business continuity planning, with 80% of organizations unprepared for AI governance risks. Platform capability continued advancing, but organizational adoption of AI-driven change risk assessment remained constrained by governance integration and audit function readiness gaps.
2025-Q2: IBM Cloud Pak for AIOps 4.10 GA (June 2025) advanced change risk assessment with automatic detection of single points of failure and geospatial visualization of external risks, signaling ecosystem maturity. AWS and Elastio integrated ransomware recovery assurance with AWS DRS, enabling automated data integrity validation with 99.999% accuracy. Compliance drivers (DORA, NYDFS) accelerated adoption of automated restore testing validated via AWS Backup integration. ISG analyst forecast signaled mainstream adoption: 3 in 4 enterprises expected to adopt continuous data protection by 2027. However, security vulnerabilities in IBM Cloud Pak (69 critical issues) and operational challenges documented in Azure Site Recovery (network limits, VSS failures, replication errors) exposed limitations in platform deployments. Platform ecosystem continued advancing capability, but organizational adoption remained constrained by governance alignment and security risk management in deployed solutions.
2025-Q3: Cloud-native DR automation platform maturity advanced steadily through Q3 2025 with AWS Elastic Disaster Recovery maintaining GA status alongside continued feature updates (non-disruptive drills, RPO/RTO transparency, infrastructure diversity support). Azure Site Recovery and Microsoft documentation emphasized drill-driven DR validation as industry best practice. Third-party ecosystem (Elastio, Storware) expanded AI-driven validation offerings with hourly replica testing and ransomware detection integration. Enterprise adoption metrics showed critical barriers: ESG research indicated 60% of enterprises unable to determine proper RTO/RPO parameters despite platform availability, signaling that organizational readiness and governance integration—not platform capability—remained the limiting factor. Quantified research (NIST, McKinsey) documented AI impact on operational DR: 60% reduction in damage assessment time and 35% improvement in outage forecasting accuracy. However, deployment complexity remained documented: AWS Backup automated restore testing required Lambda/EventBridge orchestration; Azure Site Recovery continued exposing VSS and replication challenges in real-world implementations. By quarter-end, platform-native DR validation had achieved stable maturity with expanding compliance integration (DORA, NYDFS), but organizational constraints—RTO/RPO governance gaps, limited audit function AI readiness, and governance alignment barriers identified in Q1/Q2—persisted as the primary adoption limiting factors.
2025-Q4: Q4 2025 crystallized a critical disconnect between technical platform maturity and organizational disaster recovery readiness. The platform ecosystem continued advancing: Druva CloudRanger automated ADR workflows with RTO/RPO validation; IBM Cloud Pak 4.12 matured topology-based change risk detection. However, an OpenText survey (1,773 IT leaders) exposed a stark confidence-reality gap: 95% expressed confidence in ransomware recovery readiness, yet only 15% of organizations that experienced ransomware achieved successful full recovery. This data point repositioned DR validation as an organizational change management challenge rather than a platform maturity problem. Security vulnerabilities in key change risk platforms (IBM Cloud Pak: 69 CVEs, including buffer overflows and cryptographic weaknesses) signaled that operational dependence on AI-assisted automation introduced new risk surface. Industry perspective shifted: the AI Confidence Report highlighted the need for human validators and robust data foundations in AI-driven decision-making, critical for change risk assessment reliability. By year-end 2025, DR automation was technically mature and widely deployed at large enterprises, but the practice revealed itself as constrained by governance, process maturity, and organizational readiness gaps, not platform capability. Change risk assessment via AI remained concentrated in large organizations with mature IT governance; broader adoption faced barriers in audit function alignment, governance framework integration, and validation process standardization.
2026-Jan: Early 2026 data reaffirmed organizational readiness as the primary limiting factor. AWS published expanded multi-account DR governance guidance and broadened resilience capabilities across the cloud ecosystem. Practitioner analysis highlighted a critical gap in current DR validation practices: organizations relying on backup dashboard metrics faced false confidence, with real-world failures including 40% corrupt-backup discovery post-emergency and unvalidated RTO/RPO parameters. VP Bank's AWS DRS deployment (78 critical workloads with 48% cost savings) demonstrated that enterprises willing to invest in governance and validation were achieving operational maturity. The widening confidence-reality gap in ransomware recovery readiness (95% confident, 15% successful) continued to position organizational change management and validation process standardization, rather than technical platform maturity, as the primary adoption barriers.
2026-Feb: Platform-native DR tooling continued maturing with documented adoption gaps limiting broader enterprise implementation. Real-world deployment challenges remained acute: Azure Site Recovery Hyper-V replication failures exposed operational complexity in automated DR validation despite GA status. Market data reinforced readiness barriers: 100% of surveyed businesses experienced revenue-impacting disasters in 2025 with $2.3 trillion global losses; 43% of companies never tested DR plans, 23% lacked one entirely, with average downtime costs exceeding $9,000/minute. Vendor perspectives on AI-powered DR adoption highlighted critical trust deficits: data privacy risks, decision opacity ("black box" model concerns), and need for human oversight in high-stakes scenarios emerged as limiting factors for AI tool adoption. Early 2026 signaled that while DR platform capability had matured, organizational readiness—governance integration, validation process standardization, and trust in AI-assisted decision-making—remained primary adoption constraints for AI-driven change risk assessment and automated DR validation at scale.
2026-Q2 (Mar-Apr): Validation and governance barriers crystallized as the core limiting factor for broader adoption.
2026-May (1-15): New practitioner and vendor evidence reinforced critical themes while revealing emerging validation strategies. Continuous backup validation evolved from aspirational to deployed: NetApp and Elastio announced embedded continuous Deep File Inspection in ransomware resilience services (May 12), with Crane WW Logistics validating that continuous monitoring provides concrete recovery confidence, shifting DR validation from periodic drills to real-time assurance. Tian Pan (a software engineer) published a two-part framework for AI-era change risk assessment: a pre-deployment blast-radius inventory artifact (May 2) documenting tool-by-tool worst-case effects, reversibility, audit trails, and composition risks; and a systematic risk classification matrix (May 5) with harness-layer enforcement, validating that mature AI agent deployments now operationalize change risk gates. The framework emerged because documented prompt-injection attempts rose 340% YoY, and teams with pre-written risk inventories survived incidents while those improvising during crises failed catastrophically. Cycles published an open-access blast-radius calculator (May 12) quantifying damage magnitude by action reversibility and visibility scope, evidence of mainstream adoption of quantified risk methodology. Kubernetes-native DR platforms matured: Trilio Site Recovery for OpenShift (May 11) announced automated failover, non-disruptive testing, and zero-RPO replication with Red Hat certification. A critical validation gap became explicit: Kinetic Consulting Group's analysis of the April 2026 Veeam backup platform attacks (May 8) documented a new failure mode, attackers disabling immutability controls before deploying production ransomware, defeating static DR strategies that assume backup integrity. The incident validates a core principle: continuous validation must include adversarial conditions and monitoring for suspicious administrative activity, not just operational testing.
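The inventory-plus-matrix pattern described above (a per-tool blast-radius inventory feeding a risk-tier gate enforced at the harness layer, before any tool call executes) can be sketched briefly. The tool names, tiers, and inventory fields below are illustrative assumptions, not the published framework's actual schema:

```python
# Sketch of a harness-layer change-risk gate: every tool an agent can
# invoke must appear in a pre-deployment blast-radius inventory, and the
# gate enforces the assigned risk tier before the tool runs.

from enum import Enum

class Tier(Enum):
    AUTOMATIC = "automatic"         # low risk: execute and log
    ASYNC_REVIEW = "async"          # execute, flag for after-the-fact review
    REALTIME_APPROVAL = "realtime"  # block until a human approves
    HARD_DISABLE = "disabled"       # never callable by the agent

# Illustrative inventory: worst-case effect, reversibility, assigned tier.
INVENTORY = {
    "read_metrics":    {"worst_case": "none",                "reversible": True,  "tier": Tier.AUTOMATIC},
    "restart_service": {"worst_case": "brief outage",        "reversible": True,  "tier": Tier.ASYNC_REVIEW},
    "apply_terraform": {"worst_case": "multi-service outage","reversible": False, "tier": Tier.REALTIME_APPROVAL},
    "delete_backups":  {"worst_case": "unrecoverable loss",  "reversible": False, "tier": Tier.HARD_DISABLE},
}

def gate(tool: str, approved: bool = False) -> str:
    """Enforce the risk matrix at the harness layer, before execution."""
    entry = INVENTORY.get(tool)
    if entry is None:
        return "deny: tool missing from blast-radius inventory"
    tier = entry["tier"]
    if tier is Tier.HARD_DISABLE:
        return "deny: tool hard-disabled"
    if tier is Tier.REALTIME_APPROVAL and not approved:
        return "hold: real-time human approval required"
    return "execute (logged for audit)"
```

Note the default-deny posture: an unlisted tool is refused outright, which is what makes the inventory a pre-deployment artifact rather than documentation, since nothing ships until it is classified.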
An analysis of the EU's DORA regulation (May 1) notes that it mandates threat-led penetration testing and confirms that legacy RTO/RPO targets are obsolete in the ransomware era: realistic recovery now takes 24-72 hours, not the legacy 4-8, and targets must be validated against realistic conditions. Consulting firm WZ-IT published a three-tier DR validation strategy (May 3) emphasizing continuous testing, automated failover, and adversarial drills, with customer deployments demonstrating real-world operationalization. May 2026 data reinforced previous monthly findings: core testing gaps persisted (only 62% of organizations conduct regular exercises, 71% never failover test), the confidence-reality gap remained stark (90% confident in RTOs but only 69% aligned to business goals; only 28% of ransomware victims fully recover data), and organizational readiness, spanning governance integration, validation process maturity, adversarial testing discipline, and AI reliability, remained the bottleneck constraining broader adoption even as platform capability reached full maturity across cloud-native, ransomware-hardened, and autonomous-agent-aware architectures.
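The RTO validation discussed above reduces to a simple check: measure actual drill recovery times per workload and compare them against targets calibrated for ransomware-realistic conditions rather than legacy assumptions. A minimal sketch (the workload names and drill times are invented; the 24-72 hour band follows the figures cited above):

```python
# Compare measured DR-drill recovery times against RTO targets.
# Validating against a ransomware-realistic band (24-72 h, per the
# analysis above) rather than legacy 4-8 h goals changes the verdict.

REALISTIC_RTO_HOURS = (24, 72)  # assumed band for full ransomware recovery

def validate_drill(measured_hours: dict[str, float], rto_hours: float) -> dict[str, bool]:
    """Per-workload pass/fail: did the drill beat the declared RTO?"""
    return {workload: t <= rto_hours for workload, t in measured_hours.items()}

# Hypothetical drill results, in hours.
drill = {"payments": 6.0, "core-banking": 30.0, "reporting": 70.0}

legacy = validate_drill(drill, rto_hours=8)                       # legacy 4-8 h target
realistic = validate_drill(drill, rto_hours=REALISTIC_RTO_HOURS[1])  # realistic upper bound
print(legacy)     # {'payments': True, 'core-banking': False, 'reporting': False}
print(realistic)  # {'payments': True, 'core-banking': True, 'reporting': True}
```

The point of the two runs is the one the DORA analysis makes: a workload can "fail" only because its declared RTO was never realistic, so the target itself, not just the drill result, needs validation.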