The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI using reinforcement learning or adversarial techniques to generate edge-case and fault-finding test scenarios. Includes fuzz testing augmented with LLMs and RL-based test case evolution; distinct from standard test generation which aims for coverage rather than fault discovery.
Adversarial test generation applies reinforcement learning, mutation, and adversarial techniques to systematically discover faults and edge cases that coverage-oriented testing misses. The practice spans LLM red teaming, AI-augmented fuzzing, and RL-driven test case evolution. It is advancing at accelerating pace, with production deployments from enterprises (Fortune 500 in finance, healthcare) and investment validation (OpenAI's $86M acquisition of Promptfoo in March 2026). Research results remain impressive: IEEE S&P 2026 accepts PILOT, which discovered 51 CLI vulnerabilities with 33 already patched; GoldenFuzz found 5 new hardware vulnerabilities; CVPR 2026 papers advance multimodal model fuzzing with 36% improvement over prior methods. However, a critical gap persists between vendor maturity and practitioner capability. Most organisations lack the operational expertise, threat models, and deployment patterns to embed continuous adversarial testing into CI/CD. Regulatory pressure is rising: EU AI Act Article 15 now mandates resilience testing for high-risk systems. Yet threat actors are moving faster—AI-powered fuzzing delivers 400% coverage improvements—making the adoption urgency both organisational and strategic.
Vendor maturity accelerated in Q1 2026. F5's AI Red Team (January 2026) now deploys to Fortune 500 enterprises in regulated sectors with 10,000+ attack techniques. More significantly, OpenAI's $86M acquisition of Promptfoo in March 2026 signals mainstream platform consolidation — Promptfoo reached 350,000 developers and 25%+ Fortune 500 adoption within two years, with automated red-teaming of 50+ vulnerability types built into CI/CD workflows. Specialist vendors (PyRIT, Robust Intelligence, HiddenLayer) proliferate; analyst reports project the market expanding from $680M in 2025 to $8.92B by 2034 at 34% CAGR, with prompt injection attacks surging 340% in enterprise deployments.
Real-world deployments show operational maturity across domains. Multi-agent adversarial arenas now run continuously in production (15 agents on one system with 91.8% detection rates across 3,200+ attempts and cryptographic proof-of-integrity). High-stakes red-teaming of LLM applications—legal brief validators, therapeutic agents—demonstrates the practice operationalised with explicit safety thresholds. Infrastructure wins accumulate: IEEE S&P 2026 accepted PILOT (51 CLI vulnerabilities across 43 real-world programs, 33 patched); Anthropic's Frontier Red Team discovered 500+ zero-days including 22 Firefox vulnerabilities in 2-week collaboration with Mozilla; Google's OSS-Fuzz discovered 3,818 vulnerabilities across major open-source projects, driving active remediation across the ecosystem. Academic researchers using fuzzing independently discover critical vulnerabilities (e.g., Chrome WebNN GPU vulnerability in March 2026) that escaped years of normal development and security processes.
Critical capability gaps persist despite market expansion. Empirical study of 13 open-source AI pentesting frameworks found 8 frameworks hallucinate results—stopping at decodable strings and never reaching actual vulnerability chains, producing false security findings without ground-truth validation. Meanwhile, only 16% of organisations have ever red-teamed AI models, yet 74% experienced AI security breaches. EU AI Act Article 15 mandates resilience testing for high-risk systems—a regulatory forcing function—but deployment guidance remains sparse and tool reliability is empirically questionable. The asymmetry persists: threat actors deploy AI-powered fuzzing with 400% coverage improvements and 97% jailbreak success on frontier models, while most enterprises lack threat models and lack visibility into whether their tools produce reliable results.
— Commercial platform launch: Votal AI's CART with RLHF-trained adversarial attacker generating 100K+ attack prompts across 35+ categories and 185+ named attack techniques.
— Production deployment: Mozilla deployed agent-based adversarial fuzzing harness discovering 271 Firefox vulnerabilities (180 sec-high, 80 sec-moderate, 11 sec-low) with minimal false positives.
— Technical practitioner analysis: adversarial testing requires orchestrated workflows with state management and evidence validation, addressing critical operational deployment challenges.
— Real-world deployment of AI agents for continuous adversarial pentesting across 28 companies discovering 2000 vulnerabilities (44.6% critical/high) via automated behavioral exploration.
— Agentic automation of adversarial test composition. Unified framework with 45+ attacks, 450+ transforms, 130+ scorers achieving 85% Attack Success Rate in hours vs weeks.
— Peer-reviewed research on ML-based adversarial test generation for network protocols. Tested on 27 kernel CC implementations, discovered previously unnoticed bugs and limitations.
— Advanced technical analysis documenting 2026 LLM security threats with CVE specifics and peer-reviewed research. Comprehensive coverage of adversarial attack techniques (EchoLeak, RAG poisoning, payload splitting).
— Peer-reviewed research framework for adaptive red-teaming of LLMs using compositional attack generation and hierarchical sampling, achieving 0.97 safety rate on StrongReject and 0.95 on HarmBench.
2024-Q3: OWASP initiates standardized red teaming methodologies; Miami University releases AiR-TK open-source toolkit with 25+ adversarial attacks; HARM framework advances automated RL-based test generation for LLMs. Regulatory mandates (Biden EO, EU AI Act) drive industry adoption. Tools for fuzzing and vulnerability assessment (ZAP add-on) enable practical adversarial testing workflows.
2025-Q1: Mutation-based fuzzing achieves 95%+ jailbreak success rates on production LLMs (TurboFuzzLLM). LLM-assisted fuzzer generation for automotive protocols advances domain-specific application (SAE International research). Adversarial testing frameworks extend to industrial control systems (AAG). Critical voices highlight rising evaluation rigor challenges in the field.
2025-Q2: Ecosystem maturation accelerates: CyberArk releases FuzzyAI (1.3k GitHub stars); Meta publishes AutoPatchBench (136 fuzzing-discovered vulnerabilities) as part of CyberSecEval 4. RL-augmented fuzzing extends to robotics (GzFuzz, 25 crashes detected) and autonomous agents (RedTeamCUA, 60% attack success on computer-use agents). LLM-directed fuzzing advances efficiency (RandLuzz, 2.1x-4.8x speedups). Adoption barriers persist: practitioners report traditional testing fails on AI systems, and gap between research results and deployment guidance remains the core blocker.
2025-Q3: Research methodologies mature across domains: LLAMAFUZZ extends LLM-augmented fuzzing to structured data; AdverTest demonstrates two-agent adversarial RL for fault detection (8.56%+ improvements); MetAdv brings hybrid virtual-physical testing to autonomous driving (ACM recognition); LLAMA targets smart contract security (91% coverage). Novel training advances: UTRL outperforms frontier models on test quality. Enterprise adoption signals: Pentera (1200+ customers) commits to agentic red teaming. Core tension remains: tools advance but deployment guidance gap persists.
2025-Q4: Market validation accelerates with Gartner recognition of Adversarial Exposure Validation ($2.5B projected by 2026, 45% adoption). Real-world deployments documented: FuzzyAI used in AWS Bedrock security assessments; ATGen RL framework achieves 60% improvements over baseline LLM test generation. Threat actor adoption surfaces: AI-powered fuzzing shows 400% coverage and 280% bug discovery improvements. Tool ecosystem matures: specialized AI pentesting vendors (PyRIT, Robust Intelligence, HiddenLayer) gain visibility. Challenge persists: despite research advances and market momentum, deployment guidance for CI/CD integration remains the adoption bottleneck.
2026-Jan: Enterprise adoption accelerates: F5 releases AI Red Team with 10,000+ attack techniques and deploys to Fortune 500 enterprises in regulated sectors (finance, healthcare). Research advances continue with frequency-aware adversarial perturbations for vision system testing (IFAP). Threat landscape solidifies: practitioners assess adversarial ML attacks as operational risks today with escalating sophistication; deployment maturity follows market demand.
2026-Feb: Research methodologies mature across domains: SAFuzz advances semantic-guided fuzzing for detecting vulnerabilities in LLM-generated code (85.7% precision); AdverTest introduces two-agent adversarial loop for unit test generation (8.56% improvement over LLMs). Test suite robustness elevated: SWE-ABS framework strengthens benchmarks via mutation-driven adversarial testing, exposing inflated success metrics. Production CI/CD integration: Wireshark's automated fuzz job discovers memory safety bugs in real-world code. Practice transitions from research validation to operationalized methodology with deployment guidance patterns emerging.
2026-Mar: Vendor maturity and real-world deployment validate market category. OpenAI acquires Promptfoo (350K developers, 25% Fortune 500 adoption) for $86M; platform integrates 50+ adversarial test types into CI/CD. Research breakthroughs across domains: PILOT (IEEE S&P) discovers 51 CLI vulnerabilities; GoldenFuzz (NDSS) finds 5 critical hardware flaws; VIPL publishes 10 CVPR papers on vision-language adversarial attack generation (36% SOTA improvement); EACL 2026 demonstrates adaptive black-box optimization raising danger scores from 0.09 to 0.79 on production LLMs. Production systems demonstrate operational maturity: multi-agent adversarial arenas achieve 91.8% detection rates with continuous evolution; DeepTeam framework handles high-stakes multi-agent red-teaming (legal, therapeutic); AdvJudge-Zero fuzzer bypasses AI-judge safety mechanisms with 99% success rate via logit-gap analysis. Enterprise operationalization documented: internal adversarial simulation labs with CI/CD integration using CleverHans, Torchattacks, and IBM ART frameworks. Regulatory drivers surface: EU AI Act Article 15 mandates resilience testing. Critical perspective emerges: 540% year-over-year surge in prompt injection exploits; traditional security testing fails on non-deterministic AI systems; deployment guidance gap remains primary adoption barrier despite vendor proliferation.
2026-Apr: Market category confirmed at scale ($680M expanding to $8.92B by 2034 at 34% CAGR) as production red-teaming reaches landmark results—Anthropic Frontier Red Team, AISLE, and XBOW collectively discovered 500+ zero-days and 1,000+ vulnerabilities across major organisations, while a solo PhD researcher using fuzzing uncovered a critical CVSS-rated Chrome WebNN GPU vulnerability. Technical breakthroughs accumulate across the month: TEMPLATEFUZZ achieves 98.2% attack success on 12 open-source and 5 commercial LLMs; MASFuzzer demonstrates multidimensional API fuzzing for deep vulnerability discovery; CrowdStrike advances feedback-guided fuzzing methodology; ARES adaptive red-teaming framework achieves 0.97 safety rate on StrongReject using compositional attack generation. AI security agents crossed from assistants to autonomous hackers—Project Glasswing identified thousands of zero-days, and Claude Opus 4.6/Kimi K2.5 generate working exploits autonomously. Gartner recognizes Adversarial Exposure Validation as a mature category; BreachLock reports 40,000+ engagements with Fortune 100 adoption. However, tool-reliability remains problematic: empirical study of 13 open-source AI pentesting frameworks found 8 hallucinate results, stopping at decodable strings without reaching actual vulnerability chains, undermining trust in automated adversarial testing outputs. The adoption gap persists: only 16% of organisations have red-tested AI systems despite 74% having experienced AI security breaches, and prompt injection attacks surged 340% in enterprise deployments.
2026-May: Operational maturity solidifies with large-scale production deployments. Mozilla's agent-based fuzzing with Claude Mythos Preview discovered 271 Firefox vulnerabilities (180 sec-high); continuous adversarial pentesting across 28 companies found 2,000 vulnerabilities (44.6% critical/high); AdvNet exposed critical kernel bugs across 27 protocol implementations. Votal AI launched an RLHF-trained adversarial attacker with 100K+ attack prompts across 185+ named techniques, and agentic red-teaming frameworks (45+ attacks, 450+ transforms) now achieve 85% attack success rates in hours rather than weeks. Practitioner analysis highlights orchestration complexity — state management, tool integration, and evidence validation — as the primary gap between research capability and enterprise-ready deployment.