The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that prepares board governance documentation, meeting minutes, and materials for regulatory examinations and audits. Includes automated minute generation and examination readiness assessment; distinct from compliance planning, which manages ongoing compliance rather than preparing for specific governance events.
AI-driven governance documentation has crossed from experiment to production deployment at forward-leaning organisations, yet most boards and regulated entities have not started. The technology is proven — automated minute generation, board material compilation, and examination evidence collection all work reliably in controlled settings — but adoption is bottlenecked by legal liability concerns and governance maturity gaps, not technical limitations.
The defining tension is structural: regulators increasingly demand AI-related governance artifacts (the SEC now includes AI oversight in virtually all examinations), while procedural law and evidentiary standards in many jurisdictions treat AI-generated records as legally insufficient. Early adopters, from education providers and smaller boards to financial services firms, are extracting significant efficiency gains. Regulated sectors — banking, insurance, public utilities — remain constrained by privilege exposure, discoverability risks, and frameworks that predate AI. Only about 21% of organisations have mature governance models capable of supporting these tools, even as 85% plan to deploy autonomous AI agents that will require documented oversight.
Vendor tooling has matured faster than organisational readiness. Diligent Boards, the dominant platform with 25,000+ customers, delivers AI-powered Smart Minutes and document compilation that cuts meeting prep time by 80%. Board Intelligence reports 40% time savings for clients like Cambridge Building Society. Newer entrants are expanding the category: FairNow automates evidence collection for ISO 42001 and NIST compliance (customers include Dayforce and Cielo), while Diligent's AI Request Agent for Internal Audit creates auditable evidence trails. ON Semiconductor and HPE have begun piloting AI for SEC filing drafts, pushing governance documentation from board operations into regulatory disclosure.
Regulatory pressure has entered an enforcement phase. The SEC's 2026 examination priorities embed AI oversight across all examination categories—not as a specialized topic but as a core supervisory focus with mandatory documentation expectations. FINRA has designated generative AI a formal supervisory priority, explicitly requiring written supervisory procedures and governance documentation for all AI tools. Federal Reserve guidance (SR 26-2, April 2026) replaces legacy model risk management standards with explicit documentation and evidence requirements for AI systems. EU AI Act high-risk system requirements took effect August 2, 2026, with conformity assessments and technical documentation serving as prerequisites for deployment. The regulatory shift is fundamental: examiners in 2026 no longer ask "do you disclose AI use?" but rather "can you demonstrate and reconstruct your AI decision-making with audit-ready evidence?"
Institutional investors have formalized governance expectations: Glass Lewis, WilmerHale, and Harvard Law Forum now benchmark board disclosures against four-pillar frameworks—formal AI inventory, risk classification, board-level oversight structure, and third-party validation. Proxy advisors and governance bodies treat incomplete documentation as a red flag. Only about 50% of S&P 100 companies disclose board-level AI oversight; fewer than one-third disclose both oversight structure and formal policy. Organisations now face a mandate to produce governance documentation proving responsible deployment, yet maturity remains constrained: a Deloitte survey of 3,200+ leaders (April 2026) found only 21% have mature governance models for agentic AI, despite 85% planning autonomous agent deployment. Two-thirds of board directors report limited or no AI knowledge.
Vendor platforms have hardened into production tooling. Diligent's AI Board Member (GA expected fall 2026) reads five years of board papers and reconstructs decision context, shifting boards from reactive to prepared. Board Intelligence ships production minute-writing, agenda planning, and report-generation tools with explicit risk governance controls. The Art of Service released OWASP-aligned implementation playbooks (April 2026) reducing governance assessment time from 6–9 months to 120–140 hours, targeting CISOs, compliance officers, and security architects conducting examination preparation. The playbooks map governance controls to NIST AI RMF, ISO 42001, and EU AI Act—commodifying the evidence-collection work that previously required consulting fees of EUR 80,000–250,000.
Unilever's governance platform deployment reduced undetected model drift by 40%, demonstrating ROI for examination-ready governance infrastructure. Bradesco (Brazil's largest bank) deployed agentic AI with full audit trails and 100% behavioral documentation, achieving 83% resolution rates and 30% cost reduction—a reference architecture for regulated governance. These deployments show the maturity threshold: governance documentation is not optional overhead but a capability that enables production agentic AI at scale in regulated sectors.
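Deployments like Bradesco's hinge on audit trails that can be reconstructed and verified after the fact. None of the vendors above publish their internals, so as an illustration only, here is a minimal hash-chained, append-only log sketching the core idea behind "audit-grade" evidence: each entry commits to the hash of the previous one, so any retroactive edit breaks verification.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    """Illustrative append-only audit log: each entry embeds the hash of
    the previous entry, so retroactive edits are detectable on verify().
    A production system would also persist entries to write-once storage."""

    GENESIS = "0" * 64  # placeholder hash for the first entry's predecessor

    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail):
        """Append one event; returns the entry's hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev": prev_hash,
        }
        # Canonical JSON (sorted keys) so the hash is deterministic.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Re-walk the chain; any altered field or broken link fails."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

trail = AuditTrail()
trail.record("minute-agent", "draft_generated", {"meeting": "Q3 board"})
trail.record("corp-secretary", "draft_approved", {"edits": 2})
assert trail.verify()

# Tampering with an earlier entry breaks the chain on verification.
trail.entries[0]["detail"]["edits_hidden"] = True
assert not trail.verify()
```

The actor names and event fields here are hypothetical; the point is the design choice regulators are probing for — an evidence trail whose integrity can be demonstrated, not merely asserted.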
Legal risks remain concrete: Birmingham, Michigan, dropped AI-generated minutes in late 2025 after concluding they violated Open Meetings Act standards. Law firm guidance continues to flag privilege erosion, discoverability of unvetted AI records, and hallucination risk as material barriers to regulated sector adoption. The governance documentation paradox persists—organisations face simultaneous mandates to document AI systems exhaustively (for regulatory compliance) and minimise documentation trails (to protect against litigation discovery). Only organisations with mature governance programs can navigate both.
— Framework article directly addressing governance documentation and examination readiness, with focus on SEC/DOJ/FTC enforcement, fiduciary liability, and quantified governance metrics for board assurance.
— Commercial implementation playbook explicitly designed for audit preparation and governance documentation. Includes 30-question assessments, evidence collection runbook, audit preparation playbook, and cross-framework control mappings (OWASP, NIST, ISO, MITRE ATLAS). Demonstrates production-ready governance documentation methodology.
— Shows board governance documentation tools (Minute Writer, Insight Driver, agenda planning, report writing AI) with explicit risk governance controls including data isolation, transparency, and human-in-the-loop design.
— Vendor blog post providing additional context on AI Board Member capabilities for board preparation, decision-making, and governance documentation processes.
— Research from Stanford/LSE/Oxford identifying specific gaps between governance policy requirements (EU AI Act, US Executive Order, China regulations) and current technical tooling and expertise available to implement them.
— Agentic AI governance methodology with explicit focus on immutable audit trails, evidence collection, and compliance-ready documentation. Addresses audit-grade logging and production-ready governance architecture for examination readiness.
— Institutional investor and proxy advisor expectations for AI governance documentation, with specific framework requirements (inventory, risk classification, external validation) that boards must document for 2026 examination readiness.
— Directly addresses regulatory drivers (SR 26-2, EU AI Act) and governance documentation requirements; prescribes specific documentation/versioning controls needed for examination compliance.