Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ←→ ESTABLISHED

Accessibility & accommodation support in education

LEADING EDGE

TRAJECTORY

Stalled

AI that supports learners with disabilities through automated accommodations, captioning, content adaptation, and assistive tools. Includes real-time captioning and content reformatting for accessibility needs; distinct from product accessibility auditing which targets digital products rather than learning environments.

OVERVIEW

As of May 2026, AI-powered accessibility in education stands at a critical inflection point. The April 24 regulatory deadline for ADA Title II WCAG 2.1 AA compliance arrived but was extended by DOJ (large entities to April 2027, smaller entities to 2028), and the tools are feature-complete while institutional execution has not kept pace. Every major platform — Zoom, Microsoft Teams, Google, Panopto — ships production-grade ASR captioning with 50+ language support, and forward-leaning universities have enabled auto-captions by default. Yet most institutions have not crossed the gap from tool availability to genuine accommodation quality.

ASR accuracy sits at 85-95% depending on conditions, well below the compliance-grade threshold, with systematic failures on accents, technical terminology, and multi-speaker environments. More critically, recent research documents that Whisper Large-v3 hallucinates (it fabricates content rather than signaling transcription failure), that word error rates degrade to 8-12% in real-world meetings versus 2.7% on clean audio, and that systemic bias against African American English speakers reaches a 40% error rate for Black men — meaning ASR systems impose behavioral adaptation burdens and representational harms that undermine educational equity.

The binding constraint is organisational, not technical: 45% of educators are unaware of ADA compliance rules, only 6% of education organisations conduct safety testing on student-facing AI systems, and independent evaluations consistently find that auto-captions alone do not meet WCAG standards. Institutional practice confirms this: UC Merced's Kaltura workflow achieves a 60-80% accuracy baseline and requires labor-intensive manual review to reach 99% WCAG AA compliance. Regulatory deadlines are forcing procurement decisions, but procurement is not implementation.

The practice's central tension remains clear: vendor maturity has outrun institutional readiness, and equitable outcomes depend on human review, governance infrastructure, and a commitment to addressing the systemic biases that most organisations have yet to build. Research is advancing on multiple fronts — text simplification, DHH-centered caption interface design, and emotionally enhanced captions for neurodivergent learners — but the deployment constraints remain organisational and equity-focused rather than technical.
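The accuracy figures quoted above (2.7% on clean audio, 8-12% in meetings) are word error rates. As a point of reference, WER is the word-level edit distance between a reference transcript and the ASR output, divided by the reference word count; a minimal sketch (the example sentences are invented, purely illustrative):

```python
# Minimal word error rate (WER) sketch: the metric behind the ASR
# accuracy figures cited in this index. WER = (substitutions +
# insertions + deletions) / reference word count, computed here via a
# standard word-level edit-distance table.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

ref = "the mitochondria is the powerhouse of the cell"
hyp = "the mitochondria is a powerhouse of the cell"
print(f"WER: {wer(ref, hyp):.3f}")  # 1 substitution over 8 words -> 0.125
```

Note how coarse the metric is: a "95% accurate" (5% WER) caption of a 150-words-per-minute lecture still garbles roughly seven words every minute, which is why the compliance-grade threshold sits well above raw vendor accuracy claims.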

CURRENT LANDSCAPE

Microsoft Teams now supports captioning in 50+ languages with sign language interpreter positioning and auto-detection of spoken language with real-time caption/transcript updates; Otter.ai has reached 35M+ users and $100M ARR, adding HIPAA compliance in mid-2025 to address regulated education contexts. Vendor features have stabilised. The action has shifted to institutional deployment and its complications. The University of Alberta and Georgetown have enabled auto-captions by default across platforms; Binghamton University has built a mature compliance operation with dedicated coordinator roles, advisory groups, and systematic audits spanning 250 websites and 60,000+ LMS files. Khan Academy has published institutional deployment guidance demonstrating production-grade accessibility governance: WCAG 2.2 AA development baseline, external accessibility experts, annual third-party audits, VPATs, and assistive-technology user testing. These are the vanguard — most institutions remain far behind. The April 24, 2026 regulatory deadline (ADA Title II, WCAG 2.1 AA compliance) was extended by DOJ (now April 2027 for large entities, 2028 for small), with enforcement mechanisms active and litigation accelerating (4,600+ federal accessibility lawsuits filed in 2025).

Third-party tool consolidation is reshaping the landscape. A class-action lawsuit against Otter.ai over unauthorised voiceprint creation has heightened privacy risk for education buyers, and universities including UC Riverside and UMass have removed third-party transcription tools entirely, pushing students toward built-in platform options. Independent testing by Equal Entry and University of Maryland system guidance confirm that auto-captions from Zoom, Panopto, YouTube, and Vimeo do not meet WCAG without manual editing — making hybrid human-plus-AI workflows the emerging institutional standard rather than full automation. Research from the University of British Columbia (2026) found that while students believe corrected captions improve learning, the empirical differences between corrected and uncorrected AI captions were negligible (effect size r=0.14), suggesting that automatic captions can cost-effectively meet accessibility needs without expensive human correction — though this conflicts with practitioner guidance from accessibility specialists, who recommend human review as mandatory.
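The r = 0.14 effect size cited above is a correlation-style measure. One standard way to obtain it is converting a pooled two-sample t statistic via r = sqrt(t² / (t² + df)); a minimal sketch under that assumption (the sample scores below are invented for illustration, not the study's data):

```python
import math

def effect_size_r(group_a, group_b):
    """Convert a pooled two-sample t statistic to effect size r.

    Illustrative sketch only; assumes the pooled-variance t-test
    and the conversion r = sqrt(t^2 / (t^2 + df)).
    """
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    # Pooled variance across both groups
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    df = na + nb - 2
    return math.sqrt(t * t / (t * t + df))

# Hypothetical comprehension scores for two caption conditions
corrected = [74, 81, 69, 77, 72, 80]
uncorrected = [73, 79, 70, 75, 71, 78]
print(f"r = {effect_size_r(corrected, uncorrected):.2f}")
```

By Cohen's conventional benchmarks (r ≈ 0.1 small, 0.3 medium, 0.5 large), r = 0.14 sits in the small range, which is why the study reads the corrected-versus-uncorrected difference as negligible.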

Regulatory compliance has concentrated institutional focus, yet compliance readiness remains weak: only 14% of school districts completed accessibility updates by the April 2026 deadline, and 88% of the 20 largest districts received failing grades on accessibility fundamentals. DOJ formally acknowledged that generative AI 'does not yet reliably automate the remediation of inaccessible content at scale,' confirming that automation cannot substitute for institutional governance. Federal investment signals support: the $7.2M NCADEMI centre and $2M Department of Education funding for evidence-based accessibility tools both launched in 2024-2025. Yet significant barriers persist: a ScreenPal survey of 600+ educators found 47% of educational videos still lack audio descriptions; peer-reviewed research documents systemic ASR bias affecting African American English speakers (40% error rate); Whisper real-world accuracy degrades to 8-12% in meetings versus 2.7% on clean audio; and low-resource languages face 25%+ error rates. Only 6% of education organisations conduct AI safety testing on student-facing systems. The critical constraint remains institutional execution: UC Merced's documented Kaltura workflow achieves 60–80% auto-caption baseline accuracy but requires labor-intensive manual review to reach 99% WCAG AA compliance. Most institutions lack the governance infrastructure, expertise, and capacity for the human-review workflows that define the boundary between compliance and equitable outcomes.

TIER HISTORY

Research: Jan 2020 → Jan 2020
Bleeding Edge: Jan 2020 → Jan 2021
Leading Edge: Jan 2021 → present

EVIDENCE (125)

— Computational linguist argues ASR systems encode standardized-speech assumptions, excluding Indigenous languages and non-mainstream dialects; hospital AI scribes produce 50% inaccuracy rate—documenting justice and equity barriers in transcription deployment.

— Peer-reviewed research (Google Research, UC Davis, Stanford) documenting that ASR systems systematically misrecognize African American English (40% error rate for Black men); shows behavioral adaptation burden on users—critical equity barrier for diverse learners.

— Zurich/Cambridge research (JASA Express Letters 2024) documents Whisper Large-v3 hallucination: fabricates content during silences rather than admitting transcription failure; critical accuracy limitation for educational deployment.

— Whisper Large-v3 real-world accuracy: 2.7% WER on clean audio vs. 8-12% in meetings (3-4x worse); hallucination 1-80% of segments; low-resource languages 25%+ WER—documenting language disparities affecting minority learners.

— User study (12 DHH participants, CHI 2018 peer-reviewed) demonstrates AI-powered caption visualization design with speaker identity and context preferred over linear captions for group learning settings.

— Empirical study (144 undergraduates, British Journal of Educational Psychology 2026) shows automated captions meet accessibility needs without requiring expensive human correction; effect size r=0.14 across conditions.

— Khan Academy institutional deployment case: WCAG 2.2 AA baseline, annual third-party audits, VPATs, internal accessibility OKRs, and assistive-technology user testing—exemplifying organizational governance required for compliance.

— NUS case study (two-semester pedagogical trial) shows AI-generated video with automated captioning achieves 8.1% learning gains vs. 4.1% traditional; students prioritized accurate captions over instructor authenticity.

HISTORY

  • 2020: AI-powered accessibility moved from experimental pilots to rapid institutional deployment during COVID-19 remote learning shift. Major universities (UCL, Trinity College Dublin) deployed ASR captioning at scale; Otter.ai and Microsoft Teams launched education-specific transcription features. Accuracy reached 85-90% for general speech but remained problematic for technical terminology. Adoption was policy-driven (opt-out at some institutions) and highly variable (1% opt-in adoption elsewhere). Student surveys showed strong demand despite persistent gaps; professionals noted tools were helpful but not complete solutions.

  • 2021: Research validated AI-driven captioning effectiveness (96%+ accuracy for English in controlled settings, 87.5% UX preference for AR interfaces), while institutions consolidated deployment. UT Austin and Ohio State established university-wide policies for automated captioning; TAMIU adopted outsourcing models. Microsoft and Otter.ai expanded product support across multiple platforms and languages. However, practitioner evidence from major conferences revealed persistent failures in real-world deployment—inaccuracy with technical terms, inconsistent caption quality, and incomplete coverage—signaling that automation required human oversight and institutional planning to deliver equitable access.

  • 2022-H1: Vendor ecosystem matured (Zoom, Teams, Otter.ai, YouTube all offering free ASR), but implementation equity gaps widened. Peer-reviewed research found 50%+ of disabled students in online science courses still not properly accommodated despite tool availability. Algorithmic bias concerns emerged—Zoom captions performed inconsistently across dialect variants (AAVE accuracy issues). Usability barriers became visible: Zoom's manual two-step enablement limited practical adoption despite free captions. Accuracy limitations persisted (YouTube auto-captions 60-70%, below WCAG). Institutional barriers included syllabus language that positioned accommodations as exceptions, creating stigma. The critical insight: technology had scaled, but organizational and cultural readiness had not.

  • 2022-H2: Ecosystem expanded globally—Microsoft Teams added Ukrainian and multilingual support; Zoom launched translated captions. Real-world institutional deployments documented (University of Edinburgh's Zoom + AI Media live captioning for deaf students; NWEA-Microsoft accessible math assessment prototypes). However, critical assessment emerged: National Deaf Center documented persistent platform limitations (missing punctuation, speaker ID, audio descriptions); independent research on Zoom accessibility revealed gaps in caption accuracy and embedded content accessibility. The field had reached a strategic inflection point: technology maturity was no longer the constraint—organizational execution, institutional commitment, and genuine implementation equity were the determining factors for advancing accessibility outcomes.

  • 2023-H1: Policy adoption accelerated—44 US states formally recognized assistive technology as legal accommodation for students with disabilities. Major vendors confirmed production deployment metrics: Zoom ASR at 80% accuracy, Microsoft Teams at 85-90%. Institutional pilots expanded: Williams College deployed Accommodate platform to automate accessibility management amid post-pandemic demand surge. However, deployment barriers emerged: HIPAA compliance restrictions blocked auto-captions on Zoom in healthcare-adjacent contexts, revealing regulatory constraints on AI-driven accessibility. Practitioner and vendor evidence from AHEAD conference framed ASR as both solution and stopgap, emphasizing that technology availability did not guarantee equitable implementation. The critical pivot: by mid-2023, every major educational platform offered AI captions, yet the bottleneck remained institutional readiness, support infrastructure, and cultural commitment to accessibility as foundational rather than exceptional.

  • 2023-H2: Institutional adoption became mainstream—Georgetown and TU Delft deployed auto-captions by default across universities; Google Chrome and Google Workspace expanded with Live Caption and PDF OCR accessibility features; Otter.ai reached 14M+ registered users. Real-world research validated positive outcomes (University of Twente's DHH students successfully participated in classroom discussions using real-time captions), but also revealed critical limitations: generative AI tools (ChatGPT, Midjourney) showed significant reliability problems and ableist biases when tested by accessibility researchers; automatic captions remained insufficient for accents/dialects and lacked punctuation/speaker ID; persistent practitioner concerns about caption quality and need for human review. The field had matured to near-universal tool availability with default-on policies at forward-thinking institutions, yet implementation equity gaps persisted—organizational readiness, genuine commitment to accessibility review, and continued innovation in accuracy and linguistic support remained the determinants of equitable outcomes.

  • 2024-Q1: Vendor ecosystem investment continued—Otter.ai launched Meeting GenAI for AI-powered meeting transcription, and Microsoft expanded Teams live captioning to Azure Virtual Desktop. Institutional adoption remained steady with universities like Tufts rolling out Teams transcription features. Research from Australian universities showed ASR captioning valued by all students (disabled and non-native speakers alike), supporting universal design rather than accommodation-only framing. However, industry guidance emphasized persistent limitations: ASR errors with accents/dialects, missing punctuation and speaker ID, and the critical need for human review, especially in educational contexts. Generative AI tools continued to show reliability and bias concerns. By Q1 2024, the field remained stable at near-universal tool availability but with unchanged implementation gaps—institutional readiness and genuine commitment to accessibility review remained the determining factors for equitable outcomes.

  • 2024-Q2: Regulatory pressure accelerated adoption—ADA Title II compliance deadlines (2026-2027) began driving institutional technology procurement decisions. Microsoft Teams expanded live translated captions to 6 languages without Premium licensing, demonstrating continued vendor ecosystem investment. However, significant deployment barriers emerged: UMass banned third-party transcription tools (Otter.ai, MeetGeek) due to privacy violations and Massachusetts all-party consent statutes, restricting students to built-in Zoom/Teams options; Inside Higher Ed documented real deployment failures with law students finding Otter AI transcriptions 'completely unworkable.' Harvard researchers documented biases in speech-to-text AI affecting disabled people, emphasizing need for participatory design in development. ASR accuracy metrics showed incremental improvements (OpenAI Whisper at 8% error rate, Google Video at 14%), signalling quantifiable progress. The critical insight: by mid-2024, vendor features were mature and regulatory compliance was driving adoption, yet implementation equity remained contingent on institutional execution, privacy/consent governance, and genuine commitment to accessibility review rather than passive tool reliance.

  • 2024-Q3: Third-party tool ecosystem consolidation accelerated—UC Riverside removed Otter.ai and other third-party transcription tools from Zoom due to data security requirements, shifting students to Zoom AI Companion. Institutional accessibility demands remained strong: Clever survey found 56% of K-12 educators desired better technology for students with IEPs/504 plans, and 55% sought inclusive edtech training, signalling persistent practitioner demand. Large-scale production deployments confirmed: National Association of the Deaf operated 4,000+ open-captioned educational videos in streaming service; international adoption visible at KMITL (Thailand) promoting Teams Live Captions. Industry guidance from CoSN/CAST emphasized balanced assessment of AI accessibility tools, including benefits (text-to-speech, AAC integration) alongside critical challenges (algorithmic bias, data privacy, accuracy limitations). Section 508 compliance remained an institutional barrier: schools faced pressure to verify vendor VPATs and conduct accessibility testing in procurement. By Q3 2024, vendor features remained mature and stable, regulatory deadlines continued to drive adoption momentum, but implementation barriers shifted from tool availability toward privacy governance, vendor consolidation effects on student choice, and the persistent need for institutional commitment to accessibility review and remediation.

  • 2024-Q4: Regulatory momentum for ADA Title II compliance accelerated institutional procurement—Justice Department rules mandating WCAG 2.1 Level AA compliance for digital learning resources by 2026-2027 became the primary adoption driver. Federal investment signaled urgency: $7.2M National Center on Accessible Digital Education Materials (NCADEMI) launched in October 2024 to support state and local education agencies in compliance preparation. Sector-specific adoption metrics emerged: peer-reviewed survey of US Colleges of Osteopathic Medicine showed 95% lecture recording but inconsistent transcription/captioning provision (33% offering both). Critical assessment of caption accuracy persisted: 3Play Media 2024 ASR study and legal precedent analysis presented at Accessing Higher Ground conference highlighted real limitations of automated captions in educational contexts, signaling that production deployment did not guarantee sufficient quality. Vendor-led institutional partnerships continued: Verbit collaborations at Crafton Hills CC and CSU Northridge addressed DEI dimensions of implementation. State-level policy adoption continued: Texas expanded STAAR test accommodations including enhanced text-to-speech features. Research innovation advanced accessibility: UAH thesis work on context-aware image-caption to ASL translation and conference presentations on AI for visually impaired students indicated ongoing R&D push. By Q4 2024, the practice had entered a critical institutional maturity phase: regulatory deadlines were forcing procurement decisions, vendor features remained stable, but implementation quality gaps—accuracy limitations, organizational readiness, genuine accessibility governance—remained the determinants of equitable student outcomes.

  • 2025-Q1: Vendor feature expansion continued—Microsoft Teams extended live captions to cover both standard meetings and large-scale events with multilingual support (50+ language options), demonstrating ongoing ecosystem investment. Institutional deployment scaled: Michigan State University enabled automated captioning across Kaltura MediaSpace for all uploaded videos (launching March 22, 2025) with transparent accuracy disclosure (75% requiring human review). Market-level adoption signals accelerated: global AI-powered captioning and subtitling market grew at 65% CAGR with education identified as key growth segment, underpinned by regulatory drivers (FCC, DOJ ADA, Section 508 compliance). However, institutional governance of accessibility remained the critical variable: major research universities (Boston University) published explicit guidance that automated captions are not a replacement for formal accommodations, emphasizing the persistent gap between tool availability and equitable implementation. By Q1 2025, the practice exemplified the mature-but-uneven adoption pattern: vendor platforms were feature-complete and globally accessible, regulatory deadlines (2026-2027) were driving procurement, yet institutional capacity for accessibility governance—accurate caption review, compliance verification, and genuine accommodation quality—remained the limiting factor for equitable student outcomes.

  • 2025-Q2: Vendor platform maturity accelerated with institutional default policies: University of Alberta enabled Zoom automated captions by default for all users (Spring 2025); Microsoft Teams expanded language support to 50+ options including sign language positioning. Institutional governance advanced: Seneca Polytechnic formalized Otter.ai as supported accommodation with privacy office compliance; peer-reviewed research (PLOS ONE, May 2025) quantified disability service disparities and AI's potential (15% vs. 35% disclosure rates, t-tests p<0.001) to address institutional gaps. Federal adoption signals strengthened: $2M US Department of Education funding (June 2025) allocated for evidence-based technology tools improving reading outcomes for students with disabilities, signaling government commitment to scaled deployment. However, critical assessments persisted from accessibility practitioners: American Foundation for the Blind and Bureau of Internet Accessibility published major analyses documenting that automated captions remain insufficient without human review, AI systems struggle with accents and contextual nuance, and current overlays and tools often fail to improve accessibility outcomes. The field had entered a critical inflection point: by mid-2025, vendor ecosystem maturity and regulatory deadlines were universally driving adoption, yet persistent gaps between technology capability and equitable implementation outcomes required institutional commitment to governance, human review, and genuine accessibility remediation rather than passive tool reliance.

  • 2025-Q3: Vendor ecosystem continued feature expansion and refinement—Microsoft Teams released usability and privacy enhancements for live captions (scrollable transcripts, improved text copying controls) deployed across production environments in July 2025, signalling ongoing UX investment. Educator adoption trends solidified: D2L survey of 1,200 US educators (August 2025) reported 54% overall AI use with 88% adoption among Gen Z educators, identifying accessibility support for students with IEPs/504 plans as a top-3 growth use case alongside lesson planning and plagiarism detection; Michigan Virtual's summer survey of 554 educators similarly documented rapid AI adoption growth with strong perceived potential for accessibility improvements, though barriers remained (training gaps, policy clarity). Institutional governance matured further: cross-institutional pilot at 8 European universities (AACSB initiative, July 2025) demonstrated structured accessibility audits and intentional accommodation design, with 25% of pilot students actively accessing accommodations (captioned videos, multilingual glossaries), exemplifying integrated approach to inclusive educational design. However, critical practitioner assessment persisted: peer-reviewed analysis (IJESD, July 2025) of widely-deployed Otter.ai transcription tool through UNESCO's AI Competency Framework documented both accessibility benefits and systemic risks (algorithmic bias, data transparency gaps, limited cultural inclusivity, accountability gaps), reinforcing that tool availability alone did not guarantee equitable outcomes. 
By end-Q3 2025, the field remained at an inflection point: vendor platforms had achieved feature maturity and default-on policies were expanding, educator demand for accessibility-focused AI was validated and strong, and regulatory deadlines (2026-2027) continued driving institutional procurement—yet implementation equity remained contingent on institutional governance, human review processes, and genuine commitment to accessibility remediation rather than passive automation reliance.

  • 2025-Q4: Vendor platform consolidation and market validation accelerated—Otter.ai achieved $100M ARR milestone (March 2025 through year-end reporting) with 35M+ global users and 1B+ cumulative meetings processed, with HIPAA compliance achieved July 2025, signaling production-grade readiness for regulated educational contexts. Institutional accessibility governance shifted toward compliance enforcement: ADA Title II regulatory deadline (April 2026) became the primary forcing function, with case studies like Binghamton University demonstrating mature implementation (compliance coordinator role, accessibility advisory groups, systematic 250-website audits, 60K+ LMS file scans). Critical assessment of automation limitations coalesced around persistent barriers: ScreenPal survey of 600+ educators documented widespread policy awareness gaps (45% unaware of ADA rules), implementation execution failures (47% audio description gap despite increased video use), and organizational readiness as the binding constraint. Technical limitations analysis deepened: research on ASR accuracy biases documented 100:1 training data imbalance favoring general American English, with concrete failures in diverse educational contexts (University of Leeds case: Nigerian-British accent systematically misrecognized); independent 3Play Media analysis of 200+ hours of educational content reaffirmed that human review is mandatory for accessibility compliance. Critical practitioner assessments from vendors and accessibility organizations documented persistent gaps between auto-caption availability and ADA compliance readiness, reinforcing that technology maturity remained decoupled from institutional implementation capacity. 
By end-2025, the practice entered its most mature institutional phase: vendor platforms had achieved global scale and feature completeness, regulatory enforcement deadlines (2026-2027) were the primary adoption driver, and widespread institutional commitments to accessibility governance were visible—yet the determining factor for equitable student outcomes remained institutional execution: organizational capacity for human review, cultural commitment to accessibility as foundational, and governance infrastructure to bridge the gap between automated tool capability and genuine accessibility quality.

  • 2026-Jan: Regulatory pressure intensified as April 2026 ADA Title II deadline approached, emerging as primary institutional forcing function for technology procurement and compliance planning. Industry attention shifted toward vendor evaluation and critical assessment: higher ed journalists warned of vendor tools falsely claiming accessibility, while Caption Pros and other vendors documented that auto-captions (85-95% accuracy) remain insufficient without human review, establishing compliance-grade standards. Educational sector awareness gaps persisted: research showed only 6% of education organizations conduct AI safety testing on student-facing systems despite widespread deployment; community forum reports documented production failures in Zoom AI transcription reliability. Educational research centers (CIDDL) maintained focus on assistive technology integration and disability-specific practices, signaling continued institutional emphasis on structured AT implementation rather than passive automation reliance. By month-end, the practice exemplified mature-but-constrained adoption: vendor features had stabilized at production scale, institutional procurement was compliance-driven, and critical assessment of automation limitations was widespread—yet real-world implementation barriers (training gaps, safety testing gaps, policy awareness gaps) remained unchanged, confirming that institutional readiness rather than technology maturity continued to determine equitable outcomes.

  • 2026-Feb: Vendor platform maturity continued with strategic feature releases: Microsoft Teams expanded speech-to-speech interpretation to cover standard calls in nine languages and introduced mandatory consent policies for transcription, signaling investment in both accessibility and privacy governance. However, critical barriers accelerated: class-action litigation against Otter.ai (filed August 2025, prominent in February 2026) documented privacy violations and unauthorized voiceprint creation, establishing significant regulatory and procurement risk for third-party transcription platforms widely used in education. Independent accessibility testing reaffirmed persistent ASR limitations: Equal Entry's systematic evaluation of YouTube, Vimeo, Azure Video Indexer, and Whisper AI documented failures with accents and proper nouns, recommending human captioners as industry standard. University system guidance (USM) confirmed institutional reality: auto-captions do not meet WCAG without manual editing, establishing hybrid human-plus-AI as institutional best practice rather than full automation. Critical design analysis (Pratt Institute) emphasized that true accessibility requires standard design practices rather than remedial accommodations, framing tool limitations as design failures not user accommodations. By month-end, the field faced divergent pressures: vendor features and regulatory deadlines (April 2026) drove procurement and adoption momentum, yet independent assessment and institutional guidance reinforced that AI-driven accessibility without human oversight remained insufficient for compliance or equitable outcomes—institutional capacity for review, testing, and governance remained the binding constraint.

  • 2026-Mar: Research advanced on adaptive caption design for underserved learners: a 24-participant user study showed captions with emotional and multimodal cues reduce cognitive load and improve STEM comprehension for DHH and neurodivergent students, while peer-reviewed work demonstrated AI-powered text simplification (50% reading level reduction) and video subtitling (94.6% sync accuracy) with measurable user satisfaction gains. Institutional execution constraints were reconfirmed by UC Merced's documented Kaltura workflow, where auto-captions achieve 60–80% baseline accuracy and require labor-intensive manual review to reach WCAG AA compliance — confirming that organizational capacity for human review, not technology availability, remains the binding constraint on equitable outcomes.

  • 2026-Apr: The April 24 ADA Title II WCAG 2.1 AA deadline arrived but was then extended by DOJ — large entities now have until April 2027, smaller entities until 2028 — shifting the immediate compliance forcing function while leaving structural readiness gaps unchanged. Compliance readiness at the deadline was weak: only 14% of school districts had completed accessibility updates, and 88% of the 20 largest districts received an F grade on fundamentals. DOJ formally acknowledged generative AI 'does not yet reliably automate remediation at scale.' Disability advocates documented concrete barriers: 73% of disabled students use AI, yet 0% of evaluated interventions rate as low-risk for bias; 100+ organizations opposed the extension citing ongoing access failures (dyslexic students cut off from assignments, disabled parents unable to access school platforms). Institutional case studies showed hybrid remediation achieving 85% accessibility gains, but the binding constraints remain organizational: procurement bottlenecks, governance fragmentation, limited safety testing (only 6% of education orgs test student-facing AI), and ASR accuracy failures on accented speech and specialized vocabulary persist as the gap between tool availability and equitable implementation.

  • 2026-May: New research advanced understanding of ASR limitations and equity barriers: peer-reviewed studies documented that Whisper Large-v3 exhibits hallucination (fabricates content), real-world meeting accuracy (8-12% error) is 3-4x worse than clean audio, and systemic bias against African American English speakers creates 40% error rates affecting behavioral adaptation and educational equity. Empirical study from University of British Columbia (2026) showed automated captions meet accessibility needs without expensive human correction (effect size r=0.14), suggesting efficient accessibility deployment. Case study from National University of Singapore demonstrated that AI-generated videos with automated captions can achieve learning gains (8.1% vs. 4.1% traditional) with students prioritizing accessible captions over instructor authenticity. Research on DHH-centered accessibility design (CHI peer-reviewed) showed AR head-mounted display captions with speaker context preferred over linear captions for group learning. Critical analysis documented that ASR systems encode standardized-speech assumptions, systematically excluding Indigenous languages and non-mainstream dialects, with hospital AI scribes producing 50% inaccuracy. Evidence converges: vendor platforms have stabilized at feature maturity while systemic equity barriers—bias, hallucination, accuracy degradation in real-world conditions—become the defining challenge alongside institutional governance gaps.

TOOLS