The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
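The "weighted maturity" behind each dot can be read as an evidence-weighted average over a domain's practices. The sketch below is purely illustrative — the field names, maturity scale, and weights are assumptions, not the index's actual scoring model.

```python
# Hypothetical sketch of a weighted domain-maturity score. Assumes each
# practice carries a maturity level (0-1) and an evidence weight; none of
# these values or field names come from the actual index.

def weighted_maturity(practices):
    """Evidence-weighted average maturity for one domain's dot."""
    total_weight = sum(p["weight"] for p in practices)
    if total_weight == 0:
        return 0.0
    return sum(p["maturity"] * p["weight"] for p in practices) / total_weight

# Illustrative domain with three practices at different maturity levels.
domain = [
    {"name": "DR screening", "maturity": 0.85, "weight": 3.0},
    {"name": "AI pathology", "maturity": 0.60, "weight": 2.0},
    {"name": "cardiac AI",   "maturity": 0.40, "weight": 1.0},
]
score = weighted_maturity(domain)  # ~0.69 on this made-up data
```

Weighting by evidence rather than averaging naively keeps a single speculative practice from dragging a well-documented domain's dot around.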
AI that analyses medical images across clinical specialties including pathology, dermatology, ophthalmology, cardiology, and dental imaging for detection, screening, and diagnostic support. Includes FDA-cleared retinal screening and AI-assisted pathology quantification; distinct from radiology, which uses different imaging modalities and clinical workflows.
AI-driven screening across clinical imaging specialties — ophthalmology, dermatology, pathology, cardiology, and dental imaging — has cleared the technical and regulatory bars for real-world use but remains confined to a vanguard of forward-leaning health systems. Diabetic retinopathy screening leads the field: multiple FDA-cleared autonomous systems routinely exceed 90% sensitivity, and national programmes in Norway and the UK have begun deploying them at population scale. The technology works. What stalls broader adoption is human, not algorithmic: only a fraction of eligible patients receive AI-based screening, clinician trust lags far behind clinician awareness, and workflow integration challenges persist even where EHR connectivity exists. This gap between proven capability and actual penetration defines a leading-edge practice whose constraint has shifted from "can we build it" to "will institutions and clinicians use it." Distinct from radiology AI in both imaging modalities and clinical workflow, specialist clinical imaging AI sits at the sharpest edge of that adoption tension.
Three platforms dominate the diabetic retinopathy screening segment, which remains the most mature subspecialty. EyeArt screens across 32 countries, holds EU MDR certification for three diseases, and was selected by the UK National Screening Committee as the only system ready for live NHS deployment; Norway's South-Eastern Regional Health Authority is using it to push population coverage from 55% toward 95%. AEYE-DS, the first FDA-cleared fully autonomous portable system, now integrates with Epic across 3,600-plus US hospitals and delivers results in under a minute. IDx-DR continues to validate in new geographies, with a German real-world study of 875 patients confirming 94.4% sensitivity for severe disease.
These deployments are real, but they remain the exception. A 2024 JAMA Ophthalmology study found that only 2.2% of imaged diabetic patients in the US received AI-based screening. The bottleneck is not performance — algorithms consistently score in the mid-to-upper 90s on sensitivity — but institutional and human resistance. Recent Q1 2026 case studies document the divergence: Cary Medical Management's deployment of Optomed Aurora AEYE across eight North Carolina clinics achieved a 15-20% improvement in HEDIS quality metrics and the highest Medicare Shared Savings performance in the state, while Cleveland Clinic's multi-clinic implementation delivers results in 30 seconds and achieves 85-95% screening rates without dilation. Yet institutional adoption remains glacial. A survey of 156 ophthalmologists found just 7.5% trusted AI for diagnostics despite broad awareness of it, and a Johns Hopkins patient study found that while 92% were satisfied with AI screening, 83% still wanted a physician in the loop. Clinician demand for continuous human oversight reflects both safety concerns and resistance to autonomous decision-making; a 2026 multinational survey found 63.74% of healthcare professionals insist on human-in-the-loop architectures. Workflow integration compounds the problem: an analyst survey of 150 healthcare organisations rated its criticality 9-10 out of 10, yet nearly half remained stuck in limited deployment. Reimbursement friction, EHR incompatibility (60% of US primary care EHRs remain incompatible with third-party AI tools), algorithmic bias across demographics, and workforce skill gaps (cited as the top barrier by 41.23% of institutions) round out a set of obstacles that are systemic rather than technical.
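The sensitivity figures quoted throughout this report are standard confusion-matrix arithmetic: the share of truly diseased patients the system flags. The counts below are invented for illustration, not drawn from any cited study.

```python
# Sensitivity and specificity from a binary screening confusion matrix.
# All counts here are made up for illustration.

def sensitivity(tp, fn):
    """True-positive rate: share of diseased patients the system flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of healthy patients correctly passed."""
    return tn / (tn + fp)

# Illustrative cohort: 100 diseased and 900 healthy patients screened.
tp, fn = 94, 6      # 94% sensitivity, in line with the figures quoted above
tn, fp = 810, 90    # 90% specificity
```

The trade-off matters clinically: a screening tool tuned for high sensitivity (missing few true cases) typically accepts more false positives, which is one reason clinicians keep demanding a human in the loop for confirmation.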
Market signals from April 2026 document ecosystem consolidation: PathAI's FDA-cleared AISight Dx digital pathology platform, deployed across MedStar Health's network of 40+ pathologists, represents enterprise-scale adoption beyond ophthalmology, marking the maturation of clinical-grade AI pathology with regulatory precedent (a Predetermined Change Control Plan) enabling iterative software governance. Aidoc's Foundation platform processes 35,000 scans monthly across 28 European hospitals, signalling operator-level deployment at scale. India's clinician AI adoption has surged from 12% to 41% in a single year (2024-2025), outpacing the US (36%) and UK (34%), though real-world diagnostic accuracy remains variable (60-80% sensitivity) in low-resource settings. A peer-reviewed systematic review (JMIR AI, April 2026) of 20 point-of-care imaging AI studies documents a median sensitivity of 93.6% with task-shifting in 65% of studies, yet identifies critical gaps in explainability evaluation and patient outcome measurement. Market projections estimate the global AI pathology sector reaching $633.69M by 2031 (28.16% CAGR from 2026), with software and decision-support services driving growth and hospitals commanding a 46% revenue share, signalling sustained enterprise investment. The practice boundary is expanding beyond ophthalmology: FDA Breakthrough designation for CLAiR enables cardiovascular risk screening via retinal vascular imaging in routine eye clinics, suggesting specialist imaging AI is transitioning from isolated tools toward integrated multi-disease screening platforms — but adoption mechanisms remain unchanged, constrained by the same barriers (workflow integration, clinician oversight demand, workforce capability gaps) that limit ophthalmology penetration.
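A projection like "$633.69M by 2031 at 28.16% CAGR from 2026" is just five years of compound growth. The sketch below back-derives the implied 2026 base (~$183M); that base is an inference from the quoted figures, not a number stated in the report.

```python
# Compound-annual-growth arithmetic behind a projection of the form
# "$633.69M by 2031 at 28.16% CAGR from 2026". The implied 2026 base
# is back-derived here, not a figure from the cited report.

def project(base, cagr, years):
    """Future value after compounding `cagr` annually for `years` years."""
    return base * (1 + cagr) ** years

cagr = 0.2816
years = 2031 - 2026  # five compounding periods
target_2031 = 633.69  # million USD

implied_2026_base = target_2031 / (1 + cagr) ** years  # ~183 (million USD)
```

Small changes in the assumed CAGR swing the endpoint dramatically over five periods, which is worth remembering when reading two-decimal-place market forecasts.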
— Physician-authored critical assessment documenting FDA-cleared AI performance matching subspecialist-level accuracy in diagnostic imaging (DR, pulmonary nodules, ICH, breast cancer screening) while questioning sustainability of human-in-the-loop model as capability advances.
— Industry analysis documenting India clinician AI adoption surge from 12% to 41% in one year (2024–2025), real-world imaging accuracy (CT brain hemorrhage 87%), and critical adoption risks including deskilling and documented failure cases.
— Peer-reviewed systematic review of 20 studies (~78,000 patients) on AI-assisted clinical decision support in point-of-care specialist imaging; median sensitivity 93.6%, task-shifting in 65% of studies, identifies critical evidence gaps in explainability and patient outcome measurement.
— Practitioner analysis documenting large-scale clinical imaging AI deployments in 2026, including PathAI-Labcorp U.S.-wide rollout with specific workflow metrics and Aidoc European deployment scale (35,000 scans/month across 28 hospitals).
— Analyst market report with regulatory milestones documenting maturation of clinical-grade AI pathology platforms; PathAI's AISight Dx FDA clearance with Predetermined Change Control Plan sets precedent for iterative software governance in regulated practice.
— Critical analysis of FDA AI medical device clearances: 75% of 2025 clearances are imaging devices; 96.4% bypass prospective clinical trials via 510(k) pathway; documents validation gaps and demographic bias risks, revealing regulatory approval does not ensure clinical evidence or equity.
— UltraSight AI-guided echocardiography achieves >95% diagnostic accuracy enabling non-sonographers to acquire clinical-quality ultrasound; Mayo Clinic validation across multiple patient populations demonstrates expansion of cardiac ultrasound diagnosis beyond specialist sonographers.
— Peer-reviewed systematic review synthesizing cardiology imaging AI (echocardiography, CT, CMR, nuclear) shows high diagnostic accuracy but highlights persistent barriers: large dataset requirements, limited transparency, data governance gaps, and need for rigorous prospective validation.