Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
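To make "weighted maturity" concrete, here is a minimal sketch of how such a score could be aggregated. This is illustrative only: the index's actual methodology, weights, and scale are not published on this page, and every practice name and number below is hypothetical.

```python
# Illustrative sketch only -- the real index methodology is not shown here.
# Assume each practice in a domain has a maturity score (0.0 = bleeding edge,
# 1.0 = established) and a weight reflecting its evidence volume; the domain's
# dot position is then the weighted mean of its practices.

def domain_maturity(practices):
    """Weighted mean of (score, weight) pairs for one domain."""
    total_weight = sum(w for _, w in practices)
    if total_weight == 0:
        raise ValueError("domain has no weighted practices")
    return sum(score * w for score, w in practices) / total_weight

# Hypothetical domain: three practices with evidence-count weights.
clinical_imaging = [
    (0.55, 120),  # DR screening: leading edge, heavily evidenced
    (0.30, 40),   # AI pathology: closer to bleeding edge
    (0.25, 15),   # retinal cardiovascular screening: nascent
]
print(round(domain_maturity(clinical_imaging), 3))
```

Under this sketch, a heavily evidenced leading-edge practice pulls the domain dot toward "established" even when newer practices in the same domain sit near the bleeding edge.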

DOMAIN
BLEEDING EDGE → ESTABLISHED

Clinical imaging — specialist screening & diagnosis

LEADING EDGE

TRAJECTORY

Stalled

AI that analyses medical images across clinical specialties including pathology, dermatology, ophthalmology, cardiology, and dental imaging for detection, screening, and diagnostic support. Includes FDA-cleared retinal screening and AI-assisted pathology quantification; distinct from radiology, which uses different imaging modalities and clinical workflows.

OVERVIEW

AI-driven screening across clinical imaging specialties — ophthalmology, dermatology, pathology, cardiology, and dental imaging — has cleared the technical and regulatory bars for real-world use but remains confined to a vanguard of forward-leaning health systems. Diabetic retinopathy screening leads the field: multiple FDA-cleared autonomous systems routinely exceed 90% sensitivity, and national programmes in Norway and the UK have begun deploying them at population scale. The technology works. What stalls broader adoption is human, not algorithmic: only a fraction of eligible patients receive AI-based screening, clinician trust lags far behind clinician awareness, and workflow integration challenges persist even where EHR connectivity exists. This gap between proven capability and actual penetration defines a leading-edge practice whose constraint has shifted from "can we build it" to "will institutions and clinicians use it." Distinct from radiology AI in both imaging modalities and clinical workflow, specialist clinical imaging AI sits at the sharpest edge of that adoption tension.

CURRENT LANDSCAPE

Three platforms dominate the diabetic retinopathy screening segment, which remains the most mature subspecialty. EyeArt screens across 32 countries, holds EU MDR certification for three diseases, and was selected by the UK National Screening Committee as the only system ready for live NHS deployment; Norway's South-Eastern Regional Health Authority is using it to push population coverage from 55% toward 95%. AEYE-DS, the first FDA-cleared fully autonomous portable system, now integrates with Epic across 3,600-plus US hospitals and delivers results in under a minute. IDx-DR continues to accumulate validation in new geographies, with a German real-world study of 875 patients confirming 94.4% sensitivity for severe disease.

These deployments are real, but they remain the exception. A 2024 JAMA Ophthalmology study found that only 2.2% of imaged diabetic patients in the US received AI-based screening. The bottleneck is not performance — algorithms consistently score in the mid-to-upper 90s on sensitivity — but institutional and human resistance. Recent Q1 2026 case studies document the divergence: Cary Medical Management's deployment of Optomed Aurora AEYE across eight North Carolina clinics achieved a 15-20% improvement in HEDIS quality metrics and the highest Medicare Shared Savings performance in the state, while Cleveland Clinic's multi-clinic implementation delivers results in 30 seconds with 85-95% screening rates and no dilation. Yet institutional adoption remains glacial. A survey of 156 ophthalmologists found just 7.5% trusting AI for diagnostics despite broad awareness, and a Johns Hopkins patient study found that while 92% were satisfied with AI screening, 83% still wanted a physician in the loop. Clinician demand for continuous human oversight reflects both safety concerns and resistance to autonomous decision-making: a 2026 multinational survey found 63.74% of healthcare professionals insisting on human-in-the-loop architectures. Workflow integration compounds the problem: an analyst survey of 150 healthcare organisations rated it 9-10 out of 10 in criticality, yet nearly half remained stuck in limited deployment. Reimbursement friction, EHR incompatibility (60% of US primary care EHRs remain incompatible with third-party AI tools), algorithmic bias across demographics, and workforce skill gaps (cited as the top barrier by 41.23% of institutions) round out a set of obstacles that are systemic rather than technical.
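Because this entry quotes sensitivity and specificity figures throughout, a quick refresher may help. The counts below are hypothetical (the actual trial contingency tables are not reproduced on this page); the formulas are standard.

```python
# Standard screening metrics, shown with made-up counts for illustration.

def sensitivity(tp, fn):
    """Share of diseased patients the screen correctly flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Share of healthy patients the screen correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical cohort: 1,000 screened, 100 with referable disease.
tp, fn = 94, 6      # diseased patients: caught vs missed
tn, fp = 810, 90    # healthy patients: cleared vs falsely referred
print(f"sensitivity {sensitivity(tp, fn):.1%}, specificity {specificity(tn, fp):.1%}")
```

The trade-off matters for reading the numbers above: a screen tuned for very high sensitivity (missing few true cases) typically refers more healthy patients, which is one reason specialist-referral rates in the deployments cited here run in the 25-35% range.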

Market signals from April 2026 point to ecosystem consolidation. PathAI's FDA-cleared AISight Dx digital pathology platform, deployed across MedStar Health's network of 40+ pathologists, marks enterprise-scale adoption beyond ophthalmology; its Predetermined Change Control Plan sets a regulatory precedent for iterative software governance in clinical-grade AI pathology. Aidoc's Foundation platform processes 35,000 scans monthly across 28 European hospitals, signalling operator-level deployment at scale. India's clinician AI adoption surged from 12% to 41% in a single year (2024–2025), outpacing the US (36%) and UK (34%), though real-world diagnostic accuracy remains variable (60–80% sensitivity) in low-resource settings. A peer-reviewed systematic review (JMIR AI, April 2026) of 20 point-of-care imaging AI studies documents a median sensitivity of 93.6% with task-shifting in 65% of studies, yet identifies critical gaps in explainability evaluation and patient outcome measurement. Market projections estimate the global AI pathology sector reaching $633.69M by 2031 (28.16% CAGR from 2026), with software and decision-support services driving growth and hospitals commanding a 46% revenue share — a signal of sustained enterprise investment. The practice boundary is also expanding beyond ophthalmology: FDA Breakthrough designation for CLAiR enables cardiovascular risk screening via retinal vascular imaging in routine eye clinics, suggesting specialist imaging AI is transitioning from isolated tools toward integrated multi-disease screening platforms. The adoption mechanics, however, remain unchanged, constrained by the same barriers (workflow integration, clinician oversight demand, workforce capability gaps) that limit ophthalmology penetration.

TIER HISTORY

Research: Jan 2016 → Jan 2016
Bleeding Edge: Jan 2016 → Jan 2019
Leading Edge: Jan 2019 → present

EVIDENCE (120)

— Physician-authored critical assessment documenting FDA-cleared AI performance matching subspecialist-level accuracy in diagnostic imaging (DR, pulmonary nodules, ICH, breast cancer screening) while questioning sustainability of human-in-the-loop model as capability advances.

— Industry analysis documenting India clinician AI adoption surge from 12% to 41% in one year (2024–2025), real-world imaging accuracy (CT brain hemorrhage 87%), and critical adoption risks including deskilling and documented failure cases.

— Peer-reviewed systematic review of 20 studies (~78,000 patients) on AI-assisted clinical decision support in point-of-care specialist imaging; median sensitivity 93.6%, task-shifting in 65% of studies, identifies critical evidence gaps in explainability and patient outcome measurement.

— Practitioner analysis documenting large-scale clinical imaging AI deployments in 2026, including PathAI-Labcorp U.S.-wide rollout with specific workflow metrics and Aidoc European deployment scale (35,000 scans/month across 28 hospitals).

— Analyst market report with regulatory milestones documenting maturation of clinical-grade AI pathology platforms; PathAI's AISight Dx FDA clearance with Predetermined Change Control Plan sets precedent for iterative software governance in regulated practice.

— Critical analysis of FDA AI medical device clearances: 75% of 2025 clearances are imaging devices; 96.4% bypass prospective clinical trials via 510(k) pathway; documents validation gaps and demographic bias risks, revealing regulatory approval does not ensure clinical evidence or equity.

— UltraSight AI-guided echocardiography achieves >95% diagnostic accuracy enabling non-sonographers to acquire clinical-quality ultrasound; Mayo Clinic validation across multiple patient populations demonstrates expansion of cardiac ultrasound diagnosis beyond specialist sonographers.

— Peer-reviewed systematic review synthesizing cardiology imaging AI (echocardiography, CT, CMR, nuclear) shows high diagnostic accuracy but highlights persistent barriers: large dataset requirements, limited transparency, data governance gaps, and need for rigorous prospective validation.

HISTORY

  • 2016: Deep learning drives accuracy improvements in diabetic retinopathy screening (96.8% sensitivity achieved). Independent NHS validation confirms real-world performance on 20,258 patients. EyeArt 2.0 launches commercially in Europe with 91% sensitivity. Clinical implementation studies reveal workflow integration barriers (47% agreement in routine care). Training data quality and annotation consistency emerge as core technical challenges.
  • 2017: Large-scale validation continues—independent study across 20,258 patients confirms EyeArt, Retmarker, and iGradingM all achieve 94.7%–99.6% sensitivity with cost-effectiveness. Real-world deployment expands: Los Angeles County safety-net system operates full-scale teleretinal DR screening, reducing wait times from ≥8 months. Small pilot deployments (Oslo, 64 eyes) show 100% AI-human concordance and cost savings. FDA regulatory pathway for digital pathology devices approved, but guidance on autonomous AI in clinical decision support remains unclear, creating adoption uncertainty.
  • 2018: Regulatory inflection: IDx-DR becomes first FDA-approved autonomous AI system for diabetic retinopathy screening in primary care (April 2018). Clinical validation accelerates with prospective multicenter studies in US (87% sensitivity), Netherlands (91% sensitivity), and China, demonstrating geographic generalization. Competitive landscape solidifies with multiple FDA-cleared platforms (EyeArt, Retmarker, iGradingM, AEYE Health). Critical voices emerge questioning safety rigor and clinical outcome evidence. Reimbursement and workflow integration barriers become adoption blockers despite regulatory approval.
  • 2019: Global scale-up of real-world deployments: EyeArt system deployed across 404 primary care clinics on 101,710 consecutive patient visits demonstrates real-world sensitivity of 91.3% and 98.5% for treatable DR. Lancet Digital Health meta-analysis confirms diagnostic performance of deep learning equivalent to healthcare professionals across medical imaging modalities. Geographic expansion continues—IDx-DR integrated into Vienna General Hospital and MedUni Vienna clinical workflows with near-perfect accuracy. Deployment now spans five continents with 500,000+ patient visits. Critical assessment literature questions readiness of widespread adoption despite technological validation, emphasizing gaps between algorithm accuracy and clinical workflow integration.
  • 2020: Consolidation of clinical deployment at scale. Large-scale validation in English Diabetic Eye Screening Programme (30,000 patients, 120,000 images) confirms EyeArt 95.7% accuracy with 100% sensitivity for severe DR and £10M annual cost-savings potential. EyeArt receives FDA 510(k) clearance with 96% sensitivity for more than mild DR. Clinical adoption remains constrained by reimbursement friction (unclear billing codes in US) and workflow integration challenges despite demonstrated safety and cost-effectiveness. Deployment continues to concentrate in well-resourced health systems and developed markets.
  • 2021: Peer-reviewed validation of EyeArt pivotal trial in JAMA Network Open (942 patients, 96% sensitivity for more-than-mild DR) strengthens evidence base. Medicare begins reimbursing autonomous DR screening in January, yet adoption barriers persist: workflow integration challenges, cost considerations, and limited patient acceptance. Industry analysis identifies four core adoption blockers—ecosystem interoperability gaps, data biases, regulatory uncertainty, and ROI concerns—highlighting that technical capability alone does not drive clinical scaling.
  • 2022-H1: Regulatory consolidation accelerates with UK National Screening Committee declaring EyeArt "only technology ready for live NHS implementation" (June). Real-world deployment expands geographically—portable smartphone-based systems achieve 97.8% sensitivity in Brazil; Google validates ultra-widefield AI with 90.5% sensitivity; Vienna medical center confirms IDx-DR accuracy in routine clinical practice. However, deployment diversity reveals algorithm generalization challenges: Chinese multicenter study documents only 33.9% sensitivity for vision-threatening DR despite 78.97% for referral-level disease. Systematic methodological review exposes research biases (data quality, publication incentives, inadequate clinical assessment) in the medical imaging AI field, reinforcing gap between algorithm validation and real-world implementation maturity.
  • 2022-H2: Peer-reviewed evidence strengthens: EyeArt clinical study shows 96.4% sensitivity vs. 27.7% for ophthalmologists on 521 patients; Chinese AI-based DR grading achieves 96.5% accuracy and improves junior resident training. Deployment extends into underserved markets—EyeArt now screening remote Ontario Indigenous communities (2,700 patients, autonomous results in <30 seconds). Critical assessment literature surfaces persistent adoption barriers: implementation reviews identify data governance, algorithm robustness, ethics, and regulatory clarity as blockers despite technical maturity; field-wide research biases and inadequate clinical outcome assessment inflate confidence in algorithm performance. Algorithm generalization remains the core fault line: systems validated on curated datasets show materially lower performance on severe pathology and different populations.
  • 2023-H1: Regulatory expansion consolidates: EyeArt achieves EU MDR Class IIb certification for diabetic retinopathy, age-related macular degeneration, and glaucoma (first system with three-disease scope); EyeArt 2.2.0 receives FDA clearance for multi-camera support (Canon + Topcon). Real-world deployment continues: UMass Memorial Health launches 500-patient pilot with AEYE Health's handheld AI camera in primary care; Frontiers study identifies workflow determinants for sustainable IDx-DR adoption (95% volume growth with clinical champions and resource alignment). Clinical validation pipelines mature: AEYE-DS enters pivotal trials; FDA publishes regulatory frameworks for medical imaging AI. Field consolidation accelerates around mature platforms (EyeArt, IDx-DR) with incremental regulatory expansion. Implementation research confirms regulatory approval no longer bottlenecks adoption—workflow integration, clinical champions, and algorithmic robustness remain primary constraints.
  • 2023-H2: Commercial partnerships expand ecosystem: IRIS platform (600+ clinics and labs) integrates AEYE Health's autonomous DR detection system. Research and critical assessment literature quantifies persistent adoption barriers: December study across five UC health systems identifies systemic obstacles to DR screening program implementation; concurrent literature documents slow adoption despite algorithm effectiveness. Regulatory framework guidance from FDA continues to mature. Consolidation accelerates around mature platforms (EyeArt, IDx-DR, AEYE-DS) with emphasis on clinical partnership and workflow integration rather than further algorithm improvements. Adoption bottleneck remains institutional: implementation barriers, resource allocation, and clinical champion engagement dominate over technical validation.
  • 2024-Q1: Real-world deployments expand: Nebraska Medicine begins testing EyeArt in two primary care clinics with 28% referral rate; Tarzana Treatment Centers reports similar adoption with ~25% detection on 700 annual exams. Digital Diagnostics' platform reaches ~600 sites nationwide. Implementation research broadens geographic scope: studies address DR screening effectiveness in high-resource settings (Japan) and low-resource contexts (sub-Saharan Africa). AEYE-DS enters formal clinical validation (NCT06241664, 500 participants). Reimbursement remains a constraint: CMS rate of $45.36 per autonomous screening does not offset equipment and integration costs. Adoption continues steady growth through clinical partnerships rather than explosive market expansion; barriers (EHR integration, staff training, algorithm generalization) persist despite proven technology maturity.
  • 2024-Q2: AEYE-DS achieves FDA clearance (April) as first fully autonomous portable AI for DR screening, expanding ecosystem diversity. Clinical deployments consolidate at named health systems: Barnstable Brown Diabetes Center (UK HealthCare, Kentucky) reports 22,000 annual screenings; cost-analysis in Oslo (Norway) validates 100% sensitivity in minority women with $143/patient savings. Critical research surfaces adoption barriers: MIT study (June) identifies bias mechanisms in medical imaging AI models across demographic groups; peer-reviewed analyses highlight clinician ambivalence and clinical translation gaps as primary inhibitors despite regulatory approval. No AI medical tool yet incorporated into clinical guidelines as established practice norm. Adoption remains constrained by systemic barriers (workflow integration, cost justification, algorithmic fairness) rather than algorithm performance alone.
  • 2024-Q3: Clinical deployments continue geographic expansion with fresh real-world outcome evidence. Johns Hopkins Medicine implementation research demonstrates improved adherence to annual testing with autonomous AI systems. UPMC's decade-long telemedicine program study (21,960 exams) documents sustainable deployment model with 31.5% specialist referral rates. Mary Lanning Healthcare reports 39% rise in screening adherence and 300,000+ cumulative patients screened. Internationally, EyRIS secures national government contract (Brunei, September 2024) for rollout across 40,000 diabetic citizens, signaling public health system adoption at scale. Scoping reviews synthesize persistent barriers: governance gaps, trust mechanisms, and clinical translation gaps continue to limit scaled adoption despite proven efficacy. Research confirms DR screening AI remains mature on algorithm performance but faces unresolved systemic barriers (workflow integration, regulatory clarity, equity concerns) constraining rapid health system scaling.
  • 2024-Q4: Regulatory milestone achieved: AEYE Health's AEYE-DS becomes first fully autonomous portable AI system cleared by FDA (November 2024) with 92–93% sensitivity and single-image success rates >99%, expanding access beyond fixed-camera settings. Real-world deployments consolidate: Eyenuk's EyeArt integrated into Henrietta Johnson Medical Center (Delaware FQHC, October) with 26% positive DR detection; ecosystem now spans 32 countries with annual screening volumes exceeding 500,000 in rural India alone. However, critical adoption research (November, JAMA Ophthalmology) reveals fundamental gap: only 2.2% of imaged diabetic patients received AI-based screening despite FDA approvals, indicating nascent real-world penetration. Market analysis highlights structural barriers: VC funding for medical imaging AI collapsed (from $1.1B peak in 2021 to $207.5M in Q1–Q3 2024); 60% of US primary care EHRs remain incompatible with third-party AI tools. Despite technological maturity and regulatory approval, systemic adoption blockers (reimbursement, EHR integration, clinician adoption) persist at end of 2024.
  • 2025-Q1: Real-world deployments continue: Johns Hopkins Medicine case study demonstrates improved access and equity through autonomous AI systems in both pediatric and adult populations. Multi-system quality improvement program deploys 198 AI cameras across 5 health systems, screening 20,000+ patients with diabetic retinopathy detection in 3,450+ cases. New portable technologies advance: AI Optics receives FDA 510(k) clearance for non-dilated handheld Sentinel Camera, enabling point-of-care screening beyond fixed-office settings. Critical research emphasizes systemic barriers: comprehensive review of bias in medical imaging AI identifies fundamental challenges to equitable deployment across demographics; Digital Pathology Association highlights risks of over-reliance on AI and need for human oversight in specialist diagnosis (pathology, dermatology). Survey of US pathologists documents widespread barriers to digital pathology adoption, mirroring ophthalmology challenges. Evidence accumulates that technical validation and regulatory clearance remain insufficient drivers of scaled clinical implementation; equity concerns, bias mitigation, and workflow integration persist as primary adoption constraints.
  • 2025-Q2: Geographic expansion continues: Eyenuk's EyeArt deployed at Diabetes Center Mergentheim in Germany, establishing first dedicated diabetic clinic in Germany using autonomous AI screening. Commercial ecosystem consolidates: BeamMed announces partnership to promote AEYE-DS through subscription model, targeting broader primary care penetration. UK National Screening Committee publishes evidence review for implementing machine learning autograders in diabetic eye screening programs, highlighting adoption considerations for national health systems. Ecosystem now spans 32 countries with multi-system deployments exceeding 20,000+ annual screenings in US health systems and 500,000+ in rural India. However, systemic barriers (EHR integration, bias mitigation, national policy adoption) continue to constrain rapid scaling despite technical maturity and regulatory approval.
  • 2025-Q3: National health system adoption reaches watershed moment: South-Eastern Norway Regional Health Authority (3.1M population) deploys EyeArt for autonomous DR screening with target to increase coverage from 55% to 95%. Italy completes first national prevention campaign, screening 2,200 patients across 30 centers with 214 new referable DR diagnoses. Real-world implementation in India documents variable performance (60–80% sensitivity), revealing algorithm generalization challenges in low-resource settings. FDA data (September 2025) surfaces quality assurance concerns: only 2.4% of 1,016 authorized AI medical devices had RCT support, 24.1% had no clinical studies, 4.8% recalled within 1.2 years. Ecosystem maturity marked by proven deployment but persistent systemic barriers: algorithm fairness, evidence quality, and health system integration remain core constraints to rapid global scaling.
  • 2025-Q4: Peer-reviewed research confirms algorithm superiority (EyeArt 96.4% sensitivity vs. 27.7% for ophthalmologists) while cross-national ophthalmologist survey reveals adoption gap (only 7.2% regular use despite 69.5% perceiving potential). Independent UK validation (1,257 NDESP patients) shows 92–100% EyeArt sensitivities with 50–67% workload reduction potential. IRIS partnership integrates AEYE-DS across 600+ primary care clinics, expanding ecosystem access. End-user research surfaces fundamental adoption blockers: demand for robust evidence of effectiveness and maintained human oversight indicate algorithm maturity alone insufficient for scaled clinical implementation. Systemic barriers (EHR incompatibility, bias mitigation, fair pricing) persist despite regulatory clearances and demonstrated technical performance.
  • 2026-Jan: AEYE-DS receives FDA 510(k) clearance (January 2); UK National Screening Committee selects EyeArt as only AI ready for NHS live implementation (January 25). Meta-analysis confirms EyeArt diagnostic accuracy across 17 studies (AUC 0.932). University of Utah Health deploys deepeye TPS for AMD treatment planning in Europe; US trial discussions ongoing. Epic EMR integration expands AEYE-DS deployment across US health system. Critical legal analysis surfaces regulatory compliance barriers (Anti-Kickback, False Claims Act risks) alongside technical maturity.
  • 2026-Feb: Real-world validation confirms deployment maturity but reveals adoption divergence. IDx-DR shows 94.4% sensitivity in German real-world cohort (875 patients). AEYE-DS Epic integration reaches 3,600+ US hospitals enabling sub-one-minute autonomous screening. Patient satisfaction high (92% at Johns Hopkins) but 83% prefer physician oversight. Clinician trust remains low—Bulgarian survey of 156 ophthalmologists shows only 7.5% trust AI for diagnosis despite awareness. Analyst research confirms workflow integration critical but nearly half of organizations stuck in limited deployment despite algorithm maturity.
  • 2026-Q1: Specialist imaging AI demonstrates continued real-world deployment with mixed outcomes. Primary care network (Cary Medical Management, North Carolina) deployed Optomed Aurora AEYE across 8 clinics showing dramatic clinical impact—one in three patients scanned revealed retinal changes requiring specialist referral; HEDIS quality metrics improved 15-20% and achieved highest Medicare Shared Savings performance in state through early detection and workflow integration without physician confidence-building requirements. Cleveland Clinic deployed AI-powered nonmydriatic fundus cameras across multiple clinic types (eye institute, primary care, endocrinology), delivering 30-second results with 85-95% screening rates without dilation and immediate EMR integration. Multi-pathology deployment evidence emerges: UPRETINA system validated across 1,652 eyes in teleophthalmology workflow with DR 86.8%/95.6%, AMD 94.9%/94.3%, glaucoma 82.7%/92.4% sensitivity/specificity, and Erasmus Hospital endocrinology deployment achieved 100% sensitivity on vision-threatening DR across diverse demographic groups. AEYE-DS Epic integration expanded to dozens of hospitals nationwide with 1-minute autonomous screening workflow and full CPT 92229 reimbursement support. Evidence on adoption barriers deepens: groundy.com analysis documents performance-deployment paradox (LumineticsCore 95% sensitivity yet only ~10% US hospitals have clinical AI adoption); multinational survey (2026) finds 63.74% of healthcare professionals demand human-in-the-loop oversight; India-focused primary research shows 23.39% institutional adoption rate despite 60% awareness, with 41.23% citing workforce skill gaps as top barrier. 
    Practice scope expansion signals: FDA Breakthrough designation for CLAiR enables cardiovascular risk screening via retinal imaging (91.1% sensitivity/86.2% specificity in 874-person prospective cohort), demonstrating specialist imaging AI moving beyond ophthalmology into systemic disease detection.
  • 2026-Apr: Deployment evidence deepened across primary care and specialty settings: Cary Medical Management's 8-clinic North Carolina deployment and Cleveland Clinic's multi-site implementation each confirmed 85-95% screening rates and immediate EMR integration without requiring physician confidence-building. Erasmus Hospital's endocrinology deployment achieved 100% sensitivity on vision-threatening DR across diverse demographics. The adoption-performance gap remained stark: a survey of 342 healthcare professionals found 63.74% demanding human-in-the-loop oversight and 41.23% citing workforce skill gaps as the top barrier, while only 23.39% of institutions had adopted diagnostic AI despite 60% awareness. CLAiR's ACC 2026 presentation confirmed cardiovascular risk screening via retinal imaging (91.1% sensitivity, 86.2% specificity in 874-person prospective cohort), reinforcing the trend toward multi-disease specialist screening platforms.
  • 2026-May: Enterprise-scale pathology deployment advanced with PathAI's FDA-cleared AISight Dx platform rolling out across MedStar Health's 40+ pathologist network, and Aidoc processing 35,000 scans monthly across 28 European hospitals — signalling operator-level adoption beyond ophthalmology. A peer-reviewed systematic review of 20 point-of-care imaging studies (~78,000 patients) documented median sensitivity of 93.6% with task-shifting in 65% of studies, while identifying critical gaps in explainability and patient outcome measurement. Physician commentary raised sustainability concerns about the human-in-the-loop model as AI reaches subspecialist-level accuracy across DR, pulmonary nodules, and breast cancer screening — while India's clinician adoption surge (12% to 41% in one year) outpaced Western markets despite variable real-world accuracy (60–80% sensitivity in low-resource settings).