The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organizational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement across one or two domains — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that sees and interprets visual and spatial information for inspection, monitoring, and analysis. Heavily clustered at the leading edge: object detection, OCR, and quality inspection have proven deployments, but most organizations lack the labeled data or edge infrastructure for production scale. Only one practice reaches good-practice. Most trajectories are stalled, waiting on hardware costs and data pipeline maturity to unlock broader adoption.
The headline: AI that sees works in real production settings now — but only a few organizations have figured out how to plug it into their existing workflows. The bottleneck is integration, not the technology.
Computer vision — AI that interprets images and video — works in production across hospitals, retail venues, security, and environmental monitoring. But adoption is narrow. The leaders are pulling real value out of it: a national US lab partnership has rolled out AI pathology, drone services are running 400,000 weekly trips, and a wildfire-detection network now covers 150+ government agencies in Australia. Most organizations haven't started. If you're not deploying computer vision in even one workflow, you're behind. If you are, the thing slowing you down is almost certainly making it fit your existing systems — not the AI itself.
AI for reading medical scans is no longer experimental. Enough clinical evidence has accumulated that this can be treated as a real production capability. A 68-patient prospective study at Ohio State showed improved tumor removal rates, and Microsoft's tool sped up radiotherapy planning 13-fold at NHS sites. The open-source toolchain is now mature enough to deploy. Health systems should be running pilots, not waiting for more evidence.
A consumer phone now matches a professional surveying rig. A field study confirmed the latest iPhone Pro paired with a precision GPS attachment hits 2-9cm accuracy — equal to a professional total station, at a thirtieth of the labor. If you pay for surveying work — construction, utilities, real estate, infrastructure — your cost floor just dropped substantially. Worth a conversation with your services vendor at next renewal.
Cashier-free checkout keeps spreading — but not where you'd expect. New deployments at Inter Miami, the Kansas City Royals, and Melbourne Cricket Ground; one casino store outsold four staffed concession stands combined. The market for this technology is sports, hospitality, and convenience — not the supermarket. If you're in those venue businesses, the ROI case is now mature enough to evaluate seriously.
Facial recognition rules are pulling apart, not converging. Detroit cut its searches by 91% after wrongful-arrest lawsuits. Germany increased its use 159%. Virginia passed a law requiring vendors to hit a 98% accuracy bar, effective July 2026. If you operate across jurisdictions, your compliance work just got harder this cycle — audit your vendors against the toughest rule that applies to you.
Wildfire detection has scaled from research to operations. A US network grew from zero to 51 stations across Arizona in under two years, with 88 projected by year-end. Australia's network spotted 1,132 unplanned fires last summer with five-minute response times. This is one of the few areas where the technology is genuinely advancing in the real world. Insurers, utilities, and land managers should be at the table.
Virginia's facial recognition law (effective July 2026) sets a 98% accuracy threshold from a federal testing lab and limits what police can use it for. Similar bills are moving in other states. If you use facial recognition for access control or identification, audit your vendor's accuracy claims against the federal benchmarks now — before enforcement starts and the answer becomes a legal problem.
AI struggles to read diagrams and complex documents. The most capable current image-and-text AIs — frontier models — score 40-54% on understanding the relationships in a diagram. That's barely better than guessing. If you're planning AI document workflows, scope them for text extraction (which works) and don't promise diagram comprehension (which doesn't). Vendors who claim otherwise are overselling.
Deepfake attacks on facial access systems are happening now. A high-profile incident at engineering firm Arup demonstrated real-world exploitation. A major analyst firm has predicted 30% of enterprises will abandon facial biometrics by 2026. If you rely on face recognition for physical access, evaluate "liveness detection" tools and backup authentication factors before an incident forces the decision in a hurry.
The technology works; the workflow integration doesn't. Fewer than 2% of US radiology practices use AI tools, even though the FDA has cleared 873 such tools. Buyers consistently rate fitting AI into existing imaging systems as a top concern. The blocker is plumbing, not algorithms — most deployments stall or fail trying to make the AI fit how clinicians and operators actually work.
The patchwork of regulations is permanent, not temporary. The UK is expanding facial recognition while Detroit pulls back. The EU, Virginia, and China are each writing different rules. There's no convergence coming, and operators in multiple geographies need to design for divergence from the start.
The benchmarks lie. Video analytics that look strong in tests often deliver below 10% real-world precision on the events you actually care about. Vision-language AIs make things up — hallucinate — 60-100% of the time when shown no image. If you're choosing tools based on published numbers, you are systematically overestimating what you'll get.
Go deeper: the full Computer Vision & Sensing briefing — the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.