Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past fortnight's movement in one or two domains — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE ↔ ESTABLISHED

Personal Effectiveness

AI for individual productivity, communication, organisation, and self-directed learning. The most polarised domain: writing assistance and meeting summarisation are good practice, but nearly half the practices are bleeding-edge — personal AI agents, life planning, and autonomous scheduling lack reliable implementations. Most trajectories are stalled, reflecting a gap between consumer hype and sustained daily utility.

14 practices: 4 good practice, 4 leading edge, 6 bleeding edge
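A dot's position can be read as a weighted average of its practices' maturity tiers. The sketch below is illustrative only — the tier scores and equal per-practice weighting are assumptions, not The State of Play's published methodology:

```python
# Hypothetical tier scores: 0 = bleeding edge (left of axis), 1 = established.
# These values and the equal weighting per practice are assumptions.
TIER_SCORE = {
    "good practice": 1.0,
    "leading edge": 0.5,
    "bleeding edge": 0.0,
}

def domain_maturity(counts: dict[str, int]) -> float:
    """Weighted average of tier scores across a domain's practices."""
    total = sum(counts.values())
    return sum(TIER_SCORE[tier] * n for tier, n in counts.items()) / total

# Personal Effectiveness: 4 good practice, 4 leading edge, 6 bleeding edge
score = domain_maturity({"good practice": 4, "leading edge": 4, "bleeding edge": 6})
print(round(score, 2))  # 0.43 — left of centre, matching the "most polarised" read
```

On this toy scale the domain lands below the midpoint, consistent with nearly half its practices being bleeding edge.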

Personal Effectiveness — Biweekly Brief

The headline: AI productivity tools are everywhere your staff already work, but the gains aren't showing up in the numbers. Access is no longer the problem; activation, quality, and governance are.

The Picture

AI productivity features are now baked into the platforms your workforce already uses — Gmail, Outlook, Microsoft 365, Google Workspace — and more than 60% of employees have access to a sanctioned AI tool. The "should we adopt" question is settled. The harder question is whether any of it is paying off: most CEOs report no meaningful return, only one in seven executives sees consistent net productivity gains, and big internal AI rollouts typically stall at around 20% of seats actively used. Translation is the one clear win, with named enterprise deployments processing tens of thousands of documents in production. Most other use cases work for individual power users but break down when scaled across a department.

This Fortnight

  • Microsoft put its free Copilot AI assistant behind a paywall. Only 3% of 450M Microsoft 365 users had upgraded voluntarily, so Microsoft moved the in-Office AI features to a paid $30-per-user-per-month tier in mid-April. This affects spreadsheet automation, slide generation, and writing assistance across the whole suite. If you're a budget holder, this is the moment to revisit per-seat economics before the next renewal.

  • A flagship AI tutoring product was called "a non-event" by its own founder. Despite reaching 700,000+ students across hundreds of school districts, only 15% of users engage with it, and named teachers at lighthouse schools have stopped using it. A controlled trial of more than 1,200 students found AI assistance actively impaired performance once the AI was taken away. If you're investing in AI for learning or upskilling, scrutinise engagement, not headcount with access.

  • The hallucination problem got worse, not better. A "hallucination" is when an AI tool confidently makes things up. Independent testing this fortnight found error rates of 17–44% in AI presentation tools, and one frontier reasoning model — the most capable current AI — invented facts on 80% of general-knowledge questions. For anything touching compliance, finance, or a customer, human-in-the-loop review (a person checking each output before it ships) remains non-negotiable.

  • "AI brain fry" is now a measured phenomenon. A joint BCG/Harvard study found 14% of employees report cognitive overload from AI tools, with 39% more errors when juggling four or more of them. Separate behavioural data from 163,000 employees shows AI adoption correlating with 9% less focus time per day. More AI tools on the desktop does not mean more productive employees; consolidate before you add.

  • The domain barely moved this fortnight. No major capability or adoption shifts across the workflows we track — and that's the story. The technology has stopped being the bottleneck; what's stuck is organizational. If you're waiting for a better tool to unblock your rollout, you're solving the wrong problem.

Coming Up

  • Copilot pricing forces a hard look at AI seat economics. Only about a third of employees with paid Copilot access actually use it, versus more than 80% for the consumer version of ChatGPT. With Microsoft now charging for what was free, IT and finance should audit real usage before renewing — paying for unused AI seats will be one of the easiest line items to cut next quarter.

  • EU and US rules are about to bite on everyday AI workflows. California's automated-decision-making rules are already live, and EU enforcement on "high-risk" AI lands in August. Decision-support and email-triage AI tools used in hiring, performance, or customer-facing work may need governance documentation most teams don't have today. Legal and compliance should start mapping which AI-assisted workflows fall under regulatory scope now.

  • Cross-app AI assistants are the next vendor battleground. Google shipped a Workspace assistant in late April that stitches Sheets, Gmail, Calendar, and Chat together; Microsoft is doing the same in Office. Early data shows real capability gains — but nearly a third of IT leaders report data-exposure incidents tied to these assistants. Make sure your data-governance rules cover what an AI agent (software that acts on its own without being prompted) is allowed to read and move across apps.

What's Hard About This

  • Access has outrun activation. More than 60% of workers have a sanctioned AI tool; year-on-year usage rates are flat. The bottleneck is governance, manager modelling, and workflow redesign — not software availability. Buying more licences won't move the number.

  • Speed gains can hide quality losses. AI writing tools have been shown to reduce coherence by 70% even as users report being equally satisfied with the output. AI email assistants increase volume by 38%, with most of that volume adding no information. The productivity gains your teams self-report may not match what's actually landing in customers' inboxes.

  • What IT deploys and what people use are diverging. Enterprise AI vendors are consolidating and raising prices, yet employees keep reaching for the consumer chatbot they use at home. The gap between sanctioned and shadow AI is widening, and that's where the data-leak and compliance risk lives.


Go deeper: the full Personal Effectiveness briefing — the longer analytical write-up, plus every practice we track in this domain with its maturity rating, the tools to consider, and the evidence behind our assessment.