The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that organises personal files, notes, and information and enables semantic retrieval across personal knowledge stores. Includes automated tagging and cross-note linking; distinct from enterprise search, which operates across organisational rather than personal knowledge.
AI-enhanced personal knowledge management has reached practitioner maturity, with capable tooling, expanding AI-native patterns, and validated market growth, yet it remains confined to individual power users with weak organizational spillover. Obsidian, Logseq, and Mem now ship semantic search, automated tagging, and conversational retrieval as table stakes. Market validation is strong: the AI personal knowledge base segment reached $1.65 billion in 2025 and is projected to reach $7.6 billion by 2026 (30.3% CAGR) and $18.4 billion by 2034. Obsidian reached 1.5 million monthly active users and removed commercial licensing barriers in April 2026. Mem achieved SOC 2 Type II, ISO 27001, and HIPAA compliance, a posture usually reserved for enterprise vendors. Bleeding-edge deployments show sophisticated AI integration: Claude Code plugins automate wiki compilation (reducing token usage 20–40x), durable agent patterns maintain vaults of 700+ notes with persistent operational rules, and architectural innovations such as strict layer separation prevent recursive summary degradation.

What sustains the bleeding-edge classification, however, is the gap between individual productivity gains and organizational adoption barriers. Critical constraints prevent team-scale deployment: a local-first philosophy that places the backup burden entirely on users (permanent data loss has been documented after auto-updates), reliability gaps (sync crashes, a 25% mobile login failure rate, the complete absence of a mobile app), platform incompleteness (no real-time collaboration), and architectural limits (performance degradation beyond 1,000 pages, semantic search precision dropping 87% beyond 50,000 documents). Vendors continue shipping, yet team-scale adoption remains blocked by ecosystem fragility and organizational readiness, not by AI capability maturity.
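The strict layer separation pattern mentioned above can be enforced mechanically. A minimal sketch, assuming a vault split into hypothetical raw/, wiki/, and operations/ directories (the layer names and paths are illustrative, not taken from any specific practitioner setup):

```python
# Hypothetical sketch of strict layer separation: a vault is split into
# raw/ (captures), wiki/ (summaries), and operations/ (agent rules), and
# wiki summaries may cite only raw-layer notes. Because a summary can
# never be built from another summary, recursive summary degradation is
# ruled out by construction.
VALID_LAYERS = {"raw", "wiki", "operations"}

def layer_of(note_path: str) -> str:
    """Return the layer a vault-relative path belongs to."""
    top = note_path.split("/", 1)[0]
    if top not in VALID_LAYERS:
        raise ValueError(f"unknown layer: {top!r}")
    return top

def summary_sources_ok(summary_path: str, source_paths: list[str]) -> bool:
    """Check that a wiki summary draws only on raw-layer notes."""
    if layer_of(summary_path) != "wiki":
        raise ValueError("summaries must live under wiki/")
    return all(layer_of(p) == "raw" for p in source_paths)

print(summary_sources_ok("wiki/ai-governance.md", ["raw/2026-04-01.md"]))  # True
print(summary_sources_ok("wiki/overview.md", ["wiki/ai-governance.md"]))   # False
```

The second call fails the check because its source is itself a summary; an agent gated on this invariant would be forced back to the raw layer before regenerating wiki content.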
Obsidian leads with 1.5 million monthly active users (April 2026, +22% YoY) and removed commercial licensing requirements on April 9, 2026, enabling free business-scale deployment. The 18-person bootstrapped team ships actively: 2,700+ community plugins and 858,733 downloads of the Smart Connections AI plugin. Smart Connections has evolved from a single plugin into an official ecosystem: the Smart Connections Suite (April 2026) includes Chat, Graph, Context, and local-first operations, repositioning semantic knowledge discovery from optional add-on to expected feature set. Smart Connections Pro ($30/month) targets power users with 1,000+ notes, offering local performance indexing, agentic chat actions, and PDF/image context packs, signaling market maturity and freemium monetization.

Logseq occupies a complementary position: its database rewrite delivers sub-second load times for 20,000-page graphs, and Thoughtworks included it on the Technology Radar for team knowledge base use (March 2026). Critical adoption barriers persist, however: heavy users report the complete absence of mobile app support despite full desktop maturity, and multiple users report sync failures, crashes on login, and data loss incidents severe enough to cause product abandonment. Mem released a complete platform rebuild (March 2026), repositioning itself as an "AI Thought Partner" with voice capture, agentic chat, and offline-first operation; it achieved enterprise-grade compliance (SOC 2 Type II, ISO 27001, ISO 42001, GDPR, PCI-DSS, HIPAA) in April 2026.

Practitioners are deploying sophisticated architectures: Obsidian plus Claude Code for RAG-augmented wiki management (documented at 100+ article scale with 20–40x token reduction); 3,400-file production vaults integrated with Claude Code for writing assistance and competitive intelligence; and custom slash commands that read Obsidian markdown relationships via the CLI for pattern detection and task automation.
RAG deployments are hitting the scaling limits documented in 2025: simple vector RAG fails at semantic reasoning, so practitioners are building hybrid retrieval with knowledge graphs, entity extraction, and reranking, moving beyond vector search alone. SME teams have adopted Obsidian for internal documentation, reporting benefits (bidirectional linking, discovery) alongside adoption barriers (collaboration gaps, learning curves). Large-scale user sentiment data (19,000+ reviews) shows a 4.2-star rating, with praise for customization offset by mobile degradation and sync issues.
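The hybrid-retrieval idea, combining a lexical ranking with a vector ranking and fusing the two, can be sketched with toy scorers. This is an illustrative sketch only: the corpus, the simplified BM25-style scorer, and the bag-of-words cosine stand in for real embeddings and a real lexical index; the fusion step (Reciprocal Rank Fusion) is the standard technique:

```python
# Hybrid retrieval sketch: rank documents lexically (simplified BM25-style
# term scoring) and by vector similarity (bag-of-words cosine as a stand-in
# for embeddings), then merge with Reciprocal Rank Fusion (RRF).
import math
from collections import Counter

DOCS = {  # toy corpus for illustration
    "a": "obsidian plugin for semantic search across notes",
    "b": "logseq database rewrite improves graph load times",
    "c": "semantic search precision degrades at large scale",
}

def tokens(text):
    return text.lower().split()

def cosine(q, d):
    """Bag-of-words cosine similarity between token lists."""
    qc, dc = Counter(q), Counter(d)
    dot = sum(qc[t] * dc[t] for t in qc)
    norm = math.sqrt(sum(v * v for v in qc.values())) * \
           math.sqrt(sum(v * v for v in dc.values()))
    return dot / norm if norm else 0.0

def bm25_lite(q, d, k1=1.5):
    """Toy BM25-style term-frequency saturation (no IDF, no length norm)."""
    dc = Counter(d)
    return sum((dc[t] * (k1 + 1)) / (dc[t] + k1) for t in set(q) if dc[t])

def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: sum 1/(k + rank) across result lists."""
    scores = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return [doc_id for doc_id, _ in scores.most_common()]

def search(query):
    q = tokens(query)
    by_vec = sorted(DOCS, key=lambda i: cosine(q, tokens(DOCS[i])), reverse=True)
    by_lex = sorted(DOCS, key=lambda i: bm25_lite(q, tokens(DOCS[i])), reverse=True)
    return rrf([by_vec, by_lex])

print(search("semantic search at scale")[0])  # prints: c
```

In production systems the two scorers would be an embedding index and a real BM25 engine, possibly joined by knowledge-graph expansion before the fusion step; RRF is popular precisely because it needs no score calibration between the heterogeneous retrievers.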
The market trajectory validates expansion. The AI personal knowledge base segment reached $1.65 billion in 2025 and is projected to grow to $7.6 billion by 2026 (30.3% CAGR) and $18.4 billion by 2034 (11.6% CAGR). The key growth driver is remote work creating knowledge fragmentation: institutional knowledge previously transferred in person is now siloed in digital workspaces. Practitioners are experimenting with emerging patterns: Obsidian as a plaintext backend for AI assistants (for transparency and privacy), multi-tool workflows (Google NotebookLM + Claude Code + Obsidian), and local-first architectures (Ollama + nomic-embed-text) that preserve data control. Privacy-conscious implementations are documented: 73% of local-first Obsidian plugins tested in March 2025 defaulted to cloud APIs (Smart Connections among them), prompting practitioners to deploy local embeddings with offline operation verification.
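A local-first embedding pipeline of the kind described can be sketched with the standard library alone. The sketch below assumes a default local Ollama install serving nomic-embed-text on port 11434; the similarity helper is generic, and no note content leaves the machine:

```python
# Hypothetical local-first semantic search sketch: embeddings come from a
# locally running Ollama server (default port 11434) serving the
# nomic-embed-text model, so note text is never sent to a cloud API.
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # assumes default install

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Request an embedding vector from the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Usage (requires `ollama pull nomic-embed-text` and a running server):
#   note_vec = embed("meeting notes on vendor lock-in")
#   query_vec = embed("which notes discuss lock-in risk?")
#   print(cosine(note_vec, query_vec))
```

Verifying offline operation, as the practitioners above do, can be as simple as running the pipeline with networking restricted to localhost and confirming no external requests occur.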
Reliability and scale remain critical barriers. Production incidents documented in March–April 2026 include Obsidian rendering regressions (scrolling unusable on documents with embedded content), critical Logseq failures (sync crashes, a 25% mobile login failure rate, and the complete absence of a mobile app despite user reliance), persistent data loss risks, and plugin startup penalties (8.6 seconds on a vault with 3,266 files and 49 plugins). Adoption friction is well documented: steep learning curves for non-technical users, slow mobile performance, a lack of native AI features (most AI requires third-party plugins), and limited real-time collaboration support all prevent team-scale deployment. Semantic search limits are now documented as well: Stanford research confirms retrieval precision drops 87% beyond 50,000 documents due to vector space crowding, affecting RAG-based deployments at scale. Corporate IT security policies continue to block plugin deployment in organizational settings, and vendor lock-in concerns (94% of organizations surveyed express concern, 33% specifically fear lock-in) inhibit broader adoption. These constraints, not AI capability maturity, remain the binding factors preventing team-scale deployment.
— Developer documents permanent data loss after auto-update; critical negative signal: local-first philosophy places backup burden entirely on users with no software guardrails; demonstrates maturity gap in PKM reliability.
— Production-grade bidirectional sync: 10x smaller than Remotely Save, handles 3000+ files with Git-style merge logic and AES-GCM-256 encryption; addresses scaling limits in existing vault sync solutions.
— Practitioner architecture solving recursive summary degradation through strict layer separation (raw/wiki/operations); demonstrates AI-augmented PKM design pattern preventing knowledge integrity loss at scale.
— Named deployment: three domain wikis (AI Governance, Cybersecurity, Cyber Guidepost) with automated ingestion; Claude Skills automate research gathering, with practitioner outcome: 'This is upgrading my PKM.'
— Open-source Claude Code plugin scaffolding LLM knowledge base setup; real deployment (Agentic Engineering Wiki, 51 tips + 9 company profiles + 10 paper summaries) demonstrates AI-augmented PKM at personal scale with compounding outcomes.
— Documents real adoption barriers preventing Logseq team-scale deployment: performance degradation at 1000+ pages, absence of real-time collaboration, weak mobile UX, steep learning curve—maturity constraints on current architecture.
— Practitioner managing 774-note vault with durable agent scaffolding (rules, scripts, skills); demonstrates compound knowledge gain across sessions—agent loads operational memory from .claude/rules/ avoiding session rediscovery.
— 100+ commits (March–April 2026) across sync, CLI, database, UI optimization; demonstrates sustained vendor engineering addressing scalability and reliability in knowledge base management.