Perly Consulting │ Beck Eco

The State of Play

A living index of AI adoption across industries — where established practice meets the bleeding edge
UPDATED DAILY

The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.

The Daily Dispatch

A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.

AI Maturity by Domain

Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail

DOMAIN
BLEEDING EDGE → ESTABLISHED

Personal knowledge management & organisation

BLEEDING EDGE

TRAJECTORY

Stalled

AI that organises personal files, notes, and information and enables semantic retrieval across personal knowledge stores. Includes automated tagging and cross-note linking; distinct from enterprise search which operates across organisational rather than personal knowledge.

OVERVIEW

AI-enhanced personal knowledge management has reached practitioner maturity, with capable tooling, expanding AI-native patterns, and market-growth validation, yet it remains confined to individual power users with weak organizational spillover. Obsidian, Logseq, and Mem ship semantic search, automated tagging, and conversational retrieval as table stakes. Market validation is strong: the AI personal knowledge base segment grew 30.3% year over year to $1.65 billion in 2025 and is projected to reach $7.6 billion by 2026 and $18.4 billion by 2034. Obsidian reached 1.5 million monthly active users and removed commercial licensing barriers in April 2026. Mem achieved SOC 2 Type II, ISO 27001, and HIPAA compliance, a level of maturity usually reserved for enterprise vendors.

Bleeding-edge deployments show sophisticated AI integration: Claude Code plugins automate wiki compilation (reducing token usage 20–40x), durable agent patterns maintain 700+ note vaults with persistent operational rules, and architectural innovations (strict layer separation) prevent recursive summary degradation.

What sustains the bleeding-edge classification, however, is the gap between individual productivity gains and organizational adoption barriers. Critical constraints prevent team-scale deployment: a local-first philosophy that places the backup burden entirely on users (permanent data loss is documented after an auto-update), reliability gaps (sync crashes, a 25% mobile login failure rate, the complete absence of a mobile app), platform incompleteness (no real-time collaboration), and architectural limits (performance degradation at 1,000+ pages; semantic search precision drops 87% at 50,000+ documents). Vendors continue shipping, yet team-scale adoption remains blocked by ecosystem fragility and organizational readiness constraints, not AI capability maturity.

CURRENT LANDSCAPE

Obsidian leads with 1.5 million monthly active users (April 2026, +22% YoY growth) and removed commercial licensing requirements on April 9, 2026, enabling free business-scale deployment. The 18-person bootstrapped team ships actively: 2,700+ community plugins and 858,733 downloads of the Smart Connections AI plugin. Smart Connections has evolved from a single plugin into an official ecosystem: the Smart Connections Suite (April 2026) includes Chat, Graph, Context, and local-first operations, repositioning semantic knowledge discovery from optional add-on to expected feature set. Smart Connections Pro ($30/month) targets 1,000+ note power users with local performance indexing, agentic chat actions, and PDF/image context packs, signaling market maturity and freemium monetization. Logseq occupies a complementary position: its database rewrite delivers sub-second load times for 20,000-page graphs, and Thoughtworks included it on the Technology Radar for team knowledge base use (March 2026). However, critical adoption barriers persist: heavy users report that mobile app support is entirely absent despite full desktop maturity, and multiple users report sync failures, crashes on login, and data loss incidents severe enough to cause product abandonment. Mem released a complete platform rebuild (March 2026), repositioning itself as an "AI Thought Partner" with voice capture, agentic chat, and offline-first operation, and achieved enterprise-grade compliance (SOC 2 Type II, ISO 27001, ISO 42001, GDPR, PCI-DSS, HIPAA) in April 2026. Practitioners are deploying sophisticated architectures: Obsidian + Claude Code for RAG-augmented wiki management (documented at 100+ article scale with a 20–40x token reduction); 3,400-file production vaults integrated with Claude Code for writing assistance and competitive intelligence; and custom slash commands reading Obsidian markdown relationships via CLI for pattern detection and task automation.
RAG deployments now exceed the scaling limits documented in 2025: simple vector RAG fails at semantic reasoning, so practitioners are building hybrid retrieval with knowledge graphs, entity extraction, and reranking rather than relying on vector search alone. SME teams have adopted Obsidian for internal documentation, reporting benefits (bidirectional linking, discovery) alongside adoption barriers (collaboration gaps, learning curves). Large-scale user sentiment data (19,000+ reviews) shows a 4.2-star rating, with praise for customization offset by complaints about mobile performance and sync reliability.
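The hybrid-retrieval pattern described above, a dense ranking fused with a keyword ranking before a final re-sort, can be sketched in a few lines. This is a minimal illustration, not any specific deployment: the bag-of-words "embedding" stands in for a real embedding model, reciprocal rank fusion stands in for a learned reranker, and the note texts are invented.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model (e.g. a locally hosted nomic-embed-text).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query: str, doc: str) -> float:
    # Fraction of query terms appearing verbatim in the document.
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_search(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Fuse the vector and keyword rankings with reciprocal rank fusion."""
    qv = embed(query)
    by_vec = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    by_kw = sorted(docs, key=lambda d: keyword_score(query, d), reverse=True)
    fused: dict[str, float] = {}
    for ranking in (by_vec, by_kw):
        for rank, doc in enumerate(ranking):
            fused[doc] = fused.get(doc, 0.0) + 1.0 / (60 + rank)  # RRF constant 60
    return sorted(docs, key=lambda d: fused[d], reverse=True)[:k]

notes = [
    "obsidian sync crashes on mobile login",
    "semantic search precision drops at large vault sizes",
    "weekly review workflow for project notes",
]
print(hybrid_search("mobile sync crash", notes, k=2))
```

Reciprocal rank fusion is one common way to combine rankings without tuning score scales against each other; the hybrid deployments the text describes typically add knowledge-graph lookups and a cross-encoder reranker on top of this skeleton.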

The market trajectory validates expansion. The AI personal knowledge base segment grew 30.3% year over year to $1.65 billion in 2025 and is projected to reach $7.6 billion by 2026 and $18.4 billion by 2034 (11.6% CAGR thereafter). The key growth driver is remote work, which fragments knowledge: institutional knowledge once transferred in person is now siloed in digital workspaces. Practitioners are experimenting with emerging patterns: Obsidian as a plaintext backend for AI assistants (for transparency and privacy), multi-tool workflows (Google NotebookLM + Claude Code + Obsidian), and local-first architectures (Ollama + nomic-embed-text) that preserve data control. Privacy-conscious implementations are documented: 73% of local-first Obsidian plugins tested in March 2025 defaulted to cloud APIs (Smart Connections among them), prompting practitioners to deploy local embeddings and verify offline operation.
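The local-first stack mentioned above can be sketched roughly as follows, assuming an Ollama server running on its default local port with the nomic-embed-text model already pulled; the payload shape follows Ollama's embeddings API, and the note ids and ranking helper are illustrative.

```python
import json
import math
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # default local Ollama endpoint
MODEL = "nomic-embed-text"

def build_payload(text: str) -> bytes:
    # Ollama's embeddings endpoint takes a model name and a prompt.
    return json.dumps({"model": MODEL, "prompt": text}).encode()

def embed(text: str) -> list[float]:
    """Fetch an embedding from the local Ollama server; nothing leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query_vec: list[float], note_vecs: dict) -> list[str]:
    # Rank note ids by cosine similarity to the query vector.
    return sorted(note_vecs, key=lambda nid: cosine(query_vec, note_vecs[nid]), reverse=True)
```

Because the embedding call targets localhost, the pattern supports the offline-operation verification practitioners describe; pointing `OLLAMA_URL` at a cloud API instead is exactly the silent default the March 2025 plugin audit flagged.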

Reliability and scale remain critical barriers. Production incidents documented in March–April 2026 include Obsidian rendering regressions (scrolling unusable on documents with embedded content), critical Logseq failures (sync crashes, a 25% mobile login failure rate, and the absence of a mobile app despite user reliance on one), persistent data loss risks, and plugin startup penalties (8.6 seconds on a vault with 3,266 files and 49 plugins). Adoption friction is well documented: steep learning curves for non-technical users, slow mobile performance, a lack of native AI features (most AI requires third-party plugins), and limited real-time collaboration support prevent team-scale deployment. Semantic search limits are now quantified: Stanford research finds retrieval precision drops 87% at 50,000+ documents due to vector space crowding, affecting RAG-based deployments at scale. Corporate IT security policies continue to block plugin deployment in organizational settings, and vendor lock-in concerns (94% of organizations surveyed express concern, 33% specifically fear lock-in) inhibit broader adoption. These constraints, not AI capability maturity, remain the binding factors preventing team-scale deployment.

TIER HISTORY

Research: Jan 2023 → Jan 2023
Bleeding Edge: Jan 2023 → present

EVIDENCE (94)

— Developer documents permanent data loss after auto-update; critical negative signal: local-first philosophy places backup burden entirely on users with no software guardrails; demonstrates maturity gap in PKM reliability.

WebDAV Syncing Plugin for Obsidian │ Product Launches

— Production-grade bidirectional sync: 10x smaller than Remotely Save, handles 3000+ files with Git-style merge logic and AES-GCM-256 encryption; addresses scaling limits in existing vault sync solutions.

— Practitioner architecture solving recursive summary degradation through strict layer separation (raw/wiki/operations); demonstrates AI-augmented PKM design pattern preventing knowledge integrity loss at scale.

— Named deployment: three domain wikis (AI Governance, Cybersecurity, Cyber Guidepost) with automated ingestion; Claude Skills automate research gathering, with practitioner outcome: 'This is upgrading my PKM.'

— Open-source Claude Code plugin scaffolding LLM knowledge base setup; real deployment (Agentic Engineering Wiki, 51 tips + 9 company profiles + 10 paper summaries) demonstrates AI-augmented PKM at personal scale with compounding outcomes.

— Documents real adoption barriers preventing Logseq team-scale deployment: performance degradation at 1000+ pages, absence of real-time collaboration, weak mobile UX, steep learning curve—maturity constraints on current architecture.

— Practitioner managing 774-note vault with durable agent scaffolding (rules, scripts, skills); demonstrates compound knowledge gain across sessions—agent loads operational memory from .claude/rules/ avoiding session rediscovery.

Logseq DB - Changelog #36 │ Product Launches

— 100+ commits (March-April 2026) across sync, CLI, database, UI optimization; demonstrates sustained vendor engineering addressing scalability and reliability in knowledge base management.
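The strict raw/wiki/operations layer separation cited in the evidence above can be enforced mechanically. The sketch below is a hypothetical lint check, assuming a vault where the top-level directory names the layer and wiki summaries may cite only raw sources; neither the directory names nor the policy table come from the practitioner's documented setup.

```python
from pathlib import Path

# Assumed vault layout for the raw/wiki/operations split:
#   raw/  - immutable source notes (captures, clippings)
#   wiki/ - AI-compiled summaries, allowed to cite raw/ only
#   ops/  - agent rules and scripts, allowed to cite any layer
ALLOWED = {"raw": {"raw"}, "wiki": {"raw"}, "ops": {"raw", "wiki", "ops"}}

def layer_of(path: Path) -> str:
    # The first directory component names the layer.
    return path.parts[0]

def check_links(note: Path, linked: list[Path]) -> list[str]:
    """List links that violate the layer policy, e.g. a wiki summary
    citing another wiki summary, which invites recursive degradation."""
    allowed = ALLOWED.get(layer_of(note), set())
    return [
        f"{note}: illegal link to {target}"
        for target in linked
        if layer_of(target) not in allowed
    ]
```

Run as a pre-commit or scheduled vault check, a rule like this keeps AI-generated summaries from being compiled out of other summaries, which is the failure mode the layer-separation architecture is designed to prevent.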

HISTORY

  • 2023-H1: AI integration accelerated across major PKM vendors (Obsidian, Logseq), signaling ecosystem maturity. Deployment evidence remained primarily at individual/practitioner scale. Market consolidation pressures emerged as open-source tools (Obsidian, Logseq) gained traction against closed, higher-cost alternatives (Roam Research). Academic evidence showed promise for healthcare applications but also documented critical AI limitations (hallucinations, reproducibility) relevant to knowledge work.
  • 2023-H2: AI features became table stakes across PKM market (Mem X, Logseq, Obsidian all shipped conversational/semantic capabilities). Ecosystem limitations emerged: security vulnerabilities (Obsidian CVE-2023-2110), plugin compatibility issues, and performance degradation at scale. KM industry (KMWorld 2023) identified user behavior as core adoption barrier: "end users never tag content." Pilot research across knowledge workers (PolyU) explored ChatGPT integration into PKM practices. No evidence of team-scale enterprise deployment materialized.
  • 2024-Q1: Market consolidation accelerated, with Obsidian and Logseq solidifying leadership; Roam Research entered decline phase. Obsidian Smart Connections reached v2.0 maturity. Industry forecasted AI-driven automation of knowledge discovery and categorization. Practitioner deployments expanded (educators leveraging Smart Connections for content curation). Ecosystem maturity remained challenged: plugin brittleness and private API dependencies limited reliability for team-scale rollout. Behavioral adoption barriers persisted. No enterprise team-scale deployments emerged.
  • 2024-Q2: Ecosystem innovation accelerated (Smart Plugins launch for Obsidian), and community grassroots development continued (RAG chatbots for Logseq). However, critical technical barriers emerged: Logseq reported unresolved scalability issues (startup failures >300MB graphs), and community reports documented performance degradation with large knowledge bases. Industry analysis identified adoption barriers: 29% cite AI cost, 35% of AI projects fail on data quality, employee change support dropped to 38%. Adoption remained individual/small-team scale; no enterprise team-scale deployments evidenced.
  • 2024-Q3: Continued individual-scale deployments and positive user testimonials (Logseq for integrated personal/business data, Obsidian Smart Connections in daily use). However, ecosystem maturity challenges intensified: Logseq faced community concerns about development slowdown, stalled sync feature, and upcoming multi-month database rewrite; long-term users reported feature complexity contradicting outliner simplicity. Security risks emerged (Khoj CVE-2024-25639 XSS/RCE vulnerability). Industry analysis documented persistent AI limitations (context reasoning, bias amplification, hallucinations) relevant to PKM trust and accuracy. Adoption remained concentrated at individual practitioner scale; no team-scale or enterprise deployments documented.
  • 2024-Q4: AI adoption discourse shifted from concept to operational execution (Forrester KM conferences analysis). Platform engineering accelerated: Logseq DB released with encryption and real-time collaboration APIs; Obsidian completed independent security audits (Cure53, December). However, critical ecosystem fragility emerged as adoption barrier: plugin security vulnerabilities became acute (poor grades on widely-used plugins), and a data-corruption bug in obsidian-livesync highlighted reliability risks in production. New entrants (Mem 2.0) faced market skepticism over limited differentiation and AI hallucinations. Adoption remained individual/small-team scale. Core tension sharpened: organizational demand exists, but ecosystem reliability (not AI capability) became the blocking factor for team-scale deployment.
  • 2025-Q1: Vendor engineering continued: Logseq DB added AI inference workers for embeddings and bulk actions (March). Grassroots ecosystem innovation progressed (MCP servers enabling LLM agents to access knowledge graphs). Academic research proposed AI-native memory architectures (SECOND ME). Individual practitioner adoption sustained. However, adoption barriers intensified: corporate IT security policies prevented plugin deployment despite their value, and power users hit performance ceilings with large graphs. No team-scale or enterprise deployments emerged. Practice remained at individual productivity scale, with security governance and performance becoming primary adoption constraints.
  • 2025-Q2: Platform engineering accelerated with vendor releases: Logseq DB introduced Library feature for page management and performance optimization (June); Obsidian Smart Connections underwent rapid iteration (v2.1.68-69) to address semantic embedding bottlenecks. Agentic AI integration matured: MCP servers enabling Claude and other assistants to directly query Logseq knowledge graphs reached production deployment. Individual practitioner adoption remained strong. However, ecosystem barriers persisted: Logseq faced iCloud sync slowdowns affecting power users; database version transition announced for future release with expected multi-month rewrite; plugin architecture prevented cross-platform secure API credential storage. Corporate IT security concerns remained the primary blocker for team-scale adoption. Deployment remained at individual/small-team scale with no evidence of enterprise-wide synchronized knowledge base projects.
  • 2025-Q3: New market entrants emerged: MemUAI launched public beta with AI-powered knowledge structuring, continuing ecosystem expansion. Vendor platform engineering continued with incremental Logseq DB improvements (CLI, mobile UI, memory optimization). Adoption barriers intensified: project management friction surfaced (Logseq DB delivery delays), licensing model shifts created user dissatisfaction, and plugin reliability concerns persisted. Peer-reviewed academic research independently validated KM implementation barriers across technological, organisational, and ethical dimensions. Agentic AI integration remained stable in production (MCP tools reaching mature deployment). Individual practitioner adoption sustained; no evidence of team-scale or enterprise deployments emerged. Ecosystem maturity and project execution risk remained binding constraints.
  • 2025-Q4: Vendor ecosystem continued incremental development (Smart Connections plugin early release track, Logseq DB ongoing optimization), and new platform entrants maintained presence. Individual practitioner adoption remained stable. However, fundamental barriers to team-scale deployment persisted: research showed 95% of AI projects fail to deliver ROI and only 5% reach production deployment; hallucination rates remained around 30% even with web search enabled. Adoption remained concentrated at individual and small-team scales, with no evidence of enterprise-wide synchronized knowledge base deployments. The practice remained blocked at the individual productivity tier, with organizational adoption frameworks and viable ROI evidence becoming the binding limitation on advancement.
  • 2026-Jan: Market validation accelerated: AI-PKM market grew 30.4% YoY ($1.27B to $1.65B), with $4.74B projection by 2029. Obsidian released SecretStorage API for secure API key management; Logseq DB achieved sub-second load times for massive graphs. New entrant MyKioku launched with voice input and automatic tagging. Platform-level AI integration matured (local LLMs, MCP servers). Analyst framework identified critical unresolved gap: personal PKM tools fail at team-scale knowledge-to-action conversion. Adoption remained individual/small-team scale.
  • 2026-Feb: Platform maturation accelerated: Obsidian crossed 1.5M users (+22% YoY) with 100+ AI plugins; Logseq DB advanced with Query Builder tool and Thoughtworks Technology Radar inclusion; individual practitioner deployments demonstrated RAG architectures (8,963-note semantic search systems). Mem AI showed mixed signals: 4/5 independent review with 60% faster note retrieval but usability issues. However, critical reliability failures emerged: data loss incidents in Logseq sync, and security vulnerabilities in Obsidian path handling. Adoption remained individual/small-team scale with ecosystem fragility as persistent barrier.
  • 2026-Mar: Platform innovation accelerated: Mem released complete rebuild as "AI Thought Partner" with voice capture and agentic workflows; SeqLog shipped native macOS client with 10x faster search; Smart Connections reached 858K+ downloads. Market evidence strengthened: AI-PKM market hit $1.65B in 2025 (30.3% YoY) with $6.15B projection by 2030. Practitioners documented multi-tool AI-augmented workflows (NotebookLM + Claude Code + Obsidian). However, critical reliability barriers persisted: Obsidian rendering regression affecting transclusion-heavy documents marked production usage unusable; Stanford research quantified RAG semantic collapse (87% precision drop at 50K+ documents). Ecosystem remains fragile with plugin load penalties and vendor lock-in concerns (94% of organizations cite concern). Adoption remained concentrated at individual power-user and small-team scales; no evidence of team-scale or enterprise synchronized deployments emerged.
  • 2026-Apr: Obsidian made a critical product decision: it removed the commercial licensing requirement (April 9), enabling free business-scale deployment. User base confirmed at 1.5M monthly active users with 22% YoY growth. Smart Connections ecosystem evolved: official product GA (April 25) launches Suite (Chat, Graph, Context, local-first), repositioning semantic discovery as an expected feature; Pro tier ($30/month) targets power users with local indexing, agentic actions, and multimodal context. Mem achieved enterprise-grade compliance: SOC 2 Type II, ISO 27001, ISO 42001, GDPR, PCI-DSS Level 1, and HIPAA certifications with zero exploitable vulnerabilities, a maturation signal for consumer AI PKM adoption in regulated sectors. Practitioner deployments expanded: Obsidian + Claude Code at 100+ article scale with a RAG solution reducing tokens 20–40x; a 3,400-file production vault integrated with Claude Code for writing assistance and competitive intelligence; custom slash commands reading markdown relationships for pattern detection. Hybrid RAG architectures documented: vector search insufficient for semantic reasoning; practitioners deploying knowledge graphs with entity extraction and reranking. Market validation: analyst reports project the AI KM market from $1.65B (2025) to $7.6B (2026) to $18.4B (2034). Privacy concerns materialized: 73% of local-first plugins tested default to cloud APIs despite local-first positioning; practitioners deploying fully offline stacks with Ollama + nomic-embed-text. However, a critical security barrier emerged: Elastic Security Labs published research (April 14) documenting an Obsidian plugin ecosystem vulnerability: community plugins inherit unrestricted filesystem/shell access, weaponized in live social engineering campaigns targeting finance/crypto users. Critical adoption gaps documented: Logseq platform incompleteness (no mobile app), sync crashing, 25% login failures, and data loss incidents driving product abandonment despite user preference for its outliner. Adoption shifted slightly from purely individual to SME/small-team scale; reliability barriers, platform incompleteness, and architectural security vulnerabilities prevent broader team-scale deployment.
  • 2026-May: AI-augmented PKM enters practitioner-scale deployment phase. Claude Code ecosystem matures: open-source Wiki Builder plugin (50+ tip sets deployed; real-world Agentic Engineering Wiki with 51 tips, 9 company profiles, 10 paper summaries) scaffolds knowledge base setup reducing friction. Durable agent patterns documented: 774-note vault managed via persistent operational rules (.claude/rules/), avoiding session-to-session rediscovery; three-domain practitioner deployment (AI Governance, Cybersecurity wikis) with Claude Skills automating research ingestion and outcome: "upgrading my PKM." Architectural innovation accelerates: strict 3-layer separation (raw/wiki/operations) prevents recursive summary degradation and maintains knowledge integrity over time. Platform reliability addressed: WebDAV sync plugin (10x faster than Remotely Save, handles 3000+ files with Git-style merge logic and AES-256 encryption) provides production-grade bidirectional synchronization. Logseq DB development continued: 100+ commits in 3-week window across sync reliability, CLI tooling, database optimization, and UI—sustained engineering toward scalability. However, critical negative evidence emerged: permanent data loss documented from misconfigured vault location during Obsidian auto-update (sector-level overwrite of hundreds of notes); developer response: 3-2-1-1-0 backup architecture required (local→mirror→Git bare→off-site→cloud). Adoption barriers identified: performance degradation at 1000+ Logseq pages, absence of real-time collaboration, weak mobile UX, steep learning curve prevent team-scale deployment despite user satisfaction. Adoption remained concentrated at individual/small-team scale with emerging AI-augmented deployment patterns.

TOOLS