The AI landscape doesn't move in one direction — it lurches. Some techniques leap from experiment to table stakes in a single quarter; others stall against regulatory walls, technical ceilings, or organisational inertia that no amount of hype can dislodge. Knowing which is which is the hard part. The State of Play cuts through the noise with a rigorously maintained index of AI techniques across every major business domain — classified by maturity, evidenced by real-world adoption, and updated daily so you always know where you stand relative to the field. Stop guessing. Start knowing.
A daily newsletter distilling the past two weeks of movement in a domain or two — delivered to your inbox while the index updates in the background.
Each dot marks the weighted maturity of practices within a domain — hover for a brief summary, click for more detail
AI that scores leads based on intent signals and continuously refines ideal customer profiles from win/loss data and engagement patterns. Includes predictive lead scoring and ICP evolution; distinct from prospecting which finds new leads rather than scoring existing ones.
Predictive lead scoring and ideal customer profiling are proven, broadly accessible capabilities: every major CRM platform ships them as standard features, and deployment evidence consistently shows 20-50% conversion lifts in disciplined environments. The technology question is settled. What separates organisations that capture ROI from those that don't is execution: clean data pipelines, sales-marketing alignment on scoring criteria, and feedback loops that prevent model drift. Ideal customer profiling has matured in parallel, evolving from an annual strategic exercise into an operationalised, continuously refined scoring framework fed by win/loss patterns and engagement signals. The defining tension is no longer whether ML outperforms rules-based scoring (it does) but whether organisations can sustain the data hygiene and cross-functional discipline these models demand. With 93 evidence items spanning nearly a decade, the pattern is unambiguous: vendor tooling is commoditised and effective; organisational foundations remain the binding constraint on value realisation.
Salesforce's Summer 2026 release (April 2026) expanded Agentforce to qualify Contacts and Person Accounts against ICP, the first meaningful product advance in 18+ months, signalling renewed vendor attention to ICP-based qualification as organisations seek agent-led lead assessment. Yet the broader ecosystem remains frozen: Einstein, Microsoft Dynamics 365, and HubSpot have shipped no new lead-scoring capabilities in 2026. The tooling is commoditised and proven, and where organisations apply it with discipline, results are strong: SaaS companies report 80-85% accuracy with AI-driven scoring versus 55-60% without, BDRs reach 99% adoption rates, and prospecting cycles compress from 10+ hours to 3-4 hours weekly. A SaaS case study (Emarkable, April 2026) documented a 46% MQL-to-SQL conversion lift and 34% faster response times, but attributed the win explicitly to cross-functional sales-marketing alignment on scoring criteria rather than model sophistication. Fintech deployments achieved 215% conversion increases, Carson Group validated 96% model accuracy, and HubSpot ecosystem deployments report 20-30% higher close rates. But a critical adoption bottleneck has crystallised: only 3% of organisations have deployed AI in marketing and sales functions despite 88% overall AI adoption and widespread tool availability. KPMG research (April 2026) reveals why: 74% report perceiving AI business value, but only 24% achieve scaled ROI. High performers gain 4.5x ROI through governance and data discipline, a gap most organisations fail to close. The prerequisites are severe: Salesforce Einstein requires 1,000+ leads with 120+ conversions and 70% field completion, and poor CRM hygiene costs an estimated $12.9 million annually on average, with six failure modes that individually render scoring models unreliable.
Practitioners are reframing lead scoring as a capacity-management system rather than just a prediction model, introducing friction signals, velocity, effort, and expansion-fit dimensions in recognition that traditional models optimising for accuracy miss the revenue-per-hour reality. Organisational alignment remains non-negotiable: Apollo.io research confirms the 20% improvement occurs only when sales and marketing define scoring criteria jointly; misalignment nullifies the technology entirely. The deployment challenge is operational, not technical.
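The capacity-management reframing above can be sketched minimally: rank leads by expected revenue per rep-hour rather than by raw conversion probability. Every field name and number below is an illustrative assumption, not any vendor's model.

```python
from dataclasses import dataclass

@dataclass
class Lead:
    name: str
    p_convert: float     # model's predicted conversion probability
    deal_value: float    # expected deal size if won
    effort_hours: float  # estimated rep hours to work the lead

def revenue_per_hour(lead: Lead) -> float:
    """Expected revenue per rep-hour, the capacity-aware ranking key."""
    return (lead.p_convert * lead.deal_value) / lead.effort_hours

leads = [
    Lead("A", p_convert=0.60, deal_value=5_000, effort_hours=10),  # high score, heavy lift
    Lead("B", p_convert=0.25, deal_value=8_000, effort_hours=2),   # modest score, cheap to work
]

# An accuracy-optimised ranking would put A first (0.60 > 0.25);
# the capacity-aware ranking flips it (1000/hr vs 300/hr).
ranked = sorted(leads, key=revenue_per_hour, reverse=True)
print([l.name for l in ranked])  # ['B', 'A']
```

The point of the sketch is the flip: the "worse" lead by probability is the better use of a rep's hour once effort enters the denominator.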
— Click Vision aggregates 65+ AI lead generation statistics showing 92% of marketers report AI impact and 30% planning predictive lead scoring adoption within two years.
— Valasys MarTech reports 73% of B2B companies have lead scoring models but only 27% achieve sales team trust; failure costs mid-market $2.4M annually in lost opportunities.
— Salesforce Summer '26 expands People Scoring to Foundations tier, extending native lead scoring capability to mid-market segment with ICP fit + engagement behavior evaluation.
— R[AI]SING SUN synthesis of Salesforce, Deloitte, and IBM 2026 research shows 87% of sales orgs use AI but only 24% have implemented predictive lead scoring—adoption gap persists.
— SyncGTM research shows teams using AI-driven lead scoring report 50% higher MQL-to-SQL conversion vs manual prioritization, validating capability ROI when implemented.
— Salesfully documents a critical lead quality crisis: median B2B cost-per-lead of $213, MQL-to-SQL conversion down 24% YoY, and 106 leads required per closed deal.
— Adoption bottleneck signal: only 3% of organizations deployed AI in marketing & sales functions despite 88% overall AI adoption. Indicates lead scoring remains untapped despite technology maturity and platform availability.
— 2026 adoption metrics: Lead scoring accuracy 80-85% with AI vs 55-60% without; 99% BDR adoption; prospecting time cut from 10+ hours to 3-4 hours weekly. Strong validation of production value and mainstream uptake.
2017: Salesforce productized Einstein Lead Scoring (March GA) and Infer demonstrated 80%+ lift in lead quality with full ML integration. Major CRM and marketing automation vendors converged on ML-based scoring as a standard offering.
2018: Microsoft, Salesforce, and HubSpot all released or expanded predictive lead scoring capabilities across their ecosystems. The feature became table-stakes with 15+ dedicated vendors competing on vertical expertise. Gartner validated AI-powered lead scoring as core CRM competency.
2019: All three major CRM platforms (Salesforce, Microsoft, HubSpot) had mature predictive lead scoring in production. Deployment evidence from Schneider Electric and Zenconnect confirmed enterprise adoption. Microsoft's Dynamics 365 Sales Insights expanded real-time lead scoring to the Microsoft ecosystem. HubSpot refined its lead scoring UX and expanded predictive capabilities to Enterprise tier. The practice solidified from emerging to standard feature across the market.
2020: Platform capabilities matured with Salesforce expanding Einstein Behavior Scoring in Pardot Advanced and Microsoft expanding Dynamics 365 Sales Insights. However, Forrester and vendor analysis revealed a critical gap: organizations widely adopted scoring programs but failed to operationalize them due to sales team resistance, over-complex models, and difficulty measuring ROI. ICP profiling remained aspirational; implementation challenges and platform stability issues surfaced, showing that organizational execution barriers had become as important as technical maturity.
2021: Vendor platforms achieved stability and feature completeness with all major CRM systems offering mature predictive scoring. Real-world deployments demonstrated strong ROI: CraneWorks achieved 92% deal value growth with HubSpot scoring and routing; DocuSign reported 38% SQL increase using Lattice. However, community and practitioner analysis revealed persistent implementation barriers: vendor tool limitations (Einstein Prediction Builder constrained to 10 concurrent tasks), skepticism about algorithmic opacity, and continued failures in operationalization due to unclear lead criteria, stale data, and sales-marketing misalignment. ICP methodology gained visibility but remained largely un-operationalized. The gap between platform capability and organizational execution remained the practice's defining tension.
2022-H1: All major CRM platforms offered mature predictive scoring with standard requirements (Pardot: 1,000+ leads, 120+ conversions). Real deployments showed refinement in practice—Belkins continuously adjusted lead scoring criteria for privacy-driven signal decay. Industry data confirmed high-ROI potential (50% more SQLs at 33% lower cost) but persistent adoption failures: insufficient lead volume, data silos, sales-marketing misalignment, and critical platform limitations (Einstein models could not aggregate across related objects). Implementation barriers—governance, process discipline, organizational alignment—had crystallized as the practice's binding constraint rather than technology maturity.
2022-H2: Vendor platforms consolidated maturity: HubSpot and Salesforce finalized Enterprise-tier predictive lead scoring features (November 2022). Case study evidence showed 43% revenue influence from behavioral scoring, yet survey data exposed critical barriers—44% of organizations lost 10%+ revenue from poor CRM data quality, and 70% of leads were ignored by sales teams post-handoff. Independent analysis documented practice viability concerns: Zendesk experiment showed no correlation between scored leads and close rates. Paradox deepened: vendor features commoditized while organizational execution barriers (data quality, signal validity, sales alignment) remained intractable.
2023-H1: Peer-reviewed academic research validated the core methodologies: classification approaches dominated, with decision trees and logistic regression proving most applicable for practical lead scoring. RIT thesis confirmed ML model efficacy, achieving 82%+ accuracy on lead qualification tasks. However, practitioner landscape showed widening skepticism: Openprise documented that "very few long-term success stories" existed, with projects failing when data quality degraded (25% annual decay) or databases fell below minimum thresholds (10K+ opportunities). SiriusDecisions data revealed the core paradox: 68% of companies deployed lead scoring systems but only 40% of sales teams perceived actionable value—showing adoption widening while confidence narrowed. ICP as a distinct practice gained practitioner attention (GoodFit podcast), but remained entangled with lead scoring and subordinate to data availability constraints.
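The classification approach the academic work validates can be illustrated with a hand-rolled logistic scorer. The feature names, weights, and bias below are invented for illustration; a production model would fit them on closed-deal outcomes rather than hard-code them.

```python
import math

# Illustrative weights only; real coefficients come from training on won/lost deals.
WEIGHTS = {
    "pages_viewed": 0.15,
    "email_opens": 0.10,
    "pricing_page_visit": 1.2,
    "company_size_fit": 0.9,
}
BIAS = -3.0  # most leads never convert, so the baseline log-odds are low

def score(lead: dict) -> float:
    """Logistic-regression-style score: a probability-like value in (0, 1)."""
    z = BIAS + sum(WEIGHTS[f] * lead.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

cold = {"pages_viewed": 1, "email_opens": 0}
hot = {"pages_viewed": 8, "email_opens": 5, "pricing_page_visit": 1, "company_size_fit": 1}
print(f"cold={score(cold):.2f} hot={score(hot):.2f}")
```

The sigmoid output is what lets these models rank leads on a continuous scale instead of the pass/fail buckets of rules-based scoring, which is why logistic regression and tree models keep surfacing as the practical baseline in the studies cited above.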
2023-H2: Vendor platforms consolidated advanced ML capabilities across Salesforce, HubSpot, and Microsoft ecosystems with no meaningful feature differentiation; Dreamforce 2023 keynotes positioned Einstein as data+AI+CRM convergence. Academic research continued validating algorithm efficacy: Random Forest and Decision Tree models achieved 93%+ accuracy in comparative studies. Practitioner discourse remained skeptical: industry analysis questioned whether lead scoring remained a viable differentiator or had stalled as a mature commodity feature. The practice showed organizational adoption growth (widespread system deployment) but persistent confidence gap (40% of sales teams finding actionable value), indicating technology maturity masking operational implementation barriers that remained unresolved.
2024-Q1: Vendor platforms showed continued feature maturity with no new capabilities: Microsoft Dynamics 365 Customer Insights, Salesforce Einstein, and HubSpot all maintained established lead scoring as standard. Real-world deployment data showed strong execution cases—B2B SaaS companies reported 140% productivity gains and 181% lead-to-customer improvements following AI-powered lead scoring implementation; industry analysis documented 87% time compression and 91% predictive accuracy. However, persistent architectural constraints remained: Salesforce Einstein could not aggregate cross-object signals. The practice remained stalled at organizational adoption despite feature maturity, with adoption barriers (data quality, sales-marketing misalignment) unchanged from prior years. No new evidence emerged to resolve fundamental skepticism about signal validity.
2024-Q2: Vendor feature consolidation accelerated with no new capabilities; Microsoft confirmed Dynamics 365 lead scoring GA (April 2024). Real-world deployment case: Grammarly achieved 80% conversion improvement and 50% deal cycle compression using Salesforce Einstein, validating high-ROI implementations in mature environments. Practitioner discourse shifted focus from vendor capabilities to organizational discipline: GTM analysis cautioned that traditional Ideal Customer Profiles frequently fail due to lack of grounding in current paying customers; AdRoll documented persistent execution failures (firmographic over-reliance, misalignment, lack of refinement). Feature commoditization complete; organizational adoption barriers (data quality, cross-functional alignment, signal validity skepticism) remained unresolved and increasingly central to adoption discourse.
2024-Q4: Vendor platforms showed zero new capability development; HubSpot reconfirmed AI contact scoring GA (October), Microsoft confirmed Dynamics 365 lead scoring continued in Customer Insights (December). Feature stasis accelerated. New adoption barrier crystallized: vendor lock-in risk, with major martech vendors using AI to increase ecosystem lock-in and customer retention. Data quality confirmed as binding technical constraint; tutorials and vendor documentation uniformly cited accuracy dependence on clean input data—a prerequisite 70%+ of organizations failed to meet. The practice remained at commoditized plateau: widespread vendor feature availability but persistent execution gaps and new ecosystem lock-in constraints suppressed organizational adoption velocity and confidence in ROI realization.
2025-Q1: Deployment maturity accelerated with production case evidence: BrainPredict documented 47% win rate increase and 51% revenue growth for €45M B2B SaaS company using 26 AI models; HubSpot published internal case validating fit/intent scoring combination. ICP adoption matured from aspiration to practice: Mereo consultancy showed AI-driven profiling overcoming executive skepticism and informing GTM strategy. However, Pedowitz Group analysis crystallized scaling failures driven by model drift and ungoverned signals. Data quality remained binding constraint; organizationally-driven barriers (governance, signal validity, cross-functional alignment) not technology maturity were the practice's limiting factor.
2025-Q2: Broad adoption signals accelerated: SMB survey (2,500 organizations) confirmed 64% AI adoption in sales with lead scoring as most common application ($3.70 ROI/dollar, 114 annual productivity hours); third-party research (HBR, Gartner) cited 51% conversion lifts and 70% of B2B companies planning predictive analytics adoption by Q2. HubSpot released redesigned February 2025 lead scoring system separating Fit (demographic) and Engagement (behavioral) scores, showing continued vendor platform evolution. Production deployment success persisted: Fifty Five and Five achieved 4.5x conversion improvement (4% to 18%). Implementation barriers crystallized as binding constraint: data quality, system integration, modeling expertise, model drift, and persistent sales rep distrust of black-box scoring remained the primary adoption determinants rather than vendor tooling maturity.
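The fit/engagement separation can be sketched generically as two independent sub-scores feeding a routing decision. The criteria, point values, thresholds, and tier names below are assumptions for illustration, not HubSpot's actual scoring model or API.

```python
def fit_score(lead: dict) -> int:
    """Demographic fit: who the lead is (illustrative criteria)."""
    s = 0
    if lead.get("industry") in {"saas", "fintech"}:
        s += 40
    if lead.get("employees", 0) >= 100:
        s += 30
    if lead.get("title_seniority") == "director+":
        s += 30
    return s

def engagement_score(lead: dict) -> int:
    """Behavioral engagement: what the lead does (illustrative criteria)."""
    s = 10 * lead.get("pricing_views", 0) + 5 * lead.get("webinar_attends", 0)
    return min(s, 100)

def route(lead: dict) -> str:
    """Two-axis routing: fit and engagement answer different questions."""
    fit, eng = fit_score(lead), engagement_score(lead)
    if fit >= 60 and eng >= 50:
        return "sales"    # right company, active now
    if fit >= 60:
        return "nurture"  # right company, not yet active
    if eng >= 50:
        return "review"   # active, but off-profile
    return "hold"
```

Keeping the axes separate is the design point: a single blended number lets heavy activity from an off-profile lead masquerade as a qualified one, which is exactly the activity-vs-intent confusion flagged elsewhere in this timeline.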
2025-Q3: Platform features stalled with zero innovation; Microsoft and HubSpot confirmed continued GA availability but no capability advancement. Critical concerns emerged: academic review of 44 lead scoring studies found most models lack bias detection frameworks; Experian data showed 94% of organizations suspect inaccurate customer data—raising foundational reliability concerns. Practitioner discourse intensified around failure modes: activity-vs-intent confusion, data quality decay (25%+ annually), narrow training sets, and missing feedback loops driving production model collapse. Bifurcation deepened: broad adoption claims (SMB survey 64% adoption) contrasted with mounting evidence of low confidence in scoring and limited ROI realization, suggesting adoption may outpace maturity.
2025-Q4: Vendor platform feature stasis accelerated with zero innovation from any major provider; HubSpot's November feature (high-impact web page analysis) represented minor refinement within existing framework. Deployment success remained narrow and implementation-dependent: Optif.ai documented mid-market SaaS achieving 55% conversion improvement (20% to 31%) with AI-driven scoring; AIQ Labs critical analysis showed HubSpot native tooling converted only 22% of high-scored leads while custom AI improved qualification 35% in 60 days. Systematic failure modes crystallized: Gencomm AI documented temporal data misalignment in model training causing feature bias; sales teams continued wasting 40% time on misprioritized leads. ICP matured from strategic aspiration to operational focus with expert consensus shifting to LTV-based profiling and AI-enabled continuous customer pattern identification. Organizational adoption barriers (data quality, model governance, sales alignment) remained unresolved binding constraints despite widespread tool availability.
2026-Jan: Vendor platforms confirmed feature stasis with zero new capabilities across Salesforce, Microsoft, and HubSpot ecosystems. Research validation accelerated: Databar.ai documented 98.39% accuracy potential with enriched data and Forrester-confirmed 38% conversion lift, but underscored data quality as binding constraint (Forrester: 28% faster cycles require unified architecture). Deployment evidence persisted: House of MarTech reported 2x close rates and $1.2M revenue from dormant lead revival; Coefficient.io identified Einstein cost barriers ($40K+ annually) and opacity as adoption friction. Rules-based model obsolescence became clear: industry analysis explained why static scoring fails and positioned AI transition as necessary. ICP evolved from strategic to operational: AI-driven continuous profiling with real-time signal detection replacing annual planning. Organizational barriers (data quality, governance, sales alignment) remained binding constraint; adoption metrics masked narrow ROI realization.
2026-Feb: Vendor platforms remained frozen with zero new capabilities. Deployment success evidence clarified constraints: House of MarTech documented 215% conversion lift (fintech) and Carson Group 96% accuracy validating ML maturity; HubSpot ecosystem reported 20-30% deal close improvement where discipline existed. Data quality identified as critical bottleneck: Chronic Digital quantified $12.9M annual cost from poor data, documenting six failure modes rendering models unreliable. Operational barriers crystallized: VolkartMay research via Apollo.io showed 20%+ improvement only with cross-functional alignment—misalignment prevented ROI even where technology worked. Salesforce Einstein prerequisites tightened visibility: 1,000+ leads minimum, 70%+ field completion, $50-75/user monthly. ICP matured toward continuous optimization: TOPO benchmark confirmed 68% faster closes with defined profiles. Adoption bifurcation sharpened: vendor tooling sufficient for disciplined orgs; organizational foundations (data hygiene, governance, alignment) remained binding constraint.
2026-Mar/Apr: Platform feature stasis confirmed: zero new capabilities across Salesforce, Microsoft, HubSpot ecosystems. Deployment case studies showed mature execution: Cotera's 18-month Einstein deployment (22 reps, 180K contacts) achieved 14%→19% conversion lift via score-based routing but identified critical architectural gap—Einstein scores but cannot pull external signals (company news, acquisitions) or auto-enrich records. House of MarTech documented two case studies showing ROI from strategic execution (disqualification workflows, conversation-based intent signals) rather than model sophistication. HubSpot internal case (100K+ leads/month, 6 months) reported MQL +47%, SQL conversion +78%, revenue +55%, CAC -26%, cycle -28 days. Third-party TCO analysis (HubSpot €291k-336k vs Salesforce €891k-1.17M over 3 years for 50 users) confirmed lead scoring as table-stakes feature with significant cost disparity. Practitioner analysis crystallized deployment reality: Forrester data shows 67% of sales time still wasted despite lead scoring (79% of MQLs never followed up), driven by backward model-building from assumptions rather than closed-deal data, missing negative scoring, and model decay. Einstein Lead Scoring validated as production-proven over 7+ years across thousands of orgs with stable performance—contrasting sharply with Agentforce's reported 77% deployment failure rate—but SMBs face a hard data quality barrier (500+ closed deals required; fewer than 200 yields noise dressed as ML). Organizational execution barriers (data discipline, model calibration, negative scoring implementation) remain binding constraint despite vendor maturity.
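Two of the failure modes named above, missing negative scoring and model decay, can be sketched in a few lines: disqualifying signals subtract points outright, and positive engagement loses value as it ages. Signal names, point values, and the 30-day half-life are illustrative assumptions.

```python
from datetime import date

# Illustrative point values; a real scheme would be calibrated on closed-deal data.
POSITIVE = {"demo_request": 30, "pricing_view": 15}
NEGATIVE = {"student_email": -40, "unsubscribe": -25, "competitor_domain": -50}
HALF_LIFE_DAYS = 30  # stale engagement should not rank as hot

def lead_score(events: list[tuple[str, date]], today: date) -> float:
    """Sum signal points; decay positive engagement by age, keep negatives sticky."""
    total = 0.0
    for signal, when in events:
        points = POSITIVE.get(signal, 0) + NEGATIVE.get(signal, 0)
        if points > 0:  # only positive engagement decays with age
            age_days = (today - when).days
            points *= 0.5 ** (age_days / HALF_LIFE_DAYS)
        total += points
    return total
```

Without the negative table, an unsubscribed competitor who once requested a demo still scores as a hot lead; without the half-life, a month-old demo request outranks fresh intent, which is the decay failure the Forrester follow-up data points at.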
2026-May (2026-05-02 to 2026-05-16): Vendor platform feature stasis finally broke: Salesforce Summer '26 (GA June 13, 2026) introduces People Scoring in Foundations tier, extending native lead scoring to the mid-market segment with configurable ICP fit + engagement behavior evaluation. Adoption gap persists: R[AI]SING SUN synthesis of Salesforce, Deloitte, and IBM 2026 research shows 87% of sales orgs use AI but only 24% have implemented predictive lead scoring. Deployment validation holds: teams using AI-driven lead scoring report 50% higher MQL-to-SQL conversion vs manual prioritization (SyncGTM). Critical trust barrier crystallized: Valasys MarTech reports 73% of B2B companies have lead scoring models but only 27% achieve sales team trust; failure costs mid-market $2.4M annually in lost opportunities. Data quality crisis worsens: Salesfully documents median B2B cost-per-lead reached $213, MQL-to-SQL conversion fell 24% year-over-year, and 106 leads are required per closed deal, indicating quality degradation despite adoption growth. Adoption momentum shows early signals: Click Vision aggregates 65+ statistics showing 92% of marketers report AI impact and 30% actively planning predictive lead scoring adoption within two years. Pattern synthesis: technology maturity and vendor availability are established; organizational barriers (data quality, sales-marketing alignment, model governance) remain the binding constraint; vendor lock-in risk is accelerating as platforms use AI scoring to increase ecosystem stickiness for mid-market and enterprise.