Why AI-First Is the Wrong Ambition for Most Organisations Right Now
The organisations moving fastest on enterprise AI are abandoning the most. S&P Global Market Intelligence surveyed 1,006 midlevel and senior professionals across North America and Europe in late 2024 and found that the share of companies scrapping the majority of their AI initiatives before reaching production had jumped from 17 per cent to 42 per cent in a single year.¹ The average organisation abandoned 46 per cent of its proof-of-concept projects before production. This is not a story about organisations that tried AI and found the technology wanting. It is a story about organisations that moved at the pace the dominant narrative recommended and found the results did not follow.
The dominant narrative has a name. AI-first. It circulates through vendor keynotes, board papers, and strategy offsite decks with the confidence of settled wisdom. Get ahead of the curve. Move before competitors. Make AI central to everything. The organisations that hesitate will be left behind. The Q1 2026 Role Layer dataset, built from 187 senior decision-makers across ten C-suite roles at mid-to-large enterprises, collected January to March 2026, suggests the narrative has the causation inverted.² Speed of AI ambition is not the variable that predicts AI value. Coordination across the leadership system is.
What the failure rate is actually measuring
The MIT NANDA initiative found in August 2025 that 95 per cent of enterprise generative AI pilots produced no measurable impact on profit and loss.³ McKinsey’s State of AI 2025, drawing on 1,993 respondents across approximately 105 countries, found that nearly two-thirds of organisations had not yet begun scaling AI across the enterprise.⁴ S&P Global found abandonment rates tripling in twelve months. These three datasets are measuring the same underlying condition from different angles.
The standard interpretation is that the technology is harder to implement than advertised, or that enterprises lack the data infrastructure to support it, or that change management has not kept pace with deployment ambition. All three are partially true. None of them explains why the failure rate is accelerating as the technology matures and as enterprise AI investment reaches record levels. A technology problem should get easier as the technology improves. A coordination problem gets harder as the pace of deployment increases, because faster deployment widens the distance between the roles pushing the programme forward and the roles that will carry it at scale.
The S&P Global data makes this visible in a specific way. The abandonment surge from 17 to 42 per cent did not happen because the technology got worse. It happened in the same period that deployment accelerated. Organisations that were experimenting cautiously in 2023 and 2024 moved into production at scale in 2024 and 2025, and the failure rate followed the acceleration. The organisations scrapping 46 per cent of their proof-of-concept projects are not failing at the technology stage. They are failing at the transition from pilot to scaled deployment, which is precisely the point where the technology crosses from the roles that built it to the roles that have to run it.
Writer’s 2026 enterprise AI adoption survey found that 97 per cent of executives report their organisation deployed AI agents in the past year.⁵ Deployment is nearly universal. The failure modes the survey documents do not stem from lack of AI talent or enthusiasm. They stem from the absence of systems designed to scale what is working. Individual productivity gains are real. Nothing connects them to organisation-wide outcomes. The gap is not between ambition and technology. It is between the roles driving the programme and the roles that would need to absorb it.
The constraint the AI-first narrative cannot see
The AI-first framing treats enterprise AI adoption as a single organisational decision: commit to AI at the strategic level and the operational consequences will follow. The Q1 2026 Role Layer data shows what actually follows. Ten C-suite roles measured against the same five-point adoption scale at the same point in time, using the same fifteen decision-level questions. The Organisational Adoption Gradient between the most and least advanced role is 0.85 points.² The leadership system is not moving as a unit. It is moving as a set of roles at materially different speeds, with the role most responsible for operational delivery sitting furthest back.
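The gradient arithmetic itself is simple: score each role on the five-point adoption scale and take the distance between the most and least advanced. A minimal sketch, using invented role scores chosen purely for illustration (these are not the Q1 2026 figures, though they happen to reproduce a 0.85-point spread):

```python
# Hypothetical illustration of the Organisational Adoption Gradient.
# Each role is scored on the five-point adoption scale (1 = no active
# AI initiatives, 5 = embedded infrastructure); the gradient is the
# distance between the most and least advanced role.
# All scores below are invented for illustration only.
role_scores = {
    "CDO": 3.40,      # sponsoring roles tend to sit furthest forward
    "CTO/CIO": 3.20,
    "CEO": 3.00,
    "CHRO": 2.70,
    "COO": 2.55,      # operational delivery sitting furthest back
}

gradient = max(role_scores.values()) - min(role_scores.values())
print(f"Organisational Adoption Gradient: {gradient:.2f} points")
```

The point of the sketch is that the metric is a spread, not an average: a leadership team can post a respectable mean adoption score while the distance between its leading and trailing roles remains wide enough to stall any programme at the handover stage.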
That gradient is the structural condition the AI-first narrative cannot see because it does not have an instrument to measure it. The narrative operates at the organisational level. The constraint is at the role level. An organisation can be genuinely committed to AI at the strategic level while simultaneously having a leadership system whose internal distances make scaled deployment unreachable within any reasonable investment horizon. The commitment is real. The coordination is not there. AI-first programmes that push harder on speed hit that condition as a wall rather than addressing it as a variable.
What the gradient looks like in practice is a programme that moves through pilot and into early deployment without difficulty, then stalls at the point where the technology has to transfer from the roles that sponsored it to the roles that will run it. The CDO who built the business case hands the operating reality to the COO. The CTO who certified the technology hands the workforce challenge to the CHRO. The CMO who modelled the demand hands the compliance exposure to the CISO. Each handover crosses a role-level distance that the approval process did not measure and the programme plan did not account for. The stall is not a project management failure. It is the gradient becoming visible too late.
The four barriers to scaling in the Q1 2026 dataset cluster within 2.6 percentage points of one another.² A single dominant barrier is a solvable problem: identify it, resource it, clear it. Four barriers of roughly equal weight mean that clearing any one of them does not unlock the next phase. The programme moves forward and hits the next barrier almost immediately. Organisations in this position are not facing a resourcing problem or a technology problem. They are facing a sequencing problem: the leadership system has not determined which role’s position is the actual binding constraint at each stage, so every stage feels like a new obstacle rather than a known distance to close.
What alignment-first means in practice
Alignment-first is not a slower version of AI-first. It is a more accurate one. The distinction is not about pace. It is about which variable the organisation is managing.
AI-first manages pace. Get the technology deployed, get pilots running, get use cases in production. The assumption is that the leadership system will coordinate around the deployment once it is in place. The S&P Global data suggests that assumption is wrong in 42 per cent of cases and deteriorating. The organisations abandoning programmes are not abandoning them because the technology failed. They are abandoning them because the operational conditions for scaled deployment did not materialise on the timeline the programme assumed.
The organisations that do not abandon know something different going in. They know which role is furthest from deployment readiness before committing the next tranche of capital, and they treat that role’s position as a precondition of the approval rather than a variable to be discovered after the programme stalls. The question is not how fast to move. It is whether the leadership system is coordinated enough to convert the capital into deployed value inside the investment horizon being approved. That reframing does not slow the programme down. It changes what the programme is trying to manage.
McKinsey’s State of AI 2025 found that organisations producing significant financial returns from AI are nearly three times as likely as others to have fundamentally redesigned their workflows.⁴ That finding describes alignment-first behaviour without naming it. The organisations that succeeded did not start with the technology. They started with the operational and organisational conditions the technology would need to enter. The technology selection followed the alignment work. That is not a conservative approach to AI. It is the approach that produces the outcomes the AI-first narrative promises but consistently fails to deliver.
The 5 per cent of organisations producing measurable P&L impact from AI are not moving slower than the 95 per cent. They are moving in the right sequence. The sequence runs: understand the role-level distances inside the leadership system, identify the binding constraint at the current stage, address that constraint directly before committing the capital that assumes it has been resolved. That is a different programme logic from AI-first, and the gap between the two is where most of the S&P Global 42 per cent lives.
The 67.4 per cent is not a coincidence
The Confidence Gap in the Q1 2026 Role Layer dataset stands at 67.4 per cent. Two-thirds of senior leaders across all ten roles cannot confirm that AI is creating durable economic value in their organisation.² The four barriers clustering within 2.6 percentage points mean there is no single dominant explanation for that figure. The system is constrained on multiple dimensions simultaneously.
That is precisely what a leadership system running on AI-first logic looks like from the inside. Multiple initiatives at different stages. Multiple roles at different adoption positions. No single blocker that, if cleared, would unlock the value. The 67.4 per cent is not a verdict on enterprise AI. It is a description of what happens when organisations move at the pace the narrative recommends without first understanding the internal distances the capital has to cross.
The vendor narrative will not correct itself. The incentive runs the other way: faster adoption means more licences, more implementation contracts, more platform commitment. The pressure on leadership teams to move quickly is not going to diminish in 2026. What can change is what the leadership team measures before it moves. An organisation that knows its Organisational Adoption Gradient before the next capital commitment is asking a different question from one that does not. Not whether to invest in AI. Not how quickly to deploy. Whether the distance between the roles that sponsor the programme and the roles that will carry it is close enough to cross inside the investment horizon being approved.
The organisations that will close the 67.4 per cent gap are not the ones that increase the pace. They are the ones that measure the distances first. The gradient is the variable AI-first cannot manage because AI-first cannot see it. Alignment-first starts there.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and sources
¹ S&P Global Market Intelligence, Voice of the Enterprise: AI and Machine Learning, Use Cases 2025. Survey of 1,006 midlevel and senior IT and line-of-business professionals across North America and Europe, conducted October to November 2024. Key findings: the share of companies abandoning the majority of AI initiatives before reaching production rose from 17% to 42% year over year; the average organisation scrapped 46% of proof-of-concept projects before production. Source: spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning.
² Quaie Role Layer Executive Survey, Q1 2026 (n=187). The Organisational Adoption Gradient measures the distance between the most and least advanced leadership role in the dataset on the five-point adoption scale, where 1 represents no active AI initiatives and 5 represents embedded infrastructure. Q1 2026 gradient: 0.85 points. Confidence Gap: 67.4% of respondents could not confirm AI is creating durable economic value. Four barriers to scaling cluster within 2.6 percentage points of one another. Fieldwork conducted January to March 2026 across ten C-suite roles: CEO, CTO/CIO, COO, CFO, CMO, CRO/CSO, CDO, CISO, CHRO, CLO. Full methodology: quaie.io/p/methodology.
³ MIT NANDA (Networked Agents and Decentralized AI) initiative, “The GenAI Divide: State of AI in Business 2025,” published August 2025. Lead author Aditya Challapally. Research base: 150 leader interviews, a 350-employee survey, and analysis of 300 public AI deployments. Finding: 95% of enterprise generative AI pilots delivered no measurable impact on profit and loss. Source: MIT NANDA initiative publications.
⁴ McKinsey QuantumBlack, “The state of AI in 2025: Agents, innovation, and transformation,” published 5 November 2025. Survey of 1,993 respondents across approximately 105 countries. Key findings: nearly two-thirds have not yet begun scaling AI across the enterprise; organisations producing significant financial returns from AI are nearly three times as likely as others to have fundamentally redesigned their workflows. Source: mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
⁵ Writer, “Enterprise AI Adoption in 2026: Why 79% Face Challenges Despite High Investment,” published April 2026. Survey of 1,200 C-suite executives and 1,200 non-technical employees actively using AI at work, conducted with Workplace Intelligence. Key findings: 97% of executives report their organisation deployed AI agents in the past year; failure modes stem from absence of systems designed to scale individual productivity gains to organisation-wide outcomes. Source: writer.com/blog/enterprise-ai-adoption-2026.