AI Adoption Isn’t a Technology Problem. It’s a Timing Problem.
There is a form of strategic anxiety that has settled across the senior floors of most large organisations, and it is worth naming precisely because it is being misdiagnosed. It is not anxiety about technology. Most executives have a reasonable working understanding of what large language models can and cannot do, what agentic systems are beginning to make possible, and where the frontier is likely to move over the next two to three years. The anxiety is not about capability. It is about position. About whether the organisation is moving at the right speed, in the right direction, with the right level of commitment visible to the right people.
What follows from that anxiety is now so consistent across industries and geographies that it has become the dominant pattern of enterprise AI adoption. Budgets are allocated, vendors are selected, pilots are launched, and announcements are made, not because the conditions for value creation are in place, but because the conditions for political exposure have arrived. The pilots stall. The value fails to appear. The investment is quietly reclassified or abandoned. A new cycle begins.
This is not a technology failure. It is a timing failure. And the distinction matters more than almost anything else a senior leader could understand about AI adoption right now.
Forty years
In 1990, the Stanford economist Paul David published a short paper that became one of the most cited analyses in the study of technological change.¹ Its argument was simple and uncomfortable. Computers, he observed, were everywhere in the American economy. Their impact on measured productivity was nowhere. David’s explanation was not scepticism about computing’s potential. It was a historical observation: the same pattern had occurred before, and it had a structural cause.
The electric dynamo was commercially viable by the late 1870s. By the turn of the twentieth century, electric motors still accounted for less than five per cent of factory mechanical drive. Two decades of available technology, deployed at negligible scale. When electrification did spread, factories made a revealing mistake: they replaced their steam engines with dynamos while keeping the centralised mechanical power distribution those steam engines had required. The technology changed. The organisation did not. It took until the 1920s, a full four decades after the lightbulb’s invention, for factories to redesign themselves around electricity’s actual logic: distributed unit drive, lighter structures, reconfigured floorplans. Only then did the productivity data move.²
David’s insight, extended by Erik Brynjolfsson and colleagues into what they later formalised as the productivity J-curve, is that transformative general purpose technologies require co-invention.³ The technology is necessary but not sufficient. The complementary organisational innovations (restructured workflows, redesigned roles, rebuilt operating models, realigned incentives) are what convert technological capability into economic output. And those complementary innovations take time. Not because organisations are slow or leaders are failing, but because the coordination required across functions, roles, and decision-making structures is irreducibly complex. It cannot be compressed by urgency, however genuine.
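The J-curve dynamic can be made concrete with a toy model. This is my own illustration, not drawn from the sources cited here: assume a fixed share of output is diverted each year into organisational co-invention, and the technology’s gains are unlocked only in proportion to the co-invention accumulated so far. Measured productivity then dips below the pre-adoption baseline before rising well past it. Every parameter below (`coinvention_rate`, `tech_gain`, the accumulation step) is an illustrative assumption, not an empirical estimate.

```python
# Toy sketch of the productivity J-curve. Illustrative only: the
# parameters are assumptions chosen to show the shape, not estimates.

def measured_productivity(years, coinvention_rate=0.15, tech_gain=2.0):
    """Return yearly measured productivity, pre-adoption baseline = 1.0.

    Each year a share of output is diverted into intangible co-invention
    (workflow redesign, new roles, governance). The technology's gain is
    unlocked only in proportion to the co-invention accumulated so far.
    """
    series = []
    complements = 0.0  # accumulated organisational co-invention, 0..1
    for _ in range(years):
        unlocked = tech_gain * complements
        # Measured output = baseline + unlocked gain - diverted investment
        measured = 1.0 + unlocked - coinvention_rate * (1.0 - complements)
        series.append(round(measured, 3))
        complements = min(1.0, complements + 0.05)  # slow accumulation
    return series

curve = measured_productivity(20)
# The curve starts below the baseline of 1.0 and ends well above it:
# early years look like value destruction, later years like transformation.
print(curve[0], curve[-1])
```

The point of the sketch is not the numbers but the shape: an observer measuring only the early years of the curve would conclude the technology does not work, which is exactly the misreading the enterprise AI data invites.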
AI is at the beginning of this curve. Not the end of it. The data makes this difficult to ignore.
What the numbers actually describe
MIT’s NANDA research initiative, drawing on over 300 publicly disclosed AI deployments and interviews with representatives from more than 50 organisations, found that approximately 95 per cent of enterprise generative AI pilots fail to deliver measurable impact on profit and loss.⁴ The methodology behind that figure has been contested, and it should be treated as indicative rather than precise, but the direction of the finding is not in dispute. McKinsey’s 2025 survey of nearly 2,000 respondents found that only around five to six per cent of organisations qualified as AI high performers, defined as those attributing more than five per cent of EBIT to AI use.⁵ S&P Global Market Intelligence found that 42 per cent of companies abandoned most of their AI initiatives in 2025, up from 17 per cent the previous year.⁷ Three independent sources, three different methodologies, the same structural finding.
These figures are sometimes read as evidence that AI is overvalued. They are better read as evidence that organisations are investing ahead of the conditions required for that investment to produce returns. The technology is not failing the organisations. The organisations are not yet configured to succeed with the technology.
McKinsey’s analysis of what distinguishes these high performers from the rest is revealing precisely because of what it does not say. The difference is not access to better models, larger budgets, or more technically sophisticated teams. The difference is that high performers redesigned workflows before selecting tools, established shared measurement frameworks before deployment, and secured leadership alignment across functions before committing at scale.⁵ They invested in the organisational preconditions for value creation, not just in the technology itself.
What the data collectively describes is a population of organisations that moved on the technology timeline rather than the organisational timeline. The gap between those two timelines is where the $227 billion in projected 2025 global AI spend⁶ is largely disappearing.
The coordination problem that vendors cannot solve
Enterprise AI adoption is, at its core, a leadership coordination problem. This is not a reframing designed to soften a difficult message. It is a structural observation about how value is created and destroyed in large organisations.
Every function in the C-suite processes AI through a different evidentiary lens. Finance and technology rarely share the same definition of sufficient evidence. Legal and compliance assess exposure that, in heavily governed industries, can be existential. Operations weighs workflow disruption against efficiency gain on a timeline that does not match the one marketing or strategy is working to. These functions do not reach their conclusions simultaneously, and in the absence of genuine alignment across them, AI investment does not fail at the technology layer. It fails at the coordination layer. Pilots produce results that one function finds compelling and another finds insufficient. Governance frameworks collapse because the legal position was never stable enough to support them. Change programmes stall because ownership was never genuinely shared. The technology performs. The organisation does not absorb it.
What senior leaders consistently identify as the primary blockers to AI adoption are not technical limitations. They are misalignment on strategy and ownership, an inability to confirm that early investment is producing durable economic value, unresolved governance exposure, and insufficient evidence of organisational readiness to move beyond pilots. Each of these is a coordination failure, not a technology failure. And coordination failures are not solved by moving faster. They are solved by reaching the state of alignment that makes productive movement possible.
The question of when to commit is therefore a question about where the leadership system currently sits. Not where the technology sits.
The asymmetry that is rarely stated directly
Investment in AI that arrives after the organisational conditions for value creation have formed is costly primarily in opportunity terms. You moved later than you could have, and you will spend time and resource closing gaps that earlier movers do not face. That is a real and sometimes significant cost.
Investment that arrives before those conditions have formed is costly in a structurally different way. Capital is deployed into pilots that cannot be absorbed. Governance structures are designed before the organisation has enough shared understanding to make them operational. Vendor relationships are established before internal capability exists to extract what those vendors offer. Political capital is spent on change programmes that have no stable coalition behind them. Each of these investments produces not zero return but negative return, because it consumes the attention, budget, and credibility that will be needed when the conditions for productive adoption are eventually present.
The 42 per cent of organisations that abandoned most of their AI initiatives in 2025 did not abandon them because the technology failed to perform.⁷ They abandoned them because the commitments were made before the leadership system was prepared to deliver on them. The cost of that premature commitment will be measured not only in the capital written off but in the organisational fatigue and cynicism that accompanies a large failed initiative. That is a harder thing to rebuild than a budget line.
What rational timing looks like
None of this is an argument for avoidance. The organisations that will fail most completely in AI adoption are not those that moved prematurely. They are those that never developed the capacity to move at all, that treated every signal of organisational unreadiness as a permanent condition rather than a solvable problem.
Rational timing is an active condition, not a passive one. It describes an organisation that is assembling the preconditions for productive investment: establishing governance frameworks before they are needed under pressure, building alignment between the technology and finance functions on how value will be measured, securing a shared interpretation of ownership that extends beyond the CTO’s office, and developing a pilot record rigorous enough to distinguish repeatable operating leverage from demo-stage results that will not survive contact with production.
A useful diagnostic is a simple one. Ask your leadership team: if we had to confirm, to the board, that our current AI investment is creating durable economic value, what evidence would we point to, and would every function in this room agree it was sufficient? The answers to that question, and more precisely the divergence between them, will tell you more about your organisation’s readiness to scale AI investment than any vendor assessment or maturity framework currently on the market. That condition is measurable. Most organisations have not yet measured it.
The signal that commitment has become rational is not a feeling of readiness, which can be manufactured. It is evidence that the leadership system has moved past the point where the absence of coordination is the binding constraint on value creation. That point arrives at different times for different organisations. It cannot be announced into existence by a board resolution or a vendor contract. It is an organisational condition, and it is legible to those who know what to look for.
The organisations that will produce the most durable returns from AI over the next decade will not be those that committed earliest. They will be those that committed when the organisational conditions were present, and that knew, with enough precision, when those conditions had arrived.
Electricity was commercially viable in 1880. It took forty years to rebuild the factory around it. The lesson is not that the forty years were wasted. It is that the forty years were necessary. And that the organisations which redesigned themselves around electricity’s actual logic, rather than simply installing the technology into structures built for steam, were the ones that captured the full value of what the technology made possible.
The technology is not the constraint. It has not been the constraint for some time. The question every senior leader should be asking is not ‘how fast can we move?’ It is ‘have we assembled the conditions that make moving productive?’
Those are different questions. The second one is harder to answer. It is also the only one that matters.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and Sources
¹ Paul A. David, The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox, American Economic Review, Vol. 80, No. 2, May 1990, pp. 355–361. By 1900 electric motors accounted for less than 5 per cent of factory mechanical drive despite the commercial viability of the dynamo dating to the late 1870s.
² Warren D. Devine Jr., From Shafts to Wires: Historical Perspective on Electrification, Journal of Economic History, Vol. 43, No. 2, June 1983, pp. 347–372. The unit drive system — in which individual motors powered each piece of equipment — became widely adopted in the 1920s, producing measurable productivity gains approximately four decades after the technology’s commercial availability.
³ Erik Brynjolfsson, Daniel Rock, and Chad Syverson, The Productivity J-Curve: How Intangibles Complement General Purpose Technologies, American Economic Review: Insights, Vol. 3, No. 3, September 2021. Documents the J-curve pattern in which general purpose technologies produce short-run disruption costs before medium-term performance gains, contingent on complementary organisational co-invention. Brynjolfsson has described the 30 to 40-year lag between factory electrification and measurable productivity gains as the central historical parallel for understanding AI’s current productivity trajectory.
⁴ Aditya Challapally et al., The GenAI Divide: State of AI in Business 2025, MIT NANDA Initiative, July 2025. Analysis based on review of over 300 public AI deployments, interviews with representatives from more than 50 organisations, and a survey of 350 employees. The finding that approximately 95 per cent of enterprise generative AI pilots fail to deliver measurable P&L impact has been contested on methodological grounds; the figure is treated here as directionally significant across the broader body of evidence rather than as a precise point estimate.
⁵ Alex Singla, Alexander Sukharevsky, and Lareina Yee, The State of AI in 2025: Agents, Innovation, and Transformation, McKinsey QuantumBlack, November 2025. Survey of approximately 2,000 respondents. Approximately 5.5 per cent qualified as AI high performers attributing more than 5 per cent of EBIT to AI. High performers distinguished primarily by workflow redesign before tool selection and cross-functional leadership alignment before scaled commitment.
⁶ IDC, Worldwide AI Spending Guide, 2025. Projected global enterprise AI spend of approximately $227 billion in 2025, encompassing software, hardware, and associated services.
⁷ S&P Global Market Intelligence, 2025 Enterprise AI Survey, cited in industry analysis, July 2025. Survey of over 1,000 enterprises across North America and Europe. 42 per cent of companies abandoned most AI initiatives in 2025, up from 17 per cent in 2024. Average organisation scrapped 46 per cent of proof-of-concepts before production.