AI Moves Fast. Organisations Don’t.
Artificial intelligence is the fastest-adopted consumer technology in history. More than a billion people used AI tools within three years of their mainstream availability, faster than the internet, faster than the personal computer, and faster than the smartphone. ChatGPT reached a hundred million users in two months.¹ By 2025, more than a third of working American adults were using generative AI on the job.² Something happened quickly, and the narrative built around that speed has become the dominant frame through which leaders evaluate their own progress: AI is moving fast, adoption is accelerating, and those who hesitate will be left behind.
The narrative is not wrong about the technology. It is wrong about the unit of analysis. Downloading a tool is not the same as reorganising a business around it. And once you shift attention from individual usage to organisational transformation, the picture inverts almost entirely.
Ninety-five per cent of enterprise generative AI pilots fail to deliver measurable financial returns.³ According to IDC, for every thirty-three prototypes a company builds, four reach production.⁴ Nearly two-thirds of organisations remain stuck in the pilot stage. According to S&P Global Market Intelligence, forty-two per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year’s rate.⁵ These are not the numbers of a fast transformation temporarily encountering friction. They are the numbers of a slow transformation being mistaken for a fast one.
The instinct is to treat the gap as a problem of execution, something better tools, more investment, or stronger leadership will close within a few quarters. But the evidence points somewhere more uncomfortable. OpenAI’s own enterprise research concludes that the primary constraints are no longer model performance or tooling, but organisational readiness and implementation.⁶ BCG’s widely cited finding puts the ratio at ten per cent algorithms, twenty per cent technology and data, seventy per cent people, processes, and cultural change.⁷ When an AI company tells you the technology is not the bottleneck, it is worth believing. And if seventy per cent of the challenge sits in people, processes, and culture, then seventy per cent of the challenge operates on the timescale of organisational change, which is measured in years and decades, not quarters.
History confirms what the data suggests. Electricity took four decades to move from negligible to seventy per cent household adoption. The telephone took six. Even within recent memory, the pattern holds. Enterprises had websites by the early 2000s; most had not fundamentally restructured around digital capabilities until the mid-2010s. ERP systems were available in the 1990s; full organisational integration took a decade or more.⁸ The common thread is that general-purpose technologies requiring deep organisational adaptation follow extended timelines regardless of how quickly the underlying capability matures. The technology arrives, early adopters experiment, results are mixed, structures resist, roles disagree, consensus forms slowly, and capital follows conviction at the pace conviction actually forms, which is never as fast as anyone would like.
AI fits this pattern with uncomfortable precision. What distinguishes it from technologies with faster adoption curves, such as cloud computing and mobile, is not complexity but distribution. Cloud was primarily an infrastructure decision that could be led by a single function. A CTO could migrate to cloud without the CMO needing to believe it was the right call. AI is different. Its value and its risk are distributed across the entire organisation. Marketing uses it for different purposes than engineering. Finance evaluates it against different criteria than operations. The CEO must reconcile these perspectives before committing direction and capital. No single function can adopt AI on behalf of the organisation the way IT adopted cloud on behalf of the enterprise. This makes AI adoption less like a technology upgrade and more like digitalisation, financialisation, or industrialisation: transformations that reshaped not just what organisations used but how they made decisions, allocated resources, and coordinated across functions. Those were generational processes. Not because the technology was slow, but because the human coordination required to absorb it was deep, cross-functional, and irreducibly complex.
This is precisely what Quaie’s Q1 2026 fieldwork is designed to make visible. When we measure AI adoption readiness across ten executive roles, from CEO and CTO to CFO, CHRO, and General Counsel, the hypothesis is that the sharpest divergence will not be between companies, or sectors, or revenue bands, but between roles within the same cohort. Role context (the specific evidence standards, risk tolerances, and organisational mandates that each function carries) is likely to shape readiness more than organisational maturity does. Quaie’s Organisational Adoption Gradient is designed to measure this distance precisely: the spread between the roles that have moved and the roles that have not. The Role Shift Index tracks the underlying movement that produces this gradient, mapping where each of the ten executive roles sits on the adoption spectrum and whether that position is advancing, holding, or reverting quarter by quarter.
The blocker distribution is likely to tell the same story from a different angle. ROI uncertainty may dominate among CEOs and CMOs. CTOs are more likely to cite integration complexity and security concerns. CFOs will probably require evidence before releasing capital. CHROs are likely to raise workforce readiness questions that no other role has yet addressed. If that pattern holds, the organisation is not facing a single constraint but several, distributed unevenly across the people responsible for resolving them. A CTO wants to solve an integration problem. A CMO wants to see commercial proof. A CFO wants both answered before releasing budget. A CHRO wants to know what happens to the workforce. Each position is rational. None of them can see the others clearly enough to converge without a mechanism for making the full picture visible. This is precisely what the Role Alignment Map is designed to provide: a measure of whether the leadership system shares a common interpretation of AI’s strategic priorities and ownership, making visible the gap between a leadership team that describes itself as aligned and one that has actually formed shared conviction.
This is the structural reality that the speed narrative obscures. An organisation does not adopt AI the way a person downloads an application. It adopts AI through a sequence of decisions made by different roles, each operating under different constraints, evaluating risk against different criteria, and reaching conviction at different speeds. Those roles must eventually converge before committed action becomes rational. That convergence is inherently slow, because it depends on evidence accumulating across functions, not enthusiasm concentrating in one.
The practical consequences are significant, and they cut against much of the advice currently circulating.
If AI adoption is generational, then the cost of moving wrong exceeds the cost of moving slowly. Premature commitment (scaling before alignment has formed, forcing rollout before roles have converged on shared conviction) carries compounding costs. This is the risk the Q1 fieldwork is designed to surface: capital allocation front-loaded in organisations that committed before internal alignment was in place, and initiatives that stalled not because they moved too slowly but because they moved before the decision was shared.
If AI adoption is generational, then sequencing matters more than speed. The order in which roles engage (who leads, who validates, who follows) determines whether adoption propagates through an organisation or fractures within it. Understanding that sequence requires knowing where confidence sits today, not where deployment stands. Quaie’s Role Lead-Lag Ranking tracks exactly this: the temporal distance between roles as they move through adoption stages, making visible whether the organisation is converging or diverging.
And if AI adoption is generational, then the intelligence leaders need cannot be a snapshot of what has been bought or deployed. It must be a continuously updated, role-based view of how decisions are forming, tracking conviction, alignment, and timing at the level where decisions are actually made. Capital markets have yield curves. Labour markets have employment data. AI adoption, the most consequential organisational transformation in a generation, has no equivalent. Every leadership team navigates in isolation, treating its internal dynamics as unique, unable to distinguish between a problem that is genuinely local and a pattern that is structural.
The organisations that navigate this well will not be the ones that moved fastest. They will be the ones that understood where they stood, moved in the right order, and committed when the evidence supported it. Patience is not a popular prescription in a market saturated with urgency. But urgency that outruns the structural pace of organisational change does not produce transformation. It produces expensive false starts and eroded confidence, the very conditions that make the next attempt harder.
This essay is part of Quaie’s Founding Essay Series, examining how organisations decide to adopt AI role by role, over time.
Notes and Sources
¹ ChatGPT reaching 100 million users in two months: Reported by Reuters, February 2023, based on data from analytics firms including Similarweb.
² More than a third of working American adults using generative AI on the job by 2025: Pew Research Center, “AI in the Workplace” survey data, 2025. Multiple corroborating surveys from McKinsey (Global Survey on AI, 2024) and Salesforce (Generative AI Snapshot, 2024) report similar or higher figures.
³ 95 per cent of generative AI pilots failing to deliver measurable financial returns: Reported across multiple analyst sources, 2024–2025. Gartner predicted in July 2024 that at least 30 per cent of generative AI projects would be abandoned after proof of concept by end of 2025 (Gartner Data & Analytics Summit, Sydney, July 2024).
⁴ IDC prototype-to-production ratio: IDC research findings on enterprise AI deployment, cited across industry reporting, 2024–2025. For every 33 AI prototypes built, approximately 4 reached production deployment.
⁵ S&P Global Market Intelligence: 42 per cent of companies abandoned most AI initiatives in 2025. S&P Global Market Intelligence, 451 Research survey, published 2025.
⁶ OpenAI enterprise research on organisational readiness as primary constraint: OpenAI enterprise deployment findings, reported 2024–2025. OpenAI’s enterprise team has publicly stated that the primary barriers to enterprise AI value are organisational, not technical.
⁷ BCG AI adoption composition: Boston Consulting Group, “From Potential to Profit: Closing the AI Impact Gap” (AI Radar 2025), January 2025. Survey of 1,803 C-level executives across 19 markets. BCG’s related publications cite approximately 70 per cent of AI challenges stemming from people, processes, and cultural change.
⁸ ERP implementation timescales: Panorama Consulting Group, annual ERP reports (2010–2020). More than 70 per cent of ERP implementations failed to meet their objectives, with average timescales extending from planned 18-month schedules to 3–5 years. See also: Quaie’s essay “What ERP Taught Us About AI, and What Leaders Have Already Forgotten” for extended analysis.
Quaie’s constructs referenced in this essay (the Organisational Adoption Gradient, Role Shift Index, Role Lead-Lag Ranking, and Role Alignment Map) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in subsequent essays in this series.