The Distance Nobody Measures
The post-mortem on GE Digital runs to thousands of words. Cloud transformation’s runs longer. They have the same last paragraph.
The roles that commit to investment and the roles that implement it are operating from different pictures of the same programme, and nobody is measuring the distance between them before it hardens into failure. The consulting market has not missed this through oversight; the intelligence simply does not exist. No instrument currently available to enterprise leaders measures role-level misalignment in real time, tracks it quarter by quarter, and returns a reading before the programme has absorbed the cost of the distance. In its absence, the gradient between the roles that lead and the roles that follow is closing or compounding right now, in live programmes, with no dashboard reading to show it. Until such an instrument exists, the 70 per cent transformation failure rate that McKinsey documented across a decade of programme data will replicate itself in AI.
The evidence is thirty years of documented pattern, and the pattern is specific enough to name.
The structural cause the post-mortems keep finding
In 2015, General Electric’s chief executive Jeff Immelt launched GE Digital with a publicly stated ambition: to become a top ten software company by 2020. The investment exceeded $4 billion. Predix, the industrial platform at the centre of the programme, was genuinely novel. For two years, the numbers supported the thesis. Then the distance surfaced: GE’s industrial business units were still filing quarterly results against targets set before the platform existed, and the platform had no mechanism to change that. Multiple analysts and journalists, working independently, arrived at the same conclusion: the technology was not what failed. Immelt’s conviction did not transfer. The COO layer kept its operational targets. The CFO layer, which would have needed a different kind of evidence to sustain commitment through the years between investment and return, never got it.¹
GE had a lead cluster and a lag cluster inside its own C-suite. Nobody measured the distance between them until the distance had already decided the outcome.
A decade later, at market scale, the same pattern appeared in cloud. In Q4 2022 and Q1 2023, HFS Research in collaboration with EY surveyed 508 senior executives from Global 2000 enterprises on cloud-native transformation. The finding was precise. Sixty-five per cent of organisations had made cloud a strategic investment. Thirty-two per cent were realising their ambitions. Phil Fersht, HFS chief executive, described CFOs turning to their CIOs and asking what it had all been for. Matt Barrington, EY’s emerging technologies leader, concluded that half of cloud-native transformations had failed, not because the technology was wrong, but because technology and business objectives were not aligned across the leadership team.² The CIO had committed. The CFO was still waiting for the business case to materialise. The same organisation, the same programme, different roles operating on different timescales, with no instrument to measure the distance between them.
McKinsey’s decade of transformation data across industries produced a number that enterprise leaders have memorised and stopped examining: 70 per cent of initiatives fail to achieve their objectives. The explanation, when McKinsey went looking for it, was not technology and not budget. Executives declared alignment and recorded it as established. The leader who approved the initiative and the leader responsible for delivering it were working from different definitions of success. That distance was never measured. Not, it turns out, because measuring it was technically difficult, but because nobody had built the instrument.³ The pattern held across ERP, CRM, cloud, and digital transformation. It is now holding in AI.
Why AI runs the same pattern faster
The failure mode in AI is thirty years old. The structural cause is identical to what stalled ERP, CRM, and cloud: the role that commits and the role that implements are operating from different pictures of the same programme. The clock is different, and that difference is what makes this particular iteration harder to absorb than the previous ones. An ERP failure takes roughly two years to become undeniable; cloud, about eighteen months. In AI, the investment cycle is short enough that the board is still announcing the initiative when the misalignment is already compounding. Organisations that have not finished explaining the last failure are being asked to account for this one.
The measurement infrastructure that would detect role-level misalignment before it produces a failure does not exist in any research programme currently available to enterprise leaders. McKinsey’s sector surveys and Gartner’s CIO reports measure adoption at the organisational level; neither disaggregates to the function that is blocking the programme. The consulting engagement that interviews a sample of the leadership team and returns a synthesis has the same limitation at higher cost: it is measuring the organisation, not the leadership system operating inside it. None of them tell a CEO which specific roles are misaligned, on which dimensions, and what evidence each lagging role would need in order to move. The difference matters for a practical reason: intelligence that arrives after the programme has stalled is research material for the next post-mortem, not an instrument for the current programme.
What role-level measurement reveals
The Q1 2026 Role Layer dataset, drawn from 187 senior decision-makers across ten C-suite functions between January and March 2026, produces a finding that aggregate data cannot: a measurement of the distance between the roles that are leading and the roles that are not.
A sceptical reader will notice that the instrument making this argument is also the instrument being cited as evidence for it. That is a fair observation. The Role Layer dataset cannot corroborate itself through external validation that does not yet exist. What it can do is apply a measurement approach to a structural problem that existing research programmes have not attempted to measure, and report what that measurement finds. The 0.85-point Organisational Adoption Gradient is not offered as a settled finding. It is a first reading from a new instrument, and its value is that it makes a previously unmeasured distance visible.
The Organisational Adoption Gradient between the most advanced and least advanced leadership role in the dataset is 0.85 points on a five-point adoption scale, where one represents no active AI investment and five represents scaled deployment with measurable business impact. That number is the structural signal.
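For readers who want the mechanics rather than the headline, the calculation behind a gradient of this kind is simple, and a minimal sketch makes it concrete. Everything below is illustrative: the roles, scores, and sample sizes are invented for the example and are not the survey’s data. Only the definition comes from the methodology in note 4: the arithmetic distance between the mean adoption score of the highest- and lowest-ranking C-suite function.

```python
from statistics import mean

# Illustrative role-level scores on the five-point adoption scale
# (1 = no active AI investment, 5 = scaled deployment with measurable
# business impact). Hypothetical values, not the Q1 2026 survey data.
scores_by_role = {
    "CTO": [4.2, 4.0, 4.3, 3.9],
    "CIO": [3.8, 4.1, 3.6, 3.9],
    "COO": [3.3, 3.4, 3.3, 3.4],
    "CFO": [3.3, 3.2, 3.3, 3.2],
}

role_means = {role: mean(s) for role, s in scores_by_role.items()}

# The gradient is the arithmetic distance between the mean score of the
# highest-ranking and lowest-ranking function (see note 4).
lead = max(role_means, key=role_means.get)
lag = min(role_means, key=role_means.get)
gradient = role_means[lead] - role_means[lag]

print(f"Lead function: {lead} ({role_means[lead]:.2f})")
print(f"Lag function:  {lag} ({role_means[lag]:.2f})")
print(f"Organisational Adoption Gradient: {gradient:.2f} points")
```

With these invented scores the gradient happens to land at 0.85 points. The figure is not the point; the dependency is. The calculation cannot run without role-level scores, which is precisely the disaggregation that existing surveys do not collect.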
Forty-seven per cent of the dataset (88 of 187 senior decision-makers) are already committed to scaled AI investment or expect to commit within six months. The momentum is real, and so is the gap. Both facts sit in the same leadership system, in the same organisation, often in adjacent offices. An organisation with a CTO approaching scaled deployment and a COO still weighing the evidence for limited production is not an organisation that is slow on AI. It is an organisation with a 0.85-point gradient running through its leadership layer, and that gradient, left unmeasured and unmanaged, will produce a specific and predictable outcome: the lead function scales, the lag function withholds the operating commitment to sustain it, and the programme stalls at exactly the point it appeared to be succeeding.
GE’s gradient was legible in every post-mortem. HFS and EY documented the same pattern across 508 cloud programmes. McKinsey found it across a decade of transformation data in industries that had nothing to do with each other. The gradient was present in every case. Nobody was measuring it.
The 67.4 per cent finding is the one that gets misread most often. That two-thirds of senior decision-makers cannot confirm AI is creating durable economic value looks, from the outside, like scepticism or resistance. That reading is wrong on both counts. These are executives who have run the experiments. The experiments have not yet produced the evidence their function requires to commit at scale. That is an evidentiary gap, not an attitudinal one, and it concentrates in specific functions for structural reasons, which means it is predictable, and therefore addressable, if you are measuring at the role level.⁴
The gradient is the diagnosis
The enterprises that navigated ERP, CRM, and cloud without stalling shared one structural advantage: their leadership systems converged before the budget commitment became irreversible. The CFO function got the evidence base it required. The COO moved. That did not happen by accident. It happened because someone was watching the distance.
In the organisations where it did not happen (GE, the 68 per cent of cloud programmes that failed to realise their ambitions, the 70 per cent of transformation initiatives that fell short of their objectives) the gradient was present from the beginning. The distance was legible in every post-mortem and absent from every dashboard. Those two facts are the same failure, seen from different ends of the programme timeline.
The 0.85-point Organisational Adoption Gradient is a present reading, not a historical one. The lead cluster is advancing. The lag cluster is sitting at the evidentiary threshold it has not yet crossed. The distance between them was measured between January and March 2026, in organisations that are making AI investment decisions now, which means the gradient is either closing or compounding at this moment, in real programmes, with real budget consequences.
The question enterprise leaders are implicitly asking (when will my leadership system reach the point where scaled AI investment becomes rational for every function that needs to sanction it) cannot be answered by aggregate adoption surveys. It requires a measurement instrument that disaggregates to the role level, repeats quarterly, and tracks whether the gradient is closing or compounding. GE did not have that instrument when Predix was scaling. Neither did the enterprises whose cloud programmes stalled at the midpoint of their ambitions. McKinsey’s analysts were not building it when they assembled the failure data; they were explaining what had already happened, which is the only thing the available intelligence was equipped to do.
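What “closing or compounding” means in practice is also mechanical, not mysterious. The sketch below assumes a hypothetical series of quarterly gradient readings for a single organisation; no such time series has been published, which is the essay’s point.

```python
# Hypothetical quarterly gradient readings (points on the five-point
# adoption scale). Illustrative values only; no such series exists yet.
quarterly_gradient = [
    ("Q1 2026", 0.85),
    ("Q2 2026", 0.78),
    ("Q3 2026", 0.91),
]

# Compare each reading with the previous quarter's to label the trend.
for (prev_q, prev), (curr_q, curr) in zip(quarterly_gradient, quarterly_gradient[1:]):
    if curr < prev:
        trend = "closing"
    elif curr > prev:
        trend = "compounding"
    else:
        trend = "holding"
    print(f"{prev_q} -> {curr_q}: {prev:.2f} -> {curr:.2f} ({trend})")
```

Three readings make a trend; one makes a headline. The instrument this essay is arguing for is the repeated reading, not the snapshot.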
The constraint on enterprise AI adoption is not where most of the current research is looking. It is the distance between the roles that lead and the roles that follow, a distance that is measurable, that has been present in every major technology adoption failure of the past thirty years, and that decides the outcome every time. The technology is not the problem and never was.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and sources
¹ GE Digital and the Predix platform: Panorama Consulting Group, GE Digital Transformation and Predix Failure, 2021. See also The Conversation, “GE’s big bet on digital has floundered,” 2018; Applico, “Why GE Digital Failed,” 2023. GE Digital was established in 2015 under CEO Jeff Immelt with a stated target to become a top ten software company by 2020. Following six years of investment exceeding $4 billion, GE scaled back and subsequently wound down the programme. Post-mortems across multiple analyses cite organisational and cultural misalignment between GE’s industrial business units and the digital initiative as the primary cause of failure.
² Cloud-native transformation outcomes: HFS Research in collaboration with EY, cloud-native transformation study, Q4 2022 and Q1 2023. Survey of 508 senior executives from Forbes Global 2000 enterprises across 11 countries. Published October 2023. Phil Fersht, HFS CEO and Chief Analyst, quoted in CIO Dive, November 2023. Matt Barrington, Emerging Technologies Leader at EY, quoted in the same report.
³ Digital transformation failure rates: McKinsey and Company, Unlocking Success in Digital Transformations, 2018. McKinsey’s subsequent work, including Jon Garcia, Common Pitfalls in Transformations, McKinsey.com, 2022, identified consensus-based target-setting and assumed rather than measured alignment as the most consistent factors in transformation failure across industries.
⁴ Quaie Role Layer Executive Survey, Q1 2026 (n=187). Organisational Adoption Gradient, Consensus Formation Time, and Confidence Gap measured across ten C-suite functions, January to March 2026. The five-point adoption scale runs from 1 (no active AI investment under consideration) to 5 (scaled deployment with measurable business impact across core functions). Points 2 through 4 represent defined intermediate stages: limited experimentation, committed investment with deployment underway, and approaching scaled deployment respectively. The Organisational Adoption Gradient is the arithmetic distance between the mean score of the highest-ranking and lowest-ranking C-suite function in the dataset. 47.1% of respondents are already committed to scaled AI investment or expect to commit within six months. 67.4% cannot confirm that AI is creating durable economic value. Full methodology.



