The Most Expensive Mistake in AI Adoption Is Moving Before You’re Aligned
The most expensive mistake in AI adoption is not choosing the wrong tool. It is not underinvesting. It is not moving too slowly. It is committing before the roles responsible for sustaining that commitment have reached shared conviction about what they are committing to, and then discovering, quarters later, that the initiative was running on one function’s confidence and another function’s compliance.
This pattern is so consistent, across so many organisations and so many technology cycles, that it deserves to be treated as structural rather than incidental. Premature commitment is not a failure of ambition or intelligence. It is the predictable result of an organisation mistaking executive conviction for organisational alignment, and deploying capital against the first while the second has not yet formed.
The costs are asymmetric in a way that most leadership teams underestimate.
Acting too late, waiting for alignment when competitors have already moved, carries opportunity cost. But that cost is typically bounded and recoverable. An organisation that enters six months behind its peers, with broad internal alignment and clear evidence, can close the gap more quickly than an organisation that entered early with fractured conviction. The history of enterprise technology adoption supports this. The early movers in cloud migration, ERP deployment, and digital transformation were not always the winners. Frequently, the organisations that entered second or third, having learned from the pioneers’ mistakes and built broader internal consensus before committing, achieved better outcomes at lower cost.¹
Acting too early carries a different kind of cost, and it compounds. The initiative launches with one function’s conviction and another’s compliance. Early friction is interpreted as an implementation problem rather than a misalignment problem. Resources are committed to fix what appears to be a technical challenge but is actually a coordination challenge. By the time the real constraint is recognised, the organisation has spent capital, burned political goodwill, and, most damagingly, created the impression among sceptical roles that their concerns were justified all along. The next AI initiative starts not from zero but from a deficit of trust.
A missed opportunity leaves the organisation where it was. A failed commitment leaves it somewhere worse, with a depleted budget, a sceptical workforce, and a leadership team whose credibility on AI has been damaged. Recovery from the first requires only a decision. Recovery from the second requires rebuilding conviction that was destroyed by the last attempt.
This is not hypothetical. Two of the most consequential AI failures of the past decade illustrate the pattern at different scales and in different domains, and both are instructive precisely because the technology worked. The failures were organisational.
Zillow committed $3.75 billion in credit facilities and a $20 billion revenue target to an algorithmically driven home-purchasing programme. The concept was sound. The Zestimate platform, trained on millions of home sales, would predict property values, and Zillow would purchase homes directly, renovate, and resell at a profit. The CEO, Rich Barton, set the pace. The board supported it. The market rewarded it. The workforce grew 32 per cent in nine months.²
But the conviction was not shared evenly across the functions that needed to sustain it. The data science team knew the algorithm’s limitations. The Zestimate had been designed to estimate current market value, a fundamentally different problem from predicting what a home would sell for three to six months later in a market that might have shifted. The operations team was struggling with capacity: labour shortages and supply chain disruptions meant properties sat in inventory longer, increasing exposure to market shifts. The finance function was absorbing the risk: between July and October 2021, revolving credit facilities expanded from $1.5 billion to $3.75 billion, and as late as seventeen days before the purchasing pause, Zillow issued a $700 million debt note toward the programme.
Most tellingly, managers on the ground were manually overriding the algorithm to buy more aggressively, not because they were reckless, but because the growth target demanded volume, and volume required winning bids, and winning bids required paying more than the model recommended. The machine learning system’s guardrails were being bypassed not by a technical failure but by an organisational one.
When the market shifted in Q3 2021, the result was a $421 million loss in a single quarter, the complete shutdown of the iBuying division, and the elimination of 25 per cent of the workforce, roughly two thousand people. Nearly $10 billion in market capitalisation was erased in days.³
The conventional diagnosis frames this as an algorithm problem. Barton himself used this language. But the evidence does not support it as the primary cause. Opendoor and Offerpad, operating in the same markets with similar algorithmic approaches, navigated the same period without comparable losses. Opendoor reported a profitable Q3 with positive margins. The algorithm was not the distinguishing factor. The organisational conditions surrounding the algorithm were. One function’s conviction had outrun the alignment of every other function required to sustain it. The gradient between the CEO’s growth ambition and the data science team’s confidence, the operations team’s capacity, and the finance team’s risk tolerance was steep, and there was no instrument in place to make that gradient visible before capital was deployed against it.
IBM Watson Health tells the same story at a different scale and over a longer timeline.
Between 2015 and 2016, IBM spent approximately $4 billion acquiring Truven Health Analytics, Merge Healthcare, Explorys, and Phytel, firms whose combined datasets covered hundreds of millions of patient records, insurance claims, clinical data, and medical imaging. The strategic logic was compelling: expose Watson’s cognitive computing capabilities to massive healthcare data, and patterns invisible to human clinicians would emerge. The division grew to 7,000 employees. The ambition was explicit: transform cancer treatment, democratise elite medical expertise, reshape how medicine was practised globally.⁴
The commitment was driven by executive conviction. IBM’s leadership, buoyed by Watson’s Jeopardy! performance and early research partnerships with Memorial Sloan Kettering, believed the technology was ready for clinical deployment at scale. The capital followed that conviction: billions in acquisitions, thousands of new hires, partnerships with hospitals across multiple countries.
What the leadership could not see, because no instrument existed to make it visible, was the gradient between that conviction and the readiness of every other function required to sustain the commitment. The clinical function had not validated Watson’s recommendations at the standard required for medical practice. A 2017 investigation found internal documents describing unsafe and incorrect treatment recommendations. MD Anderson Cancer Center’s partnership alone cost $62 million before being shut down, with audits revealing the system had been trained on outdated data.⁵ The data science function knew the limitations: Watson’s ability to process structured genetic data was genuine, but its capacity to interpret the unstructured complexity of clinical medicine was nowhere near what the sales function was promising. The $4 billion in acquired data was never successfully integrated; the companies sat in separate systems, with different formats, different quality standards, and different clinical contexts. And the compliance function was not in a position to provide the oversight that clinical AI required, in one of the most heavily regulated domains in any economy.
IBM sold Watson Health to Francisco Partners in 2022 for approximately $1 billion.⁶ But the financial loss, while substantial, was not the deepest cost. Watson Health became the reference case for AI overreach in healthcare. Hospitals that had invested in Watson partnerships carried scepticism into every subsequent AI conversation. The broader sector’s appetite for AI adoption was dampened for years, not because the technology lacked potential, but because the most prominent commitment had been premature, and the failure was public.
This is the compounding mechanism. Premature commitment does not just fail in the present. It taxes the future. It makes the next initiative harder to advance, the next budget harder to secure, the next cross-functional conversation more cautious. The organisation does not return to its starting position. It starts from a deficit.
Early signals from Quaie’s Q1 2026 fieldwork across ten executive roles show the pre-conditions for this pattern already present in the cohort, not at Zillow or IBM scale, but in the structural dynamics that produce the same outcome.⁷ The most common response on value confidence was “too early to tell.” Only a minority reported high confidence, and that confidence concentrated almost entirely among roles already at scaled deployment. The Organisational Adoption Gradient, the distance between the most confident and least confident roles, was wide. CTOs reported high confidence and advanced deployment. CMOs reported low confidence and early experimentation. CFOs flagged insufficient evidence for capital commitment. CHROs raised workforce readiness concerns that no other role had addressed.
The roles signalling most appetite for accelerated commitment were the CTO and CEO, the functions closest to the technology’s potential and the strategic pressure to act on it. The roles most likely to report carrying unresolved risk from commitments already made were the CFO and CHRO. The CFO’s risk was financial: expenditure committed against returns that had not yet materialised. The CHRO’s risk was human: workforce implications arriving from automation decisions made by other functions, without the planning inputs needed to manage them.⁸ Neither had set the pace. Both were absorbing consequences that originated elsewhere in the organisation.
This is the Zillow anatomy at more modest scale. One function’s conviction outrunning the alignment of the others. The Role Alignment Map makes this directly measurable: not just where roles sit on the adoption spectrum, but whether they share a common interpretation of AI’s strategic priorities and ownership. Early signals confirm that for most organisations in the cohort, that shared interpretation has not yet formed. The Role Influence Index adds a further dimension: the roles driving adoption decisions were not always the roles best positioned to assess the full organisational risk. A highly influential CTO or CEO pushing for scale before the CFO, CHRO, and CLO have reached equivalent conviction is not leading consensus formation. The organisation may believe it is committed while the functions that carry the largest unresolved risks remain privately unconvinced. The conditions for failure form before any outcome is visible, in the gap between roles that current enterprise-level intelligence cannot see.
The question for any leadership team sitting in the pre-consensus phase is not whether to invest in AI. It is whether the investment matches the organisational conditions, or whether capital is being deployed against a conviction that has not yet become shared.
The diagnostic is not complicated. In which functions has AI usage become predictable and owned, and in which is it still dependent on individual champions? What are the specific blockers each role cites, and are they the same concerns, or fundamentally different ones requiring different responses? Is the gap between the most advanced and least advanced roles narrowing or widening? If the budget doubled tomorrow, which functions could absorb it productively, and which would convert it into activity without durability?
These are uncomfortable questions because they surface disagreements that leadership teams often prefer to leave implicit. They are also the questions that determine whether capital allocation produces compounding value or compounding waste.
The most expensive mistake in AI adoption is not moving too slowly. It is moving before you know whether the roles responsible for sustaining the commitment are aligned. Zillow did not lack ambition. IBM did not lack investment. Both lacked a way of seeing, before they committed, how far apart the functions required to sustain that commitment actually were.
The distance between conviction in the room and alignment across the organisation is where the most consequential risk sits. Measuring that distance, before capital is deployed against it, is not caution. It is the most rational investment an organisation can make.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and Sources
¹ Second-mover advantage in enterprise technology adoption: The pattern of later entrants achieving better outcomes is documented across ERP, cloud migration, and digital transformation cycles. Panorama Consulting Group’s longitudinal ERP reports (2010–2020) found that more than 70 per cent of ERP implementations failed to meet objectives, with organisations that sequenced adoption by function and built cross-functional alignment before committing achieving materially better outcomes. McKinsey Digital estimated $100 billion in failed cloud migrations (“Cloud’s trillion-dollar prize is up for grabs,” February 2021), with premature enterprise-wide commitment a recurring factor.
² Zillow iBuying programme: Zillow Group public filings, SEC filings, and earnings call transcripts, 2019–2021. $20 billion revenue target: Rich Barton’s statements on earnings calls. 32 per cent workforce growth in nine months: Zillow Group reporting. Credit facility expansion from $1.5 billion to $3.75 billion, and $700 million debt note issued 1 October 2021 (seventeen days before purchasing pause): Zillow Group SEC filings.
³ Zillow iBuying collapse: Q3 2021 loss of $421 million on Zillow Offers. Complete shutdown of iBuying division. Elimination of approximately 2,000 employees (25 per cent of workforce). Market capitalisation loss of approximately $10 billion. Source: Zillow Group Q3 2021 earnings release and SEC filings. Opendoor reported positive margins in the same quarter: Opendoor Technologies Q3 2021 earnings.
⁴ IBM Watson Health acquisitions: Approximately $4 billion in acquisitions, 2015–2016. Truven Health Analytics ($2.6 billion), Merge Healthcare ($1 billion), Explorys, and Phytel. Division grew to approximately 7,000 employees. Source: IBM public filings, SEC filings, and press reporting.
⁵ IBM Watson Health clinical failures: Internal documents describing unsafe and incorrect treatment recommendations: STAT investigation, 2017. MD Anderson Cancer Center partnership, $62 million spent, shut down 2017: University of Texas audit. Multiple partners scaling back or discontinuing oncology projects by 2018: reported across healthcare and technology press.
⁶ IBM Watson Health sale: Sold to Francisco Partners for approximately $1 billion, reported January 2022. Source: Wall Street Journal, Bloomberg, and IBM public announcement.
⁷ Quaie Q1 2026 fieldwork: Early signals from confidence, preparedness, adoption stage, and perceived blocker data across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). High confidence concentrated among roles at scaled deployment. “Too early to tell” the most common response on value confidence.
⁸ Role-level risk distribution: Early fieldwork signals indicate CTO and CEO roles most likely to advocate for accelerated commitment, with CFO and CHRO roles most likely to report carrying unresolved risk from commitments made by other functions. Blocker distribution: ROI uncertainty (CEO, CMO), integration complexity and security (CTO), insufficient evidence for capital commitment (CFO), workforce readiness (CHRO), regulatory exposure (CLO). Source: Quaie Q1 2026 fieldwork, early signals.
Quaie’s six analytical constructs referenced in this essay (the Organisational Adoption Gradient, Role Alignment Map, Role Influence Index, Role Shift Index, Role Lead-Lag Ranking, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.