How Organisations Actually Reach Consensus on AI
There is a moment in most AI adoption processes when experimentation gives way to commitment. Budget is allocated. Ownership is assigned. The organisation decides to act.
That moment is rarely a decision in any formal sense. It is the result of a process that unfolds across roles, over time, and usually more slowly than anyone involved would prefer. Understanding how that process works is important because most organisations act as though it has already completed when it hasn’t.
A board-level endorsement is treated as alignment. A successful pilot is treated as proof of readiness. A CTO’s conviction is treated as organisational confidence. In practice, none of these are consensus. They are inputs to a process that may or may not have run its course. The gap between these inputs and genuine cross-functional agreement is where many AI initiatives quietly come apart.
The consequences are well documented in retrospect, though rarely diagnosed correctly at the time. Initiatives that launched with enthusiasm and stalled at the budget review. Programmes that secured investment but lost momentum when a second function failed to engage. Deployments that succeeded technically but were abandoned because the organisation could not agree on who owned the outcomes or how to measure them. In each case, the post-mortem tends to focus on execution. The tool wasn’t right. The team wasn’t ready. The business case was weak. What is less often acknowledged is that the organisation committed before confidence had converged across the roles that needed to support the commitment. The problem was not execution. It was timing.
Zillow’s iBuying collapse illustrates the pattern at scale. The company committed $3.75 billion in credit facilities and a $20 billion revenue target to an algorithmically driven home-purchasing programme. The technology worked. The models were sophisticated. But the internal alignment required to operate a programme of that ambition, across pricing, risk management, operations, and market assessment, had not formed. When the models failed to account for market conditions that operational roles had flagged, Zillow lost $421 million in a single quarter, cut 25 per cent of its workforce, and exited the business entirely.¹ The failure was not technical. It was premature commitment at organisational scale, action taken before the roles responsible for sustaining it had reached shared conviction about the conditions under which it could work.
Quaie’s Q1 2026 fieldwork across ten executive roles is designed to provide a direct view of where consensus currently sits among senior decision-makers. The hypothesis is that it has not yet formed in most organisations. Among the signals the fieldwork is designed to surface: whether “too early to tell” emerges as the modal response on value confidence, whether high confidence concentrates almost entirely among roles already at scaled deployment, and whether mean confidence and preparedness scores across all roles sit materially below the midpoint of the scale. The Organisational Adoption Gradient, the distance between the most confident and least confident roles, is expected to be wide enough to confirm that enterprise-level averages are concealing the divergence that actually determines whether commitment is rational.
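The gradient itself is simple arithmetic: the gap between the highest and lowest role-level confidence scores. The sketch below illustrates the idea only; the role names and the 1–5 confidence scale are hypothetical examples, not Quaie's actual instrument.

```python
def adoption_gradient(confidence_by_role):
    """Return the gradient (max minus min confidence) and the roles at each end."""
    top = max(confidence_by_role, key=confidence_by_role.get)
    bottom = min(confidence_by_role, key=confidence_by_role.get)
    return confidence_by_role[top] - confidence_by_role[bottom], top, bottom

# Hypothetical scores on an illustrative 1-5 confidence scale.
scores = {"CTO": 4.2, "COO": 3.1, "CMO": 2.6, "CFO": 2.0}
gap, leader, laggard = adoption_gradient(scores)
print(f"Gradient: {gap:.1f} ({leader} leads, {laggard} lags)")
```

Note that the mean of these hypothetical scores sits near the midpoint of the scale (about 3.0) while the gradient is 2.2: exactly how an enterprise-level average can conceal the divergence the essay describes.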
If those patterns hold, they will not describe organisations on the verge of coordinated action. They will describe organisations in the pre-consensus phase, where experimentation is active and interest is high but shared conviction has not converged to the point where committing significant capital, restructuring teams, or scaling across functions is rational.
This does not mean all action is premature. The distinction matters and is often lost in the urgency narrative that surrounds AI adoption. Localised experimentation within a single function, where the role has authority and short feedback loops, is rational even in the absence of broader consensus. A CTO running AI tooling in engineering does not need the CMO’s agreement to proceed. A COO testing AI-assisted operations does not need finance to sign off on a pilot budget.
What requires consensus is the next tier of commitment. Scaling across functions. Allocating capital that draws on shared budgets. Restructuring workflows that span multiple roles. Hiring or reorganising teams around AI as a core operating capability. These decisions require a degree of cross-role agreement that early-stage fieldwork consistently suggests does not yet exist in most organisations. Proceeding without it is not bold. It is premature, and the costs tend to surface in ways that are difficult to reverse.
This matters because the costs of premature action and delayed action are not symmetric. The asymmetry is under-appreciated and worth examining carefully.
Acting too early, before alignment has formed across the roles that need to support a commitment, tends to produce initiatives that are technically functional but organisationally unsupported. They survive as long as their internal champion drives them forward. When that champion moves on, or when the initiative requires buy-in from a function that was never genuinely convinced, it stalls. The cost is not just the failed investment itself. It is the erosion of confidence across the organisation. People remember the initiative that was launched before the ground was ready. That memory makes the next initiative harder to advance, the next budget harder to secure, the next cross-functional conversation more cautious. Premature action does not just fail in the present. It taxes the future.²
Acting too late, after consensus has formed and competitors have moved, carries opportunity cost. But that cost is typically bounded and recoverable. An organisation that enters a market or adopts a capability six months after its competitors can still compete effectively. The advantage lost by waiting is real but rarely existential. An organisation that enters before it is internally aligned may spend those six months not competing but unwinding a premature commitment, resolving the internal friction it created, and rebuilding the confidence it eroded.
The prevailing narrative around AI adoption emphasises urgency. Move fast. Don’t get left behind. First-mover advantage. These pressures are real, and they are felt acutely in boardrooms and leadership teams. But they tend to compress the pre-consensus phase rather than support it. Organisations feel pressure to act before they have established the internal conditions that make action sustainable. The urgency is externally imposed. The readiness, or lack of it, is internal.
The signals that genuine consensus is forming are observable, if you know what to look for. Confidence begins to converge across roles rather than concentrating in one, visible in Quaie’s Role Shift Index as the positions of different roles move closer together on the adoption spectrum. The language in internal discussions shifts from “testing” and “exploring” to “planning” and “resourcing.” Assumptions about impact, cost, and responsibility narrow rather than continue to diverge. Ownership questions get resolved rather than deferred to the next quarterly review. The Role Lead-Lag Rankings between key pairings, CTO and CFO, CMO and COO, begin to narrow rather than widen. The Role Alignment Map provides a direct read on whether this convergence is genuine: not just whether roles are moving along the adoption spectrum at similar speeds, but whether they are forming a shared interpretation of AI’s strategic priorities and ownership.
The signals that action remains premature are equally observable. One function pushes for scale while others remain unconvinced. Responsibility for outcomes is contested or deliberately left vague. Budget is allocated on the basis of momentum rather than agreement. The organisation describes itself as “committed to AI” while key roles privately describe themselves as “still evaluating.” There is a gap between the public position and the internal reality, and that gap is where premature commitment lives. The Role Influence Index is relevant here too: where one role carries disproportionate influence over adoption decisions, the organisation is at particular risk of mistaking that role’s conviction for collective readiness. A highly influential CTO who pushes for scale before other roles have converged is not leading consensus formation; the result is an organisation that sounds committed while several of the roles that matter most to sustained execution remain privately unconvinced.
The Q1 fieldwork is designed to establish which set of signals is more prevalent across the cohort, and whether the pre-consensus pattern anticipated here holds in practice. Consensus takes time. It forms through exposure, evidence, and repeated conversation across roles. Quaie’s Consensus Formation Time is designed to estimate how many quarters that convergence will take, giving leaders a forward-looking view of their decision timeline rather than a backward-looking account of what has already been deployed. The question is not whether that time is passing, but whether it is being used to build alignment deliberately or allowed to slip by while the distance between roles widens without anyone measuring it.
Knowing where your organisation sits relative to genuine consensus is more useful than knowing how fast it is moving. Speed without convergence is not progress. It is exposure.
This essay is part of Quaie’s Founding Essay Series, examining how organisations decide to adopt AI role by role, over time.
Notes and Sources
¹ Zillow iBuying collapse: Zillow Group public filings, earnings calls, and financial reporting, 2019–2021. $20 billion revenue target, $3.75 billion in credit facilities, Q3 2021 loss of $421 million, 25 per cent workforce reduction (approximately 2,000 employees): Zillow Group SEC filings and earnings transcripts. Rich Barton’s statements on earnings calls: public record. See also Chapter 7 of The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) for extended analysis.
² Asymmetric costs of premature versus delayed action: The pattern is consistent with historical evidence from ERP implementations, where premature commitment, scaling before cross-functional alignment had formed, produced cost overruns averaging twice the original budget and implementation timelines extending from 18 months to 3–5 years (Panorama Consulting Group, annual ERP reports, 2010–2020). See also Quaie’s essay “What ERP Taught Us About AI and What Leaders Have Already Forgotten” for extended analysis of the parallel.
Quaie’s constructs referenced in this essay (the Organisational Adoption Gradient, Role Lead-Lag Ranking, Role Shift Index, Role Alignment Map, Role Influence Index, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.