Where Misalignment Blocks AI Progress
When AI initiatives stall, the instinct is to look for technical failure. The model underperformed. The data wasn’t ready. Integration proved harder than expected.
In practice, the more common cause is quieter. Different roles reached different conclusions about the same initiative, and nobody surfaced the gap until momentum was already gone.
This pattern is so consistent that it deserves to be treated not as an occasional setback but as a structural feature of how organisations adopt AI. Misalignment between roles is not a failure of communication or leadership. It is the predictable result of different functions evaluating the same situation through different lenses, with different risk tolerances, different time horizons, and different definitions of what success looks like. The CTO evaluating technical feasibility is asking a different question from the CMO evaluating commercial impact, which is a different question from the CFO evaluating capital justification, which is a different question again from the CEO evaluating strategic exposure. Each question is legitimate. Each produces a different answer. And those answers diverge before anyone in the organisation realises they have.
The challenge is that misalignment is hard to see from the inside. Each role’s position feels internally coherent. The CTO who has moved into production use sees an organisation that is making progress and wonders why other functions aren’t keeping pace. The CMO who is still evaluating sees an organisation that is moving too fast without sufficient proof of commercial value and wonders why technical teams seem indifferent to that concern. The CFO sees capital being committed without the evidentiary standard they would apply to any other investment of equivalent scale. The CHRO sees workforce implications that nobody else has raised. The General Counsel sees regulatory exposure that the operational roles have not accounted for. The CEO who senses tension between them may not be able to locate exactly where the gap sits, how wide it has become, or whether it is narrowing or widening over time.
Each of these perspectives is rational. That is precisely what makes the problem so persistent. Misalignment does not feel like misalignment from the inside. It feels like other people not understanding the situation as clearly as you do.
From the outside, looking across roles simultaneously, the picture is different.
Quaie’s Q1 2026 fieldwork is designed to make that picture visible across ten executive roles. The hypothesis is that divergence between roles will emerge as the dominant pattern across nearly every measure captured. CTOs at scaled deployment are expected to report both high confidence and high preparedness. CMOs at the experimentation stage are likely to report substantially lower scores on both. The gap between the most advanced and least advanced roles within the same cohort may prove wider than the gap between adoption stages, suggesting that role context shapes readiness more than organisational maturity does. Quaie’s Organisational Adoption Gradient is designed to quantify this spread precisely, making visible the divergence that enterprise-level averages conceal.
The blocker distribution is likely to tell the same story from a different angle. ROI uncertainty is expected to concentrate among CEO and CMO roles. CTOs are more likely to cite integration complexity and security concerns. CFOs are likely to flag insufficient evidence for capital commitment. CHROs are expected to raise workforce readiness questions that no other role has addressed. General Counsel is likely to cite regulatory uncertainty and liability exposure.¹
If that pattern holds, the organisation is not facing one shared constraint that can be addressed with a single intervention. It is facing several simultaneously, distributed unevenly across the people responsible for resolving them. The CTO wants to solve integration problems. The CMO wants to see ROI evidence. The CFO wants both resolved before releasing budget. The CHRO wants to know what happens to the workforce. General Counsel wants governance in place before deployment expands. The CEO wants all of these answered before committing further capital. None of them is wrong. But the absence of a shared view of where these concerns sit relative to each other means the organisation oscillates between priorities rather than converging toward a resolution. This is precisely the condition the Role Alignment Map is designed to surface: not merely where roles sit on the adoption spectrum, but whether the leadership system shares a common interpretation of AI’s strategic priorities, ownership, and direction.
This is what makes misalignment so resistant to the usual fixes. It is not a single disagreement that can be resolved in a meeting or a workshop. It is a set of parallel concerns, each legitimate, each owned by a different function, each pulling the organisation in a slightly different direction. A steering committee can coordinate activity. It cannot manufacture shared conviction where conviction has not yet formed.
The conventional response to this kind of friction is to push harder. Escalate decisions. Set deadlines. Create accountability structures. These interventions sometimes produce movement in the short term, but they tend to compress disagreement rather than resolve it. Roles comply without aligning. Activity continues without conviction. The initiative moves forward on paper while confidence remains fragmented underneath.
The result is a pattern that most leadership teams will recognise: an AI programme that looks healthy by activity metrics but stalls when it reaches a decision point that requires genuine cross-functional commitment. Budget review. Scaling decision. Governance sign-off. These moments expose whether alignment is real or performative, and the answer often surprises the people involved. The programme that everyone assumed was on track turns out to have been running on one function’s conviction and another function’s compliance.
Goldman Sachs offers an instructive case. Under CIO Marco Argenti, Goldman took a deliberately sequenced approach to AI adoption, building internal infrastructure, testing tools within contained functions, and declining to scale until the organisation’s own evidence supported it. Nearly a year after ChatGPT’s launch, Goldman had zero production generative AI use cases. This was not inertia. It was a deliberate decision to wait for alignment to form across functions before committing at scale. The same institution’s macro research division, meanwhile, published a widely cited report questioning whether AI spending across the industry would ever generate adequate returns.² The tension between these two positions, operational caution and analytical scepticism housed within the same firm, illustrates exactly how misalignment manifests even in organisations that are managing it deliberately.
What makes misalignment measurable rather than merely observable is that its signals appear early. They do not wait for a failed deployment or a missed milestone to become visible. Divergence between roles, both in confidence and in the blockers they cite, is likely to be present during pilots, well before any organisation has attempted full integration. The friction that will slow or stall future deployment decisions is forming in the gap between how different roles are experiencing the same early-stage initiatives. The Role Lead-Lag Ranking is designed to track whether roles are converging toward shared conviction or pulling further apart. The Role Influence Index adds a further dimension: where one role exerts disproportionate influence over adoption decisions, misalignment between that role and its functional dependents carries greater organisational weight than the same gap between lower-influence roles. Understanding which roles are acting as gatekeepers or validators helps identify where unresolved divergence is most likely to stall progress.
This has implications for how organisations assess their own readiness. Most AI readiness assessments operate at the organisational level: does the company have the data, the tools, the talent, the budget? These are necessary conditions. They are not sufficient ones. An organisation can have all four and still stall if the roles responsible for acting on them do not share a common assessment of risk, value, and timing. Readiness that exists in one function but not in others is not organisational readiness. It is local capability masquerading as collective preparedness.
The more useful diagnostic is role-level. Where has one function moved ahead of shared agreement? Which roles are carrying risk that other roles have not acknowledged? Is budget being allocated on the basis of genuine convergence, or on the basis of one function’s conviction outweighing another’s hesitation? Is the organisation describing itself as aligned because alignment has been measured, or because nobody has asked the question directly? The Role Shift Index provides the baseline for each of these questions, mapping where each role sits on the adoption spectrum and making visible the gaps that enterprise-level metrics compress away.
These questions are uncomfortable because they surface disagreement that organisations prefer to leave implicit. But implicit disagreement does not resolve itself. It compounds. The gap between a CTO’s confidence and a CMO’s scepticism does not close on its own. It tends to widen, because each role continues to accumulate evidence that confirms its own position. The CTO sees the tool working and becomes more confident. The CMO sees the absence of commercial proof and becomes more sceptical. The CFO sees budget flowing without the returns that would justify it and becomes more cautious. Each is responding rationally to the evidence available to them. The problem is not that any of them is wrong. It is that none can see the others’ evidence clearly enough to update their own view.
Misalignment is not a problem to be eliminated. It is a natural phase of adoption that every organisation passes through on the way from experimentation to commitment. The question is not whether it will appear, but whether it will be surfaced and managed deliberately, or left to harden into a structural constraint that no amount of technical capability can overcome.
The organisations that stall are not usually the ones that lack ambition or talent. They are the ones where friction between roles went unacknowledged long enough to become the defining constraint. Seeing where that friction sits is the first step toward resolving it.
This essay is part of Quaie’s Founding Essay Series, examining how organisations decide to adopt AI role by role, over time.
Notes and Sources
¹ Blocker distribution by role: The anticipated pattern of ROI uncertainty concentrating among CEO and CMO roles, with integration concerns among CTOs and evidentiary concerns among CFOs, is consistent with BCG AI Radar 2025 (January 2025, 1,803 C-level executives), which found that approximately 70 per cent of AI challenges stem from people, processes, and cultural change rather than technology, and with McKinsey Global Survey on AI (2024), which identified trust and explainability concerns as primary barriers among non-technical leadership roles.
² Goldman Sachs AI adoption: Goldman’s deliberate sequencing under CIO Marco Argenti reported in Financial Times, Bloomberg, and Goldman Sachs technology division communications, 2023–2025. Zero production generative AI use cases nearly a year after ChatGPT launch: Argenti’s public remarks. Goldman Sachs Global Investment Research report “Gen AI: Too much spend, too little benefit?” published June 2024. The coexistence of operational caution and analytical scepticism within the same institution illustrates structured misalignment management.
Quaie’s constructs referenced in this essay (the Organisational Adoption Gradient, Role Lead-Lag Ranking, Role Shift Index, Role Alignment Map, and Role Influence Index) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.