Which Roles Lead AI Adoption and Which Follow
Inside most organisations, AI adoption does not move as a single wave. It moves in sequence. One role begins. Others observe. Some follow when they see evidence. Some wait longer. The order is less random than it appears.
There is a persistent assumption in how organisations talk about AI adoption: that it is, or should be, a coordinated effort. Leadership sets a direction. Teams execute. Progress is measured collectively. When adoption stalls, the diagnosis tends to focus on resistance, or lack of vision, or insufficient investment. The possibility that the stall is a sequencing problem rather than a commitment problem is rarely considered.
But adoption has a natural sequence. Certain roles move first because their context makes early action rational. Others hold back because their context makes caution rational. Neither group is wrong. They are responding to different signals, operating under different constraints, and evaluating risk against different criteria. The question is not how to get everyone moving at the same speed. It is how to understand the order in which roles naturally engage, and to work with that order rather than against it.
Quaie’s Q1 2026 fieldwork is designed to make this sequence visible across ten executive roles. The hypothesis is that CTO and CIO roles will show the most advanced adoption stages, with the highest proportion reporting limited production use or scaled deployment. COO and CDO roles are likely to show similar forward positioning, operationally close to workflows where AI creates immediate leverage. CEO roles are expected to cluster at experimentation. CMO roles may show the widest variance of any group, ranging from no active initiatives at one end to scaled deployment at the other. Quaie’s Role Lead-Lag Ranking tracks the temporal distance between roles as they move through adoption stages, revealing whether the organisation is converging toward shared conviction or diverging away from it.
The instinct is to read this as a performance ranking. CTOs are ahead. CMOs are behind. CEOs need to catch up. But that reading misses what the pattern is actually showing. It is not a league table. It is a map of how adoption propagates through an organisation, shaped by the structural characteristics of each role.
CTOs move first because they sit closest to operational leverage and risk containment. They control technical infrastructure or own digital workflows directly. Their feedback loops are short: when they experiment with AI tooling, they can observe results within days or weeks, adjust their approach, and decide whether to continue or stop. When an experiment fails, the cost is contained within their function. They do not need cross-functional approval to iterate. This combination of direct authority, fast feedback, and contained downside makes early action rational for these roles in a way that it simply isn’t for others.¹ This structural advantage is also what the Role Influence Index captures: the CTO’s direct ownership of tooling decisions positions the role as a primary catalyst in the adoption sequence, with outsized influence over the pace at which the wider leadership system moves.
CEOs cluster at experimentation not because they are slow or disengaged, but because their role in the adoption process is fundamentally different. A CEO’s job at this stage is not to initiate adoption. It is to validate it. They need to see proof from the roles closer to operations before committing the organisation’s direction and capital. A CEO who moves ahead of that proof is taking a different kind of risk from a CTO who experiments within their own function. The CTO risks a failed tool. The CEO risks a failed strategy. The asymmetry explains the difference in pace, and it is entirely rational on both sides.
CMOs are among the most structurally interesting cases in the sequence. The wide variance anticipated in the Q1 fieldwork reflects the fact that the CMO’s position in the sequence is not fixed. It depends on context.
In some organisations, the CMO is an early mover. This tends to happen where marketing automation and customer personalisation create direct operational leverage, where the CMO has strong control over the relevant workflows, and where the feedback loops between AI-assisted activity and measurable outcomes are relatively tight. In these contexts, the CMO looks more like a CTO: close to the workflow, able to iterate quickly, positioned to see results.
In other organisations, the CMO is a follower. This tends to happen where creative work is central to the marketing function, where authority over workflows is shared with agencies and external partners, and where the outcomes that matter most are difficult to attribute cleanly. In these contexts, the CMO is waiting for evidence from technical teams before committing. This is not hesitation. It is a different role context producing a different, and perfectly rational, position in the sequence.
CFOs occupy a structurally distinct position. They are rarely early movers in AI adoption, not because finance is conservative by nature, but because the CFO’s decision criteria require evidence that does not yet exist when early movers are experimenting. A CFO evaluating an AI investment applies the same evidentiary standard they would apply to any capital allocation of equivalent scale. Until the roles closer to operations have stabilised and produced measurable returns, the CFO’s caution is not a blocker. It is a rational response to insufficient proof.²
CHROs and General Counsel sit further back still. Their concerns (workforce displacement, reskilling requirements, regulatory exposure, liability) are legitimate and largely unaddressed by the roles moving ahead of them. The CHRO cannot evaluate AI’s workforce implications until the operational roles have clarified what AI will actually be used for. General Counsel cannot assess regulatory risk until the scope of deployment is visible. These roles are structurally dependent on earlier movers for the inputs they need to act. Treating their position as resistance misreads the sequence entirely.
This has practical consequences for how organisations plan AI rollout.
The most common mistake is attempting to move all roles simultaneously. A board-level directive to “accelerate AI adoption” creates pressure across every function at once. But the functions are not equally positioned to respond. Technical roles may already be in production use. Commercial roles may still be evaluating feasibility. Finance may be waiting for evidence of durable value that doesn’t exist yet because the roles that would produce it haven’t finished stabilising. HR and Legal may be waiting for clarity on scope that the operational roles have not yet provided.
When pressure is applied uniformly, it doesn’t accelerate adoption. It creates friction. Roles that are not ready to move are forced into activity that lacks conviction. Roles that have already moved feel constrained by functions that haven’t caught up. The organisation experiences a sense of stalling that has nothing to do with capability and everything to do with attempting to run a relay as a sprint.
The alternative is to recognise the natural order and work with it.
This means funding the roles that are ready to move and letting them generate proof. It means understanding that proof, not instruction, is what pulls follow-on roles forward. A CTO who has stabilised AI use in engineering creates evidence that a CFO can evaluate against financial criteria. A COO who has moved into production use creates a reference point that a CMO in a different context can learn from. The proof generated by early movers reduces uncertainty for the roles that follow. It gives them something concrete to assess rather than a strategic narrative to trust on faith.
Pull beats push. Adoption accelerates when leading roles generate evidence that followers can use. Forced rollout reverses this dynamic and turns a sequencing challenge into a political one.
It also means accepting that lag is not the same as resistance. In many cases, follow-on roles are waiting for legitimate inputs: governance clarity from Legal, budget justification from Finance, workforce transition plans from HR, evidence of durable value from an adjacent function. These are reasonable dependencies. Treating them as obstacles to be overcome, rather than conditions to be met, poisons the relationship between early movers and later adopters and makes future coordination harder.
The organisations that navigate this well tend to share a common trait. They do not try to eliminate the gap between early movers and followers. They make the gap visible. They track which roles have moved, which are waiting, and what those waiting roles need in order to act. The Role Shift Index provides the baseline: where each role sits today. The Organisational Adoption Gradient quantifies the spread between the most advanced and least advanced roles. The Role Lead-Lag Ranking shows whether that spread is narrowing or widening over time. And the Role Alignment Map charts whether the leadership system shares a common interpretation of AI’s strategic direction. That is a distinct question from where each role sits on the adoption spectrum, and it determines whether the relay produces coordinated organisational commitment or simply a collection of isolated functional advances. Together, these constructs treat adoption as a relay where each handoff depends on the previous runner finishing their leg, not as a race where everyone starts at the same time.
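To make the arithmetic behind a spread measure concrete, here is a minimal sketch, not Quaie’s methodology, which is described in the forthcoming book rather than here. It assumes a simple ordinal stage scale; the stage labels are drawn from this essay, but the numeric scores and the helper function are assumptions for illustration only.

```python
# Illustrative only: a hypothetical ordinal scale over adoption stages.
# Stage labels come from the essay; the numeric scoring is an assumption.
STAGES = {
    "no active initiatives": 0,
    "evaluating feasibility": 1,
    "experimentation": 2,
    "limited production use": 3,
    "scaled deployment": 4,
}

def adoption_spread(role_stages: dict) -> int:
    """Spread between the most and least advanced roles in a snapshot
    (the kind of quantity an adoption-gradient measure would capture)."""
    scores = [STAGES[stage] for stage in role_stages.values()]
    return max(scores) - min(scores)

# A hypothetical snapshot of where four roles sit in one quarter.
snapshot = {
    "CTO": "limited production use",
    "CEO": "experimentation",
    "CFO": "evaluating feasibility",
    "CHRO": "no active initiatives",
}
print(adoption_spread(snapshot))  # prints 3
```

Comparing this spread across successive quarterly snapshots is what would show whether the gap between leading and lagging roles is narrowing or widening over time.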
What the Q1 fieldwork is designed to establish is whether the sequence anticipated here holds in practice, how long the lag between roles typically runs, and whether specific sequences produce better outcomes than others. That last question requires multiple quarters of observation, precisely what Quaie’s Consensus Formation Time is designed to estimate as longitudinal data accumulates. The structural logic of who moves first, and why, is already visible. Whether it holds consistently across the cohort is what the data will confirm.
Understanding who moves first, and why, is the beginning of understanding how adoption actually propagates.
This essay is part of Quaie’s Founding Essay Series, examining how organisations decide to adopt AI role by role, over time.
Notes and Sources
¹ CTO as structural early mover in enterprise technology adoption: The pattern of technology-proximate roles leading adoption is consistent with historical precedent. Samsung’s response to ChatGPT in 2023 illustrates the dynamic, with employees in technical roles beginning to use ChatGPT within weeks, uploading proprietary source code and meeting transcripts, before the organisation had established governance. The company subsequently banned the tool, followed by similar restrictions at JPMorgan, Amazon, Bank of America, Deutsche Bank, Goldman Sachs, and Accenture. Reported by Bloomberg, May 2023, and across the financial press.
² CFO evidentiary standards for AI investment: BCG AI Radar 2025 (January 2025, 1,803 C-level executives) found that only 25 per cent of organisations reported significant value from AI, despite 75 per cent ranking it as a top-three priority. The gap between strategic priority and demonstrated value is the evidentiary challenge CFOs face when evaluating AI capital allocation.
Quaie’s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Consensus Formation Time, Role Influence Index, Organisational Adoption Gradient, and Role Alignment Map) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in the preceding essay in this series, “Why AI Adoption Needs a Reference Layer.”