What a £500K Consulting Engagement Cannot Predict About Your C-Suite's AI Position
At some point in the past three years, most enterprise leadership teams commissioned something: an AI strategy review, a readiness assessment, a structured advisory engagement. The brief was reasonable. The team was credible. The process ran for eight to twelve weeks. Fifteen to thirty people were interviewed. A framework emerged. A deck was presented. Recommendations were made.
The deck was useful. The process was not wasted. And the leadership team left the room less certain than the report suggested they should be.
The executives being interviewed knew they were being interviewed. They knew who had commissioned the engagement. They knew what a constructive answer looked like. The CFO who had privately decided that the AI investment case was not yet proven did not say so to a consultant sitting in a room booked by the CEO. The COO who believed the CTO was moving faster than the operating model could absorb did not say so in a structured interview being synthesised into a report the CTO would read. The CMO who had watched three AI pilots fail to reach production described the organisation’s AI ambition in terms that were accurate at the level of intent and misleading at the level of reality.
The pattern repeated across the room. The CHRO described an AI-enabled talent acquisition pilot as a strategic priority. The CMO spoke about personalisation at scale as the next twelve months’ focus. The COO outlined plans to embed AI into three operational workflows by the end of the year. Each answer was true. Each answer was also constructed for the room it was delivered in. The consultant synthesising those answers into a single report had no instrument for detecting the distance between what each role said and what each role actually believed. The report reflected the organisation’s collective account of its AI position. It did not reflect the organisation’s actual state.
This is not dishonesty. It is the entirely rational behaviour of senior leaders operating in a politically complex environment, observed by an external team whose findings would be visible to their colleagues and their board. The consulting engagement captured the organisation’s official position on AI. It did not capture the role-level reality beneath it.
That distinction is not a marginal refinement. It is the difference between a measurement and a performance of measurement. And it is why organisations that commission thorough, expensive AI strategy engagements still find themselves blindsided when the programme they believed was aligned turns out not to be.
What the engagement model cannot reach
The structural limitation is not the consultant’s fault. It is a function of the instrument.
An interview is a social interaction. Social interactions between senior executives and external advisors are governed by well-understood norms around discretion, loyalty, and presentation. A CFO will tell a consultant what the CFO is prepared to have attributed to the CFO. That is categorically different from what the CFO actually believes about the pace of AI investment, the quality of the business case, and the likelihood that the organisation will reach coordinated commitment within the next twelve months.
A survey is different in kind, not just in method. A CFO or COO completing an eight-minute anonymous survey, alone, without their name attached to the response, is not performing for an audience. They are answering the question they were actually asked. The CFO who would never tell a consultant that they rate their confidence in the organisation’s AI strategy at two out of five will tell a survey that. The COO who believes the adoption gradient between their function and the CTO’s function is dangerously wide will say so when the answer goes into a dataset rather than into a slide that will be read in a boardroom.
This is not a methodological preference. It is a structural difference in what each instrument can access. The consulting engagement produces the leadership team’s official account of its AI position. The survey produces the leadership system’s actual state. Those are not the same thing. The distance between them is where AI programmes stall.
The intelligence market is repricing around exactly this gap
That gap is not a secret. The consulting and legacy intelligence industries are confronting it directly, and the financial markets are drawing their own conclusions.
Gartner’s stock has fallen approximately 71% from its November 2024 peak, and its consulting revenue fell 12.8% in the fourth quarter of 2025 alone. Forrester, a company generating close to $400 million in annual revenue, is now valued by the market at approximately $105 million; its strategy consulting bookings fell more than 50% in 2025, and the firm has since exited strategy consulting entirely.¹
The case for the legacy intelligence model rests on proprietary data, original analysis, and direct access to the researchers who produced it. The market is questioning whether the traditional delivery format is still the right vehicle for any of those things.
What is being disrupted is not intelligence. It is intelligence that can be approximated by a well-prompted AI working from publicly available sources. Aggregated, enterprise-level analysis of the kind that Gartner and Forrester have sold for decades is increasingly available for free, or close to it. The question enterprises are beginning to ask is not whether they need intelligence on AI adoption. It is whether the intelligence they are buying tells them something they could not find elsewhere.
Role-level data on how specific C-suite functions are positioned on the adoption spectrum, how far apart they are from each other, and whose evidence requirement is blocking coordinated commitment does not exist in any publicly available source. Neither does the answer to the question every CEO is privately asking: whether the alignment they perceive matches the alignment the CMO and COO are actually experiencing.
It does not exist in Gartner’s research library. It does not exist in McKinsey’s sector surveys. It does not exist in the slide deck from the last consulting engagement. It exists only in a dataset built from direct, anonymous, longitudinal fieldwork with the decision-makers themselves.
What eight minutes produces that twelve weeks cannot
The Q1 2026 Role Layer Dataset is being built with more than 150 senior decision-makers across ten C-suite functions completing an eight-minute survey between January and March 2026. No interviews. No stakeholder maps. No consultant in the room shaping the frame of the question. Each respondent answers alone, anonymously, in the language of their actual position rather than their official one.²
What the dataset shows is not a leadership system moving in one direction. It is a system pulled in different directions: some roles committed, some stalled, some waiting for evidence that has not yet arrived.
Consider what that looks like in practice. A CTO reports being already committed to the next significant AI investment. The CFO, in the same organisation, reports a timeline of twelve to twenty-four months and names financial evidence as the blocker. The CEO, asked separately whether the leadership team is aligned on AI priorities, rates alignment at four out of five. The CFO rates it at two. Those three data points, collected anonymously in eight minutes, describe a leadership system in which the CTO is ready to scale, the CFO is not yet in a position to sanction it, and the CEO believes the gap does not exist. A consulting engagement interviewing all three would return a report describing an organisation with strong AI ambition and broad leadership support. The anonymous survey returns a different picture entirely: a lead role, a lag role, a misperceived alignment score, and a programme that will stall at exactly the point it appears to be succeeding.
What the dataset also reveals is the direction of travel. Which roles are committed to their next significant AI investment within six months. Which are at twelve to twenty-four months. Which have no current plans. The consulting engagement can capture where each role says it is today. It cannot capture the sequencing of commitment across the leadership system, because sequencing requires the same question asked of the same roles across multiple time periods, with answers given in conditions that remove the pressure to perform alignment.
That is what longitudinal anonymous fieldwork produces that a point-in-time engagement cannot. Not a better snapshot. A different kind of intelligence entirely. The consulting engagement tells you what the leadership team says about its AI position. The Role Layer Intelligence System tells you what the leadership system actually is, and where it is going.
The question the engagement cannot answer
Every CEO who has commissioned an AI strategy engagement has, at some point, left the room uncertain whether the alignment the report described was real. Whether the CFO who nodded through the recommendations had privately committed or was still waiting for evidence that had not yet arrived. Whether the COO’s apparent support reflected genuine conviction or the political calculation of a leader who knew which way the wind was blowing.
That uncertainty is not a failure of the engagement. It is the natural consequence of using an instrument that captures official positions rather than actual states. The consulting model was not designed to answer the question: what does each specific role in this leadership system actually believe, right now, about the pace, ownership, and direction of AI investment, and what would need to change for that belief to shift?
It was designed to answer a different question: what does this organisation’s leadership team collectively say when asked by an external advisor?
Those are not the same question. The distance between them is where AI programmes stall, where capital allocation decisions go wrong, and where the gap between the CEO’s conviction and the COO’s implementation reality quietly decides the outcome before anyone has started measuring it.
If you are a C-suite executive and have not yet contributed your perspective to the Q1 dataset, the survey is still open. The role-level findings will be published in The Role Layer Intelligence Quarterly Q1 2026.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and Sources
¹ Gartner and Forrester financial results: Gartner Q4 2025 earnings, reported February 2026. Full year 2025 revenue $6.5 billion, up 4%, with consulting revenue declining 12.8% in Q4 2025. Stock price fell approximately 71% from November 2024 peak to approximately $155. Forrester Q4 2025 earnings, reported February 2026. Full year revenue $396.9 million, down 8% from 2024. Strategy consulting bookings fell over 50% in 2025. Forrester subsequently exited strategy consulting entirely. Market capitalisation approximately $105 million as of February 2026. Source: SaaStr analysis of Gartner and Forrester Q4 2025 earnings, February 2026. Primary sources: Gartner Q4 2025 earnings release, investor.gartner.com, February 2026. Forrester Q4 2025 earnings call transcript, February 2026.
² Quaie Q1 2026 fieldwork: Role Shift Index, Role Lead–Lag Ranking, Consensus Formation Time, Role Influence Index, Organisational Adoption Gradient, and Role Alignment Map measured across ten C-suite functions.