What Your Board Is Missing When It Approves the AI Budget
When a board approves an AI budget, it does so on the best information available to it. Across the 187 senior decision-makers Quaie surveyed in the first quarter of 2026, the intelligence reaching the boardroom came through a consistent set of channels: a technology assessment describing the capability of the tools under consideration, a market comparison showing what peer organisations are committing, and a management recommendation from the leadership team presenting the case for investment. In most organisations, these three inputs constitute the full intelligence picture available to the board at the moment of approval. They are not trivial documents. The process that generates them was built for a different question.
And yet every one of those inputs is structurally blind to the single piece of intelligence that most determines whether the investment will succeed: the state of the leadership system that will be asked to absorb, implement, and sustain it. The technology assessment does not show which C-suite functions are committed to deployment and which are still weighing the evidence. The market comparison does not show whether the organisation’s most advanced roles are aligned with its least advanced ones, or how wide the distance between them has become. The management recommendation reflects what the leadership team collectively presents, which is not the same thing as what the leadership team individually believes. None of these instruments shows the gradient running through the organisation’s own leadership system. The board is approving capital allocation into a system it cannot see.
What boards are currently receiving
The Deloitte Global Boardroom Program surveyed 695 board members and C-suite executives across 56 countries in early 2025 and found that two thirds of boards report limited or no knowledge or experience with AI. Nearly a third say AI is not on the board agenda at all. Only 17 per cent address AI at every meeting. When boards do engage with AI, Deloitte found they engage primarily through two channels: the CIO and CTO, cited by 72 per cent of respondents, and the CEO, cited by over half. Engagement with the CFO, the CRO, and the CISO on AI matters remains limited.¹
That channel structure is itself a diagnosis. A board that receives its AI intelligence primarily through the CIO, the CTO, and the CEO is receiving a view of AI adoption shaped by three of the most consistently optimistic functions in the leadership system. The CIO's and CTO's professional identities, and often their personal convictions, are invested in technology leadership. The CEO is presenting a strategy the board has already endorsed. None are disinterested witnesses to the state of the system. None are positioned to tell the board that the CFO has privately concluded the business case is not yet proven, that the CHRO has unresolved concerns about workforce readiness, or that the COO believes the operating model cannot yet carry what the technology leadership is proposing to build.
No CIO, CTO, or CEO can reasonably be expected to narrate the full system they sit within. The CFO’s private assessment of the return timeline is not visible to the technology leaders. The COO’s concerns about operating model readiness are not part of the CEO’s strategy presentation. The CHRO’s workforce transition planning is a separate workstream from the technology deployment programme the board is being asked to fund. The board receives a consolidated account of the organisation’s AI position. What it does not receive is the unconsolidated reality underneath it: the function-level divergence, the privately held reservations, the evidentiary gaps noted and deferred rather than resolved.
The board is receiving the best information the available channels can produce. Those channels are structurally incapable of showing what the board most needs to see.
The gradient the board cannot see
In the first quarter of 2026, Quaie measured the leadership systems of 187 senior decision-makers across ten C-suite functions in mid-to-large enterprises. The Organisational Adoption Gradient — the distance between the most advanced and least advanced leadership role in the dataset on the five-point adoption scale — was 0.85 points. That number describes, in a single figure, what no board pack currently shows: the spread within the leadership system itself, between the functions pulling toward full deployment and the functions that have not yet crossed the evidentiary threshold required to commit.²
A 0.85-point gradient is not a marginal difference. On a scale where the distance between experimentation and embedded infrastructure spans three points, a leadership system with a gradient approaching one point contains within it functions operating from fundamentally different pictures of the organisation’s AI position. The function at the leading edge believes the organisation is making serious progress. The function at the lagging edge believes the evidence base for full commitment has not yet formed. Both are correct about their own position. Neither can see the other’s position clearly. The board, approving capital allocation into this system, cannot see either.
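The arithmetic behind the gradient is simple to make concrete. The sketch below uses invented per-function scores (they are illustrative only, not figures from the Quaie dataset, though they are chosen so the spread matches the reported 0.85 points) to show how the metric reduces a ten-function leadership system to a single spread on the five-point scale:

```python
# Hypothetical illustration of the Organisational Adoption Gradient:
# the distance between the most and least advanced C-suite function
# on the five-point adoption scale, where 1 = no active AI investment
# and 5 = embedded infrastructure. All scores below are invented for
# illustration; they are not Quaie survey figures.

adoption_scores = {
    "CTO/CIO": 3.90,  # leading edge: committed to deployment
    "CEO":     3.60,
    "CDO":     3.50,
    "CMO":     3.30,
    "COO":     3.20,
    "CISO":    3.20,
    "CRO/CSO": 3.15,
    "CFO":     3.10,
    "CLO":     3.10,
    "CHRO":    3.05,  # lagging edge: evidentiary threshold not met
}

leader = max(adoption_scores, key=adoption_scores.get)
laggard = min(adoption_scores, key=adoption_scores.get)
gradient = adoption_scores[leader] - adoption_scores[laggard]

print(f"Leading function: {leader}")
print(f"Lagging function: {laggard}")
print(f"Organisational Adoption Gradient: {gradient:.2f} points")
```

The single figure hides which functions sit at each edge, which is why the role-level breakdown matters as much as the headline number.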
This is the intelligence gap no existing board instrument addresses. Not because boards are negligent, but because the instrument required to show a leadership system’s internal gradient has not previously existed. MIT CISR researchers Peter Weill, Stephanie Woerner, and Jennifer Banner, analysing 2,788 publicly traded US companies with over one billion dollars in revenue, found that only 26 per cent of boards were digitally and AI-savvy by updated criteria accounting for generative AI and machine learning, and that those companies outperformed their peers by 10.9 percentage points in return on equity. The 74 per cent of companies without AI-savvy boards averaged 3.8 percentage points below their industry average.³ The performance differential is real and it is growing. Board AI savviness, as measured by director backgrounds and expertise, is not the same thing as board visibility into the leadership system’s actual state. A board can have three directors with AI experience and still have no instrument showing which of the organisation’s ten C-suite functions are committed, which are stalled, and how wide the divergence between them has become.
The approval process that cannot ask the right question
McKinsey’s December 2025 analysis of board AI governance found that only approximately 15 per cent of boards currently receive AI-related metrics of any kind. The recommended metrics — return on investment by business unit, percentage of AI-enabled processes, workforce reskilling progress — are output measures. They describe what has happened after investment has been made. They do not describe the state of the leadership system before the investment is approved: which functions have reached the evidentiary threshold for commitment, which have not, and what the distance between them implies for the programme’s likelihood of producing compounding rather than fragmented value.⁴
The question that should precede every AI budget approval concerns the state of the leadership system, not the capability of the technology or the movement of competitors. Whether the system into which capital is about to be deployed is sufficiently aligned to metabolise it productively — and if it is not, what the board’s approval is actually sanctioning — precedes every other consideration in the approval pack. A board that approves a budget without that visibility is not approving an AI programme. It is approving a management intention to run one. The distinction matters because management intention and organisational readiness are not the same thing, and the space between them is precisely where AI programmes encounter their most consequential friction.
The gap between management intention and organisational readiness is not a minor variance. It is the space where most AI programmes quietly fail.
Consider what the approval conversation looks like in practice. Management presents three to five use cases with projected returns. A technology partner confirms feasibility. A competitor reference demonstrates that the sector is moving. The board asks about data governance, about cybersecurity risk, about the regulatory position. These are necessary questions. They are the questions the board has been equipped to ask by the intelligence it has received. What the board is not equipped to ask, because no instrument has provided the input required, is which specific leadership functions have reached the evidentiary threshold for full commitment and which have not. Whether the CFO’s private assessment of the return timeline matches the CTO’s. Whether the COO’s concerns about operating model readiness have been resolved or deferred. Whether the CHRO’s workforce transition planning is sufficiently advanced to support what deployment at scale will require. These questions are answerable. They require a different class of intelligence than the board currently receives.
WTW’s John Bremen, writing in Forbes on 19 September 2025 and republished on the WTW Insights platform in October, found that only 11 per cent of boards have approved an annual budget for AI projects at all.⁵ That figure is itself a measure of how far most boards are from a governance process capable of evaluating organisational readiness. A board cannot formally evaluate the readiness of a system it has not yet established a budgeting relationship with. The absence of the role-level intelligence required to assess leadership system state is not a failure of the approval process. It is a gap in the available instruments, and it is a gap the existing approval process was not built to detect or compensate for.
What the missing instrument would show
A board that could see the Organisational Adoption Gradient within its own organisation’s leadership system before approving the AI budget would be asking different questions at the approval meeting. The question shifts from whether to invest to where the gradient currently sits and whether it is closing or compounding. Credibility of the management recommendation becomes a secondary matter; the primary question is whether the functions most likely to determine the programme’s success have reached a stage of adoption that makes full commitment rational. Technical capability recedes as the central variable. The variable that replaces it is whether the leadership system is aligned enough to convert capability into economic value that holds.
These are not harder questions than the ones boards currently ask. They are different questions, and they require a different class of intelligence to answer them. The technology assessment, the market comparison, and the management recommendation are the right inputs for the questions they are designed to answer. They are the wrong inputs for the question of whether the leadership system is ready to sustain what the board is being asked to approve.
The practical difference is not abstract. A board that knows the gradient between its most advanced and least advanced C-suite function before approving an enterprise-wide AI deployment can ask management to close that distance as a precondition of committed investment, rather than as a remediation task after the programme has stalled. It can direct capital toward the alignment work that determines whether deployment produces durable value, rather than toward the deployment itself in a system not yet ready to carry it. Setting a measurable condition for the next approval follows from that: not a milestone in the programme plan, but a movement in the gradient that indicates the leadership system is converging rather than diverging. None of these interventions requires the board to become technically expert in AI. They require the board to see the leadership system it is investing in, which is a governance question, not a technology question.
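The measurable condition described above, movement in the gradient rather than a programme milestone, can be sketched as a simple quarter-over-quarter check. All scores and the `is_converging` helper below are hypothetical constructions for illustration, not part of the Quaie methodology:

```python
# Hedged sketch: a board-level approval precondition expressed as
# movement in the Organisational Adoption Gradient between quarters.
# All figures are invented for illustration.

def gradient(scores):
    """Spread between the most and least advanced function (max - min)."""
    return max(scores.values()) - min(scores.values())

# Hypothetical per-function scores for two successive quarters.
q1 = {"CTO/CIO": 3.90, "CFO": 3.10, "CHRO": 3.05}
q2 = {"CTO/CIO": 4.00, "CFO": 3.50, "CHRO": 3.40}

def is_converging(prev, curr, tolerance=0.0):
    """True if the leadership system's spread narrowed by more than tolerance."""
    return gradient(curr) < gradient(prev) - tolerance

print(f"Q1 gradient: {gradient(q1):.2f}")
print(f"Q2 gradient: {gradient(q2):.2f}")
print(f"Converging: {is_converging(q1, q2)}")
```

The point of the check is that the leading function can keep advancing (here the CTO/CIO moves from 3.90 to 4.00) and the condition still passes, provided the lagging functions close distance faster than the leading edge opens it.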
The 67.4 per cent of senior leaders in Quaie’s Q1 2026 dataset who cannot confirm that AI is creating durable economic value in their organisations are not, in most cases, signalling failure.⁶ They are signalling that investment has been committed into a system whose state was not fully visible at the point of approval, and whose gradient between committed and uncommitted functions has not yet been closed by deliberate intervention. The capital went in. The alignment question was deferred. The gradient remains.
The intelligence the report provides
The Q1 2026 Role Layer Intelligence Quarterly measured the leadership system that boards are currently approving budgets into. It shows, across 187 senior decision-makers at ten C-suite functions, where the Organisational Adoption Gradient currently sits, which roles are leading and which are following, where alignment is forming and where it is fracturing, and what the distance between the most advanced and least advanced functions implies for organisations making commitment decisions now.
This is not the intelligence boards are currently receiving. It is the intelligence boards need before they approve the next budget. The instruments currently reaching the boardroom were built to describe external opportunity and management intention. The Role Layer dataset describes the internal system those intentions will have to move through. What falls between the two descriptions is where most AI programmes lose the thread: in the months following approval, when the record has been made and the gradient has not yet been seen.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and sources
¹ Deloitte Global Boardroom Program, “Governance of AI: A Critical Imperative for Today’s Boards,” second edition. Survey of 695 board members and C-suite executives across 56 countries, January to February 2025. Key findings: 66% of boards report limited to no AI knowledge or experience; 31% say AI is not on the board agenda; only 17% address AI at every meeting. Engagement with management on AI is led by the Chief Information Officer and Chief Technology Officer, mentioned by 72% of respondents, followed by the Chief Executive Officer, mentioned by over half. Other C-suite functions are engaged at materially lower rates: CFOs at 27%, CISOs and CROs at 12% each. Published April 2025. Source: deloitte.com/global/en/issues/trust/progress-on-ai-in-the-boardroom-but-room-to-accelerate.html. Summary also published as Anna Marks, Lara Abrash, and Arno Probst, “Governance of AI: A Critical Imperative for Today’s Boards,” Harvard Law School Forum on Corporate Governance, 27 May 2025.
² Quaie Role Layer Executive Survey, Q1 2026 (n=187). The Organisational Adoption Gradient measures the distance between the most advanced and least advanced leadership role in the dataset on the five-point adoption scale, where 1 represents no active AI investment and 5 represents embedded infrastructure. Fieldwork conducted January to March 2026 across ten C-suite functions: CEO, CTO/CIO, COO, CFO, CMO, CRO/CSO, CDO, CISO, CHRO, CLO. Full methodology: quaie.io/p/methodology.
³ Peter Weill, Stephanie L. Woerner, and Jennifer Banner, “Digitally Savvy Boards: AI Update,” MIT Center for Information Systems Research, Research Briefing No. XXV-3, 20 March 2025. Underlying findings also published as Peter Weill, Stephanie L. Woerner, and Jennifer S. Banner, “AI-Savvy Boards Drive Superior Performance,” MIT Sloan Management Review, 8 December 2025. Analysis based on machine learning examination of 2,788 publicly traded US companies with over $1 billion in revenue. Companies with digitally and AI-savvy boards outperformed peers by 10.9 percentage points in return on equity. Companies with non-savvy boards averaged 3.8 percentage points below industry average. Only 26% of boards met the updated AI-savvy criteria. Sources: cisr.mit.edu/publication/2025_0301_SavvyBoardsUpdate_WeillWoernerBannerMoore; sloanreview.mit.edu/article/ai-savvy-boards-drive-superior-performance.
⁴ McKinsey, “The AI reckoning: How boards can evolve,” 4 December 2025. Finding that only approximately 15% of boards currently receive AI-related metrics, citing National Association of Corporate Directors, “2025 private company board practices oversight survey: Data pack: Artificial intelligence,” 26 August 2025. Source: mckinsey.com/capabilities/mckinsey-technology/our-insights/the-ai-reckoning-how-boards-can-evolve.
⁵ John M. Bremen, “Lessons in Implementing Board-Level AI Governance,” originally published in Forbes, 19 September 2025, and republished on WTW Insights, 1 October 2025. Finding that only 11% of boards have approved an annual budget for AI projects. Source: wtwco.com/en-us/insights/2025/10/lessons-in-implementing-board-level-ai-governance; forbes.com/sites/johnbremen/2025/09/19/lessons-in-implementing-board-level-ai-governance.
⁶ Quaie Role Layer Executive Survey, Q1 2026 (n=187). 67.4% of respondents could not confirm that AI is creating durable economic value. Source: as note 2.