The CFO Is Carrying AI Risk the Approval Process Was Never Built to Show
The CFO approves AI capital into a leadership system whose internal state is not visible at the point of approval. In the Q1 2026 Role Layer dataset of 187 C-suite respondents, the COO sits furthest back of all ten leadership roles, materially behind the CFO who signs the cheque.¹ The Organisational Adoption Gradient between the lead role and the lag role is 0.85 points.² That gap is not an artefact of survey design. It is the structural condition the approval pack does not show, and it is the condition under which most enterprise AI investment is now being committed.
What the board is asking the CFO has changed
The question reaching CFOs from boards has shifted in the past twelve months. It is no longer whether to invest in AI. It is which investments worked. Todd McElhatton, CFO of Zuora, put the new register plainly in January 2026: boards want to know which tools are generating real ROI, which to double down on, which to retire.³ CFO Dive reported the same month that finance chiefs face growing pressure from boards and investors to show results from AI spending.⁴ Deloitte’s Finance Trends 2026 found that 57 per cent of finance executives now describe themselves as among the top leaders driving AI strategy.⁵ Accountability has caught up with authority.
It has caught up against an unfavourable evidence base. MIT’s NANDA initiative reported in August 2025 that 95 per cent of enterprise generative AI pilots had produced no measurable P&L impact.⁶ McKinsey’s State of AI 2025 survey of 1,993 respondents found that nearly two-thirds of organisations were not yet scaling AI across the enterprise.⁷ The aggregate picture is well known. What it does not explain is why. Two-thirds is a number, not a mechanism.
The approval pack was not built to show the lag
The CFO approval process was designed for a different question. It was built to test the financial case: business case, technology assessment, vendor comparison, risk register, return profile. It was not built to show the role-level state of the leadership system the capital is entering. The inputs that reach the CFO come, predominantly, from the roles most invested in AI progress. The CDO sponsors. The CTO or CIO certifies feasibility. The CMO or CRO signals demand. The management recommendation arrives stitched together from those positions.
The COO’s evidentiary standard for scaled deployment does not appear in that pack. Neither does the CHRO’s view on workforce readiness for the post-deployment operating model. Neither does the CISO’s assessment of agentic compliance exposure once the pilot leaves the sandbox. These are not omissions of process. They are omissions of design. The approval architecture was built when the binding constraint on enterprise technology investment was capital allocation. The binding constraint on AI investment is operational coordination, and the approval architecture has not caught up.
There is a second feature of the approval architecture that compounds the first. The roles that prepare the pack are not the roles that will operate the system once the pilot scales. The CDO sponsoring the investment hands the operating reality to the COO at the point of deployment. The CTO certifying technical feasibility hands the workforce reality to the CHRO. The CMO modelling demand hands the compliance reality to the CISO and the CLO. In each handover, the role that knew the most about whether the programme would scale was not in the room when the capital decision was made. The approval pack is, in effect, a document about whether to proceed, prepared by the roles least exposed to whether proceeding will work.
The cost is asymmetric. The CFO carries the accountability for outcomes shaped by roles whose state was never disclosed to them.
Quaie’s gradient is consistent with what other datasets are showing
The 0.85-point gradient in the Q1 2026 Role Layer dataset is not an isolated finding. Grant Thornton’s 2026 AI Impact Survey, working from a separate sample and a different instrument, identified the same role-level pattern from the operational side. CIOs and CTOs are five times more likely than COOs to say their workforce is fully ready for AI deployment.⁸ Fifty-four per cent of COOs reported concern about agentic AI compliance, against 20 per cent of CIOs and CTOs.⁹ Grant Thornton characterises the dynamic as COOs discovering governance gaps that CFOs are not funding.¹⁰
Two datasets, two methodologies, the same structural finding: the role accountable for converting AI investment into operational outcome is materially behind the role recommending the investment, and behind the role releasing the capital. The Quaie gradient is what the role-level mechanism looks like. The Grant Thornton readiness gap is what it produces in the field. The MIT 95 per cent and the McKinsey two-thirds are what it produces on the P&L.
The aggregate failure rates do not say which role lags. The role-level data does. Once both sit on the same page, the question of why most enterprise AI investment fails to scale stops being a mystery and becomes a coordination problem with a known shape.
The shape matters for what the CFO does with it. A coordination problem at the role level does not respond to the instruments the financial case uses. It does not respond to a tighter business case, a sharper vendor selection, or a more aggressive return threshold. It responds to the position of the lag role at the point of approval. If the COO’s evidentiary standard for scaled deployment has not been met, no adjustment to the financial case will produce the operational outcome the financial case is forecasting. The CFO is being asked to underwrite a forecast whose principal risk variable does not appear in the model.
The Confidence Gap is a base rate, not an outlier
A second Quaie finding belongs alongside the gradient. The Confidence Gap in the Q1 2026 dataset stands at 67.4 per cent. Two-thirds of senior leaders, across all ten roles, report that they cannot confirm AI is yet creating durable economic value in their organisation.¹¹ Four barriers to scaling cluster within 2.6 percentage points of one another, which means there is no single dominant blocker. The system is constrained on multiple dimensions at once, and any individual leader’s inside-view confidence about their own programme is a weak signal against that base rate.
For a CFO weighing a Q3 capital allocation, the implication is direct. The base rate for senior-leader confidence in AI value creation is 32.6 per cent. The base rate for organisations scaling AI is roughly one in three. The base rate for pilots producing measurable P&L impact is, on the MIT data, one in twenty. The inside view, which says this programme is different, this vendor is better, this use case is proven, has to clear a base rate that the inside view rarely acknowledges.
What seeing the gradient changes
The CFO who can see the gradient is in a different position from the CFO who cannot. The decision is no longer a binary on the financial case. The decision is whether the leadership system into which the capital will flow is coordinated enough to convert it. The COO’s position becomes a precondition of approval rather than a variable that surfaces six months later when the programme stalls.
This is not a recommendation that CFOs should add a coordination check to the approval template. Templates are how organisations institutionalise the questions they were already asking. The point is sharper than that. The CFO is being held to account, in front of the board, for outcomes shaped by a structural feature of the leadership system that the approval process was never designed to expose. Accountability has been transferred without the corresponding visibility.
That is the inversion. The CFO carries the consequences of a coordination failure they were structurally prevented from seeing. The COO, who can see it from the inside, does not own the capital decision. The CDO and CTO, who own the recommendation, are not the roles whose lag determines whether the recommendation will deliver. The roles are misaligned with the accountability, and the approval process locks the misalignment in place.
Boards are not yet asking the question this would imply. They are asking which AI investments worked. The harder question, and the one the next twelve months of board cycles will start to surface, is which investments were approved on the basis of evidence the approving role could actually see.
The CFO who anticipates that question now has the option of building the role-level visibility into the approval pack before being asked for it. The CFO who waits will explain, after the fact, why the programme joined the two-thirds that did not scale.
This essay is part of Quaie’s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.
Notes and sources
¹ Quaie Q1 2026 Role Layer Intelligence Quarterly, n=187 C-suite respondents across ten leadership roles. The COO’s position as the lag role is cleared for public reference. Precise role-level Role Shift Index scores are gated to the paid report. Full methodology: quaie.io/p/methodology.
² Quaie Q1 2026, Organisational Adoption Gradient: 0.85 points between the lead role and the lag role (COO). The lead role identity is available to The Role Layer Intelligence Quarterly subscribers. Full methodology: quaie.io/p/methodology.
³ Todd McElhatton, “The Year CFOs Hold AI Accountable,” Finance Leaders Unfiltered newsletter, Zuora, January 2026. Source: zuora.com.
⁴ CFO Dive, “Top 5 AI adoption challenges facing CFOs in 2026,” published 23 January 2026. Source: cfodive.com/news/top-5-ai-adoption-challenges-facing-cfos-in-2026/810277.
⁵ Deloitte, “Finance Trends 2026.” Finding: 57% of finance executives describe themselves as among the top leaders driving AI strategy development across the organisation. Source: deloitte.com/us/en/programs/chief-financial-officer/articles/cfo-insights-ai-cost-risk-roi.html.
⁶ MIT NANDA (Networked Agents and Decentralized AI) initiative, “The GenAI Divide: State of AI in Business 2025,” published August 2025. Lead author Aditya Challapally. Research base: 150 leader interviews, 350-employee survey, and analysis of 300 public AI deployments. Finding: 95% of enterprise generative AI pilots delivered no measurable impact on profit and loss. Source: MIT NANDA initiative publications.
⁷ McKinsey QuantumBlack, “The state of AI in 2025: Agents, innovation, and transformation,” published 5 November 2025. Survey of 1,993 respondents across approximately 105 countries, 38% from organisations with over one billion dollars in annual revenue. Key finding: nearly two-thirds have not yet begun scaling AI across the enterprise. Source: mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
⁸ Grant Thornton, “2026 AI Impact Survey Report: The AI proof gap — why AI is not delivering the performance leaders expected,” published April 2026. Survey of 950 C-suite and senior business leaders across ten industries, conducted February to March 2026. Finding: CIOs and CTOs are five times more likely than COOs to say their workforce is fully ready for AI deployment. Source: grantthornton.com/services/advisory-services/artificial-intelligence/2026-ai-impact-survey.
⁹ Grant Thornton 2026 AI Impact Survey. Finding: 54% of COOs report concern about agentic AI compliance and regulatory uncertainty, against 20% of CIOs and CTOs. Source as note 8.
¹⁰ Grant Thornton 2026 AI Impact Survey. Finding: COOs overseeing AI-affected operations are discovering governance gaps that CFOs are not funding and that CIOs and CTOs are not surfacing. Source as note 8.
¹¹ Quaie Q1 2026 Role Layer Intelligence Quarterly, n=187. Confidence Gap: 67.4% of respondents report no confidence, low confidence, or that it is too early to tell whether AI is creating durable economic value in their organisation. Four barriers to scaling cluster within 2.6 percentage points of one another. Full methodology: quaie.io/p/methodology.