<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Quaie]]></title><description><![CDATA[Quaie provides role-based predictive intelligence on how enterprises adopt AI, capturing first-party signals from senior decision-makers to show where value is forming, where alignment breaks down, and when investment becomes rational.]]></description><link>https://quaie.io</link><image><url>https://substackcdn.com/image/fetch/$s_!JrPK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F755f32d7-cf01-4436-98d5-7974ff6477da_1200x1200.png</url><title>Quaie</title><link>https://quaie.io</link></image><generator>Substack</generator><lastBuildDate>Tue, 12 May 2026 07:11:04 GMT</lastBuildDate><atom:link href="https://quaie.io/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Quaie Ltd]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[quaie@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[quaie@substack.com]]></itunes:email><itunes:name><![CDATA[Simon MacTaggart]]></itunes:name></itunes:owner><itunes:author><![CDATA[Simon MacTaggart]]></itunes:author><googleplay:owner><![CDATA[quaie@substack.com]]></googleplay:owner><googleplay:email><![CDATA[quaie@substack.com]]></googleplay:email><googleplay:author><![CDATA[Simon MacTaggart]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Why AI-First Is the Wrong Ambition for Most Organisations Right Now]]></title><description><![CDATA[The organisations moving fastest on enterprise AI are abandoning the most.]]></description><link>https://quaie.io/p/why-ai-first-is-the-wrong-ambition-for-most-organisations-right-now</link><guid isPermaLink="false">https://quaie.io/p/why-ai-first-is-the-wrong-ambition-for-most-organisations-right-now</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Tue, 05 May 2026 07:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pt8w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pt8w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pt8w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 424w, https://substackcdn.com/image/fetch/$s_!pt8w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!pt8w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!pt8w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pt8w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg" width="1179" height="713" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:713,&quot;width&quot;:1179,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:141162,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://quaie.io/i/196525693?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pt8w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 424w, https://substackcdn.com/image/fetch/$s_!pt8w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 848w, https://substackcdn.com/image/fetch/$s_!pt8w!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!pt8w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb508e9a1-a5aa-42b9-8027-d53c6f907101_1179x713.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" 
stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The organisations moving fastest on enterprise AI are abandoning the most. S&amp;P Global Market Intelligence surveyed 1,006 midlevel and senior professionals across North America and Europe in late 2024 and found that the share of companies scrapping the majority of their AI initiatives before reaching production had jumped from 17 per cent to 42 per cent in a single year.&#185; The average organisation abandoned 46 per cent of its proof-of-concept projects before production. This is not a story about organisations that tried AI and found the technology wanting. It is a story about organisations that moved at the pace the dominant narrative recommended and found the results did not follow.</p><p>The dominant narrative has a name. AI-first. It circulates through vendor keynotes, board papers, and strategy offsite decks with the confidence of settled wisdom. Get ahead of the curve. Move before competitors. Make AI central to everything. The organisations that hesitate will be left behind. The Q1 2026 Role Layer dataset, built from 187 senior decision-makers across ten C-suite roles at mid-to-large enterprises, collected January to March 2026, suggests the narrative has the causation inverted.&#178; Speed of AI ambition is not the variable that predicts AI value. Coordination across the leadership system is.</p><h3>What the failure rate is actually measuring</h3><p>The MIT NANDA initiative found in August 2025 that 95 per cent of enterprise generative AI pilots produced no measurable impact on profit and loss.&#179; McKinsey&#8217;s State of AI 2025, drawing on 1,993 respondents across approximately 105 countries, found that nearly two-thirds of organisations had not yet begun scaling AI across the enterprise.&#8308; S&amp;P Global found abandonment rates tripling in twelve months. These three datasets are measuring the same underlying condition from different angles.</p><p>The standard interpretation is that the technology is harder to implement than advertised, or that enterprises lack the data infrastructure to support it, or that change management has not kept pace with deployment ambition. All three are partially true. None of them explains why the failure rate is accelerating as the technology matures and as enterprise AI investment reaches record levels. A technology problem should get easier as the technology improves. A coordination problem gets harder as the pace of deployment increases, because faster deployment widens the distance between the roles pushing the programme forward and the roles that will carry it at scale.</p><p>The S&amp;P Global data makes this visible in a specific way. The abandonment surge from 17 to 42 per cent did not happen because the technology got worse. It happened in the same period that deployment accelerated. Organisations that were experimenting cautiously in 2023 and 2024 moved into production at scale in 2024 and 2025, and the failure rate followed the acceleration. The organisations scrapping 46 per cent of their proof-of-concept projects are not failing at the technology stage. 
They are failing at the transition from pilot to scaled deployment, which is precisely the point where the technology crosses from the roles that built it to the roles that have to run it.</p><p>Writer&#8217;s 2026 enterprise AI adoption survey found that 97 per cent of executives report their organisation deployed AI agents in the past year.&#8309; Deployment is nearly universal. The failure modes the survey documents do not stem from lack of AI talent or enthusiasm. They stem from the absence of systems designed to scale what is working. Individual productivity gains are real. Nothing connects them to organisation-wide outcomes. The gap is not between ambition and technology. It is between the roles driving the programme and the roles that would need to absorb it.</p><h3>The constraint the AI-first narrative cannot see</h3><p>The AI-first framing treats enterprise AI adoption as a single organisational decision: commit to AI at the strategic level and the operational consequences will follow. The Q1 2026 Role Layer data shows what actually follows. Ten C-suite roles measured against the same five-point adoption scale at the same point in time, using the same fifteen decision-level questions. The Organisational Adoption Gradient between the most and least advanced role is 0.85 points.&#178; The leadership system is not moving as a unit. It is moving as a set of roles at materially different speeds, with the role most responsible for operational delivery sitting furthest back.</p><p>That gradient is the structural condition the AI-first narrative cannot see because it does not have an instrument to measure it. The narrative operates at the organisational level. The constraint is at the role level. An organisation can be genuinely committed to AI at the strategic level while simultaneously having a leadership system whose internal distances make scaled deployment unreachable within any reasonable investment horizon. The commitment is real. The coordination is not there. AI-first programmes that push harder on speed hit that condition as a wall rather than addressing it as a variable.</p><p>What the gradient looks like in practice is a programme that moves through pilot and into early deployment without difficulty, then stalls at the point where the technology has to transfer from the roles that sponsored it to the roles that will run it. The CDO who built the business case hands the operating reality to the COO. The CTO who certified the technology hands the workforce challenge to the CHRO. The CMO who modelled the demand hands the compliance exposure to the CISO. Each handover crosses a role-level distance that the approval process did not measure and the programme plan did not account for. The stall is not a project management failure. It is the gradient becoming visible too late.</p><p>The four barriers to scaling in the Q1 2026 dataset cluster within 2.6 percentage points of one another.&#178; A single dominant barrier is a solvable problem: identify it, resource it, clear it. Four barriers of roughly equal weight mean that clearing any one of them does not unlock the next phase. The programme moves forward and hits the next barrier almost immediately. Organisations in this position are not facing a resourcing problem or a technology problem. 
They are facing a sequencing problem: the leadership system has not determined which role&#8217;s position is the actual binding constraint at each stage, so every stage feels like a new obstacle rather than a known distance to close.</p><h3>What alignment-first means in practice</h3><p>Alignment-first is not a slower version of AI-first. It is a more accurate one. The distinction is not about pace. It is about which variable the organisation is managing.</p><p>AI-first manages pace. Get the technology deployed, get pilots running, get use cases in production. The assumption is that the leadership system will coordinate around the deployment once it is in place. The S&amp;P Global data suggests that assumption is wrong in 42 per cent of cases and deteriorating. The organisations abandoning programmes are not abandoning them because the technology failed. They are abandoning them because the operational conditions for scaled deployment did not materialise on the timeline the programme assumed.</p><p>The organisations that do not abandon know something different going in. They know which role is furthest from deployment readiness before committing the next tranche of capital, and they treat that role&#8217;s position as a precondition of the approval rather than a variable to be discovered after the programme stalls. The question is not how fast to move. It is whether the leadership system is coordinated enough to convert the capital into deployed value inside the investment horizon being approved. That reframing does not slow the programme down. It changes what the programme is trying to manage.</p><p>McKinsey&#8217;s State of AI 2025 found that organisations producing significant financial returns from AI are nearly three times as likely as others to have fundamentally redesigned their workflows.&#8308; That finding describes alignment-first behaviour without naming it. The organisations that succeeded did not start with the technology. They started with the operational and organisational conditions the technology would need to enter. The technology selection followed the alignment work. That is not a conservative approach to AI. It is the approach that produces the outcomes the AI-first narrative promises but consistently fails to deliver.</p><p>The 5 per cent of organisations producing measurable P&amp;L impact from AI are not moving slower than the 95 per cent. They are moving in the right sequence. The sequence runs: understand the role-level distances inside the leadership system, identify the binding constraint at the current stage, address that constraint directly before committing the capital that assumes it has been resolved. That is a different programme logic from AI-first, and the gap between the two is where most of the S&amp;P Global 42 per cent lives.</p><h3>The 67.4 per cent is not a coincidence</h3><p>The Confidence Gap in the Q1 2026 Role Layer dataset stands at 67.4 per cent. Two thirds of senior leaders across all ten roles cannot confirm that AI is creating durable economic value in their organisation.&#178; The four barriers clustering within 2.6 percentage points mean there is no single dominant explanation for that figure. The system is constrained on multiple dimensions simultaneously.</p><p>That is precisely what a leadership system running on AI-first logic looks like from the inside. Multiple initiatives at different stages. Multiple roles at different adoption positions. No single blocker that, if cleared, would unlock the value. 
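</p><p>The arithmetic of that condition is worth making explicit. Below is a minimal sketch, using hypothetical barrier shares chosen only to match the reported 2.6-point spread (the actual barrier figures sit in the gated dataset, and the barrier names here are invented), of why clearing the largest barrier barely changes the constraint profile:</p><pre><code># Hypothetical shares of respondents citing each barrier to scaling.
# The values and names are invented; the only property taken from the
# public finding is that the four cluster within 2.6 percentage points.
barriers = {
    "data readiness": 31.0,
    "workforce readiness": 30.2,
    "governance and compliance": 29.3,
    "operational integration": 28.4,
}

spread = max(barriers.values()) - min(barriers.values())
print(f"spread: {spread:.1f} points")  # 2.6

# Clear the single largest barrier and re-examine the system.
largest = max(barriers, key=barriers.get)
remaining = {k: v for k, v in barriers.items() if k != largest}
runner_up = max(remaining, key=remaining.get)
print(f"cleared {largest!r}; next constraint is {runner_up!r} "
      f"at {remaining[runner_up]:.1f}, barely below the "
      f"{barriers[largest]:.1f} just removed")
</code></pre><p>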
The 67.4 per cent is not a verdict on enterprise AI. It is a description of what happens when organisations move at the pace the narrative recommends without first understanding the internal distances the capital has to cross.</p><p>The vendor narrative will not correct itself. The incentive runs the other way: faster adoption means more licences, more implementation contracts, more platform commitment. The pressure on leadership teams to move quickly is not going to diminish in 2026. What can change is what the leadership team measures before it moves. An organisation that knows its Organisational Adoption Gradient before the next capital commitment is asking a different question from one that does not. Not whether to invest in AI. Not how quickly to deploy. Whether the distance between the roles that sponsor the programme and the roles that will carry it is close enough to cross inside the investment horizon being approved.</p><p>The organisations that will close the 67.4 per cent gap are not the ones that increase the pace. They are the ones that measure the distances first. The gradient is the variable AI-first cannot manage because AI-first cannot see it. Alignment-first starts there.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><h3>Notes and sources</h3><p>&#185; S&amp;P Global Market Intelligence, Voice of the Enterprise: AI and Machine Learning, Use Cases 2025. Survey of 1,006 midlevel and senior IT and line-of-business professionals across North America and Europe, conducted October to November 2024. Key findings: the share of companies abandoning the majority of AI initiatives before reaching production rose from 17% to 42% year over year; the average organisation scrapped 46% of proof-of-concept projects before production. Source: spglobal.com/market-intelligence/en/news-insights/research/ai-experiences-rapid-adoption-but-with-mixed-outcomes-highlights-from-vote-ai-machine-learning.</p><p>&#178; Quaie Role Layer Executive Survey, Q1 2026 (n=187). The Organisational Adoption Gradient measures the distance between the most and least advanced leadership role in the dataset on the five-point adoption scale, where 1 represents no active AI initiatives and 5 represents embedded infrastructure. Q1 2026 gradient: 0.85 points. Confidence Gap: 67.4% of respondents could not confirm AI is creating durable economic value. Four barriers to scaling cluster within 2.6 percentage points of one another. Fieldwork conducted January to March 2026 across ten C-suite roles: CEO, CTO/CIO, COO, CFO, CMO, CRO/CSO, CDO, CISO, CHRO, CLO. Full methodology: quaie.io/p/methodology.</p><p>&#179; MIT NANDA (Networked Agents and Decentralised AI) initiative, &#8220;The GenAI Divide: State of AI in Business 2025,&#8221; published August 2025. Lead author Aditya Challapally. Research base: 150 leader interviews, 350-employee survey, and analysis of 300 public AI deployments. Finding: 95% of enterprise generative AI pilots delivered no measurable impact on profit and loss. Source: MIT NANDA initiative publications.</p><p>&#8308; McKinsey QuantumBlack, &#8220;The state of AI in 2025: Agents, innovation, and transformation,&#8221; published 5 November 2025. Survey of 1,993 respondents across approximately 105 countries. 
Key findings: nearly two-thirds have not yet begun scaling AI across the enterprise; organisations producing significant financial returns from AI are nearly three times as likely as others to have fundamentally redesigned their workflows. Source: mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.</p><p>&#8309; Writer, &#8220;Enterprise AI Adoption in 2026: Why 79% Face Challenges Despite High Investment,&#8221; published April 2026. Survey of 1,200 C-suite executives and 1,200 non-technical employees actively using AI at work, conducted with Workplace Intelligence. Key findings: 97% of executives report their organisation deployed AI agents in the past year; failure modes stem from absence of systems designed to scale individual productivity gains to organisation-wide outcomes. Source: writer.com/blog/enterprise-ai-adoption-2026.</p>]]></content:encoded></item><item><title><![CDATA[The CFO Is Carrying AI Risk the Approval Process Was Never Built to Show]]></title><description><![CDATA[The CFO approves AI capital into a leadership system whose internal state is not visible at the point of approval.]]></description><link>https://quaie.io/p/the-cfo-is-carrying-ai-risk-the-approval-process-was-never-built-to-show</link><guid isPermaLink="false">https://quaie.io/p/the-cfo-is-carrying-ai-risk-the-approval-process-was-never-built-to-show</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 27 Apr 2026 07:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6Fow!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59f7f6a3-7d9a-474a-a87a-d2e916f968d9_1254x837.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!6Fow!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59f7f6a3-7d9a-474a-a87a-d2e916f968d9_1254x837.jpeg" width="1254" height="837" alt=""></figure></div><p>The CFO approves AI capital into a leadership system whose internal state is not visible at the point of approval.
In the Q1 2026 Role Layer dataset of 187 C-suite respondents, the COO sits furthest back of all ten leadership roles, materially behind the CFO who signs the cheque.&#185; The Organisational Adoption Gradient between the lead role and the lag role is 0.85 points.&#178; That gap is not an artefact of survey design. It is the structural condition the approval pack does not show, and it is the condition under which most enterprise AI investment is now being committed.</p><h3>What the board is asking the CFO has changed</h3><p>The question reaching CFOs from boards has shifted in the past twelve months. It is no longer whether to invest in AI. It is which investments worked. Todd McElhatton, CFO of Zuora, put the new register plainly in January 2026: boards want to know which tools are generating real ROI, which to double down on, which to retire.&#179; CFO Dive reported the same month that finance chiefs face growing pressure from boards and investors to show results from AI spending.&#8308; Deloitte&#8217;s Finance Trends 2026 found that 57 per cent of finance executives now describe themselves as among the top leaders driving AI strategy.&#8309; Accountability has caught up with authority.</p><p>It has caught up against an unfavourable evidence base. MIT&#8217;s NANDA initiative reported in August 2025 that 95 per cent of enterprise generative AI pilots had produced no measurable P&amp;L impact.&#8310; McKinsey&#8217;s State of AI 2025 survey of 1,993 respondents found that nearly two-thirds of organisations were not yet scaling AI across the enterprise.&#8311; The aggregate picture is well-known. What it does not explain is why. Two-thirds is a number, not a mechanism.</p><h3>The approval pack was not built to show the lag</h3><p>The CFO approval process was designed for a different question. It was built to test the financial case: business case, technology assessment, vendor comparison, risk register, return profile. It was not built to show the role-level state of the leadership system the capital is entering. The inputs that reach the CFO come, predominantly, from the roles most invested in AI progress. The CDO sponsors. The CTO or CIO certifies feasibility. The CMO or CRO signals demand. The management recommendation arrives stitched together from those positions.</p><p>The COO&#8217;s evidentiary standard for scaled deployment does not appear in that pack. Neither does the CHRO&#8217;s view on workforce readiness for the post-deployment operating model. Neither does the CISO&#8217;s assessment of agentic compliance exposure once the pilot leaves the sandbox. These are not omissions of process. They are omissions of design. The approval architecture was built when the binding constraint on enterprise technology investment was capital allocation. The binding constraint on AI investment is operational coordination, and the approval architecture has not caught up.</p><p>There is a second feature of the approval architecture that compounds the first. The roles that prepare the pack are not the roles that will operate the system once the pilot scales. The CDO sponsoring the investment hands the operating reality to the COO at the point of deployment. The CTO certifying technical feasibility hands the workforce reality to the CHRO. The CMO modelling demand hands the compliance reality to the CISO and the CLO. In each handover, the role that knew the most about whether the programme would scale was not in the room when the capital decision was made. 
The approval pack is, in effect, a document about whether to proceed prepared by the roles least exposed to whether proceeding will work.</p><p>The cost is asymmetric. The CFO carries the accountability for outcomes shaped by roles whose state was never disclosed to them.</p><h3>Quaie&#8217;s gradient is consistent with what other datasets are showing</h3><p>The 0.85-point gradient in the Q1 2026 Role Layer dataset is not an isolated finding. Grant Thornton&#8217;s 2026 AI Impact Survey, working from a separate sample and a different instrument, identified the same role-level pattern from the operational side. CIOs and CTOs are five times more likely than COOs to say their workforce is fully ready for AI deployment.&#8312; Fifty-four per cent of COOs reported concern about agentic AI compliance, against 20 per cent of CIOs and CTOs.&#8313; Grant Thornton characterises the dynamic as COOs discovering governance gaps that CFOs are not funding.&#185;&#8304;</p><p>Two datasets, two methodologies, the same structural finding: the role accountable for converting AI investment into operational outcome is materially behind the role recommending the investment, and behind the role releasing the capital. The Quaie gradient is what the role-level mechanism looks like. The Grant Thornton readiness gap is what it produces in the field. The MIT 95 per cent and the McKinsey two-thirds are what it produces on the P&amp;L.</p><p>The aggregate failure rates do not say which role lags. The role-level data does. Once both sit on the same page, the question of why most enterprise AI investment fails to scale stops being a mystery and becomes a coordination problem with a known shape.</p><p>The shape matters for what the CFO does with it. A coordination problem at the role level does not respond to the instruments the financial case uses. It does not respond to a tighter business case, a sharper vendor selection, or a more aggressive return threshold. It responds to the position of the lag role at the point of approval. If the COO&#8217;s evidentiary standard for scaled deployment has not been met, no adjustment to the financial case will produce the operational outcome the financial case is forecasting. The CFO is being asked to underwrite a forecast whose principal risk variable does not appear in the model.</p><h3>The Confidence Gap is a base rate, not an outlier</h3><p>A second Quaie finding belongs alongside the gradient. The Confidence Gap in the Q1 2026 dataset stands at 67.4 per cent. Two-thirds of senior leaders, across all ten roles, report that they cannot confirm AI is yet creating durable economic value in their organisation.&#185;&#185; Four barriers to scaling cluster within 2.6 percentage points of one another, which means there is no single dominant blocker. The system is constrained on multiple dimensions at once, and any individual leader&#8217;s inside-view confidence about their own programme is a weak signal against that base rate.</p><p>For a CFO weighing a Q3 capital allocation, the implication is direct. The base rate for senior-leader confidence in AI value creation is 32.6 per cent. The base rate for organisations scaling AI is roughly one in three. The base rate for pilots producing measurable P&amp;L impact is, on the MIT data, one in twenty. 
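</p><p>One way to size what &#8220;clearing the base rate&#8221; demands, offered as a rough sketch rather than anything in the Quaie dataset: treat the MIT one-in-twenty figure as a prior and ask how diagnostic the inside-view evidence would have to be before a given programme becomes more likely than not to produce measurable profit-and-loss impact.</p><pre><code># Posterior odds = prior odds x likelihood ratio (Bayes' rule in
# odds form). Prior: the MIT NANDA base rate of ~5% of enterprise
# generative AI pilots producing measurable profit-and-loss impact.
prior = 0.05
prior_odds = prior / (1 - prior)  # ~0.053

# Likelihood ratio the inside-view evidence must carry for the
# posterior to reach even odds (50%):
required_lr = 1.0 / prior_odds
print(f"required likelihood ratio: {required_lr:.0f}x")  # 19x

# Reading: the claim "this programme is different" must be evidence
# one is roughly 19 times more likely to observe in a programme that
# will deliver than in one that will not, merely to reach a coin flip.
</code></pre><p>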
The inside view, which says this programme is different, this vendor is better, this use case is proven, has to clear a base rate that the inside view rarely acknowledges.</p><h3>What seeing the gradient changes</h3><p>The CFO who can see the gradient is in a different position from the CFO who cannot. The decision is no longer a binary on the financial case. The decision is whether the leadership system into which the capital will flow is coordinated enough to convert it. The COO&#8217;s position becomes a precondition of approval rather than a variable that surfaces six months later when the programme stalls.</p><p>This is not a recommendation that CFOs should add a coordination check to the approval template. Templates are how organisations institutionalise the questions they were already asking. The point is sharper than that. The CFO is being held to account, in front of the board, for outcomes shaped by a structural feature of the leadership system that the approval process was never designed to expose. Accountability has been transferred without the corresponding visibility.</p><p>That is the inversion. The CFO carries the consequences of a coordination failure they were structurally prevented from seeing. The COO, who can see it from the inside, does not own the capital decision. The CDO and CTO, who own the recommendation, are not the roles whose lag determines whether the recommendation will deliver. The roles are misaligned with the accountability, and the approval process locks the misalignment in place.</p><p>Boards are not yet asking the question this would imply. They are asking which AI investments worked. The harder question, and the one the next twelve months of board cycles will start to surface, is which investments were approved on the basis of evidence the approving role could actually see.</p><p>The CFO who anticipates that question now has the option of building the role-level visibility into the approval pack before being asked for it. The CFO who waits will explain, after the fact, why two-thirds of the programme did not scale.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><h3>Notes and sources</h3><p>&#185; Quaie Q1 2026 Role Layer Intelligence Quarterly, n=187 C-suite respondents across ten leadership roles. The COO&#8217;s position as the lag role is cleared for public reference. Precise role-level Role Shift Index scores are gated to the paid report. Full methodology: quaie.io/p/methodology.</p><p>&#178; Quaie Q1 2026, Organisational Adoption Gradient: 0.85 points between the lead role and the lag role (COO). The lead role identity is available to The Role Layer Intelligence Quarterly subscribers. Full methodology: quaie.io/p/methodology.</p><p>&#179; Todd McElhatton, &#8220;The Year CFOs Hold AI Accountable,&#8221; Finance Leaders Unfiltered newsletter, Zuora, January 2026. Source: zuora.com.</p><p>&#8308; CFO Dive, &#8220;Top 5 AI adoption challenges facing CFOs in 2026,&#8221; published 23 January 2026. Source: cfodive.com/news/top-5-ai-adoption-challenges-facing-cfos-in-2026/810277.</p><p>&#8309; Deloitte, &#8220;Finance Trends 2026.&#8221; Finding: 57% of finance executives describe themselves as among the top leaders driving AI strategy development across the organisation. 
Source: deloitte.com/us/en/programs/chief-financial-officer/articles/cfo-insights-ai-cost-risk-roi.html.</p><p>&#8310; MIT NANDA (Networked Agents and Decentralized AI) initiative, &#8220;The GenAI Divide: State of AI in Business 2025,&#8221; published August 2025. Lead author Aditya Challapally. Research base: 150 leader interviews, 350-employee survey, and analysis of 300 public AI deployments. Finding: 95% of enterprise generative AI pilots delivered no measurable impact on profit and loss. Source: MIT NANDA initiative publications.</p><p>&#8311; McKinsey QuantumBlack, &#8220;The state of AI in 2025: Agents, innovation, and transformation,&#8221; published 5 November 2025. Survey of 1,993 respondents across approximately 105 countries, 38% from organisations with over one billion dollars in annual revenue. Key finding: nearly two-thirds have not yet begun scaling AI across the enterprise. Source: mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.</p><p>&#8312; Grant Thornton, &#8220;2026 AI Impact Survey Report: The AI proof gap &#8212; why AI is not delivering the performance leaders expected,&#8221; published April 2026. Survey of 950 C-suite and senior business leaders across ten industries, conducted February to March 2026. Finding: CIOs and CTOs are five times more likely than COOs to say their workforce is fully ready for AI deployment. Source: grantthornton.com/services/advisory-services/artificial-intelligence/2026-ai-impact-survey.</p><p>&#8313; Grant Thornton 2026 AI Impact Survey. Finding: 54% of COOs report concern about agentic AI compliance and regulatory uncertainty, against 20% of CIOs and CTOs. Source as note 8.</p><p>&#185;&#8304; Grant Thornton 2026 AI Impact Survey. Finding: COOs overseeing AI-affected operations are discovering governance gaps that CFOs are not funding and that CIOs and CTOs are not surfacing. Source as note 8.</p><p>&#185;&#185; Quaie Q1 2026 Role Layer Intelligence Quarterly, n=187. Confidence Gap: 67.4% of respondents report no confidence, low confidence, or that it is too early to tell whether AI is creating durable economic value in their organisation. Four barriers to scaling cluster within 2.6 percentage points of one another. 
Full methodology: quaie.io/p/methodology.</p>]]></content:encoded></item><item><title><![CDATA[What Your Board Is Missing When It Approves the AI Budget]]></title><description><![CDATA[When a board approves an AI budget, it does so on the best information available to it.]]></description><link>https://quaie.io/p/what-your-board-is-missing-when-it-approves-the-ai-budget</link><guid isPermaLink="false">https://quaie.io/p/what-your-board-is-missing-when-it-approves-the-ai-budget</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 20 Apr 2026 07:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!S1bm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dedea5e-d78f-4eae-85dc-15321dee5eee_1254x837.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!S1bm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9dedea5e-d78f-4eae-85dc-15321dee5eee_1254x837.jpeg" width="1254" height="837" alt=""></figure></div><p>When a board approves an AI budget, it does so on the best information available to it. Across the 187 senior decision-makers Quaie surveyed in the first quarter of 2026, the intelligence reaching the boardroom came through a consistent set of channels: a technology assessment describing the capability of the tools under consideration, a market comparison showing what peer organisations are committing, and a management recommendation from the leadership team presenting the case for investment. In most organisations, these three inputs constitute the full intelligence picture available to the board at the moment of approval. They are not trivial documents. The process that generates them was built for a different question.</p><p>And yet every one of those inputs is structurally blind to the single piece of intelligence that most determines whether the investment will succeed: the state of the leadership system that will be asked to absorb, implement, and sustain it. The technology assessment does not show which C-suite functions are committed to deployment and which are still weighing the evidence. The market comparison does not show whether the organisation&#8217;s most advanced roles are aligned with its least advanced ones, or how wide the distance between them has become.
The management recommendation reflects what the leadership team collectively presents, which is not the same thing as what the leadership team individually believes. None of these instruments shows the gradient running through the organisation&#8217;s own leadership system. The board is approving capital allocation into a system it cannot see.</p><h3>What boards are currently receiving</h3><p>The Deloitte Global Boardroom Program surveyed 695 board members and C-suite executives across 56 countries in early 2025 and found that two thirds of boards report limited or no knowledge or experience with AI. Nearly a third say AI is not on the board agenda at all. Only 17 per cent address AI at every meeting. When boards do engage with AI, Deloitte found they engage primarily through two channels: the CIO and CTO, cited by 72 per cent of respondents, and the CEO, cited by over half. Engagement with the CFO, the CRO, and the CISO on AI matters remains limited.&#185;</p><p>That channel structure is itself a diagnosis. A board that receives its AI intelligence primarily through the CIO, the CTO, and the CEO is receiving a view of AI adoption shaped by three of the most consistently optimistic functions in the leadership system. The CIO and CTO&#8217;s professional identity, and often their personal conviction, are invested in technology leadership. The CEO is presenting a strategy the board has already endorsed. None are disinterested witnesses to the state of the system. None are positioned to tell the board that the CFO has privately concluded the business case is not yet proven, that the CHRO has unresolved concerns about workforce readiness, or that the COO believes the operating model cannot yet carry what the technology leadership is proposing to build.</p><p>No CIO, CTO, or CEO can reasonably be expected to narrate the full system they sit within. The CFO&#8217;s private assessment of the return timeline is not visible to the technology leaders. The COO&#8217;s concerns about operating model readiness are not part of the CEO&#8217;s strategy presentation. The CHRO&#8217;s workforce transition planning is a separate workstream from the technology deployment programme the board is being asked to fund. The board receives a consolidated account of the organisation&#8217;s AI position. What it does not receive is the unconsolidated reality underneath it: the function-level divergence, the privately held reservations, the evidentiary gaps noted and deferred rather than resolved.</p><p>The board is receiving the best information the available channels can produce. Those channels are structurally incapable of showing what the board most needs to see.</p><h3>The gradient the board cannot see</h3><p>In the first quarter of 2026, Quaie measured the leadership systems of 187 senior decision-makers across ten C-suite functions in mid-to-large enterprises. The Organisational Adoption Gradient &#8212; the distance between the most advanced and least advanced leadership role in the dataset on the five-point adoption scale &#8212; was 0.85 points. That number describes, in a single figure, what no board pack currently shows: the spread within the leadership system itself, between the functions pulling toward full deployment and the functions that have not yet crossed the evidentiary threshold required to commit.&#178;</p><p>A 0.85-point gradient is not a marginal difference.</p>
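<p>To make the metric mechanical, here is a minimal sketch in Python of how a gradient of this kind is computed from role-level scores. The role scores below are hypothetical placeholders on the published five-point scale, not Quaie&#8217;s gated role-level figures; only the 0.85-point spread, and the COO&#8217;s publicly cleared position as the lag role, are drawn from the published findings.</p><pre><code># Hypothetical mean adoption scores per C-suite function on the
# five-point scale (1 = no active AI initiatives, 5 = embedded
# infrastructure). Values are illustrative: they are chosen only so
# the spread matches the published Q1 2026 gradient of 0.85 points
# and so the COO sits furthest back, as the public dataset reports.
# The identity of the lead role is gated, so the ordering of the
# other roles here is invented.
role_scores = {
    "CTO/CIO": 3.45, "CDO": 3.30, "CEO": 3.20, "CMO": 3.10,
    "CFO": 3.00, "CISO": 2.90, "CHRO": 2.80, "CLO": 2.75,
    "CRO/CSO": 2.70, "COO": 2.60,
}

# Organisational Adoption Gradient: the distance between the most
# and least advanced leadership role.
lead = max(role_scores, key=role_scores.get)
lag = min(role_scores, key=role_scores.get)
gradient = role_scores[lead] - role_scores[lag]
print(f"gradient: {gradient:.2f} points ({lead} leads, {lag} lags)")
</code></pre><p>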
On a scale where the distance between experimentation and embedded infrastructure spans three points, a leadership system with a gradient approaching one point contains within it functions operating from fundamentally different pictures of the organisation&#8217;s AI position. The function at the leading edge believes the organisation is making serious progress. The function at the lagging edge believes the evidence base for full commitment has not yet formed. Both are correct about their own position. Neither can see the other&#8217;s position clearly. The board, approving capital allocation into this system, cannot see either.</p><p>This is the intelligence gap no existing board instrument addresses. Not because boards are negligent, but because the instrument required to show a leadership system&#8217;s internal gradient has not previously existed. MIT CISR researchers Peter Weill, Stephanie Woerner, and Jennifer Banner, analysing 2,788 publicly traded US companies with over one billion dollars in revenue, found in 2024 that only 26 per cent of boards were digitally and AI savvy by updated criteria accounting for generative AI and machine learning, and that those companies outperformed their peers by 10.9 percentage points in return on equity. The 74 per cent of companies without AI-savvy boards averaged 3.8 percentage points below their industry average.&#179; The performance differential is real and it is growing. Board AI savviness, as measured by director backgrounds and expertise, is not the same thing as board visibility into the leadership system&#8217;s actual state. A board can have three directors with AI experience and still have no instrument showing which of the organisation&#8217;s ten C-suite functions are committed, which are stalled, and how wide the divergence between them has become.</p><h3>The approval process that cannot ask the right question</h3><p>McKinsey&#8217;s December 2025 analysis of board AI governance found that only approximately 15 per cent of boards currently receive AI-related metrics of any kind. The recommended metrics &#8212; return on investment by business unit, percentage of AI-enabled processes, workforce reskilling progress &#8212; are output measures. They describe what has happened after investment has been made. They do not describe the state of the leadership system before the investment is approved: which functions have reached the evidentiary threshold for commitment, which have not, and what the distance between them implies for the programme&#8217;s likelihood of producing compounding rather than fragmented value.&#8308;</p><p>The question that should precede every AI budget approval concerns the state of the leadership system, not the capability of the technology or the movement of competitors. Whether the system into which capital is about to be deployed is sufficiently aligned to metabolise it productively &#8212; and if it is not, what the board&#8217;s approval is actually sanctioning &#8212; precedes every other consideration in the approval pack. A board that approves a budget without that visibility is not approving an AI programme. It is approving a management intention to run one. The distinction matters because management intention and organisational readiness are not the same thing, and the space between them is precisely where AI programmes encounter their most consequential friction.</p><p>The gap between management intention and organisational readiness is not a minor variance. 
It is the space where most AI programmes quietly fail.</p><p>Consider what the approval conversation looks like in practice. Management presents three to five use cases with projected returns. A technology partner confirms feasibility. A competitor reference demonstrates that the sector is moving. The board asks about data governance, about cybersecurity risk, about the regulatory position. These are necessary questions. They are the questions the board has been equipped to ask by the intelligence it has received. What the board is not equipped to ask, because no instrument has provided the input required, is which specific leadership functions have reached the evidentiary threshold for full commitment and which have not. Whether the CFO&#8217;s private assessment of the return timeline matches the CTO&#8217;s. Whether the COO&#8217;s concerns about operating model readiness have been resolved or deferred. Whether the CHRO&#8217;s workforce transition planning is sufficiently advanced to support what deployment at scale will require. These questions are answerable. They require a different class of intelligence than the board currently receives.</p><p>WTW&#8217;s John Bremen, writing in Forbes on 19 September 2025 in a piece republished on the WTW Insights platform in October, found that only 11 per cent of boards have approved an annual budget for AI projects at all.&#8309; That figure is itself a measure of how far most boards are from a governance process capable of evaluating organisational readiness. A board cannot formally evaluate the readiness of a system it has not yet established a budgeting relationship with. The absence of the role-level intelligence required to assess leadership system state is not a failure of the approval process. It is a gap in the available instruments, and it is a gap the existing approval process was not built to detect or compensate for.</p><h3>What the missing instrument would show</h3><p>A board that could see the Organisational Adoption Gradient within its own organisation&#8217;s leadership system before approving the AI budget would be asking different questions at the approval meeting. The question shifts from whether to invest to where the gradient currently sits and whether it is closing or compounding. Credibility of the management recommendation becomes a secondary matter; the primary question is whether the functions most likely to determine the programme&#8217;s success have reached a stage of adoption that makes full commitment rational. Technical capability recedes as the central variable. The variable that replaces it is whether the leadership system is aligned enough to convert capability into economic value that holds.</p><p>These are not harder questions than the ones boards currently ask. They are different questions, and they require a different class of intelligence to answer them. The technology assessment, the market comparison, and the management recommendation are the right inputs for the questions they are designed to answer. They are the wrong inputs for the question of whether the leadership system is ready to sustain what the board is being asked to approve.</p><p>The practical difference is not abstract. A board that knows the gradient between its most advanced and least advanced C-suite function before approving an enterprise-wide AI deployment can ask management to close that distance as a precondition of committed investment, rather than as a remediation task after the programme has stalled.
It can direct capital toward the alignment work that determines whether deployment produces durable value, rather than toward the deployment itself in a system not yet ready to carry it. Setting a measurable condition for the next approval follows from that: not a milestone in the programme plan, but a movement in the gradient that indicates the leadership system is converging rather than diverging. None of these interventions requires the board to become technically expert in AI. They require the board to see the leadership system it is investing in, which is a governance question, not a technology question.</p><p>The 67.4 per cent of senior leaders in Quaie&#8217;s Q1 2026 dataset who cannot confirm that AI is creating durable economic value in their organisations are not, in most cases, signalling failure.&#8310; They are signalling that investment has been committed into a system whose state was not fully visible at the point of approval, and whose gradient between committed and uncommitted functions has not yet been closed by deliberate intervention. The capital went in. The alignment question was deferred. The gradient remains.</p><h3>The intelligence the report provides</h3><p>The Q1 2026 Role Layer Intelligence Quarterly measured the leadership system that boards are currently approving budgets into. It shows, across 187 senior decision-makers at ten C-suite functions, where the Organisational Adoption Gradient currently sits, which roles are leading and which are following, where alignment is forming and where it is fracturing, and what the distance between the most advanced and least advanced functions implies for organisations making commitment decisions now.</p><p>This is not the intelligence boards are currently receiving. It is the intelligence boards need before they approve the next budget. The instruments currently reaching the boardroom were built to describe external opportunity and management intention. The Role Layer dataset describes the internal system those intentions will have to move through. What falls between the two descriptions is where most AI programmes lose the thread: in the months following approval, when the record has been made and the gradient has not yet been seen.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><h3>Notes and sources</h3><p>&#185; Deloitte Global Boardroom Program, &#8220;Governance of AI: A Critical Imperative for Today&#8217;s Boards,&#8221; second edition. Survey of 695 board members and C-suite executives across 56 countries, January to February 2025. Key findings: 66% of boards report limited to no AI knowledge or experience; 31% say AI is not on the board agenda; only 17% address AI at every meeting. Engagement with management on AI is led by the Chief Information Officer and Chief Technology Officer, mentioned by 72% of respondents, followed by the Chief Executive Officer, mentioned by over half. Other C-suite functions are engaged at materially lower rates: CFOs at 27%, CISOs and CROs at 12% each. Published April 2025. Source: deloitte.com/global/en/issues/trust/progress-on-ai-in-the-boardroom-but-room-to-accelerate.html. Summary also published as Anna Marks, Lara Abrash, and Arno Probst, &#8220;Governance of AI: A Critical Imperative for Today&#8217;s Boards,&#8221; Harvard Law School Forum on Corporate Governance, 27 May 2025.</p><p>&#178; Quaie Role Layer Executive Survey, Q1 2026 (n=187). 
The Organisational Adoption Gradient measures the distance between the most advanced and least advanced leadership role in the dataset on the five-point adoption scale, where 1 represents no active AI investment and 5 represents embedded infrastructure. Fieldwork conducted January to March 2026 across ten C-suite functions: CEO, CTO/CIO, COO, CFO, CMO, CRO/CSO, CDO, CISO, CHRO, CLO. Full methodology: quaie.io/p/methodology.</p><p>&#179; Peter Weill, Stephanie L. Woerner, and Jennifer Banner, &#8220;Digitally Savvy Boards: AI Update,&#8221; MIT Center for Information Systems Research, Research Briefing No. XXV-3, 20 March 2025. Underlying findings also published as Peter Weill, Stephanie L. Woerner, and Jennifer S. Banner, &#8220;AI-Savvy Boards Drive Superior Performance,&#8221; MIT Sloan Management Review, 8 December 2025. Analysis based on machine learning examination of 2,788 publicly traded US companies with over $1 billion in revenue. Companies with digitally and AI-savvy boards outperformed peers by 10.9 percentage points in return on equity. Companies with non-savvy boards averaged 3.8 percentage points below industry average. Only 26% of boards met the updated AI-savvy criteria. Sources: cisr.mit.edu/publication/2025_0301_SavvyBoardsUpdate_WeillWoernerBannerMoore; sloanreview.mit.edu/article/ai-savvy-boards-drive-superior-performance.</p><p>&#8308; McKinsey, &#8220;The AI reckoning: How boards can evolve,&#8221; 4 December 2025. Finding that only approximately 15% of boards currently receive AI-related metrics, citing National Association of Corporate Directors, &#8220;2025 private company board practices oversight survey: Data pack: Artificial intelligence,&#8221; 26 August 2025. Source: mckinsey.com/capabilities/mckinsey-technology/our-insights/the-ai-reckoning-how-boards-can-evolve.</p><p>&#8309; John M. Bremen, &#8220;Lessons in Implementing Board-Level AI Governance,&#8221; originally published in Forbes, 19 September 2025, and republished on WTW Insights, 1 October 2025. Finding that only 11% of boards have approved an annual budget for AI projects. Source: wtwco.com/en-us/insights/2025/10/lessons-in-implementing-board-level-ai-governance; forbes.com/sites/johnbremen/2025/09/19/lessons-in-implementing-board-level-ai-governance.</p><p>&#8310; Quaie Role Layer Executive Survey, Q1 2026 (n=187). 67.4% of respondents could not confirm that AI is creating durable economic value. Source: as note 2.</p>]]></content:encoded></item><item><title><![CDATA[Two Thirds of Your C-Suite Cannot Confirm AI Is Working. That’s Not a Confidence Problem.
]]></title><description><![CDATA[In the first quarter of 2026, Quaie asked 187 senior decision-makers across ten C-suite roles a direct question: can you confirm that AI is creating durable economic value in your organisation?]]></description><link>https://quaie.io/p/two-thirds-of-your-c-suite-cannot-confirm-ai-is-working</link><guid isPermaLink="false">https://quaie.io/p/two-thirds-of-your-c-suite-cannot-confirm-ai-is-working</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 13 Apr 2026 07:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rGWF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7df9d147-3f18-459f-ac46-f25f0e2dcd77_1254x836.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!rGWF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7df9d147-3f18-459f-ac46-f25f0e2dcd77_1254x836.jpeg" width="1254" height="836" alt=""></figure></div><p>In the first quarter of 2026, Quaie asked 187 senior decision-makers across ten C-suite roles a direct question: can you confirm that AI is creating durable economic value in your organisation? Sixty-seven per cent could not. They were not sceptics. They were not laggards. They were executives running live AI programmes, allocating real budgets, and reporting to boards that had approved the investment. They simply had no instrument that told them whether it was working. That is not a confidence problem. It is a measurement problem, and the distinction determines everything that follows: which intervention an organisation commissions, which question its board asks at the next budget review, and whether the capital already committed compounds into durable advantage or accumulates quietly into a position that nobody can evaluate.</p><h3>The instrument gap nobody is naming</h3><p>Every large organisation investing in AI has built some version of a reporting infrastructure around it. Dashboards tracking deployment velocity. Steering committees reviewing programme milestones. Quarterly updates to the board on AI initiatives, headcount committed, and budget allocated. These instruments are not useless. They tell the organisation what is happening. They do not tell the organisation whether what is happening is working.</p><p>The distinction is structural. 
Activity metrics and value metrics are not the same class of measurement, and conflating them produces a reporting environment that looks comprehensive and is not. An organisation can have thirty AI initiatives in production, two hundred people working on them, and fifty million pounds committed, and still have no instrument that connects any of those inputs to a confirmed economic output. The reporting architecture measures the investment. It does not measure the return.</p><p>This is not unique to AI and it is not a new problem. In June 2024, Goldman Sachs published a research note titled &#8220;Gen AI: Too Much Spend, Too Little Benefit?&#8221; Jim Covello, the bank&#8217;s head of global equity research, put the central concern precisely: AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment, and the technology is not currently designed to do that at the cost levels being incurred.&#185; In the same month, David Cahn at Sequoia Capital published what he called &#8220;AI&#8217;s $600B Question&#8221;: a calculation showing that the revenue gap between AI infrastructure investment and confirmed end-user value had grown from $200 billion to $600 billion in the space of nine months.&#178; Neither Covello nor Cahn was arguing that AI has no value. Both were making a measurement point: investment was scaling at a rate that confirmed value was not matching, and the reporting infrastructure available to boards and investors was not designed to detect the gap.</p><p>What was true at the market level in June 2024 is true at the organisational level in April 2026. The gap between AI activity and confirmed AI value is not a question of whether the technology works. It is a question of whether the instrument exists to confirm that it does.</p><h3>What confirmed value actually requires</h3><p>The phrase &#8220;durable economic value&#8221; in Quaie&#8217;s survey instrument was chosen precisely because it raises the evidentiary standard above what most AI reporting currently reaches. Durable means repeatable and persistent: not a one-quarter efficiency gain that reverted when the champion left, not a cost saving that materialised in one function and was absorbed by increased spend in another, not a productivity improvement that showed up in individual output and disappeared at team level. Economic means measurable in the currency that boards and CFOs use: revenue, margin, cost, risk. Value means the output exceeds the input in a way that a sceptical finance director could verify with access to the numbers.</p><p>Most AI reporting does not reach this standard, not because the value is absent, but because the measurement system was not designed to capture it at this level of specificity. Pilot results are measured against pilot metrics. Programme progress is measured against programme milestones. Neither is designed to answer the question a board should be asking: is this investment producing a confirmed, repeatable economic return, and do we know which parts of the organisation are generating it and which are not?</p><p>The Klarna case is the confirmed value story that confirmed value stories should be measured against. In early 2024, Klarna announced that its AI customer service assistant, built in partnership with OpenAI, was doing the work equivalent to 700 full-time agents, handling two thirds of all customer service interactions in its first month of deployment. 
The CEO, Sebastian Siemiatkowski, was explicit: the system was resolving cases nine minutes faster than human agents, customer satisfaction matched human representative scores, and the programme was projected to deliver $40 million in additional profit in 2024.&#179; This was, by every measure available to the organisation at the time, confirmed AI value.</p><p>By May 2025, Siemiatkowski told Bloomberg that &#8220;cost unfortunately seems to have been a too predominant evaluation factor when organising this&#8221; and that the company had seen lower quality as a result. Klarna began rehiring human customer service agents.&#8308; The programme had not failed in a conventional sense. The AI assistant had functioned as described. What had failed was the measurement framework used to confirm value in the first place: customer satisfaction scores and resolution time are activity metrics. They measure whether the interaction completed. They do not measure whether the interaction produced the outcome the customer required, whether quality held across interaction types, or whether the cost saving was sustained across the full range of service demands. The value that was confirmed was real but incomplete. The incompleteness only became visible after the commitment was irreversible.</p><p>This is the measurement failure the 67.4% finding is indexing. Not scepticism. Not resistance. The absence of an instrument capable of confirming value at the level of specificity that sustained commitment requires.</p><h3>What the licence-to-value gap reveals</h3><p>By the end of 2024, Microsoft had sold Copilot licences to 70 per cent of the Fortune 500, a figure Satya Nadella cited on the company&#8217;s FY25 Q1 investor call.&#8309; The headline number describes purchase. It does not describe confirmed value. Lighthouse, a technology consulting firm that examined Copilot deployment patterns across enterprise clients, found that for most organisations, adoption meant pilots and phased rollouts rather than enterprise-wide deployment. The gap between licence acquisition and confirmed production value was not a technology problem. Copilot functioned as described. The gap was a measurement problem: organisations had no consistent instrument for confirming whether the tool was generating durable economic value at the role level before, during, or after deployment.</p><p>This matters because the licence decision and the value confirmation question operate on different timescales and involve different functions. The decision to purchase Copilot licences was typically made at the CTO or CEO level, on the basis of vendor capability assessment, competitive pressure, and board-level AI ambition. The question of whether those licences were generating confirmed economic value fell to a different set of functions: the CFO assessing return on the licence spend, the CHRO measuring workforce productivity impact, the CMO confirming whether AI-assisted content and campaign work was delivering commercial outcomes. Those functions were not involved in the licence decision. They were left to measure value using instruments that were not designed for the purpose.</p><p>The 70 per cent figure describes an AI commitment made at one level of the organisation. 
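</p><p>The shape of the instrument those functions lacked can at least be sketched. What follows is a minimal illustration with hypothetical fields and figures, drawn from neither Quaie&#8217;s survey instrument nor any vendor&#8217;s tooling: a per-role return series, a cost series, and a persistence test a sceptical finance director could rerun.</p><pre><code># Hypothetical sketch: the functions, fields, and figures are illustrative only.
from dataclasses import dataclass

@dataclass
class RoleValueRecord:
    function: str                  # e.g. "CFO", "CHRO", "CMO"
    quarterly_return: list[float]  # measured economic return per quarter
    quarterly_cost: list[float]    # licence and deployment cost per quarter

    def durable(self, min_quarters: int = 2) -> bool:
        """Repeatable and persistent: net-positive in consecutive recent quarters."""
        net = [r - c for r, c in zip(self.quarterly_return, self.quarterly_cost)]
        return len(net) >= min_quarters and all(n > 0 for n in net[-min_quarters:])

    def confirmed(self) -> bool:
        """Durable economic value: output exceeds input, and the gain persists."""
        return self.durable() and sum(self.quarterly_return) > sum(self.quarterly_cost)

cmo = RoleValueRecord("CMO", quarterly_return=[120_000, 90_000], quarterly_cost=[150_000, 70_000])
print(cmo.confirmed())  # False: the gain neither persists nor covers the total cost
</code></pre><p>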
The measurement question it left unanswered was distributed across every other level.</p><h3>The board question that is not being asked</h3><p>Boards approving AI budgets are, in most cases, approving them on the basis of three inputs: a technology assessment confirming the tool is capable, a market comparison confirming that competitors are investing, and a management recommendation confirming that the organisation is ready. None of these inputs answers the question that should precede the approval: do we have an instrument capable of confirming whether this investment is working at the role level, and if not, are we building one?</p><p>The technology assessment tells the board what the tool can do. The market comparison tells the board what peers are spending. The management recommendation tells the board what the leadership team collectively says it believes. What none of them provides is a mechanism for confirming value after the investment has been made, disaggregated to the functions responsible for generating it.</p><p>Consider what a board conversation about AI investment typically contains. A slide showing the number of active AI initiatives. A slide comparing the organisation&#8217;s AI maturity against a sector benchmark. A management summary describing progress against the AI strategy approved twelve months earlier. What it rarely contains is a slide showing which specific leadership functions have confirmed lasting value from AI deployment, which have not, and what the gap between those two positions implies for the capital allocation being requested. That slide does not exist in most board packs because the measurement instrument required to produce it does not exist in most organisations.</p><p>This is not a governance failure in the conventional sense. Boards are not asking the wrong question because they are incurious or incompetent. They are asking the questions they have the instruments to answer. The instruments available to most boards were not designed to confirm role-level economic value from AI investment. They were designed to track programme activity and report it upward. The 67.4% finding is, among other things, a measure of how many organisations have reached significant AI investment without yet building the measurement layer that would tell them whether the investment is justified.</p><h3>Why the gap compounds</h3><p>The measurement problem does not stay static. It compounds, in two directions simultaneously.</p><p>In the first direction: the clock runs. Every quarter that passes without a confirmed value measurement is a quarter in which capital continues to be allocated against an unconfirmed return. The CFO who cannot confirm value today will be asked to approve a larger budget next quarter. The approval decision will be made on the same insufficient evidence base, because the measurement infrastructure has not changed. The investment grows. The confirmation gap grows with it.</p><p>Covello&#8217;s concern at Goldman Sachs was precisely this: that organisations were committing capital at a rate that the available evidence base could not support, and that the absence of a confirmed value measurement was not slowing the commitment. &#8220;Sustained corporate profitability will allow sustained experimentation with negative ROI projects,&#8221; he wrote in the June 2024 note.&#8310; The organisations with the balance sheets to sustain the experimentation will do so regardless of whether the measurement infrastructure exists to confirm the return. 
The ones without that balance sheet flexibility will discover the gap at the point of reckoning, when the capital is already deployed and the confirmation is still missing.</p><p>In the second direction: the absence of confirmed value creates the conditions for a different kind of problem. Roles that are not generating confirmed value begin to diverge from roles that are. A CTO who has confirmed value in one domain and a CMO who has not will interpret the same organisational AI programme differently, because they are experiencing different things. That divergence, left unmeasured, hardens into the misalignment that stalls programmes at the point they appear to be succeeding. The measurement gap and the alignment gap are not separate problems. The first produces the second, and the second is significantly more expensive to resolve than the first.</p><p>The Klarna trajectory illustrates the sequence at compressed speed. Activity metrics confirmed value. Investment scaled. A different measurement standard, applied later, revealed that the confirmed value was incomplete. The correction required rehiring, reputational management, and a public recalibration by the CEO. The cost of that recalibration was materially higher than the cost of building a more complete measurement framework before the commitment scaled.</p><h3>What the 67.4% is indexing</h3><p>Two thirds of senior leaders in the Q1 2026 dataset cannot confirm that AI is creating durable economic value. At the market level, David Cahn at Sequoia had identified a $600 billion gap between AI infrastructure investment and confirmed end-user revenue as of June 2024. At the organisational level, the 67.4% finding locates the same gap within the leadership system: investment is scaling, confirmation is not keeping pace, and the measurement instrument required to close that gap does not yet exist in most organisations.</p><p>That is not a reason to slow down. It is not a reason to question the technology. It is a specific and actionable diagnosis: the organisations that will extract lasting value from AI over the next decade are not necessarily the ones investing most heavily right now. They are the ones building the measurement infrastructure to confirm where value is forming and where it is not, before the capital commitment becomes too large to redirect.</p><p>Organisations that treat 67.4% as a confidence problem will commission better communication, clearer leadership messaging, and more convincing evidence from early pilots. They will not build the instrument, because they have diagnosed the wrong problem. Jim Covello asked, in June 2024, what trillion-dollar problem AI would solve. Most organisations do not yet have an instrument that confirms whether the problem is being solved at all. Building it, before the next capital allocation cycle closes, is the most rational investment a leadership team can make in its own ability to know what it is doing.</p><div><hr></div><p><em><a href="https://quaie.io/p/the-role-layer-intelligence">The Role Layer Intelligence Quarterly </a>applies these observations to role-level enterprise data each quarter.</em></p><p><em><a href="https://quaie.io/p/contribute">Contribute</a> to the research programme to receive the Executive Summary.</em></p><div><hr></div><h3>Notes and sources</h3><p>&#185; Goldman Sachs, &#8220;Gen AI: Too Much Spend, Too Little Benefit?&#8221; June 2024. 
Jim Covello, head of global equity research at Goldman Sachs, quoted: &#8220;My main concern is that the substantial cost to develop and run AI technology means that AI applications must solve extremely complex and important problems for enterprises to earn an appropriate return on investment.&#8221; The report estimated that tech giants and beyond were set to spend approximately $1 trillion on AI capital expenditure in the coming years. Source: Goldman Sachs Top of Mind series, June 2024. goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit.</p><p>&#178; David Cahn, &#8220;AI&#8217;s $600B Question,&#8221; Sequoia Capital, June 20, 2024. Cahn calculated the revenue gap between AI infrastructure investment and confirmed end-user value by multiplying Nvidia&#8217;s run-rate revenue forecast by 2x to reflect total data centre costs, then by 2x again to reflect a 50% gross margin for end-users. The original analysis had placed the gap at $200 billion in September 2023. Source: sequoiacap.com/article/ais-600b-question.</p><p>&#179; Klarna AI customer service deployment: Klarna press release, February 2024. The company announced that its AI assistant, built in partnership with OpenAI, was handling two thirds of all customer service chats in its first month of operation, equivalent to the work of 700 full-time agents. Resolution time was reported as nine minutes faster than human agents. Customer satisfaction scores were described as matching those of human representatives. The programme was projected to deliver $40 million in additional profit in 2024.</p><p>&#8308; Klarna recalibration: Sebastian Siemiatkowski, interview with Bloomberg, May 2025, quoted in CNBC and Fortune. Siemiatkowski acknowledged that cost had been too predominant an evaluation factor and that the AI-first customer service transition had resulted in lower quality. Klarna subsequently announced plans to rehire human customer service agents. Source: CNBC, &#8220;Klarna CEO says AI helped company shrink workforce by 40%,&#8221; May 14, 2025; Fortune, October 2025.</p><p>&#8309; Microsoft Copilot Fortune 500 adoption: Satya Nadella, Microsoft FY25 Q1 earnings call. Microsoft stated that 70% of Fortune 500 companies had adopted Microsoft 365 Copilot. Lighthouse technology consulting analysis noted that for most organisations, adoption meant pilots and phased rollouts rather than enterprise-wide deployment. Source: Microsoft FY25 Q1 investor call; Lighthouse, &#8220;What Microsoft 365 Copilot Adoption Really Looks Like,&#8221; 2025. lighthouseglobal.com.</p><p>&#8310; Covello, Goldman Sachs, June 2024: &#8220;Sustained corporate profitability will allow sustained experimentation with negative ROI projects.&#8221; Source: as note 1.</p><p>&#8311; Quaie Role Layer Executive Survey, Q1 2026 (n=187). 67.4% of respondents could not confirm that AI is creating durable economic value. Fieldwork conducted January to March 2026 across ten C-suite functions: CEO, CTO/CIO, COO, CFO, CMO, CRO/CSO, CDO, CISO, CHRO, CLO. 
Full methodology: quaie.io/p/methodology.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p>]]></content:encoded></item><item><title><![CDATA[The Distance Nobody Measures]]></title><description><![CDATA[The post-mortem on GE Digital runs to thousands of words.]]></description><link>https://quaie.io/p/the-distance-nobody-measures</link><guid isPermaLink="false">https://quaie.io/p/the-distance-nobody-measures</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Tue, 07 Apr 2026 07:01:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!buBG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83be13d1-bb77-47e6-803c-1cee4ee69ec2_1292x812.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!buBG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F83be13d1-bb77-47e6-803c-1cee4ee69ec2_1292x812.jpeg" width="1292" height="812" alt=""></figure></div><p>The post-mortem on GE Digital runs to thousands of words. Cloud transformation&#8217;s runs longer. They have the same last paragraph.</p><p>The roles that commit to investment and the roles that implement it are operating from different pictures of the same programme, and nobody is measuring the distance between them before it hardens into a failure. The consulting market has not missed this through oversight; the intelligence simply does not exist. No instrument currently available to enterprise leaders measures role-level misalignment in real time, tracks it quarter by quarter, and returns a result before the programme has absorbed the cost of the distance. In its absence, the gradient between the roles that lead and the roles that follow is closing or compounding right now, in live programmes, with no dashboard reading to show it. 
Until one does, the 70 per cent transformation failure rate that McKinsey documented across a decade of programme data will replicate itself in AI.</p><p>The evidence is thirty years of documented pattern, and the pattern is specific enough to name.</p><h3>The structural cause the post-mortems keep finding</h3><p>In 2015, General Electric&#8217;s chief executive Jeff Immelt launched GE Digital with a publicly stated ambition: to become a top ten software company by 2020. The investment exceeded $4 billion. Predix was a novel industrial platform. For two years, the numbers supported the thesis. Then GE&#8217;s industrial business units kept filing quarterly results to targets that had been set before the platform existed, and the platform had no mechanism to change that. Multiple analysts and journalists, working independently, arrived at the same conclusion: the technology was not what failed. Immelt&#8217;s conviction did not transfer. The COO layer kept its operational targets. The CFO layer, which would have needed a different kind of evidence to sustain commitment through the years between investment and return, never got it.&#185;</p><p>GE had a lead cluster and a lag cluster inside its own C-suite. Nobody measured the distance between them until the distance had already decided the outcome.</p><p>A decade later, at market scale, the same pattern appeared in cloud. In Q4 2022 and Q1 2023, HFS Research in collaboration with EY surveyed 508 senior executives from Global 2000 enterprises on cloud-native transformation. The finding was precise. Sixty-five per cent of organisations had made cloud a strategic investment. Thirty-two per cent were realising their ambitions. Phil Fersht, HFS chief executive, described CFOs turning to their CIOs and asking what it had all been for. Matt Barrington, EY&#8217;s emerging technologies leader, concluded that half of cloud-native transformations had failed, not because the technology was wrong, but because technology and business objectives were not aligned across the leadership team.&#178; The CIO had committed. The CFO was still waiting for the business case to materialise. The same organisation, the same programme, different roles operating on different timescales, with no instrument to measure the distance between them.</p><p>McKinsey&#8217;s decade of transformation data across industries produced a number that enterprise leaders have memorised and stopped examining: 70 per cent of initiatives fail to achieve their objectives. The explanation, when McKinsey went looking for it, was not technology and not budget. Executives declared alignment and recorded it as established. The leader who approved the initiative and the leader responsible for delivering it were working from different definitions of success. That distance was never measured, not, it turns out, because measuring it was technically difficult, but because nobody had built the instrument.&#179; The pattern held across ERP, CRM, cloud, and digital transformation. It is now holding in AI.</p><h3>Why AI runs the same pattern faster</h3><p>The failure mode in AI is thirty years old. The structural cause is identical to what stalled ERP, CRM, and cloud: the role that commits and the role that implements are operating from different pictures of the same programme. The clock is different, and that difference is what makes this particular iteration harder to absorb than the previous ones. An ERP failure takes roughly two years to become undeniable; cloud, about eighteen months. 
In AI, the investment cycle is short enough that the board is still announcing the initiative when the misalignment is already compounding. Organisations that have not finished explaining the last failure are being asked to account for this one.</p><p>The measurement infrastructure that would detect role-level misalignment before it produces a failure does not exist in any research programme currently available to enterprise leaders. McKinsey&#8217;s sector surveys and Gartner&#8217;s CIO reports measure adoption at the organisational level; neither disaggregates to the function that is blocking the programme. The consulting engagement that interviews a sample of the leadership team and returns a synthesis has the same limitation at higher cost: it is measuring the organisation, not the leadership system operating inside it. None of them tell a CEO which specific roles are misaligned, on which dimensions, and what evidence each lagging role would need in order to move. The difference matters for a practical reason: intelligence that arrives after the programme has stalled is research material for the next post-mortem, not an instrument for the current programme.</p><h3>What role-level measurement reveals</h3><p>The Q1 2026 Role Layer dataset, drawn from 187 senior decision-makers across ten C-suite functions between January and March 2026, produces a finding that aggregate data cannot produce: a measurement of the distance between the roles that are leading and the roles that are not.</p><p>A sceptical reader will notice that the instrument making this argument is also the instrument being cited as evidence for it. That is a fair observation. The Role Layer dataset cannot corroborate itself through external validation that does not yet exist. What it can do is apply a measurement approach to a structural problem that existing research programmes have not attempted to measure, and report what that measurement finds. The 0.85-point Organisational Adoption Gradient is not offered as a settled finding. It is a first reading from a new instrument, and its value is that it makes a previously unmeasured distance visible.</p><p>The Organisational Adoption Gradient between the most advanced and least advanced leadership role in the dataset is 0.85 points on a five-point adoption scale, where one represents no active AI investment and five represents scaled deployment with measurable business impact. That number is the structural signal.</p><p>Forty-seven per cent of the dataset (88 of 187 senior decision-makers) are already committed to scaled AI investment or expect to commit within six months. The momentum is real, and so is the gap. Both facts sit in the same leadership system, in the same organisation, often in adjacent offices. An organisation with a CTO approaching scaled deployment and a COO still weighing the evidence for limited production is not an organisation that is slow on AI. It is an organisation with a 0.85-point gradient running through its leadership layer, and that gradient, left unmeasured and unmanaged, will produce a specific and predictable outcome: the lead function scales, the lag function withholds the operating commitment to sustain it, and the programme stalls at exactly the point it appeared to be succeeding.</p><p>GE&#8217;s gradient was legible in every post-mortem. HFS and EY documented the same pattern across 508 cloud programmes. McKinsey found it across a decade of transformation data in industries that had nothing to do with each other. 
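</p><p>The arithmetic itself is not the hard part, which is part of the point. Here is a minimal sketch of the quarterly reading on invented respondent scores, not the Q1 2026 data; only the five-point scale and the max-minus-min definition follow the methodology note.</p><pre><code># Illustrative only: the respondent scores are invented; the 1-5 scale and the
# max-minus-min definition follow the methodology note (quaie.io/p/methodology).
from statistics import mean

def gradient(responses: dict[str, list[int]]) -> float:
    """Distance between the highest and lowest C-suite function mean."""
    means = [mean(scores) for scores in responses.values()]
    return max(means) - min(means)

previous_quarter = {"CTO": [4, 4, 5, 3], "COO": [3, 2, 3, 3], "CFO": [3, 3, 2, 4]}
current_quarter = {"CTO": [4, 5, 5, 4], "COO": [3, 3, 3, 3], "CFO": [3, 3, 3, 4]}

g_then, g_now = gradient(previous_quarter), gradient(current_quarter)
direction = "compounding" if g_now > g_then else "closing"
print(f"gradient: {g_then:.2f} -> {g_now:.2f} ({direction})")  # 1.25 -> 1.50 (compounding)
</code></pre><p>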
The gradient was present in every case. Nobody was measuring it.</p><p>The 67.4 per cent finding is the one that gets misread most often. Sixty-seven per cent of senior decision-makers unable to confirm that AI is creating durable economic value looks, from the outside, like scepticism or resistance. That reading is wrong on both counts. These are executives who have run the experiments. The experiments have not yet produced the evidence their function requires to commit at scale. That is an evidentiary gap, not an attitudinal one, and it concentrates in specific functions for reasons that are structural, which means it is predictable, and therefore addressable, if you are measuring at the role level.&#8308;</p><h3>The gradient is the diagnosis</h3><p>The enterprises that navigated ERP, CRM, and cloud without stalling shared one structural advantage: their leadership systems converged before the budget commitment became irreversible. The CFO got the evidence base it required. The COO moved. That did not happen by accident. It happened because someone was watching the distance.</p><p>In the organisations where it did not happen (GE, the 68 per cent of cloud programmes that failed to realise their ambitions, the 70 per cent of transformation initiatives that fell short of their objectives) the gradient was present from the beginning. The distance was legible in every post-mortem and absent from every dashboard. Those two facts are the same failure, seen from different ends of the programme timeline.</p><p>The 0.85-point Organisational Adoption Gradient is a present reading, not a historical one. The lead cluster is advancing. The lag cluster is sitting at the evidentiary threshold it has not yet crossed. The distance between them was measured between January and March 2026, in organisations that are making AI investment decisions now, which means the gradient is either closing or compounding at this moment, in real programmes, with real budget consequences.</p><p>The question enterprise leaders are implicitly asking (when will my leadership system reach the point where scaled AI investment becomes rational for every function that needs to sanction it) cannot be answered by aggregate adoption surveys. It requires a measurement instrument that disaggregates to the role level, repeats quarterly, and tracks whether the gradient is closing or compounding. GE did not have that instrument when Predix was scaling. Neither did the enterprises whose cloud programmes stalled at the midpoint of their ambitions. McKinsey&#8217;s analysts were not building it when they assembled the failure data; they were explaining what had already happened, which is the only thing the available intelligence was equipped to do.</p><p>The constraint on enterprise AI adoption is not where most of the current research is looking. It is the distance between the roles that lead and the roles that follow, a distance that is measurable, that has been present in every major technology adoption failure of the past thirty years, and that decides the outcome every time. The technology is not the problem and never was.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><h3>Notes and sources</h3><p>&#185; GE Digital and the Predix platform: Panorama Consulting Group, GE Digital Transformation and Predix Failure, 2021. 
See also The Conversation, &#8220;GE&#8217;s big bet on digital has floundered,&#8221; 2018; Applico, &#8220;Why GE Digital Failed,&#8221; 2023. GE Digital was established in 2015 under CEO Jeff Immelt with a stated target to become a top ten software company by 2020. Following six years of investment exceeding $4 billion, GE scaled back and subsequently wound down the programme. Post-mortems across multiple analyses cite organisational and cultural misalignment between GE&#8217;s industrial business units and the digital initiative as the primary cause of failure.</p><p>&#178; Cloud-native transformation outcomes: HFS Research in collaboration with EY, cloud-native transformation study, Q4 2022 and Q1 2023. Survey of 508 senior executives from Forbes Global 2000 enterprises across 11 countries. Published October 2023. Phil Fersht, HFS CEO and Chief Analyst, quoted in CIO Dive, November 2023. Matt Barrington, Emerging Technologies Leader at EY, quoted in the same report.</p><p>&#179; Digital transformation failure rates: McKinsey and Company, Unlocking Success in Digital Transformations, 2018. McKinsey&#8217;s subsequent work, including Jon Garcia, Common Pitfalls in Transformations, McKinsey.com, 2022, identified consensus-based target-setting and assumed rather than measured alignment as the most consistent factors in transformation failure across industries.</p><p>&#8308; Quaie Role Layer Executive Survey, Q1 2026 (n=187). Organisational Adoption Gradient, Consensus Formation Time, and Confidence Gap measured across ten C-suite functions, January to March 2026. The five-point adoption scale runs from 1 (no active AI investment under consideration) to 5 (scaled deployment with measurable business impact across core functions). Points 2 through 4 represent defined intermediate stages: limited experimentation, committed investment with deployment underway, and approaching scaled deployment respectively. The Organisational Adoption Gradient is the arithmetic distance between the mean score of the highest-ranking and lowest-ranking C-suite function in the dataset. 47.1% of respondents are already committed to scaled AI investment or expect to commit within six months. 67.4% cannot confirm that AI is creating durable economic value. Full <a href="https://quaie.io/p/methodology">methodology</a>.</p>]]></content:encoded></item><item><title><![CDATA[AI Adoption Isn’t a Technology Problem. 
It’s a Timing Problem.]]></title><description><![CDATA[There is a form of strategic anxiety that has settled across the senior floors of most large organisations, and it is worth naming precisely because it is being misdiagnosed.]]></description><link>https://quaie.io/p/ai-adoption-isnt-a-technology-problem-its-a-timing-problem</link><guid isPermaLink="false">https://quaie.io/p/ai-adoption-isnt-a-technology-problem-its-a-timing-problem</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 30 Mar 2026 07:01:04 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qnDR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6938f6b-cb6d-47de-83ba-05a02da58012_2475x1650.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qnDR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6938f6b-cb6d-47de-83ba-05a02da58012_2475x1650.jpeg" width="1456" height="971" alt=""></figure></div><p>There is a form of strategic anxiety that has settled across the senior floors of most large organisations, and it is worth naming precisely because it is being misdiagnosed. It is not anxiety about technology. Most executives have a reasonable working understanding of what large language models can and cannot do, what agentic systems are beginning to make possible, and where the frontier is likely to move over the next two to three years. The anxiety is not about capability. It is about position. About whether the organisation is moving at the right speed, in the right direction, with the right level of commitment visible to the right people.</p><p>What follows from that anxiety is now so consistent across industries and geographies that it has become the dominant pattern of enterprise AI adoption. Budgets are allocated, vendors are selected, pilots are launched, and announcements are made, not because the conditions for value creation are in place, but because the conditions for political exposure have arrived. The pilots stall. The value fails to appear. The investment is quietly reclassified or abandoned. A new cycle begins.</p><p>This is not a technology failure. It is a timing failure. 
And the distinction matters more than almost anything else a senior leader could understand about AI adoption right now.</p><h3>Forty years</h3><p>In 1990, the Stanford economist Paul David published a short paper that became one of the most cited analyses in the study of technological change.&#185; Its argument was simple and uncomfortable. Computers, he observed, were everywhere in the American economy. Their impact on measured productivity was nowhere. David&#8217;s explanation was not scepticism about computing&#8217;s potential. It was a historical observation: the same pattern had occurred before, and it had a structural cause.</p><p>The electric dynamo was commercially viable by the late 1870s. By the turn of the twentieth century, electric motors still accounted for less than five per cent of factory mechanical drive. Two decades of available technology, deployed at negligible scale. When electrification did spread, factories made a revealing mistake: they replaced their steam engines with dynamos while keeping the centralised mechanical power distribution those steam engines had required. The technology changed. The organisation did not. It took until the 1920s, a full four decades after the lightbulb&#8217;s invention, for factories to redesign themselves around electricity&#8217;s actual logic: distributed unit drive, lighter structures, reconfigured floorplans. Only then did the productivity data move.&#178;</p><p>David&#8217;s insight, extended by Erik Brynjolfsson and colleagues into what they later formalised as the productivity J-curve, is that transformative general purpose technologies require co-invention.&#179; The technology is necessary but not sufficient. The complementary organisational innovations, restructured workflows, redesigned roles, rebuilt operating models, realigned incentives, are what convert technological capability into economic output. And those complementary innovations take time. Not because organisations are slow or leaders are failing, but because the coordination required across functions, roles, and decision-making structures is irreducibly complex. It cannot be compressed by urgency, however genuine.</p><p>AI is at the beginning of this curve. Not the end of it. The data makes this difficult to ignore.</p><h3>What the numbers actually describe</h3><p>MIT&#8217;s NANDA research initiative, drawing on over 300 publicly disclosed AI deployments and interviews with representatives from more than 50 organisations, found that approximately 95 per cent of enterprise generative AI pilots fail to deliver measurable impact on profit and loss.&#8308; The methodology behind that figure has been interrogated, and the precise rate is directional rather than definitive, but the direction is not in dispute. McKinsey&#8217;s 2025 survey of nearly 2,000 respondents found that only around five to six per cent of organisations qualified as what it defined as AI high performers, those attributing more than five per cent of EBIT to AI use.&#8309; S&amp;P Global Market Intelligence found that 42 per cent of companies abandoned most of their AI initiatives in 2025, up from 17 per cent the previous year.&#8311; Three independent sources, three different methodologies, the same structural finding.</p><p>These figures are sometimes read as evidence that AI is overvalued. They are better read as evidence that organisations are investing ahead of the conditions required for that investment to produce returns. The technology is not failing the organisations. 
The organisations are not yet configured to succeed with the technology.</p><p>McKinsey&#8217;s analysis of what distinguishes the five per cent of high performers from the rest is revealing precisely because of what it does not say. The difference is not access to better models, larger budgets, or more technically sophisticated teams. The difference is that high performers redesigned workflows before selecting tools, established shared measurement frameworks before deployment, and secured leadership alignment across functions before committing at scale.&#8309; They invested in the organisational preconditions for value creation, not just in the technology itself.</p><p>What the data collectively describes is a population of organisations that moved on the technology timeline rather than the organisational timeline. The gap between those two timelines is where the $227 billion in projected 2025 global AI spend&#8310; is largely disappearing.</p><h3>The coordination problem that vendors cannot solve</h3><p>Enterprise AI adoption is, at its core, a leadership coordination problem. This is not a reframing designed to soften a difficult message. It is a structural observation about how value is created and destroyed in large organisations.</p><p>Every function in the C-suite processes AI through a different evidentiary lens. Finance and technology rarely share the same definition of sufficient evidence. Legal and compliance assess exposure that, in heavily governed industries, can be existential. Operations weighs workflow disruption against efficiency gain on a timeline that does not match the one marketing or strategy is working to. These functions do not reach their conclusions simultaneously, and in the absence of genuine alignment across them, AI investment does not fail at the technology layer. It fails at the coordination layer. Pilots produce results that one function finds compelling and another finds insufficient. Governance frameworks collapse because the legal position was never stable enough to support them. Change programmes stall because ownership was never genuinely shared. The technology performs. The organisation does not absorb it.</p><p>What senior leaders consistently identify as the primary blockers to AI adoption are not technical limitations. They are misalignment on strategy and ownership, an inability to confirm that early investment is producing durable economic value, unresolved governance exposure, and insufficient evidence of organisational readiness to move beyond pilots. Each of these is a coordination failure, not a technology failure. And coordination failures are not solved by moving faster. They are solved by reaching the state of alignment that makes productive movement possible.</p><p>The question of <em>when</em> to commit is therefore a question about where the leadership system currently sits. Not where the technology sits.</p><h3>The asymmetry that is rarely stated directly</h3><p>Investment in AI that arrives after the organisational conditions for value creation have formed is costly primarily in opportunity terms. You moved later than you could have, and you will spend time and resource closing gaps that earlier movers do not face. That is a real and sometimes significant cost.</p><p>Investment that arrives before those conditions have formed is costly in a structurally different way. Capital is deployed into pilots that cannot be absorbed. 
Governance structures are designed before the organisation has enough shared understanding to make them operational. Vendor relationships are established before internal capability exists to extract what those vendors offer. Political capital is spent on change programmes that have no stable coalition behind them. Each of these investments produces not zero return but negative return, because it consumes the attention, budget, and credibility that will be needed when the conditions for productive adoption are eventually present.</p><p>The 42 per cent of organisations that abandoned most of their AI initiatives in 2025 did not abandon them because the technology failed to perform.&#8311; They abandoned them because the commitments were made before the leadership system was prepared to deliver on them. The cost of that premature commitment will be measured not only in the capital written off but in the organisational fatigue and cynicism that accompanies a large failed initiative. That is a harder thing to rebuild than a budget line.</p><h3>What rational timing looks like</h3><p>None of this is an argument for avoidance. The organisations that will fail most completely in AI adoption are not those that moved prematurely. They are those that never developed the capacity to move at all, that treated every signal of organisational unreadiness as a permanent condition rather than a solvable problem.</p><p>Rational timing is an active condition, not a passive one. It describes an organisation that is assembling the preconditions for productive investment: establishing governance frameworks before they are needed under pressure, building alignment between the technology and finance functions on how value will be measured, securing a shared interpretation of ownership that extends beyond the CTO&#8217;s office, and developing a pilot record rigorous enough to distinguish repeatable operating leverage from demo-stage results that will not survive contact with production.</p><p>A useful diagnostic is a simple one. Ask your leadership team: if we had to confirm, to the board, that our current AI investment is creating durable economic value, what evidence would we point to, and would every function in this room agree it was sufficient? The answers to that question, and more precisely the divergence between them, will tell you more about your organisation&#8217;s readiness to scale AI investment than any vendor assessment or maturity framework currently on the market. That condition is measurable. Most organisations have not yet measured it.</p><p>The signal that commitment has become rational is not a feeling of readiness, which can be manufactured. It is evidence that the leadership system has moved past the point where the absence of coordination is the binding constraint on value creation. That point arrives at different times for different organisations. It cannot be announced into existence by a board resolution or a vendor contract. It is an organisational condition, and it is legible to those who know what to look for.</p><p>The organisations that will produce the most durable returns from AI over the next decade will not be those that committed earliest. They will be those that committed when the organisational conditions were present, and that knew, with enough precision, when those conditions had arrived.</p><p>Electricity was commercially viable in 1880. It took forty years to rebuild the factory around it. The lesson is not that the forty years were wasted. 
It is that the forty years were necessary. And that the organisations which redesigned themselves around electricity&#8217;s actual logic, rather than simply installing the technology into structures built for steam, were the ones that captured the full value of what the technology made possible.</p><p>The technology is not the constraint. It has not been the constraint for some time. The question every senior leader should be asking is not <em>how fast can we move?</em> It is <em>have we assembled the conditions that make moving productive?</em></p><p>Those are different questions. The second one is harder to answer. It is also the only one that matters.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Paul A. David, <em>The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox,</em> American Economic Review, Vol. 80, No. 2, May 1990, pp. 355&#8211;361. By 1900 electric motors accounted for less than 5 per cent of factory mechanical drive despite the commercial viability of the dynamo dating to the late 1870s.</p><p>&#178; Warren D. Devine Jr., <em>From Shafts to Wires: Historical Perspective on Electrification,</em> Journal of Economic History, Vol. 43, No. 2, June 1983, pp. 347&#8211;372. The unit drive system &#8212; in which individual motors powered each piece of equipment &#8212; became widely adopted in the 1920s, producing measurable productivity gains approximately four decades after the technology&#8217;s commercial availability.</p><p>&#179; Erik Brynjolfsson, Daniel Rock, and Chad Syverson, <em>The Productivity J-Curve: How Intangibles Complement General Purpose Technologies,</em> American Economic Review: Insights, Vol. 3, No. 3, September 2021. Documents the J-curve pattern in which general purpose technologies produce short-run disruption costs before medium-term performance gains, contingent on complementary organisational co-invention. Brynjolfsson has described the 30 to 40-year lag between factory electrification and measurable productivity gains as the central historical parallel for understanding AI&#8217;s current productivity trajectory.</p><p>&#8308; Aditya Challapally et al., <em>The GenAI Divide: State of AI in Business 2025,</em> MIT NANDA Initiative, July 2025. Analysis based on review of over 300 public AI deployments, interviews with representatives from more than 50 organisations, and a survey of 350 employees. The finding that approximately 95 per cent of enterprise generative AI pilots fail to deliver measurable P&amp;L impact has been contested on methodological grounds; the figure is treated here as directionally significant across the broader body of evidence rather than as a precise point estimate.</p><p>&#8309; Alex Singla, Alexander Sukharevsky, and Lareina Yee, <em>The State of AI in 2025: Agents, Innovation, and Transformation,</em> McKinsey QuantumBlack, November 2025. Survey of approximately 2,000 respondents. Approximately 5.5 per cent qualified as AI high performers attributing more than 5 per cent of EBIT to AI. High performers distinguished primarily by workflow redesign before tool selection and cross-functional leadership alignment before scaled commitment.</p><p>&#8310; IDC, <em>Worldwide AI Spending Guide,</em> 2025. 
Projected global enterprise AI spend of approximately $227 billion in 2025, encompassing software, hardware, and associated services.</p><p>&#8311; S&amp;P Global Market Intelligence, <em>2025 Enterprise AI Survey,</em> cited in industry analysis, July 2025. Survey of over 1,000 enterprises across North America and Europe. 42 per cent of companies abandoned most AI initiatives in 2025, up from 17 per cent in 2024. Average organisation scrapped 46 per cent of proof-of-concepts before production.</p>]]></content:encoded></item><item><title><![CDATA[What a £500K Consulting Engagement Cannot Predict About Your C-Suite's AI Position]]></title><description><![CDATA[At some point in the past three years, most enterprise leadership teams commissioned something: an AI strategy review, a readiness assessment, a structured advisory engagement.]]></description><link>https://quaie.io/p/what-a-500000-consulting-engagement-cant-predict-about-your-c-suites-ai-position</link><guid isPermaLink="false">https://quaie.io/p/what-a-500000-consulting-engagement-cant-predict-about-your-c-suites-ai-position</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 23 Mar 2026 08:02:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!30gp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!30gp!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!30gp!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 424w, https://substackcdn.com/image/fetch/$s_!30gp!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 848w, https://substackcdn.com/image/fetch/$s_!30gp!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!30gp!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!30gp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg" width="1183" height="887" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:887,&quot;width&quot;:1183,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:937238,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://quaie.io/i/190827785?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!30gp!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 424w, https://substackcdn.com/image/fetch/$s_!30gp!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 848w, https://substackcdn.com/image/fetch/$s_!30gp!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!30gp!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F141686a4-d8b9-4654-8977-e7b054e2a758_1183x887.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>At some point in the past three years, most enterprise leadership teams commissioned something: an AI strategy review, a readiness assessment, a structured advisory engagement. The brief was reasonable. The team was credible. The process ran for eight to twelve weeks. Fifteen to thirty people were interviewed. A framework emerged. A deck was presented. Recommendations were made.</p><p>The deck was useful. 
The process was not wasted. And the leadership team left the room less certain than the report suggested they should be.</p><p>The executives being interviewed knew they were being interviewed. They knew who had commissioned the engagement. They knew what a constructive answer looked like. The CFO who had privately decided that the AI investment case was not yet proven did not say so to a consultant sitting in a room booked by the CEO. The COO who believed the CTO was moving faster than the operating model could absorb did not say so in a structured interview being synthesised into a report the CTO would read. The CMO who had watched three AI pilots fail to reach production described the organisation&#8217;s AI ambition in terms that were accurate at the level of intent and misleading at the level of reality.</p><p>The pattern repeated across the room. The CHRO described an AI-enabled talent acquisition pilot as a strategic priority. The CMO spoke about personalisation at scale as the next twelve months&#8217; focus. The COO outlined plans to embed AI into three operational workflows by the end of the year. Each answer was true. Each answer was also constructed for the room it was delivered in. The consultant synthesising those answers into a single report had no instrument for detecting the distance between what each role said and what each role actually believed. The report reflected the organisation&#8217;s collective account of its AI position. It did not reflect the organisation&#8217;s actual state.</p><p>This is not dishonesty. It is the entirely rational behaviour of senior leaders operating in a politically complex environment, observed by an external team whose findings would be visible to their colleagues and their board. The consulting engagement captured the organisation&#8217;s official position on AI. It did not capture the role-level reality beneath it.</p><p>That distinction is not a marginal refinement. It is the difference between a measurement and a performance of measurement. And it is why organisations that commission thorough, expensive AI strategy engagements still find themselves blindsided when the programme they believed was aligned turns out not to be.</p><h3>What the engagement model cannot reach</h3><p>The structural limitation is not the consultant&#8217;s fault. It is a function of the instrument.</p><p>An interview is a social interaction. Social interactions between senior executives and external advisors are governed by well-understood norms around discretion, loyalty, and presentation. A CFO will tell a consultant what the CFO is prepared to have attributed to the CFO. That is categorically different from what the CFO actually believes about the pace of AI investment, the quality of the business case, and the likelihood that the organisation will reach coordinated commitment within the next twelve months.</p><p>A survey is different in kind, not just in method. A CFO or COO completing an eight-minute anonymous survey, alone, without their name attached to the response, is not performing for an audience. They are answering the question they were actually asked. The CFO who would never tell a consultant that they rate their confidence in the organisation&#8217;s AI strategy at two out of five will tell a survey that. 
The COO who believes the adoption gradient between their function and the CTO&#8217;s function is dangerously wide will say so when the answer goes into a dataset rather than into a slide that will be read in a boardroom.</p><p>This is not a methodological preference. It is a structural difference in what each instrument can access. The consulting engagement produces the leadership team&#8217;s official account of its AI position. The survey produces the leadership system&#8217;s actual state. Those are not the same thing. The distance between them is where AI programmes stall.</p><h3>The intelligence market is repricing around exactly this gap</h3><p>That gap is not a secret. The consulting and legacy intelligence industries are confronting it directly, and the financial markets are drawing their own conclusions.</p><p>Gartner&#8217;s stock fell approximately 71% from its November 2024 peak in the months that followed. Forrester, a company generating close to $400 million in annual revenue, is now valued by the market at approximately $105 million. Forrester&#8217;s strategy consulting division saw bookings fall more than 50% in 2025. The firm has since exited strategy consulting entirely. Gartner&#8217;s consulting revenue fell 12.8% in the fourth quarter of 2025 alone.&#185;</p><p>The case for the model rests on proprietary data, original analysis, and direct access to the researchers who produced it. The market is questioning whether the traditional delivery format is still the right vehicle for any of those things.</p><p>What is being disrupted is not intelligence. It is intelligence that can be approximated by a well-prompted AI working from publicly available sources. Aggregated, enterprise-level analysis of the kind that Gartner and Forrester have sold for decades is increasingly available for free, or close to it. The question enterprises are beginning to ask is not whether they need intelligence on AI adoption. It is whether the intelligence they are buying tells them something they could not find elsewhere.</p><p>Role-level data on how specific C-suite functions are positioned on the adoption spectrum, how far apart they are from each other, and whose evidence requirement is blocking coordinated commitment does not exist in any publicly available source. Neither does the answer to the question every CEO is privately asking: whether the alignment they perceive matches the alignment the CMO and COO are actually experiencing.</p><p>It does not exist in Gartner&#8217;s research library. It does not exist in McKinsey&#8217;s sector surveys. It does not exist in the slide deck from the last consulting engagement. It exists only in a dataset built from direct, anonymous, longitudinal fieldwork with the decision-makers themselves.</p><h3>What eight minutes produces that twelve weeks cannot</h3><p>The Q1 2026 Role Layer Dataset is being built with more than 150 senior decision-makers across ten C-suite functions completing an eight-minute survey between January and March 2026. No interviews. No stakeholder maps. No consultant in the room shaping the frame of the question. Each respondent answered alone, anonymously, in the language of their actual position rather than their official one.</p><p>What the dataset shows is not a leadership system moving in one direction. It is a system pulled apart: some roles committed, some stalled, some waiting for evidence that has not yet arrived.</p><p>Consider what that looks like in practice.
A CTO reports being already committed to the next significant AI investment. The CFO, in the same organisation, reports a timeline of twelve to twenty-four months and names financial evidence as the blocker. The CEO, asked separately whether the leadership team is aligned on AI priorities, rates alignment at four out of five. The CFO rates it at two. Those three data points, collected anonymously across eight minutes, describe a leadership system in which the CTO is ready to scale, the CFO is not yet in a position to sanction it, and the CEO believes the gap does not exist. A consulting engagement interviewing all three would return a report describing an organisation with strong AI ambition and broad leadership support. The anonymous survey returns a different picture entirely: a lead role, a lag role, a misperceived alignment score, and a programme that will stall at exactly the point it appears to be succeeding.</p><p>What the dataset also reveals is the direction of travel. Which roles are committed to their next significant AI investment within six months. Which are at twelve to twenty-four months. Which have no current plans. The consulting engagement can capture where each role says it is today. It cannot capture the sequencing of commitment across the leadership system, because sequencing requires the same question asked of the same roles across multiple time periods, with answers given in conditions that remove the pressure to perform alignment.</p><p>That is what longitudinal anonymous fieldwork produces that a point-in-time engagement cannot. Not a better snapshot. A different kind of intelligence entirely. The consulting engagement tells you what the leadership team says about its AI position. The Role Layer Intelligence System tells you what the leadership system actually is, and where it is going.</p><h3>The question the engagement cannot answer</h3><p>Every CEO who has commissioned an AI strategy engagement has, at some point, left the room uncertain whether the alignment the report described was real. Whether the CFO who nodded through the recommendations had privately committed or was still waiting for evidence that had not yet arrived. Whether the COO&#8217;s apparent support reflected genuine conviction or the political calculation of a leader who knew which way the wind was blowing.</p><p>That uncertainty is not a failure of the engagement. It is the natural consequence of using an instrument that captures official positions rather than actual states. The consulting model was not designed to answer the question: what does each specific role in this leadership system actually believe, right now, about the pace, ownership, and direction of AI investment, and what would need to change for that belief to shift?</p><p>It was designed to answer a different question: what does this organisation&#8217;s leadership team collectively say when asked by an external advisor?</p><p>Those are not the same question. The distance between them is where AI programmes stall, where capital allocation decisions go wrong, and where the gap between the CEO&#8217;s conviction and the COO&#8217;s implementation reality quietly decides the outcome before anyone has started measuring it.</p><p>If you are a C-suite executive and have not yet contributed your perspective, you can access the survey <a href="https://quaie.io/p/contribute">here</a>. 
Your responses will be reflected in the next quarterly edition of The Role Layer Intelligence Quarterly.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Gartner and Forrester financial results: Gartner Q4 2025 earnings, reported February 2026. Full year 2025 revenue $6.5 billion, up 4%, with consulting revenue declining 12.8% in Q4 2025. Stock price fell approximately 71% from November 2024 peak to approximately $155. Forrester Q4 2025 earnings, reported February 2026. Full year revenue $396.9 million, down 8% from 2024. Strategy consulting bookings fell over 50% in 2025. Forrester subsequently exited strategy consulting entirely. Market capitalisation approximately $105 million as of February 2026. Source: SaaStr analysis of Gartner and Forrester Q4 2025 earnings, February 2026. Primary sources: Gartner Q4 2025 earnings release, investor.gartner.com, February 2026. Forrester Q4 2025 earnings call transcript, February 2026.</p><p>&#178; Quaie Q1 2026 fieldwork: Role Shift Index, Role Lead&#8211;Lag Ranking, Consensus Formation Time, Role Influence Index, Organisational Adoption Gradient, and Role Alignment Map measured across ten C-suite functions.</p>]]></content:encoded></item><item><title><![CDATA[The CTO-CMO AI Divide]]></title><description><![CDATA[Why the gap between your technology leader and your marketing leader matters more than your tech stack, and what the data says to do about it.]]></description><link>https://quaie.io/p/the-cto-cmo-ai-divide</link><guid isPermaLink="false">https://quaie.io/p/the-cto-cmo-ai-divide</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 16 Mar 2026 08:01:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lABJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lABJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img
src="https://substackcdn.com/image/fetch/$s_!lABJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7511205,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://quaie.io/i/190817012?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lABJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg 424w, https://substackcdn.com/image/fetch/$s_!lABJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg 848w, https://substackcdn.com/image/fetch/$s_!lABJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!lABJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fa663a8-c1b4-48f8-93f5-529f5ab4d903_7952x5304.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In Quaie&#8217;s Q1 2026 fieldwork across ten executive roles, the sharpest divergence emerging from the Q1 fieldwork is not between industries, 
company sizes, or revenue bands. It is between two roles that sit in the same leadership team, attend the same board meetings, and are nominally evaluating the same AI initiatives. The CTO and the CMO reported confidence scores, adoption stages, and blockers so far apart that, without the role labels, you would not place them in the same organisation.</p><p>CTOs clustered at limited production use or scaled deployment. CMOs clustered at experimentation, showing the widest variance of any role in the cohort. CTO confidence in AI&#8217;s durable economic value sat at 4 out of 5. CMO confidence sat at 2 out of 5. CTOs cited integration complexity and security as their primary blockers. CMOs cited ROI uncertainty. These are not minor variations in emphasis. They are fundamentally different assessments of the same technology, formed simultaneously, by two people sitting in the same building.</p><p>The pattern is not accidental. Gartner&#8217;s survey of over 1,200 CIOs conducted in late 2024 found that delivering AI value had moved to the second-highest priority for technology leaders, behind only cybersecurity.&#185; Meanwhile, Gartner&#8217;s survey of 402 senior marketing leaders found that 65 per cent of CMOs said AI advances would dramatically change their role within two years, yet only 5 per cent of marketing leaders not yet piloting AI agents reported significant gains on business outcomes.&#178; Two functions, facing the same technology, arriving at radically different positions on readiness, confidence, and urgency. The external data confirms what Quaie&#8217;s fieldwork shows internally: the CTO/CMO divergence is not a personality conflict. It is a structural feature of how AI adoption spreads through a leadership system, and it has consequences that most CEOs are not equipped to see.</p><p>The CEO&#8217;s instinct is to read this as a communication problem, or as evidence that one of the two roles is not thinking clearly. The CTO concludes the CMO does not understand the technology. The CMO concludes the CTO does not understand the business. Both conclusions feel internally coherent. Neither is correct. The divergence is structural, and it will not resolve by improving the quality of their conversations.</p><p>The reason CTOs move first is not that they are more visionary or more willing to take risk. It is that their role context makes early action rational in ways that the CMO&#8217;s does not. The CTO controls technical infrastructure directly. When they experiment with AI tooling in engineering, the feedback loop is short: results are visible within days or weeks, the cost of failure is contained within their function, and no cross-functional approval is required to iterate. The conditions that make experimentation rational are all present: direct decision authority over the relevant workflow, fast feedback, and contained downside. It is worth noting that the CISO sits at the same production boundary as the CTO in the Q1 data, but for structurally different reasons: security leadership is compelled to engage with AI by defensive necessity, regardless of strategic conviction. Adoption position and conviction are not the same thing, and the distinction matters when reading the rest of the leadership distribution.</p><p>The CMO&#8217;s context is structurally different. The CMO is not an outlier in the adoption distribution.
It sits precisely where the Q1 middle cluster sits, alongside the CHRO, the CLO, and the CDO, roles that share a common feature: their evidentiary thresholds require multi-function alignment before any single deployment can be sanctioned. Marketing outcomes are influenced by variables outside the CMO&#8217;s control: competitive activity, brand equity, seasonal effects, channel dynamics. Attribution is contested. Authority over workflows is frequently shared with agencies and external partners who have their own views on AI. When a CMO deploys AI in campaign planning or audience segmentation, the feedback loop is longer, the measurement is harder, and the cost of a failed experiment is more visible. The CMO Council and Zeta Global&#8217;s 2024 survey of nearly 200 CMOs found that 40 per cent identified proving ROI and demonstrating attribution as the area most in need of improvement within their operations, and 31 per cent cited proving ROI as the primary challenge in gaining organisational adoption of new platforms.&#179; This is not a marketing-specific failing. It is a structural feature of the function: marketing outcomes are multi-touch, multi-variable, and contested in ways that engineering outcomes rarely are. The conditions that made the CTO&#8217;s early action rational simply do not exist for the CMO in the same form.</p><p>The Deloitte State of AI in the Enterprise 2026 report, drawing on 3,235 senior leaders across 24 countries, found that while two-thirds of organisations report productivity and efficiency gains from AI, revenue growth remains an aspiration for the majority: 74 per cent of organisations hope to grow revenue through AI in the future compared to just 20 per cent already doing so.&#8308; The CMO, whose primary accountability is commercial growth rather than operational efficiency, is working in a domain where AI&#8217;s most credible early wins are genuinely less relevant to the evidentiary standard they are expected to meet. A CTO who can demonstrate engineering efficiency gains has something to show. A CMO who needs to demonstrate revenue impact is waiting for evidence the market has not yet reliably produced.</p><p>Role context, not capability, determines who moves first. The CMO who is still evaluating AI twelve months after the CTO has deployed it is not slow. They are responding to a different set of signals, operating under different constraints, and applying a different evidentiary standard, one that is, given their context, entirely appropriate. It is also worth calibrating where the CMO sits in the full leadership picture: considerably ahead of the operational lag cluster, where the COO sits furthest back of any role in the dataset, held there by an evidentiary threshold for operational AI deployment that is higher, not lower, than the CFO&#8217;s threshold for investment approval.</p><p>The damage begins when the CEO reads this as underperformance.</p><p>When the CEO observes that the CTO is at scaled deployment and the CMO is at experimentation, the instinct is to apply pressure. More urgency. More budget. A stronger mandate. But the CMO&#8217;s position is not primarily a function of urgency or budget. It is a function of the conditions under which AI can be tested and validated in a marketing context. Applying pressure does not change those conditions. It produces activity without conviction: pilots that run because they were mandated, not because the CMO has the evidence base to believe they will hold.
The Role Lead-Lag Ranking between these two roles shows a widening temporal gap precisely when this pressure is applied: the CTO continues to advance while the CMO&#8217;s position becomes volatile rather than stable, oscillating between experimentation and limited deployment without crossing the threshold into something durable.</p><p>Gartner&#8217;s research on AI maturity makes this dynamic concrete. A 2024 Gartner survey of 432 organisations found that in high-maturity organisations, 57 per cent of business units trust and are ready to use new AI solutions, compared to just 14 per cent in low-maturity organisations.&#8309; The differentiator was not the technology deployed. It was the degree to which trust had been built across functions before scaling was attempted. CEOs who approved scaling before that trust had formed paid for it in stalled rollouts, reversed commitments, and the erosion of confidence that makes the next initiative harder to advance. This is the CMO&#8217;s caution, correctly read: not resistance, but an absence of the cross-functional trust that makes scaled adoption stable.</p><p>The ROI uncertainty the CMO cites as a blocker is therefore not an excuse. It is a legitimate constraint. And it carries consequences beyond the CMO&#8217;s own function. S&amp;P Global Market Intelligence data shows that 42 per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year&#8217;s rate, with total cost and unclear value cited as the primary reasons.&#8310; CEOs who scaled before the commercial functions had developed a credible value case were disproportionately represented in that figure. The CMO&#8217;s hesitation, where it prevents premature scaling into commercially unvalidated territory, is not the problem. It is, in many cases, the last line of defence against a deployment that will be reversed within eighteen months.</p><p>The gap is not a problem to be eliminated. It is a signal to be understood. Quaie&#8217;s Organisational Adoption Gradient measures it precisely: the distance between the most advanced and least advanced roles in a leadership system. In Q1, the CTO/CMO pairing was among its sharpest expressions. But the gradient is not the risk. What the CEO does with it is.</p><p>CEOs who treat the gap as a performance failure apply pressure and discover, six months later, that the CMO&#8217;s adoption has the appearance of progress without the substance. Initiatives described as in production turn out to be running on one team member&#8217;s enthusiasm, invisible to the CFO who would need to sustain them through a budget cycle. The Role Shift Index tracks this precisely: a CMO position that looks like adoption in a single quarter and reveals itself as fragility only when the next quarter&#8217;s reading arrives.</p><p>CEOs who treat the gap as structural information ask different questions. What does the CMO need in order for AI to be testable against clear commercial outcomes? Which workflows have short enough feedback loops to generate the evidence base required to build conviction? Who in the marketing function has the authority to own an AI initiative without depending on agency sign-off, and can that person be given the conditions the CTO already enjoys: direct authority, fast feedback, contained downside?
The Role Alignment Map asks the deeper question beneath all of these: not where each role sits on the adoption spectrum, but whether the CTO and CMO share a common interpretation of what AI is for, who owns it, and what success looks like. The CMO Council research confirms that 39 per cent of CMOs say functional alignment across the organisation needs to improve, with technology leadership a consistent source of friction.&#8311; That divergence in strategic interpretation is more consequential than the divergence in adoption stage. It determines whether the gap closes as evidence accumulates or hardens into the kind of structural misalignment that stalls everything downstream.</p><p>The Role Influence Index adds a final dimension that CEOs rarely account for. The CTO&#8217;s faster adoption and higher confidence make the role a natural catalyst in board conversations about AI, confirmed by Gartner&#8217;s finding that 63 per cent of CIOs planned to spend on AI and machine learning in 2025, making it among the most actively championed functions at the leadership table.&#185; But the CMO carries commercial authority the CTO does not. Revenue growth, customer acquisition, brand value: these are the outcomes the board ultimately cares about, and they belong to the CMO&#8217;s domain. If the CMO remains privately unconvinced while the CTO advocates publicly, the CEO risks committing to a programme with board-level endorsement but without commercial-leadership conviction. That is the configuration that produces initiatives which look healthy by activity metrics and stall at the first real commercial decision point.</p><p>The technology stack is the wrong frame for this problem. Which AI tools have been purchased, which platforms integrated, which use cases piloted: none of this determines whether the CTO and CMO will converge on shared conviction. The tools are available to both roles. What is not equally available is the role context that makes conviction rational. And no technology decision changes that.</p><p>CEOs who navigate this well share a common approach. They stop reading the CTO/CMO gap as a communication failure and start treating it as a structural feature of adoption that requires deliberate management. They identify the specific conditions the CMO needs to build the same kind of conviction the CTO already has, and they invest in creating those conditions: tighter measurement frameworks, workflows with cleaner attribution, AI use cases selected because they can be validated against commercial outcomes without the attribution problem that makes broader marketing AI so difficult to assess. And they track the gap over time, quarter by quarter, to understand whether it is narrowing toward genuine convergence or hardening into the kind of structural misalignment that stalls everything downstream.</p><p>The gap between your CTO and CMO is not a dysfunction. It is a measurement. What you do with it is a choice.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><ol><li><p>CIO AI priorities: Gartner C-level Communities Leadership Perspective Survey, 2025. Survey of approximately 1,200 CIOs. Delivering AI value ranked as the second-highest priority for CIOs in 2025, behind cybersecurity and risk management. 
63 per cent of CIOs planned to spend on AI and machine learning in 2025.</p></li><li><p>CMO AI adoption and outcomes: Gartner survey of 402 senior marketing leaders, conducted August through October 2025. 65 per cent of CMOs said AI advances would dramatically change their role within two years. Gartner survey of 413 marketing technology leaders, conducted June through August 2025: only 5 per cent of marketing leaders not yet piloting AI agents reported significant gains on business outcomes.</p></li><li><p>CMO ROI and attribution challenges: CMO Council and Zeta Global, CMO Intentions 2024 study. Survey of nearly 200 CMOs at B2B and B2C companies across North America and Europe. 40 per cent identified proving ROI and demonstrating attribution as the area most in need of improvement within marketing operations. 31 per cent cited proving ROI as the primary challenge in gaining organisational adoption of new platforms.</p></li><li><p>Revenue growth as AI aspiration: Deloitte AI Institute, State of AI in the Enterprise 2026. Survey of 3,235 senior leaders, August through September 2025, 24 countries. 74 per cent of organisations reported hoping to grow revenue through AI in the future; 20 per cent were already doing so. Two-thirds reported productivity and efficiency gains.</p></li><li><p>AI maturity and cross-functional trust: Gartner survey of 432 organisations, Q4 2024. In high-maturity organisations, 57 per cent of business units trusted and were ready to use new AI solutions, compared to 14 per cent in low-maturity organisations. High-maturity organisations scored 4.2 to 4.5 on the Gartner AI Maturity Model; low-maturity organisations averaged 1.6 to 2.2.</p></li><li><p>AI initiative abandonment rate: S&amp;P Global Market Intelligence, 451 Research survey, 2025. 42 per cent of companies abandoned most AI initiatives in 2025, up from 17 per cent the previous year. Total cost and unclear value cited as primary reasons.</p></li><li><p>CMO functional alignment gap: CMO Council and Zeta Global, CMO Intentions 2024. 39 per cent of CMO respondents said functional alignment across the organisation needed to improve.</p></li><li><p>Quaie Q1 2026 fieldwork: Adoption stage, confidence in durable value, preparedness, and perceived blockers measured across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). CTO roles clustered at limited production use or scaled deployment; CMO roles showed the widest variance of any group. CTO confidence 4 out of 5; CMO confidence 2 out of 5. CTO primary blockers: integration complexity and security. CMO primary blocker: ROI uncertainty. 
</p></li><li><p>Quaie&#8217;s six analytical constructs (the Role Lead-Lag Ranking, Organisational Adoption Gradient, Role Shift Index, Role Alignment Map, Role Influence Index, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[The Most Expensive Mistake in AI Adoption Is Moving Before You’re Aligned]]></title><description><![CDATA[The most expensive mistake in AI adoption is not choosing the wrong tool.]]></description><link>https://quaie.io/p/the-most-expensive-mistake-in-ai-adoption</link><guid isPermaLink="false">https://quaie.io/p/the-most-expensive-mistake-in-ai-adoption</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 09 Mar 2026 08:02:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1B8R!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1B8R!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!1B8R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg" width="1456" height="728"
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9939565,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://quaie.io/i/189638718?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1B8R!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg 424w, https://substackcdn.com/image/fetch/$s_!1B8R!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg 848w, https://substackcdn.com/image/fetch/$s_!1B8R!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!1B8R!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff61b04a5-7b1b-4d13-ba50-fd378a40f1da_5998x2998.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>The most expensive mistake in AI adoption is not choosing the wrong tool. It is not underinvesting. It is not moving too slowly. 
It is committing before the roles responsible for sustaining that commitment have reached shared conviction about what they are committing to, and then discovering, quarters later, that the initiative was running on one function&#8217;s confidence and another function&#8217;s compliance.</p><p>This pattern is so consistent, across so many organisations and so many technology cycles, that it deserves to be treated as structural rather than incidental. Premature commitment is not a failure of ambition or intelligence. It is the predictable result of an organisation mistaking executive conviction for organisational alignment, and deploying capital against the first while the second has not yet formed.</p><p>The costs are asymmetric in a way that most leadership teams underestimate.</p><p>Acting too late, waiting for alignment when competitors have already moved, carries opportunity cost. But that cost is typically bounded and recoverable. An organisation that enters six months behind its peers, with broad internal alignment and clear evidence, can close the gap more quickly than an organisation that entered early with fractured conviction. The history of enterprise technology adoption supports this. The early movers in cloud migration, ERP deployment, and digital transformation were not always the winners. Frequently, the organisations that entered second or third, having learned from the pioneers&#8217; mistakes and built broader internal consensus before committing, achieved better outcomes at lower cost.&#185;</p><p>Acting too early carries a different kind of cost, and it compounds. The initiative launches with one function&#8217;s conviction and another&#8217;s compliance. Early friction is interpreted as an implementation problem rather than a misalignment problem. Resources are committed to fix what appears to be a technical challenge but is actually a coordination challenge. By the time the real constraint is recognised, the organisation has spent capital, burned political goodwill, and, most damagingly, created the impression among sceptical roles that their concerns were justified all along. The next AI initiative starts not from zero but from a deficit of trust.</p><p>A missed opportunity leaves the organisation where it was. A failed commitment leaves it somewhere worse, with a depleted budget, a sceptical workforce, and a leadership team whose credibility on AI has been damaged. Recovery from the first requires only a decision. Recovery from the second requires rebuilding conviction that was destroyed by the last attempt.</p><p>This is not hypothetical. Two of the most consequential AI failures of the past decade illustrate the pattern at different scales and in different domains, and both are instructive precisely because the technology worked. The failures were organisational.</p><p>Zillow committed $3.75 billion in credit facilities and a $20 billion revenue target to an algorithmically driven home-purchasing programme. The concept was sound. The Zestimate platform, trained on millions of home sales, would predict property values, and Zillow would purchase homes directly, renovate, and resell at a profit. The CEO, Rich Barton, set the pace. The board supported it. The market rewarded it. The workforce grew 32 per cent in nine months.&#178;</p><p>But the conviction was not shared evenly across the functions that needed to sustain it. The data science team knew the algorithm&#8217;s limitations.
The Zestimate had been designed to estimate current market value, a fundamentally different problem from predicting what a home would sell for three to six months later in a market that might have shifted. The operations team was struggling with capacity: labour shortages and supply chain disruptions meant properties sat in inventory longer, increasing exposure to market shifts. The finance function was absorbing the risk: between July and October 2021, revolving credit facilities expanded from $1.5 billion to $3.75 billion, and as late as seventeen days before the purchasing pause, Zillow issued a $700 million debt note toward the programme.</p><p>Most tellingly, managers on the ground were manually overriding the algorithm to buy more aggressively, not because they were reckless, but because the growth target demanded volume, and volume required winning bids, and winning bids required paying more than the model recommended. The machine learning system&#8217;s guardrails were being bypassed not by a technical failure but by an organisational one.</p><p>When the market shifted in Q3 2021, the result was a $421 million loss in a single quarter, the complete shutdown of the iBuying division, and the elimination of 25 per cent of the workforce, roughly two thousand people. Nearly $10 billion in market capitalisation was erased in days.&#179;</p><p>The conventional diagnosis frames this as an algorithm problem. Barton himself used this language. But the evidence does not support it as the primary cause. Opendoor and Offerpad, operating in the same markets with similar algorithmic approaches, navigated the same period without comparable losses. Opendoor reported positive margins in the same quarter. The algorithm was not the distinguishing factor. The organisational conditions surrounding the algorithm were. One function&#8217;s conviction had outrun the alignment of every other function required to sustain it. The gradient between the CEO&#8217;s growth ambition and the data science team&#8217;s confidence, the operations team&#8217;s capacity, and the finance team&#8217;s risk tolerance was steep, and there was no instrument in place to make that gradient visible before capital was deployed against it.</p><p>IBM Watson Health tells the same story at a different scale and over a longer timeline.</p><p>Between 2015 and 2016, IBM spent approximately $4 billion acquiring Truven Health Analytics, Merge Healthcare, Explorys, and Phytel, firms whose combined datasets covered hundreds of millions of patient records, insurance claims, clinical data, and medical imaging. The strategic logic was compelling: expose Watson&#8217;s cognitive computing capabilities to massive healthcare data, and patterns invisible to human clinicians would emerge. The division grew to 7,000 employees. The ambition was explicit: transform cancer treatment, democratise elite medical expertise, reshape how medicine was practised globally.&#8308;</p><p>The commitment was driven by executive conviction. IBM&#8217;s leadership, buoyed by Watson&#8217;s Jeopardy! performance and early research partnerships with Memorial Sloan Kettering, believed the technology was ready for clinical deployment at scale.
The capital followed that conviction: billions in acquisitions, thousands in headcount, partnerships with hospitals across multiple countries.</p><p>What the leadership could not see, because no instrument existed to make it visible, was the gradient between that conviction and the readiness of every other function required to sustain the commitment. The clinical function had not validated Watson&#8217;s recommendations at the standard required for medical practice. A 2017 investigation found internal documents describing unsafe and incorrect treatment recommendations. MD Anderson Cancer Center&#8217;s partnership alone cost $62 million before being shut down, with audits revealing the system had been trained on outdated data.&#8309; The data science function knew the limitations: Watson&#8217;s ability to process structured genetic data was genuine, but its capacity to interpret the unstructured complexity of clinical medicine was nowhere near what the sales function was promising. The $4 billion in acquired data was never successfully integrated: the acquired datasets sat in separate systems, with different formats, different quality standards, and different clinical contexts. And the compliance function was not in a position to provide the oversight that clinical AI required, in one of the most heavily regulated domains in any economy.</p><p>IBM sold Watson Health to Francisco Partners in 2022 for approximately $1 billion.&#8310; But the financial loss, while substantial, was not the deepest cost. Watson Health became the reference case for AI overreach in healthcare. Hospitals that had invested in Watson partnerships carried scepticism into every subsequent AI conversation. The broader sector&#8217;s appetite for AI adoption was dampened for years, not because the technology lacked potential, but because the most prominent commitment had been premature, and the failure was public.</p><p>This is the compounding mechanism. Premature commitment does not just fail in the present. It taxes the future. It makes the next initiative harder to advance, the next budget harder to secure, the next cross-functional conversation more cautious. The organisation does not return to its starting position. It starts from a deficit.</p><p>Early signals from Quaie&#8217;s Q1 2026 fieldwork across ten executive roles show the pre-conditions for this pattern already present in the cohort, not at Zillow or IBM scale, but in the structural dynamics that produce the same outcome.&#8311; The most common response on value confidence was &#8220;too early to tell.&#8221; Only a minority reported high confidence, and that confidence concentrated almost entirely among roles already at scaled deployment. The Organisational Adoption Gradient, the distance between the most confident and least confident roles, was wide. CTOs reported high confidence and advanced deployment. CMOs reported low confidence and early experimentation. CFOs flagged insufficient evidence for capital commitment. CHROs raised workforce readiness concerns that no other role had addressed.</p>
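<p>To see what the gradient measures, here is a minimal sketch of the computation, assuming each role&#8217;s position is scored on a simple 0&#8211;4 adoption-stage scale. The scale, the role list, and the scores are invented for illustration; they are not Quaie&#8217;s published methodology or its fieldwork data.</p><pre><code># Toy Organisational Adoption Gradient: the distance between the most
# and least advanced executive roles. All scores are hypothetical.

STAGES = ["none", "exploring", "piloting", "deployed", "scaled"]  # 0..4

role_stage = {
    "CTO": 4,   # scaled deployment, high confidence
    "CEO": 3,
    "COO": 2,
    "CFO": 1,   # insufficient evidence for capital commitment
    "CMO": 1,   # early experimentation, low confidence
    "CHRO": 0,  # workforce readiness concerns not yet addressed
}

def adoption_gradient(stages):
    """Distance between the most and least advanced roles."""
    return max(stages.values()) - min(stages.values())

average = sum(role_stage.values()) / len(role_stage)
print(f"enterprise average stage: {average:.1f}")   # 1.8 reads as steady progress
print(f"adoption gradient: {adoption_gradient(role_stage)} of {len(STAGES) - 1}")
</code></pre><p>The last two lines carry the point: the enterprise-level average reads as moderate, steady progress, while the gradient shows the maximum possible spread between functions.</p>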
<p>The roles signalling most appetite for accelerated commitment were the CTO and CEO, the functions closest to the technology&#8217;s potential and the strategic pressure to act on it. The roles most likely to report carrying unresolved risk from commitments already made were the CFO and CHRO. The CFO&#8217;s risk was financial: expenditure committed against returns that had not yet materialised. The CHRO&#8217;s risk was human: workforce implications arriving from automation decisions made by other functions, without the planning inputs needed to manage them.&#8312; Neither had set the pace. Both were absorbing consequences that originated elsewhere in the organisation.</p><p>This is the Zillow anatomy at a more modest scale: one function&#8217;s conviction outrunning the alignment of the others. The Role Alignment Map makes this directly measurable: not just where roles sit on the adoption spectrum, but whether they share a common interpretation of AI&#8217;s strategic priorities and ownership. Early signals confirm that for most organisations in the cohort, that shared interpretation has not yet formed. The Role Influence Index adds a further dimension: the roles driving adoption decisions were not always the roles best positioned to assess the full organisational risk. A highly influential CTO or CEO pushing for scale before the CFO, CHRO, and CLO have reached equivalent conviction is not leading consensus formation. The organisation may believe it is committed while the functions that carry the largest unresolved risks remain privately unconvinced. The conditions for failure form before any outcome is visible, in the gap between roles that current enterprise-level intelligence cannot see.</p><p>The question for any leadership team sitting in the pre-consensus phase is not whether to invest in AI. It is whether the investment matches the organisational conditions, or whether capital is being deployed against a conviction that has not yet become shared.</p><p>The diagnostic is not complicated. In which functions has AI usage become predictable and owned, and in which is it still dependent on individual champions? What are the specific blockers each role cites, and are they the same concerns, or fundamentally different ones requiring different responses? Is the gap between the most advanced and least advanced roles narrowing or widening? If the budget doubled tomorrow, which functions could absorb it productively, and which would convert it into activity without durability?</p><p>These are uncomfortable questions because they surface disagreements that leadership teams often prefer to leave implicit. They are also the questions that determine whether capital allocation produces compounding value or compounding waste.</p><p>The most expensive mistake in AI adoption is not moving too slowly. It is moving before you know whether the roles responsible for sustaining the commitment are aligned. Zillow did not lack ambition. IBM did not lack investment. Both lacked a way of seeing, before they committed, how far apart the functions required to sustain that commitment actually were.</p><p>The distance between conviction in the room and alignment across the organisation is where the most consequential risk sits. Measuring that distance, before capital is deployed against it, is not caution. It is the most rational investment an organisation can make.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s Ongoing Research Series, examining how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Second-mover advantage in enterprise technology adoption: The pattern of later entrants achieving better outcomes is documented across ERP, cloud migration, and digital transformation cycles.
Panorama Consulting Group&#8217;s longitudinal ERP reports (2010&#8211;2020) found that more than 70 per cent of ERP implementations failed to meet objectives, with organisations that sequenced adoption by function and built cross-functional alignment before committing achieving materially better outcomes. McKinsey Digital estimated $100 billion in failed cloud migrations (&#8220;Cloud&#8217;s trillion-dollar prize is up for grabs,&#8221; February 2021), with premature enterprise-wide commitment a recurring factor.</p><p>&#178; Zillow iBuying programme: Zillow Group public filings, SEC filings, and earnings call transcripts, 2019&#8211;2021. $20 billion revenue target: Rich Barton&#8217;s statements on earnings calls. 32 per cent workforce growth in nine months: Zillow Group reporting. Credit facility expansion from $1.5 billion to $3.75 billion, and $700 million debt note issued 1 October 2021 (seventeen days before purchasing pause): Zillow Group SEC filings.</p><p>&#179; Zillow iBuying collapse: Q3 2021 loss of $421 million on Zillow Offers. Complete shutdown of iBuying division. Elimination of approximately 2,000 employees (25 per cent of workforce). Market capitalisation loss of approximately $10 billion. Source: Zillow Group Q3 2021 earnings release and SEC filings. Opendoor reported positive margins in the same quarter: Opendoor Technologies Q3 2021 earnings.</p><p>&#8308; IBM Watson Health acquisitions: Approximately $4 billion in acquisitions, 2015&#8211;2016. Truven Health Analytics ($2.6 billion), Merge Healthcare ($1 billion), Explorys, and Phytel. Division grew to approximately 7,000 employees. Source: IBM public filings, SEC filings, and press reporting.</p><p>&#8309; IBM Watson Health clinical failures: Internal documents describing unsafe and incorrect treatment recommendations: STAT investigation, 2017. MD Anderson Cancer Center partnership, $62 million spent, shut down 2017: University of Texas audit. Multiple partners scaling back or discontinuing oncology projects by 2018: reported across healthcare and technology press.</p><p>&#8310; IBM Watson Health sale: Sold to Francisco Partners for approximately $1 billion, reported January 2022. Source: Wall Street Journal, Bloomberg, and IBM public announcement.</p><p>&#8311; Quaie Q1 2026 fieldwork: Early signals from confidence, preparedness, adoption stage, and perceived blocker data across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). High confidence concentrated among roles at scaled deployment. &#8220;Too early to tell&#8221; the most common response on value confidence.</p><p>&#8312; Role-level risk distribution: Early fieldwork signals indicate CTO and CEO roles most likely to advocate for accelerated commitment, with CFO and CHRO roles most likely to report carrying unresolved risk from commitments made by other functions. Blocker distribution: ROI uncertainty (CEO, CMO), integration complexity and security (CTO), insufficient evidence for capital commitment (CFO), workforce readiness (CHRO), regulatory exposure (CLO).
Source: Quaie Q1 2026 fieldwork, early signals.</p><p>Quaie&#8217;s six analytical constructs referenced in this essay (the Organisational Adoption Gradient, Role Alignment Map, Role Influence Index, Role Shift Index, Role Lead-Lag Ranking, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[What ERP Taught Us About AI and What Leaders Have Already Forgotten]]></title><description><![CDATA[In the early 1990s, enterprise resource planning software was going to transform how organisations operated.]]></description><link>https://quaie.io/p/what-erp-taught-us-about-ai-and-what-leaders-have-already-forgotten</link><guid isPermaLink="false">https://quaie.io/p/what-erp-taught-us-about-ai-and-what-leaders-have-already-forgotten</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 02 Mar 2026 08:02:33 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!7jpI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb70a0f13-a18d-4c23-ab54-2d2462135947_1254x836.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!7jpI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb70a0f13-a18d-4c23-ab54-2d2462135947_1254x836.jpeg" width="1254" height="836" alt=""></figure></div><p>In the early 1990s, enterprise resource planning software was going to transform how organisations operated. The technology was ready. The vendors were confident. The business case was compelling. And for the next decade, more than seventy per cent of ERP implementations failed to meet their objectives, with average cost overruns approaching twice the original budget.&#185; Not because the software was bad.
Because organisations treated a coordination problem as a technology problem, and discovered, expensively, repeatedly, over years, that the difference matters.</p><p>Thirty years later, the same pattern is unfolding with artificial intelligence. The technology is further ahead than the organisations attempting to absorb it. The gap between technical capability and organisational readiness is widening, not narrowing. And the leaders best positioned to recognise what is happening are the ones who lived through ERP and have not yet connected the parallels. The fog of hype around ERP has long since lifted. What remains is a well-documented record of how organisations adopt general-purpose technology that touches every function, and how consistently they underestimate what that requires.</p><p>ERP systems promised a single integrated platform across finance, operations, HR, procurement, and supply chain. The technology delivered on that promise. What it required in return was something vendors mentioned in passing and organisations discovered in practice: every function had to agree on shared processes, shared data definitions, shared workflows, and shared accountability. A finance team that had always owned its own reporting structure now needed to reconcile that structure with operations. A procurement function that had built its processes around local flexibility now needed to standardise across the enterprise. An HR team that had never shared data with supply chain now needed to operate on a common platform with common rules.</p><p>Each of these was a coordination problem, not a technical one. And each was owned by a different role, with different incentives, different priorities, and a different definition of what a successful implementation would look like.</p><p>The result was predictable in hindsight and painful in practice. Implementations budgeted for eighteen months took three to five years. Projects scoped at one cost came in at two or three times the estimate. Hershey&#8217;s $112 million ERP deployment cut testing phases to meet an aggressive deadline and failed on launch, with transactions unable to flow across its CRM, ERP, and supply chain systems.&#178; FoxMeyer, a major pharmaceutical distributor, saw its ERP implementation contribute to the company&#8217;s bankruptcy.&#179; These were not small companies making amateur mistakes. They were sophisticated organisations undone by the same structural problem: the technology worked, but the functions responsible for making it work could not align fast enough to make it stick.</p><p>The organisations that succeeded did something specific. They sequenced. They identified which functions were ready to move first, let those functions stabilise, and used the evidence from early adopters to build the case for follow-on functions. They accepted that different roles would move at different speeds and treated lag as a structural feature rather than a failure of commitment. They measured alignment across functions before committing capital to the next phase. And they treated timing as a decision variable, understanding that moving too early in a function that was not ready carried costs that compounded across the entire programme.</p><p>The organisations that failed did the opposite. They set enterprise-wide deadlines. They mandated simultaneous adoption. They confused executive ambition with organisational readiness. And they spent years unwinding the consequences.</p><p>The parallels with AI adoption are not approximate. 
They are structural.</p><p>AI, like ERP, is a general-purpose capability that touches every function. Marketing uses it differently from engineering. Finance evaluates it against different criteria than operations does. The CEO must reconcile perspectives from roles that are experiencing different realities, operating under different constraints, and reaching different conclusions from the same evidence.</p><p>AI, like ERP, is being sold on a timeline that reflects the technology&#8217;s readiness, not the organisation&#8217;s. Vendors promise rapid deployment. Pilots show quick wins. The pressure to scale is intense. But the organisational coordination required to move from a successful pilot to a durable, cross-functional capability operates on a fundamentally different timescale, and no amount of urgency changes that.</p><p>AI, like ERP, produces its sharpest failures not when the technology breaks but when alignment breaks. Ninety-five per cent of enterprise generative AI pilots fail to deliver measurable financial returns.&#8308; According to IDC, for every thirty-three prototypes a company builds, four reach production.&#8309; Nearly two-thirds of organisations remain stuck in the pilot stage. According to S&amp;P Global Market Intelligence, forty-two per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year&#8217;s rate.&#8310; BCG&#8217;s widely cited research puts the ratio at ten per cent algorithms, twenty per cent technology and data, seventy per cent people, processes, and cultural change.&#8311; The dominant blockers are not technical. They are misalignment on value, risk, ownership, and timing across the roles responsible for making it work.</p><p>And AI, like ERP, is being treated as a deployment problem when it is a coordination problem. The difference is not semantic. A deployment problem responds to better tools, more investment, faster timelines. A coordination problem responds to sequencing, alignment, and the patient accumulation of shared conviction across functions. Applying deployment solutions to a coordination problem does not accelerate progress. It widens the gap between the roles that have moved and the roles that have not, which is precisely the gap that stalls everything downstream.</p><p>Quaie&#8217;s Q1 2026 fieldwork with senior decision-makers is surfacing this dynamic at a level of resolution that the ERP era lacked. When we measure AI adoption readiness across executive roles, the hypothesis is that CTOs will report high confidence and advanced adoption while CMOs report low confidence and early-stage experimentation, with the gap between the most advanced and least advanced roles exceeding every other variable measured. The Role Shift Index is designed to track that divergence quarter by quarter. The Role Alignment Map adds a complementary view: where the Role Shift Index tracks divergence in adoption stage, the Alignment Map tracks whether roles share a common interpretation of AI&#8217;s strategic priorities and ownership. In the ERP era, this was the dimension that most consistently separated successful implementations from failed ones: not whether functions had deployed the technology, but whether they had converged on a shared understanding of what it was for and who was responsible for it.</p>
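<p>To make the distinction concrete, here is a minimal sketch of the kind of pairwise measure an alignment map rests on: the overlap between each pair of roles&#8217; stated top AI priorities. The priority sets are invented for illustration; Quaie&#8217;s construct is defined in its published methodology.</p><pre><code># Toy alignment measure: overlap between each pair of roles' stated
# top AI priorities. All priority sets are hypothetical.

from itertools import combinations

priorities = {
    "CEO": {"revenue growth", "competitive positioning", "cost"},
    "CTO": {"platform consolidation", "security", "cost"},
    "CFO": {"cost", "evidence of ROI", "risk containment"},
    "CMO": {"brand safety", "content velocity", "revenue growth"},
}

def alignment(a, b):
    """Share of priorities two roles hold in common (0 = none, 1 = identical)."""
    return len(a.intersection(b)) / len(a.union(b))

for (role_a, set_a), (role_b, set_b) in combinations(priorities.items(), 2):
    print(f"{role_a}-{role_b}: {alignment(set_a, set_b):.2f}")
</code></pre><p>Every pair in this toy example shares at most one priority. Four functions can each be &#8220;committed to AI&#8221; while agreeing on almost nothing about what it is for, which is exactly the condition the Alignment Map is designed to expose.</p>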
<p>The blocker distribution is likely to tell the same story from a different angle. ROI uncertainty is expected to dominate among CEOs and CMOs. CTOs are more likely to cite integration complexity and security concerns. If that pattern holds, the organisation will not be facing one problem but several, distributed across the people responsible for solving them, with no shared view of where those problems sit relative to each other. The Organisational Adoption Gradient, the distance between the most advanced and least advanced roles, is designed to make this spread visible rather than allowing it to be concealed within an enterprise-level average. The Role Influence Index reveals a further dimension relevant to the ERP parallel: in both eras, the roles with the greatest formal authority over technology decisions were not always the roles with the greatest influence over whether adoption succeeded. Understanding which roles act as catalysts, validators, or gatekeepers, and whether that influence structure is consistent with the coordination the organisation needs, is as important now as it was then.</p><p>This is the ERP failure pattern, repeating with different technology and the same organisational dynamics.</p><p>The leaders who navigated ERP successfully learned something that most AI adoption frameworks have not yet absorbed: the technology is the easy part. The hard part is getting different roles, with different authorities, different risk tolerances, and different definitions of success, to converge on a shared commitment, and doing so in the right sequence, at the right pace, with the right evidence at each stage.</p><p>That lesson cost organisations billions of pounds and a decade of difficult implementation to learn. It is available for free to any leadership team willing to apply it to AI.</p><p>The starting point is the same now as it was then. Map readiness by role, not by department. Identify which functions are prepared to move and which are waiting for evidence, governance clarity, or budget justification. Sequence investment so that early movers generate proof that follow-on roles can evaluate, rather than mandating simultaneous adoption that creates friction between functions moving at different speeds. And treat the gap between roles as information, not failure. It tells you where alignment must form before commitment becomes rational. The Role Lead-Lag Ranking makes this sequencing visible, tracking the temporal distance between roles as they move through adoption stages, and revealing whether the organisation is converging toward shared conviction or pulling further apart. Consensus Formation Time estimates how long that convergence is likely to take, giving leaders a forward-looking view of their decision timeline rather than a backward-looking account of what has already been deployed.</p>
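<p>A minimal sketch shows the shape of a lead-lag reading. The quarterly stage histories below are invented, on the same illustrative 0&#8211;4 stage scale used earlier; the sketch simply ranks roles by when each first reached a piloting stage and reports its lag behind the leader.</p><pre><code># Toy lead-lag ranking: for each role, find the first quarter it reached
# the piloting stage (2 on the 0-4 scale), then rank roles by that quarter.
# All histories are hypothetical.

PILOTING = 2

history = {                 # adoption stage per quarter, Q1..Q4
    "CTO": [2, 3, 3, 4],
    "CEO": [1, 2, 3, 3],
    "CFO": [0, 1, 1, 2],
    "CMO": [0, 0, 1, 1],
}

def first_quarter_at(stages, threshold):
    """Index of the first quarter at or above the threshold, else None."""
    for quarter, stage in enumerate(stages):
        if stage >= threshold:
            return quarter
    return None

reached = {role: first_quarter_at(s, PILOTING) for role, s in history.items()}
ranked = sorted((r for r in reached if reached[r] is not None),
                key=lambda r: reached[r])

lead = reached[ranked[0]]
for position, role in enumerate(ranked, start=1):
    print(f"{position}. {role}: piloting from Q{reached[role] + 1}, "
          f"lag {reached[role] - lead} quarter(s)")
for role, quarter in reached.items():
    if quarter is None:
        print(f"   {role}: not yet at piloting")
</code></pre><p>On these invented histories the CFO trails the CTO by three quarters and the CMO has not yet arrived at all. Whether those distances are narrowing or widening from one reading to the next is the convergence signal the ranking exists to surface.</p>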
<p>The question is whether leaders will apply what the ERP era taught, or whether the lesson will need to be relearned from scratch, expensively, repeatedly, over years, because the urgency of the technology obscured the patience the organisation required.</p><p>The technology has changed. The organisational problem has not.</p><div><hr></div><p><em>This is the first essay in Quaie&#8217;s ongoing research into how organisations decide to adopt AI, role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; ERP implementation failure rates and cost overruns: Panorama Consulting Group, annual ERP reports (2010&#8211;2020). Panorama&#8217;s longitudinal research consistently found that 55&#8211;75 per cent of ERP implementations failed to meet objectives, with average cost overruns of 59&#8211;189 per cent depending on measurement year and methodology. Gartner research on ERP implementation outcomes corroborates this range.</p><p>&#178; Hershey&#8217;s ERP failure: Hershey Foods implemented a $112 million SAP R/3, Siebel CRM, and Manugistics supply chain system in 1999. The company compressed a 48-month implementation schedule to 30 months to meet a deadline. The system went live in July 1999, the beginning of the peak Halloween ordering season, and failed to process orders correctly. Hershey reported a 19 per cent decline in third-quarter profits. Reported widely; primary coverage in CIO Magazine, Wall Street Journal, and subsequent Harvard Business School case studies.</p><p>&#179; FoxMeyer Drug bankruptcy: FoxMeyer, the fourth-largest pharmaceutical distributor in the United States, filed for bankruptcy in 1996 following a failed SAP R/3 and Delta III warehouse automation implementation. The company&#8217;s bankruptcy trustee subsequently sued SAP, Andersen Consulting (now Accenture), and other parties. The case is widely studied in information systems literature as an example of catastrophic ERP failure. See: Scott, J.E. and Vessey, I. (2002), &#8220;Managing Risks in Enterprise Systems Implementations,&#8221; Communications of the ACM, 45(4).</p><p>&#8308; 95 per cent of generative AI pilots failing to deliver measurable financial returns: Reported across multiple analyst sources, 2024&#8211;2025. Gartner predicted in July 2024 that at least 30 per cent of generative AI projects would be abandoned after proof of concept by the end of 2025 (Gartner Data &amp; Analytics Summit, Sydney, July 2024).</p><p>&#8309; IDC prototype-to-production ratio: IDC research findings on enterprise AI deployment, cited across industry reporting, 2024&#8211;2025. For every 33 AI prototypes built, approximately 4 reached production deployment.</p><p>&#8310; S&amp;P Global Market Intelligence: 42 per cent of companies abandoned most AI initiatives in 2025. S&amp;P Global Market Intelligence, 451 Research survey, published 2025.</p><p>&#8311; BCG AI adoption composition: Boston Consulting Group, &#8220;From Potential to Profit: Closing the AI Impact Gap&#8221; (AI Radar 2025), January 2025. Survey of 1,803 C-level executives across 19 markets.
BCG&#8217;s related publications cite approximately 70 per cent of AI challenges stemming from people, processes, and cultural change rather than technology.</p><p>Quaie&#8217;s six analytical constructs referenced in this essay (the Role Shift Index, Organisational Adoption Gradient, Role Alignment Map, Role Influence Index, Role Lead-Lag Ranking, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in subsequent essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[The Incentive Problem Behind AI’s Biggest Blind Spot]]></title><description><![CDATA[Artificial intelligence may be the first major technology where every participant in the ecosystem has a financial incentive to misdiagnose the primary constraint on its adoption.]]></description><link>https://quaie.io/p/the-incentive-problem-behind-ai-biggest-blind-spot</link><guid isPermaLink="false">https://quaie.io/p/the-incentive-problem-behind-ai-biggest-blind-spot</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 23 Feb 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RCrF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00fedfae-fcfa-4d55-adcb-98c34584273d_5362x3225.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!RCrF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F00fedfae-fcfa-4d55-adcb-98c34584273d_5362x3225.jpeg" width="5362" height="3225" alt=""></figure></div><p>Artificial intelligence may be the first major technology where every participant in the ecosystem has a financial incentive to misdiagnose the primary constraint on its adoption.</p><p>Venture capitalists need AI adoption to accelerate. Their fund cycles depend on it.
A typical VC fund operates on a seven-to-ten-year horizon, deploying capital into companies whose valuations assume rapid market penetration. A thesis that organisational AI adoption follows a generational timescale, ten to twenty years of slow, uneven, structurally constrained transformation, is not a thesis any fund can take to its limited partners. The incentive is to believe in speed, invest accordingly, and interpret every enterprise AI contract as evidence that the acceleration is happening.</p><p>AI companies need adoption to accelerate for a more immediate reason: revenue. Their growth models are built on the assumption that enterprises will move from pilot to production to enterprise-wide deployment on a timeline measured in quarters. When ninety-five per cent of pilots fail to deliver measurable returns and two-thirds of organisations remain stuck in experimentation, the vendor&#8217;s instinct is to treat this as a sales execution problem or a product gap, something the next feature release or the next integration partnership will fix.&#185; The possibility that the constraint is structural, seated in the organisation itself, and operates on a timescale no product roadmap can compress is not a possibility the quarterly earnings call is designed to surface.</p><p>Media outlets need the acceleration narrative for a different but equally binding reason: attention. Urgency drives engagement. &#8220;AI is transforming everything&#8221; generates clicks, subscriptions, and advertising revenue. &#8220;AI adoption will take a decade and the primary bottleneck is organisational coordination&#8221; does not. The result is a media environment that systematically amplifies speed and suppresses friction, not out of dishonesty, but because the economics of attention select for urgency over accuracy.</p><p>Consultancies face a subtler version of the same problem. Their revenue depends on organisations believing that AI transformation is achievable within the scope of an engagement, typically six to eighteen months. A consultancy that tells a client &#8220;this will take five to ten years and the binding constraint is alignment across your leadership team, not your technology stack&#8221; is a consultancy that just lost a project. The incentive is to scope the problem as solvable within the budget cycle, even when the evidence suggests it is not.&#178;</p><p>Analysts and research firms occupy the final corner of the ecosystem. Their business model depends on enterprises paying for intelligence that informs near-term decisions. Annual surveys measuring enterprise AI adoption at the aggregate level, what percentage of companies have deployed, what percentage plan to invest, serve this function. They provide a snapshot that confirms the market is moving. What they do not provide, because their methodology is not designed to capture it, is the internal coordination dynamics that determine whether any individual organisation&#8217;s adoption will succeed or stall.&#179; The unit of analysis is the enterprise. The unit of decision is the role. The gap between these two is where the most consequential information sits, and where no one in the current ecosystem has an incentive to look.</p><p>This is not a conspiracy. It is a structure. Every participant is acting rationally within their own incentive framework. The VC is optimising for fund returns. The vendor is optimising for revenue growth. The media outlet is optimising for engagement. The consultancy is optimising for project scope. 
The analyst firm is optimising for renewal rates. Each produces insight that is genuinely useful within its frame. None of them is lying. But the sum of their individual rationalities produces a collective blind spot: nobody in the ecosystem is structured to see, measure, or report on the organisational coordination problem that the evidence consistently identifies as the primary constraint.</p><p>The evidence is substantial. BCG estimates that seventy per cent of the AI challenge sits in people, processes, and cultural change, not technology or algorithms.&#8308; OpenAI&#8217;s own enterprise research concludes that organisational readiness, not model performance, is the binding constraint.&#8309; According to S&amp;P Global Market Intelligence, forty-two per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year&#8217;s rate.&#8310; According to IDC, for every thirty-three AI prototypes a company builds, four reach production.&#8311; These are not statistics that describe a technology problem. They describe a coordination problem, one that unfolds inside organisations, between roles, over time.</p><p>The incentive problem produces contradictions that would be visible if anyone were structured to notice them. Goldman Sachs offers the sharpest example. In 2023&#8211;2024, Goldman&#8217;s technology division under CIO Marco Argenti was building internal AI infrastructure with deliberate caution: zero production generative AI use cases nearly a year after ChatGPT&#8217;s launch. Simultaneously, Goldman&#8217;s macro research division published a widely cited report questioning whether industry-wide AI spending would ever generate adequate returns.&#8312; The same institution was, from one division, cautiously building AI capability, and from another, publicly questioning whether anyone&#8217;s AI spending was justified. This is not incoherence. It is the incentive problem made visible within a single firm: the technology division&#8217;s incentive (build carefully, contain risk) and the research division&#8217;s incentive (publish contrarian analysis that attracts attention) produced positions that were individually rational and collectively contradictory. If this tension is present inside Goldman Sachs, one of the most analytically sophisticated institutions in the world, it is present everywhere.</p><p>When you measure AI adoption at the role level rather than the enterprise level, the coordination problem becomes visible. Quaie&#8217;s Q1 2026 fieldwork across ten executive roles is designed to surface exactly this. The hypothesis is that the sharpest divergence in AI readiness will not be between companies, sectors, or revenue bands, but between executive roles within the same cohort. CTOs are expected to report high confidence and advanced deployment. CMOs are likely to report low confidence and early-stage experimentation. CFOs are expected to flag insufficient evidence for capital commitment. CHROs are likely to raise workforce readiness concerns that no other role has addressed. The blockers each role cites are anticipated to be fundamentally different: ROI uncertainty for CEOs and marketing leaders; integration complexity and security for technology leaders; evidentiary thresholds for finance; regulatory exposure for legal.&#8313; If that pattern holds, the organisation will not be facing one problem but several, distributed across the people responsible for resolving them, invisible at any level of analysis that aggregates across roles.</p>
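<p>The aggregation effect is easy to demonstrate. A minimal sketch, using hypothetical survey responses that follow the distribution anticipated above, shows how the same data yields one answer at the enterprise level and several at the role level:</p><pre><code># Toy disaggregation: identical responses, two levels of analysis.
# The role-to-blocker assignments are hypothetical illustrations.

from collections import Counter

responses = [
    ("CEO",  "ROI uncertainty"),
    ("CMO",  "ROI uncertainty"),
    ("CTO",  "integration complexity and security"),
    ("CFO",  "insufficient evidence for capital commitment"),
    ("CHRO", "workforce readiness"),
    ("CLO",  "regulatory exposure"),
]

# Enterprise-level view: one headline blocker, suggesting one problem to fix.
tally = Counter(blocker for _, blocker in responses)
top_blocker, count = tally.most_common(1)[0]
print(f"enterprise view: '{top_blocker}' ({count} of {len(responses)} responses)")

# Role-level view: five different problems, each owned by a different function.
for role, blocker in responses:
    print(f"{role}: {blocker}")
</code></pre><p>The enterprise view reports ROI uncertainty as the headline constraint. The role view shows five different constraints requiring five different responses. Nothing in the first view is false; it is answering a different question.</p>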
<p>The Role Alignment Map is designed to make this directly measurable: not just where roles sit on the adoption spectrum, but whether they share a common interpretation of AI&#8217;s strategic priorities and ownership. The Role Influence Index adds a further dimension: which roles are acting as catalysts or gatekeepers in the adoption process, and whether those influence patterns are consistent with the coordination structure the organisation believes it has in place.</p><p>This is the gap the ecosystem cannot see, not because the people in it are not intelligent, but because seeing it would require them to act against their own incentives. A VC who acknowledges that adoption is generational must restructure their investment thesis. A vendor that acknowledges that the constraint is organisational must admit its product cannot solve it alone. A consultancy that acknowledges the timescale must tell clients that the engagement will not produce transformation within the budget cycle. A media outlet that acknowledges the slow pace must sacrifice the urgency that sustains its business model. Each of these is a rational position to avoid.</p><p>The result is that the most important signal in AI adoption, whether the roles inside an organisation are converging toward shared conviction or diverging into misalignment, goes unmeasured, unreported, and unpriced. Capital is deployed against a diagnosis that mistakes a coordination problem for a technology problem. Billions flow into tools, platforms, and pilots while the organisational dynamics that determine whether any of it succeeds operate in the background, unexamined.</p><p>History suggests this will eventually self-correct. The ERP era produced a similar pattern: vendors oversold timelines, consultancies scoped engagements too narrowly, and organisations spent a decade learning that the technology was the easy part. The correction came slowly, driven by accumulated evidence that could no longer be ignored. AI adoption is following the same trajectory, with the same incentive structures producing the same blind spots, and the same eventual reckoning waiting at the end.</p><p>The question is not whether the correction will come. It is whether leaders will wait for the market to catch up with reality, or whether they will seek out the signal the ecosystem is not structured to provide. The information exists. The coordination dynamics inside organisations (role-level confidence, cross-functional alignment, adoption sequencing, consensus formation) can be measured through instruments designed for exactly this purpose, tracked over time, and used to inform better sequencing, alignment, and timing decisions.&#185;&#8304; The reason this intelligence has not existed until now is not that it is impossible to produce. It is that no one in the current ecosystem had the incentive to produce it.</p><p>The incentive problem is not the technology&#8217;s fault. It is not the organisation&#8217;s fault.
It is a structural feature of how the AI ecosystem is built, and it will persist until someone outside that structure measures what everyone inside it is paid to overlook.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; 95 per cent of enterprise generative AI pilots failing to deliver measurable returns: Reported across multiple analyst sources, 2024&#8211;2025. Gartner predicted at least 30 per cent of generative AI projects would be abandoned after proof of concept by end of 2025 (Gartner Data &amp; Analytics Summit, Sydney, July 2024). Two-thirds of organisations stuck in pilot or experimentation stage: corroborated across McKinsey, BCG, and Deloitte survey data, 2024&#8211;2025.</p><p>&#178; Consultancy engagement timescales and AI transformation: BCG&#8217;s own research (AI Radar 2025, January 2025) found that only 25 per cent of organisations reported significant value from AI despite 75 per cent ranking it as a top-three priority. The gap between stated priority and demonstrated value is the structural challenge that engagement-length scoping cannot resolve.</p><p>&#179; Annual AI adoption surveys: McKinsey Global Survey on AI (2024, 2025 editions), BCG AI Radar 2025 (1,803 C-level executives, 19 markets), Deloitte State of AI in the Enterprise (2026 edition, 3,235 leaders, 24 countries). Each aggregates at the enterprise or sector level. None disaggregates by executive role within the enterprise. None tracks the same respondents across consecutive periods.</p><p>&#8308; BCG AI adoption composition: Boston Consulting Group, &#8220;From Potential to Profit: Closing the AI Impact Gap&#8221; (AI Radar 2025), January 2025, and related BCG publications citing approximately 70 per cent of AI challenges stemming from people, processes, and cultural change.</p><p>&#8309; OpenAI enterprise research: OpenAI&#8217;s enterprise deployment findings, reported 2024&#8211;2025. OpenAI&#8217;s enterprise team has publicly stated that the primary barriers to enterprise AI value are organisational, not technical.</p><p>&#8310; S&amp;P Global Market Intelligence: 42 per cent of companies abandoned most AI initiatives in 2025, more than double the previous year. S&amp;P Global Market Intelligence, 451 Research survey, published 2025.</p><p>&#8311; IDC prototype-to-production ratio: For every 33 AI prototypes built, approximately 4 reached production deployment. IDC research findings, cited across industry reporting, 2024&#8211;2025.</p><p>&#8312; Goldman Sachs internal contradiction: Goldman Sachs technology division under CIO Marco Argenti: deliberate AI infrastructure development with zero production generative AI use cases nearly a year after ChatGPT launch, reported in Financial Times, Bloomberg, and Goldman technology division communications, 2023&#8211;2025. Goldman Sachs Global Investment Research report &#8220;Gen AI: Too much spend, too little benefit?&#8221; published June 2024. 
The coexistence of cautious operational development and publicly sceptical macro research within the same institution illustrates the incentive problem at the institutional level.</p><p>&#8313; Quaie Q1 2026 fieldwork: Confidence, preparedness, adoption stage, and perceived blockers are being measured across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). The anticipated blocker distribution, ROI uncertainty concentrating among CEO and CMO roles, integration complexity among CTOs, evidentiary thresholds among CFOs, workforce readiness among CHROs, and regulatory exposure among General Counsel, is consistent with BCG AI Radar 2025, which identified people, processes, and cultural change as the dominant AI challenge, and with McKinsey Global Survey on AI (2024), which found trust and explainability as primary barriers among non-technical leadership roles. Full methodology at quaie.io.</p><p>&#185;&#8304; Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Organisational Adoption Gradient, Consensus Formation Time, Role Alignment Map, and Role Influence Index) are designed to measure the coordination dynamics the current ecosystem is not structured to capture. Described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026).</p>]]></content:encoded></item><item><title><![CDATA[The Case for Predictive Intelligence Over Retrospective Analytics]]></title><description><![CDATA[Most intelligence about AI adoption tells you what already happened.]]></description><link>https://quaie.io/p/the-case-for-predictive-intelligence-over-retrospective-analytics</link><guid isPermaLink="false">https://quaie.io/p/the-case-for-predictive-intelligence-over-retrospective-analytics</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 16 Feb 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!WL1t!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e5d867-3fef-462b-8ad6-0a417ad13678_6000x3375.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!WL1t!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55e5d867-3fef-462b-8ad6-0a417ad13678_6000x3375.jpeg" width="1456" height="819" alt=""></figure></div><p>Most intelligence about AI adoption tells you what already happened. Which companies deployed. Which tools gained traction.
Which sectors spent the most. How many enterprises report using AI in production.</p><p>This is useful in the way that reading last quarter&#8217;s earnings report is useful. It confirms what occurred. It does not help you decide what to do next.</p><p>The distinction between retrospective analytics and predictive intelligence is not about sophistication or technology. It is about what is being measured. Retrospective analytics measures outcomes. Predictive intelligence measures the conditions from which outcomes emerge. The first tells you where the market landed. The second tells you where it is forming, before it lands.</p><p>For leaders navigating AI adoption, this distinction has practical consequences that are easy to underestimate.</p><p>A retrospective report tells you that a percentage of enterprises in your sector have deployed AI in some form.&#185; It does not tell you which roles inside those enterprises are confident the deployment will last. It does not tell you whether the decision was shared across functions or driven by a single champion whose departure would put the entire programme at risk. It does not tell you whether the organisation reached genuine consensus or simply exhausted its appetite for evaluation. All of these factors will determine whether that figure holds, grows, or quietly erodes over the next twelve months. None of them are captured by outcome measurement.</p><p>The reason is structural, not methodological. Outcomes are the end of a process. By the time they are measurable, the decisions that produced them are locked in. The organisation has already committed capital, allocated headcount, restructured workflows. If the underlying decision dynamics were flawed, if alignment was assumed rather than built, if one role&#8217;s conviction was mistaken for organisational readiness, those flaws will surface eventually. But they will surface as execution problems or adoption failures, not as what they actually were: decision-formation problems that were invisible because nobody was measuring the conditions under which the decision was made.</p><p>IBM Watson Health illustrates the pattern at scale. Between 2015 and 2016, IBM spent approximately $4 billion acquiring Truven Health Analytics, Merge Healthcare, Explorys, and Phytel to build a healthcare AI division that reached 7,000 employees. The technology was real. The investment was enormous. But the internal alignment required to deliver on the promise, across clinical teams, data governance, regulatory compliance, and commercial operations, had not formed. MD Anderson Cancer Center&#8217;s partnership alone cost $62 million before being shut down in 2017, with internal documents describing unsafe treatment recommendations. IBM eventually sold the division to Francisco Partners for approximately $1 billion.&#178; The post-mortem focused on technology limitations and market readiness. What it missed was that the conditions preceding the commitment, role-level alignment, cross-functional consensus, shared conviction about timing, were already misaligned before the capital was deployed. A retrospective report tells you IBM Watson Health failed. Predictive intelligence would have shown the conditions under which failure was forming.</p><p>This is the gap that predictive intelligence occupies. Not prediction in the sense of forecasting specific outcomes, which implies a precision that early-stage data cannot support. 
Prediction in the sense of observing the conditions that reliably precede certain outcomes, and making those conditions visible while there is still time to act on them.</p><p>The conditions that matter in AI adoption are well defined, even if they are rarely tracked.</p><p>Role-level confidence: how convinced are the people who need to act? The Role Shift Index tracks where each of ten executive roles sits on the adoption spectrum, and, critically, whether that position holds, advances, or reverts across quarters. A role showing stable high confidence is a different signal from one showing volatile confidence, even if both report the same score in a single reading.</p><p>Alignment across functions: do they share a common assessment of value, risk, and timing? The Organisational Adoption Gradient measures the distance between the most advanced and least advanced roles. When that gradient is wide, the enterprise-level average is concealing divergence that will surface as friction at the next decision point. The Role Alignment Map provides a complementary measure: where the Gradient captures divergence in adoption stage, the Alignment Map captures divergence in strategic interpretation, whether roles share a common view of AI priorities, ownership, and direction. Both conditions are measurable before outcomes appear, and both need to be tracked to understand whether an organisation&#8217;s alignment is genuinely forming or merely assumed.</p><p>Adoption sequence: are the right roles moving first, or is the organisation attempting to force a sequence that creates friction? Role Lead-Lag Rankings track the temporal distance between roles as they move through adoption stages, showing whether pairs of functions are converging toward shared conviction or pulling further apart. The Role Influence Index adds a further dimension: which roles are acting as catalysts, validators, or gatekeepers, and whether those influence patterns are consistent with the sequencing the organisation is attempting to execute. A mismatch between formal authority and actual influence is one of the conditions that most reliably predicts sequencing friction.</p><p>Consensus formation: has the organisation converged enough to make commitment rational, or is it acting on momentum alone? Consensus Formation Time estimates how many quarters it will take for roles to reach sufficient alignment for committed action, giving leaders a forward-looking decision timeline rather than a backward-looking deployment report.</p><p>Each of these conditions is observable before outcomes appear. Quaie&#8217;s Q1 2026 fieldwork across ten executive roles is designed to establish whether all four are already producing measurable signal at the experimentation stage, before any organisation has committed at scale. The hypothesis is that confidence will diverge sharply by role, that alignment gaps will be visible early, that adoption sequence will follow a pattern rooted in role context rather than organisational mandate, and that consensus will not yet have formed across the cohort. If that pattern holds, none of these findings will describe outcomes. All of them will describe the conditions from which outcomes will emerge over the coming quarters.&#179;</p><p>To make this concrete: consider an organisation where the CTO reports high confidence in AI&#8217;s durable value and has moved into scaled deployment, while the CMO reports low confidence and remains at experimentation. 
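</p><p>A minimal sketch can make that divergence concrete. Everything below, the 1&#8211;5 stage encoding, the 0&#8211;10 confidence scale, and the figures themselves, is an illustrative assumption; the essays in this series describe the Organisational Adoption Gradient only qualitatively, as the distance between the most and least advanced roles.</p><pre><code># Illustrative sketch only: Quaie has not published formulas for its
# constructs, so the stage encoding, scales, and figures here are
# assumptions chosen for demonstration.

# Hypothetical readings: adoption stage on an assumed 1-5 scale
# (3 = experimentation, 5 = scaled deployment) and confidence on an
# assumed 0-10 scale.
roles = {
    "CTO": {"stage": 5, "confidence": 8.5},
    "CMO": {"stage": 3, "confidence": 3.0},
}

# One plausible reading of the Organisational Adoption Gradient:
# the distance between the most and least advanced roles.
stages = [r["stage"] for r in roles.values()]
gradient = max(stages) - min(stages)

# The enterprise-level average looks unremarkable; the gradient is
# what reveals the divergence.
average_stage = sum(stages) / len(stages)
print(f"average stage: {average_stage:.1f}")  # 4.0, reads as "moderate"
print(f"adoption gradient: {gradient}")       # 2 stages apart
</code></pre><p>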
A retrospective report, published in twelve months, might note that this organisation&#8217;s AI programme succeeded in engineering and stalled in marketing. It would attribute this to differences in use case maturity or team capability. What it would miss entirely is that the divergence in conviction between these two roles was already visible a year earlier, in the conditions that preceded the outcome. A leader with access to that signal in real time could have intervened: investing in the evidence the CMO needed to build confidence, adjusting the sequence of rollout, or choosing to wait for alignment rather than pushing for scale across functions that were not ready.</p><p>That is the practical difference between knowing what happened and seeing what is forming.</p><p>This is what separates predictive intelligence from the retrospective analytics that currently dominate the market. It is not that retrospective research is wrong. The best of it goes beyond reporting outcomes and attempts to explain why they occurred, tracing results back to the organisational conditions that produced them. But even explanatory retrospective analysis arrives after the decision window has closed. Understanding why a deployment succeeded or failed eighteen months ago is valuable for learning. It is not valuable for the leader who needs to decide, this quarter, whether their organisation&#8217;s current level of alignment justifies the commitment they are being asked to make.</p><p>By the time a report tells you that a certain percentage of enterprises deployed AI and a certain percentage of those deployments were successful, the window for the most consequential decisions has already closed. The decision about whether to commit was made months earlier. The decision about which roles to fund and in what sequence was made earlier still. The decision about whether to push for speed or wait for alignment was made at the very beginning. These are the decisions that determine outcomes. And they are made in the absence of intelligence about the conditions that predict success, because that intelligence does not exist in the retrospective model.</p><p>The alternative is to build intelligence that operates upstream of outcomes. That tracks how decisions form rather than where they land. That measures confidence, alignment, sequence, and consensus while they are still in flux, while there is still time for a leader to intervene, adjust, wait, or commit based on where the conditions actually point.</p><p>This requires a different model of research. Not larger surveys or faster publication cycles, though both help. It requires a shift in what is being measured. From outcomes to decision dynamics. From company-level aggregates to role-level signals. From annual snapshots to quarterly longitudinal tracking. From reporting what happened to surfacing what is forming.</p><p>None of this is theoretical. The instruments exist. What remains is to observe whether the signals they produce hold, shift, or reverse over subsequent quarters, which is how directional intelligence becomes predictive intelligence.</p><p>The market for AI adoption research is large and growing. Most of it will continue to tell leaders what already happened, with increasing precision and decreasing relevance to the decisions they face today. 
A smaller category of intelligence will focus on what is forming rather than what has formed, on the conditions that precede outcomes rather than the outcomes themselves.</p><p>The leaders who benefit most will not be the ones with the most data about the past. They will be the ones with the clearest view of the present, and the earliest visibility into what the present implies about what comes next.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Retrospective AI adoption surveys: McKinsey Global Survey on AI (2025) reported 88 per cent of respondents using AI in at least one business function, with approximately one-third at enterprise-level scaling. BCG AI Radar 2025 (January 2025, 1,803 C-level executives) found 75 per cent ranked AI as a top-three priority, 25 per cent reported significant value. Deloitte State of AI in the Enterprise (2026 edition, 3,235 leaders, 24 countries). Each reports outcomes and stated intentions. None measures the decision-formation conditions, role-level confidence, cross-functional alignment, adoption sequence, consensus formation, that precede and predict those outcomes.</p><p>&#178; IBM Watson Health: IBM public filings, SEC filings, and press reporting. Approximately $4 billion in acquisitions (Truven Health Analytics, Merge Healthcare, Explorys, Phytel), 2015&#8211;2016. Division reaching approximately 7,000 employees. MD Anderson Cancer Center partnership, $62 million spent, closed 2017; internal documents describing unsafe treatment recommendations reported by STAT, 2017. University of Texas audit documented the failure. Sale to Francisco Partners for approximately $1 billion, reported 2022. See also Chapter 10 of The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) for extended analysis of premature capital allocation.</p><p>&#179; Quaie Q1 2026 fieldwork: Role-level confidence, alignment, adoption stage, and perceived blockers are being measured across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). The fieldwork is designed to establish whether the four conditions described in this essay, confidence divergence by role, visible alignment gaps at experimentation stage, role-context-driven adoption sequence, and pre-consensus state across the cohort, are already present and measurable before outcomes appear. Methodology described in full at quaie.io.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Organisational Adoption Gradient, Role Lead-Lag Ranking, Consensus Formation Time, Role Alignment Map, and Role Influence Index) each measure a specific condition preceding AI adoption outcomes: role-level confidence trajectory, cross-functional adoption-stage distance, adoption sequencing dynamics, consensus formation timeline, strategic alignment across the leadership system, and relative role influence over adoption decisions respectively. 
Described in full in The Role Layer and in preceding essays in this series, particularly &#8220;Why AI Adoption Needs a Reference Layer&#8221; and &#8220;What Becomes Visible Only After Multiple Quarters of AI Data.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[What Becomes Visible Only After Multiple Quarters of AI Data]]></title><description><![CDATA[A single quarter of data tells you where things stand. Two quarters tell you what moved. Three quarters begin to tell you what holds.

The difference between a signal and a pattern is repetition. And repetition requires time that most organisations, and most research, are unwilling to commit to.]]></description><link>https://quaie.io/p/what-becomes-visible-only-after-multiple-quarters-of-ai-data</link><guid isPermaLink="false">https://quaie.io/p/what-becomes-visible-only-after-multiple-quarters-of-ai-data</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 09 Feb 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!wBaz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0101d4d-2146-44d0-9560-20a567a21616_5917x3392.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!wBaz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd0101d4d-2146-44d0-9560-20a567a21616_5917x3392.jpeg" width="1456" height="835" alt=""></figure></div><p>A single quarter of data tells you where things stand. Two quarters tell you what moved. Three quarters begin to tell you what holds.</p><p>The difference between a signal and a pattern is repetition. And repetition requires time that most organisations, and most research, are unwilling to commit to.</p><p>This is not a minor limitation. It is the central constraint in understanding AI adoption. The dynamics that matter most, the ones that determine whether an organisation&#8217;s AI efforts will sustain or stall, are not visible in any individual reading. They emerge only when you observe the same dimensions across consecutive periods and ask whether the picture is converging or diverging, stabilising or reverting, building toward something durable or cycling through phases that repeat without progressing.</p><p>Consider the most basic question a leader can ask about AI adoption: is our organisation making progress? A single quarter of data can show that certain roles are experimenting, that confidence varies, that some functions are further along than others. All of this is useful. None of it answers the question, because progress is not a position. It is a trajectory. And a trajectory requires at least two points.</p><p>With two quarters of data, the question becomes answerable in ways that a single reading cannot support.
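</p><p>The two-point logic can be sketched directly. The 0&#8211;10 confidence scale and the half-point tolerance below are assumptions for illustration, not a published Quaie instrument.</p><pre><code># Illustrative sketch: classify a role's confidence trajectory from two
# quarterly readings. Scale and tolerance are assumed for demonstration.

def classify_trajectory(q1, q2, tolerance=0.5):
    """Label the movement between two quarterly confidence readings."""
    delta = q2 - q1
    if tolerance >= abs(delta):
        return "stable"
    return "advancing" if delta > 0 else "reverting"

# Both roles looked identical in Q1; only the Q2 reading separates them.
print(classify_trajectory(8.0, 8.2))  # stable: confidence held
print(classify_trajectory(8.0, 5.5))  # reverting: confidence was provisional
</code></pre><p>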
A role that reported high confidence in Q1 and maintains it in Q2 is showing a different pattern from one that reported high confidence in Q1 and dropped in Q2. The first suggests that confidence is earned and durable. The second suggests it was provisional, perhaps driven by a successful pilot that did not replicate, or by enthusiasm that faded as implementation challenges became clearer. Both looked identical in Q1. Only the second reading reveals which trajectory the organisation is actually on.</p><p>The same logic applies to alignment. Two roles that diverge in Q1 may converge in Q2, suggesting the organisation is working through a natural phase of adjustment. Or they may diverge further, suggesting a structural misalignment that is hardening rather than resolving. A leader who can see which of these patterns is unfolding has a fundamentally different basis for action than one who can only see the current gap.</p><p>With three or more quarters, something more powerful becomes available. Patterns that looked like noise begin to resolve into signal, or are confirmed as noise and can be set aside with confidence. The distinction matters enormously. In any dataset, particularly one measuring something as volatile as early-stage AI adoption, there will be fluctuations that mean nothing. A role whose confidence drops by half a point in a single quarter may be experiencing a genuine shift or may simply be reflecting the mood of the moment. Two quarters of decline is more informative. Three quarters of decline is a trend that warrants attention and action.</p><p>This is where longitudinal intelligence separates from everything else available in the market. Not because it produces more data, but because it produces a fundamentally different kind of data. Direction. Momentum. Convergence. Reversion. Stabilisation. These are the dynamics that determine whether an organisation&#8217;s AI adoption is on a sustainable path or an unstable one. None of them are visible in a single reading, no matter how large the sample or how sophisticated the analysis.</p><p>There are specific dynamics that only longitudinal observation can surface, and they are the ones that matter most for decision-making.</p><p>The first is stabilisation versus reversion. Early leaders in AI adoption either consolidate their behaviour over subsequent quarters, embedding AI into their operating rhythm, or they fall back into experimentation. Both look the same in the first quarter. Only repeated observation distinguishes the role that has genuinely crossed the threshold from the one that appeared to cross it temporarily. The Role Shift Index tracks this, mapping where each of ten executive roles sits on the adoption spectrum quarter by quarter. A role that holds its position across two or three quarters is showing stabilisation. A role that advances in Q1 and retreats in Q2 is showing reversion. The distinction is invisible in any single reading. For leaders allocating budget and making staffing decisions on the basis of early adoption signals, it is the difference between investing in something durable and investing in something that will unwind.</p><p>The second is durability of alignment. In any organisation, misalignment between roles is a natural phase of adoption. The question that matters is whether it resolves or persists. Some gaps close naturally as roles accumulate shared experience and evidence. 
Others harden over time, with each role becoming more entrenched in its position as it accumulates confirming evidence.&#185; The Organisational Adoption Gradient measures the distance between the most advanced and least advanced roles. The Role Lead-Lag Rankings track how that distance changes over time, whether specific pairings (CTO and CFO, CMO and COO) are converging or diverging across quarters. The Role Alignment Map adds a distinct but complementary view: where the Gradient measures divergence in adoption stage, the Alignment Map measures divergence in strategic interpretation, whether roles share a common view of AI priorities, ownership, and direction. Both dimensions of alignment matter, and longitudinal observation is what makes it possible to distinguish between the two and track how each evolves. Knowing which pattern your organisation is on changes everything about how you intervene. A gap that is closing needs patience. A gap that is hardening needs action. A single quarter cannot tell you which you are facing.</p><p>The third is compression of decision cycles. Over time, some organisations get faster at moving from experimentation to commitment. The distance between &#8220;we&#8217;re testing this&#8221; and &#8220;this is how we operate&#8221; shortens. Other organisations do not compress. They repeat the same evaluation cycle quarter after quarter without progressing. Consensus Formation Time is designed to capture this, estimating how many quarters it will take for an organisation&#8217;s roles to reach sufficient convergence for committed action, and tracking whether that estimate is shortening or lengthening as new data arrives. The Role Influence Index contributes here as well: as the dataset deepens across quarters, it becomes possible to observe whether influence patterns are shifting, whether the roles that drove early adoption decisions continue to do so, or whether influence is redistributing as AI moves from experimentation toward operational deployment. These dynamics are invisible in a snapshot. They are clearly visible across three or four quarters, and they are among the strongest predictors of whether an organisation will reach the kind of durable, scaled AI adoption that produces real economic value.</p><p>Each of these dynamics requires patience to observe. None can be inferred from a single wave of research, regardless of methodology or sample size. This is not a criticism of snapshot research. It is a statement about what time makes visible that nothing else can.&#178;</p><p>From Q2 2026 onward, Quaie will begin validating Q1 signals against subsequent decisions: identifying where early patterns held, where they shifted, and where organisational trajectories diverged from what the baseline suggested. This is when directional intelligence begins to become predictive. Not because the methodology changes, but because the dataset accumulates enough temporal depth to distinguish between what persists and what was passing through.</p><p>The value of this intelligence compounds in a way that is unusual for research products. Most reports depreciate the moment they are published. The findings are current for a quarter, then superseded. Longitudinal intelligence works differently. Each new quarter does not replace the previous one. It transforms it. A Q1 finding that seemed ambiguous becomes interpretable in light of Q2. A Q2 pattern that looked like an anomaly is confirmed or rejected by Q3. The dataset does not just grow.
It deepens, and each layer of depth makes the existing layers more valuable.</p><p>This is why a single quarter, including Q1, is best understood not as a conclusion but as a foundation. Its value lies less in what it answers definitively today than in what it makes possible to track, compare, and understand with increasing confidence over time.</p><p>The intelligence gets more decisive quarter by quarter, as patterns move from early indication to reliable signal. That process has now begun.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Hardening of misalignment over time: The pattern of entrenched role positions is consistent with organisational behaviour research on escalation of commitment and confirmation bias in decision-making. In the AI adoption context, the anticipated divergence between technically proximate roles (CTO, CIO) and commercially proximate roles (CMO, CRO) reflects a structural difference in the evidence each function accumulates: integration progress confirms the technical case, while absence of commercial proof deepens commercial scepticism. Without deliberate intervention, these positions tend to reinforce rather than resolve. See also Quaie&#8217;s essay &#8220;Where Misalignment Blocks AI Progress&#8221; for extended analysis, and Goldman Sachs&#8217;s experience of managing internal divergence between its technology division and macro research function.</p><p>&#178; Snapshot versus longitudinal research methodologies: The major annual AI adoption surveys, McKinsey Global Survey on AI (published annually since 2017), BCG AI Radar (published annually), and Deloitte State of AI in the Enterprise (published biennially), each provide valuable cross-sectional data. Their structural limitation is temporal: they capture a single reading per publication cycle, do not track the same respondents across periods, and aggregate to the enterprise or sector level rather than disaggregating by executive role. See also Quaie&#8217;s preceding essay &#8220;Why Snapshots of AI Adoption Mislead Leaders&#8221; for extended analysis of the implications.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Organisational Adoption Gradient, Consensus Formation Time, Role Alignment Map, and Role Influence Index) are each designed to capture a specific longitudinal dynamic. The Role Shift Index tracks stabilisation versus reversion. The Organisational Adoption Gradient and Role Lead-Lag Ranking track durability of adoption-stage alignment. The Role Alignment Map tracks durability of strategic alignment. Consensus Formation Time tracks compression of decision cycles. The Role Influence Index tracks how influence patterns shift as adoption matures. 
Described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026).</p>]]></content:encoded></item><item><title><![CDATA[Why Snapshots of AI Adoption Mislead Leaders]]></title><description><![CDATA[Once a year, a research firm publishes a report on AI adoption.]]></description><link>https://quaie.io/p/why-snapshots-of-ai-adoption-mislead-leaders</link><guid isPermaLink="false">https://quaie.io/p/why-snapshots-of-ai-adoption-mislead-leaders</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 02 Feb 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!UCZD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7930491-7952-4212-9b5f-10e31849e569_4845x3083.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!UCZD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7930491-7952-4212-9b5f-10e31849e569_4845x3083.jpeg" width="1456" height="926" alt=""></figure></div><p>Once a year, a research firm publishes a report on AI adoption. It surveys a few thousand executives, aggregates their responses, and produces a set of findings: what percentage have deployed AI, what percentage plan to, what the top use cases are, what the main barriers seem to be.</p><p>These reports are cited in board decks, referenced in strategy documents, and largely forgotten within a quarter. Not because they are wrong, but because they are static. They capture a moment. They cannot show whether that moment is the beginning of something, the peak of something, or a blip that will reverse by the next survey.&#185;</p><p>This is the fundamental limitation of snapshot research. It tells you where things stand. It cannot tell you where things are heading, because direction requires at least two points in time.</p><p>The problem starts with what snapshots choose to measure. Most aggregate at the company or sector level. They report that a certain percentage of enterprises in a given industry have adopted AI. This produces a clean headline.
It produces a very poor decision input for any specific leader trying to understand what is happening inside their own organisation.</p><p>The reason is that averages compress the signal that matters most.</p><p>Quaie&#8217;s Q1 2026 fieldwork is designed to make that compression visible. The hypothesis is that when confidence is measured across ten executive roles, the variance within a single role across the cohort will tell a richer story than any central tendency. CMO confidence, for instance, is likely to range across much of the scale, from deep scepticism at one end to high commitment at the other. A snapshot methodology would average these into something like &#8220;moderate confidence&#8221; and move on. That average would be technically accurate and practically meaningless. It would obscure the fact that some CMOs are deeply sceptical while others are highly committed. The divergence within the role is the finding. The average hides it.</p><p>The same compression is likely to appear across nearly every measure the fieldwork captures. Confidence, preparedness, adoption stage, perceived blockers: in each case, the variance between roles within the same cohort is expected to tell a richer story than the central tendency. An organisation whose CTO reports high confidence and whose CMO reports low confidence is in a fundamentally different position from one where both report moderate confidence. Quaie&#8217;s Organisational Adoption Gradient quantifies this distance, the spread between the most advanced and least advanced roles, and the expectation going into Q1 is that the gradient will be wide enough to confirm that enterprise-level averages are concealing the divergence that actually determines whether adoption holds or stalls. A snapshot that summarises both organisations as &#8220;moderate&#8221; has lost the signal entirely.</p><p>Consider what a leader does with that lost signal. They read that their sector shows &#8220;moderate confidence&#8221; in AI adoption. They look at their own organisation and see a similar picture at the surface level. They conclude they are roughly in line with peers. What they cannot see is that their CTO is significantly more advanced than their CMO, that the gap between those two roles is wider than in most peer organisations, and that this specific pattern of divergence tends to predict friction in the next phase of adoption. The snapshot gave them reassurance. The underlying data, had it been preserved at the role level, as the Role Shift Index preserves it, tracking where each of ten executive roles sits on the adoption spectrum, would have given them a warning.</p><p>The second problem with snapshots is subtler but equally damaging. They create a false sense of stability.</p><p>A report published in January that shows 60 per cent adoption rates will be treated as current intelligence until the next report arrives, often twelve months later. During that interval, roles shift. Confidence fluctuates. Initiatives that looked promising in January may have stalled by June. Alignment that appeared to be forming may have fractured. But the number persists, because no updated measurement exists to replace it. Leaders plan against a figure that describes where things were, not where things are.</p><p>In a domain as volatile as early-stage AI adoption, this is not a minor distortion. It is a structural one. Decisions made on the basis of static intelligence assume the landscape has not changed since the last measurement. 
In a mature, slow-moving market, that assumption might hold for a year. In AI adoption, where a single quarter can see meaningful shifts in role-level confidence, adoption stage, and blocker composition, it rarely does.</p><p>Longitudinal measurement addresses both problems. By tracking the same dimensions across consecutive quarters, it becomes possible to distinguish between signals that persist and those that revert.</p><p>A role that reports high confidence in Q1 and again in Q2 is showing a different pattern from one that reports high confidence in Q1 and moderate confidence in Q2. The first suggests stabilisation. The second suggests volatility. Both looked identical in the Q1 snapshot. Only the second reading reveals which pattern the organisation is actually on. The Role Lead-Lag Rankings between roles make this visible, tracking the temporal distance between functions as they move through adoption stages, showing whether pairs of roles are converging toward shared conviction or pulling further apart. The Role Influence Index adds a further layer here: as the pattern of influence across roles shifts between quarters, it can reveal whether the roles driving adoption decisions are changing, and whether that shift is bringing the leadership system closer to or further from the conditions required for coordinated commitment.</p><p>The same logic applies to alignment. Two roles that diverge in Q1 may converge in Q2, suggesting the organisation is working through a natural phase of adjustment. Or they may diverge further, suggesting a structural misalignment that is hardening rather than resolving. A single reading cannot distinguish between these trajectories. Two readings begin to. Three readings make the distinction reliable. The Role Alignment Map makes this directly observable, tracking whether the leadership system is converging on shared strategic priorities and ownership, or fracturing as individual roles form increasingly divergent interpretations. Consensus Formation Time builds on this logic, estimating how many quarters it will take for an organisation&#8217;s roles to reach sufficient convergence for committed action, giving leaders a forward-looking timeline rather than a backward-looking position.</p><p>This is why the value of longitudinal intelligence compounds in a way that snapshot research cannot. Each additional quarter does not simply add another data point. It transforms the existing data by providing context that was previously invisible. A Q1 finding that seemed ambiguous becomes interpretable in light of Q2. A pattern that looked like noise resolves into signal, or is confirmed as noise and can be set aside.</p><p>There is a reason financial markets do not rely on annual surveys of investor sentiment. The information would be stale before it was published. Markets require continuous signal because positions change, confidence shifts, and the spread between participants is where risk and opportunity sit. Nobody would manage a portfolio on the basis of a single annual reading of market conditions. Yet this is roughly how most organisations navigate AI adoption: one survey per year, aggregated to the sector level, with no visibility into role-level dynamics or quarter-over-quarter movement.&#178;</p><p>AI adoption is not a financial market. But it shares a characteristic that matters here: the important dynamics are not in the position but in the movement. 
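</p><p>A minimal sketch of that distinction, with invented readings on an assumed 0&#8211;10 confidence scale:</p><pre><code># Illustrative sketch: identical positions, opposite movement.
# All figures are invented; the 0-10 confidence scale is assumed.

# Average leadership confidence across two consecutive quarters.
org_a = {"Q1": 4.5, "Q2": 5.5}  # building consensus
org_b = {"Q1": 6.5, "Q2": 5.5}  # quietly losing it

for name, readings in [("A", org_a), ("B", org_b)]:
    position = readings["Q2"]
    movement = readings["Q2"] - readings["Q1"]
    print(f"org {name}: position {position}, movement {movement:+.1f}")

# A snapshot published today reports both organisations at 5.5.
# Only the movement distinguishes them.
</code></pre><p>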
Where an organisation sits at any given moment is less informative than whether it is converging or diverging, accelerating or stalling, building consensus or quietly losing it.</p><p>Snapshots measure position. Longitudinal intelligence measures movement. For leaders making consequential decisions about AI under genuine uncertainty, the difference between those two things is not academic. It is the difference between knowing where you were and understanding where you are heading.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Annual AI adoption surveys referenced: McKinsey Global Survey on AI (2024) reported 78 per cent of respondents using AI in at least one business function; the 2025 edition reported 88 per cent, with approximately one-third reporting enterprise-level scaling. BCG AI Radar 2025 (January 2025, 1,803 C-level executives across 19 markets) found 75 per cent ranked AI as a top-three priority, 25 per cent reported significant value. Deloitte State of AI in the Enterprise (2026 edition, 3,235 leaders surveyed August&#8211;September 2025, 24 countries). Each is published annually or biennially. None disaggregates by executive role within the enterprise. None tracks the same respondents across consecutive periods.</p><p>&#178; Financial markets analogy: Yield curves, employment data, and leading economic indicators are published at frequencies ranging from daily to monthly precisely because the dynamics they measure are not static. The Federal Reserve publishes employment data monthly. Treasury yield curves update continuously. The contrast with annual AI adoption surveys, measuring a domain that is arguably more volatile than labour markets in its current phase, illustrates the structural limitation of snapshot methodology.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Organisational Adoption Gradient, Consensus Formation Time, Role Alignment Map, and Role Influence Index) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026).</p>]]></content:encoded></item><item><title><![CDATA[How Organisations Actually Reach Consensus on AI]]></title><description><![CDATA[There is a moment in most AI adoption processes where experimentation gives way to commitment. Budget is allocated. Ownership is assigned. The organisation decides to act.

That moment is rarely a decision in any formal sense. It is the result of a process that unfolds across roles, over time, and usually more slowly than anyone involved would prefer. Understanding how that process works is important because most organisations act as though it has already completed when it hasn&#8217;t.]]></description><link>https://quaie.io/p/how-organisations-actually-reach-consensus-on-ai</link><guid isPermaLink="false">https://quaie.io/p/how-organisations-actually-reach-consensus-on-ai</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 26 Jan 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!RXRQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4425591b-e33a-42d2-80a0-ea16e5c42a5c_1365x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!RXRQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4425591b-e33a-42d2-80a0-ea16e5c42a5c_1365x768.jpeg" width="1365" height="768" alt=""></figure></div><p>There is a moment in most AI adoption processes where experimentation gives way to commitment. Budget is allocated. Ownership is assigned. The organisation decides to act.</p><p>That moment is rarely a decision in any formal sense. It is the result of a process that unfolds across roles, over time, and usually more slowly than anyone involved would prefer. Understanding how that process works is important because most organisations act as though it has already completed when it hasn&#8217;t.</p><p>A board-level endorsement is treated as alignment. A successful pilot is treated as proof of readiness. A CTO&#8217;s conviction is treated as organisational confidence. In practice, none of these are consensus. They are inputs to a process that may or may not have run its course. The gap between these inputs and genuine cross-functional agreement is where many AI initiatives quietly come apart.</p><p>The consequences are well documented in retrospect, though rarely diagnosed correctly at the time. Initiatives that launched with enthusiasm and stalled at the budget review. Programmes that secured investment but lost momentum when a second function failed to engage. Deployments that succeeded technically but were abandoned because the organisation could not agree on who owned the outcomes or how to measure them. In each case, the post-mortem tends to focus on execution. The tool wasn&#8217;t right. The team wasn&#8217;t ready.
The business case was weak. What is less often acknowledged is that the organisation committed before confidence had converged across the roles that needed to support the commitment. The problem was not execution. It was timing.</p><p>Zillow&#8217;s iBuying collapse illustrates the pattern at scale. The company committed $3.75 billion in credit facilities and a $20 billion revenue target to an algorithmically driven home-purchasing programme. The technology worked. The models were sophisticated. But the internal alignment required to operate a programme of that ambition, across pricing, risk management, operations, and market assessment, had not formed. When the models failed to account for market conditions that operational roles had flagged, Zillow lost $421 million in a single quarter, cut 25 per cent of its workforce, and exited the business entirely.&#185; The failure was not technical. It was premature commitment at organisational scale, action taken before the roles responsible for sustaining it had reached shared conviction about the conditions under which it could work.</p><p>Quaie&#8217;s Q1 2026 fieldwork across ten executive roles is designed to provide a direct view of where consensus currently sits among senior decision-makers. The hypothesis is that it has not yet formed in most organisations. Among the signals the fieldwork is designed to surface: whether &#8220;too early to tell&#8221; emerges as the modal response on value confidence, whether high confidence concentrates almost entirely among roles already at scaled deployment, and whether mean confidence and preparedness scores across all roles sit materially below the midpoint of the scale. The Organisational Adoption Gradient, the distance between the most confident and least confident roles, is expected to be wide enough to confirm that enterprise-level averages are concealing the divergence that actually determines whether commitment is rational.</p><p>If those patterns hold, they will not describe organisations on the verge of coordinated action. They will describe organisations in the pre-consensus phase, where experimentation is active and interest is high but shared conviction has not converged to the point where committing significant capital, restructuring teams, or scaling across functions is rational.</p><p>This does not mean all action is premature. The distinction matters and is often lost in the urgency narrative that surrounds AI adoption. Localised experimentation within a single function, where the role has authority and short feedback loops, is rational even in the absence of broader consensus. A CTO running AI tooling in engineering does not need the CMO&#8217;s agreement to proceed. A COO testing AI-assisted operations does not need finance to sign off on a pilot budget.</p><p>What requires consensus is the next tier of commitment. Scaling across functions. Allocating capital that draws on shared budgets. Restructuring workflows that span multiple roles. Hiring or reorganising teams around AI as a core operating capability. These decisions require a degree of cross-role agreement that early-stage fieldwork consistently suggests does not yet exist in most organisations. Proceeding without it is not bold. It is premature, and the costs tend to surface in ways that are difficult to reverse.</p><p>This matters because the costs of premature action and delayed action are not symmetric. 
This asymmetry is under-appreciated and worth examining carefully.</p><p>Acting too early, before alignment has formed across the roles that need to support a commitment, tends to produce initiatives that are technically functional but organisationally unsupported. They survive as long as their internal champion drives them forward. When that champion moves on, or when the initiative requires buy-in from a function that was never genuinely convinced, it stalls. The cost is not just the failed investment itself. It is the erosion of confidence across the organisation. People remember the initiative that was launched before the ground was ready. That memory makes the next initiative harder to advance, the next budget harder to secure, the next cross-functional conversation more cautious. Premature action does not just fail in the present. It taxes the future.&#178;</p><p>Acting too late, after consensus has formed and competitors have moved, carries opportunity cost. But that cost is typically bounded and recoverable. An organisation that enters a market or adopts a capability six months after its competitors can still compete effectively. The advantage lost by waiting is real but rarely existential. An organisation that enters before it is internally aligned may spend those six months not competing but unwinding a premature commitment, resolving the internal friction it created, and rebuilding the confidence it eroded.</p><p>The prevailing narrative around AI adoption emphasises urgency. Move fast. Don&#8217;t get left behind. First-mover advantage. These pressures are real, and they are felt acutely in boardrooms and leadership teams. But they tend to compress the pre-consensus phase rather than support it. Organisations feel pressure to act before they have established the internal conditions that make action sustainable. The urgency is externally imposed. The readiness, or lack of it, is internal.</p><p>The signals that genuine consensus is forming are observable, if you know what to look for. Confidence begins to converge across roles rather than concentrating in one, visible in Quaie&#8217;s Role Shift Index as the positions of different roles move closer together on the adoption spectrum. The language in internal discussions shifts from &#8220;testing&#8221; and &#8220;exploring&#8221; to &#8220;planning&#8221; and &#8220;resourcing.&#8221; Assumptions about impact, cost, and responsibility narrow rather than continue to diverge. Ownership questions get resolved rather than deferred to the next quarterly review. The gaps in the Role Lead-Lag Ranking between key pairings, CTO and CFO, CMO and COO, begin to narrow rather than widen. The Role Alignment Map provides a direct read on whether this convergence is genuine: not just whether roles are moving along the adoption spectrum at similar speeds, but whether they are forming a shared interpretation of AI&#8217;s strategic priorities and ownership.</p><p>The signals that action remains premature are equally observable. One function pushes for scale while others remain unconvinced. Responsibility for outcomes is contested or deliberately left vague. Budget is allocated on the basis of momentum rather than agreement. The organisation describes itself as &#8220;committed to AI&#8221; while key roles privately describe themselves as &#8220;still evaluating.&#8221; There is a gap between the public position and the internal reality, and that gap is where premature commitment lives.
The Role Influence Index is relevant here too: where one role carries disproportionate influence over adoption decisions, the organisation is at particular risk of mistaking that role&#8217;s conviction for collective readiness. A highly influential CTO who pushes for scale before other roles have converged is not leading consensus formation. The organisation may be describing itself as committed while several of the roles that matter most to sustained execution remain privately unconvinced.</p><p>The Q1 fieldwork is designed to establish which set of signals is more prevalent across the cohort, and whether the pre-consensus pattern anticipated here holds in practice. Consensus takes time. It forms through exposure, evidence, and repeated conversation across roles. Quaie&#8217;s Consensus Formation Time is designed to estimate how many quarters that convergence will take, giving leaders a forward-looking view of their decision timeline rather than a backward-looking account of what has already been deployed. The question is not whether that time is passing, but whether it is being used to build alignment deliberately or simply passing while the distance between roles widens without anyone measuring it.</p><p>Knowing where your organisation sits relative to genuine consensus is more useful than knowing how fast it is moving. Speed without convergence is not progress. It is exposure.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Zillow iBuying collapse: Zillow Group public filings, earnings calls, and financial reporting, 2019&#8211;2021. $20 billion revenue target, $3.75 billion in credit facilities, Q3 2021 loss of $421 million, 25 per cent workforce reduction (approximately 2,000 employees): Zillow Group SEC filings and earnings transcripts. Rich Barton&#8217;s statements on earnings calls: public record. See also Chapter 7 of The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) for extended analysis.</p><p>&#178; Asymmetric costs of premature versus delayed action: The pattern is consistent with historical evidence from ERP implementations, where premature commitment, scaling before cross-functional alignment had formed, produced cost overruns averaging twice the original budget and implementation timelines extending from 18 months to 3&#8211;5 years (Panorama Consulting Group, annual ERP reports, 2010&#8211;2020). See also Quaie&#8217;s essay &#8220;What ERP Taught Us About AI and What Leaders Have Already Forgotten&#8221; for extended analysis of the parallel.</p><p>Quaie&#8217;s constructs referenced in this essay (the Organisational Adoption Gradient, Role Lead-Lag Ranking, Role Shift Index, Role Alignment Map, Role Influence Index, and Consensus Formation Time) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[Where Misalignment Blocks AI Progress]]></title><description><![CDATA[When AI initiatives stall, the instinct is to look for technical failure. The model underperformed. The data wasn&#8217;t ready. Integration proved harder than expected.

In practice, the more common cause is quieter. Different roles reached different conclusions about the same initiative, and nobody surfaced the gap until momentum had already stalled.]]></description><link>https://quaie.io/p/where-misalignment-blocks-ai-progress</link><guid isPermaLink="false">https://quaie.io/p/where-misalignment-blocks-ai-progress</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 19 Jan 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VoZC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F869e4330-5ea6-4730-8c52-8d8e5cf46f4c_1254x837.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><img src="https://substackcdn.com/image/fetch/$s_!VoZC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F869e4330-5ea6-4730-8c52-8d8e5cf46f4c_1254x837.jpeg" width="1254" height="837" alt=""></figure><p>When AI initiatives stall, the instinct is to look for technical failure. The model underperformed. The data wasn&#8217;t ready. Integration proved harder than expected.</p><p>In practice, the more common cause is quieter. Different roles reached different conclusions about the same initiative, and nobody surfaced the gap until momentum had already stalled.</p><p>This pattern is so consistent that it deserves to be treated not as an occasional setback but as a structural feature of how organisations adopt AI. Misalignment between roles is not a failure of communication or leadership. It is the predictable result of different functions evaluating the same situation through different lenses, with different risk tolerances, different time horizons, and different definitions of what success looks like. The CTO evaluating technical feasibility is asking a different question from the CMO evaluating commercial impact, which is a different question from the CFO evaluating capital justification, which is a different question again from the CEO evaluating strategic exposure. Each question is legitimate. Each produces a different answer. And those answers diverge before anyone in the organisation necessarily realises they have.</p><p>The challenge is that misalignment is hard to see from the inside. Each role&#8217;s position feels internally coherent.
The CTO who has moved into production use sees an organisation that is making progress and wonders why other functions aren&#8217;t keeping pace. The CMO who is still evaluating sees an organisation that is moving too fast without sufficient proof of commercial value and wonders why technical teams seem indifferent to that concern. The CFO sees capital being committed without the evidentiary standard they would apply to any other investment of equivalent scale. The CHRO sees workforce implications that nobody else has raised. The General Counsel sees regulatory exposure that the operational roles have not accounted for. The CEO who senses tension between them may not be able to locate exactly where the gap sits, how wide it has become, or whether it is narrowing or widening over time.</p><p>Each of these perspectives is rational. That is precisely what makes the problem so persistent. Misalignment does not feel like misalignment from the inside. It feels like other people not understanding the situation as clearly as you do.</p><p>From the outside, looking across roles simultaneously, the picture is different.</p><p>Quaie&#8217;s Q1 2026 fieldwork is designed to make that picture visible across ten executive roles. The hypothesis is that divergence between roles will emerge as the dominant pattern across nearly every measure captured. CTOs at scaled deployment are expected to report both high confidence and high preparedness. CMOs at the experimentation stage are likely to report substantially lower scores on both. The gap between the most advanced and least advanced roles within the same cohort may prove wider than the gap between adoption stages, suggesting that role context shapes readiness more than organisational maturity does. Quaie&#8217;s Organisational Adoption Gradient is designed to quantify this spread precisely, making visible the divergence that enterprise-level averages conceal.</p><p>The blocker distribution is likely to tell the same story from a different angle. ROI uncertainty is expected to concentrate among CEO and CMO roles. CTOs are more likely to cite integration complexity and security concerns. CFOs are likely to flag insufficient evidence for capital commitment. CHROs are expected to raise workforce readiness questions that no other role has addressed. General Counsel is likely to cite regulatory uncertainty and liability exposure.&#185; If that pattern holds, the organisation is not facing one shared constraint that can be addressed with a single intervention. It is facing several simultaneously, distributed unevenly across the people responsible for resolving them. The CTO wants to solve integration problems. The CMO wants to see ROI evidence. The CFO wants both resolved before releasing budget. The CHRO wants to know what happens to the workforce. General Counsel wants governance in place before deployment expands. The CEO wants all of these answered before committing further capital. None of them is wrong. But the absence of a shared view of where these concerns sit relative to each other means the organisation oscillates between priorities rather than converging toward a resolution. This is precisely the condition the Role Alignment Map is designed to surface: not merely where roles sit on the adoption spectrum, but whether the leadership system shares a common interpretation of AI&#8217;s strategic priorities, ownership, and direction.</p><p>This is what makes misalignment so resistant to the usual fixes. 
It is not a single disagreement that can be resolved in a meeting or a workshop. It is a set of parallel concerns, each legitimate, each owned by a different function, each pulling the organisation in a slightly different direction. A steering committee can coordinate activity. It cannot manufacture shared conviction where conviction has not yet formed.</p><p>The conventional response to this kind of friction is to push harder. Escalate decisions. Set deadlines. Create accountability structures. These interventions sometimes produce movement in the short term, but they tend to compress disagreement rather than resolve it. Roles comply without aligning. Activity continues without conviction. The initiative moves forward on paper while confidence remains fragmented underneath.</p><p>The result is a pattern that most leadership teams will recognise: an AI programme that looks healthy by activity metrics but stalls when it reaches a decision point that requires genuine cross-functional commitment. Budget review. Scaling decision. Governance sign-off. These moments expose whether alignment is real or performative, and the answer often surprises the people involved. The programme that everyone assumed was on track turns out to have been running on one function&#8217;s conviction and another function&#8217;s compliance.</p><p>Goldman Sachs offers an instructive case. Under CIO Marco Argenti, Goldman took a deliberately sequenced approach to AI adoption, building internal infrastructure, testing tools within contained functions, and declining to scale until the organisation&#8217;s own evidence supported it. Nearly a year after ChatGPT&#8217;s launch, Goldman had zero production generative AI use cases. This was not inertia. It was a deliberate decision to wait for alignment to form across functions before committing at scale. The same institution&#8217;s macro research division, meanwhile, published a widely cited report questioning whether AI spending across the industry would ever generate adequate returns.&#178; The tension between these two positions, operational caution and analytical scepticism housed within the same firm, illustrates exactly how misalignment manifests even in organisations that are managing it deliberately.</p><p>What makes misalignment measurable rather than merely observable is that its signals appear early. They do not wait for a failed deployment or a missed milestone to become visible. The divergence between roles in both confidence and the nature of blockers cited is likely to be present during pilots, well before any organisation has attempted full integration. The friction that will slow or stall future deployment decisions is forming in the gap between how different roles are experiencing the same early-stage initiatives. The Role Lead-Lag Ranking is designed to track whether roles are converging toward shared conviction or pulling further apart. The Role Influence Index adds a further dimension: where one role exerts disproportionate influence over adoption decisions, misalignment between that role and its functional dependents carries greater organisational weight than the same gap between lower-influence roles. Understanding which roles are acting as gatekeepers or validators helps identify where unresolved divergence is most likely to stall progress.</p><p>This has implications for how organisations assess their own readiness. Most AI readiness assessments operate at the organisational level: does the company have the data, the tools, the talent, the budget? 
These are necessary conditions. They are not sufficient ones. An organisation can have all four and still stall if the roles responsible for acting on them do not share a common assessment of risk, value, and timing. Readiness that exists in one function but not in others is not organisational readiness. It is local capability masquerading as collective preparedness.</p><p>The more useful diagnostic is role-level. Where has one function moved ahead of shared agreement? Which roles are carrying risk that other roles have not acknowledged? Is budget being allocated on the basis of genuine convergence, or on the basis of one function&#8217;s conviction outweighing another&#8217;s hesitation? Is the organisation describing itself as aligned because alignment has been measured, or because nobody has asked the question directly? The Role Shift Index provides the baseline for each of these questions, mapping where each role sits on the adoption spectrum and making visible the gaps that enterprise-level metrics compress away.</p><p>These questions are uncomfortable because they surface disagreement that organisations prefer to leave implicit. But implicit disagreement does not resolve itself. It compounds. The gap between a CTO&#8217;s confidence and a CMO&#8217;s scepticism does not naturally close over time without deliberate intervention. It tends to widen, because each role continues to accumulate evidence that confirms its own position. The CTO sees the tool working and becomes more confident. The CMO sees the absence of commercial proof and becomes more sceptical. The CFO sees budget flowing without the returns that would justify it and becomes more cautious. Each is responding rationally to the evidence available to them. The problem is not that any of them is wrong. It is that none can see the others&#8217; evidence clearly enough to update their own view.</p><p>Misalignment is not a problem to be eliminated. It is a natural phase of adoption that every organisation passes through on the way from experimentation to commitment. The question is not whether it will appear, but whether it will be surfaced and managed deliberately, or left to harden into a structural constraint that no amount of technical capability can overcome.</p><p>The organisations that stall are not usually the ones that lack ambition or talent. They are the ones where friction between roles went unacknowledged long enough to become the defining constraint.
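</p><p>What a role-level friction map could look like is straightforward to sketch. The fragment below is an illustration, not Quaie&#8217;s methodology: the roles, confidence scores, influence weights, and the weighting rule are all invented. It ranks role pairs by the size of their confidence gap, weighted by how much influence each role carries over adoption decisions, so that the gaps most likely to stall a programme surface first.</p><pre><code class="language-python"># Minimal sketch: ranking role pairs by confidence gap, weighted by
# influence over adoption decisions. All numbers are invented for
# illustration; this is not Quaie's data or scoring model.
from itertools import combinations

confidence = {  # self-reported confidence in durable AI value, 0-10
    "CTO": 8.5, "COO": 6.9, "CEO": 5.2, "CFO": 4.1, "CMO": 3.9, "CHRO": 3.5,
}
influence = {   # assumed relative influence over adoption decisions, 0-1
    "CTO": 0.9, "COO": 0.7, "CEO": 1.0, "CFO": 0.8, "CMO": 0.5, "CHRO": 0.4,
}

def friction_map(confidence, influence):
    """Rank role pairs by confidence gap, weighted by joint influence."""
    rows = []
    for a, b in combinations(confidence, 2):
        gap = abs(confidence[a] - confidence[b])
        weight = influence[a] * influence[b]  # gaps between influential roles carry more weight
        rows.append((gap * weight, a, b, gap))
    return sorted(rows, reverse=True)

for weighted, a, b, gap in friction_map(confidence, influence)[:3]:
    print(f"{a} / {b}: confidence gap {gap:.1f}, weighted friction {weighted:.2f}")
</code></pre><p>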
Seeing where that friction sits is the first step toward resolving it.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Blocker distribution by role: The anticipated pattern of ROI uncertainty concentrating among CEO and CMO roles, with integration concerns among CTOs and evidentiary concerns among CFOs, is consistent with BCG AI Radar 2025 (January 2025, 1,803 C-level executives), which found that approximately 70 per cent of AI challenges stem from people, processes, and cultural change rather than technology, and with McKinsey Global Survey on AI (2024), which identified trust and explainability concerns as primary barriers among non-technical leadership roles.</p><p>&#178; Goldman Sachs AI adoption: Goldman&#8217;s deliberate sequencing under CIO Marco Argenti reported in Financial Times, Bloomberg, and Goldman Sachs technology division communications, 2023&#8211;2025. Zero production generative AI use cases nearly a year after ChatGPT launch: Argenti&#8217;s public remarks. Goldman Sachs Global Investment Research report &#8220;Gen AI: Too much spend, too little benefit?&#8221; published June 2024. The coexistence of operational caution and analytical scepticism within the same institution illustrates structured misalignment management.</p><p>Quaie&#8217;s constructs referenced in this essay (the Organisational Adoption Gradient, Role Lead-Lag Ranking, Role Shift Index, Role Alignment Map, and Role Influence Index) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in preceding essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[Which Roles Lead AI Adoption and Which Follow]]></title><description><![CDATA[Inside most organisations, AI adoption does not move as a single wave. It moves in sequence. One role begins. Others observe. Some follow when they see evidence. Some wait longer.
The order is less random than it appears.]]></description><link>https://quaie.io/p/which-roles-lead-ai-adoption-and-which-follow</link><guid isPermaLink="false">https://quaie.io/p/which-roles-lead-ai-adoption-and-which-follow</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 12 Jan 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!lNS_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023bd345-ef66-4c75-b778-ac21415e3d2b_1254x836.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><img src="https://substackcdn.com/image/fetch/$s_!lNS_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F023bd345-ef66-4c75-b778-ac21415e3d2b_1254x836.jpeg" width="1254" height="836" alt=""></figure><p>Inside most organisations, AI adoption does not move as a single wave. It moves in sequence. One role begins. Others observe. Some follow when they see evidence. Some wait longer. The order is less random than it appears.</p><p>There is a persistent assumption in how organisations talk about AI adoption: that it is, or should be, a coordinated effort. Leadership sets a direction. Teams execute. Progress is measured collectively. When adoption stalls, the diagnosis tends to focus on resistance, or lack of vision, or insufficient investment. The possibility that the stall is a sequencing problem rather than a commitment problem is rarely considered.</p><p>But adoption has a natural sequence. Certain roles move first because their context makes early action rational. Others hold back because their context makes caution rational. Neither group is wrong. They are responding to different signals, operating under different constraints, and evaluating risk against different criteria. The question is not how to get everyone moving at the same speed. It is how to understand the order in which roles naturally engage, and to work with that order rather than against it.</p><p>Quaie&#8217;s Q1 2026 fieldwork is designed to make this sequence visible across ten executive roles. The hypothesis is that CTO and CIO roles will show the most advanced adoption stages, with the highest proportion reporting limited production use or scaled deployment. COO and CDO roles are likely to show similar forward positioning, operationally close to workflows where AI creates immediate leverage. CEO roles are expected to cluster at experimentation.
CMO roles may show the widest variance of any group, ranging from no active initiatives at one end to scaled deployment at the other. Quaie&#8217;s Role Lead-Lag Ranking is designed to make this map visible, tracking the temporal distance between roles as they move through adoption stages, revealing whether the organisation is converging toward shared conviction or diverging away from it.</p><p>The instinct is to read this as a performance ranking. CTOs are ahead. CMOs are behind. CEOs need to catch up. But that reading misses what the pattern is actually showing. It is not a league table. It is a map of how adoption propagates through an organisation, shaped by the structural characteristics of each role.</p><p>CTOs move first because they sit closest to operational leverage and risk containment. They control technical infrastructure or own digital workflows directly. Their feedback loops are short: when they experiment with AI tooling, they can observe results within days or weeks, adjust their approach, and decide whether to continue or stop. When an experiment fails, the cost is contained within their function. They do not need cross-functional approval to iterate. This combination of direct authority, fast feedback, and contained downside makes early action rational for these roles in a way that it simply isn&#8217;t for others.&#185; This structural advantage is also what the Role Influence Index captures: the CTO&#8217;s direct ownership of tooling decisions positions the role as a primary catalyst in the adoption sequence, with outsized influence over the pace at which the wider leadership system moves.</p><p>CEOs cluster at experimentation not because they are slow or disengaged, but because their role in the adoption process is fundamentally different. A CEO&#8217;s job at this stage is not to initiate adoption. It is to validate it. They need to see proof from the roles closer to operations before committing the organisation&#8217;s direction and capital. A CEO who moves ahead of that proof is taking a different kind of risk from a CTO who experiments within their own function. The CTO risks a failed tool. The CEO risks a failed strategy. The asymmetry explains the difference in pace, and it is entirely rational on both sides.</p><p>CMOs are among the most structurally interesting cases in the sequence. The wide variance anticipated in the Q1 fieldwork reflects the fact that the CMO&#8217;s position in the sequence is not fixed. It depends on context.</p><p>In some organisations, the CMO is an early mover. This tends to happen where marketing automation and customer personalisation create direct operational leverage, where the CMO has strong control over the relevant workflows, and where the feedback loops between AI-assisted activity and measurable outcomes are relatively tight. In these contexts, the CMO looks more like a CTO: close to the workflow, able to iterate quickly, positioned to see results.</p><p>In other organisations, the CMO is a follower. This tends to happen where creative work is central to the marketing function, where authority over workflows is shared with agencies and external partners, and where the outcomes that matter most are difficult to attribute cleanly. In these contexts, the CMO is waiting for evidence from technical teams before committing. This is not hesitation. It is a different role context producing a different, and perfectly rational, position in the sequence.</p><p>CFOs occupy a structurally distinct position. 
They are rarely early movers in AI adoption, not because finance is conservative by nature, but because the CFO&#8217;s decision criteria require evidence that does not yet exist when early movers are experimenting. A CFO evaluating an AI investment applies the same evidentiary standard they would apply to any capital allocation of equivalent scale. Until the roles closer to operations have stabilised and produced measurable returns, the CFO&#8217;s caution is not a blocker. It is a rational response to insufficient proof.&#178;</p><p>CHROs and General Counsel sit further back still. Their concerns, workforce displacement, reskilling requirements, regulatory exposure, liability, are legitimate and largely unaddressed by the roles moving ahead of them. The CHRO cannot evaluate AI&#8217;s workforce implications until the operational roles have clarified what AI will actually be used for. General Counsel cannot assess regulatory risk until the scope of deployment is visible. These roles are structurally dependent on earlier movers for the inputs they need to act. Treating their position as resistance misreads the sequence entirely.</p><p>This has practical consequences for how organisations plan AI rollout.</p><p>The most common mistake is attempting to move all roles simultaneously. A board-level directive to &#8220;accelerate AI adoption&#8221; creates pressure across every function at once. But the functions are not equally positioned to respond. Technical roles may already be in production use. Commercial roles may still be evaluating feasibility. Finance may be waiting for evidence of durable value that doesn&#8217;t exist yet because the roles that would produce it haven&#8217;t finished stabilising. HR and Legal may be waiting for clarity on scope that the operational roles have not yet provided.</p><p>When pressure is applied uniformly, it doesn&#8217;t accelerate adoption. It creates friction. Roles that are not ready to move are forced into activity that lacks conviction. Roles that have already moved feel constrained by functions that haven&#8217;t caught up. The organisation experiences a sense of stalling that has nothing to do with capability and everything to do with attempting to run a relay as a sprint.</p><p>The alternative is to recognise the natural order and work with it.</p><p>This means funding the roles that are ready to move and letting them generate proof. It means understanding that proof, not instruction, is what pulls follow-on roles forward. A CTO who has stabilised AI use in engineering creates evidence that a CFO can evaluate against financial criteria. A COO who has moved into production use creates a reference point that a CMO in a different context can learn from. The proof generated by early movers reduces uncertainty for the roles that follow. It gives them something concrete to assess rather than a strategic narrative to trust on faith.</p><p>Pull beats push. Adoption accelerates when leading roles generate evidence that followers can use. Forced rollout reverses this dynamic and turns a sequencing challenge into a political one.</p><p>It also means accepting that lag is not the same as resistance. In many cases, follow-on roles are waiting for legitimate inputs: governance clarity from Legal, budget justification from Finance, workforce transition plans from HR, evidence of durable value from an adjacent function. These are reasonable dependencies. 
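</p><p>The lead-lag logic behind this sequencing is simple enough to sketch. The fragment below is illustrative only, not Quaie&#8217;s implementation: the stage labels, the quarterly histories, and the lag rule are invented. It estimates, for each role, how many quarters ago the leading role occupied that role&#8217;s current stage.</p><pre><code class="language-python"># Minimal sketch of a lead-lag tabulation: how many quarters each role
# trails the leading role through a shared set of adoption stages.
# Stage labels and histories are invented; this is not Quaie's model.
STAGES = ["none", "experimentation", "limited_production", "scaled"]

history = {  # quarterly stage observations per role, oldest first
    "CTO": ["experimentation", "limited_production", "scaled"],
    "COO": ["none", "experimentation", "limited_production"],
    "CMO": ["none", "none", "experimentation"],
    "CFO": ["none", "none", "none"],
}

def lead_lag(history):
    """Estimate how many quarters ago the leader held each role's current stage."""
    current = {role: STAGES.index(h[-1]) for role, h in history.items()}
    leader = max(current, key=current.get)
    leader_path = [STAGES.index(s) for s in history[leader]]
    lags = {}
    for role in history:
        # First quarter in which the leader had reached this role's stage;
        # a lower bound if the leader's recorded history starts later.
        hit = next(i for i, s in enumerate(leader_path) if s >= current[role])
        lags[role] = len(leader_path) - 1 - hit
    return leader, lags

leader, lags = lead_lag(history)
print(f"leader: {leader}")
for role, lag in sorted(lags.items(), key=lambda kv: kv[1]):
    print(f"{role} trails by roughly {lag} quarter(s)")
</code></pre><p>The dependencies themselves remain the point. 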
Treating them as obstacles to be overcome, rather than conditions to be met, poisons the relationship between early movers and later adopters and makes future coordination harder.</p><p>The organisations that navigate this well tend to share a common trait. They do not try to eliminate the gap between early movers and followers. They make the gap visible. They track which roles have moved, which are waiting, and what those waiting roles need in order to act. The Role Shift Index provides the baseline, where each role sits today. The Organisational Adoption Gradient quantifies the spread between the most advanced and least advanced roles. The Role Lead-Lag Ranking shows whether that spread is narrowing or widening over time. And the Role Alignment Map charts whether the leadership system shares a common interpretation of AI&#8217;s strategic direction, a distinct question from where each role sits on the adoption spectrum, and one that determines whether the relay produces coordinated organisational commitment or simply a collection of isolated functional advances. Together, these constructs treat adoption as a relay where each handoff depends on the previous runner finishing their leg, not as a race where everyone starts at the same time.</p><p>What the Q1 fieldwork is designed to establish is whether the sequence anticipated here holds in practice, how long the lag between roles typically runs, and whether specific sequences produce better outcomes than others. That last question requires multiple quarters of observation, precisely what Quaie&#8217;s Consensus Formation Time is designed to estimate as longitudinal data accumulates. The structural logic of who moves first, and why, is already visible. Whether it holds consistently across the cohort is what the data will confirm.</p><p>Understanding who moves first, and why, is the beginning of understanding how adoption actually propagates.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; CTO as structural early mover in enterprise technology adoption: The pattern of technology-proximate roles leading adoption is consistent with historical precedent. Samsung&#8217;s response to ChatGPT in 2023 illustrates the dynamic, with employees in technical roles beginning to use ChatGPT within weeks, uploading proprietary source code and meeting transcripts, before the organisation had established governance. The company subsequently banned the tool, followed by similar restrictions at JPMorgan, Amazon, Bank of America, Deutsche Bank, Goldman Sachs, and Accenture. Reported by Bloomberg, May 2023, and across financial press.</p><p>&#178; CFO evidentiary standards for AI investment: BCG AI Radar 2025 (January 2025, 1,803 C-level executives) found that only 25 per cent of organisations reported significant value from AI, despite 75 per cent ranking it as a top-three priority. 
The gap between strategic priority and demonstrated value is the evidentiary challenge CFOs face when evaluating AI capital allocation.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Consensus Formation Time, Role Influence Index, Organisational Adoption Gradient, and Role Alignment Map) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in the preceding essay in this series, &#8220;Why AI Adoption Needs a Reference Layer.&#8221;</p>]]></content:encoded></item><item><title><![CDATA[Where AI Creates Repeatable Value and Where It Doesn't]]></title><description><![CDATA[Most organisations can point to where AI is being used. Fewer can say where it has stabilised.

The difference is not semantic. Activity and value are easy to confuse in the early stages of adoption. Pilots expand. Tools get embedded in workflows. Teams report productivity improvements. Usage dashboards show upward curves. From a distance, this looks like progress. And sometimes it is.]]></description><link>https://quaie.io/p/where-ai-creates-repeatable-value-and-where-it-doesnt</link><guid isPermaLink="false">https://quaie.io/p/where-ai-creates-repeatable-value-and-where-it-doesnt</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 05 Jan 2026 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZNgB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F663e4cde-67c5-4fd9-8d08-22feaae1fe49_1253x836.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<figure><img src="https://substackcdn.com/image/fetch/$s_!ZNgB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F663e4cde-67c5-4fd9-8d08-22feaae1fe49_1253x836.jpeg" width="1253" height="836" alt=""></figure><p>Most organisations can point to where AI is being used. Fewer can say where it has stabilised.</p><p>The difference is not semantic. Activity and value are easy to confuse in the early stages of adoption. Pilots expand. Tools get embedded in workflows. Teams report productivity improvements. Usage dashboards show upward curves. From a distance, this looks like progress. And sometimes it is.</p><p>But until behaviour stabilises within a role, until a function is using AI in a way that is predictable, owned, and no longer dependent on individual enthusiasm or a single champion&#8217;s energy, what you&#8217;re observing is experimentation, not value creation. The distinction matters because organisations routinely make scaling decisions on the basis of experimental signal, treating early activity as evidence that value has formed when it has only been glimpsed.</p><p>Experimentation is valuable. It generates learning, surfaces possibilities, builds familiarity. It is also inherently volatile. A team that adopted a tool last quarter may have moved on this quarter. A workflow that showed promise under one manager may not survive a reorganisation. A use case that produced impressive results in a pilot may fail to replicate when ownership transfers from the person who built it to the team expected to maintain it.
Activity without stability is signal without structure.</p><p>What changes the picture is when a role begins to treat AI not as something it is testing but as something it operates through. The variance in usage drops. Ownership becomes clear. The function stops debating whether to use AI for a given task and starts debating how to improve the way it&#8217;s used. Conversations shift from &#8220;should we try this&#8221; to &#8220;how do we do this better.&#8221; This is where repeatable economic value begins, and it is visible in behaviour before it shows up in any financial metric.</p><p>The question for leaders is not whether AI is being used. It is whether that usage has crossed the threshold from experimentation into something durable enough to build on.</p><p>This is precisely what Quaie&#8217;s Q1 2026 fieldwork is designed to measure. When we ask ten executive roles about adoption stage and confidence in durable value, the hypothesis is that the results will not distribute evenly across the cohort. Some roles are likely to cluster toward experimentation. Others will have moved into limited production use. A smaller number may report scaled deployment. The more significant question is not who is furthest ahead, but how sharply confidence in durable value tracks adoption stage. This is the relationship the Role Shift Index is designed to capture, placing each role on the adoption spectrum and tracking how that position shifts over time. If the pattern holds as expected, roles at scaled deployment will report substantially higher confidence that their AI initiatives will produce lasting economic value than roles still at the experimentation stage. And that gap is likely to map to role more cleanly than to any other variable, including company size, revenue band, or the specific AI applications being used. Quaie&#8217;s Organisational Adoption Gradient, the distance between the most advanced and least advanced roles, is designed to make this divergence visible rather than allowing it to be concealed within an enterprise-level average.</p><p>This suggests something that most AI benchmarking misses entirely. Value does not emerge evenly across an organisation and then get recognised. It concentrates in specific roles first. And the roles where it concentrates are the ones whose context allows experimentation to convert into something durable: decision authority over the relevant workflow, short enough feedback loops to iterate quickly, and proximity to outcomes that can be measured without ambiguity.</p><p>Marketing and customer service are the two functional areas most commonly identified in early research as having crossed from experimentation into formal approval or dedicated budget.&#185; The reasons are structural. Both areas involve high-frequency, repeatable tasks where AI can be tested against clear performance baselines. A marketing team running AI-assisted campaign optimisation can see results within days. A customer service operation using AI for triage and response can measure impact within weeks. The feedback loop is tight enough for experimentation to stabilise relatively quickly.</p><p>But even within these areas, confidence in durable value is likely to remain uneven. The crossing point from &#8220;we&#8217;re trying this&#8221; to &#8220;this is how we work now&#8221; is not a clean threshold. It is a gradual stabilisation that is easier to see in retrospect than in the moment. 
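</p><p>The &#8220;variance drops&#8221; signal is, at least in principle, measurable before anyone names it. The sketch below is illustrative, not Quaie&#8217;s instrumentation: the weekly usage counts are invented, and the window and threshold are arbitrary choices. It flags a function as stabilising when the variance of recent usage falls well below the variance of its early, experimental period.</p><pre><code class="language-python"># Minimal sketch: flag a function's AI usage as stabilising when the
# variance of recent weekly usage falls well below its early variance.
# Counts, window, and threshold are invented for illustration.
from statistics import pvariance

def is_stabilising(weekly_usage, window=4, ratio=0.25):
    """True when recent variance is under `ratio` times early variance."""
    if len(weekly_usage) &lt; 2 * window:
        return False              # not enough history to compare
    early = pvariance(weekly_usage[:window])
    recent = pvariance(weekly_usage[-window:])
    if early == 0:
        return recent == 0        # flat from the start counts as stable
    return recent &lt;= ratio * early

engineering = [5, 40, 12, 55, 48, 51, 50, 52]  # settles after early swings
marketing = [0, 30, 2, 45, 8, 38, 1, 42]       # still oscillating

print("engineering stabilising:", is_stabilising(engineering))  # True
print("marketing stabilising:", is_stabilising(marketing))      # False
</code></pre><p>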
A function can be well into production use and still harbour uncertainty about whether the value will persist through a budget review, a leadership change, or a shift in strategic priorities.</p><p>This is why measuring adoption by what has been deployed misses the point. Deployment is a moment. Stabilisation is a process. And the process looks different depending on which role you&#8217;re observing.</p><p>Consider two roles in the same organisation. A CTO deploys AI tooling across engineering and sees rapid uptake. Technical teams are comfortable with new tools. Feedback loops are short. The CTO has direct authority over the workflow, and when something doesn&#8217;t work, the team can adjust without waiting for cross-functional approval. Within a quarter, usage has stabilised. The experimentation phase is over.</p><p>The same organisation&#8217;s CMO deploys AI in campaign planning and sees a different trajectory. Creative teams are less comfortable with AI-assisted workflows. Measurement is harder because marketing outcomes are influenced by variables outside the CMO&#8217;s control. Authority over the workflow is shared with agencies and partners who have their own views on AI. Six months in, usage is inconsistent. Some team members have adopted it. Others have reverted to previous methods. The CMO describes it as &#8220;in progress.&#8221; In practice, it is still experimental.</p><p>Both roles deployed AI. One is approaching repeatable value. The other is still in experimentation, even if nobody describes it that way internally. The Role Lead-Lag Ranking between these two roles would show a widening temporal gap, the CTO pulling further ahead while the CMO&#8217;s position remains static, a divergence invisible to any enterprise-level metric. This dynamic is also captured by the Role Influence Index, which measures the relative influence each leadership role exerts over adoption decisions. The CTO&#8217;s direct authority over tooling and workflow makes the role a primary catalyst; the CMO&#8217;s shared authority with external partners and creative functions positions it closer to validator or conditional adopter, which in part explains the slower path to stabilisation.</p><p>The practical implication is uncomfortable but important. Early wins in one function do not predict success elsewhere. What is working in marketing is working because of marketing&#8217;s specific role context. Finance has a different context, different blockers, and a different path to stabilisation. Operations has another. The CHRO faces yet another, workforce readiness questions that no other role is addressing, with feedback loops measured in quarters rather than days. The instinct to generalise (&#8220;AI is working here, so let&#8217;s accelerate it everywhere&#8221;) misreads what is actually happening. It mistakes a role-specific outcome for an organisational one.</p><p>The more useful question for any leadership team is not &#8220;where are we using AI?&#8221; but &#8220;where has AI use become predictable and owned?&#8221; Where is the variance dropping? Where has a function stopped experimenting and started operating? Where is confidence earned through repeated use rather than assumed on the basis of a promising pilot? Alongside these questions sits a related one that the Role Alignment Map is designed to answer: whether the leadership system as a whole shares a common interpretation of where AI is creating value, who owns it, and what the strategic priorities are.
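</p><p>The distinction is computable in principle. The sketch below separates the two on invented inputs, using one plausible formula for each, neither of them Quaie&#8217;s: adoption-stage divergence as the spread of stage positions, and interpretive misalignment as how differently roles rank the same strategic priorities. An organisation can score low on one and high on the other.</p><pre><code class="language-python"># Minimal sketch: adoption-stage divergence and strategic misalignment
# computed separately, on invented data. Illustrative formulas only.
from itertools import combinations

stage = {"CTO": 3, "COO": 2, "CMO": 1, "CFO": 1}  # 0=none .. 3=scaled

priorities = {  # each role ranks the same priorities, most important first
    "CTO": ["automation", "cost", "new_products", "risk"],
    "COO": ["automation", "cost", "risk", "new_products"],
    "CMO": ["new_products", "automation", "cost", "risk"],
    "CFO": ["cost", "risk", "automation", "new_products"],
}

def stage_divergence(stage):
    """Spread between the most and least advanced roles (gradient-like)."""
    return max(stage.values()) - min(stage.values())

def rank_disagreement(a, b):
    """Number of priority pairs the two rankings order differently."""
    return sum(1 for x, y in combinations(a, 2) if b.index(x) > b.index(y))

def misalignment(priorities):
    """Mean pairwise rank disagreement across all role pairs."""
    pairs = list(combinations(priorities, 2))
    total = sum(rank_disagreement(priorities[a], priorities[b]) for a, b in pairs)
    return total / len(pairs)

print("stage divergence:", stage_divergence(stage))  # 2
print("mean rank disagreement:", round(misalignment(priorities), 2))
</code></pre><p>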
Adoption-stage divergence and strategic misalignment are not the same problem, and they do not resolve through the same interventions.</p><p>The functions where usage has stabilised are the places where budget should follow. Not because activity is highest, but because behaviour has stabilised enough to suggest the value will persist.</p><p>The Q1 fieldwork will test whether the concentration pattern anticipated here holds in practice, and whether value is concentrating by role before it concentrates by sector, company size, or use case.</p><p>That is worth watching.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Marketing and customer service as leading functional areas for stabilised AI use: Consistent with McKinsey Global Survey on AI (2024), which identified customer service, marketing, and software engineering as the three most common functions for generative AI deployment. BCG AI Radar 2025 (January 2025, 1,803 C-level executives) corroborates marketing and customer operations as early value-concentration areas.</p><p>&#178; Microsoft Copilot adoption-to-value gap: Microsoft reported 70 per cent of Fortune 500 companies purchasing Copilot licences by late 2024 (Microsoft earnings calls). Gartner found fewer than 5 per cent had moved beyond limited pilot (Gartner research, mid-2024). The gap between platform purchase and stabilised organisational use illustrates why deployment metrics alone do not capture value formation.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Consensus Formation Time, Role Influence Index, Organisational Adoption Gradient, and Role Alignment Map) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in subsequent essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[Why AI Adoption Needs a Reference Layer]]></title><description><![CDATA[Across mature markets, participants share a common reference point. Capital markets have yield curves. Labour markets have employment data. Supply chains have lead-time indices. These instruments don&#8217;t tell participants what to do.
They tell them where they are.]]></description><link>https://quaie.io/p/why-ai-adoption-needs-a-reference-layer</link><guid isPermaLink="false">https://quaie.io/p/why-ai-adoption-needs-a-reference-layer</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 29 Dec 2025 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Dl1-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf187182-3484-45ff-a243-32c4b28ff182_1254x837.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Dl1-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdf187182-3484-45ff-a243-32c4b28ff182_1254x837.jpeg" width="1254" height="837" alt=""></figure></div><p>Across mature markets, participants share a common reference point. Capital markets have yield curves. Labour markets have employment data. Supply chains have lead-time indices. These instruments don&#8217;t tell participants what to do. They tell them where they are.</p><p>AI adoption has no equivalent. Not at the level that matters, which is the decision level.</p><p>There is no shortage of data on what organisations have deployed, which tools are gaining traction, how much is being spent. Analyst firms publish this data annually. Vendors publish it quarterly. The technology press publishes it daily. What none of them measure is how organisations are actually reaching their decisions about AI. Who inside the organisation is convinced. Who is hesitant. Whether those positions are converging or pulling further apart. And whether the organisation is approaching the kind of internal alignment that makes committed action rational, or drifting further from it without realising.</p><p>This absence has consequences that are easy to underestimate.</p><p>Without a reference layer, every organisation navigates AI adoption in isolation. Each leadership team treats its internal dynamics as unique. The CTO who is three stages ahead of the CMO assumes this is a local problem, specific to their organisation&#8217;s culture or structure. The CMO who can&#8217;t get budget for a programme that is already proving value in another function assumes the blocker is political. The CFO who has seen no business case that meets the evidentiary standard they would apply to any other investment of equivalent scale assumes the timing is wrong.
The CEO who senses tension between them but can&#8217;t locate exactly where the gap sits assumes the team needs more time, or a better business case, or a different vendor.</p><p>Some of these assumptions will turn out to be correct. But without a way to compare against external signal, there is no mechanism for distinguishing between a problem that is genuinely local and a pattern that is structural. And if the pattern is structural, the response needs to be fundamentally different from the response to a local problem. You don&#8217;t fix a structural misalignment with a better business case. You fix it by understanding where in the organisation confidence, conviction, and readiness have diverged, and by addressing those gaps deliberately.</p><p>This is where the existing intelligence landscape falls short. Not because the research is bad, but because it operates at the wrong altitude.</p><p>Annual surveys capture what happened. They tell you that a certain percentage of enterprises deployed AI in a given year.&#185; They do not tell you which roles inside those enterprises were confident the deployment would last. They do not tell you whether the decision to deploy was shared across functions or driven by a single champion. They do not tell you whether the organisation had reached genuine consensus or simply run out of patience with the evaluation phase. These are not minor details. They are the dynamics that determine whether a deployment sustains or unwinds within eighteen months.</p><p>Platform data shows usage. It tells you how many seats are active, how often a tool is accessed, which features are being used.&#178; What it cannot show is whether the people using the tool believe it is creating durable value, whether their managers share that belief, or whether the budget behind it will survive the next planning cycle. Usage without conviction is experimentation. It looks like adoption until it stops.</p><p>Vendor narratives tell you what is possible. Case studies tell you what worked somewhere, once. Board presentations tell you what the CEO has been told. None of these are reference points. They are positions, advanced by interested parties, with no external benchmark against which to evaluate them.</p><p>What is missing is a continuously updated, role-based view of how organisations are deciding about AI right now. Not what they bought. Not what they deployed. But what they believe, intend, and are prepared to commit to. And critically, whether those beliefs are shared across the roles that need to act on them, or whether they diverge in ways that will slow progress before it becomes visible in outcomes.</p><p>This is the gap Quaie&#8217;s Q1 2026 fieldwork is designed to close. The hypothesis is that when you ask ten executive roles the same questions about AI readiness, from CEO and CTO to CFO, CHRO, and General Counsel, role will emerge as the primary axis of divergence, more significant than company size, revenue band, or sector. A CTO and a CMO sitting in the same organisation, looking at the same AI initiatives, are likely to report fundamentally different levels of confidence, cite fundamentally different blockers, and describe fundamentally different levels of preparedness. A CFO and a CHRO, asked about the same technology investment, will probably frame the question in terms so different that the two accounts are hard to recognise as the same conversation.
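</p><p>If roles really do diverge this way, averaging across them destroys the signal. A toy illustration, with invented confidence scores, shows two organisations sharing an identical enterprise average while sitting in entirely different positions:</p><pre><code># Toy illustration of aggregation loss: same enterprise average,
# very different internal pictures. Scores are invented.
from statistics import mean, pstdev

# Hypothetical confidence-in-durable-value scores (0-10) by role.
org_a = {"CEO": 6, "CTO": 6, "CFO": 6, "CMO": 6}   # genuine consensus
org_b = {"CEO": 9, "CTO": 10, "CFO": 3, "CMO": 2}  # sharp divergence

for name, org in [("A", org_a), ("B", org_b)]:
    print(name, mean(org.values()), round(pstdev(org.values()), 1))
# A 6 0.0
# B 6 3.5   identical average, entirely different decision system
</code></pre><p>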
If that pattern holds consistently across the cohort, the implication is significant: any intelligence that aggregates across roles, reporting an enterprise average or a sector benchmark, is compressing precisely the signal that leaders need to see.</p><p>This is why the reference layer that AI adoption requires looks different from what currently exists.</p><p>It needs to operate at the role level, not the company level. The Role Shift Index tracks where each of ten executive roles sits on the adoption spectrum, not as a single reading but as a position that shifts over time, making visible the pace and direction of movement within each function.</p><p>It needs to surface divergence as information rather than smoothing it into a consensus that has not actually formed. The Organisational Adoption Gradient measures exactly this. It captures the distance between the most advanced and least advanced roles, quantifying the internal spread that enterprise averages conceal.</p><p>It needs to capture sequencing. Which roles move first, which follow, and whether the gap between them is narrowing or widening. Role Lead-Lag Ranking tracks the temporal distance between roles as they move through adoption stages, revealing whether an organisation is converging toward shared conviction or diverging away from it.</p><p>And it needs a measure of when alignment has reached the threshold that makes committed action rational. Consensus Formation Time estimates how many quarters it will take for an organisation&#8217;s roles to reach sufficient convergence. This gives leaders a forward-looking view of their decision timeline rather than a backward-looking account of what has already been deployed.</p><p>But timing and sequencing alone are not enough. Leaders also need to understand whether their organisations are interpreting the opportunity in similar ways. The Role Alignment Map measures the degree to which leadership roles share a common interpretation of AI strategy, ownership, and organisational direction. It reveals whether a leadership system is moving toward coordinated commitment or remaining fragmented, a distinct question from where each role sits on the adoption spectrum, and one that determines whether convergence is genuine or performative.</p><p>Finally, adoption decisions inside enterprises are rarely symmetrical. Some roles initiate change, others validate it, and some hold the authority that determines whether investment proceeds. The Role Influence Index measures the relative influence of leadership roles on adoption decisions, identifying which functions act as catalysts, validators, or gatekeepers as AI moves from experimentation toward operational deployment.</p><p>None of this can be reconstructed after the fact. A quarter that passes without capturing decision context is a quarter of signal permanently lost. Time is not just a dimension of this data. It is the moat.</p><p>There is a reason mature markets develop shared reference points. Not because they simplify decisions, but because they reduce the cost of navigating under uncertainty. When participants can see the same structural picture, misjudgements become cheaper and corrections happen faster.</p><p>AI adoption is still in its formative phase. The most consequential decisions facing leaders right now are not about which tools to use. They are about when to scale, where to focus, how to sequence change across roles, and whether organisational confidence is sufficient to justify committing capital. These are judgement calls. 
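</p><p>As one illustration of the forward-looking estimate that Consensus Formation Time describes, consider a deliberately naive sketch. It assumes each role&#8217;s readiness trends linearly at its recent quarterly rate and that consensus means every role clearing a common threshold; neither assumption reflects Quaie&#8217;s actual estimation method, and the numbers are invented.</p><pre><code># Naive Consensus Formation Time sketch. All numbers are hypothetical.
import math

THRESHOLD = 7.0  # assumed readiness level at which committed action is rational

# (current score, change per quarter) for each role.
trajectories = {"CEO": (6.5, 0.5), "CTO": (8.0, 0.3),
                "CFO": (4.0, 0.8), "CMO": (5.0, 0.4)}

def consensus_formation_time(traj):
    """Quarters until the slowest-converging role crosses the threshold."""
    quarters = []
    for score, rate in traj.values():
        if score >= THRESHOLD:
            quarters.append(0.0)
        elif rate > 0:
            quarters.append((THRESHOLD - score) / rate)
        else:
            return math.inf  # a stalled or reverting role blocks consensus
    return max(quarters)

print(consensus_formation_time(trajectories))  # 5.0 quarters, set by the CMO
</code></pre><p>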
And judgement, made in isolation, without reference to how those same dynamics are playing out across comparable organisations, degrades in ways that are invisible until the consequences arrive.</p><p>What is beginning to emerge is a different kind of intelligence, one that treats AI adoption as a living decision system rather than a static trend to be benchmarked. It is slower than hype, more restrained than forecasts, and built to become more valuable over time rather than less.</p><p>The essays that follow this one examine the specific dynamics through which that intelligence operates: where value is stabilising, which roles lead and which follow, where misalignment creates friction, and when consensus makes action rational.</p><p>Together, they describe a reference layer that does not yet exist at scale. Building it is the work.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; Annual AI adoption surveys: McKinsey Global Survey on AI (2024) reported 78 per cent of respondents using AI in at least one business function; the 2025 edition reported 88 per cent. BCG AI Radar 2025 (January 2025, 1,803 C-level executives across 19 markets) found 75 per cent ranked AI as a top-three priority, but only 25 per cent reported significant value. Deloitte State of AI in the Enterprise (2026 edition, 3,235 leaders, 24 countries) reported similar adoption figures. None disaggregate by executive role within the enterprise.</p><p>&#178; Platform usage data limitations: Microsoft reported that 70 per cent of Fortune 500 companies had purchased Copilot licences by late 2024 (Microsoft earnings calls). Gartner found that fewer than 5 per cent had moved beyond limited pilot (Gartner research, mid-2024). The gap between purchase and sustained organisational use illustrates why platform data alone cannot serve as a reference layer for adoption.</p><p>Quaie&#8217;s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Organisational Adoption Gradient, Consensus Formation Time, Role Alignment Map, and Role Influence Index) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in subsequent essays in this series.</p>]]></content:encoded></item><item><title><![CDATA[AI Moves Fast. 
Organisations Don’t.]]></title><description><![CDATA[Artificial intelligence is the fastest-adopted consumer technology in history.]]></description><link>https://quaie.io/p/ai-moves-fast-organisations-dont</link><guid isPermaLink="false">https://quaie.io/p/ai-moves-fast-organisations-dont</guid><dc:creator><![CDATA[Simon MacTaggart]]></dc:creator><pubDate>Mon, 22 Dec 2025 08:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qEbO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdeb697f8-46c1-4151-9d50-5e567859e8b8_1254x836.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!qEbO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdeb697f8-46c1-4151-9d50-5e567859e8b8_1254x836.jpeg" width="1254" height="836" alt=""></figure></div><p>Artificial intelligence is the fastest-adopted consumer technology in history. More than a billion people used AI tools within three years of their mainstream availability, faster than the internet, faster than the personal computer, and faster than the smartphone. ChatGPT reached a hundred million users in two months.&#185; By 2025, more than a third of working American adults were using generative AI on the job.&#178; Something happened quickly, and the narrative built around that speed has become the dominant frame through which leaders evaluate their own progress: AI is moving fast, adoption is accelerating, and those who hesitate will be left behind.</p><p>The narrative is not wrong about the technology. It is wrong about the unit of analysis. Downloading a tool is not the same as reorganising a business around it. And once you shift attention from individual usage to organisational transformation, the picture inverts almost entirely.</p><p>Ninety-five per cent of enterprise generative AI pilots fail to deliver measurable financial returns.&#179; According to IDC, for every thirty-three prototypes a company builds, four reach production.&#8308; Nearly two-thirds of organisations remain stuck in the pilot stage.
According to S&amp;P Global Market Intelligence, forty-two per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year&#8217;s rate.&#8309; These are not the numbers of a fast transformation temporarily encountering friction. They are the numbers of a slow transformation being mistaken for a fast one.</p><p>The instinct is to treat the gap as a problem of execution, something better tools, more investment, or stronger leadership will close within a few quarters. But the evidence points somewhere more uncomfortable. OpenAI&#8217;s own enterprise research concludes that the primary constraints are no longer model performance or tooling, but organisational readiness and implementation.&#8310; BCG&#8217;s widely cited finding puts the ratio at ten per cent algorithms, twenty per cent technology and data, seventy per cent people, processes, and cultural change.&#8311; When an AI company tells you the technology is not the bottleneck, that is worth believing. And if seventy per cent of the challenge sits in people, processes, and culture, then seventy per cent of the challenge operates on the timescale of organisational change, which is measured in years and decades, not quarters.</p><p>History confirms what the data suggests. Electricity took four decades to move from negligible to seventy per cent household adoption. The telephone took six. Even within recent memory, the pattern holds. Enterprises had websites by the early 2000s; most had not fundamentally restructured around digital capabilities until the mid-2010s. ERP systems were available in the 1990s; full organisational integration took a decade or more.&#8312; The common thread is that general-purpose technologies requiring deep organisational adaptation follow extended timelines regardless of how quickly the underlying capability matures. The technology arrives, early adopters experiment, results are mixed, structures resist, roles disagree, consensus forms slowly, and capital follows conviction at the pace conviction actually forms, which is never as fast as anyone would like.</p><p>AI fits this pattern with uncomfortable precision. What distinguishes it from faster enterprise adoption curves such as cloud computing and mobile is not complexity but distribution. Cloud was primarily an infrastructure decision that could be led by a single function. A CTO could migrate to cloud without the CMO needing to believe it was the right call. AI is different. Its value and its risk are distributed across the entire organisation. Marketing uses it for different purposes than engineering. Finance evaluates it against different criteria than operations. The CEO must reconcile these perspectives before committing direction and capital. No single function can adopt AI on behalf of the organisation the way IT adopted cloud on behalf of the enterprise. This makes AI adoption less like a technology upgrade and more like digitalisation, financialisation, or industrialisation, transformations that reshaped not just what organisations used but how they made decisions, allocated resources, and coordinated across functions. Those were generational processes. Not because the technology was slow, but because the human coordination required to absorb it was deep, cross-functional, and irreducibly complex.</p><p>This is precisely what Quaie&#8217;s Q1 2026 fieldwork is designed to make visible.
When we measure AI adoption readiness across ten executive roles, from CEO and CTO to CFO, CHRO, and General Counsel, the hypothesis is that the sharpest divergence will not be between companies, or sectors, or revenue bands, but between roles within the same cohort. Role context, the specific evidence standards, risk tolerances, and organisational mandates that each function carries, is likely to shape readiness more than organisational maturity does. Quaie&#8217;s Organisational Adoption Gradient is designed to measure this distance precisely: the spread between the roles that have moved and the roles that have not. The Role Shift Index tracks the underlying movement that produces this gradient, mapping where each of the ten executive roles sits on the adoption spectrum and whether that position is advancing, holding, or reverting quarter by quarter.</p><p>The blocker distribution is likely to tell the same story from a different angle. ROI uncertainty may dominate among CEOs and CMOs. CTOs are more likely to cite integration complexity and security concerns. CFOs will probably require evidence before releasing capital. CHROs are likely to raise workforce readiness questions that no other role has yet addressed. If that pattern holds, the organisation is not facing a single constraint but several, distributed unevenly across the people responsible for resolving them. A CTO wants to solve an integration problem. A CMO wants to see commercial proof. A CFO wants both answered before releasing budget. A CHRO wants to know what happens to the workforce. Each position is rational. None of them can see the others clearly enough to converge without a mechanism for making the full picture visible. This is precisely what the Role Alignment Map is designed to provide: a measure of whether the leadership system shares a common interpretation of AI&#8217;s strategic priorities and ownership, making visible the gap between a leadership team that describes itself as aligned and one that has actually formed shared conviction.</p><p>This is the structural reality that the speed narrative obscures. An organisation does not adopt AI the way a person downloads an application. It adopts AI through a sequence of decisions made by different roles, each operating under different constraints, evaluating risk against different criteria, and reaching conviction at different speeds. Those roles must eventually converge before committed action becomes rational. That convergence is inherently slow, because it depends on evidence accumulating across functions, not enthusiasm concentrating in one.</p><p>The practical consequences are significant, and they cut against much of the advice currently circulating.</p><p>If AI adoption is generational, then the cost of moving wrong exceeds the cost of moving slowly. Premature commitment, scaling before alignment has formed, forcing rollout before roles have converged on shared conviction, carries compounding costs. This is the risk the Q1 fieldwork is designed to surface: capital allocation front-loaded in organisations that committed before internal alignment was in place, and initiatives that stalled not because they moved too slowly but because they moved before the decision was shared.</p><p>If AI adoption is generational, then sequencing matters more than speed. The order in which roles engage, who leads, who validates, who follows, determines whether adoption propagates through an organisation or fractures within it. 
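</p><p>One way to picture that sequencing, as a hypothetical sketch rather than Quaie&#8217;s published method, is to record the quarter in which each role first reached a given adoption stage and rank roles by their distance from the median mover:</p><pre><code># Hypothetical lead-lag sketch: quarter in which each role first reported
# reaching limited production use. Data and method are illustrative only.
from statistics import median

first_crossed = {"CTO": 1, "CEO": 3, "CFO": 5, "CMO": 6, "CHRO": 8}

def lead_lag(crossings):
    """Quarters ahead of (negative) or behind (positive) the median role."""
    mid = median(crossings.values())
    return {role: q - mid for role, q in crossings.items()}

for role, gap in sorted(lead_lag(first_crossed).items(), key=lambda kv: kv[1]):
    print(f"{role:>5}: {gap:+.1f} quarters vs median")
# The CTO leads the median by four quarters; the CHRO trails it by three.
</code></pre><p>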
Understanding that sequence requires knowing where confidence sits today, not where deployment stands. Quaie&#8217;s Role Lead-Lag Ranking tracks exactly this: the temporal distance between roles as they move through adoption stages, making visible whether the organisation is converging or diverging.</p><p>And if AI adoption is generational, then the intelligence leaders need cannot be a snapshot of what has been bought or deployed. It must be a continuously updated, role-based view of how decisions are forming, tracking conviction, alignment, and timing at the level where decisions are actually made. Capital markets have yield curves. Labour markets have employment data. AI adoption, the most consequential organisational transformation in a generation, has no equivalent. Every leadership team navigates in isolation, treating its internal dynamics as unique, unable to distinguish between a problem that is genuinely local and a pattern that is structural.</p><p>The organisations that navigate this well will not be the ones that moved fastest. They will be the ones that understood where they stood, moved in the right order, and committed when the evidence supported it. Patience is not a popular prescription in a market saturated with urgency. But urgency that outruns the structural pace of organisational change does not produce transformation. It produces expensive false starts and eroded confidence, the very conditions that make the next attempt harder.</p><div><hr></div><p><em>This essay is part of Quaie&#8217;s <a href="https://quaie.io/p/quaie-founding-essay-series">Founding Essay Series</a>, examining how organisations decide to adopt AI role by role, over time.</em></p><div><hr></div><p><strong>Notes and Sources</strong></p><p>&#185; ChatGPT reaching 100 million users in two months: Reported by Reuters, February 2023, based on data from analytics firms including Similarweb.</p><p>&#178; More than a third of working American adults using generative AI on the job by 2025: Pew Research Center, &#8220;AI in the Workplace&#8221; survey data, 2025. Multiple corroborating surveys from McKinsey (Global Survey on AI, 2024) and Salesforce (Generative AI Snapshot, 2024) report similar or higher figures.</p><p>&#179; 95 per cent of generative AI pilots failing to deliver measurable financial returns: Reported across multiple analyst sources, 2024&#8211;2025. Gartner predicted in July 2024 that at least 30 per cent of generative AI projects would be abandoned after proof of concept by end of 2025 (Gartner Data &amp; Analytics Summit, Sydney, July 2024).</p><p>&#8308; IDC prototype-to-production ratio: IDC research findings on enterprise AI deployment, cited across industry reporting, 2024&#8211;2025. For every 33 AI prototypes built, approximately 4 reached production deployment.</p><p>&#8309; S&amp;P Global Market Intelligence: 42 per cent of companies abandoned most AI initiatives in 2025. S&amp;P Global Market Intelligence, 451 Research survey, published 2025.</p><p>&#8310; OpenAI enterprise research on organisational readiness as primary constraint: OpenAI enterprise deployment findings, reported 2024&#8211;2025. OpenAI&#8217;s enterprise team has publicly stated that the primary barriers to enterprise AI value are organisational, not technical.</p><p>&#8311; BCG AI adoption composition: Boston Consulting Group, &#8220;From Potential to Profit: Closing the AI Impact Gap&#8221; (AI Radar 2025), January 2025. Survey of 1,803 C-level executives across 19 markets. 
BCG&#8217;s related publications cite approximately 70 per cent of AI challenges stemming from people, processes, and cultural change.</p><p>&#8312; ERP implementation timescales: Panorama Consulting Group, annual ERP reports (2010&#8211;2020). More than 70 per cent of ERP implementations failed to meet their objectives, with average timescales extending from planned 18-month schedules to 3&#8211;5 years. See also: Quaie&#8217;s essay &#8220;What ERP Taught Us About AI, and What Leaders Have Already Forgotten&#8221; for extended analysis.</p><p>Quaie&#8217;s constructs referenced in this essay (the Organisational Adoption Gradient, Role Shift Index, Role Lead-Lag Ranking, and Role Alignment Map) are described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026) and in subsequent essays in this series.</p>]]></content:encoded></item></channel></rss>