The Incentive Problem Behind AI’s Biggest Blind Spot
Artificial intelligence may be the first major technology where every participant in the ecosystem has a financial incentive to misdiagnose the primary constraint on its adoption.
Venture capitalists need AI adoption to accelerate. Their fund cycles depend on it. A typical VC fund operates on a seven-to-ten-year horizon, deploying capital into companies whose valuations assume rapid market penetration. A thesis that organisational AI adoption follows a generational timescale (ten to twenty years of slow, uneven, structurally constrained transformation) is not a thesis any fund can take to its limited partners. The incentive is to believe in speed, invest accordingly, and interpret every enterprise AI contract as evidence that the acceleration is happening.
AI companies need adoption to accelerate for a more immediate reason: revenue. Their growth models are built on the assumption that enterprises will move from pilot to production to enterprise-wide deployment on a timeline measured in quarters. When ninety-five per cent of pilots fail to deliver measurable returns and two-thirds of organisations remain stuck in experimentation, the vendor’s instinct is to treat this as a sales execution problem or a product gap, something the next feature release or the next integration partnership will fix.¹ The possibility that the constraint is structural, that it sits in the organisation itself, and that it operates on a timescale no product roadmap can compress is not a possibility the quarterly earnings call is designed to surface.
Media outlets need the acceleration narrative for a different but equally binding reason: attention. Urgency drives engagement. “AI is transforming everything” generates clicks, subscriptions, and advertising revenue. “AI adoption will take a decade and the primary bottleneck is organisational coordination” does not. The result is a media environment that systematically amplifies speed and suppresses friction, not out of dishonesty, but because the economics of attention select for urgency over accuracy.
Consultancies face a subtler version of the same problem. Their revenue depends on organisations believing that AI transformation is achievable within the scope of an engagement, typically six to eighteen months. A consultancy that tells a client “this will take five to ten years and the binding constraint is alignment across your leadership team, not your technology stack” is a consultancy that just lost a project. The incentive is to scope the problem as solvable within the budget cycle, even when the evidence suggests it is not.²
Analysts and research firms occupy the final corner of the ecosystem. Their business model depends on enterprises paying for intelligence that informs near-term decisions. Annual surveys measuring enterprise AI adoption at the aggregate level (what percentage of companies have deployed, what percentage plan to invest) serve this function. They provide a snapshot that confirms the market is moving. What they do not provide, because their methodology is not designed to capture it, is the internal coordination dynamics that determine whether any individual organisation’s adoption will succeed or stall.³ The unit of analysis is the enterprise. The unit of decision is the role. The gap between these two is where the most consequential information sits, and where no one in the current ecosystem has an incentive to look.
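The aggregation gap is easy to demonstrate. As a minimal sketch, with entirely invented one-to-five confidence scores, here are two organisations that an enterprise-level survey cannot tell apart: one aligned across its executive roles, one split down the middle.

```python
from statistics import mean, pstdev

# Hypothetical 1-5 AI-confidence scores, one per executive role, for two
# organisations. Every number here is invented for illustration.
org_aligned = {"CEO": 3, "CTO": 3, "COO": 3, "CFO": 3, "CMO": 3,
               "CRO": 3, "CDO": 3, "CISO": 3, "CHRO": 3, "CLO": 3}
org_split   = {"CEO": 5, "CTO": 5, "COO": 3, "CFO": 1, "CMO": 1,
               "CRO": 1, "CDO": 5, "CISO": 5, "CHRO": 1, "CLO": 3}

for name, scores in [("aligned org", org_aligned), ("split org", org_split)]:
    values = list(scores.values())
    print(f"{name}: enterprise mean = {mean(values):.1f}, "
          f"role-level spread = {pstdev(values):.2f}")

# Both organisations report the same enterprise-level average (3.0), so an
# aggregate survey scores them identically. Only the role-level spread
# separates shared conviction from structural misalignment.
```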
This is not a conspiracy. It is a structure. Every participant is acting rationally within their own incentive framework. The VC is optimising for fund returns. The vendor is optimising for revenue growth. The media outlet is optimising for engagement. The consultancy is optimising for project scope. The analyst firm is optimising for renewal rates. Each produces insight that is genuinely useful within its frame. None of them is lying. But the sum of their individual rationalities produces a collective blind spot: nobody in the ecosystem is structured to see, measure, or report on the organisational coordination problem that the evidence consistently identifies as the primary constraint.
The evidence is substantial. BCG estimates that seventy per cent of the AI challenge sits in people, processes, and cultural change, not technology or algorithms.⁴ OpenAI’s own enterprise research concludes that organisational readiness, not model performance, is the binding constraint.⁵ According to S&P Global Market Intelligence, forty-two per cent of companies abandoned most of their AI initiatives in 2025, more than double the previous year.⁶ According to IDC, for every thirty-three AI prototypes a company builds, four reach production, roughly one in eight.⁷ These are not statistics that describe a technology problem. They describe a coordination problem, one that unfolds inside organisations, between roles, over time.
The incentive problem produces contradictions that would be visible if anyone were structured to notice them. Goldman Sachs offers the sharpest example. In 2023–2024, Goldman’s technology division under CIO Marco Argenti was building internal AI infrastructure with deliberate caution: zero production generative AI use cases nearly a year after ChatGPT’s launch. Simultaneously, Goldman’s macro research division published a widely cited report questioning whether industry-wide AI spending would ever generate adequate returns.⁸ The same institution was, from one division, cautiously building AI capability, and from another, publicly questioning whether anyone’s AI spending was justified. This is not incoherence. It is the incentive problem made visible within a single firm: the technology division’s incentive (build carefully, contain risk) and the research division’s incentive (publish contrarian analysis that attracts attention) produced positions that were individually rational and collectively contradictory. If this tension is present inside Goldman Sachs, one of the most analytically sophisticated institutions in the world, it is present everywhere.
When you measure AI adoption at the role level rather than the enterprise level, the coordination problem becomes visible. Quaie’s Q1 2026 fieldwork across ten executive roles is designed to surface exactly this. The hypothesis is that the sharpest divergence in AI readiness will not be between companies, sectors, or revenue bands, but between executive roles within the same cohort. CTOs are expected to report high confidence and advanced deployment. CMOs are likely to report low confidence and early-stage experimentation. CFOs are expected to flag insufficient evidence for capital commitment. CHROs are likely to raise workforce readiness concerns that no other role has addressed. The blockers each role cites are anticipated to be fundamentally different: ROI uncertainty for CEOs and marketing leaders, integration complexity and security for technology leaders, evidentiary thresholds for finance, regulatory exposure for legal.⁹ If that pattern holds, the organisation will not be facing one problem but several, distributed across the people responsible for resolving them, invisible at any level of analysis that aggregates across roles. The Role Alignment Map is designed to make this directly measurable: not just where roles sit on the adoption spectrum, but whether they share a common interpretation of AI’s strategic priorities and ownership. The Role Influence Index adds a further dimension: which roles are acting as catalysts or gatekeepers in the adoption process, and whether those influence patterns are consistent with the coordination structure the organisation believes it has in place.
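As an illustration of what an alignment measure of this kind could look like (a toy sketch, not Quaie’s actual instrument; every role name and priority below is hypothetical), pairwise overlap in stated priorities is one simple way to test whether roles are describing the same strategy:

```python
# A toy role-alignment measure (not Quaie's actual instrument): each
# executive states what they believe the organisation's top AI priorities
# are, and pairwise overlap exposes divergent interpretations. All role
# names and priorities here are hypothetical.

stated_priorities = {
    "CEO":  {"revenue growth", "cost reduction", "competitive parity"},
    "CTO":  {"platform consolidation", "security", "cost reduction"},
    "CFO":  {"cost reduction", "ROI evidence", "risk containment"},
    "CHRO": {"workforce reskilling", "change management", "retention"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two priority sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

roles = sorted(stated_priorities)
for i, r1 in enumerate(roles):
    for r2 in roles[i + 1:]:
        score = jaccard(stated_priorities[r1], stated_priorities[r2])
        print(f"{r1:>4} <-> {r2:<4} alignment = {score:.2f}")

# Uniformly low pairwise scores mean the roles are not interpreting "AI
# strategy" as the same problem, a divergence that no enterprise-level
# adoption percentage reveals.
```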
This is the gap the ecosystem cannot see, not because the people in it are not intelligent, but because seeing it would require them to act against their own incentives. A VC who acknowledges that adoption is generational must restructure their investment thesis. A vendor that acknowledges that the constraint is organisational must admit their product cannot solve it alone. A consultancy that acknowledges the timescale must tell clients that the engagement will not produce transformation within the budget cycle. A media outlet that acknowledges the slow pace must sacrifice the urgency that sustains its business model. Each of these is a rational position to avoid.
The result is that the most important signal in AI adoption, whether the roles inside an organisation are converging toward shared conviction or diverging into misalignment, goes unmeasured, unreported, and unpriced. Capital is deployed against a diagnosis that mistakes a coordination problem for a technology problem. Billions flow into tools, platforms, and pilots while the organisational dynamics that determine whether any of it succeeds operate in the background, unexamined.
History suggests this will eventually self-correct. The ERP era produced a similar pattern: vendors oversold timelines, consultancies scoped engagements too narrowly, and organisations spent a decade learning that the technology was the easy part. The correction came slowly, driven by accumulated evidence that could no longer be ignored. AI adoption is following the same trajectory, with the same incentive structures producing the same blind spots, and the same eventual reckoning waiting at the end.
The question is not whether the correction will come. It is whether leaders will wait for the market to catch up with reality, or whether they will seek out the signal the ecosystem is not structured to provide. The information exists. The coordination dynamics inside organisations (role-level confidence, cross-functional alignment, adoption sequencing, consensus formation) can be measured through instruments designed for exactly this purpose, tracked over time, and used to inform better sequencing, alignment, and timing decisions.¹⁰ The reason this intelligence has not existed until now is not that it is impossible to produce. It is that no one in the current ecosystem had the incentive to produce it.
The incentive problem is not the technology’s fault. It is not the organisation’s fault. It is a structural feature of how the AI ecosystem is built, and it will persist until someone outside that structure measures what everyone inside it is paid to overlook.
This essay is part of Quaie’s Founding Essay Series, examining how organisations decide to adopt AI role by role, over time.
Notes and Sources
¹ 95 per cent of enterprise generative AI pilots failing to deliver measurable returns: Reported across multiple analyst sources, 2024–2025. Gartner predicted at least 30 per cent of generative AI projects would be abandoned after proof of concept by end of 2025 (Gartner Data & Analytics Summit, Sydney, July 2024). Two-thirds of organisations stuck in pilot or experimentation stage: corroborated across McKinsey, BCG, and Deloitte survey data, 2024–2025.
² Consultancy engagement timescales and AI transformation: BCG’s own research (AI Radar 2025, January 2025) found that only 25 per cent of organisations reported significant value from AI despite 75 per cent ranking it as a top-three priority. The gap between stated priority and demonstrated value is the structural challenge that engagement-length scoping cannot resolve.
³ Annual AI adoption surveys: McKinsey Global Survey on AI (2024, 2025 editions), BCG AI Radar 2025 (1,803 C-level executives, 19 markets), Deloitte State of AI in the Enterprise (2026 edition, 3,235 leaders, 24 countries). Each aggregates at the enterprise or sector level. None disaggregates by executive role within the enterprise. None tracks the same respondents across consecutive periods.
⁴ BCG AI adoption composition: Boston Consulting Group, “From Potential to Profit: Closing the AI Impact Gap” (AI Radar 2025), January 2025, and related BCG publications citing approximately 70 per cent of AI challenges stemming from people, processes, and cultural change.
⁵ OpenAI enterprise research: OpenAI’s enterprise deployment findings, reported 2024–2025. OpenAI’s enterprise team has publicly stated that the primary barriers to enterprise AI value are organisational, not technical.
⁶ S&P Global Market Intelligence: 42 per cent of companies abandoned most AI initiatives in 2025, more than double the previous year. S&P Global Market Intelligence, 451 Research survey, published 2025.
⁷ IDC prototype-to-production ratio: For every 33 AI prototypes built, approximately 4 reached production deployment. IDC research findings, cited across industry reporting, 2024–2025.
⁸ Goldman Sachs internal contradiction: Goldman Sachs technology division under CIO Marco Argenti: deliberate AI infrastructure development with zero production generative AI use cases nearly a year after ChatGPT launch, reported in Financial Times, Bloomberg, and Goldman technology division communications, 2023–2025. Goldman Sachs Global Investment Research report “Gen AI: Too much spend, too little benefit?” published June 2024. The coexistence of cautious operational development and publicly sceptical macro research within the same institution illustrates the incentive problem at the institutional level.
⁹ Quaie Q1 2026 fieldwork: Confidence, preparedness, adoption stage, and perceived blockers are being measured across ten executive roles (CEO, CTO/CIO, COO, CFO, CMO, CRO, CDO, CISO, CHRO, CLO). The anticipated blocker distribution (ROI uncertainty concentrating among CEO and CMO roles, integration complexity among CTOs, evidentiary thresholds among CFOs, workforce readiness among CHROs, and regulatory exposure among CLOs) is consistent with BCG AI Radar 2025, which identified people, processes, and cultural change as the dominant AI challenge, and with McKinsey Global Survey on AI (2024), which found trust and explainability to be primary barriers among non-technical leadership roles. Full methodology at quaie.io.
¹⁰ Quaie’s six analytical constructs (the Role Shift Index, Role Lead-Lag Ranking, Organisational Adoption Gradient, Consensus Formation Time, Role Alignment Map, and Role Influence Index) are designed to measure the coordination dynamics the current ecosystem is not structured to capture. Described in full in the forthcoming book The Role Layer: The Missing Intelligence in Enterprise AI Adoption (Quaie Ltd, 2026).