The numbers sit next to each other and demand an explanation.
Seventy percent of digital transformation projects fail to meet their objectives. That figure has been consistent across McKinsey, BCG, and Gartner research for a decade, and it is the reason entire consulting practices were built around change management, governance, and organisational readiness. Executives absorbed the lesson. They learned, eventually and expensively, that a technology rollout without a people strategy is not a transformation. It is an installation.
Now look at what is happening with AI. MIT's NANDA Initiative analysed 300 public AI deployments, surveyed hundreds of leaders, and concluded that 95% of enterprise AI pilots fail to deliver measurable business impact. Despite $30 to $40 billion invested across enterprise generative AI, the vast majority of organisations are seeing zero return on that investment.
The question worth sitting with is not why AI pilots fail. That much is documented. The question is why they fail at a rate so dramatically worse than the technology wave that came before, and whether the answer has been hiding in plain sight.
Digital transformation, for all its failures, produced a body of hard-won organisational learning. The failures were expensive enough and public enough that the lessons became mainstream. By the mid-2010s, even organisations with poor track records in technology delivery understood that a CRM rollout was not an IT project. It was a behavioural change programme that happened to involve software. You needed executive sponsorship visible beyond the kick-off slide. You needed line managers owning adoption, not just an IT team owning deployment. You needed change management running alongside technical delivery, not appended as a training programme after the system went live.
These lessons were hard to learn and, once learned, were genuinely applied. Companies built internal change management competencies. Consulting firms built large practices around it. Project governance frameworks evolved to require organisational readiness assessments alongside technical readiness. The 70% failure rate reflected the organisations that had not yet internalised this learning.
The 95% failure rate for AI pilots suggests that something happened during the handover from digital transformation to AI. The lesson did not transfer.
There is a plausible explanation, and it has to do with how AI entered the enterprise.
Digital transformation arrived as a visible, organisation-wide mandate. There was a CTO, a steering committee, a business case, a programme board. Everyone understood that the company was changing how it operated. The stakes were legible across the organisation, which made the human change dimensions legible as well.
AI arrived differently for most companies. It arrived as a tool. ChatGPT gave individuals access to something powerful in a conversational interface. Microsoft embedded Copilot into products people already used. AWS, Google, and Azure started packaging AI capabilities as services a development team could call from existing infrastructure. The entry point was a developer, a data scientist, or an enthusiastic middle manager running an experiment, not a transformation programme with governance and accountability.
Prosci's change management research, based on a study of over a thousand professionals, found that 63% of AI transformation failures trace to human factors rather than technical issues. The technology, in most failed pilots, works. What fails is everything surrounding it.
The patterns are remarkably consistent across industries. A pilot is scoped around a technically interesting problem. A small team, usually from data science or a central AI lab, builds something that works in a controlled environment. There is a demo. The demo impresses. There is approval to scale.
Then scaling begins, and the problems emerge, not in the model but in everything around it: the workflow owners who were never consulted, the accountability that was never assigned, the behaviour change that was never planned.
Digital transformation's 70% failure rate was achieved when companies had not yet developed the organisational muscle for managing technology-enabled change. The lesson was learned because the failures were impossible to hide. Billion-dollar ERP implementations that went live and paralysed operations. CRM rollouts that nobody used. E-commerce platforms that launched and immediately became abandonment crises.
Those failures were large enough, visible enough, and strategically consequential enough that boards, executives, and programme leaders could not attribute them to technology alone. The people dimension became undeniable.
AI pilot failures are often invisible. A pilot that does not reach production is not a visible crisis. It is a quiet budget line that gets absorbed into next year's plan. A team that built something technically impressive but organisationally unusable moves on to the next pilot. The 95% failure rate is made up of thousands of these invisible outcomes, distributed across functions and geographies, each one easy enough to rationalise individually.
This is why the lesson has not transferred. The feedback loop that forced digital transformation leaders to confront the human dimensions of their failures is weaker with AI. The failures are quieter. The pilots are smaller. The sunk costs are diffuse enough that nobody takes full accountability.
Successful AI implementations follow a resource split of roughly 10% on algorithms, 20% on infrastructure, and 70% on people and process. The typical failed pilot inverts that almost exactly.
The companies escaping what McKinsey has termed pilot purgatory — the state of endless promising experiments that never compound into business value — are doing something structurally different from the 95%.
The question is not which AI capability is most impressive. It is which specific business process, if redesigned around AI, would produce a measurable outcome that matters. This distinction sounds simple. In practice, it is the hardest discipline to maintain when vendors are demonstrating capabilities and competitors are announcing pilots and boards are asking whether the company is moving fast enough.
The organisations that successfully scale pilots have thought through the accountability structure before the system goes live, not after the first production incident. Who owns the outcome? Who handles escalations when the AI gets it wrong? How is performance measured? What triggers a pause? These questions need answers in advance, not in the aftermath.
The most technically elegant AI system that nobody uses is worth precisely nothing. The organisations that succeed are investing in understanding why people would or would not change their behaviour, and designing the intervention around that understanding. Line managers owning adoption, not just an AI lab owning deployment, is the consistent differentiator in the data.
Fortune's analysis from earlier this month named the dynamic plainly. Organisations are running 30, 50, sometimes hundreds of AI pilots simultaneously, scattered across functions, owned by individual enthusiasts, generating activity that looks like progress from a distance but is not compounding into enterprise capability. The companies doing this are spending real budget and generating real pilot fatigue: the erosion of trust that happens when the workforce watches another initiative fail to change anything.
The window to build something durable is not unlimited. The organisations that are in production, the 5%, are accumulating advantages that are structural: trained models that know their workflows, governance frameworks that are tested rather than theoretical, workforces that have genuinely changed how they operate. Those advantages widen every quarter.
For the organisations still in pilot purgatory, the bottleneck is not the technology. It never was. It is the same bottleneck that derailed digital transformation for a decade: the belief that deploying a system and transforming an organisation are the same exercise. They are not.
Digital transformation taught that lesson the hard way. AI is teaching it again, at three times the cost of the previous lesson, and considerably faster.
We help organisations move AI from promising experiments to production systems that actually change how work gets done.
Book a Scoping Call