Why Too Many Projects Fail and What Better Measurement Could Change
- James Garner
A conversation with Doug Hubbard and Andreas Leed on measurement, risk, and the decisions that determine project success.

The Statistic Nobody Wants to Discuss
Only one in every two hundred projects finishes on time, stays within budget, and delivers the benefits outlined in the original business case. Yet the project management profession essentially treats this figure as inevitable. Rather than questioning the fundamental assumptions that produce this outcome, organisations accept it as the cost of doing business.
This alone warrants serious attention. But there is something more troubling beneath this statistic: the authors argue that this abysmal success rate is not due to poor project management. It is the result of bad project selection.
This is the starting point for Douglas Hubbard and Andreas Leed's recently published book, How to Measure Anything in Project Management. Their central claim is direct: if we measured uncertainty more accurately, we would choose different projects and achieve better results. The measurement problem is fundamentally a selection problem.
Two Experts Converging from Different Angles
Doug Hubbard, owner of Hubbard Decision Research, has spent 37 years wrestling with the quantification problem across multiple disciplines. Starting at Coopers & Lybrand (later PricewaterhouseCoopers), he refined his approach to measurement and decision analysis through four previous books. His foundational principle is straightforward: if something matters, it can be measured. That principle underpins everything that follows.
Andreas Leed, Head of Data Science at Oxford Global Projects, approaches the same problem from a markedly different starting point. With a background in political science and statistics, he has spent the past eight years there extracting signal from noise in historical project data. His focus is practical: how long will this project actually take, and what drives success versus failure? The patterns he has uncovered in thousands of real projects suggest that conventional wisdom about project management is frequently incomplete at best and systematically wrong at worst.
What makes their collaboration compelling is that both arrived at similar conclusions from opposite starting points. When they found one another, the realisation was immediate: the measurement problem and the project problem were fundamentally the same.
Measuring the Consequences Instead of the Drivers
A fundamental mismeasurement runs through most project management practices. Organisations obsess over what they can easily track: cost, schedule, and quality metrics. These are straightforward to measure. But they are also consequences, not causes. They tell you what happened, not why it happened or whether it was worth doing in the first place.
This connects directly to the iron triangle framework that dominates project management education. Time, cost, and quality are presented as the primary measures of success. Yet this misses something crucial: whether the project was worth doing at all. Benefits drive that calculation. But benefits are uncertain, fuzzy, and difficult to quantify. So they get ignored in favour of the metrics that appear more concrete.
In this conversation, the authors introduce what they call the "measurement inversion problem". Organisations measure the variables with the lowest information value (cost, because it is easy) whilst ignoring the variables with the highest uncertainty and highest impact (benefits). The result: decisions are made on incomplete and backwards-looking information.
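As a rough illustration of the inversion, consider a project whose approval hinges on expected net benefit. A minimal Monte Carlo sketch, using invented figures rather than anything from the book, can estimate how much a perfect measurement of each variable would be worth to the decision:

```python
# Illustrative information-value comparison. All figures and distributions are
# made up for the sketch; only the asymmetry between the two answers is the point.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

cost = rng.normal(10.0, 1.0, n)                  # easy to estimate: narrow range (in £m)
benefits = rng.lognormal(np.log(12.0), 0.6, n)   # fuzzy: wide range (in £m)
npv = benefits - cost

# Decision today: approve if expected NPV is positive; rejecting pays zero.
baseline = max(npv.mean(), 0.0)

# Expected value of perfect information about one variable: learn that variable,
# hold the other at its mean, and make the best call for each possible value.
evpi_benefits = np.maximum(benefits - cost.mean(), 0.0).mean() - baseline
evpi_cost = np.maximum(benefits.mean() - cost, 0.0).mean() - baseline

print(f"Value of pinning down benefits: ~£{evpi_benefits:.2f}m")
print(f"Value of pinning down cost:     ~£{evpi_cost:.2f}m")
```

In this toy setup, pinning down the fuzzy benefits is worth a material sum, while extra precision on the already well-understood cost is worth almost nothing, which is exactly the pattern the authors say organisations get backwards.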
Why 80 Per Cent of Approved Projects Should Never Have Been Approved
Perhaps the most damning finding in the book emerges from a straightforward analysis. Hubbard and Leed compared actual project portfolios to extensive research on decision-makers' stated risk tolerance. They quantified what decision-makers claim they are willing to risk, then applied that literal standard to the projects those same decision-makers had actually approved.
The result: approximately 80 per cent of the projects approved would have been rejected if their stated risk tolerance had been applied consistently.
This does not indicate a crisis of project delivery. It suggests a crisis of project selection. Organisations are systematically saying yes to initiatives with marginal returns that do not align with their actual risk appetite. The counterintuitive implication: organisations should approve fewer projects, not more. Rather than filling the pipeline with mediocre opportunities, wait for the bigger, more rewarding ones.
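To see how such a check might work mechanically, here is a hedged sketch: a single stated tolerance (no more than a 10 per cent chance of a net loss worse than £2m) applied uniformly to a handful of hypothetical candidate projects. The tolerance, the candidates, and their numbers are all invented; they are not the portfolios the authors analysed.

```python
# Sketch: apply one stated risk tolerance to every candidate project.
# The tolerance and all project figures below are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
N = 50_000

MAX_PROB = 0.10      # "no more than a 10% chance..."
LOSS_LIMIT = -2.0    # "...of a net loss worse than £2m"

# Hypothetical candidates: (name, median benefit £m, benefit sigma, mean cost £m, cost sd)
candidates = [
    ("Process automation", 8.0, 0.25, 5.0, 0.8),
    ("CRM replacement",    6.0, 0.70, 5.0, 1.0),
    ("ERP upgrade",        9.0, 0.90, 8.5, 1.5),
    ("New product line",  25.0, 0.80, 16.0, 3.0),
]

for name, med_b, sig_b, mu_c, sd_c in candidates:
    npv = rng.lognormal(np.log(med_b), sig_b, N) - rng.normal(mu_c, sd_c, N)
    p_bad = (npv < LOSS_LIMIT).mean()
    verdict = "approve" if p_bad <= MAX_PROB else "reject"
    print(f"{name:18s} P(loss worse than £2m) = {p_bad:5.1%} -> {verdict}")
```

Applied to a portfolio that was approved on gut feel, a filter of this kind is the sort of test the authors suggest most approved projects would fail.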
Projects Fail Long Before They're Cancelled
Pharmaceutical companies have largely mastered a principle that other industries have not: fail early. Projects are cancelled regularly across all sectors, but they are almost invariably cancelled long after they should have been. The signal that something is wrong appears well in advance, yet the project limps along until accumulated losses make cancellation unavoidable.
This failure is not tactical. It is strategic. Most project governance systems leave critical questions unanswered: What metrics actually matter? At what threshold should intervention occur? Under what circumstances would we actually stop? When these questions remain unanswered, projects fall victim to the sunk cost fallacy and organisational inertia.
The solution requires defining a set of quantitative intervention options in advance. Not ad hoc reactions to emergencies, but carefully designed decision rules specified during planning. Daily monitoring of high-information-value metrics feeds these decision rules. This transforms project governance from reactive crisis management into proactive option execution.
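In code terms, a pre-committed rule set is little more than a table agreed at planning time. The sketch below is only a shape, with hypothetical metric names, thresholds, and options rather than anything prescribed in the book:

```python
# Minimal sketch of pre-committed intervention rules. Metric names, thresholds,
# and options are hypothetical placeholders, fixed at planning time.
from dataclasses import dataclass

@dataclass
class Rule:
    metric: str        # which monitored quantity the rule watches
    threshold: float   # level agreed in advance
    option: str        # the pre-designed response if the threshold is crossed

RULES = [
    Rule("p_finish_within_budget", 0.50, "descope phase 2"),
    Rule("p_finish_within_budget", 0.25, "pause and re-baseline"),
    Rule("forecast_benefit_cost_ratio", 1.00, "recommend cancellation"),
]

def triggered_options(daily_metrics: dict[str, float]) -> list[str]:
    """Return every pre-agreed option whose trigger has been crossed today."""
    return [r.option for r in RULES
            if daily_metrics.get(r.metric, float("inf")) < r.threshold]

# Example daily reading from the monitoring feed (values invented).
today = {"p_finish_within_budget": 0.42, "forecast_benefit_cost_ratio": 1.3}
print(triggered_options(today))   # -> ['descope phase 2']
```

The point is not the code but the commitment: the thresholds and responses exist before the crisis, so crossing one triggers a prepared option rather than an argument.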
The Measurement Tools That Create False Confidence
Risk matrices appear in virtually every project governance framework. You know them: high, medium, low severity paired with high, medium, low probability. Colour coded red, yellow, green. They are ubiquitous, intuitive, and according to Hubbard, fundamentally misleading.
Illusion of precision without insight
A high, medium, or low assessment is not mathematics. It is an ordinal scale that creates the appearance of rigour without delivering genuine information. The moment you have identified a risk, you can express it as an actual probability and an impact range instead.
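For instance, a risk tagged "medium likelihood, high impact" might instead be stated as a 10 per cent annual chance with an impact somewhere between £1m and £8m (a 90 per cent confidence interval). The figures here are invented, and the lognormal fit to the interval is just one common calibration choice, but the sketch shows how little extra work the quantitative version demands:

```python
# Sketch: the same risk as a probability and an impact range instead of "high/medium".
# The 10% chance and the £1m-£8m 90% interval are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

p_event = 0.10          # chance the risk materialises this year
low, high = 1.0, 8.0    # 90% confidence interval for the impact, £m

# Fit a lognormal to the 90% interval: it spans roughly +/-1.645 standard deviations.
mu = (np.log(low) + np.log(high)) / 2
sigma = (np.log(high) - np.log(low)) / (2 * 1.645)

occurs = rng.random(N) < p_event
loss = np.where(occurs, rng.lognormal(mu, sigma, N), 0.0)

print(f"Expected annual loss: £{loss.mean():.2f}m")
print(f"1-in-20-year loss:    £{np.percentile(loss, 95):.2f}m")
```

Unlike a red cell on a matrix, those two outputs can be added across risks, compared against a stated tolerance, and fed into the decision rules described earlier.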
Acceleration of bad answers
When you digitise a faulty process, you do not fix the process; you merely accelerate the production of bad answers. Software vendors, standards bodies, and certification frameworks continue to perpetuate these pseudomathematical approaches because they are quick to implement and intuitively satisfying.
Calls for industry change
The authors explicitly call for standards and vendors to stop promoting these tools. The conversation even touches on why project management certifications apparently confer no measurable benefit on project outcomes.
When Ecosystem Costs Are Invisible, Risk Gets Mispriced
Infrastructure projects reveal a particular blindness in how we account for costs. When a road project runs behind schedule, the costs extend beyond labour and materials. Traffic congestion persists longer. Neighbouring businesses suffer disruption. Yet these ecosystem impacts do not appear on the project manager's ledger. They exist outside the formal accountability boundary.
This accounting invisibility creates systematic distortion in risk assessment. Projects look more attractive because their actual costs are hidden. The solution requires that infrastructure decisions incorporate a complete economic analysis of ecosystem impacts, not merely direct project costs. Moreover, such projects should carry higher required returns precisely because their true risks are so substantial.
Committing Too Soon to Technology Can Be More Expensive Than Waiting
The authors introduce the concept of "technology regret": the expensive realisation months after commitment that waiting a bit longer would have yielded a dramatically better solution. For projects with multidecade lifespans and uncertain technology trajectories, this becomes a genuine decision problem.
The challenge is optimisation. You cannot wait indefinitely without forfeiting the benefits that current technology delivers. But neither can you commit too early without risking obsolescence. One practical approach: structure large infrastructure programmes as independently justifiable phases. If technology advances, you have already captured benefits from what has been built.
AI, Project Rehearsals, and the Work Humans Should Keep
Towards the end of the conversation, the discussion turns to artificial intelligence and its implications for workforce development. This is where Hubbard articulates a nuance often absent from technology discussions: not everything should be automated, even if it could be.
Cybersecurity serves as a prime example. AI will be essential for defending against AI-powered threats. But cybersecurity decision-making itself, the human judgement protecting critical infrastructure, may be work that societies actively want to keep under human control. The same applies to AI safety, which is emerging as a significant career category not because organisations want to automate it, but because the risks demand dedicated human oversight.
The authors also discuss AI's potential in project planning itself: building high-resolution digital twins of projects and running thousands of simulations to test intervention options before projects begin. Which intervention strategies deliver the highest returns? Where should decision policies differ? Even scope changes can be tested quantitatively.
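A full digital twin is far richer than anything that fits here, but the underlying move can be caricatured in a few lines: simulate the same project many times, with and without a candidate decision policy, and compare expected outcomes. Every number below (budget, benefit, uncertainty, the forecast gate, the kill threshold) is invented for the sketch:

```python
# Very simplified stand-in for "rehearsing" a project in simulation: the same toy
# cost model run with and without a pre-agreed cancellation rule. All figures invented.
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
BUDGET, BENEFIT = 100.0, 130.0                       # toy figures, £m

final_cost = BUDGET * rng.lognormal(0.0, 0.5, N)     # true outturn cost
forecast = final_cost * rng.lognormal(0.0, 0.2, N)   # noisy forecast at the 25% gate

def expected_value(cancel_if_forecast_above=None):
    if cancel_if_forecast_above is None:
        return np.mean(BENEFIT - final_cost)
    cancelled = forecast > cancel_if_forecast_above
    outcome = np.where(cancelled,
                       -0.25 * final_cost,            # walk away, write off spend to date
                       BENEFIT - final_cost)          # push on to completion
    return np.mean(outcome)

print(f"No kill rule:               £{expected_value():.1f}m per project")
print(f"Cancel if forecast > £170m: £{expected_value(170.0):.1f}m per project")
```

The interesting output is not the absolute figures but the difference between the two lines, which is precisely what a rehearsal of intervention options is meant to estimate before any money is committed.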
Why This Matters Now
Project professionals operate in an environment of accelerating uncertainty. Costs remain volatile. Technology landscapes shift mid-project. Supply chains face geopolitical shocks. Stakeholder expectations remain fundamentally misaligned with reality. Yet the orthodox tools of project management feel increasingly insufficient. Risk matrices provide false comfort. Gantt charts create an illusion of predictability that actual events routinely demolish.
The authors argue that the problem is not insufficient process but inadequate thinking. Better thinking requires a clear understanding of what you are trying to achieve. It demands quantifying your risk tolerance and applying it consistently. And it requires building decision gates in advance, with clear metrics that trigger defined intervention options.
These are cultural changes, not technical ones. They require confronting uncomfortable truths about how projects are selected and decisions are made.
What You'll Discover in the Full Episode
This blog covers the primary arguments, but it represents only the beginning of what Doug and Andreas discuss. The full episode delves into substantially more:
- the specific quantitative methods that distinguish realistic project estimation from aspiration;
- how reference class forecasting is being complemented by AI pattern matching across billions of data points;
- why organisations consistently misapply fundamental statistical concepts;
- the mathematics of information value and why your intuitions about data sufficiency are almost certainly wrong;
- the psychological phenomenon known as the Hawthorne effect and how human behaviour shifts when measurement occurs;
- Goodhart's Law and the critical distinction between measurement and incentives;
- and the emerging concept of "meta projects" and how organisations should prioritise improving how they improve projects.
The conversation also ventures into more provocative territory. There is a candid discussion about AI safety, probability of doom scenarios, and whether social upheaval from AI adoption could occur within a two-year window. There is a discussion of workforce implications that extends far beyond simple automation rhetoric. And there is a thoughtful exploration of what kinds of work humans might want to preserve or create, even if machines could do them more efficiently.
If you work in project management, if you are tasked with evaluating which initiatives to fund in an environment of constrained resources, or if you are curious about how two people with decades of experience in measurement and decision-making think about the future of work and technology, this episode deserves your time. It is available on Project Flux, and we believe you will find it worth your full attention.
