
When Everything Becomes Urgent, Nothing Gets Done: Why Your AI Strategy Needs a Hierarchy

  • Writer: James Garner
  • 1 day ago
  • 9 min read

Project teams are burning out trying to learn everything and implement AI everywhere at once. But the real problem isn't the pace of technology; it's the lack of strategic clarity from leadership.


Imagine you're running a project team. It's November 2025. Last week, your CEO shared an article about how AI is transforming project management. This week, your team is asking about implementing new agent-based tools for scheduling.


Next week, there will be something else. Meanwhile, your people are exhausted. They're trying to keep up with every announcement, every new capability, every potential application. Some are taking courses. Others are experimenting in their spare time. Nobody quite knows which skills matter for your specific challenges. And ROI? Nobody's measuring that either.


The Burnout Cycle Nobody Talks About


This isn't an exaggeration. Project delivery professionals across the UK and beyond are reporting the same phenomenon: a creeping sense of overwhelm that comes not from doing their actual job, but from trying to stay current with AI developments that may never directly impact their work. They attend training courses. They read white papers. They experiment with tools in their lunch breaks. And yet, they feel no closer to understanding what they should actually be doing.


Making investment decisions without this clarity exacerbates the problem. Companies are spending money on training programmes without creating the strategic context for that training to matter. They're licensing new tools without understanding what business outcomes they should drive. They're asking teams to upskill in areas that sound important but aren't connected to any real problem they're trying to solve. The result is expensive burnout. Money spent. Time invested. Energy depleted. And still, the core challenge remains unsolved.


Dr. Raoul-Gabriel Urma, founder and group CEO of Cambridge Spark, articulated this tension during a recent conversation about AI education. His background is revealing. Before founding Cambridge Spark, he completed a PhD in compiler research and source code analysis. He has taught computer science. He has built companies. He understands both the technical depth of AI and the human reality of trying to navigate rapid change.


When asked what keeps him awake at night, he was disarmingly honest: the pace itself. "The speed, the pace, the pace of development, what's happening in the industry. What keeps me up at night is, my god, like what else is coming up that you know we all need to learn about and figure out and adopt, right?"


The False Narrative Around Skills Gaps


Here is where the conversation typically goes astray. The technology industry, including training providers, has latched onto the "AI skills gap" as both a diagnosis and a solution. There's a gap in your team's knowledge, the narrative goes, so you need training. That training will close the gap. Then your team will be ready for AI transformation. It's a neat story. It's also dangerously incomplete.


Dr. Raoul-Gabriel Urma pushes back gently but firmly against this framing. "I don't feel like we sell training, right? Like we help organisations and people succeed in this world, right?" The distinction matters. Success isn't measured in course completions or certifications earned. Success is measured in business outcomes delivered. "Success to me is not education. It's like an outcome. It's actually so I can deliver on some business outcome from my employer so I can deliver my personal career satisfaction."


This is the insight that reframes everything. You could train every member of your team in prompt engineering, Python, and the latest AI frameworks. But if leadership hasn't articulated what problem those skills are meant to solve, you've simply created a more exhausted workforce. They now know more but feel no clearer about what matters.


Strategy First, Then Capabilities, Then Tools


So what does clarity actually look like? Dr. Raoul-Gabriel Urma presents a deceptively simple framework: the value stick. It cuts through the noise by forcing one essential question: how is AI going to create value for your specific organisation? There are only two answers. Either you make things more cost-efficient and capture that efficiency as profit, or you delight customers and increase their willingness to pay. That's it. Everything else is noise.


"I like to think about value creation using the value stick framework because it's so simple and really helps understanding," he explains. "If you've got like a stick, you create value by either making things more cost-efficient, and you capture the value of the efficiency, or you delight customers, and by delighting customers, you increase their willingness to pay."


This framework is already visible in practice, though few organisations acknowledge it explicitly. Consider two airlines.


Ryanair has chosen the cost-efficiency path. Their entire business model, technology investments, and training priorities all align around being the cost leader in aviation. Every decision gets filtered through that lens.


Virgin Atlantic has chosen the customer delight path. Their investments go toward personalised experiences, customer care, differentiation. It's a different strategy.


Both are valid. However, you cannot pursue both strategies at the same time and expect to excel in either. The same logic applies to your project delivery organisation. Are you competing on cost or differentiation? That question must be answered first. Everything else follows from it. If you're competing on cost, your AI investments should focus on automation, efficiency, and measurable hour savings. If you're competing on differentiation, your investments should focus on enhancing capability, improving outcomes, and delighting clients in ways competitors cannot replicate. This clarity immediately eliminates the majority of the decisions you thought you needed to make.


Vertical Productivity Over Horizontal Distraction


Once you know your strategy, the next distinction becomes critical. Dr. Raoul-Gabriel Urma divides productivity improvements into two types, and the difference explains why so many AI initiatives fail to deliver ROI.


  • Horizontal productivity sounds good but is nearly impossible to value. When you and a colleague use AI to summarise a document or draft an email faster, the work gets done slightly quicker. But that freed-up time doesn't automatically convert to measurable business value. You're still in your role, still doing similar work. What happens to those saved minutes? Nobody tracks it. It simply disappears.


  • Vertical productivity is fundamentally different. It's when a software engineer uses AI to eliminate writing unit tests manually, creating genuine space to spend more time with customers understanding their needs. Or when a quantity surveyor uses AI to generate preliminary cost estimates, freeing capacity for risk analysis and client strategy. The hours saved directly convert into higher-value activities.


"In a verticalised point of view," Urma explains, "say like a software engineer, you know, I no longer need to write a unit test manually. I no longer need to, like, create a boilerplate for, like, a little CRUD system, you know, around a database. That is done, right? As a result, you become more productive, and I can clearly associate the hours of software engineering time that I am no longer spending on this task." This is where ROI becomes visible. This is where burnout decreases, because people stop doing tasks they find draining and start doing work that energises them.
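The kind of boilerplate he describes is familiar to any developer. As a hypothetical sketch (the table name `tasks` and the helper names here are invented for illustration, not taken from the episode), a minimal create/read layer around a database looks something like this:

```python
# Hypothetical sketch of "a little CRUD system around a database":
# the repetitive scaffolding that AI tooling now generates in seconds.
import sqlite3


def make_db(path=":memory:"):
    """Open a SQLite database and ensure the tasks table exists."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks (id INTEGER PRIMARY KEY, name TEXT)"
    )
    return conn


def create_task(conn, name):
    """Insert a task and return its new row id."""
    cur = conn.execute("INSERT INTO tasks (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid


def read_task(conn, task_id):
    """Return a task's name, or None if the id doesn't exist."""
    row = conn.execute(
        "SELECT name FROM tasks WHERE id = ?", (task_id,)
    ).fetchone()
    return row[0] if row else None


conn = make_db()
tid = create_task(conn, "Draft schedule")
print(read_task(conn, tid))  # Draft schedule
```

Writing this by hand is precisely the low-value, countable work whose saved hours can be redirected into higher-value activities.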


The Leadership Problem That No Amount of Training Solves


Here's what keeps project delivery leaders awake: training investments fail not because the training is poor, but because leadership hasn't created the conditions for that training to matter. Teams come back from courses excited about what they've learnt. Then they return to organisations with no clear strategy, no experimentation space, and no permission to prioritise learning application over immediate delivery. The trained person becomes frustrated. The training becomes another item on a growing list of things that didn't work out.


Dr. Raoul-Gabriel Urma's advice on this is controversial but important. He argues that the most successful CEOs and leaders are those willing to get deeply involved in operational detail while also maintaining strategic altitude. They don't delegate entirely to specialists and retreat to the boardroom. They get close to the work. They understand what their teams are actually doing. They go on sales calls. They review technical decisions. They get their hands dirty. This isn't micromanagement. It's leadership rooted in reality rather than assumption.


"The CEOs I've seen the most successful, including on the podcast, are the ones that are able to go super low-level detail. They go really deep in the company and then also go super high level. This ability to do that, go deep, go high, go fast and slow down when you need to. I think that's an underestimated ability." For project leaders navigating AI adoption, this translates directly. You need to understand what tools your teams are actually using. You need to know what problems they're trying to solve. You need to see where the real friction points are. Only then can you make intelligent decisions about training, tools, and technology investment.


Fundamentals Versus Features: The Balance Nobody Gets Right


The final piece of the puzzle is perhaps the most counterintuitive. Even as tools and platforms change weekly, investing in foundational knowledge becomes more important, not less. This seems backwards. But Urma's reasoning is sound. When AI generates code, the output is still Python or JavaScript or SQL. The underlying languages haven't changed. They've simply become more accessible. To evaluate whether AI-generated code is correct, you need to understand those fundamentals. To modify it. To debug it. To catch edge cases.
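As a hypothetical illustration of why those fundamentals matter (the function and the figures are invented for this article, not drawn from the episode), consider a plausible AI-generated helper that looks correct but hides an edge case a trained eye would catch:

```python
# A plausible AI-generated helper: correct for typical inputs,
# but it crashes with ZeroDivisionError when the list is empty.
def average_daily_spend(costs):
    """Return the mean of a list of daily cost figures."""
    return sum(costs) / len(costs)


# A reviewer who understands the language spots the failure mode
# and handles the empty-list case explicitly.
def average_daily_spend_safe(costs):
    """Mean of daily costs; returns 0.0 for an empty list."""
    return sum(costs) / len(costs) if costs else 0.0


print(average_daily_spend_safe([100.0, 250.0, 175.0]))  # 175.0
print(average_daily_spend_safe([]))                     # 0.0
```

Nothing about the fix requires knowing the latest tool; it requires knowing how the language behaves, which is exactly the long-lasting knowledge Urma argues for.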


The same principle applies across domains. Project managers need to understand scheduling theory and resource constraints. Quantity surveyors need to grasp fundamental cost drivers and risk principles. Estimators need to know what the baseline assumptions are before they rely on AI estimates. These aren't nice-to-haves. They're prerequisites for being able to think critically about what AI is telling you and whether it makes sense in your specific context.


But here's the balance: you cannot possibly learn every new tool as it emerges. You shouldn't try. What you should learn is adaptability. The skill to pick up a new platform quickly. The mindset to say 'let me experiment with this and see if it solves the problem I'm actually trying to solve.' Urma articulates this clearly: "There's like knowledge and skills are long lasting, and then there are like new features, new platforms where you have to pick up. So really the skill there is adaptability. You need that balance between teaching fundamentals that are long lasting and also enabling students to be adaptable and pick up the latest tools."


What This Means for Your Team Right Now


If your project team is experiencing burnout from trying to keep up with every AI development, the solution isn't accepting that this is just the nature of modern work. The solution is leadership clarity. Urma suggests a practical four-step approach:


  • Decide your strategy first. Cost leadership or differentiation? This single decision cascades through everything else, eliminating vast swathes of irrelevant options.


  • Identify one specific, high-impact use case where AI can help you execute that strategy. Don't try to implement everything. Pick something concrete where you can measure the outcome.


  • Invest in capabilities that support that use case. This might mean training. It might mean tools. It might mean both. But it's targeted, not scattered.


  • Get close to the work. Understand what's actually happening. Iterate based on reality, not projection.


This approach won't eliminate the overwhelm entirely. AI is genuinely moving fast. But it will transform that overwhelm from a source of paralysis into a source of opportunity. Your team will feel like they're learning things that matter. They'll experience ROI from their efforts. They'll have the energy to adapt when the next generation of tools arrives. They won't be operating in permanent crisis mode trying to chase everything at once.


Listen to the Full Conversation


This conversation with Dr. Raoul-Gabriel Urma goes far deeper than what's covered here. In the full episode, you'll discover insights that challenge conventional wisdom in ways you might not expect. He shares specific stories about what he encountered while teaching computer science, which directly informed how he built Cambridge Spark's approach to education and training. You'll hear his perspective on why some organisations successfully navigate AI implementation while others stumble despite having similar budgets and resources.


But there's more. Urma explores some provocative territory: the misconception that learning AI will actually free up your time (spoiler: it won't, but not for the reasons you think), the real risks of social engineering and bias in AI models affecting the next generation of workers, and a genuinely surprising take on what your favourite programming language reveals about the future of work itself.


He also discusses the darker elements of AI adoption and what organisations should be thinking about as its capabilities become increasingly accessible to anyone with an internet connection and curiosity.


Most importantly, you'll get a sense of how someone who has worked at the intersection of technology, education, and business leadership thinks about these challenges. It's refreshingly free of hype and rooted in practical experience.


If you're a project delivery professional struggling to figure out where AI fits into your practice, or if you're a leader trying to help your team navigate this landscape without burning them out, this episode will shift how you think about the problem. It won't give you all the answers; nobody has those yet. But it will give you a framework for asking better questions and making more strategic choices about what you actually need to learn and implement.


Tune in now to hear the full discussion and discover how to build an AI strategy that works for your team.



 
 
 
