
Mark Enzer on AI in Project Delivery: "If We Don't Define the Future, Someone Else Will"

  • Writer: James Garner
  • Jan 24
  • 6 min read

Updated: Jan 25

A Prime Minister's Council for Science and Technology member and an APM leader discuss why project professionals must act now to shape human-AI collaboration, or risk having that future shaped for them.


Forty senior project professionals recently gathered inside Windsor Castle to debate artificial intelligence. The setting was historic; the questions were urgent. But the conversation that emerged wasn't about automation timelines or efficiency metrics. It centred on something more fundamental: agency.


Mark Enzer, a member of the Prime Minister's Council for Science and Technology and Visiting Professor at Imperial College, joined Gavin Spencer from the Association for Project Management (APM) to unpack findings from a white paper titled "AI in the Boardroom: A Project Leadership Wake-Up Call."


The paper emerged from APM's Windsor Project Summit, an exclusive gathering that brings together leaders from across the project profession to debate critical industry challenges.


What followed was a candid exploration of where AI sits in relation to human judgement, and of who gets to decide.



The Risk of Receiving a Future You Didn't Choose

Enzer doesn't shy away from uncomfortable observations. He believes there's a genuine risk that the project profession could "sleepwalk into a future" where technology dictates terms rather than serving human needs.


"I think we need to be quite resolute and quite clear that we want tech to serve us, us humans, and that we're still the master," he explains. Without deliberate action, Enzer suggests the industry risks receiving a future shaped by external interests rather than professional expertise.


The concern is rooted in a practical observation. At the individual level, AI is already proving useful. People report genuine productivity benefits in their daily work. Yet these gains aren't translating consistently into organisational or industry-level returns.


Why the disconnect? Enzer points to a foundational gap: most organisations haven't defined what they're actually trying to achieve with AI. Without that clarity, adoption becomes scattered experimentation rather than coordinated transformation.


"If we don't have an opinion as to where we're going to get, we're not going to get there," he argues. "In the absence of any direction like that, then the future that we receive is the future which is given to us rather than the future that we make for ourselves."

A Practical Framework for Shaping That Future

Enzer advocates for a structured roadmapping exercise built around three horizons (a minimal code sketch follows the list):

  • Horizon 1: Understanding what near-term value you can release and what's achievable now

  • Horizon 2: Identifying the steps and capabilities needed to bridge from present to future

  • Horizon 3: Defining an imagined better future that guides all other decisions
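
To make the exercise more concrete, here is a minimal, illustrative Python sketch of how a team might capture a three-horizons roadmap as data. The initiative names, fields, and vision text are hypothetical, invented for this example rather than taken from the white paper.

```python
from dataclasses import dataclass, field

@dataclass
class Initiative:
    name: str
    horizon: int   # 1 = near-term value, 2 = bridging capability, 3 = guiding future state
    outcome: str   # the outcome this initiative serves

@dataclass
class Roadmap:
    vision: str    # Horizon 3: the imagined better future that guides everything else
    initiatives: list[Initiative] = field(default_factory=list)

    def by_horizon(self, horizon: int) -> list[Initiative]:
        """Return the initiatives for one horizon, so each can be checked against the vision."""
        return [i for i in self.initiatives if i.horizon == horizon]

# Hypothetical example: work backwards from the Horizon 3 vision.
roadmap = Roadmap(vision="Better decisions, faster, across the portfolio")
roadmap.initiatives += [
    Initiative("Automate progress reporting", horizon=1, outcome="Release near-term value"),
    Initiative("Standardise the project data model", horizon=2, outcome="Bridge present to future"),
]
print([i.name for i in roadmap.by_horizon(1)])
```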


This framing shifts AI adoption from a technology question to a strategic one.


The focus moves from "what can AI do?" to "what outcomes are we trying to achieve, and how might AI help us get there?"


Why Systems Thinking Matters More Than Ever

For Enzer, successful AI implementation connects directly to systems thinking. Projects, he points out, are themselves systems. Supply chains are systems. Organisations are systems of systems. Understanding these interconnections becomes far easier when anchored to a clear purpose.


"I think the thing that underpins it, the thing that starts to make sense of it all, is pulling towards a common, understood, shared outcome," he explains.


The complexity that makes systems thinking feel overwhelming to many practitioners often dissolves when you know what you're trying to achieve.


Enzer acknowledges that "people will say that systems thinking is very complicated and there are all these connections, and they're going to get spaghetti diagrams." His response: "If we simplify and just focus on the outcomes, then other things start to make sense."


The Case for Federation Over Centralisation

One of the more counterintuitive positions Enzer advances is his rejection of centralised approaches to data and technology. Rather than pursuing a single unified system, he argues for interoperability between multiple systems.


"I would stay away from that idea of having some kind of all-singing, all-dancing model of everything that's going to answer all our problems," he explains. "In the same way that we shouldn't be centralising all our data and having one big lake of everything."


His reasoning is pragmatic. The built environment industry is fragmented, but Enzer doesn't see this as a problem to solve. He views it as a reality to work with.


"Humans will always create silos and do their own little thing in their own little space," he observes.

The more effective approach is enabling connections between different systems rather than attempting to replace them with a single solution.


This federated architecture, he argues, applies not just across systems but across time: legacy data needs to interoperate with current systems, and current systems need to anticipate future ones.
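
As a rough illustration of what a federated approach can look like in practice, the sketch below queries several independent systems through a shared adapter interface and combines their answers at query time, rather than merging everything into one central lake. The system names, interface, and data are hypothetical; this is one way such a pattern might be expressed, not a description of any specific tool.

```python
from typing import Protocol

class ProjectDataSource(Protocol):
    """Minimal shared contract that each siloed system agrees to expose."""
    def fetch_milestones(self, project_id: str) -> list[dict]: ...

class SchedulingSystem:
    def fetch_milestones(self, project_id: str) -> list[dict]:
        return [{"source": "scheduling", "project": project_id, "milestone": "Design freeze"}]

class CostSystem:
    def fetch_milestones(self, project_id: str) -> list[dict]:
        return [{"source": "cost", "project": project_id, "milestone": "Budget baseline"}]

def federated_milestones(sources: list[ProjectDataSource], project_id: str) -> list[dict]:
    """Combine answers from every silo at query time; no central lake required."""
    results: list[dict] = []
    for source in sources:
        results.extend(source.fetch_milestones(project_id))
    return results

print(federated_milestones([SchedulingSystem(), CostSystem()], "P-001"))
```

The design choice mirrors Enzer's point: the silos stay where they are, and the value comes from the thin layer of interoperability between them.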


The Unsexy Work That Makes Everything Else Possible

The conversation takes a practical turn when addressing data quality. Enzer admits this isn't the exciting message everyone wants to hear.


"It's like doing the cleaning or the dusting in your house," he says. "The trouble is that if we don't do that, then we won't get the benefits of AI. It's just as simple as that."


The current state of organisational data, according to Enzer, often isn't fit for purpose. Data sits in silos. Meaning gets lost in translation between systems. Without proper curation, AI remains "hungry for data that is fit for use" but unable to deliver its potential value.


For organisations feeling overwhelmed by decades of accumulated data, Enzer offers direct advice: start now, start somewhere, and let specific decisions guide your priorities.


"What data do we need for specific decisions? Because fundamentally, an awful lot of what this comes down to is making better decisions faster," he explains. "And that relies on information."


Three Starting Points for Data Readiness

Based on the discussion, project leaders can take immediate action (the first point is sketched in code after the list):


  • Identify decision-critical data first. Rather than attempting to clean all historical data, focus on the information needed for your most important recurring decisions.

  • Establish standards for new data now. Even if legacy data remains messy, implementing quality standards today means you'll be in a stronger position in twelve months.

  • Address access and security alongside quality. Enzer emphasises that data access and data security are "big boring things" that nonetheless form the foundation for AI value.
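
As a hedged illustration of the first point, the sketch below validates only the fields that one specific recurring decision depends on, rather than attempting to clean everything at once. The decision name, required fields, and records are hypothetical, chosen purely to show the shape of the check.

```python
# Illustrative sketch: check only the data a specific decision needs.
# The decision, required fields, and records here are hypothetical.

REQUIRED_FIELDS = {
    "monthly_forecast_review": ["project_id", "spend_to_date", "forecast_at_completion"],
}

def readiness_report(decision: str, records: list[dict]) -> dict:
    """Count the records that carry every field the named decision relies on."""
    required = REQUIRED_FIELDS[decision]
    fit = sum(all(r.get(f) is not None for f in required) for r in records)
    return {"decision": decision, "fit_for_use": fit, "total": len(records)}

records = [
    {"project_id": "P-001", "spend_to_date": 1.2e6, "forecast_at_completion": 3.4e6},
    {"project_id": "P-002", "spend_to_date": None, "forecast_at_completion": 2.1e6},
]
print(readiness_report("monthly_forecast_review", records))
```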


Keeping Humans in Control

Throughout the discussion, both Enzer and Spencer return to a central theme: the irreplaceable role of human judgement.


Gavin Spencer envisions a future where "humans are working closely with the technology," equipped with the right skills to oversee AI decision-making and maintain control. The concern extends to accountability, ethics, and the kind of governance that technology alone cannot provide.


"You're always going to need a human to run a project," Spencer observes. "You're always going to need a human involved to have that human-to-human interaction."

The challenge lies in ensuring that human oversight remains meaningful as AI capabilities advance. Spencer emphasises the importance of "upskilling humans to be able to use not just the technology, but knowing what tool to use at that particular time."


Professional institutions, he suggests, have a role in ensuring AI adoption remains grounded in ethical principles. The work APM and similar bodies are doing to establish standards and guidance may prove as important as any technological development.


What This Means for Project Leaders

The white paper emerging from Windsor Castle arrives at a pivotal moment. Organisations across the built environment are experimenting with AI, often without coordinated strategy or clear success metrics.


What Enzer and Spencer offer is a framework for thinking about how to approach this transformation:


  • Start with outcomes, not technology

  • Embrace federation and interoperability over centralisation

  • Do the unglamorous work of sorting out your data

  • Maintain human agency and ethical grounding throughout


The future of AI in project delivery isn't predetermined. It will be shaped by the choices professionals make today about what kind of future they want to create.


Hear the Full Conversation

This article captures only a portion of a wide-ranging discussion.


The full podcast episode includes Enzer's thoughts on metamodernism as a philosophical framework for navigating technological change, his refreshingly honest admission about preferring book summaries to full texts, and Spencer's reflections on "The Four Agreements" as a guide for leadership in uncertain times.


You'll also hear their predictions for what the industry should look like by 2031, their views on humanoid robots entering construction, and a fascinating exchange about emergent behaviour in complex systems.


For project professionals thinking about how to position themselves and their organisations for what's coming, it's a conversation worth hearing in full. Catch the complete episode on the Project Flux podcast.


All content reflects our personal views and is not intended as professional advice or to represent any organisation.


 
 
 
