
Finally: The UK Gets Serious About Programme Data Standards

  • Writer: James Garner
  • 3 days ago
  • 9 min read

Updated: 2 days ago

A new government standard for project and programme data might be the most foundational breakthrough of 2025. It won't grab headlines, but it matters more than most AI announcements.


On 11 December 2025, while everyone was distracted by OpenAI's latest model release and Trump's AI executive order, the UK government quietly launched something that will actually improve project delivery: a new standard for programme and project data. No flashy demo. No viral moment. Just a methodical attempt to solve one of the most persistent problems in public sector delivery. Namely, that nobody can find, share, or compare project information because every department formats it differently.


If you've ever tried to aggregate programme data across government portfolios, you know the pain. Different naming conventions. Inconsistent categorisation. Incompatible formats. Missing fields. Duplicate records. Information is locked in departmental silos because nobody has agreed on common standards. The result is that portfolio-level visibility, cross-programme learning, and evidence-based decision making all become nearly impossible. Not because the data doesn't exist, but because nobody can use it.


The new standard won't fix everything overnight. But it's a massive step in the right direction, and it'll be interesting to see what the trial brings. More importantly, it signals that the government recognises data standards as foundational infrastructure rather than administrative overhead. That shift in perspective matters almost as much as the standard itself.




Why Data Standards Matter More Than You Think

The structural truth about project delivery is that data management determines what's possible. You can have brilliant project managers, excellent stakeholder engagement, and rigorous governance. But if you can't aggregate data across projects, you can't identify patterns. If you can't compare performance, you can't learn from success. If you can't track dependencies, you can't manage risk. Standards enable all of that.


Think about what happens without common data standards. Project A tracks milestones using dates. Project B uses RAG status. Project C doesn't track milestones at all. When someone asks for a portfolio-level view of milestone achievement, you can't provide it without manually translating data from each project into a standard format. That takes time, introduces errors, and usually doesn't happen.
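To make that manual translation concrete, here is a minimal Python sketch of what mapping each project's milestone data onto one common format involves. All field names and records are hypothetical, invented for the example:

```python
# Hypothetical exports from three projects that track milestones differently.
project_a = [{"milestone": "Go-live", "due": "2025-03-31", "done": "2025-04-14"}]
project_b = [{"name": "Go-live", "rag": "Amber"}]  # RAG status only, no dates
project_c = []                                     # milestones not tracked at all

def normalise(record: dict) -> dict:
    """Map one departmental record onto a common schema, flagging gaps."""
    return {
        "title": record.get("milestone") or record.get("name"),
        "due_date": record.get("due"),           # None when never recorded
        "completed": record.get("done"),         # None when never recorded
        "status": record.get("rag", "Unknown"),  # only some projects use RAG
    }

portfolio = [normalise(r) for r in project_a + project_b + project_c]
for row in portfolio:
    print(row)
```

Even this toy version shows the core problem: fields one project never recorded can only be flagged as missing, not recovered, and Project C contributes nothing at all to the portfolio view.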


Or consider risk management. Project teams identify risks using different categorisation schemes, record them at different granularities, and assess probability and impact using incompatible scales. Portfolio-level risk analysis becomes guesswork because you're comparing apples, oranges, and occasionally small rocks.
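As an illustration of the incompatible-scales problem, this Python sketch rescales three common probability conventions onto a shared 0 to 1 range. The scheme names and anchor values are invented for the example, not taken from any published standard:

```python
# Hypothetical: three projects score risk probability on incompatible scales.
# Rescaling onto 0.0-1.0 is what makes portfolio-level aggregation possible.

def to_unit_scale(value, scheme: str) -> float:
    """Convert a probability score from a named scheme onto a 0-1 scale."""
    if scheme == "one_to_five":   # ordinal 1-5 scoring
        return (value - 1) / 4
    if scheme == "percent":       # 0-100 percentage
        return value / 100
    if scheme == "hml":           # High / Medium / Low labels
        return {"Low": 0.2, "Medium": 0.5, "High": 0.8}[value]
    raise ValueError(f"Unknown scheme: {scheme}")

risks = [(4, "one_to_five"), (65, "percent"), ("High", "hml")]
scores = [to_unit_scale(v, s) for v, s in risks]
print(scores)  # now directly comparable
```

Note that the mapping itself embeds judgement calls (is "High" 0.8 or 0.9?), which is exactly why an agreed standard beats every team inventing its own conversion.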


Benefits realisation suffers similarly. Projects define success differently, measure outcomes using different metrics, and track benefits on various timescales. Aggregating benefit delivery across a portfolio requires heroic data manipulation that few organisations actually perform. The result is that the government can't answer basic questions like "which types of programmes deliver best value" or "what implementation approaches succeed most consistently."


The new data standard addresses these problems by defining common fields, formats, and structures that all government programmes must use. It creates the foundation for portfolio management, cross-programme learning, and evidence-based improvement. None of that is possible without agreed standards.
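The announcement doesn't reproduce the standard's actual fields, but a sketch shows the kind of shared structure such a standard defines: agreed field names, agreed types, and constrained vocabularies enforced at the point of entry. Everything below (the field names, the RAG vocabulary, the ID format) is illustrative, not the published standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProgrammeRecord:
    """Illustrative common record: one agreed shape for every programme."""
    programme_id: str   # unique across government, not per department
    name: str
    sro: str            # senior responsible owner
    start_date: date
    end_date: date
    rag_status: str     # constrained vocabulary, validated below

    def __post_init__(self):
        # A shared standard can reject bad data at entry, not at aggregation.
        if self.rag_status not in {"Red", "Amber", "Green"}:
            raise ValueError(f"Invalid RAG status: {self.rag_status}")

rec = ProgrammeRecord("GOV-0001", "Hospital Rebuild", "J. Smith",
                      date(2025, 1, 1), date(2028, 12, 31), "Amber")
print(rec.programme_id, rec.rag_status)
```

The design point is where validation happens: with a common schema, inconsistencies are caught when data is recorded, rather than discovered months later when someone tries to aggregate it.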


Becky Wood, the Chief Executive Officer of the National Infrastructure and Service Transformation Authority (NISTA) and Head of the Project Delivery Function, said: “By publishing this standard, we’re laying the foundations for better data. We’re unlocking the potential of even more digital and AI tools to boost productivity and transform how programmes and projects are delivered across government.”

What the Trial Actually Involves

The announcement indicates that the government is launching a trial of the new standard. This suggests a staged rollout where selected programmes adopt the standard first, the government evaluates results, refines the approach based on learnings, and then mandates broader adoption. That's sensible. Imposing standards across government without testing creates the risk that the standard proves unworkable in practice.


Trials also provide political cover. If the standard faces resistance from departments, the government can point to trial results demonstrating benefits. If the standard proves too onerous, the government can adjust before committing to full implementation. Trials reduce risk on both sides while generating the evidence needed to make the case for change.


The challenge with trials is ensuring they're genuinely representative. If the government selects only well-run programmes with good data management, the trial might overestimate how easily the standard can be adopted more broadly. If trials include programmes with messy data and weak governance, results might underestimate the standard's value because implementation problems obscure benefits. Getting the trial selection right matters.


Similarly, trial duration affects conclusions. Too short, and you don't capture the full cycle of data collection, reporting, and use. Too long, and you delay the rollout unnecessarily while programmes continue to use incompatible approaches. Striking the right balance requires careful design.


Building on Previous Attempts

This isn't the government's first attempt at data standardisation. The Data Standards Authority has existed since 2020, working to improve the adoption of common data standards across government. The National Data Strategy outlined ambitions for treating data as a strategic asset. Various programmes and initiatives have tried to tackle data interoperability.


The difference this time is focus and scope. Rather than trying to standardise all government data at once, this initiative targets programme and project data specifically. That narrower scope makes success more achievable. It also addresses an area where standardisation delivers immediate value. Programme and project data directly inform delivery decisions. Better data means better decisions.


Previous standardisation efforts often struggled because the benefits accrued at the system level, while implementation costs hit individual departments. Departments had to change processes, retrain staff, and modify systems without seeing direct benefits to their own operations. This created resistance that stalled adoption.


A programme data standard potentially avoids that trap because the benefits are more direct. Project teams can benchmark performance against similar projects. Portfolio managers can identify struggling programmes earlier. Senior leadership gains visibility across the portfolio. Those benefits justify the implementation effort more clearly than abstract arguments about data interoperability.


Still, adoption won't be automatic. Departments will need support, guidance, and potentially funding to implement the standard. Some will embrace it immediately. Others will resist change. Getting to consistent adoption across government requires more than publishing a standard. It requires sustained change management.


The Implementation Challenge

Publishing a data standard is the easy part. Getting hundreds of project teams across dozens of departments to actually use it consistently is harder. Several implementation challenges emerge:


Legacy systems often can't accommodate new data structures without significant modification. Projects might need system updates, custom integrations, or even replacement tools to support the standard. That requires investment and time that some programmes won't have.


Staff training is essential but time-consuming. Project managers, business analysts, and data teams need to understand the standard, know how to apply it, and recognise why it matters. Training hundreds or thousands of people while they're delivering projects creates capacity pressure.


Governance mechanisms must ensure compliance without creating bureaucracy. Someone needs to verify that projects are using the standard correctly, identify problems, and enforce adoption. But if that verification becomes onerous, it creates resistance. Finding the right balance between assurance and enablement is difficult.


Transition periods create confusion. During the shift from old approaches to the new standard, some projects will be using legacy formats while others adopt the standard. Portfolio-level analysis becomes even harder temporarily because you're dealing with mixed data. Managing that transition requires careful planning.


Integration with existing reporting frameworks adds complexity. Projects already report to multiple bodies using different templates. Adding another reporting requirement, even one that should eventually simplify things, initially feels like more work. Showing how the standard reduces overall reporting burden rather than increasing it is critical.


What Success Would Look Like

If the standard succeeds, what would the government actually gain? Several outcomes become possible:


Real-time portfolio visibility. Senior leadership can see programme health across the entire portfolio without waiting for quarterly reports or requesting bespoke analyses. Issues surface earlier. Patterns become visible. Decision-making improves because it's based on current data rather than stale snapshots.


Cross-programme learning. Government can systematically compare programme performance, identify which approaches work best in which contexts, and share lessons learned effectively. That transforms institutional learning from anecdotes and luck into evidence and pattern recognition.


Resource allocation based on evidence. When budget decisions arise, the government can see which programme types deliver the best return on investment, where capabilities are stretched thin, and where interventions would have the most impact. That shifts funding discussions from politics to data.


Risk management at scale. Portfolio-level risk aggregation becomes possible when projects use standard risk taxonomies and assessment approaches. Government can identify systemic risks, spot emerging patterns, and intervene before individual project risks cascade into portfolio-level problems.


Supplier performance tracking. When projects consistently record supplier delivery, the government can evaluate contractor performance across multiple engagements, identify patterns of success or failure, and make better procurement decisions. That creates accountability that's currently missing.


Benefits realisation measurement. Standard benefit definitions and tracking approaches enable the government to measure aggregate benefit delivery across programmes, understand which interventions drive most value, and improve benefit realisation approaches based on evidence.


None of this is possible without agreed data standards. The standard creates the foundation. But only if it's adopted consistently, appropriately maintained, and used actively to drive better decisions.


The Broader Context

This data standard sits within a broader government push to improve project delivery capability. The National Infrastructure and Service Transformation Authority (NISTA) has been strengthening delivery assurance. The Government Property Agency is modernising estate management. Digital teams are implementing modern development practices. Data standardisation complements these efforts by creating the information foundation needed to measure progress.


Internationally, other governments face similar challenges. Australia, Canada, and several European countries have implemented project data standards with varying success. The UK can learn from their experiences about what works, what doesn't, and how to navigate implementation challenges.


Private sector organisations have also developed project data standards, though usually within single organisations rather than across an entire government. Those corporate standards demonstrate that consistent project data is achievable with appropriate governance and incentives. They also show that the benefits justify the implementation costs when standards are well-designed.


The challenge the government faces is achieving consistency without central control. Departments retain significant autonomy. Programmes span multiple organisations. Delivery partners bring their own approaches. Creating standardisation across that complexity requires more than technical design. It requires political will, sustained commitment, and careful change management.


What Project Delivery Professionals Should Do

For project delivery leaders in government, the new standard creates both obligations and opportunities. Several actions make sense:


Engage early with trial programmes. If your programme is selected for the trial, view it as an opportunity rather than a burden. Early adopters influence how the standard evolves and gain experience that becomes valuable when broader rollout happens.


Prepare data architecture for change. Even if your programme isn't in the trial, start thinking about how you'd implement common standards. Review your current data structures, identify gaps, and plan for eventual adoption. Getting ahead of mandatory implementation gives you more control over timing and approach.


Build capability in your teams. Data literacy, structured thinking about information management, and understanding why standards matter all become more critical. Investing in those capabilities now pays dividends when standards are mandated.


Advocate for practical design. If you have concerns about how the standard is being designed, raise them constructively. The trial phase is when the government can still adjust based on feedback. Once the standard is finalised and mandated, changing it becomes much harder.


Connect data standards to delivery outcomes. The most effective way to drive adoption is to show how better data enables better delivery. Find examples from your programme where common standards would have prevented problems, enabled better decisions, or improved outcomes. Those stories make the case more effectively than abstract arguments.


The Foundational Breakthrough

A government data standard won't generate headlines. There's no demo to share on social media. No viral moment. No breakthrough capability that makes everyone's jaw drop. It's infrastructure. Foundational, essential infrastructure that enables everything else to work better.


That's precisely why it matters. The flashy announcements about AI capabilities and technology breakthroughs get attention because they're new and exciting. But they don't improve project delivery if the foundational data architecture is broken. You can't use AI to identify at-risk programmes if programme data is inconsistent. You can't apply advanced analytics to portfolio management if you can't aggregate data across projects. The tedious work of standardisation enables the exciting applications.


The government's willingness to invest in this unsexy but essential work signals maturity. Rather than chasing the latest technology trend, they're fixing fundamental problems that have plagued delivery for decades. That's how you build sustainable capability rather than creating another initiative that launches with fanfare and fades into irrelevance.


The trial will reveal whether the standard works in practice. Implementation will show whether the government can drive adoption consistently. Results will determine whether benefits justify the effort. But regardless of how this specific initiative plays out, the focus on data standards represents the right priority.


Project delivery needs better data before it needs better algorithms. It needs common standards before it needs advanced analytics. It needs consistent foundations before it needs innovative applications. The UK government has recognised that sequence. Now comes the hard part: actually implementing it.


That's less exciting than announcing the next AI breakthrough. But for anyone actually delivering major programmes, it might matter more. Sometimes the foundational breakthrough is the one that actually changes things.


Foundational improvements in project delivery rarely make headlines but often matter most. Subscribe to Project Flux for analysis that focuses on what actually improves delivery rather than what generates buzz.








 
 
 
