A Big Four Firm's Hallucinated AI Government Report
- James Garner
When a Big Four firm delivers a taxpayer-funded report with fabricated references [1], it’s not just an error; it’s a catastrophic failure of governance that should terrify every project leader.

The recent scandal involving Deloitte's AI-generated report for the Australian government is a moment of reckoning for the professional services industry and a stark warning for every project delivery professional [1]. A report, commissioned at a cost of AUD 439,000 to the taxpayer, was found to contain fabricated academic references and citations. This isn't a minor oversight; it's a fundamental betrayal of trust and a collapse of basic quality control. It reveals a cavalier attitude towards the truth that should send a shiver down the spine of anyone who relies on expert advice to make critical decisions.
This incident is not just about one report or one firm. It is a symptom of a much deeper problem: the unchecked and unverified use of AI in high-stakes professional contexts. We are in a headlong rush to adopt AI, seduced by the promise of efficiency and cost savings, but we are failing to put in place the basic governance and quality control mechanisms that are necessary to ensure the integrity of our work. We are building a house of cards on a foundation of unverified data and opaque algorithms, and the Deloitte scandal is a clear sign that the whole thing is about to come crashing down.
"When the guardians of truth start fabricating their sources, we are in very dangerous territory indeed."
For project delivery professionals, the implications are profound. We are the ones responsible for managing the risks on our projects, and the Deloitte scandal has just revealed a massive, previously hidden, risk. We can no longer blindly trust the outputs of our AI systems or the work of our expert consultants. We need to adopt a new, more sceptical and rigorous approach to quality control, and we need to demand a new level of transparency and accountability from our partners and vendors. The age of trust is over; the age of verification has begun.
The Industry Blind Spot: The Illusion of AI as a Magic Wand
The Deloitte scandal exposes a dangerous blind spot in the professional services industry: the growing perception of AI as a magic wand that can be waved to produce instant, effortless, and authoritative-sounding reports. The allure of this illusion is powerful. In a world of tightening budgets and ever-increasing pressure to deliver, the temptation to outsource thinking to a machine is almost irresistible. But as the Deloitte case so painfully demonstrates, this is a deal with the devil, and the price is our professional integrity.
The problem is not the technology itself, but the culture that surrounds it. We have created a culture of “AI solutionism,” where we are so eager to embrace the latest technological fad that we are failing to ask the hard questions about its limitations and risks. We are treating AI as a black box, a mysterious oracle that can provide us with the answers we need, without our having to do the hard work of thinking, researching, and verifying. This is not just lazy; it is a dereliction of our professional duty.
The FTI Consulting article on AI compliance highlights the profound governance challenges that this new reality presents [2]. We are operating in a world where the legal and regulatory frameworks are struggling to keep pace with the speed of technological change. This creates a dangerous vacuum, where the only thing standing between us and a catastrophic failure of governance is our own professional judgment. The Deloitte scandal is a clear sign that our judgment is failing. We are so mesmerised by the power of AI that we are forgetting the first rule of professional practice: trust, but verify.
What We’re Missing: A Culture of Accountability
What’s missing from this picture is a culture of accountability. In the rush to embrace AI, we have created a diffusion of responsibility, where everyone and no one is responsible for the quality and integrity of the work. The Deloitte scandal is a perfect example of this. Who is to blame for the fabricated references? The junior consultant who used the AI? The partner who signed off on the report? The firm that failed to provide adequate training and oversight? The AI vendor that created the tool? The truth is, it’s all of them, and none of them.
We are missing a clear and unambiguous chain of command for AI governance. We are missing the processes, the protocols, and the people who are responsible for ensuring that our AI systems are used safely, ethically, and responsibly. We are treating AI as a technical tool, when it is, in fact, a powerful agent that can have a profound impact on our work, our clients, and our society. And we are failing to give it the respect and the scrutiny that it deserves.
"Accountability is the bedrock of professional practice. Without it, we are just a bunch of smart people with expensive toys."
This is not just a matter of process; it’s a matter of culture. We need to create a culture where people feel empowered and expected to ask tough questions about the use of AI. We need to create a culture where people are rewarded for their diligence and their scepticism, not just for their speed and their efficiency. And we need to create a culture where leaders are held to the highest standards of accountability for the work that is done on their watch. We need to move away from a culture of plausible deniability and towards a culture of radical responsibility. The future of our profession depends on it.
What We Can Actually Do About It: A New Governance Manifesto for the AI Age
The Deloitte scandal is a wake-up call, not a reason to abandon AI. It is an opportunity to build a new, more robust, and more trustworthy approach to AI governance. Here is a four-point manifesto for a new era of professional responsibility:
1. Adopt a “Zero-Trust” Mindset: We must abandon our naive and uncritical faith in AI and adopt a “zero-trust” mindset. This means treating every output from an AI system as unverified until it has been rigorously checked and validated by a human expert. It means building in multiple layers of quality control, from automated fact-checking tools to manual review processes (see the sketch after this list). And it means creating a culture where scepticism is not just tolerated but actively encouraged.
2. Demand Radical Transparency: We must demand radical transparency from our AI vendors and partners. This means insisting on a clear and understandable explanation of how their models work, what data they were trained on, and what their known limitations are. It means refusing to accept “black box” solutions that we do not fully understand. And it means holding our vendors to the same high standards of quality, integrity, and accountability that we hold ourselves to. We need to be partners in responsible innovation, not just passive consumers of technology.
3. Build a Culture of Verification: We must move beyond a culture of “move fast and break things” and build a culture of verification. This means investing in the tools, the processes, and the people that are necessary to ensure the quality and integrity of our work. It means making verification a non-negotiable part of every project lifecycle, not just an afterthought. And it means creating a culture where people are rewarded for their diligence and their attention to detail, not just for their speed and their efficiency.
4. Invest in AI Literacy: We must invest in the AI literacy of our people. This means providing them with the training, the education, and the resources they need to understand the capabilities and limitations of AI. It means demystifying the technology and empowering our people to be critical and informed users of AI tools. And it means creating a culture of continuous learning, where people are encouraged to stay up-to-date with the latest developments in the field. We need to build a workforce that is not just AI-enabled but also AI-literate.
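The “automated fact-checking tools” in point 1 need not be exotic. Below is a minimal sketch of one such layer: a script that takes the titles of a report's cited works and checks whether anything resembling them exists in the public Crossref index, flagging doubtful entries for a human reviewer. The Crossref REST endpoint is real, but the similarity threshold, helper names, and sample inputs are illustrative assumptions, not a prescription; a production check would also verify authors, years, and DOIs.

```python
"""
Minimal sketch of an automated citation triage step, as one layer of the
"zero-trust" review described above. Assumptions: the public Crossref
REST API (api.crossref.org) is reachable, the `requests` package is
installed, and a simple title-similarity threshold is good enough to
flag suspect references for human review -- it is a triage aid, not a verdict.
"""
from difflib import SequenceMatcher

import requests

CROSSREF_URL = "https://api.crossref.org/works"


def best_crossref_match(cited_title: str) -> tuple[str, float]:
    """Query Crossref for the cited title and return the closest
    indexed title plus a 0..1 similarity score."""
    resp = requests.get(
        CROSSREF_URL,
        params={"query.bibliographic": cited_title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    best_title, best_score = "", 0.0
    for item in items:
        for title in item.get("title", []):  # Crossref returns titles as a list
            score = SequenceMatcher(None, cited_title.lower(), title.lower()).ratio()
            if score > best_score:
                best_title, best_score = title, score
    return best_title, best_score


def triage_references(cited_titles: list[str], threshold: float = 0.8) -> None:
    """Print each citation with an OK/FLAG label; flagged items go to a
    human reviewer -- they are never auto-rejected or auto-approved."""
    for cited in cited_titles:
        match, score = best_crossref_match(cited)
        label = "OK  " if score >= threshold else "FLAG"
        print(f"[{label}] {cited!r} -> closest match {match!r} ({score:.2f})")


if __name__ == "__main__":
    # Illustrative inputs only -- not the actual references from the report.
    triage_references([
        "Attention Is All You Need",
        "A Completely Fabricated Study That Does Not Exist",
    ])
```

The design point is deliberately modest: the script never “approves” a citation, it only triages which ones a person must chase down, keeping accountability with the human reviewer rather than handing it to another machine.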
This is not just about protecting ourselves from reputational damage; it’s about upholding the very foundations of our profession. As project delivery professionals, we have a duty to our clients, to our society, and to ourselves to ensure that our work is of the highest quality and integrity. Let’s not abdicate that responsibility to a machine.
Call-to-action: Don’t let your projects become a cautionary tale. Subscribe to Project Flux for the insights and strategies you need to build a culture of AI governance that works.
References
[1] Pivot to AI. (2025, August 28). Deloitte Australia writes government report with AI — and fake references. Retrieved from https://pivot-to-ai.com/2025/08/28/deloitte-australia-writes-government-report-with-ai-and-fake-references/
[2] FTI Consulting. (2025, March 20). AI Compliance: What General Counsel Need to Know. Retrieved from https://www.fticonsulting.com/insights/articles/ai-compliance-what-general-counsel-need-know