In the Age of AI, Are We Forgetting How to Be Human?
- James Garner
- Oct 27
- 6 min read
What if the greatest promise of artificial intelligence isn’t that it will make us more efficient, but that it will force us to become more human? It’s a paradoxical thought in an era defined by relentless technological advancement. We are captivated by the wonder of AI, the sheer magic of its capabilities. Yet, for every dazzling innovation, a quiet question lingers: in our rush to embrace the artificial, what essential parts of our own humanity are we leaving behind?
This is the central tension explored in a recent, deeply thoughtful conversation on the Project Flux podcast with Michael LaPage, Chief Learning Officer at Plan Academy. While many voices in the project delivery space are evangelising AI’s potential, Michael offers a refreshingly candid and necessary dose of scepticism. His perspective isn’t that of a Luddite, but of a realist, grounded in over fifteen years of training people in the world of project controls. It’s a view that acknowledges the power of technology whilst fiercely advocating for the primacy of human connection.
As someone deeply involved in teaching the practical application of AI, I find this balance essential. It’s healthy to have a sceptic in the room, because we need to temper the boundless optimism of what AI can do with a realistic understanding of what it should do. Michael’s points, rooted in his extensive experience, provide that crucial grounding. He reminds us that the true revolution may not be in the algorithms we build, but in the human skills we rediscover as a result. In a way, we’ve come full circle: the very technology that threatened to automate our roles is now creating the space for us to excel at the things only people can do.

An Unexpected Journey from Software to Soul
Michael LaPage’s journey is not that of a typical tech guru. Based near Toronto, Canada, he has spent the better part of two decades demystifying the complexities of project scheduling software, most notably the venerable Primavera P6. As the head of Plan Academy, his world is steeped in data, logic, and the intricate dance of project controls. Yet, when you speak with him, it’s not the technology that animates him most. It’s the people.
He speaks of his wife, a therapist, and his own years in therapy, with an openness that is rare in the corporate world. This deep-seated interest in human dynamics has become the lens through which he views the unfolding AI revolution. He didn’t set out to be a contrarian, but his experience has led him to question the prevailing narrative. He sees an industry so mesmerised by the promise of AI that it risks forgetting the fundamental truth of how major projects are delivered: through relationships, trust, and nuanced human interaction. It’s this perspective that makes his insights so compelling – they come not from a textbook, but from a career spent at the intersection of technology and human fallibility.
The Great Distraction: A Contrarian View on AI
The dominant narrative surrounding AI in the professional world is one of liberation. We are told that AI will handle the “grunt work,” freeing us up for more strategic, creative, and interpersonal tasks. Michael, however, isn’t buying it. He posits a more unsettling theory: that AI, far from being a bridge to more human interaction, is becoming a “grand distraction.”
“We think that AI is going to clear up our schedule so that we're going to focus on the humanness,” he argues in the podcast, “and I don't think that's going to happen. I think what we've created is another outlet that keeps our brains deeply engaged… there's nothing pushing us towards more human interactions. There's only things pulling us away from them.”
He points to the sheer pervasiveness of AI-powered tools, from our email inboxes to our project management software, each demanding our attention, each offering a frictionless digital alternative to the sometimes-messy business of talking to another human being. There are no barriers to consulting a chatbot, he notes, but there are very real barriers to striking up a conversation with the colleague sitting next to you – assuming you’re even in the same office anymore.
This is a sobering thought. It challenges the utopian vision of an AI-powered future and forces us to confront a more complex reality. The risk is that we become so immersed in this digital hall of mirrors, where AI assistants validate our ideas and streamline our communications, that we lose the capacity for genuine, unscripted, and sometimes difficult, human connection. It’s the casual chat by the coffee machine, the shared glance of understanding in a tense meeting, the collaborative problem-solving that happens on a whiteboard – these are the moments that build the trust and rapport essential for project success. Michael’s concern is that these moments are being quietly eroded by the seductive ease of technology.
The 51% Problem: Why Tech is Less Than Half the Battle
This brings us to a crucial point that resonates deeply with my own experience in deploying AI solutions. The technology itself is often less than half of the problem. You can have the most sophisticated algorithm in the world, but if the people who need to use it don't trust it, understand it, or feel threatened by it, the project is doomed to fail. As I often say, at least 51% of any AI implementation is about human interaction: managing culture change, dispelling myths, and assuaging fears.
Michael provides a perfect real-world example from the world of project controls. He speaks of owner-consultants who, despite the availability of advanced AI-powered analysis tools, still insist on manually scrutinising every detail of a contractor’s schedule. They are reluctant, he explains, to put their professional reputation on the line with a tool whose inner workings they don’t fully comprehend. They need to own the output, and to do that, they need to have gone through the painstaking process of analysis themselves.
“How can I go to the owner and say, ‘there are these risks,’ if I haven’t done my diligence?” he asks, channelling the voice of these consultants. This isn’t just about old habits dying hard; it’s about accountability. An AI can’t be held accountable, but a human can. This fundamental issue of trust and ownership is the single biggest barrier to adoption. It highlights the fallacy of simply “plugging in” an AI and expecting it to work. The real work lies in the human-to-human engagement required to build confidence and create a culture where technology is seen as an aid to human judgment, not a replacement for it.
The Blurring of Worlds and the Search for Community
Perhaps the most poignant part of the conversation comes when Michael reflects on the changing nature of our personal and professional lives. He shares a personal story about his 17-year-old son finding a powerful sense of community in a Christian organisation. Initially concerned, Michael came to admire the deep, supportive connections his son was building – a level of community he feels has been largely lost in modern Western society.
This personal reflection ties directly into his professional thesis. In an age of increasing digital isolation, where AI could potentially drive us further apart, the need for genuine human community becomes more critical than ever. It also reflects a broader truth about the modern world of work. As I see it, the old boundaries have dissolved. Your professional and personal lives are now one. The 9-to-5 is a relic of a bygone era; our networks, our technology, and our personal development now span both spheres. This isn’t necessarily a bad thing, but it requires a new way of thinking.
Michael’s interest in coaching entrepreneurs on “human dynamics” stems from this very idea. “You bring your stuff, whatever it is, your dysfunctions into your business,” he says. This is profoundly true. Our personal fears, biases, and insecurities don't get left at the office door – they directly influence how we lead, how we collaborate, and how we adopt new technologies like AI. The fear of being made redundant, the insecurity of not understanding the technology, the bias towards established processes – these are the human factors that can derail even the most promising AI initiatives.
Why This Conversation Matters Now
Michael LaPage’s perspective is a vital counterpoint to the relentless hype surrounding artificial intelligence. He reminds us that progress isn’t just about technological innovation; it’s about human wisdom. It’s about recognising that the most valuable skills in the coming decades may not be coding or data science, but empathy, communication, and the ability to build trust.
His scepticism is not a rejection of AI, but an invitation to engage with it more thoughtfully. It’s a call to focus on the human side of the equation – the 51% that will ultimately determine whether these powerful new tools lead to a future that is more connected and fulfilling, or one that is more fragmented and isolated.
The full conversation on the Project Flux podcast is a must-listen for anyone grappling with these questions. We delved even deeper into the specifics of AI scheduling tools like nplan and Nodes & Links, debated the radical idea of appointing AI to corporate boards, and explored the fascinating concept of creating a personal “board of advisors” using AI personas of figures like Steve Jobs and Gandhi. Michael also shares more about his personal journey and the profound experiences that have shaped his unique and insightful worldview.
This blog post only scratches the surface. To truly appreciate the depth and nuance of this essential conversation, I encourage you to listen to the full episode. It might just change the way you think about AI, and more importantly, the way you think about the enduring power of human connection.
Listen to the full episode with Michael LaPage on the Project Flux podcast: https://www.buzzsprout.com/2346327/episodes/18060787