GPT-5 Has Landed (And It's Already Having a Bit of a Wobble)
- James Garner
- 1 day ago
- 12 min read
Updated: 1 hour ago

700 million users got access to reasoning AI this week, but 64% of people don't even know they're using artificial intelligence daily. Here's why that gap might just reshape your entire career.
Right, let's have a proper chat about what happened this week, shall we? While most of us were busy arguing about whether it's too early for Christmas decorations (it absolutely is, by the way), OpenAI dropped GPT-5 onto the world like a digital hand grenade wrapped in a very polite press release.
Now, I know what you're thinking. "Another AI model? Brilliant. Just what we needed to add to the ever-growing pile of tech things I should probably understand but definitely don't."
And honestly, I get it. The pace of AI development these days makes a caffeinated squirrel look positively leisurely. But here's the thing that's been rattling around in my brain like a marble in a biscuit tin: this isn't just another incremental update. This is the moment when AI stopped being a clever party trick and started becoming something that could fundamentally reshape how we work, think, and deliver projects.
The numbers alone should make you sit up and pay attention. OpenAI expects to hit 700 million weekly active users this week. That's nearly 10% of the entire global population using this technology regularly. To put that in perspective, that's more people than live in the entire European Union, all tapping away at what Sam Altman describes as having "a team of PhD-level experts on hand at any time."
But here's where it gets properly interesting (and slightly terrifying): while 700 million people are using this technology, research shows that 64% of people don't even realise they're using AI daily. It's like having a superpower and not knowing you've got it. Or more accurately, it's like having a superpower while your competition definitely knows they've got it and are using it to absolutely demolish the old ways of doing things.
When "The Best Model in the World" Meets Reality (Spoiler: It Gets Messy)
Now, before we get too carried away with visions of AI-powered utopia, let's talk about what actually happened when GPT-5 launched. Because if there's one thing I've learned from years of watching technology rollouts, it's that the gap between the marketing presentation and reality is often wide enough to park a double-decker bus.
The rollout has been, to put it diplomatically, a bit of a disaster. Within 24 hours of launch, Reddit's r/ChatGPT was absolutely ablaze with posts calling GPT-5 "the biggest piece of garbage" and accusing OpenAI of pulling "the biggest bait-and-switch in AI history." Not exactly the reception you'd expect for what Altman claimed was "the best model in the world."
The complaints were as varied as they were passionate. Users were furious that OpenAI had unceremoniously binned previous models overnight—GPT-4o, o3, and others simply vanished without warning. Imagine if your favourite coffee shop suddenly replaced all their blends with a new "improved" recipe without telling anyone. That's essentially what happened here, except instead of coffee, we're talking about AI models that people had grown genuinely attached to.
And this is where it gets properly fascinating from a human psychology perspective. Some users described using different models for different tasks—4o for creative ideas, o3 for logic problems, o3-Pro for deep research. They'd developed relationships with these tools, preferences, workflows. One user even talked about using 4o to help with anxiety and depression, describing how the model felt "human" to them.
When those models disappeared overnight, people weren't just losing a tool—they were losing a digital colleague they'd come to rely on. It's a bit like having your favourite work partner suddenly replaced by someone who does the job differently, even if they might technically be better at it.
The technical issues didn't help matters either. GPT-5 was making basic errors that would make a primary school teacher wince—counting the letter "b" in "blueberry" as three instead of two, struggling with simple linear equations, and generating maps of the United States with most states labelled as gibberish. For a model being touted as a significant step toward artificial general intelligence, these weren't exactly confidence-inspiring moments.
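For the record, the check GPT-5 reportedly fumbled is a one-liner in most programming languages. A quick Python sketch (the variable names here are mine, purely for illustration):

```python
# Count occurrences of the letter "b" in "blueberry" -- the question
# GPT-5 was widely reported to have answered with 3 instead of 2.
word = "blueberry"
b_count = word.count("b")
print(b_count)  # prints 2
```

Which is part of what made the errors so jarring: they were failures on tasks that trivial deterministic code gets right every time.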
But here's what's really telling about this whole debacle: the backlash forced OpenAI to backtrack faster than a politician during election season. Within days, Sam Altman was posting updates on X promising doubled rate limits, easier model selection, and—most tellingly—bringing back GPT-4o for Plus users. The message was clear: even the most advanced AI company in the world had underestimated how attached people become to their digital tools and workflows.
The Quiet Revolution: What GPT-5 Actually Means for How We Work
Right, let's move past the drama and talk about what GPT-5 actually brings to the table when it's working properly. Because despite the rocky start, the underlying technology represents something genuinely transformative—the first "unified" AI model that combines reasoning abilities with fast responses.
Think of it this way: previous AI models were a bit like having a brilliant colleague who was either really fast but not terribly thoughtful, or incredibly thorough but took ages to get back to you. GPT-5 is the first model that can switch between these modes automatically, deciding whether your question needs a quick response or deeper thinking. It's like having a team member who instinctively knows when to fire off a rapid email and when to sit down and properly think through a complex problem.
The technical benchmarks tell a story that should make anyone involved in knowledge work pay attention. On coding tasks, GPT-5 scored 74.9% on SWE-bench Verified, a test of real-world programming challenges pulled from GitHub. That's not just impressive—it's edging into territory where AI can handle the majority of routine coding tasks that currently occupy human developers.
But it's the "vibe coding" capability that really caught my attention. This is where you describe what you want in plain English—"create a web app to help English speakers learn French with flashcards, quizzes, and progress tracking"—and the AI builds it for you. During OpenAI's demonstration, they submitted the same prompt twice and got two different, functional applications within seconds.
Now, I know what you're thinking: "That's all very impressive, but I'm not a coder." Fair enough. But here's the thing—this isn't really about coding. It's about the fundamental shift from needing to know how to do something to simply needing to know what you want done. That's a change that affects every knowledge worker, from project managers to consultants to anyone who's ever had to wrestle with a spreadsheet.
Consider what Aaron Levie, CEO of Box, observed after testing GPT-5: "The model is able to retain way more of the information that it's looking at, and then use a much higher level of reasoning and logic capabilities to be able to make decisions." This isn't just about automating routine tasks—it's about augmenting human decision-making with AI that can process vast amounts of information and spot patterns we might miss.
The healthcare improvements are particularly striking. GPT-5's hallucination rate on health-related questions dropped to just 1.6%, compared to 12.9% for GPT-4o. When millions of people are already using AI for health advice (whether we like it or not), that's the difference between a useful tool and a potentially dangerous one.
But perhaps most importantly for those of us in project delivery, GPT-5 represents the first AI that can genuinely act as an agent rather than just a chatbot. It can complete tasks on your behalf—managing calendars, creating research briefs, generating entire applications. As Sam Altman put it: "People are limited by ideas, but not really the ability to execute, in many new ways."
The Great Divide: Why Some People Are About to Get Very, Very Left Behind
Here's where things get a bit uncomfortable, and I'm afraid there's no polite way to put this: we're witnessing the emergence of a new kind of digital divide, and it's happening faster than most people realise. While 700 million people are using advanced AI weekly, research from KnowBe4 shows that 14.4% of employees don't even know their company has an AI policy.
Think about that for a moment. We're not talking about people who are choosing not to use AI—we're talking about people who don't even know it's available to them. It's like having a Ferrari in your garage and not knowing you own it, while your neighbours are already racing around the track.
The workplace implications are staggering. The World Economic Forum's Future of Jobs Report 2025 reveals that 40% of employers expect to reduce their workforce where AI can automate tasks. Meanwhile, MIT and Boston University research suggests AI will replace as many as two million manufacturing workers by 2025. But here's the kicker: BCG found that 46% of employees at organisations undergoing comprehensive AI-driven redesign are worried about job security, a notably higher share than at companies that are less far along.
The pattern is becoming clear: companies that embrace AI are restructuring faster than their employees can adapt, while companies that don't are falling behind their competitors. It's a double bind: being at an AI-forward company might threaten your current role, but being at a traditional company might threaten your entire career.
But here's what's really keeping me up at night: Stanford's research shows there's a growing gap between what workers want from AI and what AI can actually deliver. Workers want AI to handle routine tasks so they can focus on creative and strategic work. AI is increasingly capable of handling creative and strategic work, potentially making routine tasks the safer bet for human employment.
It's like asking for a sous chef to help with the chopping and dicing, only to discover they're actually better at creating the entire menu. Suddenly, your role as head chef looks a lot less secure than the dishwasher's job.
The trust gap makes this even more complex. Workday's research reveals that business leaders have greater enthusiasm for AI than their workforce, creating a disconnect where decisions about AI implementation are being made by people who see the potential, while the actual work is being done by people who see the threat.
And then there's the awareness problem. UNESCO's research on AI literacy highlights how this represents a new digital divide, with unequal access to AI benefits across regions, communities, and socioeconomic groups. But unlike previous digital divides, this one isn't just about access to technology—it's about understanding how to leverage it effectively.
Consider this: while 78% of organisations reported using AI in 2024, up from 55% the year before, many employees in those same organisations remain unaware of how AI is being used or how they could use it themselves. It's creating a situation where AI adoption is happening to people rather than with them.
When Governments Start Buying In: The Institutional Tipping Point
If you needed any more evidence that we've reached a tipping point, consider this: OpenAI is offering ChatGPT Enterprise to federal agencies for just $1 per agency for the next year. That's not a typo. One dollar. For the entire federal government to access enterprise-level AI capabilities.
Now, I've seen some aggressive pricing strategies in my time, but this is essentially giving away the crown jewels to secure market position. It's the kind of move that signals OpenAI sees government adoption as crucial for long-term legitimacy and market dominance. And frankly, it's working.
This isn't just about cost—it's about normalisation. When government agencies start using AI for routine operations, it sends a signal to every other organisation that this technology has moved from "experimental" to "essential infrastructure." It's like when the government started using email in the 1990s—suddenly, every business that wasn't using email looked hopelessly outdated.
The timing is particularly telling. This deal comes just weeks after the Trump administration's AI Action Plan, which seeks to boost AI integration across government operations. When political administrations—regardless of their other differences—agree that AI adoption is a national priority, you know we've moved well past the "should we?" phase into the "how quickly can we?" phase.
But here's what's really interesting about the government deal: it includes unlimited access to advanced models and tailored training resources. The government isn't just buying AI tools—they're investing in AI literacy for their workforce. They understand that the technology is only as good as the people using it.
Compare that to most private sector organisations, where AI adoption often feels like throwing technology at problems without investing in the human side of the equation. Research shows that 14.4% of employees don't even know their company has an AI policy, let alone how to use AI effectively in their roles.
The government's approach—combining access with education—might just be the template for successful AI adoption. It's not enough to give people access to powerful tools; you need to help them understand how to use those tools effectively and safely.
This institutional adoption also addresses one of the biggest barriers to AI implementation: trust and security. When government agencies—organisations that handle the most sensitive information—start using AI tools, it validates the security and reliability of the technology for everyone else. It's like having the most paranoid person in your office finally agree to use cloud storage—suddenly, everyone else's concerns seem a bit overblown.
So What Do You Actually Do About All This? (A Practical Guide for the Properly Concerned)
Right, enough doom and gloom. Let's talk about what you can actually do about this situation, because sitting around worrying about the AI revolution isn't going to help anyone, least of all your career prospects.
First, let's address the elephant in the room: you don't need to become a prompt engineering wizard overnight. Despite what LinkedIn influencers might tell you, the goal isn't to master every AI tool that comes along. The goal is to understand how AI can augment what you're already good at. Think of it like learning to drive—you don't need to understand how the engine works, but you do need to know how to operate the vehicle safely and effectively.
Start with the basics. If you're not already using ChatGPT, Claude, or similar tools for routine tasks, you're essentially choosing to do things the hard way. I'm talking about using AI to draft emails, summarise documents, brainstorm ideas, or research topics. These aren't revolutionary use cases—they're the equivalent of using a calculator instead of doing long division by hand.
But here's where most people get it wrong: they treat AI like a magic wand rather than a thinking partner. The most effective AI users I know don't just throw questions at the technology and hope for the best. They engage in conversations, provide context, ask follow-up questions, and iterate on responses. It's less like using Google and more like working with a very knowledgeable but occasionally confused intern.
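That "conversation, not search box" habit is easier to see in code. Here's a minimal sketch of the pattern: keep the full history, supply context up front, then iterate with follow-ups. The `ask` function and its echo-style stand-in reply are hypothetical placeholders; in practice you'd send `history` to a real chat API instead.

```python
def ask(history, message):
    """Append a user message, get a (stand-in) reply, and keep both in history."""
    history.append({"role": "user", "content": message})
    reply = f"[model response to: {message}]"  # placeholder for a real API call
    history.append({"role": "assistant", "content": reply})
    return reply

# Provide context first, then iterate -- each follow-up is answered with
# everything said so far in view, not just the latest question.
history = [{"role": "system",
            "content": "You are helping a project manager draft a stakeholder update."}]
ask(history, "Here's the situation: the release slipped two weeks. Draft an update.")
ask(history, "Good, but make the tone less apologetic and add concrete next steps.")
```

The point isn't the plumbing; it's that the second request only works because the first one (and its answer) is still on the table. That's the difference between querying and collaborating.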
For project delivery specifically, start thinking about AI as a force multiplier for your existing skills. If you're good at stakeholder management, use AI to help you prepare for difficult conversations, draft communication strategies, or analyse feedback patterns. If you're strong on technical delivery, use AI to help with documentation, risk assessment, or resource planning. The key is to enhance your strengths, not replace them.
And please, for the love of all that's holy, start paying attention to how your organisation is thinking about AI. If you're one of those 14.4% who don't know whether your company has an AI policy, find out. If they don't have one, that's actually valuable information—it suggests you're working for an organisation that's behind the curve, which has implications for your career trajectory.
More importantly, start having conversations about AI with your colleagues and managers. Not the "robots are coming for our jobs" conversations, but practical discussions about how AI could improve your team's efficiency, quality, or capabilities. Be the person who brings solutions, not just concerns.
Here's a quotable truth for you: "The people who will thrive in the AI age aren't necessarily the ones who are best with technology—they're the ones who are best at combining human judgment with AI capabilities." The future belongs to people who can ask the right questions, provide proper context, and know when to trust AI and when to override it.
Think about it this way: AI is becoming like electricity—incredibly powerful, increasingly ubiquitous, and most useful when it's invisible. You don't need to understand how electricity works to benefit from it, but you do need to know how to use electrical devices safely and effectively. The same principle applies to AI.
The Bottom Line: This Train Has Already Left the Station
Look, I'm not going to sugarcoat this. The GPT-5 release—bumpy as it's been—represents a fundamental shift in how work gets done. We're not talking about some distant future where robots might possibly maybe change things a bit. We're talking about right now, this week, 700 million people getting access to AI that can reason, create, and execute tasks at a level that would have been science fiction just a few years ago.
The truth is that the divide between AI-savvy professionals and everyone else is widening faster than most people realise. While some are learning to leverage AI as a thinking partner and force multiplier, others are still debating whether this whole AI thing is just a fad. Spoiler alert: it's not.
The organisations that figure out how to combine human creativity and judgment with AI capabilities are going to absolutely demolish their competition. The professionals who learn to work effectively with AI are going to outperform those who don't by margins that will seem almost unfair. And the people who stick their heads in the sand and hope it all goes away? Well, they're going to find themselves in the same position as those who insisted email was just a passing trend.
But here's the thing that gives me hope: this isn't really about technology. It's about curiosity, adaptability, and the willingness to learn new ways of working. The same qualities that have always separated thriving professionals from struggling ones. AI just amplifies those differences.
The people who will succeed in this new landscape aren't necessarily the most technical—they're the ones who ask better questions, think more strategically, and understand how to combine human insight with AI capabilities. They're the ones who see AI not as a threat to their expertise, but as a tool that makes their expertise more valuable.
So here's my challenge to you: stop treating AI adoption like it's optional. It's not. Start treating it like a core professional skill, because that's what it's becoming. You don't need to become an AI expert overnight, but you do need to become AI-literate. The difference between those two things might just determine whether you're leading the change or being left behind by it.
Your move. The future of work isn't coming—it's here. The only question is whether you're going to be part of shaping it or watching it happen to you.
What's your experience with AI in your workplace? Are you seeing the divide I've described, or is your organisation ahead of the curve? Drop me a line—I'd love to hear how this is playing out in different industries and roles. Because if there's one thing I've learned, it's that the best insights come from the people actually doing the work, not the ones writing about it.
And if this has got you thinking about how AI might change your industry or role, don't just sit on those thoughts. Start the conversation with your team, your manager, your organisation. The people who shape the future are the ones who start talking about it before everyone else catches on.