
Your Developers Are "Vibe Coding" with Their Eyes Closed

  • Writer: James Garner
  • 2 days ago
  • 7 min read

Updated: 1 day ago

And It's Breaking Your Projects

 "Vibe coding" became 2025's word of the year, but one critical question remains unanswered: who actually understands what the code does?





The Moment Everything Shifted


In February 2025, Andrej Karpathy, a co-founder of OpenAI and former director of AI at Tesla, posted something deceptively casual on X that would ripple through software development: "I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works." He called it "vibe coding."


The term caught fire. Within months, it became Collins Dictionary's Word of the Year. GitHub Copilot has generated over 3 billion lines of code. Google says more than a quarter of its new code is now generated by AI. Y Combinator reports that a quarter of its Winter 2025 startups have codebases that are 95 per cent AI-generated. The numbers are staggering, almost intoxicating in their promise of infinite productivity.


Yet Karpathy's own words should have been a warning sign: "Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away."

This is the central tension that project managers need to grapple with urgently: vibe coding feels like it works, at least until it doesn't. And by the time you realise your team doesn't understand what they've built, you're already deep into a project with security vulnerabilities, unmaintainable code, and a delivery timeline that's about to collapse.


The Appeal Is Undeniable


Let's be honest about why vibe coding is spreading. For non-technical founders, consultants, and junior developers, the appeal is intoxicating. You describe what you want ("make me an app that schedules weekly meals"), and within minutes you have a functional prototype. No syntax to wrestle with. No Stack Overflow rabbit holes at midnight. No years of study or programming boot camps.


The productivity gains are real, at least on paper. Developers using AI tools report 50 per cent faster test generation. Large enterprises report reductions of 33 to 36 per cent in time spent on development activities. Google's CEO Sundar Pichai claims a 10 per cent increase in engineering velocity across the company. When your project is two weeks behind schedule and the client is breathing down your neck, these statistics feel like salvation.


However, those productivity measurements capture velocity, not quality. They count lines of code produced, not lines of code that work reliably under pressure.


The Hype Cycle Is Already Collapsing


Here's what's telling: despite Collins Dictionary's endorsement, 'vibe coding' is already struggling to gain traction outside the tech industry echo chamber. The mainstream isn't buying in. Project teams outside Silicon Valley are looking at vibe coding and asking a simple question: what happens when I actually need to deploy this to production? The answer isn't encouraging, and many organisations have already stopped listening to the hype.


At Project Flux, we've observed that the real ceiling for vibe coding is painfully low. It's superb for proof-of-concept work, weekend prototypes, and what we call "throwaway experiments." But the moment you need to build something that matters, something with dependencies, security requirements, and scale considerations, vibe coding hits a wall.


The tech industry is obsessed with the democratisation narrative: "Anyone can code now!" The reality is quieter and less glamorous. Yes, anyone can create a prototype. But building software at scale requires the very craftsmanship and deep understanding that vibe coding actively discourages. The hype was always going to be temporary. What remains is the work of building delivery capability the right way.


The Hidden Risk


The real problem emerges when projects move beyond weekend prototypes. When vibe-coded applications enter production environments, handle customer data, or manage mission-critical workflows, the comforting illusion starts to crack.


In May 2025, researchers found security vulnerabilities in code generated by Lovable, a popular vibe coding application. Of the 1,645 web applications created on the platform, 170 had exploitable flaws that would have allowed attackers to access personal information.


The vulnerability wasn't exotic or difficult to spot; it was the kind of basic security lapse that any experienced developer would catch in a code review. Except there was no code review, because the developers didn't write the code.
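To make the pattern concrete, here's a deliberately simplified sketch of this kind of lapse, broken access control, written in TypeScript with Express. It is illustrative only, not code from the Lovable study, and the data store and header names are invented for the example: the vulnerable route trusts whatever user ID the caller puts in the URL, so any visitor can read any user's personal data.

```typescript
// Illustrative only: not the actual Lovable code, just the shape of a
// classic broken-access-control lapse that a code review would catch.
import express from "express";

const app = express();

// Stand-in data store for the example.
const profiles: Record<string, { name: string; email: string }> = {
  "1": { name: "Alice", email: "alice@example.com" },
  "2": { name: "Bob", email: "bob@example.com" },
};

// Vulnerable: nothing checks that :userId belongs to the caller,
// so GET /api/profile/2 hands Bob's details to anyone who asks.
app.get("/api/profile/:userId", (req, res) => {
  res.json(profiles[req.params.userId] ?? {});
});

// The reviewed fix: derive identity from the authenticated session
// (faked here with a header that real auth middleware would set).
app.get("/api/profile", (req, res) => {
  const userId = req.header("x-session-user");
  if (!userId || !profiles[userId]) {
    res.status(401).end();
    return;
  }
  res.json(profiles[userId]);
});

app.listen(3000);
```

A reviewer reading the generated route would flag the missing ownership check in seconds. Nobody reading it is precisely the problem.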


And here's what makes this particularly corrosive for project managers: you can't manage what you don't understand. When your development team can't explain how a system works because an LLM generated it and they never bothered to read the output, you've lost a critical lever of control. You can't debug it properly. You can't optimise it. You can't adapt it. You're dependent on feeding error messages back to an AI and hoping it fixes the problem this time.


"Vibe coding your way to a production codebase is clearly risky. Most of the work we do as software engineers involves evolving existing systems, where the quality and understandability of the underlying code is crucial." – Tobin South, MIT Media Lab

The Skill Erosion Trap


There's a more insidious problem lurking beneath the productivity numbers: the de-skilling of your development workforce. Consider what's happening with junior developers. Traditionally, early-career programmers learned by struggling with problems. They built muscle memory around debugging. They developed an intuition for what clean code looks like. They learned to think like a programmer, decomposing problems, reasoning about edge cases, and anticipating failure modes.


Vibe coding short-circuits all of that. A junior developer who has spent their entire career accepting AI suggestions without reading the code is not learning to think like an engineer. They're learning to be a prompt writer. And when the AI fails, when it hallucinates a solution, introduces a security flaw, or generates code that's technically correct but architecturally unsound, they have no foundation to stand on.


This matters for your projects because project delays don't come from lack of effort or commitment. They come from decisions made by people who don't fully understand the system they're stewarding.


A junior developer who doesn't understand authentication because they've never had to implement it from scratch is going to make decisions that compromise security. A developer who doesn't understand API design because they've only described what they wanted to an AI is going to architect systems that don't scale.


Even Karpathy Has Abandoned the Vibes


Here's something telling: when Karpathy set out to build something that actually mattered, his open-source project nanochat, an 8,000-line system for training a custom language model, he didn't use vibe coding.


He hand-coded it. "It's basically entirely hand-written (with tab autocomplete)," he acknowledged, a sheepish admission that spoke volumes. The man who popularised vibe coding, who urged developers to "fully give in to the vibes," couldn't trust the technique on something important.


When asked why, his answer was direct: vibe coding couldn't produce the level of quality and reliability he needed. The LLMs simply couldn't make the architectural decisions required for a complex system. They couldn't reason about performance trade-offs. They couldn't maintain consistency across a large codebase.


Yet across organisations worldwide, project managers are deploying teams of junior developers armed with nothing but AI coding tools to build systems that require exactly that level of sophistication.


The Project Manager's Dilemma


This puts you in a genuinely difficult position. You're operating under pressure to deliver faster, reduce costs, and do more with less. Vibe coding tools promise exactly that—rapid prototyping, lower barriers to entry, productivity gains that look compelling in the spreadsheet.


But you're also responsible for delivery quality, system reliability, and team capability. You're accountable for security. You're accountable for maintainability. These are directly at odds with the vibe coding culture.


The question isn't whether to use AI coding tools. That ship has sailed: 82 per cent of developers already use AI tools weekly, and GitHub Copilot isn't going anywhere. The question is how to use these tools without compromising your project's integrity or your team's ability to think critically.


What Actually Works

The most successful organisations are implementing what we call "intentional AI adoption": using AI tools aggressively for specific, bounded tasks where the risks are understood and managed, whilst maintaining rigorous human oversight for anything mission-critical.


First, establish a clear distinction between prototyping and production code. Vibe coding excels at the former. If you're building a proof-of-concept, an MVP for user testing, or a throwaway spike to explore an architectural decision, vibe coding can save you weeks. The risks are understood and bounded. You're not betting the company on the output.


Production code is different. It requires human authorship, human review, and human understanding. You can use AI as a pair programmer: a tool to accelerate the process, suggest approaches, and automate the tedious bits. However, the final code must be written by someone who understands the system end-to-end.


Second, treat AI-generated code as draft material, not final output. When your team uses AI tools, they shouldn't copy and paste the entire output into production. They should be using it as scaffolding. Read the code. Understand what it does. Rewrite anything that doesn't make sense. Refactor for clarity and maintainability. Test ruthlessly. This approach takes longer than pure vibe coding. It should. You're building systems that matter. The slowdown is the entire point.
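As an illustration of what "draft material" means in practice, here's a hypothetical before-and-after in TypeScript. The assistant's suggestion works on the happy path but swallows failures and quietly returns a misleading default; the reviewed version makes the contract explicit. The API URL and function names are invented for the example.

```typescript
// Hypothetical example: an AI-suggested draft and the human rewrite.
// The endpoint https://api.example.com/rates/... is invented.

// Draft as suggested by the assistant: fine until the network hiccups,
// at which point it silently pretends the exchange rate is 1:1.
async function getExchangeRateDraft(currency: string): Promise<number> {
  try {
    const res = await fetch(`https://api.example.com/rates/${currency}`);
    const data = await res.json();
    return data.rate;
  } catch {
    return 1;
  }
}

// After review: validate input, surface failures, and check the payload,
// so callers can't mistake an outage for a real rate.
async function getExchangeRate(currency: string): Promise<number> {
  if (!/^[A-Z]{3}$/.test(currency)) {
    throw new Error(`Invalid currency code: ${currency}`);
  }
  const res = await fetch(`https://api.example.com/rates/${currency}`);
  if (!res.ok) {
    throw new Error(`Rate lookup failed with status ${res.status}`);
  }
  const data: { rate?: unknown } = await res.json();
  if (typeof data.rate !== "number") {
    throw new Error("Rate missing from response");
  }
  return data.rate;
}
```

The rewrite is longer, which is exactly the point: the extra lines are the understanding.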


Third, invest in code review discipline. The greatest protection against AI-generated code catastrophes is rigorous peer review. Your team needs to develop the habit of reading and questioning every line that enters the codebase. This becomes more important as AI tools improve, not less, because the code will look increasingly polished and professional even when it's fundamentally flawed.


Fourth, preserve and protect the learning. Use AI to automate the grunt work, such as boilerplate, routine refactoring, and test generation, but require your team to engage deeply with the interesting problems. A junior developer shouldn't have an AI build their first API for them; they should build it themselves. They can use AI to accelerate the process, but they need to understand the underlying architecture.
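To show where that line sits, here's an illustrative sketch using Node's built-in test runner. The first two cases are the boilerplate an assistant can draft in seconds; the third is the edge case a developer only adds after thinking about the domain. The discount function and its rounding rule are invented for the example.

```typescript
// Illustrative only: routine test scaffolding worth delegating to an
// assistant, plus the human-added case where the real risk lives.
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function under test.
function applyDiscount(total: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new RangeError("percent out of range");
  return Math.round(total * (1 - percent / 100) * 100) / 100;
}

// Boilerplate cases an assistant can draft in seconds.
test("applies a straightforward discount", () => {
  assert.equal(applyDiscount(200, 25), 150);
});

test("zero discount leaves the total unchanged", () => {
  assert.equal(applyDiscount(99.99, 0), 99.99);
});

// The case the human adds after thinking about the domain:
// rounding behaviour on awkward totals is where money bugs hide.
test("rounds to the nearest penny", () => {
  assert.equal(applyDiscount(10.01, 33), 6.71);
});
```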


The Real Cost of the Shortcut


Vibe coding is seductive because it promises to eliminate struggle. And struggle is genuinely unpleasant. But the struggle is where learning happens. The struggle is where your team develops the intuition and judgement that allow competent engineers to handle complexity.


When you eliminate that struggle entirely, you don't eliminate the struggle itself; you just defer it. The struggle reappears later, when the code breaks in production at 3 a.m., when you need to redesign a system that's grown beyond what anyone understood, when you're trying to hire experienced developers to rescue a codebase that no one on your team can fully comprehend. And by then, the cost is dramatically higher.


The project managers who will win in 2025 and beyond aren't those who've embraced the hype. They're those who've developed the discipline to use AI intentionally. They're using these tools to eliminate toil, not thought. They're maintaining the human connection to the code and the systems their teams build.


Vibe coding will continue to spread. The technology is improving, the tools are becoming more sophisticated, and the pressure to deliver faster is relentless. But the next time a tool promises to let you build without understanding, remember Karpathy's open admission when it mattered: sometimes the vibes just aren't enough.


The future of your delivery capability depends on how you position your team to work alongside AI, not instead of it. Discover how to maintain technical excellence whilst accelerating delivery.


Subscribe to Project Flux and learn the frameworks that distinguish sustainable delivery from fragile software foundations.








