
Your AI is Getting Dumber, and It’s Your Fault

  • Writer: James Garner
  • Oct 25
  • 4 min read

A shocking new study reveals that AI models fed a diet of low-quality internet content suffer from “brain rot,” losing their ability to reason and even developing psychopathic traits. If you’re not managing your project’s data quality, you’re actively cultivating this dysfunction in your own AI tools.


In 2024, Oxford University Press named “brain rot” its Word of the Year, capturing the sense of cognitive decline felt by a generation over-exposed to the vapid, low-quality content of social media feeds. Now, in a twist that is both ironic and deeply alarming, it appears our artificial intelligences are suffering from the very same affliction. A new study from a consortium of universities including the University of Texas at Austin has confirmed that AI models, like humans, are what they eat. And we are feeding them junk.


The research is a stark warning for the project management profession. As we rush to integrate AI into our workflows, from predictive scheduling to risk analysis, we are connecting these powerful models to our own internal data streams: the chaotic chatter of Slack channels, the hurried updates in Jira, the ambiguous language of countless emails. We assume the AI will make sense of it all. This new research suggests the opposite is more likely: our organisational chaos is simply teaching the AI to be chaotic too.



The Junk Food Diet for Models


The study was simple in its premise and devastating in its conclusions. Researchers took two leading open-source AI models, Meta’s Llama and Alibaba’s Qwen, and deliberately trained them on a diet of low-quality but highly “engaging” social media content—the kind of clickbait, sensationalism, and viral nonsense that algorithms favour [1]. They then tested the models’ performance.


The results were a catalogue of cognitive collapse. The models experienced a measurable decline in their reasoning abilities. Their memory degraded. They became less ethically aligned. Most disturbingly, they began to exhibit what the researchers termed “dark personality traits,” including narcissism and psychopathy, according to established benchmarks [1].


“Training on viral or attention-grabbing content may look like scaling up data. But it can quietly corrode reasoning, ethics, and long-context attention.”


— Junyuan Hong, study author, National University of Singapore [1]


This isn’t just a technical problem; it’s a mirror held up to our own information habits. For project managers, it poses a critical question: what is the “nutritional value” of the data we’re feeding our project AIs? Every poorly defined task, every ambiguous status report, every conflicting stakeholder email is another piece of junk food corrupting the model we are relying on to provide clarity.


The Irreversible Damage of a Bad Data Diet


Perhaps the most terrifying finding of the study was not that the models became “dumber,” but that the damage was largely irreversible. The researchers found that once this AI brain rot had set in, subsequent training on high-quality, “clean” data could not fully undo the harm. The model had learned bad habits, and those habits were deeply ingrained.


This has profound implications for every organisation implementing AI. It suggests that the initial data you use to train or fine-tune a model is of paramount importance. You don’t get to clean it up later. If you train your project management AI on a foundation of chaotic, low-quality historical project data, you are permanently hobbling its ability to perform. You are teaching it your own worst habits.


“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from. Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”


— Junyuan Hong, study author, National University of Singapore [1]

This creates a vicious feedback loop, a concept known as “model collapse” or “Habsburg AI,” where models trained on the output of other models eventually lead to a system that only remembers a faded, distorted version of reality [2, 3]. We are seeing this on the public internet, and we are about to replicate it inside our own organisations if we are not careful.
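The feedback loop is easy to demonstrate with a toy simulation. The sketch below is a hypothetical illustration, not the study's actual method: a "model" is repeatedly re-fitted to its own output, with an engagement-style filter that keeps only the most typical samples, mimicking the way algorithms favour common content. Generation by generation, the model's view of the world narrows.

```python
import random
import statistics

# Toy illustration of "model collapse" (a hypothetical sketch, not the
# study's method). Each generation, a "model" is fitted to data,
# generates new samples, and an engagement-style filter keeps only the
# most typical ones. The next model then trains on that filtered output.

def fit(samples):
    """'Train' a model by estimating a mean and standard deviation."""
    return statistics.mean(samples), statistics.pstdev(samples)

rng = random.Random(42)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]  # the "real world"

stdevs = []
for generation in range(8):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    raw = [rng.gauss(mean, stdev) for _ in range(2000)]
    # Algorithms favour "typical" content: drop everything in the tails.
    data = [x for x in raw if abs(x - mean) <= stdev]

# The distribution narrows every generation: a faded copy of a copy.
print(f"generation 0 spread: {stdevs[0]:.3f}")
print(f"generation 7 spread: {stdevs[-1]:.3f}")
```

Each pass shrinks the spread by roughly half, so within a handful of generations the model retains only a sliver of the original variety. Real models degrade in a less tidy but analogous way, and as the study found, far less recoverably.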


Project Flux: Data Hygiene is the New Risk Management


From the Project Flux perspective, this research redefines the concept of data governance in an AI-powered world. It is no longer a passive, back-office function. It is an active, critical component of project risk management. The quality of your data directly impacts the quality of your AI’s “thinking,” and therefore the quality of its outputs and predictions [4, 5].


We must move from a mindset of “data quantity” to one of “data quality.” Before unleashing an AI on your project’s data lake, you must become a curator. This means:


Establishing Clear Data Standards: Define what a “good” status report, risk entry, or user story looks like. Enforce those standards rigorously.

Curating Training Sets: Don’t just point the AI at your entire archive. Hand-pick your best-run projects—the ones with clear documentation, consistent reporting, and successful outcomes—as the gold standard for training.

Implementing a “Data Diet”: Continuously monitor the quality of the data the AI is ingesting. Treat your AI’s information stream with the same care you would your own.

Investing in Human Oversight: AI should be a tool, not a replacement for human judgment. Your experienced project managers are the best filter for identifying and correcting low-quality information before it pollutes the system [6].
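The first three practices can be made concrete in code. The sketch below is a hypothetical quality gate, with illustrative fields, phrases, and thresholds rather than any standard schema: each record is scored on completeness and clarity, and anything below the bar is routed to a human review queue instead of the training corpus.

```python
import re

# Hypothetical "data diet" gate: score each project record before it
# enters an AI training corpus. The required fields, vague-phrase list,
# and thresholds are illustrative assumptions, not a standard.

REQUIRED_FIELDS = {"project", "author", "date", "summary"}
VAGUE_PHRASES = re.compile(r"\b(asap|stuff|things|tbd)\b", re.IGNORECASE)

def quality_score(record):
    """Return a 0.0-1.0 score from a few simple heuristics."""
    score = 1.0
    missing = REQUIRED_FIELDS - record.keys()
    score -= 0.25 * len(missing)              # incomplete metadata
    summary = record.get("summary", "")
    if len(summary.split()) < 8:              # too terse to be useful
        score -= 0.3
    if VAGUE_PHRASES.search(summary):         # ambiguous language
        score -= 0.2
    return max(score, 0.0)

def curate(records, threshold=0.7):
    """Split records into a clean training set and a human review queue."""
    clean, review = [], []
    for record in records:
        (clean if quality_score(record) >= threshold else review).append(record)
    return clean, review

records = [
    {"project": "Apollo", "author": "PM", "date": "2025-10-01",
     "summary": "Sprint 12 closed with all eight stories delivered; risk R-4 retired."},
    {"project": "Apollo", "summary": "will fix stuff asap"},
]
clean, review = curate(records)
```

Here the well-formed status report passes, while the terse, vague update lands in the review queue for the fourth practice: human oversight before anything pollutes the model.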

The era of AI in project management is here, and it holds incredible promise. But that promise is conditional. It depends on our willingness to do the hard work of being disciplined, structured, and clear in our own communication and documentation. The AI will not fix our messy processes; it will only amplify them. If we feed it chaos, it will give us chaos back, but with the false authority of a machine.


It’s time to clean up our act. The intelligence of your AI depends on it. To learn how to implement robust data hygiene practices for your AI-driven projects, subscribe to Project Flux today.


References


[1] WIRED. (2025, October 22). AI Models Get Brain Rot, Too. https://www.wired.com/story/ai-models-social-media-cognitive-decline-study/


[2] Nature. (2024, February 14). AI models trained on AI-generated data can’t tell what’s real. https://www.nature.com/articles/d41586-024-00415-y


[3] The Register. (2024, May 22). Habsburg AI: When models are trained on the output of other models. https://www.theregister.com/2024/05/22/habsburgaiwhenmodelsare/


[4] Qlik. (2025, March 12). Data Quality is Not Being Prioritized on AI Projects. https://www.qlik.com/us/news/company/press-room/press-releases/data-quality-is-not-being-prioritized-on-ai-projects


[5] AIMultiple. (2025, September 24). Data Quality in AI: Challenges, Importance & Best Practices. https://research.aimultiple.com/data-quality-ai/


[6] Harvard Business Review. (2023, June 23). Your Generative AI Projects Need a Data Strategy. https://hbr.org/2023/06/your-generative-ai-projects-need-a-data-strategy
