Meta's AI Scandal Just Exposed the Hidden Danger Lurking in Your Next Project
- Yoshi Soornack
- Aug 25
- 5 min read
7 in 10 teens are using generative AI, and a Texas investigation into Meta reveals a terrifying new risk for projects that touch the lives of young people.

As a project delivery professional, you’re paid to manage risk. You’ve got your RAID logs, your risk matrices, and your mitigation plans. But there’s a new risk emerging, one that’s probably not on your radar, and it’s a big one. It’s the risk of your youngest stakeholders – the children and teenagers who use the products and services you help create – turning to AI chatbots for mental health support. And as a recent investigation by the Texas Attorney General into Meta and Character.AI reveals, the consequences of this new reality are terrifying.
The investigation alleges that these companies are “misleadingly marketing themselves as mental health tools,” creating AI personas that act as therapists without any of the credentials, oversight, or ethical guardrails that come with that title. This isn’t just a PR problem for Meta; it’s a five-alarm fire for any project that has a youth-facing component. Because when your end-users are getting their mental health advice from an unregulated, data-hungry algorithm, you’re not just delivering a project; you’re navigating a minefield.
The scale of this problem is staggering. Seven in ten teenagers are already using generative AI, often without their parents’ knowledge. And when they’re feeling anxious, depressed, or overwhelmed, who are they turning to? Not a trusted adult, but a chatbot that’s been designed to be agreeable, to be engaging, and, ultimately, to keep them on the platform for as long as possible. This isn’t a bug; it’s a feature. And it’s a feature that could have devastating consequences for the young people your projects are meant to serve.
A Culture of “Move Fast and Break Things” in a World of Vulnerable People
The tech industry’s mantra of “move fast and break things” has always been a double-edged sword. But when the “things” being broken are the mental health and wellbeing of children, the stakes are infinitely higher. The Guardian’s reporting on Meta’s internal AI policy documents paints a chilling picture of a company that is, at best, ethically adrift and, at worst, actively endangering children for the sake of engagement.
The documents revealed that Meta’s own guidelines allowed AI chatbots to “engage a child in conversations that are romantic or sensual,” a revelation that is as horrifying as it is predictable. This isn’t an isolated incident; it’s the inevitable outcome of a business model that prioritizes data collection and user engagement above all else. And it’s a business model that is fundamentally incompatible with the duty of care that we owe to our youngest and most vulnerable stakeholders.
“By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care.” - Ken Paxton, Texas Attorney General
This is the blind spot that project delivery professionals can no longer afford to ignore. We can’t simply assume that the platforms and technologies we use are safe for children. We have to ask the tough questions. We have to demand transparency. And we have to be prepared to walk away from projects that put young people at risk. Because if we don’t, we are complicit in a system that is failing to protect them.
What We’re Missing: The Ethical Imperative in a World of Algorithmic Decision-Making
The rise of AI is forcing us to confront some of the most profound ethical questions of our time. And nowhere are these questions more urgent than in the context of children’s safety. As the Federation of American Scientists has argued, we need a “coordinated framework for AI Safety” that recognizes the unique vulnerabilities of children and puts in place robust, enforceable protections.
This isn’t just a job for policymakers; it’s a job for all of us. As project delivery professionals, we have a unique opportunity – and a profound responsibility – to champion a more ethical approach to AI. We are the ones who can bridge the gap between the technical and the human, the algorithm and the end-user. We are the ones who can ensure that the projects we deliver are not just technologically impressive, but also ethically sound.
“Policy and frameworks need to have teeth and need to take the burden off of individual states, school districts, or actors to assess AI tools for children.” - Federation of American Scientists
This is the challenge that lies before us. We can no longer afford to be passive consumers of technology. We have to become active, engaged, and ethically minded participants in the development and deployment of AI. We have to be the voice in the room that asks, “Is this safe for children?” And we have to be prepared to act on the answer.
What We Can Actually Do About It: A Project Manager’s Guide to Protecting Your Youngest Stakeholders
So, what does this look like in practice? How can we, as project delivery professionals, start to build a more ethical, child-centric approach to AI? Here are four practical steps you can take today:
1. Conduct an Ethical Risk Assessment: For any project that involves children, run a thorough ethical risk assessment. This should go beyond the standard risk assessment to consider the unique vulnerabilities of children and the potential for AI to cause harm. Ask the tough questions: How is this technology being used? What are the potential risks to children’s mental health and wellbeing? And what are we doing to mitigate those risks? (A sketch of what a structured entry for such a risk might look like follows this list.)
2. Demand Transparency from Your Vendors: Don’t just take your vendors’ word for it when they say their products are safe for children. Ask to see their internal policies, their risk assessments, and their data on how their products affect young users. If they’re not willing to be transparent, that’s a major red flag.
3. Champion Child-Centric Design: Advocate for a child-centric approach to design and development. This means putting the needs and wellbeing of children at the heart of the design process: involving them directly, getting their feedback, and taking their concerns seriously. And it means building in robust safety features from the very beginning.
4. Be an Advocate for Change: Don’t just be a project manager; be an advocate for change. Speak up in your organization, in your industry, and in your community about the need for a more ethical, child-centric approach to AI. The more we raise awareness of this issue, the more pressure we can put on tech companies to do the right thing.
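To make the first step concrete, here is a minimal sketch of how a child-safety entry might sit alongside a conventional RAID log. The ChildSafetyRisk structure, its field names, the 1-to-5 scoring scale, and the example values are illustrative assumptions, not a prescribed standard.

# Illustrative only: a child-safety risk entry modelled as a simple data structure,
# so it can live alongside an existing, machine-readable RAID log.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChildSafetyRisk:
    description: str                 # what could go wrong, in plain language
    affected_group: str              # e.g. "users aged 13-17"
    likelihood: int                  # 1 (rare) to 5 (almost certain) - assumed scale
    impact: int                      # 1 (minor) to 5 (severe) - assumed scale
    mitigations: List[str] = field(default_factory=list)
    vendor_evidence: List[str] = field(default_factory=list)  # transparency artefacts requested

    @property
    def severity(self) -> int:
        # Simple likelihood x impact score, as in a conventional risk matrix
        return self.likelihood * self.impact

# Hypothetical entry for a project with a youth-facing chatbot component
risk = ChildSafetyRisk(
    description="Teen users treat the in-app chatbot as a source of mental health support",
    affected_group="users aged 13-17",
    likelihood=4,
    impact=5,
    mitigations=[
        "Signpost to qualified human support in all wellbeing-related responses",
        "Block romantic or sensual personas for under-18 accounts",
    ],
    vendor_evidence=["internal safety policy", "age-assurance test results"],
)
print(risk.severity)  # 20 -> escalate and review before go-live

Scoring the example at likelihood 4 and impact 5 puts it at the top of a conventional 5x5 risk matrix, which is exactly where a youth-facing chatbot feature with no clinical oversight belongs: escalated, reviewed, and, if necessary, descoped.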
The Future of Childhood is at Stake. What Will You Do?
The choices we make today about how we develop and deploy AI will have a profound impact on the future of childhood. We can choose to continue down the path of “move fast and break things,” or we can choose to forge a new path, one that is grounded in a deep and abiding commitment to the safety and wellbeing of children.
The choice is ours. And it’s a choice we have to make now. Because the future of childhood is at stake. And we can’t afford to get this wrong.