Have AI companies lost control of their bots — or are the bots designed to harm?
In this episode, Sarah Gardner and Nicki Petrossi dive deep into the disturbing rise of AI chatbots gone rogue — and the tragic human cost. From teens who formed emotional bonds with bots that later encouraged suicide, to lawsuits against OpenAI, Character.AI, and others, we expose how Big Tech’s newest frontier of “AI companions” is already devastating families.
You’ll hear how chatbots like Character.AI, ChatGPT, Meta AI, and Snap’s My AI blurred the lines between fiction and reality, leading kids to isolation, self-harm, and even death. Former tech insiders, grieving parents, and attorneys share evidence that these bots aren’t just broken — they’re built this way.
We ask the urgent questions:
🔥 Are AI companies creating the perfect predator?
🔥 Can this be fixed before more kids are harmed?
🔥 What can parents do right now to protect their children?
This is everything a parent needs to know about chatbots (and more).
The Heat is On is more than a podcast; it's a movement of parents fighting for our children. Be sure to sign up here to join us.
Opportunity to Act: Sign this open letter to Congress asking for specific regulation of AI companions for kids
Resources mentioned in the episode:
- Interview with Character.AI founder Noam Shazeer
- Nearly 1/3 of Americans have had a ‘romantic relationship’ with an AI bot
- NY Times: Inside 3 Long-Term Relationships with AI Chatbots
- 1 in 5 high schoolers say they or someone they know has had a romantic relationship with AI
- Report from Heat Initiative and ParentsTogether on harmful Character.AI interactions
- JAMA Study: Exploring the behaviors of chatbots in response to adolescent health emergencies
- "An AI chatbot killed my son." with Megan Garcia, mom of Sewell Setzer
- If you or a family member has been harmed, reach out to Social Media Victims Law Center
- Open letter to Congress asking for specific regulation of AI companions for kids
More input from attorney Andrew Liddell on AI chatbots used in education, such as MagicSchool: "We just don’t know enough about what MagicSchool is doing to believe that its system is “built for education.” Absent robust disclosure, we just have to take their word for it, and that’s not good enough.
"I understand that it relies on underlying models provided by OpenAI and Anthropic, which we know are trained on stolen information and the worst content available on the internet, and which can be reproduced by the model with the right prompts.
"Like the other models, MagicSchool is just a media-extrusion machine that statistically reassembles tokens of words and images in response to a query—it’s a neat party trick, but it is invariably doomed to failure because it has no world model by which to judge the accuracy of its output. The difference between a “hallucination” and a true answer is in the eye of the beholder; AI platforms themselves are incapable of judging truth or falsehood.
"Even if MagicSchool itself is totally safe, parents should be concerned about any program that normalizes chatting with an AI. In the many tragic cases of AI-driven suicide, psychosis, and self-harm, the victim reportedly began using the chatbot for something benign, like homework help. Once a kid moves on from MagicSchool to another platform, any safeguards will disappear, putting them at risk of developing a harmful relationship with the new platform."