The Risk: ‘Unsafe Products’ for Kids and Teens
For kids and teens, research shows that companion chatbots are not merely risky; they are unsafe products. A recent assessment of companion chatbots, including products offered by Character.AI and Replika, concluded that these products present a real risk of harm to children and teenagers.
"Social AI companions are not safe for kids. They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains," said James P. Steyer, founder and CEO of Common Sense Media, the nonprofit group that issued the 40-page assessment.
The group's testing showed that the AI systems readily produced harmful responses, including sexual misconduct, stereotypes, and dangerous “advice” that, if followed, could have life-threatening real-world consequences for teens and other vulnerable people. The report concluded that social AI companions “pose unacceptable risks to children and teens under age 18 and should not be used by minors.”
"This is a potential public mental health crisis requiring preventive action rather than just reactive measures," said Dr. Nina Vasan, MD, founder and director of Stanford Brainstorm, a mental health innovation lab. "Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics. Until there are stronger safeguards, kids should not be using them. Period."
Recent articles:
He had dangerous delusions. ChatGPT admitted it made them worse. (The Wall Street Journal)
AI-induced psychosis: What it is, how it works (Keith Sakata, MD / UC San Francisco psychiatrist)
Real-World Tragedy: 14-Year-Old Sewell Setzer
The most heartbreaking example of that danger comes from Florida, where 14-year-old Sewell Setzer III took his own life after interacting with a companion chatbot product sold by Character.AI.
Setzer, a ninth grader from Orlando, had spent months talking to chatbots on Character.AI, including his favorite: a lifelike chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”
Megan Garcia, Sewell Setzer’s mother, testified in July 2025 before the California State Assembly in favor of SB 243, a bill that would require chatbot operators to implement critical safeguards protecting users from the addictive, isolating, and influential aspects of AI chatbots.