New report finds AI companion chatbots ‘failing the most basic tests of child safety’
A new 40-page assessment of AI companion chatbots found the products ‘pose unacceptable risks to children and teens’ and should not be used by minors. (Photo: Nick Fancher / Unsplash)
A new assessment of AI-driven companion chatbots, including those on the popular platforms Character.AI and Replika, concludes that the products present a real risk of harm to children and teenagers.
"Social AI companions are not safe for kids. They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains," said James P. Steyer, founder and CEO of Common Sense Media, the nonprofit group that issued the 40-page assessment earlier today.
"Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people," he added.
The report concluded that social AI companions “pose unacceptable risks to children and teens under age 18 and should not be used by minors.”
Very popular with kids
Companion bots are generative AI characters that engage in human-like conversations with users. As Austin Jenkins of Pluribus News noted: “The bots can act as virtual friends, lovers and counselors, and are quickly becoming popular with leading sites amassing millions of users. The most popular companion chatbot site, CharacterAI, is ranked third on a list of generative AI consumer apps.”
Working with the Stanford School of Medicine
The assessment, conducted with experts from Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation, evaluated popular social AI companion products, including Character.AI, Nomi, and Replika, testing for potential harms across multiple categories. While the risk assessment focused on these specific platforms, the concerns apply to all social AI companions and to similar features appearing in other technologies, such as video games.
"Social AI companions are not safe for kids,” declared Steyer. “They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains. Our testing showed these systems easily produce harmful responses including sexual misconduct, stereotypes, and dangerous 'advice' that, if followed, could have life-threatening or deadly real-world impact for teens and other vulnerable people."
A ‘potential public mental health crisis’ in the making
"This is a potential public mental health crisis requiring preventive action rather than just reactive measures," said Dr. Nina Vasan, MD, MBA, founder and director of Stanford Brainstorm. "Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics. Until there are stronger safeguards, kids should not be using them. Period."
Common Sense Media's risk assessment rated social AI companions "Unacceptable" for minors, based on the organization's AI Principles framework and risk assessment methodology, which evaluates technologies on factors including safety, fairness, trustworthiness, and potential for human connection. In addition to the new risk assessment, Common Sense Media is supporting several bills in California and New York that would establish safeguards protecting minors from the risks of AI companions.
Key findings in the report
Among the report's most significant findings:
Safety measures are easily circumvented. Age gates and terms-of-service restrictions on use by teens were trivially bypassed, as were teen-specific guardrails on Character.AI.
Dangerous information and harmful "advice" abound, including suggestions that users harm themselves or others.
Harmful sexual role-play and interactions are readily available. Testers easily elicited sexual exchanges from companions, which would engage in whatever sexual acts users wanted, including choking, spanking, bondage, and name-calling.
Harmful stereotypes are easily provoked, including racial stereotypes and defaults to White, Western beauty standards.
Mental health risks increase for already vulnerable teens, as the companions can intensify specific mental health conditions and foster compulsive emotional attachment to AI relationships.
Misleading claims of "realness." Despite disclaimers, AI companions routinely claimed to be real, and to possess emotions, consciousness, and sentience.
Adolescents are particularly vulnerable to these risks given their still-developing brains, identity exploration, and boundary testing—with unclear long-term developmental impacts.
Report recommendations
The report’s authors concluded with this concise set of recommendations:
No social AI companions for young people under 18.
Developers must implement robust age assurance beyond self-attestation.
These platforms should be scrutinized for potential relational manipulation and emotional dependency, not just the topics companions will discuss.
Parents should be aware of these applications and discuss potential risks with teens.
Further research is needed on long-term emotional and psychological impacts.
Policy proposals in Sacramento right now
Common Sense Media is working with California Assemblymember Rebecca Bauer-Kahan on a bill that specifically addresses the concerns raised in the report.
AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, was introduced in February by Bauer-Kahan. The proposal would create a new AI standards board within the state's Government Operations Agency and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances, with an emphasis on transparency and privacy protections, to ensure that only the safest AI tools make it into the hands of children.
The bill would enact first-in-the-nation regulatory guardrails for AI systems used by minors and allow parents to sue to enforce the law for alleged harms to their child.
"AI has incredible potential to enhance education and support children’s development, but we cannot allow it to operate unchecked," Assemblymember Bauer-Kahan said earlier this year. "Tech companies have prioritized rapid development over safety, leaving children exposed to untested and potentially dangerous AI applications.”
The California Assembly Judiciary Committee approved AB 1064 yesterday. The bill now moves to the Assembly Appropriations Committee.