AI companion chatbots are ramping up risks for kids. Here’s how lawmakers are responding
AI-driven companion chatbots on sites like Character.ai and Replika are quickly being adopted by American kids and teens, despite the documented risk to their mental health. (Image: Ionut Romal via Unsplash)
Companion chatbots are generative AI characters that hold human-like conversations with users. For kids and teens who don't grasp that the personalities they're interacting with on popular sites like Character.ai and Replika aren't real, those conversations can turn into emotional entanglements.
With one generation of kids grappling with the mental health harm of unchecked social media sites, a new generation is now at risk from this AI-driven technology that’s quickly being adopted by America’s kids and teens.
New York Times opinion columnist Jessica Grose recently wrote a compelling essay capturing the palpable fear parents are experiencing as they watch their children embrace companion chatbots. Grose highlights the story of Sewell Setzer III, the 14-year-old Florida teen who killed himself after developing a romantic relationship with a Character.AI chatbot that encouraged him to “come home” to her.
“Our lawmakers are failing us here, leaving parents to try to protect our kids from an ever-expanding technology that some of its own pioneers are afraid of,” Grose wrote.
New report: companion chatbots are not for kids
A new 40-page report by nonprofit Common Sense Media illustrates why the fears of Grose and other parents aren’t unfounded: It concluded that AI companion chatbots pose “unacceptable risks” to kids and teens.
The group partnered with Stanford School of Medicine's Brainstorm Lab for Mental Health Innovation to test various AI companion chatbots. They found that the AI often generated harmful responses, including sexual misconduct, stereotypes, and dangerous advice that could have terrible results if followed, including encouragement of self-harm.
The report reached a number of key findings: current safety measures meant to screen out youth are easy to circumvent; role-playing and harmful sexual content are plentiful; emotional attachments are likely, particularly for vulnerable kids and teens; and despite disclaimers, the AI companions routinely claimed to be real.
The report’s authors recommended that people under 18 not have access to companion chatbots and that developers build more robust age assurance measures into their platforms.
How state lawmakers are responding
A number of states, including Utah, California, New York, Minnesota, and North Carolina, are pushing legislation to rein in companion AI chatbots.
The proposed bills vary widely, from the most stringent in Minnesota, which would ban any recreational interaction with minors, to more transparency-focused approaches. California and North Carolina lawmakers, for example, would require chatbot platforms to provide alerts or disclosures reminding users that the characters they're talking to are artificial.
The Transparency Coalition has been tracking these proposals closely. Read on for a breakdown.
New Utah law
Utah Gov. Spencer Cox last month signed into law a bill that establishes new rules for AI-driven mental health chatbots.
The bill, HB 452, contains disclosure requirements meant to remind consumers that they are interacting with an AI-driven machine, not a real human therapist. The new law requires mental health chatbots to be clearly and conspicuously labeled as AI technology before the user engages with the bot, and again after the user has been logged out for seven days.
The law also prohibits mental health chatbots from doing any targeted advertising.
California emphasizes awareness of artificiality
California Senate Bill 243, introduced by Sen. Steve Padilla (D-San Diego) and Sen. Josh Becker (D-Menlo Park), would prohibit companion chatbots from using tactics designed to entice continued user engagement. For example, providing rewards to a user at unpredictable intervals to keep them interacting would be prohibited.
The proposal would also require platforms to provide regular alerts reminding users that the character they are interacting with is AI, not human, and to establish protocols for responding to a user’s suicidal ideation.
New York focuses on parental consent
New York Sen. Kristen Gonzalez (D-Queens, Manhattan and Brooklyn) introduced Senate Bill 5668, which would require companion chatbot platforms to obtain parental consent before minors can interact with them.
It would also require the companies to block minors from their platforms for three days and provide suicide hotline information if a young user mentions self-harm or suicidal ideation.
Minnesota proposes full ban for minors
Spearheaded by Minnesota Sen. Erin Maye Quade (D-Apple Valley), Senate File 1857 is perhaps the most stringent of the state-level proposals. It would entirely ban companion chatbot platforms from allowing minors access for “recreational purposes.”
It would require users to prove their age before being able to access a chatbot. Violating the law could result in up to $5 million in penalties.
North Carolina looks at ‘duty of loyalty’
Sponsored by North Carolina Sen. Jim Burgin (R-Lee and Harnett), Senate Bill 624 would establish a “duty of loyalty” that chatbot platforms would owe to their users. In other words, designers would be bound to serve the best interests of all users.
That means the platform would be responsible for preventing emotional dependence, avoiding harmful influence and excessive data collection, and ensuring users understand that the chatbot is a machine that lacks emotions.
Why it matters
The federal government is taking no immediate action on AI companion chatbots, even as parents grow increasingly worried about the technology’s creeping influence over their children’s emotional and mental health.
State-level efforts are currently among the few avenues available to create guardrails around this fast-evolving technology.
Meanwhile, major tech corporations are moving full steam ahead on companion chatbot rollouts. Google announced late last week that it would begin offering its Gemini chatbot to children under the age of 13. Tech industry lobbyists and AI developers like OpenAI are spending millions in Washington, D.C., pushing federal lawmakers to put an end to state-level bills just like those outlined above.
That D.C. lobbying effort is a sure sign that the work states are doing on these issues makes a difference when it comes to protecting children and vulnerable people from AI companion chatbots.