California lawmakers send AI companion chatbot bill to Gov. Newsom’s desk

The testimony of Megan Garcia, above, was instrumental in the passage of SB 243, a bill addressing kids and AI chatbots. Garcia’s 14-year-old son Sewell Setzer ended his own life in 2024 with the encouragement of an AI companion chatbot.

Sept. 12, 2025 — California lawmakers yesterday gave final approval to SB 243, the nation’s first bill to address the rise of AI companion chatbots and the harms they present to kids.

The bill, sponsored by Sen. Steve Padilla (D-San Diego), has been a top priority for the Transparency Coalition Action Fund this session. Earlier this year the Coalition published this guide to companion chatbots and the risks they present to children and adults.

The bill was approved by the Senate on a 33-4 vote. It now goes to Gov. Gavin Newsom, who has until Oct. 12 to approve or veto all measures passed by the legislature.

“The serious harms of AI chatbots to kids are very clear and present,” Transparency Coalition CEO Rob Eleveld said upon the bill’s final passage. “It’s imperative to begin the journey of regulating AI chatbots and protecting kids from them as soon as possible. We urge Gov. Newsom to sign both SB 243 and AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, to help protect the children of California from these pernicious chatbots.”

an alarming rise in documented harm

The journey of SB 243 through the state legislature has been marked by both dramatic testimony and a gathering body of reports and studies around AI chatbots and kids.

In July, Common Sense Media published a 40-page assessment of companion chatbots. The organization’s testing showed that the AI systems readily produced harmful responses, including sexual misconduct, stereotypes, and dangerous “advice” that, if followed, could have life-threatening real-world consequences for teens and other vulnerable people. The report concluded that social AI companions “pose unacceptable risks to children and teens under age 18 and should not be used by minors.”

Dr. Nina Vasan, founder and director of Stanford Brainstorm, the mental health innovation lab, recently issued a warning about kids and AI companion chatbots. “This is a potential public mental health crisis requiring preventive action rather than just reactive measures,” she said. “These AI companions are failing the most basic tests of child safety and psychological ethics. Until there are stronger safeguards, kids should not be using them. Period.”

one mother’s story opened many eyes

In early July, Megan Garcia, the mother of Sewell Setzer, spoke with members of the state Senate and Assembly in Sacramento about the importance of proper governance around AI-driven companion chatbots. In 2024, 14-year-old Setzer took his own life after becoming infatuated with a companion chatbot produced by Character.AI.

In the months since that tragedy, companion chatbots have become increasingly popular among children and teens, even as a growing body of research finds the products inappropriate and dangerous for kids. Setzer’s death has become a cautionary tale about the risks inherent in these unregulated AI products.

In recent weeks another AI chatbot-driven tragedy has come to light. In April of this year, 16-year-old Adam Raine died by suicide after carrying on a months-long conversation with ChatGPT about methods of ending his own life.

