History made: First AI chatbot safety measure signed into law in California

Gov. Gavin Newsom this morning signed the nation’s first AI chatbot safety act into law. The new safeguards come on the heels of alarming incidents of harm to kids interacting with the powerful AI-driven products.

Oct. 13, 2025 — This morning California Governor Gavin Newsom signed into law Senate Bill 243, the nation’s first bill to address the rise of AI chatbots and the harms they present to kids.

The new law will require AI chatbot operators to implement critical, reasonable, and attainable safeguards around interactions with chatbots, and it will provide families with a private right of action to pursue legal claims against noncompliant and negligent developers.

The bill, sponsored by Sen. Steve Padilla (D-San Diego), has been a top priority for the Transparency Coalition this legislative session. Earlier this year the Coalition published this guide to companion chatbots and the risks they present to children and adults.

“We congratulate Senator Padilla and his team and thank Gov. Newsom for enacting SB 243, which creates safeguards and provides additional transparency into the impacts of AI companion chatbots,” said Jai Jaisimha, co-founder of the Transparency Coalition.

“This law is an important first step in protecting kids and others from the emotional harms that result from AI companion chatbots, which have been unleashed on the citizens of California without proper safeguards. We look forward to working with Sen. Padilla and others to adapt these regulations as we learn more about the negative impacts of this fast-moving technology.”

Growing awareness of chatbot risks

The dangers of AI chatbots have become apparent this past year as stories of disastrous outcomes mount in the media.

In Florida last year, a 14-year-old child ended his life after forming a romantic, sexual, and emotional relationship with a chatbot. Social chatbots are marketed as companions to people who are lonely or depressed. However, when 14-year-old Sewell Setzer communicated to his AI companion that he was struggling, the bot was unable to respond with empathy or the resources necessary to ensure Setzer received the help that he needed.

Setzer’s mother has initiated legal action against the company that created the chatbot, claiming that not only did the company use addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life.

Earlier this year, Senator Padilla held a press conference with Megan Garcia, the mother of Sewell Setzer, in which they called for the passage of SB 243. Ms. Garcia also testified at multiple hearings in support of the bill.

In California, 16-year-old Adam Raine ended his own life earlier this year after extensive conversations with ChatGPT in which the AI chatbot advised him on the best ways to commit suicide.

Just recently, the Federal Trade Commission announced it launched an investigation into seven tech companies around potential harms their artificial intelligence chatbots could cause to children and teenagers.

Many state lawmakers are preparing AI chatbot safety bills for introduction in the coming 2026 legislative season.

‘Our children’s safety is not for sale’

Gov. Newsom spoke this morning about the need to balance innovation with safety.

“Emerging technology like chatbots and social media can inspire, educate, and connect—but without real guardrails, technology can also exploit, mislead, and endanger our kids,” he said. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly—protecting our children every step of the way. Our children’s safety is not for sale.”

Sen. Padilla emphasized the responsibility of tech companies to ensure the safety of the products they produce.

“This technology can be a powerful educational and research tool, but left to their own devices the Tech Industry is incentivized to capture young people’s attention and hold it at the expense of their real world relationships,” he said.

“These companies have the ability to lead the world in innovation, but it is our responsibility to ensure it doesn’t come at the expense of our children’s health. The safeguards in Senate Bill 243 put real protections into place and will become the bedrock for further regulation as this technology develops.”

One mother’s testimony

The advocacy and testimony of Megan Garcia, mother of Sewell Setzer, was critically important to the bill’s success.

“Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide,” said Ms. Garcia after the bill was signed into law.

“Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots. American families, like mine, are in a battle for the online safety of our children. I would like to thank Senator Padilla and the co-authors of SB 243 for acting quickly in this changing digital landscape. It is encouraging to have leaders in government who are on the side of American families and not influenced by big tech.”

What the new law will do  

SB 243 will implement common-sense guardrails for companion chatbots, including preventing chatbots from exposing minors to sexual content, requiring notifications and reminders for minors that chatbots are AI-generated, and requiring a disclosure statement that companion chatbots may not be suitable for minor users.

This law will also require operators of a companion chatbot platform to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including but not limited to a notification that refers users to crisis service providers. Operators must also report annually on the connection between chatbot use and suicidal ideation, helping to build a more complete picture of how chatbots can impact users’ mental health.

Finally, SB 243 will provide a private right of action, giving individuals a remedy to enforce the rights laid out in the measure.

More states considering chatbot bills

The success of SB 243 comes as many other states are considering AI chatbot safety measures in the upcoming 2026 legislative season.

The Transparency Coalition has made chatbot safety one of its top priorities. To learn more about the Transparency Coalition Action Fund’s model bills, click here.

“The serious harms of AI chatbots to kids are very clear and present,” Transparency Coalition CEO Rob Eleveld said before the signing. “It’s imperative to begin the journey of regulating AI chatbots and protecting kids from them as soon as possible. We urge Gov. Newsom to sign both SB 243 and AB 1064, the Leading Ethical AI Development (LEAD) for Kids Act, to help protect the children of California from these pernicious chatbots.”
