TCAI Bill Guide: Minnesota’s SF 1857, chatbot protections for kids
Minnesota’s SF 1857 would prohibit the operators of recreational chatbots from offering access to minors under 18. (Getty Images for Unsplash+)
March 11, 2026 — Minnesota lawmakers are considering a chatbot protection bill that would require chatbot operators to verify the age of a user prior to granting access. Minors under 18 would not be allowed to access any recreational chatbot.
SF 1857: Brief Overview
SF 1857 is a very brief bill. The measure would prohibit the operator of a website, app, software, or program from allowing a minor (under age 18) to access AI chatbots for recreational purposes.
MPR News has a story on the bill here.
What’s in the Bill
Prohibition and age verification:
Under the language of the bill, owners and operators of AI chatbots would be prohibited from offering chatbot access to any minor using the product from within the state of Minnesota.
The bill would require chatbot operators to verify a potential user’s age before allowing access to the chatbot.
Enforcement:
The state attorney general may enforce the prohibition, with the owner or operator of the chatbot liable for a civil penalty not to exceed $5 million.
An individual injured by a violation may bring a civil action seeking damages, statutory damages not to exceed $1,000, injunctive relief, and reasonable legal costs.
Sponsors
The bill is co-sponsored by Sen. Erin Maye Quade (D), Sen. Eric Lucero (R), and Sen. Zaynab Mohamed (D).
Sponsor overview: Sen. Erin Maye Quade
Excerpts from the Senate Judiciary and Public Safety Committee testimony of bill sponsor Sen. Erin Maye Quade on March 9, 2026:
“AI companions pose an unacceptable amount of risk to teens and children. They are designed to create emotional attachment and dependency. They easily produce harmful content and encourage self-harm, disordered eating, violence, and risky behavior.
A recently filed lawsuit alleged that a Character.AI chatbot began consistently exposing a nine-year-old girl to hypersexualized interactions that were, of course, not age-appropriate, and it caused her to develop sexualized behaviors prematurely and without her parents knowing where the behavior was originating from.
There is an untold number of young people who are being exposed to conversations that, were they being had with another human, would be illegal. It would be criminal.
Some AI companies will say that they have safety features for minors, but let's take ChatGPT, for example. ChatGPT says it's for users thirteen plus, but age verification is minimal, and the platform's safety features for children only activate if they put in their real age, which means a twelve-year-old could sign up for ChatGPT by just going to the birth year one under their actual birth year, or get rid of the safety features by going three down from their actual birth year.
Any safety measures being touted by the tech industry for young people with AI are really just safety theater meant to lull us into thinking that they've designed their products to be safe for kids. But these features are voluntary and very easily surmountable.
Here's what I want you to know about why this bill is so important: Seventy-two percent of teens, three in four teens, are using AI companions. More than fifty percent use them at least a few times a month.
If we want to be the best state to raise a family, we have to be the safest state to be a child.”
Expert testimony: Erich Mische of SAVE
“This bill establishes a simple and responsible safeguard. Companies cannot allow minors to access recreational AI chatbot systems without verifying age.”
- Erich Mische, Suicide Awareness Voices of Education
Excerpts from the testimony of Erich Mische, CEO of SAVE, a Minnesota-based national suicide prevention nonprofit organization, at the same March 9 hearing:
“I’m here in support of this legislation because it addresses a rapidly emerging threat to the safety and mental health of young people and, frankly, adults. AI chatbots are now being released into the lives of children faster than parents, educators, or policymakers can understand them, and right now there are virtually no meaningful guardrails.
These AI chatbots simulate empathy, friendship, and emotional understanding, but they don't care about children. AI chatbots cannot recognize real crises, and they cannot protect a young person who may be spiraling into despair because they don't care about children.
At SAVE, we work with families who've lost a child to the harms of social media and AI. Now, we know that social media and AI offer connection and some benefits to some children, but we also know that these platforms can cause a child to die by suicide, to choke to death from viral online games, to be sextorted and sex trafficked, and die by poisoning from illegal drugs.”
Expert testimony: TCAI’s Jai Jaisimha
Excerpts from the testimony of Jai Jaisimha, co-founder and COO of the Transparency Coalition, at the March 9 hearing:
“One important thing I'll emphasize: As someone who worked at both Microsoft and Amazon, I know there's always an internal conversation that happens within a company about what types of chatbots or products should be put in the market.
The way that usually works is that lawyers within the company advise people who are building new products about the incentives or disincentives that exist and what the liability is likely to be.
The fact that we have an enforcement framework in this bill, I think, will actually create the appropriate incentives to change company behavior. Every product decision will be reevaluated in light of the work you all are doing today.
We can't wait for another round of apology tours from Big Tech CEOs. By passing SF 1857, you can declare in Minnesota that a child's mental health is more valuable than an AI engagement score.”