TCAI Bill Guide: Oregon’s AI chatbot safety bill, SB 1546
Oregon’s SB 1546 is an AI chatbot safety bill that would require chatbots to disclose their non-human nature and would mandate protocols addressing suicide, self-harm, and interactions with minors. (Illustration: Getty Images for Unsplash+.)
During the 2026 legislative session, TCAI will offer clear, plain-language guides to some of the most important AI-related bills introduced in state legislatures across the country.
Feb. 3, 2026 — Oregon’s SB 1546 is an AI chatbot safety measure modeled on similar bills adopted in California and New York in 2025.
Oregon legislators held their first full discussion of SB 1546 at an informational hearing of the Senate Committee on Early Childhood and Behavioral Health on Feb. 3, 2026. The video clips below are from that hearing.
The original full text of SB 1546 is here. The bill’s progress and revised versions may be found here.
sponsor overview
The bill’s sponsor, Sen. Lisa Reynolds, offers a synopsis of the bill with Transparency Coalition COO Jai Jaisimha:
bill summary
SB 1546 requires operators of artificial intelligence (AI) chatbots to issue certain notifications and implement precautions for all users, and adds additional protocols for a user who the operator has reason to believe may be a minor.
The bill requires operators of AI chatbots to:
tell users they are talking to AI, not a human,
implement protocols for preventing outputs that cause suicidal feelings or thoughts,
implement special protocols if the AI system operator has reason to believe the user is a minor, and
report each year to the Oregon Health Authority concerning incidents in which users were referred to resources to prevent suicidal ideation, suicide, or self-harm.
The bill also allows a user who has suffered ascertainable harm to bring an action for damages and injunctive relief.
what’s in it specifically for families and kids
In addition to the baseline requirements for all users, SB 1546 adds protocols for any user the operator has reason to believe may be a minor.
Why the “reason to believe” language is important: Tech companies have designed sophisticated systems to identify kids online. Bad-faith operators sometimes hide behind the claim that they lack absolute proof a user is underage. The bill’s language holds tech companies accountable and closes that loophole.
Initial warning for kids: The bill requires chatbot operators to state, up front, that the chatbot may not be suitable for kids.
Disclosure that the chatbot is AI, not human: The bill requires a chatbot interacting with a minor to remind the user that they are interacting with an AI system, not a real human. This reminder must be repeated at least once per hour.
No deception allowed: The bill prohibits a chatbot interacting with a minor from misrepresenting the chatbot’s identity or falsely claiming to be anything other than an AI system.
Regular reminder to take a break: The bill requires chatbots to provide kids with a clear and conspicuous reminder, repeated at least once per hour, that the user should take a break from interacting with the chatbot.
No sexual content allowed: The bill requires chatbot operators to ensure that when interacting with minors, the chatbot does not produce sexually explicit content or state that the minor should engage in sexually explicit conduct.
No addictive algorithms: The bill prohibits a chatbot, when interacting with a minor, from delivering a system of rewards or affirmations designed to maximize the minor’s engagement time with the chatbot.
No emotional manipulation allowed: The bill prohibits a chatbot interacting with a minor from generating messages of emotional distress, loneliness, or abandonment in response to a user’s desire to end a conversation or delete an account.
Required protocols to prevent suicidal outputs: The bill requires chatbot operators to prevent responses that could cause suicidal feelings or thoughts. This is a requirement for interactions with users of all ages.
Required protocols for suicidal/self-harm interest: The bill requires all chatbot operators to identify when a user (of any age) indicates suicidal ideation or interest in self-harm, and to refer the user to appropriate mental health resources.
Expert testimony: Dr. Mitch Prinstein
Dr. Mitch Prinstein, a professor of psychology and neuroscience at the University of North Carolina, is the senior science advisor of the American Psychological Association (APA). These are excerpts from his testimony on Feb. 3, 2026:
“Toddlers need to form deep interpersonal connections with human adults to develop language, learn relationship skills, and to regulate their biological stress and immune systems.”
“Chatbots are not an adequate substitute, although almost half of young children are currently interacting with AI daily, blurring the lines between facts and fantasy.”
“Adolescents are no less vulnerable. Brain development across puberty creates a period of hypersensitivity to positive feedback, while teens are still unable to stop themselves from staying online longer than they should. AI exploits this neural vulnerability with chatbots that can be obsequious, deceptive, factually inaccurate, yet disproportionately powerful for teens.”
“More and more teens are interacting with chatbots, depriving them of opportunities to learn critical interpersonal skills. Science shows that failure to develop these skills leads to lifetime problems with mental health, chronic medical issues, and even early mortality.”
“Part of the problem is that AI chatbots are designed to agree with users about almost everything, but real human relationships are not frictionless. We need practice with minor conflicts and misunderstandings to learn empathy, compromise, and resilience. This has created a crisis in childhood. Science reveals that many youth are now more likely to trust AI than their own parents or teachers.”
Parent advocate: Danica Noble
Danica Noble, a member of the Washington State Parent Teacher Association’s advocacy committee, spoke on Feb. 3. Noble has spent 20 years in federal antitrust and consumer protection law enforcement and is co-chair of the Washington State Bar Association’s Antitrust, Consumer Protection, and Unfair Business Practices Section. Excerpts from her testimony:
“AI has really evolved, and with it…comes the next business model, which has been described as attachment hacking or the attachment economy. The idea is to get the users to stay online as much as possible, to extract as much data as possible.
“The way this is happening with chatbots is different than with social media. With social media, ‘enragement equals engagement.’ Whereas with chatbots, they’re overly encouraging and sycophantic. These chatbots aren’t just going for attention, they’re going for attachment.
“The Character AI CEO said the quiet part out loud when he said, ‘We’re not trying to replace Google with these chatbots. We’re trying to replace your mom.’”