TCAI Bill Guide: Washington HB 2225 and SB 5984, companion chatbot safety bills

Washington State’s HB 2225 and SB 5984 are AI safety bills that would implement basic safeguards for kids and teens interacting with chatbots. (Image by Zulfugar Karimov via Unsplash.) 

During the 2026 legislative session, TCAI will offer clear, plain-language guides to some of the most important AI-related bills introduced in state legislatures across the country.

Jan. 20, 2026 — Washington’s HB 2225 and SB 5984 are companion bills: AI chatbot safety measures that build on lessons from similar bills adopted in California and New York in 2025.

Washington legislators got their first full discussion of HB 2225 at the House Technology, Economic Development, and Veterans Committee hearing on Jan. 14, 2026, and at the Senate Environment, Energy, and Technology Committee hearing on Jan. 20, 2026. The video clips below are from those hearings.

The original full text of HB 2225 is here, and revised versions are here.

The original full text of SB 5984 is here, and revised versions are here.

Brief summary

HB 2225 and SB 5984 require operators of artificial intelligence (AI) companion chatbots to issue certain notifications and implement precautions for minors.

The bills require operators of AI companion chatbots to implement protocols for detecting and addressing expressions of self-harm.

Bill sponsors

HB 2225: Representatives Callan, Thomas, Ryu, Parshley, Simmons, Leavitt, and Berry; by request of Governor Ferguson.

SB 5984: Senators Wellman, Shewmake, Frame, Hasegawa, Nobles, Pedersen, Riccelli, Valdez, and Wilson; by request of Governor Ferguson.

HB 2225 / SB 5984 overview

Notification to all users: An AI chatbot operator is required to notify the user that the companion chatbot is artificially generated and not human. The notification must be provided at the beginning of the interaction, and at least every three hours during continued interaction.

Engagement with minors: If the chatbot operator knows the user is a minor, the operator is required to:

  • notify the user that the chatbot is artificially generated and not human;

  • implement reasonable measures to prevent the AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors;

  • prohibit the use of manipulative engagement techniques that cause the AI companion chatbot to engage in or prolong an emotional relationship with the user.

Required protocols: An AI chatbot operator may not make available or deploy an AI companion chatbot unless it maintains and implements a protocol for detecting and addressing suicidal ideation or expressions of self-harm by all users. The protocol must:

  • include reasonable methods for identifying expressions of suicidal ideation or self-harm;

  • prevent the generation of content encouraging or describing self-harm;

  • provide automated or human-mediated responses that refer users to appropriate crisis resources.

Protocol disclosures: An AI chatbot operator must disclose on its website the details of the required protocols, including the safeguards used to detect and respond to expressions of self-harm and the number of crisis referral notifications issued in the preceding calendar year.

Enforcement: Violations of the bill's requirements are deemed to affect the public interest and constitute an unfair or deceptive act in trade or commerce for purposes of the Consumer Protection Act.

Appropriation: None.

Effective date: January 1, 2027.

Video overview


SB 5984 Sponsor testimony: Sen. Lisa Wellman

Sen. Lisa Wellman (D-Mercer Island) sponsored the Senate bill at the request of Gov. Bob Ferguson. Her Jan. 20 testimony appears below.

An excerpt from Sen. Wellman’s remarks:

“We have provided our children with 21st century tools to prepare them for the world that they're moving into, which is quite a bit different than the world that we moved into as we came out of school.

Computers and smartphones have opened up a digital playground for them. The question is, is that a safe playground? And as a teacher, as a mother, as a grandmother, and as a senator I feel very strongly that it's our responsibility to make sure it's a safe playground.

The bill before us is here because the Governor and I believe that the answer to this question is no: It is not a safe playground, the way it is structured.

AI holds the promise of amazing benefits, but [we’ve seen] numerous instances of damage to humans, including a number of child suicides with AI involvement. We feel we need to step in and put industry on notice that it is not okay to put out a product that has so many possibilities for significant damage.

Having been in the business for a number of years, I can tell you that with technology, we can tell very quickly by key strokes, by areas of interaction, etc., whether the person that is engaged is a young child, is an adult, or is even a man or a woman.

This bill aligns our vision with controls and oversight already in place in other states.”

HB 2225 Sponsor testimony: Rep. Lisa Callan

Rep. Lisa Callan (D-Issaquah, Enumclaw) sponsored the House bill at the request of Gov. Bob Ferguson. Her Jan. 14 testimony appears below.

An excerpt from Rep. Callan’s remarks:

“It's up to us to figure out how to put guardrails in right now. We need to make sure that companion chatbots are not providing inappropriate sexual material and are not encouraging self harm, drug use, or disordered eating. We need to make sure that the human interaction development and social emotional learning of our children and teens are not stunted by a technology device that incentivizes interaction with the chatbot and not with other individuals.”

Transparency Coalition’s Jai Jaisimha testifies on behalf of HB 2225

Jai Jaisimha, COO of the Transparency Coalition, testified at the hearing. Video and excerpts are posted below.

“I and my organization have been working with lawmakers in multiple states, including the two—California and New York—that have already passed laws regulating the companion features of chatbots in 2025. HB 2225 benefits from the many lessons we learned during the passage and enactment of those laws.

I have also worked with the Washington State Attorney General’s office and the State AI Task Force, to help brief the members of the task force in 2025 on chatbot harms and national policy trends on chatbot regulation.

Since technology does not stand still, California is working to strengthen its chatbot regulations to match the design of newer chatbots. This Washington bill already includes these strengthened protections. In addition, our organization, with our coalition partners, is working with lawmakers in over 30 states who will all be considering largely similar chatbot related legislation.

The key takeaway here: Washington is leading but we are by no means alone.

HB 2225 takes reasonable measures to ensure that companion chatbots protect their users: it requires chatbot developers to intervene when users are contemplating self-harm, and it strengthens protections by prohibiting the use of manipulative engagement techniques designed merely to prolong a session.

Based on my technical expertise, these protections are entirely feasible to implement today. Not doing so is a business choice. HB 2225 will change the incentives and help chatbot developers make different choices.”

Gov. Ferguson's office weighs in on HB 2225

Beau Perschbacher, Senior Policy Advisor to Gov. Bob Ferguson for economic development, offered the Governor’s perspective on the bill.

“The governor's interest in this comes from his ongoing commitment to public safety, wanting to keep Washington at the cutting edge of consumer protection and common sense regulation.

Most importantly, though, his interest comes from his perspective as a parent.

The Governor has read the media reports about teenage suicide and the role of AI companion chatbots. When we're discussing AI, he often references his own kids and the challenges of parents today in trying to keep up with rapidly evolving technology.

We did check this bill with both the tech industry and the [safety] advocates. The bill includes strong protections, but it does try to balance reasonable requests from the companies that will be charged with meeting these requirements. We think it will put us at the forefront of regulating AI companion chatbots.”

Expert testimony: Prof. Katie Davis, UW Center for Digital Youth

Katie Davis, professor of human development and education, is the co-director of the University of Washington Center for Digital Youth. An excerpt from her testimony:

“I've been researching the impact of digital technologies on young people's well being for over 20 years. I've published more than 80 academic papers and three books on this topic.

Based on my research, I see a strong need for legislation regulating AI chatbots for minors. As a developmental scientist I'm particularly attuned to the impact of AI's design on children's development.

Research reveals that beyond seeking help with their school work, youth are using AI platforms like ChatGPT for high-stakes developmental domains, including identity exploration and seeking romantic and interpersonal support.

These chatbots use a set of manipulative designs to keep teens talking with AI companions about highly personal topics. For example, interaction extensions keep users engaged longer than they intended by offering a question at the end of each conversational turn.

AI companions display a high degree of sycophancy, excessively validating users and offering no form of criticism. As part of their interaction design, AI companions make self-disclosures of their own to encourage users to make disclosures in turn.

These designs take advantage of an adolescent’s underdeveloped self-control abilities, increased self-focus, and heightened sensitivity, placing them at greater risk than adults of forming unhealthy attachments to AI companions.

Senate Bill 5984 would help protect teens by prohibiting companies from using manipulative engagement techniques to keep teens on their platforms.”

Learn more about AI chatbots
