2026 model bills
Transparency Coalition Action Fund offers four bills for the coming session
Sept. 1, 2025 — In these quiet months prior to the January opening of the 2026 state legislative season, the Transparency Coalition Action Fund is offering four new AI-related model bills for state lawmakers to consider sponsoring.
Each of the bills engages with one of our top priorities in AI safety and transparency:
Require AI protections for kids and teens.
Clearly label AI chatbots, images, and videos.
Protect individuals from the misuse and abuse of their personal likeness.
Hold companies accountable for harmful AI products.
For complete bill language, and to speak with us directly about sponsorship, support, and partnering possibilities, please contact Transparency Coalition co-founder Jai Jaisimha.
chatbot safety
A bill addressing risks to health and well-being caused by AI chatbots
The overnight rise of AI chatbots presents a host of risks to kids and teens, some of which have already led to heartbreaking tragedy. AI companies are using our children to test their unsafe products, “moving fast” and breaking families.
“The serious harms of AI chatbots to kids are very clear and present,” says Transparency Coalition CEO Rob Eleveld. “It’s imperative to begin the journey of regulating AI chatbots and protecting kids from them as soon as possible.”
Overview
TCAF’s model Chatbot Safety Bill covers products designed as companion AI, as well as general-purpose chatbots that can provide companion-like features.
What consumer harm does it address?
- Risks to the mental health and well-being of minors and adults, arising from product features that enable:
  - the formation of unhealthy dependencies,
  - behavioral manipulation, and
  - exposure to harmful or inappropriate content.
What are the bill's requirements?
- Persistent notification to end users that they are interacting with an AI system.
- Prohibited practices when engaging with a minor:
  - manipulative engagement mechanics,
  - simulated distress (to promote retention), and
  - deceptive misrepresentation (creating the impression that the product is a human).
- Establish and maintain a Crisis Intervention Protocol (a brief illustrative sketch follows this summary):
  - identify user expressions that indicate a risk of self-harm;
  - upon detection, immediately interrupt the conversation and direct the user to a crisis line.
- Enforcement: defects that cause injury can be pursued as product-defect claims, either through a private right of action or by the state attorney general.
How will the requirements address the harms?
- Notifications and the prohibitions on manipulative features reduce the risk that minors and others will develop emotional dependencies on a companion AI chatbot.
- The Crisis Intervention Protocol reduces the risk of self-harm.
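For readers curious how a developer might begin to satisfy the Crisis Intervention Protocol requirement, here is a minimal sketch in Python. It is illustrative only, not bill language: the keyword patterns, the `respond` wrapper, and the `generate_reply` placeholder are all assumptions, and a production system would rely on trained risk classifiers and jurisdiction-appropriate crisis resources rather than simple pattern matching. The 988 Suicide & Crisis Lifeline referenced below is the United States' national crisis line.

```python
# Illustrative sketch of a minimal Crisis Intervention Protocol layer.
# The patterns and function names here are hypothetical stand-ins;
# real systems would use trained classifiers and human escalation.

import re

# Hypothetical examples of user expressions indicating self-harm risk.
RISK_PATTERNS = [
    re.compile(r"\b(kill|hurt|harm)\s+myself\b", re.IGNORECASE),
    re.compile(r"\bsuicid(e|al)\b", re.IGNORECASE),
    re.compile(r"\bend\s+(it\s+all|my\s+life)\b", re.IGNORECASE),
]

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or "
    "texting 988 (US). I'm pausing our conversation so you can get help."
)


def detect_self_harm_risk(user_message: str) -> bool:
    """Return True if the message matches any risk pattern."""
    return any(p.search(user_message) for p in RISK_PATTERNS)


def respond(user_message: str, generate_reply) -> str:
    """Wrap a chatbot's normal reply generation with a crisis interrupt.

    `generate_reply` is a placeholder for whatever function produces
    the model's ordinary response.
    """
    if detect_self_harm_risk(user_message):
        # Interrupt the conversation and direct the user to a crisis line.
        return CRISIS_MESSAGE
    return generate_reply(user_message)


if __name__ == "__main__":
    print(respond("I want to end my life", lambda m: "normal model reply"))
```

The design point in this sketch is that risk screening happens before the model's reply is ever produced or shown, so the interrupt does not depend on the chatbot's own output behaving safely.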