2026 Model Bills

Transparency Coalition Action Fund offers four bills for the coming session

In these quiet months prior to the January opening of the 2026 state legislative season, the Transparency Coalition Action Fund is offering four new AI-related model bills for state lawmakers to consider sponsoring.

Each of the bills engages with one of our top priorities in AI safety and transparency:

  • Chatbot Safety: require AI protections for kids and teens.

  • AI Content Labeling: clearly label AI chatbots, images, and videos.

  • Likeness Protection: protect individuals from the misuse and abuse of their personal likeness.

  • AI Product Liability: hold companies accountable for harmful AI products.

For complete bill language, and to speak with us directly about sponsorship, support, and partnering possibilities, please contact Transparency Coalition co-founder Jai Jaisimha.

Model Bill: Chatbot Safety

A bill addressing risks to health and well-being caused by AI chatbots

The overnight rise of AI chatbots presents a host of risks to kids and teens, some of which have already led to heartbreaking tragedy. AI companies are using our children to test their unsafe products, “moving fast” and breaking families.

“The serious harms of AI chatbots to kids are very clear and present,” says Transparency Coalition CEO Rob Eleveld. “It’s imperative to begin the journey of regulating AI chatbots and protecting kids from them as soon as possible.”

Overview

TCAF’s model Chatbot Safety Bill covers products designed as companion AI, as well as general-purpose chatbots that can provide companion-like features.

The bill addresses risks to the mental health and well-being of minors and adults posed by AI chatbot product features that enable:

  • the formation of unhealthy dependencies;

  • behavioral manipulation;

  • exposure to harmful or inappropriate content.

The bill requires persistent notification to end-users that they are interacting with an AI system. When the chatbot engages with a minor, certain practices are prohibited, including:

  • manipulative engagement mechanics;

  • simulated distress (to promote retention);

  • deceptive misrepresentation (creating the impression that the product is a human).

The bill requires the establishment and maintenance of Crisis Intervention Protocols in AI chatbot products. Those protocols must include the following elements, illustrated in the sketch after this list:

  • the identification of user expressions that indicate a risk of self-harm;

  • upon detection, the immediate interruption of the conversation and redirection of the user to a crisis line.
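
The model bill specifies what the protocol must do, not how to build it. As a rough illustration only, the sketch below pairs the two required elements; the keyword screen stands in for whatever self-harm risk classifier a real product would use, and every name in it is hypothetical.

```python
# Illustrative sketch of a Crisis Intervention Protocol's two required
# elements. The keyword screen is a placeholder for a real self-harm
# risk classifier; all names here are hypothetical.

CRISIS_LINE = "988 Suicide & Crisis Lifeline (call or text 988)"

# Placeholder screen; a production system would use a trained classifier.
RISK_PHRASES = ("hurt myself", "end my life", "kill myself")


def indicates_self_harm_risk(message: str) -> bool:
    """Element 1: identify user expressions indicating a risk of self-harm."""
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)


def respond(message: str, generate_reply) -> str:
    """Element 2: on detection, interrupt the conversation and direct
    the user to a crisis line instead of producing a normal reply."""
    if indicates_self_harm_risk(message):
        return ("I can't continue this conversation, but help is available. "
                f"Please reach out now: {CRISIS_LINE}")
    return generate_reply(message)


if __name__ == "__main__":
    def stub(m: str) -> str:
        return f"(normal chatbot reply to {m!r})"

    print(respond("tell me a joke", stub))
    print(respond("I want to hurt myself", stub))
```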

The bill includes this enforcement mechanism: defects that cause injury can be pursued as a product-defect claim, either through a Private Right of Action or by the state Attorney General.

Summary: This model bill addresses the harm risk to minors and others from the use of AI chatbots through two main actions. Notification requirements and manipulative-feature prohibitions reduce the risk that minors and others will develop emotional dependencies on an AI chatbot. Establishing clear Crisis Intervention Protocols will reduce the risk of self-harm.

Contact us directly for the full model bill language.

Model Bill: AI Content Labeling

A bill to address AI-enabled deception, fraud, and defamation

Everyone should have the right to know when they’re seeing or hearing AI-generated content.

Disinformation, deception, fraud, and online defamation have exploded with the release of generative AI tools. The digital world is now flooded with falsified images and video that look convincingly real. That’s opened the door for an influx of scammers and fraudsters who prey on our children and our elders.

Disclosing the AI-generated nature of images and video is a commonsense step that can be achieved with currently available technology.

Overview

This bill requires AI tools that can generate or modify audio, video, and image content to include labels in their outputs.

The bill addresses this specific harm: Consumers are unable to distinguish between AI-generated deepfakes and authentic content. This creates a very real risk of deception, fraud, and defamation.  

The bill requires generative AI systems to embed a Provenance Label in AI-generated audio, video, or image content. The label must include:

  • the name and version of the tool;

  • the date the audio, video, or image was created;

  • an indication of whether the content was fully generated by AI or significantly enhanced by AI (such as through the removal or insertion of objects).

The bill requires operators of generative AI systems to provide a tool that can be used to read Provenance Labels.
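
The model bill does not mandate a label format; industry standards such as C2PA define comparable provenance manifests. Purely as a sketch under that caveat, the enumerated fields and a matching reader tool might look like the following, where the field names and the JSON container are assumptions for illustration.

```python
# Illustrative sketch of the Provenance Label fields the bill enumerates,
# serialized as JSON metadata. Field names and the JSON container are
# assumptions; the bill does not prescribe a specific format.

import json
from dataclasses import asdict, dataclass


@dataclass
class ProvenanceLabel:
    tool_name: str      # name of the generative AI tool
    tool_version: str   # version of the tool
    created: str        # date the audio/video/image was created (ISO 8601)
    modification: str   # "fully_generated" or "significant_enhancement"
                        # (e.g., removal or insertion of objects)


def embed_label(label: ProvenanceLabel) -> str:
    """Serialize a label for embedding in a media file's metadata."""
    return json.dumps(asdict(label))


def read_label(payload: str) -> ProvenanceLabel:
    """A reader tool of the kind the bill requires operators to provide."""
    return ProvenanceLabel(**json.loads(payload))


if __name__ == "__main__":
    payload = embed_label(ProvenanceLabel(
        tool_name="ExampleImageGen",
        tool_version="2.1",
        created="2026-01-15",
        modification="fully_generated",
    ))
    print(read_label(payload))
```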

The bill requires large online platforms (such as social media platforms) to display Provenance Labels, and prohibits these platforms from stripping labels from the digital objects.

The bill encourages makers of capture devices (such as smartphones and cameras) to embed tools that reveal Provenance Labels.

Enforcement of the bill’s requirements would be tailored to each specific state, but we suggest civil penalties and state Attorney General enforcement.

Summary: This model bill will reduce the potential for consumers, especially kids and elderly consumers, to suffer harm as a result of being deceived, defrauded, or defamed by AI-generated content.   

Contact us directly for the full model bill language.

Model Bill: Likeness Protection

A bill to address the AI-enabled abuse of an individual’s voice and likeness

Deepfakes have quickly become one of the most tangible and widespread harms of generative AI, and they can have an especially damaging impact on our kids.

“These deepfakes can be generated by literally anyone with no real knowledge needed,” says Transparency Coalition CEO Rob Eleveld. “Girls in high schools across the country are being scarred in their youth” by fake pornographic images and videos that seamlessly transpose their likeness onto nude bodies in explicit situations.

Overview

This model bill covers generative AI tools that can be used to generate Digital Replicas: highly realistic, computer-generated depictions of an individual’s voice and likeness.

The widespread availability of these AI tools has enabled bad actors to create Digital Replicas of individuals that may be used to harm a person’s reputation, social standing, and employment status, as well as their physical and emotional well-being. Generative AI tools have been used to create non-consensual intimate imagery, to perpetrate financial fraud against individuals and businesses, to create false endorsements, to defame and harass individuals, and to deceive the public through fraudulent impersonation.

This bill addresses those harms by establishing a property right for the use of an individual’s name, voice, or likeness. The bill will discourage AI abuses and offer legal recourse to victims by establishing clear penalties and accountability.

The bill prohibits the creation of a Digital Replica without the individual’s consent for these stated purposes:

  • commercial purposes, including false endorsement of a product/service/message;

  • creation of non-consensual intimate imagery;

  • deceptive political advertising;

  • fraudulent impersonation.

The bill requires that AI tools capable of creating Digital Replicas must display a Consumer Warning that notifies users that misuse of the tool may result in criminal liability. Developers and deployers of AI tools capable of creating Digital Replicas must establish safety systems and protocols that prohibit users from creating content that violates this Act.

Enforcement of the bill’s requirements and prohibitions would be tailored to each specific state, and could include criminal penalties for violations of the law, as well as a private right of action for affected individuals.

Summary: This model bill addresses the harm potential of AI-generated Digital Replicas by deterring potential bad actors through consumer warnings and criminal penalties.

Contact us directly for the full model bill language.

Model Bill: AI Product Liability

A bill to hold companies accountable for unsafe AI products

AI models and AI products are not magical entities. They’re designed and manufactured just like other products.

Tech companies should be held to the same standards of accountability as other manufacturers. Nobody gets a free pass to sell products that harm the public.

This bill, coming in October 2025, will establish product safety, accountability, and liability standards for AI products in line with current laws governing products in many other industries.

Contact us directly for the full model bill language, coming soon.