TCAI Bill Guide: Nebraska’s AI chatbot safety bills, LB 939 and LB 1185

Two 2026 bills in the Nebraska legislature offer safety measures for kids interacting with AI chatbots. (Photo: Nick Fancher for Unsplash+)

During the 2026 legislative session, TCAI will offer clear, plain-language guides to some of the most important AI-related bills introduced in state legislatures across the country.

Feb. 9, 2026 — Nebraska lawmakers took a leading role last year in passing legislation to protect kids from AI and social media, and they’re following up in the 2026 session with two bills that give parents the tools to manage their family’s exposure to AI chatbots.

LB 1185 would enact the Conversational Artificial Intelligence Safety Act. This bill, sponsored by Sen. Eliot Bostar, would require the operators of AI chatbots to disclose that the chatbot is not human and to implement certain safety measures for users who are minors.

LB 939 would enact the Saving Human Connection Act. This bill, sponsored by Sen. Dave Murman, would limit the use of AI chatbots with human-like features to adults (18 and older) only. Chatbot operators would be required to implement safety measures around suicide and self-harm, minimize data collection, and prevent emotional dependence by a user.

LB 1185: Bill overview

The Conversational Artificial Intelligence Safety Act

Sponsor: Sen. Eliot Bostar (D-Lincoln)

LB 1185, the Conversational Artificial Intelligence Safety Act, applies to AI chatbot operators offering products to Nebraska consumers. Some requirements apply to all users; others apply specifically to minors.

For all consumers, adult and minor

Disclosure: If a reasonable person interacting with an AI chatbot would be misled into believing they are interacting with a human, the operator shall clearly and conspicuously disclose that the conversational chatbot is artificial intelligence.

Suicide and self-harm protocols: A chatbot operator must adopt a protocol for the chatbot to respond to user prompts regarding suicidal ideation or self-harm that includes, but is not limited to, making reasonable efforts to provide a response to the user that refers them to crisis service providers such as a suicide hotline, crisis text line, or other appropriate crisis services.

Therapy not allowed: A chatbot operator shall not knowingly and intentionally cause or program a conversational artificial intelligence service to make any representation or statement that explicitly indicates that the chatbot is designed to provide professional mental or behavioral health care.

For minors

Disclosure: The bill would require chatbot operators to disclose to each minor account holder that the user is interacting with artificial intelligence. The disclosure must either be a persistent, visible disclaimer or appear at the beginning of each session and again at least every three hours during a continuous session.

No gamifying incentives: An AI chatbot operator may not provide a minor user with points or similar rewards with the intent to encourage increased engagement with the AI chatbot.

No sexually explicit content: When a user is a minor, a chatbot operator must institute reasonable measures to prevent the chatbot from:

  • producing visual depictions of sexually explicit content;

  • generating direct statements that the user should engage in sexually explicit conduct;

  • generating statements that sexually objectify the user.

No emotional manipulation: For minor users, the chatbot operator shall institute reasonable measures to prevent the chatbot from generating statements that would lead a reasonable person to believe they are interacting with a human, including:

  • explicit claims that the chatbot service is sentient or human;

  • statements that simulate emotional dependence;

  • role-playing of adult-minor romantic relationships.

Account setting tools: An AI chatbot operator must offer tools for minor account holders, and, when users are younger than 13 years of age, their parents or guardians, to manage the account holders’ privacy and account settings.

A chatbot operator must also offer related tools to the parents or guardians of users 13 and older, as appropriate based on relevant risks.

Enforcement

The Nebraska Attorney General would enforce the Conversational Artificial Intelligence Safety Act by bringing a civil action against the chatbot operator(s) for a violation of the Act. Appropriate relief includes declaratory relief, an award of actual damages, and civil penalties of at least $1,000 per violation and no more than $500,000 per operator, as well as reasonable expenses incurred in bringing the civil action.

There is no private right of action created by the Conversational Artificial Intelligence Safety Act.

LB 1185: Sponsor overview

An excerpt from Sen. Eliot Bostar’s statement during a Feb. 9, 2026 committee hearing:

“Conversational AI tools are increasingly designed to simulate human conversation in ways that can feel personal, emotional, and real.

For minors, those design features can create real risks: confusion about whether they are interacting with a human, exposure to inappropriate content, or emotional reliance on a system that is not designed to act in their best interest.

LB 1185 responds to those risks in a narrow and commonsense way.

The bill requires clear and conspicuous disclosure when a user is interacting with artificial intelligence, so no one, especially a minor, is misled into believing they are speaking with a real person. For a minor, that disclosure must be persistent and repeated at a regular interval during longer interactions.

The bill establishes reasonable safeguards for minors. It requires operators to take steps to prevent sexually explicit content and sexualized interactions. It also prohibits the use of manipulative engagement techniques, such as reward systems, that are intentionally designed to increase time spent on the platform by minors.”


LB 939: Bill overview

The Saving Human Connection Act

Sponsor: Sen. Dave Murman (R-South Central NE)

LB 939, the Saving Human Connection Act, enacts requirements for operators of AI chatbots available to consumers in Nebraska. The requirements include the following rules.

Chatbots not available to minors: LB 939 would prohibit chatbot operators from making chatbots with human-like features available to minors (users under 18). A chatbot operator must implement a reasonable, privacy-preserving age verification system that ensures minors are age-gated out of the service.

Provide a safer default version: The bill requires chatbot operators to provide, as the default service, a version of the chatbot platform that does not include human-like features. For verified adults who choose to add human-like features, the operator must also provide suitable warnings about the risks.

Provide regular disclosures: Chatbot operators must offer regular disclosures to users, informing users that the chatbot’s human-like features are actually artificial intelligence.

Detect and mitigate emergency situations: Chatbot operators must implement and maintain reasonably effective systems to detect, respond to, and mitigate emergency situations in a manner that prioritizes the safety and well-being of users.

Prevent emotional dependence: Operators must implement a system to detect and prevent emotional dependence by a user, prioritizing the user’s well-being over the operator’s interest in user engagement or retention.

Minimize data collection: Operators may collect and store only “that information that does not conflict with a trusting party’s best interest.” Any stored data must be relevant and necessary to fulfill the purpose of the operator’s product or system.

Consider the consumer’s best interests: Operators must “consider the best interests of a trusting party” when personalizing content based on personal data, and avoid conflicts with the best interests of the user when allowing government or other third-party access to a user’s data.

Clear terms of service: Operators must offer a terms-of-service agreement that is presented to users in clear, easily understood language. The TOS must explicitly outline the operator’s obligations, describe the rights and protections offered to users, require affirmative consent from the user, and may not contain a mandatory arbitration clause.

Enforcement

The Nebraska Attorney General would enforce the Act. An operator in violation of the Act shall be subject to an injunction and disgorgement of any gains unjustly obtained through the violation. The operator shall also be liable for a civil penalty of not more than $10,000 for each violation.

Private right of action: Any adult, or a parent or guardian acting on behalf of a minor, who uses a chatbot product that is not in compliance with the Act may bring a civil action individually, or on a class-wide basis, for appropriate relief. “Appropriate relief” includes damages of not less than $100 and not more than $10,000 per incident, or actual damages, whichever is greater.

Learn more: AI chatbots
