Utah lawmakers scramble to pass four AI bills before Friday adjournment

Lawmakers meeting in the state capitol building in Salt Lake City, above, are scheduled to adjourn on Friday afternoon. Four AI-related bills remain active and await a final floor vote for passage. (Photo: Getty for Unsplash+)

March 4, 2026 — In Salt Lake City, state legislators are working late hours to beat this year’s short-session adjournment deadline, which comes at the close of business this Friday, March 6.

Among the measures still in motion are five AI-related bills. Four have been approved by their chamber of origin and are awaiting final votes in the second chamber.

At the top of that list: HB 438, a bill addressing chatbots and kids' digital safety. A concise guide to the bill appears below.

A fifth bill, modifying the duties of the state AI policy office, has been approved and sent to the desk of Gov. Spencer Cox.

The four bills in play this week

  • Leading the portfolio: HB 438, Rep. Doug Fiefia's AI disclosure bill, which would establish data protection requirements for operators and safety requirements for minor users. The bill was approved by the full House, 68-1, on Feb. 20. It has since cleared its Senate committee assignments and was placed on the Second Reading Calendar (awaiting a full floor vote) on Tuesday, March 3.

  • SB 73, sponsored by Sen. Musselman and Rep. Eliason, would enact online age verification measures for minors. The bill was approved by the full Senate, 22-2, and sent to the House on Feb. 23. Of all the remaining AI bills, this one may be furthest from approval: it has sat in the House Rules Committee since Feb. 26.

  • HB 289 concerns AI and digital CSAM. This bill from Rep. Defay and Sen. Musselman was passed by the House on Feb. 17. It has since been approved by the Senate Rules Committee and was placed on the Second Reading Calendar on March 2.

  • HB 276 would enact the Digital Voyeurism Prevention Act, a deepfake protection bill sponsored by Rep. Defay. The bill was approved by the full House, 66-0, on Feb. 20. It passed out of the Senate Transportation, Public Utilities, Energy, and Technology Committee and was placed on the Second Reading Calendar on Feb. 26.

Already passed and headed to Gov. Cox

HB 320, sponsored by Sen. Cutler, would modify the duties of the state Office of Artificial Intelligence Policy. The bill was approved by the full House on Feb. 17, and passed the full Senate on Feb. 27. It now goes to Gov. Cox for signing.

Spotlight on AI chatbot safety: HB 438

Rep. Doug Fiefia, a national leader on AI issues for the Future Caucus, is a strongly pro-business Republican seeking to balance innovation with appropriate data security and minor safeguards in AI products. His HB 438 would do a number of things, including:

Establish data protection requirements for operators of AI chatbots for all users, both minor and adult. A chatbot operator must obtain a user’s affirmative consent prior to processing the user’s sensitive data.

Embed suicide and self-harm protocols into the system. The AI chatbot must be designed to prohibit responses that encourage suicidal ideation, suicide, self-harm, or harm to others. Upon receiving inputs that touch on suicide or self-harm, the chatbot must provide referrals to crisis service providers, a suicide hotline, or a crisis text line.

Disclose advertisements as advertisements. A chatbot operator may not advertise a specific product or service to a user unless the operator clearly identifies the ad as an ad. The operator must also disclose any sponsorship, affiliation, or agreement to promote the advertised product. In other words: No embedding product placements into chatbot answers without full notice to the consumer.

Safety requirements for minors. HB 438 would establish special requirements for chatbot users who are minors (under age 18). At least every hour during a continuing chatbot interaction, operators must provide a clear and conspicuous notice that reminds the user to take a break from interacting with the companion chatbot and states that the user is interacting with an artificial intelligence system, not a human. Operators must also opt minor users out of targeted advertising by default.

Chatbot operators must also include protocols that prevent the chatbot from producing or providing material harmful to minors, or from encouraging the user to use illegal substances, consume alcohol, use tobacco or nicotine, engage in sexual conduct, engage in self-harm, or engage in illegal conduct.

Operators may not direct targeted advertising to a minor unless a parent or legal guardian of the user has provided affirmative consent. Operators also may not collect data from a minor beyond what is required for the chatbot's core functioning, and may not sell or otherwise convey a minor's personal data without a parent or legal guardian's affirmative consent, except as required for that core functioning.

Learn more: Kids, AI, and digital safety
