TCAI Bill Guide: Utah’s HB 286, the AI Transparency Act
Utah lawmakers are considering HB 286, the AI Transparency Act, which would require large frontier model developers to implement safety protections for children and to prevent catastrophic events affecting public safety. (Photo: Utah Capitol, Getty Images for Unsplash+.)
Feb. 12, 2026 — Utah is known as a national leader in the regulation of artificial intelligence. Last year the state legislature adopted laws requiring disclosure of AI interactions, regulating the use of AI-driven mental health chatbots, and extending protections against AI deepfakes.
This year lawmakers in Salt Lake City are considering HB 286, the Artificial Intelligence Transparency Act. The bill would require developers of large frontier AI models to implement public safety and child protection plans.
Sponsors: Rep. Doug Fiefia (R), Sen. Mike McKell (R)
What’s in the bill
HB 286, the Utah AI Transparency Act, applies specifically to developers of frontier AI models, which are the largest and most powerful models. It does not apply to smaller models or to universities developing or using frontier models for research.
Public safety plan for catastrophic risks: The Act requires large frontier model developers to implement and publish a public safety plan. The plan must:
incorporate national standards and industry best practices;
define and assess thresholds used to identify potential catastrophic risks;
apply mitigations to address the potential risks;
use third parties to assess risks and the effectiveness of mitigations;
revisit and update the plan;
implement cybersecurity practices;
identify and respond to critical safety incidents;
institute internal governance practices to assure compliance with the Act.
“Catastrophic risk” means a risk that a frontier model may aid in the creation of a chemical, biological, radiological, or nuclear weapon; engage in a cyberattack; evade the control of the model’s developer or user; or engage in conduct that would violate Utah law (murder, assault, extortion, theft, etc.).
Child protection plan: The Act requires a frontier model developer to implement and publish a child protection plan. The plan must:
incorporate national standards and industry best practices;
assess potential for child safety risks;
mitigate the potential child safety risks;
use third parties to assess the potential for child safety risks and the effectiveness of mitigation plans;
revisit and update the child protection plan;
identify and respond to child safety incidents;
institute internal governance practices to ensure implementation of these protocols.
“Child safety risk” means a risk that a developer’s frontier model, when used as part of a chatbot operated by the developer, will engage in behavior during an interaction with a minor that would cause the minor death or bodily injury, including self-harm, or cause the minor severe emotional distress.
Transparency: Under the Act, a large frontier model developer may not make materially false or misleading statements, or material omissions, about risks from the developer’s activities, its risk management, or its compliance with the Act.
Allowed redactions: A frontier developer may redact information from the Act’s required public documents where necessary to protect trade secrets, cybersecurity, public safety, or national security, or to comply with federal or state law. The developer must describe and justify each redaction and retain the unredacted information for five years.
Safety incident reporting: The Utah Office of Artificial Intelligence Policy may establish a mechanism for frontier developers or members of the public to report safety incidents, and may adopt alternate compliance procedures if equivalent or stricter federal reporting requirements take effect.
A frontier developer must report a safety incident to the Office of AI Policy within 15 days of the discovery of the incident. A critical safety incident that poses an imminent risk of death or physical injury must be reported within 24 hours to a law enforcement or public safety agency.
A frontier developer must submit a quarterly report to the Office of AI Policy summarizing assessments of catastrophic risk from the developer’s model, or on an alternate schedule agreed to by the Office of AI Policy. These reports will be classified as protected records.
Whistleblower protections: A frontier model developer must provide a reasonable internal process for employees to anonymously report threats to public health, threats to the health or safety of a minor, or violations of the Act.
A frontier model developer may not take adverse action against an employee who has provided, or intends to provide, information to the Office of AI Policy. Employers who violate this section may be subject to remedies including reinstatement, double back pay, compensation for legal expenses, and damages.
Enforcement: The Utah Attorney General’s Office will enforce the Act through civil action.
Penalties: A frontier developer in violation of the Act is subject to a civil penalty of up to $1 million for a first violation, and up to $3 million for each subsequent violation.
Effective date: The Act takes effect on May 6, 2026.
Sponsor’s overview
Here are excerpts from Rep. Doug Fiefia’s testimony on Jan. 27, 2026:
“This bill is not about banning AI. It's not about punishing innovation. It exists because when technology becomes powerful enough to shape a child's behavior or put the public at risk, we can't just look the other way.”
Rep. Fiefia continued:
“Some AI systems today are extremely powerful. When they fail or are misused, the damage doesn't stay small. It can spread fast and affect a lot of people at once. These systems can be used to help launch cyberattacks, support dangerous biological or chemical activity, or run in ways that humans can't quickly stop or control it.
Now, some AI companies will say, ‘Come on, Doug, are these risks real?’ Some have even joked that this is armageddon-style regulation.
Well, let's give you a real-life example that happened just a few months ago. Hackers linked to the Chinese Communist Party, the CCP, used AI to carry out a major cyber espionage attack with very little human involvement. They tricked Anthropic's AI tool, Claude Code, into scanning systems, finding weaknesses, and stealing data. Here's the kicker: Anthropic didn't have to report this. They chose to. This bill would require all frontier AI companies to report incidents when their models are used this way.
HB 286 does four simple things, and it applies to the most powerful AI companies.
Number one, it requires them to tell us how they're going to keep us safe. As AI is moving fast, we just wanna know how they're going to keep us safe and to transparently post it.
Number two, to be honest about the risks.
Number three, report incidents when they happen.
And number four, protect whistleblowers so that employees and engineers can speak up about safety problems without fear of retaliation.
That's it. No content mandates, no government pre-approval, no micromanaging algorithms. It doesn't touch development, which means it doesn't stifle innovation.”
Joseph Gordon-Levitt testifies on behalf of HB 286
The actor, director, and business entrepreneur Joseph Gordon-Levitt spoke on behalf of HB 286 during a House hearing on Jan. 27, 2026.
Here are excerpts from his testimony:
“I'm a dad. That’s why I'm here today. I have two boys and a girl. They're ten and eight and three, and I am worried for them.
I'm worried about them growing up in a future that's dominated by these amoral AI businesses that have proven time and time again that they are incapable of prioritizing the well-being of kids.
Whenever there's a tragedy [involving an AI chatbot], they pay lip service to the families. But I'm sorry, it is clear as day that is spin, it's PR, it's marketing. These companies are driven by only one guiding principle: making money. That's it.
This is why the AI industry needs laws. The federal government hasn't done anything about this yet, but thank goodness the states are stepping up.
I was here in Utah just a couple of months ago for the AI summit that Governor Cox put on, along with his excellent team of very smart people that are working on this. Utah has been a leader in the past, protecting kids against these predatory tech companies.
Now it’s time for Utah to be that leader again. I am asking you as a tech enthusiast and as a businessman, as a fellow American and as a dad, please do the right thing and pass this bill.”