AI Legislative Update: August 15, 2025

Aug. 15, 2025 — During the state legislative season, TCAI offers weekly updates every Friday on a variety of AI-related bills making progress around the nation.

This week: Michigan lawmakers moved ahead with a deepfake bill, and Colorado Gov. Jared Polis called a special session to begin on Aug. 21. Meanwhile, lawmakers, staffers, and advocates prepared for the reopening of the California session on Monday, Aug. 18. A number of AI-related bills are expected to see action in the coming week in Sacramento.

California

After enjoying their mid-summer break, California legislators are scheduled to return to Sacramento on Monday, Aug. 18.

A handful of AI-related bills have been approved by their chamber of origin and now sit with appropriations committees in their second chamber. Those committees are expected to meet and possibly conduct up-or-down votes on those bills next week.

Those bills include:

SB 11: AI Abuse Protection Act

SB 11, authored by Sen. Angelique Ashby, would make computer-manipulated or AI-generated images or videos subject to the state’s right of publicity law and criminal false impersonation statutes. The proposal passed the Senate and is now making its way through the Assembly. On July 17 it was heard by the Assembly Committee on Privacy & Consumer Protection and approved on a 15-0 vote. It now sits with the Assembly Appropriations Committee, which is expected to next convene on Wednesday, Aug. 20.

AB 853: California AI Transparency Act

Assm. Buffy Wicks (D-Berkeley) introduced AB 853, which would require large online platforms (LOPs) to label whether content is AI-generated or authentic. It would also require manufacturers of phones, cameras, or any device that captures images or audio to give users the option to apply digital signatures to authentic material they produce. The bill passed the Assembly on June 2. It was amended in the Senate on July 15, passed 11-0 by the Senate Judiciary Committee, and was re-referred to the Senate Appropriations Committee. That committee is scheduled to meet on Monday, Aug. 18. The committee’s agenda is expected to be posted later today.

AB 1064: LEAD for Kids Act

The Leading Ethical AI Development (LEAD) for Kids Act, authored by Assm. Bauer-Kahan, would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.

The bill passed the Assembly, 59-12, on June 2. It was amended in the Senate Judiciary Committee on July 15, approved 11-0 by the committee, and was re-referred to the Senate Appropriations Committee. That committee is scheduled to meet on Monday, Aug. 18. The committee’s agenda is expected to be posted later today.

SB 243: Companion Chatbots

California Sen. Steve Padilla (D-San Diego) sponsored SB 243, which would require AI platforms to provide regular reminders to minors that the chatbot is not human. The proposal passed in the Senate on June 3. On June 9 it was sent to the Assembly Judiciary and Privacy and Consumer Protection committees.

SB 243 passed the Assembly Privacy and Consumer Protection Committee on July 8, and was re-referred to the Judiciary Committee. On July 15 the Judiciary Committee approved the bill 9-1 and referred it to the Assembly Appropriations Committee. That committee is expected to next convene on Wednesday, Aug. 20.

SB 833: Human Oversight of AI in Critical Infrastructure

Sen. Jerry McNerney sponsored SB 833, which would require that human oversight be maintained when AI systems are used to control critical infrastructure, including transportation, energy, food and agriculture, communications, financial services, and emergency services. SB 833 passed in the Senate on June 3. It was then sent to the Assembly, where it was read for the first and second time. On July 16 it passed the Assembly Privacy & Consumer Protection Committee 15-0 and was referred to the Assembly Appropriations Committee. That committee is expected to next convene on Wednesday, Aug. 20.

SB 53: CalCompute

Sen. Scott Wiener (D-San Francisco) sponsored SB 53, a more dialed-back version of his Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which passed both chambers last year before California Gov. Gavin Newsom vetoed it.

The bill would protect whistleblowers who work for developers of “foundational models” for AI platforms trained on broad sets of data. If the bill passes, whistleblowers working for these AI companies would be protected if they disclose information to the California Attorney General or other authorities regarding potential critical risks posed by the developer’s activities or any alleged false or misleading statements made by the company about risk management practices. 

The bill passed the Senate on May 28. The proposal has passed the Assembly Judiciary Committee and the Assembly Committee on Privacy and Consumer Protection. On July 16 it was re-referred to the Assembly Appropriations Committee. That committee is expected to next convene on Wednesday, Aug. 20.

Colorado

On Aug. 6 Colorado Gov. Jared Polis called a special legislative session set to begin Aug. 21 to address Colorado’s budget crisis precipitated by President Trump’s federal budget bill, which passed in early July.

The special session is also expected to address the state’s landmark AI law. The Colorado Artificial Intelligence Act (CAIA) is scheduled to take effect Feb. 1, 2026, but has come under fire from corporate tech interests, which are expected to try to kill or water down the law prior to that date.

Michigan

HB 4047: Protection from Intimate Deepfakes Act

Rep. Matthew Bierlein saw his HB 4047, the Protection from Intimate Deepfakes Act, gain approval from the Civil Rights, Judiciary, and Public Safety Committee on Aug. 12. The next step for the bill is a floor vote.

HB 4047 would do the following:

  • Prescribe a misdemeanor penalty punishable by up to one year's imprisonment or a maximum fine of $3,000, or both, for an individual who intentionally creates or disseminates an intimate deep fake causing harm.

  • Prescribe a felony penalty punishable by up to three years' imprisonment or a maximum fine of $5,000, or both, for subsequent offenses or online platform distribution.

  • Allow an individual who was depicted in a nonconsensual intimate deep fake to bring a civil action for the creation or dissemination of that deep fake in the county of the individual's residence or the county where the deep fake was created or stored.

  • Require a court to allow for the confidential filing of a civil action.

  • Allow a court to issue a restraining order or permanent injunction that could award a plaintiff a daily maximum civil fine of $1,000 for a violation of such an order.

  • Prescribe the economic and noneconomic damages a court could award in a civil action, including profits made from the deep fake and court costs.

  • Exempt specific criminal investigations, medical treatments, and legal proceedings from penalties and liability under the Act.

  • Exempt internet service providers, telecom networks, or educational or library systems providing access to content created by another person and a provider or developer of the technology used in the creation of a deep fake from liability under the Act.

HB 4048: Deepfake Dissemination Sentencing Guidelines

HB 4048, introduced by Rep. Penelope Tsernoglou as a companion to HB 4047, would revise the sentencing guidelines in the state Code of Criminal Procedure to reflect the adoption of HB 4047. It would classify the dissemination of an intimate deep fake with aggravating factors as a Class F felony against a person, carrying a maximum of three years' imprisonment.

HB 4048 passed out of the Judiciary Committee along with HB 4047 on Aug. 12, and is now on track for a floor vote.

HB 4668: Safety protocols for critical risk

Rep. Sarah Lightner (R-Springport) introduced the AI Safety and Security Transparency Act (HB 4668) on June 24.

The bill would require developers of large AI models to establish safety protocols to prevent “critical risk,” defined as serious harm to or the death of more than 100 people, or more than $100 million in damages.

HB 4668 would apply only to the largest companies—those that have spent $5 million or more on a single model, and $100 million or more in the prior 12 months to develop one or more AI platforms.

Those companies would have to test their AI models for risk and danger, and enact safeguards to limit potential harm. Large developers would also be required to conduct annual third-party audits.

The proposal, which also includes whistleblower protections, is now with the Judiciary Committee.

HB 4667: Amending the penal code to include AI

Rep. Lightner also introduced HB 4667, which would establish new criminal penalties for using AI to commit a crime.

For example, a bad actor could use AI to duplicate someone’s voice, then use that vocal replica to call the person’s grandmother and scam her out of money.

The bill would make it a felony with a mandatory 8-year sentence to develop, possess, or use an AI system with the intent to commit a crime.

Creating an AI platform for others to use for committing crimes would come with a mandatory 4-year prison sentence.

The proposal is similar to Michigan’s felony firearm law, which includes an additional criminal penalty when a person has a gun during the commission of a felony.

The proposal was introduced on June 24 and referred to the Judiciary Committee, where it remains.

Michigan’s legislative session is scheduled to adjourn in late December.
