AI Legislative Update: July 18, 2025
July 18, 2025 — During the state legislative season, TCAI offers weekly updates every Friday on a variety of AI-related bills making progress around the nation.
This week: Multiple AI-related proposals in California continued to move through the legislature ahead of the scheduled July 19-August 17 recess in Sacramento.
California
California legislators continued working on the following AI-focused bills. The California legislature will recess for a one-month summer break at the end of business today (July 18) and is scheduled to return on August 18.
SB 11: AI Abuse Protection Act
SB 11, authored by Sen. Angelique Ashby, would make computer-manipulated or AI-generated images or videos subject to the state’s right of publicity law and criminal false impersonation statutes. The proposal passed the Senate and is now making its way through the Assembly. On July 17 it was heard by the Assembly Committee on Privacy & Consumer Protection and approved on a 15-0 vote. It now moves to the Assembly Appropriations Committee.
AB 853: California AI Transparency Act
Assm. Buffy Wicks (D-Berkeley) introduced AB 853, which would require large online platforms (LOPs) to label whether content is AI-generated or authentic. It would also require manufacturers of phones, cameras, or any device that captures images or audio to give users the option to apply digital signatures to authentic material they produce. The bill passed the Assembly on June 2. It was amended in the Senate on July 15, passed 11-0 by the Senate Judiciary Committee, and was re-referred to the Senate Appropriations Committee.
AB 412: AI Copyright Protection Act
Assm. Rebecca Bauer-Kahan’s AB 412, the AI Copyright Protection Act, would establish a framework for copyright owners to determine whether their registered works were used to train a generative artificial intelligence model.
It was passed by the full Assembly on May 12 and sent to the Senate, which assigned it to both the Judiciary and Appropriations committees, where it remains. It has since been converted into a two-year bill to give lawmakers more time to resolve outstanding issues, meaning the bill is effectively paused for the 2025 session and will be taken up again in 2026.
AB 1064: LEAD for Kids Act
The Leading Ethical AI Development (LEAD) for Kids Act, authored by Assm. Bauer-Kahan, would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.
The bill passed the Assembly, 59-12, on June 2. It was amended in the Senate Judiciary Committee on July 15, approved 11-0 by the committee, and was re-referred to the Senate Appropriations Committee.
SB 243: Companion Chatbots
California Sen. Steve Padilla (D-San Diego) sponsored SB 243, which would require AI platforms to provide regular reminders to minors that the chatbot is not human. The proposal passed the Senate on June 3. On June 9 it was sent to the Assembly Committees on Judiciary and Privacy and Consumer Protection.
SB 243 passed the Assembly Privacy and Consumer Protection Committee on July 8, and was re-referred to the Judiciary Committee. On July 15 the Judiciary Committee approved the bill 9-1 and referred it to the Assembly Appropriations Committee.
SB 833: Human Oversight of AI in Critical Infrastructure
Sen. Jerry McNerney sponsored SB 833, which would require that human oversight be maintained when AI systems are used to control critical infrastructure, including transportation, energy, food and agriculture, communications, financial services, and emergency services. SB 833 passed the Senate on June 3 and was then sent to the Assembly, where it received its first and second readings. On July 16 it passed the Assembly Privacy & Consumer Protection Committee 15-0 and was referred to the Assembly Appropriations Committee.
SB 53: CalCompute
Sen. Scott Wiener (D-San Francisco) sponsored SB 53, a dialed-back version of his Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which passed both chambers last year before California Gov. Gavin Newsom vetoed it.
The bill would protect whistleblowers who work for developers of “foundational models” for AI platforms trained on broad sets of data. If the bill passes, whistleblowers working for these AI companies would be protected if they disclose information to the California Attorney General or other authorities regarding potential critical risks posed by the developer’s activities or any alleged false or misleading statements made by the company about risk management practices.
The bill passed the Senate on May 28. The proposal has passed the Assembly Judiciary Committee and the Assembly Committee on Privacy and Consumer Protection. On July 16 it was re-referred to the Assembly Appropriations Committee.
Michigan
HB 4668: Safety protocols for critical risk
Rep. Sarah Lightner (R-Springport) introduced the AI Safety and Security Transparency Act (HB 4668) on June 24.
The bill would require developers of large AI models to establish safety protocols to prevent “critical risk,” defined as serious harm to or the death of more than 100 people, or more than $100 million in damages.
HB 4668 would apply only to the largest companies—those that have spent $5 million or more on a single model and $100 million or more in the prior 12 months to develop one or more AI platforms.
Those companies would have to test their AI models for risk and danger, and enact safeguards to limit potential harm. Large developers would also be required to conduct annual third-party audits.
The proposal, which also includes whistleblower protections, is now with the Judiciary Committee.
HB 4667: Amending the penal code to include AI
Rep. Lightner also introduced HB 4667, which would establish new criminal penalties for using AI to commit a crime.
For example, a bad actor could use AI to duplicate someone’s voice, then use that vocal replica to call the person’s grandmother and scam her out of money.
The bill would make it a felony with a mandatory 8-year sentence to develop, possess, or use an AI system with the intent to commit a crime.
Creating an AI platform for others to use for committing crimes would come with a mandatory 4-year prison sentence.
The proposal is similar to Michigan’s felony firearm law, which includes an additional criminal penalty when a person has a gun during the commission of a felony.
The proposal was introduced on June 24 and referred to the Judiciary Committee, where it remains.
Michigan’s legislative session is scheduled to adjourn in late December.