AI Legislative Update: July 11, 2025
July 11, 2025 — During the state legislative season, TCAI offers weekly updates every Friday on a variety of AI-related bills making progress around the nation.
This week: A Michigan lawmaker introduced a new AI safety bill, while many AI-related bills in California continued to move through the Legislature ahead of the scheduled July 19-August 17 recess in Sacramento.
California
California legislators continued working on the following AI-focused bills.
SB 11: AI Abuse Protection Act
SB 11, authored by Sen. Angelique Ashby, was read a second time and re-referred to the Assembly Committee on Public Safety on July 10. A public hearing in that committee scheduled for July 8 was postponed.
SB 11 would make computer-manipulated or AI-generated images or videos subject to the state’s right of publicity law and criminal false impersonation statutes. The proposal passed the Senate and is now making its way through the Assembly.
AB 853: California AI Transparency Act
Assm. Buffy Wicks (D-Berkeley) introduced AB 853, which would require large online platforms (LOPs) to label whether content is AI-generated or authentic. It would also require manufacturers of phones, cameras, or any device that captures images or audio to give users the option to apply digital signatures to authentic material they produce. The bill passed the Assembly on June 2 and was sent to the Senate, where it was referred to the Committee on the Judiciary.
AB 412: AI Copyright Protection Act
Assm. Rebecca Bauer-Kahan’s AB 412, the AI Copyright Protection Act, would establish a framework for copyright owners to determine whether their registered works were used to train a generative artificial intelligence model.
It was passed by the full Assembly on May 12 and sent to the Senate. The Senate has assigned it to both the Judiciary and Appropriations committees, where it remains. It has since been converted into a two-year bill to give lawmakers more time to resolve outstanding issues.
AB 1064: LEAD for Kids Act
The Leading Ethical AI Development (LEAD) for Kids Act, authored by Assm. Bauer-Kahan, would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.
The bill passed the Assembly, 59-12, on June 2. It’s scheduled for a hearing before the Senate Judiciary Committee on July 15.
SB 243: Companion Chatbots
California Sen. Steve Padilla (D-San Diego) sponsored SB 243, which would require AI platforms to provide regular reminders to minors that the chatbot is not human. The proposal passed the Senate on June 3. It was then sent to the Assembly, where it was read for the first time. On June 9 it was referred to the Judiciary and Privacy and Consumer Protection committees.
SB 243 was heard before the Assembly Privacy and Consumer Protection Committee on July 8, with Megan Garcia testifying in favor of the bill. We have full coverage of that testimony here. The bill was approved and referred to the Assembly Judiciary Committee on the same day.
SB 833: Human Oversight of AI in Critical Infrastructure
Sen. Jerry McNerney sponsored SB 833, which would require that human oversight be maintained when AI systems are used to control critical infrastructure, including transportation, energy, food and agriculture, communications, financial services, and emergency services. SB 833 passed the Senate on June 3. It was then sent to the Assembly, where it was read for the first time. After a second reading in the Assembly, it was re-referred to the Committee on Privacy and Consumer Protection on July 7.
SB 53: CalCompute
Sen. Scott Wiener (D-San Francisco) sponsored SB 53, a more dialed-back version of his Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which passed both chambers last year before California Gov. Gavin Newsom vetoed it.
The bill would protect whistleblowers who work for developers of “foundational models” for AI platforms trained on broad sets of data. If the bill passes, whistleblowers working for these AI companies would be protected if they disclose information to the California Attorney General or other authorities regarding potential critical risks posed by the developer’s activities or any alleged false or misleading statements made by the company about risk management practices.
The bill passed the Senate on May 28. The Assembly Judiciary Committee heard and approved the bill, 12-0, on July 1, and re-referred it to the Assembly Committee on Privacy and Consumer Protection.
Michigan
HB 4668: AI Safety and Security Transparency Act
On June 24, Rep. Sarah Lightner introduced the AI Safety and Security Transparency Act (HB 4668), which addresses critical risks of foundation models.
The bill would:
Require large developers to produce, implement, and publish safety and security protocols to manage critical risks of foundation models
Require annual third-party audits
Prescribe the duties of large developers
Provide whistleblower protections to employees.
The bill bears similarities to New York's RAISE Act, which passed that state's legislature last month but still awaits a signature from Gov. Kathy Hochul.