AI Legislative Update: June 20, 2025

June 20, 2025 — During the state legislative season, TCAI offers weekly updates every Friday on a variety of AI-related bills making progress around the nation.

This week: New York adjourned its legislative session, passing the RAISE Act just in time while leaving many other AI bills to die short of the finish line. The RAISE Act, designed to prevent AI systems from causing “critical harm,” is now on the desk of Gov. Kathy Hochul. Meanwhile, a half-dozen bills to regulate AI continue to make their way through the California legislature.

California

California lawmakers are weighing a number of AI bills this session, including those that seek to label AI content, protect whistleblowers within AI companies, and regulate the use of AI to oversee critical infrastructure.

Read on for a rundown of what’s still alive in Sacramento:

SB 11: AI Abuse Protection Act

Sen. Angelique Ashby sponsored SB 11, which would make computer-manipulated or AI-generated images or videos subject to the state’s right of publicity law and criminal false impersonation statutes. The proposal passed the Senate with minor language amendments and was sent to the Assembly, where it was referred on June 9 to the Committees on Judiciary, Public Safety, and Privacy and Consumer Protection.

Transparency Coalition COO Jai Jaisimha testified in favor of SB 11 on June 17 before the Assembly Judiciary Committee.

“AI model developers have a duty of care that they're not fulfilling,” Jaisimha said. “They're able to produce harmful images because when these models are being developed or tested, proper care is not being exercised. There is strong evidence that consumer warnings can affect behavior—and the State of California has been a leader in deploying them.”

SB 11 moved out of that committee and had a second reading on the Assembly floor before being referred to the Committee on Public Safety.


AB 53: California AI Transparency Act

Assm. Buffy Wicks (D-Berkeley) introduced AB 53, which would require large online platforms (LOPs) to label content as AI-generated or authentic. It would also require manufacturers of phones, cameras, or any device that captures images or audio to give users the option to apply digital signatures to authentic material they produce. The bill passed the Assembly on June 2 and was sent to the Senate. It remains with the Senate Judiciary Committee.

SB 53: CalCompute

Sen. Scott Wiener (D-San Francisco) sponsored SB 53, a second, scaled-back version of his Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which passed both chambers last year before Gov. Gavin Newsom vetoed it.

The bill would protect whistleblowers who work for developers of “foundational models” for AI platforms trained on broad sets of data. If it becomes law, these employees would be protected when disclosing information to the California Attorney General or other authorities about potential critical risks posed by the developer’s activities, or about allegedly false or misleading statements the company has made regarding its risk management practices.

The bill passed the Senate on May 28 and remains with the Assembly Committees on Judiciary and on Privacy and Consumer Protection.

AB 412: AI Copyright Protection Act

Assm. Rebecca Bauer-Kahan’s AB 412, the AI Copyright Protection Act, was passed by the full Assembly on May 12 and sent on to the Senate. The Senate has assigned it to both the Judiciary and Appropriations committees, where it remains.

SB 243: Companion Chatbots

Sen. Steve Padilla (D-San Diego) sponsored SB 243, which would require AI platforms to provide regular reminders to minors that the chatbot is not human. The proposal passed the Senate on June 3 and was sent to the Assembly, where it received its first reading. On June 9 it was referred to the Committees on Judiciary and Privacy and Consumer Protection. A first hearing scheduled for June 24 was delayed until July 8 to ensure a key witness would be available to testify.

SB 833: Human Oversight of AI in Critical Infrastructure

Sen. Jerry McNerney sponsored SB 833, which would require that human oversight be maintained when AI systems are used to control critical infrastructure, including transportation, energy, food and agriculture, communications, financial services, and emergency services. SB 833 passed the Senate on June 3 and was sent to the Assembly, where it received its first reading. On June 9 it was referred to the Committee on Privacy and Consumer Protection, where it remains.


New York

The New York State legislature adjourned sine die in the early morning hours of June 18.

Passed: The RAISE Act

The Responsible AI Safety and Education Act (RAISE Act), or S 6953A, is on the desk of New York Gov. Kathy Hochul after the state Senate passed the bill on June 12.

Assemblymember Alex Bores (D) and Sen. Andrew Gounardes (D) introduced the bill, which seeks to prevent AI platforms from causing “critical harm,” defined as the serious injury or death of 100 or more people or at least $1 billion in damages.

The proposal would require safety and security protocols, annual reviews and safeguards on AI platforms that cost more than $100 million to train or exceed a certain computational power.

The goal is to prevent AI systems from being misused to launch chemical, biological, radiological, or nuclear attacks.

New York’s attorney general would be charged with enforcement, with fines up to $30 million for repeat violations.

The Transparency Coalition has a full guide to the RAISE Act below.

Out of time: The bills that didn’t make it

S06954: Requiring provenance data 

Sen. Andrew Gounardes sponsored S06954, which would require that provenance data – the records on the origin and history of digital content – be attached to any synthetic content created by AI systems.

S06953: Transparency in training AI models

Sen. Gounardes also sponsored S06953, which would require transparency and safety protocols for training frontier AI models – the largest artificial intelligence systems, trained on massive datasets with billions or even trillions of parameters.

S01228: Transparency of synthetic performers

Sen. Michael Gianaris (D-Queens) introduced S01228, which would have required advertisements to disclose when they are using a synthetic performer to sell a product or service.

S02698: Oversight of AI-generated documents

Sen. Brad Hoylman-Sigal (D-Manhattan) sponsored S02698, which would require disclosure any time artificial intelligence is used to produce legal documents or filings. The bill would also require certification that a human reviewed and verified the AI-generated content.

S05668: Regulating chatbots 

Sen. Kristen Gonzalez (D-Manhattan and Queens) sponsored S05668, which would impose liability for chatbots that cause financial or other harm by providing misleading, incorrect, or contradictory information to a user. Chatbots would need to clearly indicate that they are artificial and not real humans, and they would be prohibited from engaging with minors without explicit parental consent.

S07263: Chatbot liability for impersonation

Gonzalez also sponsored S07263, which would establish liability for any harm caused when a chatbot impersonates a licensed professional, such as a lawyer.

S00934: Labeling AI as potentially inaccurate 

Gonzalez sponsored S00934, which would require developers or deployers of AI systems to conspicuously display a notice informing users that the system’s output may be inaccurate.

S01169: Preventing discrimination in AI 

Gonzalez also sponsored S01169, which would regulate the development and use of AI systems deemed “high-risk” with the goal of preventing algorithmic discrimination. Users would need an option to have a human review any decision made by an AI system.


Guide to the RAISE Act, the New York Responsible AI Safety and Education Act