AI Legislative Update: June 13, 2025

June 13, 2025 — During the state legislative season, TCAI offers weekly updates every Friday on a variety of AI-related bills making progress in capitol buildings around the nation.

This week: Vermont Gov. Scott signed a state ‘Kids Code’ into law, Florida Gov. DeSantis signed a law designed to protect victims of sexually explicit deepfake content, the New York State Senate approved the RAISE Act, and a half dozen AI-related bills continued to move in Sacramento.

California

California lawmakers are weighing a number of AI bills this session. One proposal would require the labeling of authentic vs. AI content, while another would protect whistleblowers within AI companies who alert authorities to critical risks.

In addition, three AI-related bills successfully passed out of the suspense file last month, allowing them to continue to be considered. The bills would govern AI-generated images or videos; AI chatbots; and the use of AI to oversee critical infrastructure.

Still in play in Sacramento:

AB 53: California AI Transparency Act

Assm. Buffy Wicks (D-Berkeley) introduced AB 53, which would require large online platforms (LOPs) to label whether content is AI-generated or authentic. It would also require manufacturers of phones, cameras, or any device that captures images or audio to give users the option to apply digital signatures to authentic material they produce. The bill passed the Assembly on June 2, and was sent to the Senate. It has since passed through the Rules Committee and is now with the Senate Judiciary Committee.

SB 53: CalCompute

Sen. Scott Wiener (D-San Francisco) sponsored SB 53, a more measured approach to his Safe & Secure Innovation for Frontier AI Models Act (SB 1047), which passed both chambers last year before being vetoed by California Gov. Gavin Newsom.

The bill would protect whistleblowers who work for developers of “foundational models” for AI platforms trained on broad sets of data. If the bill passes, whistleblowers from these AI companies would be protected when they disclose information to the California Attorney General or other authorities regarding potential critical risks posed by the developer’s activities, or any alleged false or misleading statements made by the company about its risk management practices.

The bill passed the Senate on May 28 and was sent to the Assembly where it had a first reading before being sent to the Judiciary Committee and the Committee on Privacy and Consumer Protection. It has since been re-referred to those committees for further consideration.

AB 412: AI Copyright Protection Act

Assm. Rebecca Bauer-Kahan’s AB 412, the AI Copyright Protection Act, was passed by the full Assembly on May 12 and sent on to the Senate. The Senate has assigned it to both the Judiciary and Appropriations committees, where it remains, awaiting further action.

SB 11: AI Abuse Protection Act

Sen. Angelique Ashby sponsored SB 11, which would make computer-manipulated or AI-generated images or videos subject to the state’s right of publicity law and criminal false impersonation statutes. The proposal passed the Senate with minor language amendments and was sent to the Assembly. It was referred on June 9 to the Committees of the Judiciary, Public Safety, and Privacy and Consumer Protection.

The Assembly Judiciary Committee is scheduled to consider and hear testimony on SB 11 this coming Tuesday, June 17.

SB 243: Companion Chatbots

California Senator Steve Padilla (D-San Diego) sponsored SB 243, which would require AI platforms to provide regular reminders to minors that a chatbot is AI and not human. The proposal passed in the Senate on June 3. It was then sent to the Assembly, where it was read for the first time. On June 9 it was sent to the Committees of the Judiciary and Privacy and Consumer Protection. It was scheduled for a first hearing for June 24, but that was delayed until July 8, to ensure a key witness will be available to testify.

SB 833: Human Oversight of AI in Critical Infrastructure

Sen. Jerry McNerney sponsored SB 833, which would require that human oversight be maintained when AI systems are used to control critical infrastructure, including transportation, energy, food and agriculture, communications, financial services, and emergency services. SB 833 passed in the Senate on June 3. It was then sent to the Assembly, where it was read for the first time. On June 9 it was referred to the Committee on Privacy and Consumer Protection.


Florida

Florida Gov. Ron DeSantis (R) this week signed into law a bill designed to crack down on deepfake images and videos.

State Sen. Alexis Calatayud (R-Miami-Dade) sponsored HB 1161, known as Brooke’s Law, which mandates that internet and social media platforms establish and promote policies to help victims remove deepfake material.

Under the new law, digital platform companies will be required to create a system for victims to report deepfakes to them, and establish an internal review process to confirm the allegation and take the images or videos down within 48 hours.

Brooke’s Law was named for Brooke Curry, the teenage daughter of former Jacksonville Mayor Lenny Curry. Brooke Curry’s image was used without her consent and altered using artificial intelligence into sexually explicit content. It happened in July 2023, when Curry was a high school junior.

The new law, which applies to websites, online services, apps, and mobile platforms in Florida, takes effect Dec. 31, 2025.


New York

The New York State Senate on Thursday approved the RAISE Act (S 6953A), which had previously passed in the Assembly.

The Responsible AI Safety and Education Act (RAISE Act), sponsored by Assemblymember Alex Bores (D) and Sen. Andrew Gounardes (D), targets models that cost more than $100 million to train or that exceed a certain computational power. Reporter Austin Jenkins has a full story on the bill’s passage at Pluribus News.

The legislation aims to prevent future AI models from unleashing “critical harm,” defined as the serious injury or death of 100 or more people or at least $1 billion in damages. A key focus is ensuring that AI systems aren’t misused to unleash chemical, biological, radiological or nuclear attacks. 

The RAISE Act would require frontier model developers to establish safety and security protocols, conduct annual reviews, implement safeguards, and disclose to state officials if a safety incident occurs. New York’s attorney general would enforce the law with fines up to $30 million for repeat violations.

Meanwhile, TCAI is closely watching two bills still moving in Albany:

A 6578: Artificial Intelligence Training Data Transparency Act

In Albany, Assm. Alex Bores’s A 6578, the Artificial Intelligence Training Data Transparency Act, passed the Assembly on June 10 and was sent to the Senate, where it is being considered by the Rules Committee. The Senate version, S 6955, remains with the Senate Internet and Technology Committee.

SB 5668: Liability for Misleading or Harmful Information Provided by a Chatbot

New York Senate Bill 5668, which would require companion chatbots to obtain parental consent before minors can interact with them, advanced to third reading and was referred to the Rules Committee today.

New York Sen. Kristen Gonzalez (D-Queens, Manhattan and Brooklyn) introduced the chatbot bill, which would also establish liability if a chatbot provides misleading, incorrect, contradictory, or harmful information to a user that results in financial loss or other harm.

The New York legislative session has been extended to June 17, and could be extended further as budget negotiations continue.


Vermont

Vermont Gov. Phil Scott on Thursday signed S. 69, a ‘Kids Code’ bill, into law. It will require tech companies to implement privacy-by-default and safety-by-design protections for youth online.

Rep. Monique Priestley and Senators Wendy Harrison, Seth Bongartz, and Patrick Brennan introduced the Vermont Age-Appropriate Design Code. The law bars the collection and sale of children’s data, requires high privacy settings by default, and prohibits manipulative design.

Significantly, the new law asserts that “a covered business that processes a covered minor’s data in any capacity owes a minimum duty of care to the covered minor.”

Gov. Scott’s endorsement of the bill was unexpected. Almost exactly one year ago he vetoed a similar bill, citing concerns about legal challenges to the ‘Kids Code’ language.

Pressed by the growing concern of parents and others, Vermont legislators revived the Kids Code measure and passed it with strong support in both legislative chambers. It takes effect Jan. 1, 2027.

The full text of the new law is available here.
