As New York legislature adjourns, the RAISE Act awaits Gov. Hochul’s signature

June 19, 2025 — The New York State legislature adjourned sine die in the wee hours of Wednesday morning, leaving dozens of bills out of time as the lights went out.

A number of AI-related bills made it very near passage this session, with one—the RAISE Act—successfully adopted by the Assembly and Senate. The RAISE Act now awaits action on the desk of Gov. Kathy Hochul.

Approved: The RAISE Act

The New York State Senate approved the RAISE Act (S 6953A) on June 12. The bill had previously passed in the Assembly.

The Responsible AI Safety and Education Act (RAISE Act), sponsored by Assemblymember Alex Bores (D) and Sen. Andrew Gounardes (D), targets models that cost more than $100 million to train or exceed a certain computational power. Reporter Austin Jenkins has a full story on the bill’s passage at Pluribus News.

The legislation aims to prevent future AI models from unleashing “critical harm,” defined as the serious injury or death of 100 or more people or at least $1 billion in damages. A key focus is ensuring that AI systems aren’t misused to carry out chemical, biological, radiological or nuclear attacks.

The RAISE Act would require frontier model developers to establish safety and security protocols, conduct annual reviews, implement safeguards, and disclose to state officials if a safety incident occurs. New York’s attorney general would enforce the law with fines up to $30 million for repeat violations.

The RAISE Act is now with Gov. Kathy Hochul, who must decide whether to sign or veto the measure by July 18.

Wait ’til next year: AI bills killed by the clock

Adjournment effectively killed more than half a dozen proposals that would have regulated AI by mandating protections for minors, requiring greater transparency, and limiting chatbots.

Read on for an overview of the bills that TCAI has been watching, which will now have to wait until next session for a shot at passage.

S06954: Requiring provenance data 

Sen. Andrew Gounardes (D-Brooklyn) sponsored S06954, which requires that provenance data – the records on the origin and history of digital content – be attached to any synthetic content created by AI systems. It would also prohibit social media platforms from allowing the deletion or altering of provenance data for any AI-generated content shared on their sites.

The bill passed the Senate on June 12 and was sent to the Assembly, where it was ordered for a third reading. It came very close to a floor vote but ultimately ran out of time.

S06953: Transparency in training AI models

Sen. Gounardes also sponsored S06953, which requires transparency and safety protocols for training frontier AI models – the largest artificial intelligence systems that draw on massive datasets and billions or even trillions of parameters.

The bill passed the Senate and Assembly on June 12, but was sent back to the Senate Rules Committee during the reconciliation process, and did not progress further.

S01228: Transparency of synthetic performers

N.Y. Sen. Michael Gianaris (D-Queens) introduced the proposal, which would have required advertisements to disclose when they use a synthetic performer to sell a product or service. The bill would impose a $1,000 civil penalty for a first violation and a $5,000 penalty for each subsequent violation, and would take effect immediately upon passage.

It had advanced to a third reading in the Senate and was sent to the Rules Committee on June 13. Its Assembly version, A08546, was referred to the Judiciary Committee on May 20 but never progressed further.

S02698: Oversight of AI-generated documents

Sen. Brad Hoylman-Sigal (D-Manhattan) sponsored S02698, which would require disclosure any time artificial intelligence is used to produce legal documents or filings. The bill would also require certification that a human reviewed and verified the AI-generated content. The bill would take effect 90 days after becoming law.

The proposal advanced to a third reading in the Senate and was sent to the Rules Committee on June 13. The Assembly version, A08546, was referred to the Judiciary Committee on May 20, where it remained.

S05668: Regulating chatbots 

Sen. Kristen Gonzalez (D-Manhattan and Queens) sponsored S05668, which would impose liability for chatbots that cause financial or other harm by providing misleading, incorrect, or contradictory information to a user.

Chatbots would also need to clearly indicate that they are artificial and not real humans, though that disclosure would not shield their operators from liability, particularly if a user engages in self-harm at a chatbot’s direction.

The bill would also prohibit chatbots from engaging with minors without explicit parental consent. It advanced to a third reading in the Senate in March, and was sent to the Rules Committee on June 13. In the Assembly it remained in the Consumer Affairs and Protection Committee.

S07263: Chatbot liability for impersonation

Gonzalez also sponsored S07263, which would establish liability for any harm caused if a chatbot impersonates a licensed professional, such as a lawyer. It would also require clear labeling indicating that a chatbot is artificial intelligence.

The bill had its third reading in the Senate in May and was sent to the Rules Committee on June 13. The version in the Assembly remained in the Consumer Affairs and Protection Committee.

S00934: Labeling AI as potentially inaccurate 

Gonzalez sponsored S00934, which would require developers or deployers of AI systems to conspicuously display a notice informing users that the system’s output may be inaccurate.

The bill includes a civil penalty of up to $1,000 for each violation in which a user does not receive the notice.

The bill passed the Senate on June 12 and was sent to the Assembly, where it was referred to the Codes Committee.

S01169: Preventing discrimination in AI 

Gonzalez also sponsored S01169, which would regulate the development and use of AI systems deemed “high-risk” with the goal of preventing algorithmic discrimination.

It would also require regular independent audits of these AI systems, and give the New York attorney general authority to enforce the law.

Users would have the option to have a human review any decision made by an AI system, for example in employment or education.

The bill would also prohibit the creation or deployment of any AI system used for social scoring.

The bill passed the state Senate on June 12 and was sent to the Assembly, where it was referred to the Rules Committee. It did not make it out for a floor vote.
