Guide to the RAISE Act, the New York Responsible AI Safety and Education Act

The RAISE Act, approved by the New York State legislature on June 12, is one of the most significant AI bills passed during the 2025 legislative season.

The Transparency Coalition has organized this guide to offer a plain-language overview of the Act, which still awaits a signature or veto from Gov. Kathy Hochul.

what is the raise act?

The Responsible AI Safety and Education Act (RAISE Act, S 6953B) focuses on ensuring the safety of AI models that cost more than $100 million to train and exceed a defined computational threshold.

The legislation aims to prevent future AI models from unleashing “critical harm,” defined as the serious injury or death of 100 or more people or at least $1 billion in damages. A key focus is ensuring that AI systems aren’t misused to unleash chemical, biological, radiological or nuclear attacks. 

The RAISE Act would require frontier model developers to establish safety and security protocols, implement safeguards, publish a redacted version of those protocols, and allow state officials access to the unredacted protocols upon request. New York’s attorney general would enforce the law with fines of up to $10 million for a first violation and up to $30 million for repeat violations. There is no private right of action allowed with regard to violations.

The RAISE Act is now with Gov. Kathy Hochul, who must decide whether to sign or veto the measure by July 18.

what’s covered by the raise act: ‘frontier models’ only

The RAISE Act is intended to prevent the largest frontier AI models from unleashing critical harm.

A frontier model is defined in the RAISE Act as an AI model trained using more than 10^26 computational operations, at a compute cost of more than $100 million; or an AI model produced by applying knowledge distillation to a frontier model. (Knowledge distillation is a way to use outputs from a large frontier model to train smaller AI models.)

In non-legal terms, a frontier model is a highly advanced, large-scale AI model that pushes the boundaries of AI in areas like NLP, image generation, video and coding. Frontier models are typically trained on extensive datasets with billions or even trillions of parameters.
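To make the definition concrete, here is a minimal sketch in Python of how the Act's two prongs could be checked. The names (`ModelProfile`, `is_frontier_model`) and the structure are hypothetical illustrations, not anything defined in the statute.

```python
from dataclasses import dataclass

# Thresholds from the RAISE Act's definition of a frontier model.
OPS_THRESHOLD = 1e26            # training computational operations
COST_THRESHOLD = 100_000_000    # training compute cost, in US dollars

@dataclass
class ModelProfile:
    training_operations: float     # total operations used in training
    training_cost_usd: float       # compute cost of that training
    distilled_from_frontier: bool  # produced by distilling a frontier model

def is_frontier_model(m: ModelProfile) -> bool:
    """First prong: the compute/cost test. Second prong: the distillation test."""
    compute_prong = (m.training_operations > OPS_THRESHOLD
                     and m.training_cost_usd > COST_THRESHOLD)
    return compute_prong or m.distilled_from_frontier

# A model trained with 3e26 operations at a $250 million compute cost qualifies:
print(is_frontier_model(ModelProfile(3e26, 250_000_000, False)))  # True
```

As for the second prong, knowledge distillation itself is a standard machine-learning technique. A rough sketch of its core, the temperature-scaled teacher-student loss (assuming PyTorch is available; the statute prescribes no particular method):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Soften the teacher's output distribution with temperature T, then
    # train the smaller student model to match it via KL divergence.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (T * T)
```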

As of June 2025, only a handful of AI models were considered frontier models. They include:

  • ChatGPT, developed by OpenAI

  • Claude, developed by Anthropic

  • Copilot, developed by Microsoft

  • Gemini, developed by Google

  • Llama, developed by Meta

  • Mistral, developed by Mistral AI

  • DeepSeek, developed by DeepSeek

the ‘critical harm’ the act means to prevent

Critical harm means the death or serious injury of 100 or more people, or at least $1 billion in damage to money or property, caused by or materially enabled by a frontier model.

Critical harm may include the creation or use of a chemical, biological, radiological, or nuclear weapon. It may also involve an AI model engaging in conduct that, if committed by a human, would constitute a crime specified in New York’s Penal Law.

duties of a frontier model developer under the raise act

The main requirement under the RAISE Act is that frontier model developers prepare and implement a safety & security protocol.

Pre-release responsibilities:

Before making a frontier model available to consumers in New York State, an AI developer must:

  • Implement a written safety & security protocol.

  • Retain an unredacted copy of the safety & security protocol, including revisions, for as long as the frontier model is deployed plus five years.

  • Publish a copy of the safety & security protocol with appropriate redactions, and grant access to the unredacted copy (upon request) to the New York State Attorney General and the New York State Division of Homeland Security and Emergency Services. (‘Appropriate redactions’ may be made to protect public safety, protect trade secrets, prevent the release of confidential information as required by law, protect employee/customer privacy, or prevent the release of information otherwise controlled by state or federal law.)

Post-release responsibilities:

Following the release of a frontier model, the model’s developer is required to conduct an annual review of safety & security protocols.

Disclosure of safety incidents

Large developers are required to disclose any safety incident affecting the frontier model to the New York State Attorney General and the New York State Division of Homeland Security and Emergency Services. This disclosure must be made within 72 hours of the developer learning of the incident.
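As a trivial illustration of that reporting window (the function name here is hypothetical), the 72-hour clock runs from the moment the developer learns of the incident:

```python
from datetime import datetime, timedelta, timezone

DISCLOSURE_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_at: datetime) -> datetime:
    # The Act starts the clock when the developer learns of the incident.
    return learned_at + DISCLOSURE_WINDOW

learned = datetime(2026, 1, 5, 9, 30, tzinfo=timezone.utc)
print(disclosure_deadline(learned))  # 2026-01-08 09:30:00+00:00
```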

truth in statements and disclosures

Large developers are prohibited from knowingly making false or materially misleading statements or omissions in, or regarding, documents dealing with safety & security protocols and the disclosure of safety incidents.

enforcement and penalties

The RAISE Act will be enforced by the New York State Attorney General, who may bring a civil action against a developer for violation of the RAISE Act.

A first violation may draw a civil penalty of up to $10 million; penalties for subsequent violations are capped at $30 million.

The RAISE Act does not allow a private right of action associated with violations.

effective date

If Gov. Kathy Hochul signs the RAISE Act, it will take effect 90 days after her signature.
