AI Leader Q&A: New York Assm. Alex Bores looks back (and ahead) at the RAISE Act
New York’s Alex Bores, author of the RAISE Act: “Don’t be intimidated. You can get the bill done.”
Editor’s note: This is the first in a series of interviews with state lawmakers who are leading the way in AI policy.
Jan. 5, 2026 — New York State’s RAISE Act was one of the most important pieces of AI safety legislation enacted in 2025.
Signed into law by Gov. Kathy Hochul on Dec. 19, the Act requires developers of the largest frontier AI models to create and publish information about their safety protocols, and report safety incidents to state authorities. The law also creates an AI oversight office within the New York Department of Financial Services. The Act is expected to undergo some fine-tuning during the upcoming 2026 session, based on changes requested by Gov. Hochul.
In this Jan. 2026 interview with Transparency Coalition Editorial Lead Bruce Barcott, the RAISE Act's author, Assemblymember Alex Bores, offers advice to fellow policymakers on how to steer AI-related measures to success.
Alex Bores: TECH BACKGROUND, POLITICAL RISING STAR
Born and raised in New York City, the 35-year-old Bores earned a master's degree in computer science before working as a software engineer for a number of tech companies, including Palantir Technologies, Merlon.ai, and Promise. Elected to the New York State Assembly in 2022, Bores has sponsored a number of AI-related bills in addition to the RAISE Act. The Transparency Coalition worked with him on his training data transparency bill, AB 6578, which passed the Assembly unanimously last year and will be considered by the Senate in 2026.
Bores won the national Future Caucus's 2024 Rising Star award, given to "Gen Z and millennial state lawmakers who embody the organization’s mission to transcend political tribalism by driving innovative, bipartisan legislation." The Manhattan-based legislator recently announced his candidacy for the Congressional seat held by Rep. Jerry Nadler, who has announced he will retire at the end of his term in 2026.
Above: NY Assemblymember Alex Bores in conversation with TCAI Editorial Lead Bruce Barcott.
codifying AI safety commitments
Bruce Barcott, Transparency Coalition AI: There will likely be a number of bills in other states based on your work in the RAISE Act. So let's talk about the original. What was the need you saw here?
Assemblymember Alex Bores: There is so much that we need to do about AI, because it's developing so quickly and touching so much of the economy. The RAISE Act was focused on the most extreme risks that come from this AI research.
All of the major labs—OpenAI, Anthropic, Google, et cetera—have said that their models could, for example, help a novice build a bioweapon. They are all preparing for losing control of the model if it starts acting on its own.
These are things they are thinking about voluntarily disclosing as risks, but there was no legal standard that held them to taking any action or even disclosing how they were thinking about it. Everything was based on voluntary commitments.
The good news is they made those commitments. The bad news is they also said, "If we see our competitors starting to slip in their commitments, we'll have to as well because of market pressures."
And so the RAISE Act is there to set a floor: a legal floor that says, "You can't go below this mark. You have to take actions to keep all of us safe."
What’s in the Act?
TCAI: What are some of the basic tenets of the RAISE Act?
Assm. Bores: Well, the RAISE Act changed quite a bit over the course of the session. The final version that's enacted into law applies to frontier developers, which are defined to be an extremely small set of companies (those doing frontier AI development that have over $500 million in annual revenue, the largest of the large).
The RAISE Act requires them to have a safety plan in which they describe in detail how they will handle specific risks. They have to disclose within 72 hours if certain things happen that indicate potential harm could come to others.
The Act also creates a New York State office focused on AI safety and AI transparency. That office has rulemaking authority so it can offer further guidance, within certain parameters, as to what needs to be disclosed. Each year that office will report to the legislature about improvements in statutory language that might be needed in order to ensure that AI development benefits all.
‘Be extremely clear,’ defend your choices
TCAI: What are some of the questions and concerns your colleagues had about the bill, and how did you address them?
Assm. Bores: A lot of the early questions were about what we just addressed: What is the need? Why are we doing this?
It was helpful that almost all of the [AI development] labs have said, “Hey, something like this is really necessary.” Others have confirmed that these are real risks, and someone needs to do something about it.
That helped people get over the hump and see the real need here.
A number of people asked about any potential downsides of the bill. That’s where you’ll have certain entrenched interests engage in good faith—and certain interests that don’t. So it’s helpful to be extremely clear about what the bill does, what it doesn’t do, and where you’ve made tradeoffs in designing the bill.
One of the big questions in any technology bill is: Do you set very specific standards around what needs to be done, [risking that] they might quickly become out of date if they’re locked in statutory language? Or do you set down more general guidelines that can keep up with the times but are less specific? And in the latter case there’s more uncertainty as to what an [AI developer] can do.
Regardless of which you choose you’re going to have people on the other side that say it's the wrong choice. They’ll either say "Your bill's way too vague," or they'll say, "Oh, your bill's stuck in the past."
Sometimes I’d just be explicit. I’d say: This is what I chose. Here’s a potential change I could make, but I think that would be worse. And people would say, “Yeah, okay, I understand why you made that change.” That gets rid of a lot of the potential opposition.
Tech industry ‘assumes other states will take action’
TCAI: How did you think about enforcement of the Act? You can set standards, but how do you make sure companies abide by them?
Assm. Bores: The fines in the RAISE Act ended up being far too low, and I will be the first to say that.
The Act does allow the state to set fines if a company doesn’t take the right action. The bill that originally passed established a $10 million fine for a first violation, up to $30 million for subsequent ones, and that would be scaled on how severe the action was. So a small paperwork error is not gonna be fined at $30 million. But willful negligence and willful disregard of what's required [by the Act] might be.
After a lot of lobbying, that amount ended up getting pushed down, so it's now only $1 to 3 million per violation.
The key point there is that industry was pushing and saying, “These fines are too large because if every state [sets a fine of] $10 to 30 million, then all of a sudden that becomes a huge amount.” We said we don’t think every state is going to take these actions.
In the end, it came down to $1 to $3 million on the assumption that other states are going to follow along [with versions of the RAISE Act] and do those same fines.
And so I would encourage other states that are thinking about taking action: Don’t listen to lobbyists saying, "Oh, it's done. It's taken care of elsewhere." The arguments they made in New York and California assumed other states were going to take action and also impose fines on them.
Expert backing helped
TCAI: Was there any particular support from groups or individuals that helped get this bill across the line?
Assm. Bores: Absolutely. We had a lot of support. We had two of the experts who are colloquially called the godfathers of AI, Yoshua Bengio and Geoffrey Hinton, who have two Turing Awards between them, the highest prize in computer science. Geoffrey Hinton's also a Nobel Prize winner. Yoshua Bengio is the most cited living scientist. So if anyone knows AI, it's these two. They led a letter with 40 other academics and AI experts calling for the RAISE Act to be passed.
We also had a lot of outside groups get involved. Labor unions wrote in, some venture capital firms wrote in, some startups wrote in, and a bunch of advocacy groups as well. The Secure AI Project was very involved, as was Encode Justice. And the Transparency Coalition came to New York and was really helpful in explaining everything that's going on in AI, and why the RAISE Act would be a good bill to pass.
working with Gov. Hochul
TCAI: How are you working with New York Gov. Hochul and her office in the coming session? There are going to be some revisions to the RAISE Act; tell us about what you expect to see there.
Assm. Bores: I said this before the RAISE Act and I'll say it afterwards: Gov. Hochul is the most forward-looking governor in the country in terms of balancing innovation and all that we want to see from AI, and the potential risks associated with it.
In her last two budgets, she's done things like [create] Empire AI, which invested a lot of money in buying GPUs, buying the advanced chips needed for AI research for our universities, so that we could accelerate research in New York and give our academics a leg up, which leads to more spinoffs of companies that create jobs in New York.
So she's been very forward-looking on the innovation side. She’s also been worried about the risks. In those same budgets she put forward requirements around chatbots: that they actually disclose they are a chatbot, that [AI developers] have a responsibility for looking for language that might be indicative of potential self-harm, and directing people to resources.
So she's really been working that balance. The RAISE Act is square within that—thinking that we want a lot of innovation, but we also wanna be sensitive to the potential harms. So there's a lot more to do going forward. I think there will be some tweaks. Some things were taken out of the RAISE Act this past year, and the last six months have shown us they're even more necessary now.
So there may be additional aspects to add in [to the RAISE Act], and there are also bills that got close to passing last year that I'm excited to get over the finish line this year.
Coming in 2026: transparency, content disclosure bills
TCAI: Tell us a little bit about that. What are you working on in the AI-related area in the coming session?
Assm. Bores: Two bills I've worked really closely with the Transparency Coalition on: one on training data transparency, and one on content provenance.
The training data transparency bill [AB 6578] passed the Assembly unanimously last year. We just ran out of time in the Senate, and we're hoping the Senate can pass it quickly this year. That measure is based on a law in California that requires the developers of AI models to disclose basic information about the data that went into training it.
This is important both for holding accountable the people who are training [AI models], but more importantly it allows you as a user to understand what that model should be able to do and should not. To see if it was trained on data that's actually relevant to what you are trying to work with.
The second piece, on content provenance, I think, is one of the most under-covered stories about artificial intelligence. Because there is a real solution out there for deepfakes. We have been told for so long that it's always gonna be this cat-and-mouse game of better generators, then better detectors, then better generators, and that it's never gonna get to a resolution. Instead, [through the use of content provenance tools] you can flip that question on its head and verify the things that are true. Therefore, anything that doesn't have that verification, you're immediately suspicious of.
It's using math and technology in a way that’s similar to how we solved the problem of online banking. For those of us old enough to remember when the internet came about in the '90s, people were saying it'll never be used for online banking, because you can't trust that you’re talking to the right computer across the screen. We found ways to verify that.
And now with [generative AI] there is a free, open-source metadata standard, called C2PA, that can be attached to any standard audio, image, or video file format, and cryptographically proves whether that piece of content was taken from a real device, generated by AI, and/or how it's been edited over time.
Now, you need to get the regulations right, to ensure that they require the verification information but not personally identifiable information, and that they can evolve over time as new standards come to be. If we can have policies that encourage the usage of that or similar standards, we can make real progress in eliminating deepfakes and restoring a level of trust in what we see and hear online.
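The "verify what's true, distrust the unsigned" flip that Bores describes can be sketched in a few lines of Python. This is a toy illustration only: it uses a shared-secret HMAC and a made-up manifest layout, whereas C2PA itself uses certificate-based asymmetric signatures and a much richer manifest format.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration. Real content-provenance
# systems (like C2PA) use per-device certificates, not a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_provenance(content: bytes, source: str) -> dict:
    """Build a signed manifest recording where the content came from."""
    manifest = {
        "source": source,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """True only if the manifest is untampered AND matches the content."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # manifest was altered, or never properly signed
    return record["manifest"]["sha256"] == hashlib.sha256(content).hexdigest()

photo = b"raw camera bytes"
record = attach_provenance(photo, source="camera:device-1234")
print(verify_provenance(photo, record))               # True: intact and signed
print(verify_provenance(b"deepfaked bytes", record))  # False: content swapped
```

The key inversion is in the failure mode: a forger can't produce a valid signature, so anything lacking one is treated as suspect by default, rather than detectors having to prove each fake is fake.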
So I'm really excited about getting those two bills done this year.
advice: expect opposition, start early, and act
TCAI: Last question for you. Do you have any words of experience and wisdom for legislators in other states who might be thinking about introducing a bill in this area, which can be a little intimidating?
Assm. Bores: Yes. Don’t be intimidated.
First, you're gonna be told you don't understand it. I promise you, if you dive into it, you'll understand it more than the lobbyists who are coming to speak to you about it.
You can actually really get to know this—and [don’t be afraid to] rely on other experts that you can talk to. There are many groups out there that can be quite helpful.
And you know it's so important. You're hearing it from your constituents. So fight through it.
I think the only other tip I would give is, expect opposition and start early.
I knew there was going to be a lot of opposition to the RAISE Act, so I started early. I sent drafts of the bill to all of the major [AI development] labs and asked for red lines multiple times.
That helped me engage in good faith, ask for suggested changes, and also understand where potential opposition might be coming from.
Perhaps more important in terms of getting it done, I spoke to other legislators and to internal staff, and I said, "Hey, there's going to be a lot of opposition to this. I know it. Here's why, and here’s why it's still urgent."
Because in New York—I don't know how it is in every other state—the very end of session is when we do most of our legislating. That’s when it’s easy to say, "Oh, just wait till next year." Like, "Let's analyze this in the off session."
And so before we got to the end of session, I was telling people: "It's really important. It has to be done this year. There is going to be opposition. Get ready for it."
If you start your session by telling people that message, when it comes, they're all prepared for it, and you can still get the bill done.