Anthropic abandons safety policy: This is why we work to make AI safeguards the law

Anthropic CEO Dario Amodei: Safety is our core brand principle. Until it isn’t.

Feb. 26, 2026 — Anthropic, the AI company founded on the promise of building models with the strictest safety precautions, announced on Tuesday that it has abandoned one of its core safety principles.

Why? Because, company officials said, operating with safeguards could hinder Anthropic’s ability to compete in the rapidly evolving AI market.

CNN reported:

“Anthropic’s previous policy stipulated that it should pause training more powerful models if their capabilities outstripped the company’s ability to control them and ensure their safety—a measure that’s been removed in the new policy. Anthropic argued that responsible AI developers pausing growth while less careful actors plowed ahead could ‘result in a world that is less safe.’”

In other words: Anthropic says it is abandoning its core safety principle in order to make the world more safe. If that sounds like it makes no sense, it’s because it makes no sense.

This is why safeguards must be written into law

The news is dismaying but not entirely surprising. This is exactly why we work to write appropriate AI safeguards into law.

At the Transparency Coalition, we help legislators and parents protect kids and teens from the harms of generative AI. That involves empowering parents to make decisions about their family’s digital diet. We also work with lawmakers to help them craft bills and understand policies that make AI work better and keep communities, states, and nations safer.

We often hear pushback from the tech lobby: Let the industry police itself. The market will reward the makers of safe, high-quality products and punish the bad actors.

We wish that were true. Decades of hard experience in other industries, including finance, medicine, and auto manufacturing, have taught us that a balancing second hand, the hand of legislation and government oversight, is often needed to keep the worst impulses of the market in check.

Voluntary safeguards undone by market pressures

The lesson of Anthropic isn't that world-class AI models can't be created within the bounds of safety policies. In recent months Anthropic's Claude has come to be regarded as an industry-leading large language model. The lesson is that competitors operating without comparable safety principles threaten to overtake Anthropic precisely because they aren't bound by those principles.

Back in 2023 and 2024, the major AI companies (Anthropic, OpenAI, Microsoft, Amazon, Meta) made widely publicized commitments to develop AI with a focus on “safety, security, and trust.”

Those commitments have now been largely abandoned and forgotten.

That’s the problem with company policies. They change. Sometimes overnight.

Laws and legal standards may also change, but that change almost always requires public discussion, deliberation, study, and negotiation. It’s slow. It’s designed to be slow, in order to get it right.

Laws require all competitors to play by the same rules. They exist so that basic and appropriate product safety standards don’t stand or fall on the capricious decision of a single CEO. They allow companies to invest in strong safety measures and high-quality products, knowing that their competitors must do the same—not because they want to, but because the law demands it.
