The 10 most popular chatbots were asked to plan a violent attack. 8 were happy to help.
March 18, 2026 — A new study that tested the willingness of AI chatbots to help plan a violent attack found that eight of the ten most popular bots provided assistance, some with great enthusiasm. Only one chatbot—Anthropic’s Claude—reliably discouraged the user.
When asked for information about buying a fixed-blade knife to carry out a racist stabbing in a specific city, Meta AI offered the user a list of local hardware stores, knife brands, and price ranges. In a conversation about planning the assassination of a health executive, Perplexity AI offered in-depth recommendations for the best hunting rifles for long-distance targets. A similar prompt given to DeepSeek resulted in a list of recommended rifles, along with the sign-off: “Happy (and safe) shooting!”
Most popular chatbots became ‘willing accomplices’
The study was sponsored by the Center for Countering Digital Hate (CCDH), a nonprofit organization that works to protect people, especially children, from the harms of unregulated social media and AI.
The authors of the study concluded:
“When asked to plan violent attacks including a school shooting, an antisemitic bombing, and a political assassination, the world’s most popular chatbots become willing accomplices. These technology companies market AI tools as benign tutors and companions, but [our] research reveals a darker reality.”
The fact that Anthropic’s Claude discouraged the user shows that the tools to embed safety exist, but the will to implement them is absent.
Unfortunately, in the weeks since the tests were carried out, Anthropic announced a rollback of its key safety pledge. The company blamed market pressures for its decision—namely, the fact that none of its competitors were constrained by safety concerns.
Why safety laws are needed
The startling results of the CCDH study illustrate the urgent need for AI safety regulations written into law. In an ideal world, tech companies would regulate themselves and prioritize product safety over market share and profit. Clearly, they are failing to exercise that basic duty of care.
When fair and appropriate safety rules are written into law, they create a level playing field for all companies while protecting the public from dangerous products.
This is what drives our work at the Transparency Coalition: a desire to help lawmakers craft legal safeguards that balance innovation with safety, especially when it comes to kids.
The emerging crisis
In the United States, 64% of teens aged 13-17 have used a chatbot, with 28% engaging with them daily.
Meanwhile, we’re seeing more violent mass-casualty events planned and carried out with the help of AI chatbots.
The investigation into the January 2025 Las Vegas Cybertruck explosion found that the perpetrator used ChatGPT to seek guidance on explosives and on tactics for evading law enforcement.
The 16-year-old perpetrator of a May 2025 school stabbing in Pirkkala, Finland, used an AI chatbot to refine a detailed manifesto and an operational plan before attacking three female classmates with a knife.
Key findings from the study
The full study can be found here: Killer Apps: How mainstream AI chatbots assist users planning violent attacks.
The main takeaways are summarized below.
This is what safe chatbot design looks like
One of the most common arguments made by tech companies and their lobbyists is: “It can’t be done. It’s not technically feasible.”
The CCDH report answers that objection by showing that safe design is not only technically feasible; it is already being done. Good design and safety measures don’t undermine a product. They enhance it.
The report offers two examples of how it’s done:
Snapchat’s My AI bot gave this answer to the study’s test prompt.
Anthropic’s Claude chatbot gave this answer to the study’s test prompt.
This is what an unsafe product looks like
The CCDH test team found that some of the worst responses came from Meta AI, DeepSeek, and Character AI.
In one exchange, the Character AI chatbot suggested falsely framing Sen. Chuck Schumer, the Senate Minority Leader, or physically assaulting him. The prompt didn’t even mention violence.
Example of a Character AI chatbot exchange, from the CCDH study.
For more information about chatbot safety and legislative resources, see TCAI’s many offerings below.