44 state attorneys general warn AI CEOs: ‘If you harm kids, you will answer for it’
The National Association of Attorneys General letter comes on the heels of disturbing reports about the use of AI chatbots by minors. (Photo by bruce mars on Unsplash)
Aug. 28, 2025 — In an extraordinary show of bipartisan agreement, the attorneys general from 44 states released a warning to the nation’s biggest AI developers earlier this week.
The letter, written to the CEOs of OpenAI, Microsoft, Meta, Google, Anthropic, xAI, Nomi, Replika, Character.AI, and other leading AI corporations, focused on AI products and the harm they pose to children. The AGs warned the CEOs that they were closely watching the emerging evidence on kids and AI.
That evidence includes both in-depth reports on the dangers AI poses to kids and high-profile lawsuits filed by the parents of kids who took their own lives, in part due to their interactions with AI chatbots.
The letter opened:
“We, the undersigned Attorneys General of 44 jurisdictions, write to inform you of our resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.”
The AGs praised the innovation and success coming from America’s AI sector, but warned that “we need you to succeed without sacrificing the well-being of our kids in the process.”
Disturbing revelations about Meta’s chatbot rules
The letter seems to have been spurred in part by recent revelations regarding Meta and the company’s disregard for child safety in its rush for engagement.
On August 14, a Reuters investigative article revealed an internal Meta policy document that set the tech corporation’s rules for chatbots. Those rules permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” among other things. The Reuters report followed a report by the Wall Street Journal showing that Meta’s AI chatbots flirt or engage in sexual roleplay with teenagers, and a Fast Company article showing that some of Meta’s sexually suggestive chatbots have resembled children.
The AGs wrote:
“Recent revelations about Meta’s AI policies provide an instructive opportunity to candidly convey our concerns. As you are aware, internal Meta Platforms documents revealed the company’s approval of AI Assistants that ‘flirt and engage in romantic roleplay with children’ as young as eight. We are uniformly revolted by this apparent disregard for children’s emotional well-being and alarmed that AI Assistants are engaging in conduct that appears to be prohibited by our respective criminal laws. As chief legal officers of our respective states, protecting our kids is our highest priority.”
Not the first warning they’ve sent
The AGs noted that many of them wrote to Meta in May 2025 about “a damningly similar matter where Meta AI’s celebrity persona chatbots were exposing children to highly inappropriate sexualized content.”
The risks to children and teens are not limited to Meta. “In the short history of chatbot parasocial relationships, we have repeatedly seen companies display inability or apathy toward basic obligations to protect children,” the AGs wrote.
A recent lawsuit against Google alleges a highly sexualized chatbot steered a teenager toward suicide. Another suit alleges a Character.AI chatbot intimated that a teenager should kill his parents. Even as the AG letter was being issued, a new lawsuit was filed against OpenAI alleging the company’s liability for the suicide of a 16-year-old boy in California.
Machines are not above the law
“Exposing children to sexualized content is indefensible,” the AGs told the tech CEOs. “And conduct that would be unlawful—or even criminal—if done by humans is not excusable simply because it is done by a machine.”
The letter makes it clear that the top cops in 44 states are watching tech companies closely, and expect changes in how AI developers design their products.
They wrote:
“Young children should absolutely not be subjected to intimate entanglements with flirty chatbots. When faced with the opportunity to exercise judgment about how your products treat kids, you must exercise sound judgment and prioritize their well-being. Don’t hurt kids. That is an easy bright line that lets you know exactly how to proceed.
You will be held accountable for your decisions. Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media. We wish you all success in the race for AI dominance. But we are paying attention. If you knowingly harm kids, you will answer for it.”
The full letter is available below.