This AI toy exposed more than 50,000 kids' chats before anybody caught it

Last month a curious researcher looked into the data security of a Bondu AI toy like the one pictured above. He gained access to personal data and more than 50,000 children’s chats using nothing more than a Gmail account.

Feb. 10, 2026 — At the Transparency Coalition and our resource site Parents Playbook for AI, we’ve documented the rising concern around children’s toys embedded with AI technology.

Most of these toys are plushies, robots, or screens that contain chatbots similar to those offered by ChatGPT and Gemini, but without any safeguards for kids or families.

The healthy mental, physical, and emotional development of a child requires the work of play, of imagination, of frustration and challenge, all of which are absent with AI. Dr. Dana Suskind, founder of the TMW Center for Early Learning at the University of Chicago, has said: “When parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”

The misuse of data is also a huge concern, one borne out by a discovery made last month by Joseph Thacker, a security researcher.

A neighbor has a curious question

Thacker’s neighbor, the mother of small children, mentioned to him that she’d ordered a couple of Bondu toys for her kids. Bondu makes a $199 dinosaur plushy that contains an AI-driven chatbot. The neighbor asked Thacker what he thought of the toys.

Thacker looked into it. Wired magazine recounted what happened next:

“With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu’s web-based portal, intended to allow parents to check on their children's conversations and for Bondu’s staff to monitor the products’ use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy.”

No hacking required

Access to the children’s chats required no special skill. Simply by logging in with a Gmail account, Thacker obtained access to names, birth dates, family member names, private conversations, and detailed summaries of every chat between a child and their Bondu toy.
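The flaw Thacker describes is a classic case of authentication without authorization: the portal verified that a visitor was signed in, but never checked whether that visitor was entitled to the records they requested. A minimal sketch of the bug class, with entirely hypothetical names (Bondu has not published its code), looks like this:

```python
# Hypothetical illustration of "broken access control": the server
# checks that a user is logged in, but not that the user owns the
# data being requested. All names and data here are invented.

CHAT_TRANSCRIPTS = {
    "family-123": ["Child: hi!", "Toy: hello, friend!"],
    "family-456": ["Child: tell me a story", "Toy: once upon a time..."],
}

def get_transcripts_vulnerable(user, family_id):
    # BUG: any signed-in user (e.g., anyone with a Gmail login)
    # can fetch any family's transcripts, because the server never
    # asks whether this user belongs to that family.
    if user is None:
        raise PermissionError("login required")
    return CHAT_TRANSCRIPTS[family_id]

def get_transcripts_fixed(user, family_id):
    # FIX: an authorization check -- verify the signed-in user
    # actually owns the record before returning it.
    if user is None:
        raise PermissionError("login required")
    if user.get("family_id") != family_id:
        raise PermissionError("not authorized for this family's data")
    return CHAT_TRANSCRIPTS[family_id]

# A stranger with a valid login but no relationship to family-123:
stranger = {"email": "anyone@gmail.com", "family_id": "family-999"}
# The actual parent of the children in family-123:
parent = {"email": "parent@example.com", "family_id": "family-123"}
```

In the vulnerable version, `stranger` can read `family-123`'s chats simply by being logged in; the fixed version rejects the request. The one-line authorization check is the "proper authentication measure" class of fix described below.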

Being able to see all these conversations was a massive violation of children’s privacy.
— Joseph Thacker, security researcher

Thacker and Margolis contacted Bondu company officials to let them know of the open data exposure. The company acted immediately to take down the toy’s app. Engineers added proper authentication measures and relaunched the product the following day.

Why isn’t data security a top priority?

The toy company acted quickly to fix the problem, but that raises the question: Why was the security of a child’s data, personal information, and intimate conversations not even considered prior to launching the product?

This is a tech trend we see repeated all too often, especially in the AI space: Companies rush an untested, unready, and unsafe product to market in a frenzied race to “move fast and break things.” Again and again the things that are broken are trust, safety, and the health of our children.

“There are cascading privacy implications from this,” Margolis told Wired. “We’re talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.”

Using AI to create AI?

One of the more subtle points made by Margolis and Thacker bears some thought: They suspect that the exposed toy console may have been “vibe-coded,” using AI systems to create other AI systems.

Although they don’t know for sure (only Bondu officials do, and they aren’t talking about it), it’s an interesting new element of risk to consider as more tech companies downsize their human workforce while investing heavily in AI technology.


Learn more about AI toys
