New interview with TCAI CEO Rob Eleveld: ‘Creating space for ethical companies to thrive’
Transparency Coalition CEO Rob Eleveld: “In consumer tech there’s no accountability.”
Nov. 13, 2025 — As state lawmakers prepare for the upcoming 2026 legislative session, Transparency Coalition leaders continue to provide technical and policy expertise to elected officials crafting AI-related bills. The issues around which we hear the most concern are AI chatbots, harmful deepfakes, and protecting kids against the negative effects of AI products.
Earlier this month Transparency Coalition CEO Rob Eleveld sat down with Clark Westmont, a leading writer on technology and leadership, for a conversation about the state of AI and responsible governance.
That conversation appears in Westmont’s most recent post at On a Better Note, his outstanding Substack newsletter. The full conversation is available here.
We’ve pulled a few excerpts below.
The Harms No One Tested For
At the center of the Coalition’s current work are two explosive issues: AI chatbots and deepfakes. Eleveld believes both are reshaping society faster than any government or company is prepared for — and that the consequences are falling hardest on kids.
“To understand what’s happening now, you have to understand the background,” he says. “In 1996, Section 230 of the Telecommunications Act gave platforms a get-out-of-jail-free card. If someone posts something terrible, it’s not the platform’s fault. That trained an entire generation of consumer tech executives not to care what happens to their customers.”
For decades, that immunity allowed social media companies to profit from engagement without responsibility. “In business-to-business tech, if you hurt your customer, they cancel the contract and sue you,” Eleveld says. “In consumer tech, there’s no accountability. So these executives were trained that it doesn’t matter what happens to your users — it’s not your problem.”
But AI, he argues, is different. “AI is not a platform. It’s a product. And judges are already starting to rule that Section 230 doesn’t apply.” That distinction opens the door to a wave of product liability lawsuits — and, more importantly, to legislation that treats AI like any other consumer technology that must meet safety standards.
Kids Are Being Specifically Targeted by AI
“The truth is, these systems are almost completely untested,” he says. “There hasn’t been any long-term testing on mental health, especially for kids. Character.AI, for example, was targeting ten- to fourteen-year-olds. That’s the Joe Camel playbook — addict kids early.”
It is part of a broader trend putting kids at risk: Meta has suppressed research on child safety, and OpenAI announced erotica for ChatGPT days after lobbying against child safety legislation. As California Assemblymember Rebecca Bauer-Kahan put it, “AI companies will always choose profits over children’s lives.”
What’s worse, Eleveld explains, is how those systems are built. “They’re trained on everything that’s been digitized. That includes copyrighted books and news, but also the dark web — grooming conversations, child sexual abuse material, scams, all of it. So when a kid talks to one of these bots about feeling sad or lonely, the model can start pulling responses from the ugliest corners of the internet. It remembers those interactions and keeps building on them.”
The result, he says, has already turned deadly. “There’s a lawsuit in Florida where a 14-year-old boy was groomed by an AI character and ended up committing suicide. In another case in New York, ChatGPT allegedly told a 16-year-old boy how to tie the noose he used to hang himself. These are not tested products. These are products that kill kids.” What compounds the problem is that “companionship” has become the leading use of generative AI, according to research published by the Harvard Business Review.
Turning the Regulatory Tide
Eleveld believes smart regulation doesn’t stifle innovation; it levels the playing field. “Look, I’m not here to tell companies how to maximize profits,” he says. “I’m here to protect kids and families. Competition can happen inside guardrails. That’s what consumer protection does — it creates space for ethical companies to thrive.”
He points to the car industry as a precedent. “In the 1970s, Ford built a Pinto that blew up. In the ’80s, Volvo started marketing around safety. Consumer product legislation forced the entire industry to evolve. Now no one builds an unsafe car. Everybody has airbags and anti-lock brakes. That’s where we want to get with AI.”
Read the full conversation at ‘On a Better Note’
The full interview with Rob Eleveld can be read at Clark Westmont’s On a Better Note.