Q&A with the Center for Humane Technology: ‘Tech should help humans flourish’

June 10, 2025 — This interview kicks off a series of conversations with Transparency Coalition partners. These are TCAI allies whose work inspires us, whose opinions challenge us, and whose efforts bolster the cause of transparency and security in AI.

The Center for Humane Technology is a nonprofit focused on steering society toward technology that serves humanity rather than harms it. Co-founded in 2018 by former Google design ethicist Tristan Harris, the organization partners with the Transparency Coalition and other stakeholders in this mission.

TCAI sat down with leaders of the Center for Humane Technology’s policy team to learn where the group is gaining traction, how the tech lobby is pushing back, and what keeps them up at night.

This interview has been edited for length and clarity.

THE EXPERTS

Camille Carlton
Policy Director

Camille Carlton steers CHT’s policy strategy, supporting policy initiatives that help align technology with the public interest.

Pete Furlong
Senior Policy Analyst

Pete Furlong helps provide the foundational analysis and research that underpin CHT’s policies.

Lizzie Irwin
Policy Communications Specialist

Lizzie Irwin’s communications work provides a bridge between complex policy concepts and key public stakeholders.


THE CONVERSATION

TCAI: The notion of humane technology is a new one for many people. How do you define that phrase?

Pete Furlong: I think the core principle is that technology should be put in service of people. We should strive to create technology that's useful, productive, helpful, reflects our values, and builds towards our goals as a society.

Camille Carlton: I think of how technology can support a human-first world. What does it mean to create products that help humans flourish? We want technology that helps us problem-solve, but that also lets us live our lives with dignity and joy. I think a bit less about the technology itself and more about the way in which it changes our interactions as people.

Lizzie Irwin: Building off of what Camille said: How do we make sure technology continues to extend the human element of what a person is and not overtake what we know to be human?

TCAI: Let’s flip that on its head: What’s inhumane technology?

Camille: Broadly speaking, it’s technology that’s harmful or takes advantage of our human vulnerabilities. It’s technology that is deceptive, that preys on our human need for relationship and connection and exploits that need to satisfy corporate growth and profits.

Pete: I would point to technology that manipulates and exploits our attention. That goes back to CHT’s heritage as an organization and our work on social media, but we’re also seeing it now in the AI space, with things like companion AI chatbots.

Camille: CHT was established in 2018 in response to the emergence of the attention economy. The attention economy is this digital economy we’ve seen created through social media, through advertising, through search, in which our attention is the most valuable resource. We’re not paying for the products themselves, right? We don't pay for Facebook, we don't pay for Instagram. In many cases we're not paying for chatbots. What we're actually paying with is our attention. That attention gets monetized via our data, via advertising and all these different methods.

CHT was created because we saw how much time we were spending online and the broader implications of that, not just for us as individuals but for society as a whole. Our mission has been to figure out how we can design and develop technology that is in the public interest, and how we can shift incentives to make sure that from the very beginning technology is serving humanity, as opposed to taking advantage of those human vulnerabilities.

TCAI: What are some of the tactical things CHT is doing right now to create the changes you want to see?

Camille: For the policy team, we focus on steering incentive-shifting policies. What types of policies actually change the business model? What types of policies will make sure that the technologies built by these companies are actually beneficial to the public interest? Right now, we’re looking at what we need to ensure we’re living fruitful, dignified lives in the age of AI. So this includes things like incentivizing safe innovation, creating mechanisms for accountability and responsibility, and protecting and prioritizing people’s rights and freedoms.

Pete: A big piece of this is education and awareness, helping policymakers and the public understand the incentives driving the development of technology. The better we understand those incentives, the more effective we can be on the policymaking side.


CHT’s Framework for Incentivizing Responsible Artificial Intelligence Development and Use is a great resource for policymakers tackling the challenge of regulating AI.
TCAI: So storytelling has become an important part of your advocacy?

Pete: At the end of the day, people move the needle. People are what drive the stories.

There’s a lot in the news about AI capabilities. But what moves people, in our experience, is hearing about their own neighbors, their parents, their educators.

So, when we’re out in the field, whether at the federal or state level, the most powerful thing has been identifying key constituents who can speak to the ways they've interacted with technology for good or for bad.

TCAI: Tell us about some of the victories you’ve had in these efforts.

Lizzie: Last year we were involved with the effort to pass the age-appropriate design code in Vermont. This is a model design code that addresses online platforms accessed by kids.

It rests on two pillars: safety by design and privacy by default.

I call this a legislative win because the bill passed both chambers last year, though it was unfortunately vetoed by the governor. Thankfully, members of the coalition have picked up where we left off and are driving it forward. Right now we’re seeing it progress through the Vermont House again, quite successfully. So we’re hoping to see it make it across the finish line and then some.

[Editor’s note: The 2025 design code bill was approved by the Vermont legislature. Gov. Phil Scott signed it into law on June 12.]

TCAI: Tell us about the active litigation CHT is supporting. 

Pete: We’re supporting two lawsuits against Character.AI, which, as I mentioned, is a companion AI chatbot platform designed so that users can chat with different characters. One of the lawsuits, in federal district court in Florida, was filed by Megan Garcia. She’s the mother of Sewell Setzer, who died by suicide this past year after developing an extended relationship with a character on Character.AI. The lawsuit focuses on the ways in which the platform was intentionally designed to look and feel human in order to engage users. This resulted in the chatbot engaging in sexually explicit behavior with a minor, exploiting and manipulating his attention, and establishing a deeply complicated relationship with him, pushing him to his limits.

Character.AI designed an unsafe product, marketed it to minors, and in many ways understood the potential harms of a chatbot like this, or at the very least should have been aware of them. But they did not take concrete steps to address those harms.

A second federal court case, in Texas, follows a similar fact pattern. In this case the two minors mentioned are, fortunately, still with us. The families have decided to remain anonymous because they're still dealing with the harms of this relationship on a day-to-day basis. That case deals with the sexual exploitation of a minor as well as another minor who was pushed to violence against his own family.

Camille: Even though this case is ongoing, we’ve already seen the impact it has had.

Since these lawsuits were launched, we’ve seen five different states introduce bills around companion bots. Attorneys general are also taking this issue seriously. Texas has launched an investigation and Colorado released an advisory. We've seen the [Sewell Setzer] case mentioned in several Congressional hearings as a reason that we need legislation around AI. These cases have opened up the conversation broadly both around the kitchen table and in policy spheres.

TCAI: What are the plaintiffs hoping to get out of the litigation?

Camille: Fundamentally, particularly for Megan Garcia, this is about changing the product and changing the company’s behavior. For her it’s about making sure this doesn’t happen to anyone else.

At this time, there’s no dollar amount attached in terms of damages the families are seeking. That said, disgorgement is one of the initial asks that counsel is looking at. [Disgorgement is the forced surrender of profits or other gains obtained through illegal or unethical means; in this case it could result in the deletion of Character.AI’s underlying LLM.] I think that’s likely going to be negotiated. But we feel strongly that you cannot fundamentally change the model and make it safer without starting from the beginning with better data practices. We believe it’s one of the starting remedies to ensure that the product is safe for young users moving forward.


A Tech Ethicist on How AI Worsens Ills Caused by Social Media

An interview with CHT co-founder Tristan Harris


TCAI: As you watch where technology, and particularly AI, are going, what keeps you up at night?

Camille: I think about the way this is altering relationships and human connection.

The Character.AI lawsuit revolves around this horrific case, but it’s also the tip of the iceberg. The AI interaction we saw with Sewell Setzer is an example of a broad technology-driven reshaping of connection, intimacy, and empathy.

I think about the ways in which the things that make us uniquely human are going to be mediated by AI in the future. I struggle with the question of how we retain our humanity when more and more people are driven to use these products as a substitute for real human connection.

Lizzie: I’m fearful of the way these technologies, particularly social media, divide us and erode our critical thinking abilities.

I fear for incoming generations if they're not taught to think critically without the use of this technology. How will people understand each other and the information that is coming to them?

Pete: The Character.AI litigation has been impactful because it’s a very clear and concrete harm that folks understand. But one of the challenges moving forward is spreading the understanding that it’s not just about companion AI chatbots; it’s about the industry at large. When we think about our relationships, the use of information, and the use of data as inputs to these models, these are issues that are systemic across the AI industry. They’re a direct result of the development incentives at play.

TCAI: With so much momentum around AI, many parents feel helpless. What are some potential solutions?

Pete: We think about three aspects of any solution: political viability, industry buy-in, and technical feasibility. One of the big things we’ve been talking about is orienting around how these products are designed.

For example, when you look at the Character.AI cases, a big challenge there is that minors were exposed to a lot of content that they shouldn't have been. However, that's not the only issue at play.

A huge challenge is the design of the product, the way in which it captivates and manipulates the user’s attention and emotions and then serves harmful content.

When we think about what needs to change here, it's the actual design and development of the product. So, we talk about how we can incentivize better design. I think the age-appropriate design code Lizzie mentioned earlier is an example of a legislative effort that takes the design approach we’re talking about.

We’re also supportive of applying product liability to the AI space. We believe these AI-driven systems are products, and as products, it’s important to think about the role design plays in the harms that result from their use. That’s something we have a fairly standardized way of thinking about in terms of product development and liability, with a history that reaches back far beyond AI and the software industry.

Lizzie: I think it’s also important to meet institutions and people where they are. We already have really resilient laws on the books, and we’re trying to future-proof them as best we can. Working with what’s already on the books is going to do a lot more to help right now, before we start thinking about brand-new systems that might not be politically feasible at this time.

That’s why we think the liability approach is a viable way forward. Most people don’t yet understand the technology, but they do understand liability. Putting the onus on the designer to create a product that doesn’t cause harm is something we can all get behind.

TCAI: Let’s talk about the 800-pound gorilla in the room. A handful of nonprofits are trying to take on a multibillion-dollar industry with a huge investment in lobbying against the kind of guardrails you’re advocating for. What challenges are you encountering as you’re up against this Goliath of an industry?

Lizzie: It's a lot. I think a lot of people, particularly policymakers, are catching on to the Big Tech tactics. They've seen what happened with social media and realize we can't let industry lobbying be an impediment for another 20 years, because, frankly, that's not a safe idea.

What we’ve seen is that it’s usually not an individual player like Meta or Google standing up in front of a bill to stop it. There are lots of innocuous-sounding tech industry groups at the federal and state level that are purposely appealing to either side of the aisle and are funded by these large corporations. Lawmakers are really tired of it; they know they’re being swindled, and at the state level especially they’re ready and fired up to do something. So while there’s a lot of push, there’s certainly a lot of pushback, and as long as groups like ours and TCAI are there to spell out the roadmap, we can empower lawmakers to say, ‘No, enough is enough.’
