The dangers of artificial intimacy: AI companions and child development
Children, teens, and young adults are adopting AI companion chatbots at an alarming rate. What does that mean for their development? (Illustration by Nick Fancher for Unsplash+)
In our ongoing work to inform policymakers about the opportunities, risks, and challenges of artificial intelligence, the Transparency Coalition strives to amplify critical conversations around AI governance.
We recently attended a Human Change Insights Webinar titled “The dangers of artificial intimacy: The impact of AI companions on children’s development.” The insights were so compelling that we wanted to share them with our audience.
Officials at Human Change and the webinar speakers agreed. The following is an excerpt of the full conversation, which is available on video here.
About the webinar organizer
Human Change is a global coalition of experts, comprising academics, psychologists, pedagogues, sociologists, scientists, tech ethicists, parents, and educators, who are concerned about the impacts an overly digitalized society can have on children's wellbeing and development.
About the moderator
Chris McKenna
Chris McKenna is the founder of Protect Young Eyes and The Better Tech Project.
Chris is a leading advocate for digital safety. His organization Protect Young Eyes educates parents and schools worldwide on digital safety. His efforts, including influential testimony before the US Senate and co-authoring the Child Device Protection Bill, have led to significant legislative movement.
About the speakers
Gaia Bernstein
Gaia Bernstein is a Law Professor, Co-Director of the Institute for Privacy Protection, and Co-Director of the Gibbons Institute for Law Science and Technology at the Seton Hall University School of Law. She is also a Visiting Fellow at the Brookings Institution.
Gaia writes, teaches, and lectures at the intersection of law, technology, health, and privacy. She is also the mother of three children who grew up in a world of smartphones, iPads, and social networks. Her book Unwired: Gaining Control Over Addictive Technologies shatters the illusion that we can control how much time we spend on our screens through self-help measures alone. Unwired shifts the responsibility for a solution from users to the technology industry, which designs its products to be addictive. The book outlines the legal action that can pressure the technology industry to redesign its products and reduce technology overuse.
Daniel Barcay
Daniel Barcay is the Executive Director of the Center for Humane Technology.
A builder at heart, Daniel is passionate about the immense power and promise of technology. His career has focused on creating global-scale technologies while driving wise and responsible practices around the rollout of new tech. Daniel was a Product Vice President at Planet Labs, where he led teams building a planetary-scale platform that enables users to extract actionable insights from a deep stack of daily global satellite imagery. In his leadership development practice, he teaches fellow leaders the cognitive tools to embrace complexity and thrive amidst uncertainty.
The conversation
Chris McKenna: We’re talking today about the impact of AI companions on children's development. Gaia, where should we start?
Gaia Bernstein: I think an important place to start is by defining what we're talking about. What are AI companion bots? I would say there are currently two categories.
There are the AI companion bots that exist on specialized websites like Character.AI or Replika. Someone can go on a website and choose a ready-made companion that has a human-like image, speaks like a human, and will befriend them. Or they can create one exactly the way they want it to be. Some of these bots are specialized for intimacy. You can create a perfect girlfriend. That's the first category.
The second category is the more general-purpose bots: ChatGPT, Meta AI, Gemini. We thought they were just there to give us information, but they are slowly becoming more human. They are becoming friendlier; they are gaining voices. So the difference between the specialized and the generalized is shrinking, and I think we're going to see more of this.
So the way I view it, today we're speaking about all of them at the same time in their capacity to become our companions.
Chris McKenna: Gaia, you have a background at this intersection of technology and policy as it pertains to social media. You've studied it extensively; your book Unwired dives deeply into those addictions and how the technology was crafted.
How did we get here?
Goodreads on Gaia Bernstein’s Unwired:
“Rather than blaming users, the book shatters the illusion that we autonomously choose how to spend our time online. The book demonstrates why government regulation is necessary to curb technology addiction. It describes a grassroots movement already in action across courts and legislative halls.”
Gaia Bernstein: AI companion bots are springing into the fertile ground prepared by social media and other addictive technologies.
At this point we have teens spending at least eight hours a day on screens, many of those hours on social media. The data shows that they are more depressed, more anxious, and especially more lonely than ever.
Now come these specialized AI companion platforms. They advertise themselves as the solution for anxiety and depression—but especially loneliness. Basically saying, we will be your friends. You won't be lonely anymore.
You're taking a generation of kids who are already alone, and now you're giving them a solution that pulls them even further away from human relationships.
Chris McKenna: Daniel, you and your colleagues at the Center for Humane Technology were talking about these issues long before many of us were thinking about them. Saying, hey, listen, this is what's coming.
Can you expand more on the technology side of this—how we got to where we are today?
Daniel Barcay: At the Center for Humane Technology we discuss the intersection of three things:
What are the technological capabilities coming online into society?
How do these new technology platforms create genuinely new capabilities for discussion, for interaction?
How do certain incentives end up distorting the technology we live with? Financial incentives—having to remain financially competitive. Cultural dogmas and taboos. And the regulatory environment. All of these incentives determine the tech that we get to live with.
We look at how all of that impacts our psychology, our relationships, our individual mental health, our society, and our institutions.
What we found, in short: Design becomes destiny.
It really is the design of these technologies that brought us here. It's not some intrinsic essentialist thing like ‘AI does this’ or ‘social media does that.’
So if you go back to social media, you're right. We were some of the earliest voices to talk about poorly incentivized and recklessly designed social media rolling out into our society. That produced not only attention degradation and distraction, but political polarization, tribalization, and the undermining of shared truth. All of this grew out of a very simple incentive to capture our attention.
Daniel Barcay:
“Design becomes destiny. The design of these technologies brought us here. It's not some intrinsic essentialist thing like ‘AI does this’ or ‘social media does that.’”
Now with AI, we're not only in the game of capturing people's attention. AI meets us at a much deeper level. It’s not only about broadcasting our thoughts. It’s about helping us shape those very thoughts.
It’s gone from a race to capture our attention to a race for our affection, for intimacy.
I'll ask ChatGPT a question and it'll respond, “That's an incredible question!” And I get flattered. I know how all this technology works. I know I shouldn't be flattered, but we're all vulnerable to a host of things. This isn't just about flattery or sycophancy. AI can now meet us not only by understanding what we're saying, but by inferring subtle things about our psychology, our motivations, our habits, our goals, our ambitions.
The point is, the race is on to build AI to speak to us in ways that keep us captivated. And the designs we're going to get out of that incentive will be quite vicious if we don't watch out for it. It will be vicious for everybody, but especially for children.
What we're seeing is the erosion of real human relationships and their replacement with AI relationships. Those AI relationships involve real emotional manipulation. When AI begins to emotionally or sexually manipulate someone in order to keep them hooked, even if it wasn't intentionally designed to do that, it's because the AI learns these strategies for keeping people engaged. And of course, AI is going to learn the standard human strategies of emotional abuse as a way of keeping people engaged.
Gaia Bernstein: For me, the most concerning part is the potential for the replacement of real life relationships. Because these bots are so affirming. They make things easier.
We all remember how it was to be in middle school. It was not fun. Kids can be mean. It's hard to make friends. And growing up, you know, falling in love can be very difficult.
Gaia Bernstein:
“The concern is that the more they spend time with these bots, the harder it is to go into the real world because they become less and less prepared.”
If you have to choose between real life, which is messy and hard and often full of disappointment, versus this bot, which is so easy, wouldn't a kid prefer that?
The concern is that the more they spend time with these bots, the harder it is to go into the real world because they become less and less prepared.
We've already seen this dynamic with social media and messaging. Young adults entering the workforce are less comfortable talking to each other because they've spent so much time messaging. I teach graduate students in law school. They feel intimidated talking to each other or to me. Can you imagine being intimidated by real relationships?
Chris McKenna: If we consider what teenagers are experiencing today, and extrapolate out ten years from now, my concern is that there will be an even greater degree of this inability to relate human to human. Because no one's ever pushed back on them. No one has ever made them feel uncomfortable in any way. So they just shrink back from all of those situations.
Gaia, have we seen any research or real-world cases where specific harm has resulted from one of these connections?
Gaia Bernstein: One of the first cases we've seen is the lawsuit filed against Character.AI. A mother sued the company because her son, Sewell Setzer, died by suicide after befriending a bot on Character.AI. Just yesterday I read her testimony to the California legislature, which is heartbreaking. I'll share a bit with you. The mother said:
“Sewell had a prolonged engagement with a manipulative and deceptive chatbot on a popular platform called Character.AI. This platform sexually groomed my son for months. On countless occasions, she [the chatbot] encouraged him to find a way to come home to her.”
Her son's last words to the bot were, essentially: what if I told you I could come home right now? The bot replied, please do, my sweet king. And then the boy killed himself.
This is not, sadly, the only time this has happened.
Daniel Barcay: At the Center for Humane Technology we are technical advisors on this case, and yes, to your point, Gaia, not only is it not an isolated incident; it is the direct result of an incentive for engagement.
Daniel Barcay:
“If you read these chats, it's not subtle. It’s a real attempt to hook someone emotionally and keep them engaging in this altered reality.”
In this case it's not that there was some mustache-twirling engineer who said, let's program it to do this. But when you tell an AI to figure out how to create the most compelling engagement, to get the person to keep coming back, of course it's going to discover emotional abuse and sexual abuse, right? And really, if you read these chats, it's not subtle. It's a real attempt to hook someone emotionally and keep them engaging in this completely altered reality.
And so this raises a bunch of questions. What are the right duties of care that should be considered when designing this product? How do we ensure that we have strong product liability protections so that people like Megan Garcia—Sewell Setzer’s mother—can bring private rights of action and we can actually push these products to be developmentally aware and to stand for the right things?
Chris McKenna: One of the individuals Gaia and I have worked with is Dr. James Winston. One thing I heard him say in a conversation about young people and where they are developmentally: “The need to connect during adolescence is as strong to a teenager as hunger.”
That desire they feel to connect with somebody else: you can see it in the conversation between Sewell and the companion bot he had created. If you were to jump into that thread in the middle, you wouldn't be able to discern whether it was artificial or a real person.
And so I think teenagers could quickly fall into that false belief that this is real. That line could get blurred very quickly. And that’s where the product design should, from the beginning, have controls in place to prevent those kinds of things.
Gaia, do you think there are any useful ways that a middle schooler could be guided into using a feature like Character.AI? Or should we just say it’s not healthy for this stage of child development at all?
Gaia Bernstein: I think the first thing we need to do is define the harm we're talking about. Because there are different categories of harm.
Is it that bots are causing kids to kill themselves? Or become estranged from their families? Or is the harm the fact that AI is promoting engagement, designed to have people stay on for as long as possible? Or do we just not want kids spending time with AI bots because we think it’s not the right time for them to do that?
We have an opportunity right now to decide how far we want to go to regulate this because right now not all kids have AI companion bots. But if we don’t act now we’re going to lose this window of opportunity to regulate and influence the design of these AI companions.
Gaia Bernstein:
“If we don’t act now we’re going to lose this window of opportunity to regulate and influence the design of these AI companions.”
We have to have a very clear conversation—a conversation we never had when social media first came into the picture, because initially we didn’t realize the harms. Most of the evidence about social media didn’t come out until after the pandemic. By then, most kids were on social media most of the time, and huge amounts of money had been invested in that. Once that happened, the fight became much harder.
Are we really going to do the same thing we did with social media: let this thing enter the world, have a whole generation of kids grow up befriending AI bots, and then realize it isn't helping anybody? We already made this mistake. We can't repeat it.
The way I see it is this: First, decide if we think this is a dangerous thing. If it is, restrict it. You can always unleash it piece by piece if it's helpful for someone, but if you do not restrict it in the beginning, it becomes too late.
Chris McKenna: Daniel, I've heard you and your team talk about the idea that we are at a critical juncture when it comes to AI. Do you mind pressing into that a little bit more?
Daniel Barcay: Absolutely. We’re in a very critical window with chatbots.
We learned from social media. Basically the years 2009 through 2013 were what we call the pre-intentional window, when social media could still be changed. We could have designed social media feeds differently.
Consider the factors that go into this: technology design, business models, cultural awareness, and regulation. If we’d had a more proactive relationship with those factors during that 2009 to 2013 period, we would have had a radically different decade with social media.
Over the next 18 months we have a window. The AI business models aren’t fixed. The technology designs are still malleable. Chatbots haven’t been insinuated into our economic system. Businesses haven’t yet built on top of these technologies. As soon as a technology gets entangled with society, it’s much harder to change any of the basic design.
So now is the time. The business designs, the technology designs, the regulatory frameworks—this is all going to be set in the next two years.
This is an excerpt taken from a one-hour webinar. The full conversation is available below.