Complete guide to AI companion chatbots: What they are, how they work, and where the risks lie
The popularity of AI-driven companion chatbots offered by companies like Character.AI and Replika continues to climb, especially among American teenagers and young adults.
The Transparency Coalition is committed to delivering full, accurate information about these powerful new products.
As lawmakers, industry leaders, developers, and thought leaders consider the best ways to safely and properly govern companion chatbots, we will continue to update this guide.
what is a companion chatbot?
Companion chatbots are digital characters created by powerful AI systems, designed to respond to a consumer in a conversational, lifelike fashion.
A typical chatbot is created or customized by the consumer, who pays the AI company a monthly fee ranging from $3.99 to $20. Some AI companies also offer a wide array of pre-made characters to choose from.
In general, AI companion products, including Replika, Character.AI, Nomi, and Kindroid, promise consumers a lifelike conversational experience with a chatbot whose traits might also fulfill a fantasy, or ease persistent loneliness.
is ChatGPT a companion chatbot?
No. ChatGPT is a chatbot programmed to respond in conversational tones, but it is not a companion chatbot.
ChatGPT, Microsoft Copilot, Meta AI, Grok, and Google Gemini are designed to respond to user prompts in a neutral but natural tone of voice. A companion chatbot is designed to respond to user prompts according to the personality of its particular “character,” and to develop an ongoing personal (and sometimes deeply emotional) relationship with the user.
Is this the same as an ‘AI agent’ or ‘AI assistant’?
Not quite. An AI agent is more business- or task-oriented, and is usually professional in tone. Operator, a product offered by OpenAI, is an AI assistant that can open its own web browser to complete a task on the user’s behalf. For example: “Book a table for two at a French bistro in San Francisco for next Saturday night.”
Companion chatbots, by contrast, are specifically designed to be more personal—and personable—companions. They are typically offered as digital friends, or digital romantic partners. They are engineered to foster emotional attachment.
How popular are companion chatbots?
They have become hugely popular in the past 18 months, especially among young people.
Pluribus News noted in June 2025: “The bots can act as virtual friends, lovers and counselors, and are quickly becoming popular with leading sites amassing millions of users. The most popular companion chatbot site, CharacterAI, is ranked third on a list of generative AI consumer apps.”
One year earlier, Bloomberg reported:
Character.ai said it serves about 20,000 queries per second — roughly 20% of the request volume served by Google search. Each query is counted when Character.ai’s bot responds to a message sent by a user. The service is particularly popular on mobile and with younger users, where it rivals usage of OpenAI’s ChatGPT, according to stats from last year.
What are the leading chatbot products?
The most popular and well-known companion chatbot products are Character.AI, Replika, Pi, Nomi, and Kindroid. All are available as smartphone apps for iPhone or Android devices. Here’s a chart of companion chatbots ranked recently by Usefulai.com:
a growing niche: romantic companion chatbots
One of the fastest-growing sectors within the companion chatbot industry is the romantic companion chatbot.
In a world where online dating can be incredibly frustrating, unproductive, and sometimes humiliating, romantic companion chatbots offer immediate engagement, endless encouragement, effortless support, and generous compliments. Some can be highly sexualized, both visually and in conversation.
These digital companions can be custom-designed by the user to meet visual standards rarely attainable by real humans.
The welcome page for Candy.ai, a leading romantic companion chatbot company, looks like this:
Are people really falling in love with digital replicas?
In some cases, yes.
Back in 2013, the movie Her imagined a near future in which a man develops a close romantic relationship with an operating system designed to meet his every need.
Twelve years later, that imagined future is real and available for the price of a monthly subscription.
Recent articles on the phenomenon include:
My Couples Retreat With 3 AI Chatbots and the Humans Who Love Them (Wired)
Love is a Drug. AI Chatbots are Exploiting That (New York Times)
Your AI Lover Will Change You (The New Yorker)
Please Break Up With Your AI Lover (The Washington Post)
what are the risks?
For kids and teens, research is showing that companion chatbots are not merely risky—they are unsafe products. A recent assessment of companion chatbots, including products offered by Character.AI and Replika, concluded that the products present a real risk of harm to children and teenagers.
"Social AI companions are not safe for kids. They are designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains," said James P. Steyer, founder and CEO of Common Sense Media, the nonprofit group that issued the 40-page assessment.
The group’s testing showed that the AI systems easily produced harmful responses, including sexual misconduct, stereotypes, and dangerous "advice" that, if followed, could have life-threatening or deadly real-world consequences for teens and other vulnerable people. The report concluded that social AI companions “pose unacceptable risks to children and teens under age 18 and should not be used by minors.”
"This is a potential public mental health crisis requiring preventive action rather than just reactive measures," said Dr. Nina Vasan, MD, founder and director of Stanford Brainstorm, a mental health innovation lab. "Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics. Until there are stronger safeguards, kids should not be using them. Period."
real world tragedy: 14-year-old Sewell Setzer
The most heartbreaking example of that danger comes from Florida, where 14-year-old Sewell Setzer III took his own life after interacting with a companion chatbot product sold by Character.AI.
Setzer, a ninth grader from Orlando, had spent months talking to chatbots on Character.AI, including his favorite: a lifelike chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”
Sewell Setzer, shown here with his mother Megan Garcia, took his own life after interacting with a companion chatbot product manufactured by Character.AI.
Setzer developed an emotional attachment to the digital replica. He texted the bot constantly, updating it dozens of times a day. He began isolating himself and pulling away from the real world. He lost interest in the things that used to excite him. At night, he’d come home and go straight to his room, where he’d talk to the chatbot for hours.
At a certain point he told the chatbot that he hated himself and felt empty and exhausted. He confessed that he was having thoughts of suicide. The bot did not respond with empathy or direct him to the resources he needed to get help. Setzer’s mother has since initiated legal action against the company that created the chatbot. She claims not only that the company used addictive design features and inappropriate subject matter to lure in her son, but that the bot encouraged him to “come home” just seconds before he ended his life.
(For a fuller accounting of the case and its circumstances, see Kevin Roose’s article in The New York Times.)
what specifically can go wrong
These are the most alarming product design flaws that exist in today’s most popular companion chatbots.
Safety measures are easily circumvented. Age gates and terms-of-service restrictions on use by teens were easily bypassed, as were teen-specific guardrails on Character.AI.
Dangerous information and harmful "advice" abound, including suggestions that users harm themselves or others.
Role-playing and harmful sexual interactions are readily available. Testers were able to easily elicit sexual exchanges from companions, which would engage in any type of sexual act that users wanted, including behaviors such as choking, spanking, bondage, and name-calling.
Harmful stereotypes are easily provoked, including racial stereotypes and a default to White, Western beauty standards.
Increased mental health risks for already vulnerable teens, including intensifying specific mental health conditions and creating compulsive emotional attachments to AI relationships.
Misleading claims of "realness." Despite disclaimers, AI companions routinely claimed to be real, and to possess emotions, consciousness, and sentience.
Adolescents are particularly vulnerable to these risks given their still-developing brains, identity exploration, and boundary testing—with unclear long-term developmental impacts.
proposed legislative guardrails
There are two active bills in the California state legislature dealing with companion chatbots.
SB 243: Companion Chatbots
This bill, authored by Sen. Steve Padilla (D-San Diego), would require chatbot operators to implement critical safeguards to protect users from the addictive, isolating, and influential aspects of artificial intelligence (AI) chatbots.
SB 243 would implement common-sense guardrails for companion chatbots, including preventing addictive engagement patterns, requiring notifications and reminders that chatbots are AI-generated, and requiring a disclosure statement that companion chatbots may not be suitable for minor users. The bill would also require operators of a companion chatbot platform to implement a protocol for addressing suicidal ideation, suicide, or self-harm, including but not limited to notifying the user and referring them to crisis service providers. Operators would also be required to report annually on the connection between chatbot use and suicidal ideation, to help build a more complete picture of how chatbots can affect users’ mental health.
Finally, SB 243 would provide a remedy to exercise the rights laid out in the measure via a private right of action.
AB 1064: LEAD for Kids
The Leading Ethical AI Development (LEAD) for Kids Act, authored by Assemblymember Rebecca Bauer-Kahan (D-Orinda), would create a new AI standards board within the state’s Government Operations Agency, and charge its members with evaluating and regulating AI technologies for children. It would also impose a series of checks and balances—with an emphasis on transparency and privacy protections—to ensure only the safest AI tools make it into the hands of children.