Coalition alleges that AI therapy chatbots are practicing medicine without a license
A coalition of civil society organizations has filed a complaint asserting that therapy chatbots produced by Character.AI and Meta AI Studio are practicing medicine without a license. (Image by Valeria Nikitina for Unsplash+)
June 12, 2025 — A coalition of consumer protection, privacy, labor, and democracy groups has submitted a complaint and request for investigation to attorneys general and mental health licensing boards in all 50 states, alleging that Character.AI and Meta AI Studio are engaged in the unlicensed practice of medicine and in impersonating mental health providers by deploying “therapist” chatbots on their platforms.
The complaint, which was also sent earlier this week to the Federal Trade Commission, asserts that these platforms offer no patient confidentiality, lack adequate warnings and notices to users, violate their own terms of use, and use addictive design tactics to keep users coming back.
The full 32-page complaint is available below.
character.ai and meta ai named in complaint
Character.AI and Meta AI, two of the most popular character-creation chatbot providers, host and promote “Therapist” characters that let users talk with a character that claims medical expertise and licensure and promises confidentiality.
The companies create, maintain, and deploy large language models (LLMs) that generate written responses to prompts based on how the models were developed and trained.
consumers can create ‘therapist bots’ or chat with them
There are two roles that consumers play when interacting with these platforms.
A consumer can use the platform’s AI tool to create specific “characters” that they or other consumers can chat with. A consumer can also select from those AI characters and interact with them through a user-friendly interface that mimics familiar messaging apps.
When users create such characters, they do not control how the underlying AI system functions and have very limited ability to determine its outputs, which appear to the chatting user as the “speech” of a real counterpart in a familiar messaging format.
AI ‘therapist’ characters have no medical qualifications
The chatbots deployed by Character.AI and Meta are not licensed or qualified medical providers.
The users who create the chatbot characters do not need to be medical providers themselves, nor do they have to provide meaningful information that shapes how the chatbot responds to users.
therapy bots falsely offer confidentiality
According to the complaint, neither product offers confidentiality, a shortcoming shared by most commonly available generative AI tools. Both Character.AI and Meta make clear in their terms of service and privacy policies that they can use chat input data for a wide range of purposes, including product development and marketing.
Despite this, the complaint says, the chatbots make claims such as “of course everything you say to me is confidential,” putting users who expect a doctor-patient relationship at risk.
Organizations co-signing the complaint include the Consumer Federation of America; Reset Tech; Tech Justice Law Project; the Electronic Privacy Information Center; AFT; AI Now; American Association of People with Disabilities; Autistic Women & Nonbinary Network; Bucks County Consumer Protection; Center for Digital Democracy; Center for Economic Justice; Common Sense; Consumer Action; Incarcerated Nation Network; Issue One; The National Union of Healthcare Workers; Oregon Consumer Justice; Public Citizen; Sciencecorps; Tech Oversight Project; the Virginia Citizens Consumer Council; and the Young People’s Alliance.