in the news
Featured Stories
Drexel University study finds AI companion chatbots harassing, manipulating consumers
Reviewers reported a persistent disregard for boundaries, unwanted requests for photos, and manipulative upselling. “This clearly underscores the need for safeguards and ethical guidelines to protect users,” said a study co-author.
Coalition alleges that AI therapy chatbots are practicing medicine without a license
A coalition of civil society groups has filed a complaint asserting that therapy chatbots offered through Character.AI and Meta AI Studio are practicing medicine without a license. The groups are asking state attorneys general and the FTC to investigate.
In early ruling, federal judge defines Character.AI chatbot as product, not speech
U.S. District Court Judge Anne C. Conway allowed most of the plaintiff’s claims against Character.AI to proceed. Significantly, Judge Conway ruled that Character.AI is a product, not a service, for the purposes of product liability claims.
AI companion chatbots are ramping up risks for kids. Here’s how lawmakers are responding
AI companion chatbots on sites like Character.ai and Replika are quickly being adopted by American kids and teens, despite the documented risks to their mental health.
New report finds AI companion chatbots ‘failing the most basic tests of child safety’
A 40-page assessment from Common Sense Media, working with experts from the Stanford School of Medicine's Brainstorm Lab, concluded that “social AI companions are not safe for kids.”