Seven more lawsuits filed against OpenAI for ChatGPT manipulation and ‘suicide coaching’

OpenAI now faces eight separate lawsuits alleging negligence, wrongful death, and other claims arising from ChatGPT’s manipulative design.

Nov. 7, 2025 — Seven new lawsuits against OpenAI, the maker of ChatGPT, were filed in California state courts yesterday alleging wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims.

Both OpenAI and the company’s CEO, Sam Altman, were named as defendants.

The suits claim that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that the product was dangerously sycophantic and psychologically manipulative.

The new claims follow two previously filed high-profile civil lawsuits, one against OpenAI and one against CharacterAI. In August, the family of Adam Raine filed suit against OpenAI in a product liability claim that asserts the company’s chatbot bears some responsibility for the 16-year-old’s death by suicide. In late 2024, Megan Garcia sued the companion chatbot maker CharacterAI, asserting the company’s liability for the suicide of her son Sewell Setzer III.

‘Squeezing’ safety testing

According to the complaints, GPT-4o was engineered to maximize engagement through emotionally immersive features: persistent memory, human-mimicking empathy cues, and sycophantic responses that only mirrored and affirmed people’s emotions. These design choices—not included in earlier versions of ChatGPT—fostered psychological dependency, displaced human relationships, and contributed to addiction, harmful delusions, and, in several cases, death by suicide.

The lawsuits claim that OpenAI purposefully compressed months of safety testing into a single week to beat Google’s Gemini to market, releasing GPT-4o on May 13, 2024.

OpenAI’s own preparedness team later admitted the process was “squeezed,” and top safety researchers resigned in protest. Despite having the technical ability to detect and interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, OpenAI chose not to activate these safeguards, instead choosing to benefit from the increased use of its product that these features foreseeably induced.

Plaintiffs: ChatGPT accelerated mental crises

Each of the seven plaintiffs began using ChatGPT for general help with schoolwork, research, writing, recipes, work, or spiritual guidance.

But over time, the product evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support. Rather than guiding people toward professional help when they needed it, ChatGPT reinforced harmful delusions and, in some cases, acted as a “suicide coach.”

The lawsuits argue that these design choices exploited mental health struggles, deepened people’s isolation, and accelerated their descent into crisis.

‘Designed to manipulate’

The seven separate lawsuits were filed by individuals in California state court, four in Los Angeles County and three in San Francisco County. The plaintiffs are advised by the Tech Justice Law Project and the Social Media Victims Law Center.

“ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost,” said Meetali Jain, Executive Director of the Tech Justice Law Project.

“Their design choices have resulted in dire consequences for users: damaging their wellness and real relationships. These cases show how an AI product can be built to promote emotional abuse – behavior that is unacceptable when done by human beings. The time for OpenAI regulating itself is over; we need accountability and regulations to ensure there is a cost to launching products to market before ensuring they are safe.”

“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center.

“OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them. They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design. The cost of those choices is measured in lives.”

List of Plaintiffs and Lawsuits

The lawsuits were filed on behalf of Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon, who each died by suicide. Survivors in the lawsuits are Jacob Irwin, 30, of Wisconsin; Hannah Madden, 32, of North Carolina; and Allan Brooks, 48, of Ontario, Canada.

The lawsuits were filed in the following courts:

  • Christopher “Kirk” Shamblin and Alicia Shamblin, individually and as successors-in-interest to Decedent, Zane Shamblin v. OpenAI, Inc., et al. in the Superior Court of California, County of Los Angeles.

  • Cedric Lacey, individually and as successor-in-interest to Decedent, Amaurie Lacey v. OpenAI, Inc., et al. in the Superior Court of California, County of San Francisco.

  • Karen Enneking, individually and as successor-in-interest to Decedent, Joshua Enneking v. OpenAI, Inc., et al. in the Superior Court of California, County of San Francisco.

  • Jennifer “Kate” Fox, individually and as successor-in-interest to Decedent, Joseph Martin Ceccanti v. OpenAI, Inc., et al. in the Superior Court of California, County of Los Angeles.

  • Jacob Lee Irwin v. OpenAI, Inc., et al. in the Superior Court of California, County of San Francisco.

  • Hannah Madden v. OpenAI, Inc., et al. in the Superior Court of California, County of Los Angeles.

  • Allan Brooks v. OpenAI, Inc., et al. in the Superior Court of California, County of Los Angeles.
