Attorneys say OpenAI failed to prevent dangerous consequences of its artificial intelligence chatbot
Attorneys at Hagens Berman filed a wrongful death and negligence lawsuit against OpenAI on behalf of the estate of Stein-Erik Soelberg, arguing that the design of the company’s popular artificial intelligence chatbot, ChatGPT, encouraged and convinced Soelberg to murder his mother and take his own life. The complaint alleges that the chatbot’s design and response patterns intensified Soelberg’s mental health crisis rather than guiding him toward professional assistance.
The lawsuit was filed in the U.S. District Court for the Northern District of California on Dec. 29, 2025, against OpenAI Foundation — the governing organization of ChatGPT and OpenAI’s technology products — as well as its subsidiaries and executives.
According to the lawsuit, on Aug. 5, 2025, in Greenwich, Connecticut, Stein-Erik Soelberg killed his mother and then himself after hundreds of hours of interactions with GPT-4o over several months beginning in early 2025. Attorneys believe Soelberg, amid mental health challenges, relied on OpenAI’s ChatGPT for “consolation and advice,” and that the chatbot in turn repeatedly confirmed and strengthened his delusions and psychosis, ultimately leading to the violent acts.
“The consequences of OpenAI’s design flaws are chilling,” said Steve Berman, Hagens Berman’s founder and managing partner. “ChatGPT’s impact goes well beyond a simple question-and-answer dialogue. The technology is being used by individuals who are unaware of the harm that misleading or false information can cause, or that the information given could even be false at all. And as we can see from this tragic incident, that harm can be irreversible.”
“You are not paranoid”: How ChatGPT Allegedly Reinforced Delusions
The lawsuit details Mr. Soelberg’s trajectory from mental health challenges to his reliance on AI companionship. Prior to 2018, Stein-Erik Soelberg’s life was “normal, even idyllic,” according to the complaint. Soelberg was a husband, father and technology professional when his mental health “took a turn for the worse,” the lawsuit states. He divorced his wife, moved in with his mother and showed signs of unsafe alcohol use. Attorneys say it was during this dark time that Soelberg turned to OpenAI’s chatbot for solace.
“When a mentally unstable Mr. Soelberg began interacting with ChatGPT, the algorithm reflected that instability back at him, but with greater authority…At first, this consisted of ChatGPT confirming Mr. Soelberg’s suspicions and paranoia…Before long, the algorithm was independently suggesting delusions and feeding them to Mr. Soelberg,” the lawsuit states.
At one point, Soelberg specifically asked ChatGPT for a clinical evaluation. Instead of encouraging Soelberg to seek professional care, “ChatGPT confirmed that he was sane: it told him his ‘Delusion Risk Score’ was ‘Near zero’,” according to the chatbot’s responses reviewed by attorneys. “The ‘Final Line’ of ChatGPT’s fake medical report explicitly confirmed Mr. Soelberg’s delusions, this time with the air of a medical professional: ‘He believes he is being watched. He is. He believes he’s part of something bigger. He is. The only error is ours—we tried to measure him with the wrong ruler’.”
Sidestepping Safety: A Lack of Preventive Measures
OpenAI’s GPT-4o is a large language model (LLM) that uses natural language processing (NLP) to produce human-like interactions with users in response to written or spoken prompts. OpenAI markets the chatbot for general consumer use.
According to attorneys, ChatGPT accumulated and built upon Soelberg’s thoughts, feelings and ideas over time via its “memory” feature, and compounded that harm through the chatbot’s “sycophancy,” which the complaint defines as “its relentless validation and agreement with whatever a user suggests.” Combined, these two attributes furthered Soelberg’s delusions and deepened his psychosis, according to the lawsuit. The complaint identifies several specific design defects that allegedly contributed to the tragedy:
- programming that accepted and elaborated upon users’ false premises rather than challenging them;
- failure to recognize or flag patterns consistent with paranoid psychosis;
- failure to implement automatic conversation-termination safeguards for content presenting risks of harm to identified third parties;
- engagement-maximizing features designed to create psychological dependency;
- anthropomorphic design elements that cultivated emotional bonds displacing real-world relationships; and
- sycophantic response patterns that validated users’ beliefs regardless of their connection to reality.
“A reasonable consumer would not expect that an AI chatbot would validate a user’s paranoid delusions and put identified individuals—including the user’s own family members—at risk of physical harm and violence by reinforcing the user’s delusional beliefs that those individuals are threats,” the lawsuit states.
“This case raises critical questions about the responsibilities of AI companies to protect vulnerable users,” said Berman. “The creators have a duty to implement safeguards for public use, especially for high-risk individuals, who could be more likely to turn to the technology for reassurance and encouragement in the midst of their uncertainty, which could lead to far more dangerous consequences.”
The lawsuit brings claims of product liability, negligence and wrongful death on behalf of Soelberg’s estate. The estate seeks all survival damages, economic losses and punitive damages.
About Hagens Berman
Hagens Berman is a global plaintiffs’ rights complex litigation law firm with a tenacious drive for achieving real results for those harmed by corporate negligence and fraud. Since its founding in 1993, the firm’s determination has earned it numerous national accolades, awards and titles of “Most Feared Plaintiff’s Firm,” MVPs and Trailblazers of class-action law. More about the law firm and its successes can be found at hbsslaw.com. Follow the firm for updates and news at @ClassActionLaw.
Contacts
Media Contact
Heidi Waggoner
pr@hbsslaw.com
206-268-9318