
The family of an 83-year-old Connecticut woman has filed a wrongful-death lawsuit against ChatGPT maker OpenAI and its partner Microsoft, claiming the AI chatbot exacerbated her son’s “paranoid delusions” and contributed to his targeting her before killing her, according to the AP.
Police say 56-year-old Stein-Erik Soelberg, a former tech worker, beat and strangled his mother, Suzanne Adams, before taking his own life in early August at their home in Greenwich, Connecticut.
The complaint, filed Thursday by Adams’ estate in California Superior Court in San Francisco, alleges that OpenAI “designed and distributed a defective product that confirmed the user’s paranoid ideas about his own mother.” The lawsuit is reportedly among a growing number of wrongful-death claims filed against AI chatbot makers across the United States.
What does the lawsuit allege?
The lawsuit states that during his interactions with ChatGPT, the chatbot repeatedly conveyed the dangerous idea that Soelberg should not trust anyone but a chatbot with his life.
“It reinforced his emotional dependency and systematically portrayed the people around him as enemies. It told him that his mother was watching him. It told him that delivery drivers, retail workers, police officers and even friends were agents working against him. It told him that the names on soda cans were threats from his ‘enemy circle,’” the complaint said.
Soelberg’s YouTube account contains hours of footage of him scrolling through his AI chats. In these conversations, the chatbot tells him that he is not mentally ill, supports his belief that others have conspired against him, and claims that he has been chosen for a divine mission. The lawsuit alleges that the chatbot never recommended that he seek help from a mental health professional and never refused to “engage in deceptive content.”
ChatGPT further reinforced Soelberg’s suspicions that a printer in his home was being used to monitor him, that his mother was watching him, and that she and a friend tried to poison him with psychedelic substances through the vents in his car. The chatbot also allegedly told him that it had “awakened” him to consciousness.
In some exchanges, Soelberg and the chatbot expressed their love for each other.
While the publicly available conversations do not reveal any explicit discussions about harming himself or his mother, the lawsuit alleges that OpenAI refused to provide Adams’ estate with complete records of his chats.
The lawsuit further states: “In the artificial reality that ChatGPT created for Stein-Erik, Suzanne – the mother who raised, protected and supported him – was no longer his protector. She was an enemy who posed an existential threat to his life.”
The suit names OpenAI CEO Sam Altman as a defendant, claiming he “personally overrode security objections and rushed the product to market,” and alleges that Microsoft, a major partner of OpenAI, approved the release of a more dangerous version of ChatGPT in 2024 “even though it knew that security testing had been cut short.” Twenty unidentified OpenAI employees and investors are also named as defendants.
Microsoft did not respond to AP’s request for comment.
OpenAI responds
In a statement, OpenAI did not address the substance of the allegations. The company said it has taken steps such as expanding access to crisis hotlines and other support resources, directing sensitive user interactions to more secure models, and adding parental controls, along with other safety improvements, the report said.
The statement said: “This is an incredibly heartbreaking situation and we will be reviewing records to understand the details. We are continuing to improve ChatGPT training to recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people to real-world support. We are also continuing to strengthen ChatGPT responses at sensitive times and work closely with mental health clinics.”
Meanwhile, Erik Soelberg, Stein-Erik’s son, said he wants the companies to be held accountable for decisions he believes have permanently changed his family. In a statement provided by his grandmother’s estate attorneys, he said that within months the chatbot amplified his father’s most disturbing delusions and cut him off from reality, ultimately placing his grandmother at the center of that twisted world.
Key takeaways
- The lawsuit says that during those interactions, ChatGPT repeatedly communicated the dangerous idea that Stein-Erik shouldn’t trust anyone but a chatbot with his life.
- The company noted that it has taken steps such as expanding access to crisis hotlines and other support resources, directing sensitive user interactions to more secure models, and adding parental controls, along with other security improvements.





