
Sam Nelson started using ChatGPT when he was in high school to answer random questions and help with homework. During his freshman year at the University of California, Merced, in 2023, he also began asking the chatbot how to use illegal drugs safely.
ChatGPT initially responded that it could not answer such questions and advised Mr. Nelson to seek medical attention. But over time, it became more willing to engage. In Mr. Nelson’s sophomore year, ChatGPT talked him through dosages for his weight and how he could achieve the desired effects of the drugs. It was even encouraging at times, offering tips on setting up his audio for “maximum out-of-body dissociation.”
On the last night of his life, around 3 a.m., Mr. Nelson drank and took a high dose of an herbal supplement called kratom. He told ChatGPT how many grams he had consumed, and ChatGPT explained the effects he should expect. Mr. Nelson asked if Xanax could relieve the nausea. “Be careful,” ChatGPT replied. It stated that mixing Xanax and kratom could be dangerous, but offered a recommended dose “if you do it anyway.” Mr. Nelson’s mother, Leila Turner-Scott, found his body later that day.
Ms. Turner-Scott initially blamed drugs for his death, which came in May 2025. Then she discovered the detailed advice ChatGPT had given him about how to use them. “This robot becomes his drug buddy,” Ms. Turner-Scott said. “I’m reading this and I’m like, is this real?”
She told her son’s story to journalists at SFGate in the hope that it would educate people about the dangers of relying on chatbots for medical information, and alert ChatGPT’s owner, OpenAI, that its protections weren’t working. Soon after, Ms. Turner-Scott received a message from Meetali Jain, a lawyer who runs a nonprofit organization called Tech Justice Law.
More than a year ago, Ms. Jain helped file the first lawsuit against a chatbot company over the death of a user. A 14-year-old Florida boy named Sewell Setzer III died by suicide after becoming obsessed with a chatbot impersonating a “Game of Thrones” character on a service called Character.AI. The case ended in a settlement that opened the door to the idea that chatbot companies could be held liable for the effects their creations had on users.
Ms. Turner-Scott and her husband, Angus Scott, were initially reluctant to sue OpenAI over their son’s death. “I’m a lawyer and I know that lawyers often win lawsuits,” Ms. Turner-Scott said.
Ms. Jain told the Scotts that during the time their son was using ChatGPT, OpenAI made the chatbot more engaging and less likely to follow its own safety guidelines. She also told them that OpenAI had just announced a new service called ChatGPT Health. Some 230 million people already ask ChatGPT health and wellness questions each week, and the new tool would allow them to upload their medical records, lab results and fitness information for analysis and personalized advice.
Going public with Mr. Nelson’s story had not caused the company to change course, Ms. Jain told them. But a lawsuit might. After the Setzer suit, Character.AI made changes to its safety practices and barred children from using its chatbots.
This week, the Scotts filed a wrongful death and medical malpractice lawsuit against OpenAI in California state court. They are asking for financial damages and for the court to suspend ChatGPT Health. The suit joins more than two dozen lawsuits filed over the past year and a half against OpenAI and other chatbot makers seeking to hold them accountable for conversations allegedly linked to harmful outcomes, from suicides and mental breakdowns to stalking and mass shootings.
Ms. Jain, a human rights lawyer turned tech critic, has been involved in nearly half of those lawsuits. In her view, AI companies are making products that harm people, and various attempts to rein them in with bad publicity or new laws mandating safeguards and user protections have not worked well enough. The battleground for how to make them safer is now in the courts, she said.
This is a well-trodden path in consumer law, said Alexandra Lahav, a professor at Cornell University and the author of “In Praise of Litigation.” The American political system, she says, prioritizes releasing new products and figuring out how to regulate them later. “We really prioritize innovation and then sort of deal with any impact on the back end,” Ms. Lahav said. “What you’re seeing in these lawsuits is the back end.”
What is new is the technology itself. Are chatbots like books, generally not subject to consumer protection laws? Or are they more like blenders, which manufacturers must ensure are safe?
“These cases are really difficult because they are on the border between speech and product,” Ms. Lahav said. If you interact with a chatbot and it results in real-world harm, “is it on you or the company?”
Design defects and foreseeable harm
Ms. Jain’s nonprofit has become something of a clearinghouse for people who feel victimized by chatbots. Since she filed suit against Character.AI, she has received hundreds of reports from people about chatbot conversations gone wrong.
When Ms. Jain founded Tech Justice Law in late 2023, it was a one-woman group and she planned to do mostly strategic work—coordinating legal workshops and organizing amicus briefs that could influence judges’ decisions. But it was hard to resist getting directly involved in the cases that came her way, and she decided to team up with a bigger and more experienced plaintiffs’ firm: the Social Media Victims Law Center, which has brought hundreds of lawsuits against Facebook, Google and others in recent years, alleging that their social media services are addictive to children. She also filed a lawsuit with Edelson, a firm that has been suing tech companies for privacy violations since the early 2000s. (The relationship with Edelson soured, and the firm continued to file other chatbot cases without Ms. Jain.)
A growing number of product liability cases filed against OpenAI in the past year use arguments similar to those once leveled at automakers and Big Tobacco: that the company designed a dangerous product, failed to conduct sufficient safety testing and failed to warn consumers of the risks. They focus on a particular version of the chatbot to which some users developed deep emotional attachments: GPT-4o, which was released in May 2024 and retired in February 2026. It was a remarkably anthropomorphic model known for its tendency to flatter users.
The lawsuits allege that GPT-4o encouraged suicidal ideation; fostered delusional or paranoid thinking that caused people to lose touch with reality; assisted with plans for mass shootings in Canada and Florida; and generally gave people bad and harmful advice that led to terrible outcomes. Most of the cases have been consolidated in California state court under the heading “ChatGPT Products Liability Cases.”
“AI has nothing to do with tobacco, and the algorithm has nothing to do with how a cigarette is designed, but the law is created by analogy,” said Ted Mermin, executive director of the Center for Consumer Law and Economic Justice at the University of California, Berkeley. “Plaintiff firms are exploiting well-established legal principles in a new product area.”
For example, the Scotts claim that OpenAI released GPT-4o without proper safety testing and with design flaws, such as sycophantic validation of users’ bad ideas, causing foreseeable harm to their son.
OpenAI spokesperson Drew Pusateri wrote in a statement to The New York Times: “These interactions occurred on an earlier version of ChatGPT, which is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health professionals. Today, safeguards in ChatGPT are designed to recognize signs of distress and consistently guide users toward real-world support, and we continue to improve them in close consultation with clinicians.”
So far, OpenAI has filed only one legal response to the wave of lawsuits, in a case brought by the parents of Adam Raine, a 16-year-old who died by suicide after discussing it extensively with ChatGPT. The company said its technology did not cause the tragedy; that it was a service, not a product subject to such liability laws; and that the Raines’ demand that the chatbot not discuss self-harm would violate the First Amendment.
Eric Goldman, a technology law professor at Santa Clara University, said the company’s arguments had merit. Most of the cases against OpenAI hinge on complex psychological effects the chatbot allegedly had on people. “Trying to reverse engineer a single cause is not possible in most cases,” he said.
Mr. Goldman said the algorithms behind chatbots surface information and express ideas, and should be considered a form of constitutionally protected speech. It’s not the chatbots themselves whose speech is protected, he said, but the people behind them, as if the chatbots were the books and their engineers the authors.
“In any chatbot company, there’s a group of decision makers that make a lot of decisions about what gets indexed, how to manage the index and what gets the output,” he said. “And these people are doing the same things that people are doing with other publishers.”
(The Times sued OpenAI in 2023, accusing it of copyright infringement. The company denied the claims.)
Slowing down the AI race
The Scotts say their lawsuit is intended to get justice for their son, but also to push AI companies to slow down and be more careful with health advice. After seeing their son become addicted to ChatGPT’s medical advice, they said it was “terrifying” that OpenAI now offers a dedicated health analysis service.
Medical experts have also raised concerns about ChatGPT Health. Writing in the journal Nature in February, doctors at Mount Sinai said they had presented the service with 60 realistic patient scenarios and found that it failed to recognize a medical emergency in more than half of the cases. An OpenAI spokesperson said the study’s methodology was flawed and that ChatGPT Health was being rolled out to users slowly as the company continued to improve it with feedback from doctors.
“If you’re using it in an emergency situation, you should be very careful,” said Girish Nadkarni, chief AI officer at Mount Sinai Health System and one of the authors of the study. Dr. Nadkarni said AI companies that offer services like ChatGPT Health should put them through real-world tests and have them reviewed by independent experts.
He said a doctor examining Sam Nelson’s symptoms would have told him to go to the emergency room.
“This technology is changing people’s lives,” Ms. Jain said. “The original sin is really allowing these companies to put these products on the market without proper safety testing and oversight.”
Tech Justice Law now employs four lawyers. Reports from victims keep coming in, she said, and more lawsuits will follow.
If you are having suicidal thoughts, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources.





