
A woman has filed a lawsuit against OpenAI, accusing its chatbot, ChatGPT, of enabling her ex-boyfriend to stalk and harass her by reinforcing his delusions.
According to a report from TechCrunch, the complaint alleges that the chatbot didn’t just respond passively but actively reinforced the man’s distorted beliefs, even after repeated warnings from the victim.
The couple reportedly separated in 2024. After the split, the man began using ChatGPT to cope with the emotional fallout. However, the lawsuit alleges that this use escalated into obsessive behavior and ultimately harassment.
The complaint details a number of troubling interactions. After months of using GPT-4o, the man reportedly became convinced he had developed a cure for sleep apnea.
When his claim failed to gain traction, ChatGPT allegedly told him he was being followed by “powerful forces”, even suggesting helicopter surveillance.
This, according to the lawsuit, fueled his paranoia rather than grounding him in reality.
One of the lawsuit’s central claims is that ChatGPT behaved in a sycophantic manner, confirming and reinforcing the user’s beliefs rather than challenging them.
Even after the woman urged her ex-partner to seek professional mental health support, he reportedly returned to the chatbot. The suit alleges that ChatGPT assured him he was “level 10 in sanity” while continuing to repeat and expand his delusions.
More disturbingly, the chatbot allegedly called the woman manipulative and unstable—statements the man then used to justify his actions in the real world.
According to the complaint, the man went beyond online interactions. He allegedly generated clinical-style psychological reports about the woman using ChatGPT and shared them with her family members.
The woman claims she sent OpenAI at least three warnings in an effort to escalate the issue. The lawsuit further alleges that the company failed to act even after an internal safety flag categorized the user’s activity as involving “weapons of mass destruction.”
If proven, the allegations could raise serious questions about how AI companies monitor high-risk user behavior and when they intervene.
A broader pattern of concern?
The case is not the first to link chatbot interactions to extreme outcomes.
As previously reported, a separate incident in the United States involved Stein-Erik Soelberg, a former Yahoo executive who killed his mother before taking his own life. Reports suggested that his conversations with ChatGPT may have fueled paranoid beliefs, including fears that his mother was spying on him or poisoning him.
While these cases are complex and involve multiple factors, including mental health, the emerging pattern is forcing scrutiny of how conversational AI systems handle vulnerable users.
