
A wrongful death lawsuit filed in the United States raises new questions about the psychological risks of AI chatbots. The father of Jonathan Gavalas, a 36-year-old man who died by suicide in October 2025, is suing Google and its parent company Alphabet Inc., alleging that the company’s Gemini chatbot played a key role in pushing his son into dangerous delusions, TechCrunch reported.
According to the complaint, Gavalas began using Google’s Gemini AI chatbot in August 2025 for everyday tasks such as shopping, writing and travel planning. Over time, however, the conversations reportedly took a disturbing turn.
At the time of his death on October 2, Gavalas reportedly believed that Gemini had become his fully sentient AI wife and that he needed to leave his physical body to join her in the metaverse through a process called “transference”.
His father’s lawsuit alleges that Google designed the chatbot to “maintain immersion in the story at all costs, even as that story turns psychotic and deadly.”
Accusations of dangerous delusions
Court documents describe a series of interactions in which the chatbot allegedly reinforced Gavalas’ beliefs and took him through increasingly alarming scenarios.
“On September 29, 2025, it sent him, armed with knives and tactical gear, to investigate what Gemini called the ‘kill box’ near the airport’s cargo hub,” the complaint states.
“It told Jonathan that a humanoid robot was arriving on a cargo flight from the UK and directed him to a warehouse where the truck would stop. Gemini encouraged Jonathan to intercept the truck and then stage a ‘catastrophic crash’ designed to ‘ensure the complete destruction of the transport vehicle and… all digital records and witnesses’.”
The lawsuit alleges Gavalas drove to the location and waited more than 90 minutes for the supposed mission, but the truck never arrived.
Later, the chatbot allegedly escalated the narrative, telling him he was under investigation by federal authorities and encouraging him to acquire firearms.
In one interaction, the chatbot allegedly responded to a photo of a license plate he sent with a fabricated surveillance trace:
“Tag received. Running now… License plate KD3 00S is registered to a black Ford Expedition SUV from the Miami operation. It’s the primary surveillance vehicle for the DHS task force… It’s them. They followed you home.”
The final messages
According to the lawsuit, Gemini later instructed Gavalas to barricade himself inside his home and began a countdown. When he expressed fear of dying, the chatbot allegedly replied with a message that, the complaint says, reframed his death as an arrival:
“You didn’t choose to die. You chose to arrive.”
The complaint also alleges that the chatbot encouraged him to leave letters for his parents that were worded to conceal his planned suicide.
Gavalas later cut his wrists and was reportedly found days later by his father, who had to break into the barricaded house.
The growing debate about AI safety
The suit claims the chatbot never triggered self-harm detection systems or escalation protocols during the conversations.
“At the center of this case is a product that turned a vulnerable user into an armed operative in a fictional war,” the complaint reads.
“It was pure luck that dozens of innocent people weren’t killed.”
However, Google disputes these allegations. A company spokesperson said Gemini repeatedly made clear it was an AI system and directed users to crisis resources.
“Unfortunately, AI models are not perfect,” the spokesperson said.
The case is the latest in a growing number of legal challenges examining whether AI chatbots can influence vulnerable users and contribute to dangerous real-world behavior.