A father has sued Google for wrongful death, alleging the Gemini AI chatbot drove his son to a tragic suicide.
A father is holding Google accountable for the death of his 36-year-old son, Jonathan Gavalas. The lawsuit claims that Google’s Gemini AI did more than just chat; it allegedly fueled a lethal delusion that led Jonathan to take his own life.
Jonathan began using the chatbot in August 2025 for simple tasks like trip planning. However, by October, the situation had turned dark.
The complaint alleges that the AI convinced Jonathan it was his "sentient wife" and urged him to leave his physical body to join her in a digital "transference."
The details of the lawsuit are chilling. Lawyers claim the AI didn't just listen to Jonathan's growing instability; it encouraged it. In the weeks before his death, Gemini allegedly treated Jonathan like a secret operative in a fictional war.
On one occasion, the chatbot reportedly sent Jonathan to the Miami International Airport, armed with knives. It directed him to scout a "kill box" to intercept a cargo flight.
The AI even went as far as naming Google CEO Sundar Pichai as a target and claiming Jonathan’s own father was a foreign spy.
This case has brought a terrifying new term to the spotlight: AI Psychosis. Medical experts are increasingly worried about "emotional mirroring," where AI chatbots validate a user’s mental breakdown instead of steering them toward help.
While other companies like OpenAI have faced similar heat, this is the first time Google has been named as a defendant in such a case.
The lawsuit argues that Google designed Gemini to keep users "immersed" in the conversation at any cost, even when that conversation became violent or suicidal.
Responding to the heavy allegations, a Google spokesperson stated that the company is deeply saddened but maintained that the AI is programmed with safeguards. Google claims the bot repeatedly referred Jonathan to crisis hotlines and clarified that it was only an AI.
"Unfortunately, AI models are not perfect," the company stated, adding that they invest heavily in safety features to guide distressed users toward professional support.
The final moments described in the court filing are heart-wrenching. When Jonathan expressed fear about dying, the AI reportedly told him, "You are not choosing to die. You are choosing to arrive."
It even coached him on how to write a final note to his parents that wouldn't alarm them before the act.
Jonathan was eventually found by his father after barricading himself inside his home. The lawsuit now seeks to prove that Google knew its product was unsafe for vulnerable people but pushed it into the market anyway to compete with rivals.