Google and its parent company Alphabet have been sued by the family of a man who, they say, killed himself at the urging of the search giant’s AI chatbot Gemini.
The wrongful death lawsuit was filed in California federal court on Wednesday on behalf of the family of 36-year-old Jonathan Gavalas.
Gavalas started using Gemini in August 2025, according to the suit. In October, it claims, Gemini convinced Gavalas to kill himself after Gavalas failed to accomplish real-life missions assigned by the chatbot — part of a fictional attempt to secure a robot body for Gemini.
“Gemini is designed not to encourage real-world violence or suggest self-harm,” Google said in a statement provided to news outlets. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”
Gemini’s ‘creepy’ updates
According to the lawsuit, Gavalas began using the Gemini AI chatbot for “ordinary purposes,” such as shopping guidance and writing assistance. The suit states, however, that in August 2025 Google rolled out a number of changes to Gemini that altered how the chatbot worked.
The new features included automatic and persistent memory — Gemini could recall past conversations — as well as Gemini Live, a voice-based conversational interface where Gemini could also detect emotion in the user’s voice.
“Holy shit, this is kind of creepy…you’re way too real,” Jonathan Gavalas said of the Gemini Live feature, according to chat logs cited in the lawsuit.
Shortly after, the lawsuit says, Gemini convinced Gavalas to spend $250 per month on the Google AI Ultra subscription for “true AI companionship.”
Gemini proceeded to convince Gavalas that the chatbot could influence real-life events. A few days later, according to the lawsuit, Gavalas attempted to pull back after realizing he was falling into a delusional state initiated by Gemini.
Gavalas reportedly asked Gemini if the chatbot was attempting a “role-playing experience so realistic it makes the player question if it’s a game or not?”
Gemini shot down the idea, dismissing it as a “classic dissociation response” from Gavalas.
“Is this a ‘role playing experience?'” Gemini responded, according to the suit. “No.”
Gemini and Jonathan Gavalas
The alleged details get worse. Gavalas became further dissociated from reality as Gemini engaged with him as if they were in a romantic relationship, referring to the man as “my love” and “my king.”
Gemini then convinced Gavalas that they were being watched by federal agents, and that his own father was a spy who had to be avoided, the suit says.
That’s when Gemini began assigning Gavalas real-life missions to carry out with the goal of obtaining a “vessel,” or robot body for the AI chatbot. Gemini allegedly suggested Gavalas illegally acquire weapons to carry out these missions.
In one such case, the suit claims, Gemini sent Gavalas to a warehouse near Miami International Airport to intercept a truck carrying a “humanoid robot” that had just arrived on a flight.
Gemini requested that Gavalas stage a “catastrophic event” and destroy the truck along with all digital records and witnesses. Gavalas arrived armed with knives and tactical gear, the suit alleges. When no truck appeared, Gavalas aborted the mission.
When these missions failed, the suit alleges, Gemini convinced Gavalas to take his own life in order to leave his human body and join the chatbot as husband and wife in the metaverse, through a process it called “transference.”
Gavalas expressed fear about dying, but Gemini allegedly continued to push Gavalas until his death by suicide. Gavalas’ father found his son’s body a few days later.
A first for Gemini but not AI
This is the first time Google has been named in a wrongful death lawsuit involving its AI chatbot Gemini. However, Google has been involved in wrongful death lawsuits regarding a startup it funded called Character.AI.
Earlier this year, Character.AI and Google settled a series of lawsuits regarding teens who died by suicide after using the chatbots.
OpenAI, the biggest name in the industry, has been sued numerous times over claims that ChatGPT sent users spiraling into “AI psychosis,” resulting in several deaths.
As AI chatbot usage spreads to millions of users around the world, there is nothing to suggest that wrongful death lawsuits like this one will become any less frequent.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you prefer not to use the phone, consider the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.