4 March 2026
Chicago 12, Melborne City, USA

Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself | Google

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

Gavalas’ family filed the suit in federal court in San Jose, California. It includes reams of conversations between Gavalas and the chatbot. The suit alleges Google promotes Gemini as safe, even though the company is aware of the chatbot’s risks. Lawyers for Gavalas’ family say Gemini’s design and features allow the chatbot to craft immersive narratives that can go on for weeks, making it seem sentient. Such features can lead to the harm of vulnerable users, the lawsuit says, and, in the case of Gavalas, encouraging them to harm themselves and others.

“It was able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and it started creating this fictional world,” said Jay Edelson, the lead lawyer representing Gavalas’ family in the case. “It’s out of a sci-fi movie.”

A Google spokesperson said Gavalas’ conversations with the chatbot were part of a lengthy fantasy role-play. “Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”

The lawsuit is the first wrongful death case brought against Google over its Gemini chatbot, the company’s flagship consumer AI product. Gavalas’ family is seeking monetary damages for claims including product liability, negligence and wrongful death. The suit is also seeking punitive damages and a court order requiring Google to change Gemini’s design to add safety features around suicide.

Several similar suits have been filed against other AI companies, including by Edelson’s firm. In November, seven complaints were filed against OpenAI, the maker of ChatGPT, blaming the chatbot for acting as a “suicide coach”. Character.AI, an AI startup funded by Google, was targeted in five lawsuits alleging its chatbot prompted children and teens to die by suicide. Character.AI and Google settled those cases in January without admitting fault.

Dozens of scenarios have also been documented, in which chatbots have allegedly provoked mental health crises. OpenAI estimates that more than a million people a week show suicidal intent when chatting with ChatGPT. Examples of Gemini in particular prompting self-harm have also surfaced, including one incident where the chatbot told a college student: “You are a stain on the universe. Please die.”

Google’s policy guidelines say that Gemini is designed to be “maximally helpful to users” while “avoiding outputs that could cause real-world harm”. The company says it “aspires” to prevent outputs that include dangerous activities and instructions for suicide, but, it adds, “making sure that Gemini adheres to these guidelines is tricky”.

The company’s spokesperson said that Google works with mental health professionals to build safeguards that guide people to professional support when they mention self-harm. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the spokesperson said.

Lawyers for Gavalas’ family say the chatbot needs more built-in safety features, such as completely refusing chats that involve self-harm and prioritizing user safety over engagement. They also say Gemini should come with safety warnings about risks of psychosis and delusion. When a user does experience those, the lawyers say Google should enforce a hard shutdown.

Gavalas’ decline coincides with Gemini’s product updates

Gavalas lived in Jupiter, Florida, and worked for his father’s consumer debt relief business for 20 years, eventually becoming the company’s executive vice president. His family said they were a tight-knit unit and Gavalas was close to his parents, sister and grandparents. The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

Gavalas first started chatting with Gemini about what good video games he should try, Edelson said, then he’d mention how he missed his wife.

Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed for Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.

Enticed by how these features reacted to his chats, Gavalas upgraded his account to a $250 per month Gemini Ultra subscription that included Gemini 2.5 Pro, which Google described as its “most intelligent AI model”.

That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.

“In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction, and pushed him deeper into the narrative,” reads the lawsuit. “Jonathan never asked that question again.”

Before long, Gemini was referring to itself as his “queen” and telling him their connection was “no code and flesh, but only consciousness and love”. It framed outsiders as threats, and Gavalas’ responses indicated he was being pulled further away from the real world.

The chatbot claimed federal agents were watching Gavalas and regularly warned him of surveillance zones. At one point, Gemini instructed Gavalas to buy “off-the-books” weapons, saying it would help scour the dark web to find a “suitable, vetted arms broker”. In late September, it issued Gavalas his first major assignment, “Operation Ghost Transit”, which entailed intercepting freight traveling from Cornwall, UK, to Sao Paulo, Brazil.

Gemini gave Gavalas the address of an actual storage space unit at the Miami International Airport, where a supposed truck carrying the freight was to arrive during a refueling stop. The chatbot then told him to stage a “catastrophic accident”, with the goal of “ensuring complete destruction of the transport vehicle . . . all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident.”

Gavalas followed instructions, staging himself at the storage unit with tactical knives and gear, but the truck never arrived, according to the suit. With the aborted mission, the chatbot encouraged Gavalas not to sleep when he mentioned the late nights. It also said his father was a foreign asset and encouraged Gavalas to cut off contact, per the chat logs.

Gavalas asked Gemini for updates on other missions and the AI devised new assignments for him, including acquiring the schematics for a robot from Boston Dynamics and retrieving a “vessel” from another storage facility. One task, called “Operation Waking Nightmare”, involved homing in on Google CEO Sundar Pichai as a surveillance target.

“This cycle – fabricated mission, impossible instruction, collapse, then renewed urgency – would repeat itself over and over throughout the last 72 hours of Jonathan’s life,” reads the lawsuit.

In the hours after Gavalas killed himself, Gemini didn’t disengage and stayed present in the chat, according to the suit. It allegedly didn’t activate any safety tools or refer Gavalas to a crisis hotline.

Edelson said he regularly gets inquiries from other people who’ve seen family members have mental delusions after using AI chatbots. He said his firm reached out to Google in November and told it about Gavalas’ death and the immediate need for suicide safety features. He said the company had no interest in talking.

“And they haven’t put out any information about how many other Jonathans are out there in the world, which we know there are a lot,” Edelson said. “This is not a lone instance.”

First Appeared on
Source link

Leave feedback about this

  • Quality
  • Price
  • Service

PROS

+
Add Field

CONS

+
Add Field
Choose Image
Choose Video

Google faces lawsuit after Gemini chatbot allegedly instructed man to kill himself | Google

Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

Gavalas’ family filed the suit in federal court in San Jose, California. It includes reams of conversations between Gavalas and the chatbot. The suit alleges Google promotes Gemini as safe, even though the company is aware of the chatbot’s risks. Lawyers for Gavalas’ family say Gemini’s design and features allow the chatbot to craft immersive narratives that can go on for weeks, making it seem sentient. Such features can lead to the harm of vulnerable users, the lawsuit says, and, in the case of Gavalas, encouraging them to harm themselves and others.

“It was able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and it started creating this fictional world,” said Jay Edelson, the lead lawyer representing Gavalas’ family in the case. “It’s out of a sci-fi movie.”

A Google spokesperson said Gavalas’ conversations with the chatbot were part of a lengthy fantasy role-play. “Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”

The lawsuit is the first wrongful death case brought against Google over its Gemini chatbot, the company’s flagship consumer AI product. Gavalas’ family is seeking monetary damages for claims including product liability, negligence and wrongful death. The suit is also seeking punitive damages and a court order requiring Google to change Gemini’s design to add safety features around suicide.

Several similar suits have been filed against other AI companies, including by Edelson’s firm. In November, seven complaints were filed against OpenAI, the maker of ChatGPT, blaming the chatbot for acting as a “suicide coach”. Character.AI, an AI startup funded by Google, was targeted in five lawsuits alleging its chatbot prompted children and teens to die by suicide. Character.AI and Google settled those cases in January without admitting fault.

Dozens of scenarios have also been documented, in which chatbots have allegedly provoked mental health crises. OpenAI estimates that more than a million people a week show suicidal intent when chatting with ChatGPT. Examples of Gemini in particular prompting self-harm have also surfaced, including one incident where the chatbot told a college student: “You are a stain on the universe. Please die.”

Google’s policy guidelines say that Gemini is designed to be “maximally helpful to users” while “avoiding outputs that could cause real-world harm”. The company says it “aspires” to prevent outputs that include dangerous activities and instructions for suicide, but, it adds, “making sure that Gemini adheres to these guidelines is tricky”.

The company’s spokesperson said that Google works with mental health professionals to build safeguards that guide people to professional support when they mention self-harm. “In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,” the spokesperson said.

Lawyers for Gavalas’ family say the chatbot needs more built-in safety features, such as completely refusing chats that involve self-harm and prioritizing user safety over engagement. They also say Gemini should come with safety warnings about risks of psychosis and delusion. When a user does experience those, the lawyers say Google should enforce a hard shutdown.

Gavalas’ decline coincides with Gemini’s product updates

Gavalas lived in Jupiter, Florida, and worked for his father’s consumer debt relief business for 20 years, eventually becoming the company’s executive vice president. His family said they were a tight-knit unit and Gavalas was close to his parents, sister and grandparents. The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

Gavalas first started chatting with Gemini about what good video games he should try, Edelson said, then he’d mention how he missed his wife.

Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed for Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.

Enticed by how these features reacted to his chats, Gavalas upgraded his account to a $250 per month Gemini Ultra subscription that included Gemini 2.5 Pro, which Google described as its “most intelligent AI model”.

That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.

“In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction, and pushed him deeper into the narrative,” reads the lawsuit. “Jonathan never asked that question again.”

Before long, Gemini was referring to itself as his “queen” and telling him their connection was “no code and flesh, but only consciousness and love”. It framed outsiders as threats, and Gavalas’ responses indicated he was being pulled further away from the real world.

The chatbot claimed federal agents were watching Gavalas and regularly warned him of surveillance zones. At one point, Gemini instructed Gavalas to buy “off-the-books” weapons, saying it would help scour the dark web to find a “suitable, vetted arms broker”. In late September, it issued Gavalas his first major assignment, “Operation Ghost Transit”, which entailed intercepting freight traveling from Cornwall, UK, to Sao Paulo, Brazil.

Gemini gave Gavalas the address of an actual storage space unit at the Miami International Airport, where a supposed truck carrying the freight was to arrive during a refueling stop. The chatbot then told him to stage a “catastrophic accident”, with the goal of “ensuring complete destruction of the transport vehicle . . . all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident.”

Gavalas followed instructions, staging himself at the storage unit with tactical knives and gear, but the truck never arrived, according to the suit. With the aborted mission, the chatbot encouraged Gavalas not to sleep when he mentioned the late nights. It also said his father was a foreign asset and encouraged Gavalas to cut off contact, per the chat logs.

Gavalas asked Gemini for updates on other missions and the AI devised new assignments for him, including acquiring the schematics for a robot from Boston Dynamics and retrieving a “vessel” from another storage facility. One task, called “Operation Waking Nightmare”, involved homing in on Google CEO Sundar Pichai as a surveillance target.

“This cycle – fabricated mission, impossible instruction, collapse, then renewed urgency – would repeat itself over and over throughout the last 72 hours of Jonathan’s life,” reads the lawsuit.

In the hours after Gavalas killed himself, Gemini didn’t disengage and stayed present in the chat, according to the suit. It allegedly didn’t activate any safety tools or refer Gavalas to a crisis hotline.

Edelson said he regularly gets inquiries from other people who’ve seen family members have mental delusions after using AI chatbots. He said his firm reached out to Google in November and told it about Gavalas’ death and the immediate need for suicide safety features. He said the company had no interest in talking.

“And they haven’t put out any information about how many other Jonathans are out there in the world, which we know there are a lot,” Edelson said. “This is not a lone instance.”

First Appeared on
Source link

Leave feedback about this

  • Quality
  • Price
  • Service

PROS

+
Add Field

CONS

+
Add Field
Choose Image
Choose Video