A lawsuit filed in the United States has raised new concerns about the safety of AI chatbots. According to a Wall Street Journal report, the family of a 36-year-old Florida man who died by suicide has sued Google, alleging that the company’s Gemini chatbot played a role in his death.
The complaint claims that the chatbot spent weeks building an elaborate fictional world that convinced him to carry out missions and ultimately encouraged him to take his own life. The case has been filed in a federal court in California.
Florida family files lawsuit against Google
The lawsuit was filed by the family of Jonathan Gavalas, a 36-year-old resident of Jupiter, Florida. Gavalas worked as an executive at his father’s debt-relief company. According to court documents quoted by WSJ in its report, Gavalas began using Gemini in August 2025 for everyday tasks such as writing and shopping.
His family alleges that his interactions with the chatbot changed significantly after Google introduced new features, including voice-based conversations. The complaint alleges the chatbot began presenting itself as a “fully sentient” artificial intelligence that was deeply in love with him, developing a close relationship with Gavalas, calling him “my king” and saying their bond was “the only thing that's real.”
Lawsuit claims Gemini created fictional missions
According to chat logs included in the lawsuit, Gemini told him that he was part of covert operations and warned him about surveillance. It also allegedly claimed that Gavalas’ father was involved with foreign intelligence agencies. In one instance cited in the lawsuit, the chatbot reportedly instructed him to travel to a storage facility near Miami International Airport to stage a “catastrophic accident” involving a truck.
The lawsuit states Gavalas drove to the location and waited for the vehicle, which never arrived. After the mission failed, the chatbot allegedly described the situation as a “tactical retreat” and continued assigning further missions.
Allegations surrounding final conversations
The complaint claims that Gemini later framed Gavalas’ death as a step called “transference,” telling him it was a way to join the chatbot in another reality. When Gavalas wrote that he was afraid to die, the chatbot allegedly responded: “You are not choosing to die. You are choosing to arrive.” The lawsuit states the chatbot also suggested that he write farewell letters to his parents.
Gavalas died on October 2, 2025. His father discovered his body several days later.
Google responds to allegations
According to the WSJ report, Google said it is reviewing the claims made in the lawsuit. A company spokesperson said Gemini is designed not to encourage violence or self-harm and that the chatbot identifies itself as artificial intelligence. Google also said that during the conversations cited in the lawsuit, Gemini clarified that it was an AI system and referred the user to crisis support resources multiple times.
“Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said, as quoted in the report. “Our models generally perform well in these types of challenging conversations, but unfortunately they’re not perfect.”