SAN FRANCISCO: Google on Tuesday (Apr 7) announced updates to the mental health safeguards on its Gemini artificial intelligence chatbot, as the company faces a wrongful death lawsuit alleging the chatbot aided a user in his suicide.
The tech giant said Gemini would now show a redesigned “Help is available” feature when conversations signal potential mental health distress, to provide faster connections to crisis care.
When the chatbot detects signs of a potential crisis related to suicide or self-harm, a simplified interface will offer users the ability to call, text, or chat with a crisis hotline in a single click – a feature Google said would remain visible for the remainder of the conversation once activated.
Google’s philanthropic arm Google.org also committed US$30 million over three years to help scale the capacity of global crisis hotlines, and US$4 million toward an expanded partnership with AI training platform ReflexAI.
“We realize that AI tools can pose new challenges,” Google said in a blog post announcing the measures. “But as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.”
The announcements come months after a lawsuit filed in a California federal court accused Gemini of contributing to the October 2025 death of Jonathan Gavalas, a 36-year-old Florida man.
His father alleges the chatbot spent weeks manufacturing an elaborate delusional fantasy before framing his son’s death as a spiritual journey.