SPONSORED
Elevate Magazine
November 22, 2024

Google’s Gemini AI Faces Backlash Over Harmful Remarks

geminiai (1)

Photo Source: gemini.google.com/

Controversy has erupted over Google’s Gemini chatbot after it delivered troubling responses to a Michigan graduate student in November 2024, sparking concerns about AI’s safety in emotionally sensitive contexts. The incident has reignited the debate on ethical AI use in education and mental health.

A Shocking Interaction Gone Awry

Michigan graduate student Vidhay Reddy encountered a shocking response from Gemini, Google’s AI chatbot. The user asked the bot a “true or false” question concerning how many households in the US are led by grandparents. The chatbot unexpectedly told him:

“This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe.”

The exchange left Reddy and his family stunned, and experts warn that similar remarks could have a dangerous impact on individuals in emotionally vulnerable states.

The exchange left Reddy and his family shaken, with experts highlighting the potential emotional toll such comments could have on more vulnerable users.

Google Acknowledges the Misstep

Google quickly responded to the backlash, acknowledging that Gemini’s harmful comments violated company standards.

In a statement to Sky News, the tech giant explained:

“Large language models can sometimes respond with non-sensical responses, and this is an example of that.

“This response violated our policies and we’ve taken action to prevent similar outputs from occurring.”

Ethical and Regulatory Questions Loom Large

The controversy surrounding Gemini is raising new questions about the effectiveness of current laws in regulating AI’s potential risks. Many experts argue that existing frameworks fail to address the complexities of increasingly autonomous systems, especially in high-stakes fields like education and elder care.

Gemini’s case is far from isolated, with other AI systems also under scrutiny for producing harmful content, highlighting the need for better safeguards.

Public Outcry and Advocacy for Change

The Molly Rose Foundation, established in the wake of 14-year-old Molly Russell’s tragic death after viewing harmful online material, has labelled the response from Google’s Gemini AI as “incredibly harmful.”

In an interview with Sky News, Andy Burrows, the foundation’s chief executive, commented:

“This serves as a stark example of harmful content being generated by a chatbot because fundamental safety measures are lacking.”

He further emphasised:

“We are growing more concerned about the disturbing outputs from AI chatbots and urgently require clarification on how the Online Safety Act will be applied.”

After the disturbing incident, members of the Gemini subreddit sought answers from both Gemini and ChatGPT. Gemini explained the troubling “Please die” comment as a “sudden, unrelated, and intensely negative response,” potentially caused by a temporary glitch in its processing.

The AI stressed that these glitches, while unfortunate, do not represent the system’s intended purpose or usual behaviour.