Google Addresses Bias Concerns in AI Tool Gemini
Union Minister Rajeev Chandrasekhar warned on Friday, Feb. 23, that Google’s AI tool Gemini is in direct violation of IT rules as well as several provisions of the criminal code.

Google had been criticized for the chatbot's objectionable and biased response to a question about Prime Minister Narendra Modi. On Saturday, Feb. 24, the company said it had taken swift action to address the issue, acknowledging that the chatbot "may not always be reliable" when responding to certain prompts related to current events and political topics.

Google addresses concerns

The tech giant said on Saturday that it had resolved the issues with the Gemini AI tool. The chatbot faced criticism from the Indian government for its perceived “biased” response to a query about Prime Minister Narendra Modi.

A Google spokesperson said the company had worked quickly to address the issue. According to the spokesperson, Gemini is built as a creativity and productivity tool and may not always be reliable, especially when responding to prompts about current events, political topics, or evolving news.

Additionally, Google clarified that the Gemini AI tool was created under its AI principles and includes several measures to evaluate various safety hazards. Within Gemini, the firm emphasized its commitment to identifying and preventing harmful or policy-violating responses.

The crux of the debate

A post on X (formerly Twitter) sparked the debate over the chatbot's programming, prompting the Indian government to raise concerns and suggest that the company could face legal action.

When asked whether Prime Minister Modi is a fascist, the Gemini AI tool responded that he has been "accused of implementing policies some experts have characterized as fascist."

However, the AI tool declined to give a definitive answer when similar questions were posed about former US President Donald Trump and Ukrainian President Volodymyr Zelensky. Responding to the post, which was made from a verified journalist's account, Minister of State for Electronics and IT Rajeev Chandrasekhar acknowledged the alleged bias in Google's Gemini AI tool.

Indian government responds

Rajeev Chandrasekhar emphasized to Google that an explanation of the unreliability of AI models does not exempt platforms from legal responsibilities. He cautioned that testing unstable systems and algorithms on India’s digital citizens would be inappropriate.

Chandrasekhar said the government had made this point before and was repeating it for Google India's attention: Digital Nagriks are not to be experimented on with "unreliable" platforms, algorithms, or models, and "sorry, unreliable" does not exempt a platform from the law.

Separately, Google temporarily paused image generation in the Gemini chatbot after apologizing for inaccuracies in some historical depictions.

Furthermore, the company emphasized its aim to provide high-quality information across its products and pointed to the measures and resources it has put in place to deal with low-quality information. It said that when a response is low-quality or outdated, it quickly implements improvements. The company added that it offers people easy ways to verify information through Gemini's double-check feature, which evaluates whether there is content on the web to substantiate the chatbot's responses.
