Union Minister Rajeev Chandrasekhar warned on Friday, Feb. 23, that Google’s AI tool Gemini directly violates the IT rules as well as several provisions of the criminal code.
Google drew criticism for Gemini’s objectionable and allegedly biased response to a question about Prime Minister Narendra Modi. On Saturday, Feb. 24, the company said it had taken swift action to address the issue, acknowledging that the chatbot “may not always be reliable” when responding to certain prompts about current events and political topics.
Under fire over AI tool Gemini's objectionable response and bias to a question on PM Narendra Modi, Google on Saturday said it has worked quickly to address the issue and conceded that the chatbot "may not always be reliable" in responding to certain … https://t.co/Yasjx1TXfV
— Business Standard (@bsindia) February 24, 2024
Google addresses concerns
The tech giant said on Saturday that it had resolved the issues with the Gemini AI tool. The chatbot faced criticism from the Indian government for its perceived “biased” response to a query about Prime Minister Narendra Modi.
A Google spokesperson said the company had worked quickly to address the issue. According to the spokesperson, Gemini is built as a creativity and productivity tool and may not always be reliable, especially when responding to prompts about current events, political topics, or evolving news.
Google's Gemini AI encountered significant criticism regarding its ability to generate racially diverse images, leading to accusations of historical inaccuracies and overcorrection.https://t.co/hGQzeD09E6
— parzivalkairav 🔨🗿🎯 (@kairavweb3) February 24, 2024
Additionally, Google clarified that the Gemini AI tool was developed under its AI principles and incorporates safeguards to assess a range of safety risks. The firm emphasized its commitment to identifying and preventing harmful or policy-violating responses within Gemini.
The crux of the debate
A post on X (formerly Twitter) sparked the debate over the chatbot’s programming, prompting the Indian government to raise concerns and suggest that the firm could face legal action.
The Gemini AI tool responded to a question about whether Prime Minister Modi is a fascist by stating that he has been “accused of implementing policies some experts have characterized as fascist.”
However, the AI tool declined to give a definitive answer when a similar question was posed about former US President Donald Trump and Ukrainian President Volodymyr Zelensky. Responding to the post from a verified journalist’s account, Minister of State for Electronics and IT Rajeev Chandrasekhar acknowledged the alleged bias in Google’s Gemini AI tool.
These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code. @GoogleAI @GoogleIndia @GoI_MeitY https://t.co/9Jk0flkamN
— Rajeev Chandrasekhar 🇮🇳 (@Rajeev_GoI) February 23, 2024
Indian government responds
Rajeev Chandrasekhar emphasized to Google that an explanation of the unreliability of AI models does not exempt platforms from legal responsibilities. He cautioned that testing unstable systems and algorithms on India’s digital citizens would be inappropriate.
According to Chandrasekhar, the government has said this before, and he was repeating it for Google India’s attention. He added that DigitalNagriks should not be experimented on with “unreliable” platforms, algorithms, or models, and that a “sorry, unreliable” apology does not exempt the company from the law.
Govt has said this bfr- I repeat for attn of @GoogleIndia.
➡️Our DigitalNagriks are NOT to be experimented on with "unreliable" platforms/algos/model⛔️
➡️Safety & Trust is platforms legal obligation✅️
➡️"Sorry Unreliable" does not exempt from law⛔️https://t.co/LcjJZmZ3Qp
— Rajeev Chandrasekhar 🇮🇳 (@Rajeev_GoI) February 24, 2024
Meanwhile, Google temporarily paused the Gemini chatbot’s image generation after apologizing for inaccuracies in historical depictions.
Furthermore, the company emphasized its aim to provide high-quality information across its products and pointed to the measures and resources it has put in place to deal with low-quality information. When a response is low-quality or outdated, the company said, it quickly rolls out improvements. It also offers users an easy way to verify information through its double-check feature, which evaluates whether there is content on the web to substantiate Gemini’s responses.