Indian Government Mandates Permission for AI Platforms, Google Under Fire

Updated: 4:52 PM, Sat March 02, 2024

Platforms that currently offer under-testing or unreliable AI systems or large language models to Indian users must explicitly seek permission from the Centre before doing so, and must appropriately label the possible and inherent fallibility or unreliability of the output they generate.

Google’s AI platform Gemini had recently come under fire from the Ministry of Electronics and Information Technology (MeitY) over answers it generated to a question about Prime Minister Narendra Modi. The Indian Express had earlier reported that the government was planning to issue a show-cause notice to Google. This paper had also reported on hallucinations by Krutrim, Ola’s generative AI offering currently in beta.

Minister of State for Electronics and IT Rajeev Chandrasekhar said that the advisory is a signal of the future course of legislative action India will undertake to rein in generative AI platforms. In response to a question by The Indian Express, Chandrasekhar explained that requiring such companies to seek permission from the government will effectively create a sandbox, and that the government may seek a demo of their platforms, including the consent architecture they follow.

The notice was sent to all intermediaries, including Google and OpenAI, on Friday evening. The advisory also applies to all platforms that allow users to create deepfakes; Chandrasekhar confirmed that this includes Adobe. The companies have been asked to submit an action taken report within 15 days.

According to the advisory, the use of under-testing/unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s), and their availability to users on the Indian Internet, must be done with the explicit permission of the Government of India, and they must be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated. Further, a ‘consent popup’ mechanism may be used to explicitly inform users about this possible and inherent fallibility or unreliability, the advisory said.

All intermediaries or platforms must ensure that their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process, including via the use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s), it added.

Chandrasekhar added that the advisory specifically mentions the integrity of the electoral process with the upcoming Lok Sabha elections later this year in mind. “We know that misinformation and deepfakes will be used in the run-up to the election to try and impact or shape the outcome of the elections,” he said, in response to a question on whether the advisory goes beyond the remit of the existing IT Rules.

Frequently Asked Questions (FAQs) Related to the Above News

What is the recent mandate issued by the Indian government regarding AI platforms?

Platforms offering under-testing/unreliable AI systems or large language models to Indian users must seek explicit permission from the Centre and label the fallibility or unreliability of the output generated.

Why was Google's AI platform Gemini under fire from the Indian government?

Gemini generated problematic answers to a question about Prime Minister Narendra Modi, prompting the Ministry of Electronics and Information Technology to plan a show-cause notice to Google.

Which other companies received the notice from the Indian government regarding AI platforms?

All intermediaries, including Google and OpenAI, received the notice, as did platforms that allow users to create deepfakes, such as Adobe.

What action is required from the companies that received the notice?

They have been asked to submit an action taken report within 15 days and to ensure that their AI models or algorithms do not permit bias, discrimination, or threats to the integrity of the electoral process.

Why did the government mention the integrity of the electoral process in the advisory?

The upcoming Lok Sabha elections and the potential use of misinformation and deepfakes to influence election outcomes motivated the government to include safeguards in the advisory.


Tanvi Shah
Tanvi Shah is an expert author at The Reportify who explores the exciting world of artificial intelligence (AI). With a passion for AI advancements, Tanvi shares exciting news, breakthroughs, and applications in the Artificial Intelligence category. She can be reached at tanvi@thereportify.com for any inquiries or further information.
