After facing intense backlash on its first advisory on election integrity for artificial intelligence (AI) platforms, the IT Ministry has amended it and prepared a fresh advisory where it has scrapped a controversial provision to require government permission before rolling out “untested/unreliable” AI systems in India, The Indian Express has learnt.
“Under-tested/unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labelling the possible inherent fallibility or unreliability of the output generated,” the new advisory, dated March 15, is learnt to say.
In its initial advisory issued to online intermediaries like Meta and Google earlier this month, the government had said that companies will have to seek its “explicit permission” before launching untested AI systems in India.
While the government had earlier clarified that the advisory would apply only to “large” platforms and not to AI start-ups, the requirement to seek its nod has now been dropped altogether.
The first advisory was criticised by some startups in the generative AI space, including investors in the ecosystem abroad, over fears of regulatory overreach by the Indian government into a still-nascent industry. Aravind Srinivas, founder of Perplexity AI, called the advisory a “bad move by India”, while Martin Casado, general partner at the US-based investment firm Andreessen Horowitz, termed the move a “travesty” that was “anti-innovation” and “anti-public”.
The advisory was issued with the upcoming Lok Sabha elections in mind, as the government had asked companies to ensure that their AI services do not generate responses that are biased, that are illegal under Indian laws, or that “threaten the integrity of the electoral process”.
However, even though the government has now walked back its position, the advisory did have an effect. Days after it was issued, Google said that it would restrict the types of election-related questions users can ask its AI chatbot Gemini in India.
“Out of an abundance of caution…we have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses. We take our responsibility for providing high-quality information for these types of queries seriously…,” the company said in a blog post recently.
At the heart of the disagreement is a tussle between lawmakers and tech companies over the future of safe harbour protections for generative AI platforms like Gemini and ChatGPT. It is also as much about the government’s view of the outputs some of these platforms generate, and whether it disagrees with them, even when those outputs may not be entirely unlawful.
However, even with the new advisory, one concern about government overreach remains: like the earlier advisory, the new one has been sent as a set of “due diligence” measures that online intermediaries need to follow under the Information Technology Rules, 2021.
Though the advisories are not legally binding, questions have been raised about their legal basis: it is unclear under which law the government can issue guidelines to generative AI companies, since India’s current technology laws do not directly cover large language models.