The story so far: On November 5, the Ministry of Electronics and Information Technology (MeitY) unveiled the India AI Governance Guidelines, a 66-page document outlining an approach to regulating and promoting the use of Artificial Intelligence (AI) technologies in Indian society. The guidelines’ launch marks one of the many steps the government is taking in the months leading up to the AI Impact Summit 2026, to be hosted by India in New Delhi.
Also Read | India’s new AI governance guidelines push hands-off approach
What do the guidelines seek to accomplish?
The guidelines flow from the government’s need for a consistent way to regulate the AI industry and the use of its tools, especially in light of their growing usage in India, the world’s second largest user of Large Language Models (LLMs) like ChatGPT after the U.S. “India’s goal is to harness the transformative potential of AI for inclusive development and global competitiveness, while addressing the risks it may pose to individuals and society,” the guidelines say. In previous multilateral AI summits at Bletchley Park (U.K.), Seoul and Paris, governments have generally agreed on rough starting points for managing the spread of LLMs and AI in their countries: watching out for and classifying the risks that can emerge, creating policies for who will be responsible when something goes wrong, and conducting safety research, among other things.
The guidelines outline a strategy for India to approach this process. An earlier draft framework was prepared by a subcommittee under a Principal Scientific Adviser-led advisory group. These guidelines, however, have been finalised by a committee set up by MeitY in July, separate from that subcommittee. The committee is led by Balaraman Ravindran, who heads the Centre for Responsible AI (CeRAI) at IIT Madras.
Also Read | IT Ministry proposes mandatory labelling of AI-generated content on social media
What do the rules recommend?
On the back of principles like people-centricity, accountability, fairness, and understandability (of AI models), the guidelines recommend setting up lines of communication between different parts of the government, like Ministries, sectoral regulators, and standards-setting agencies. It is recommended that these groups meet often, suggest changes to the law, make voluntary commitments, put out standards, and “[i]ncrease access to AI safety tools.” The overarching inter-ministerial body would be the proposed “AI Governance Group”. Beyond the Ministries, the framework names the RBI for the financial industry (the RBI put out its own FREE-AI Committee report for the banking and finance industry in August), bodies like NITI Aayog, and standards organisations like the Bureau of Indian Standards.
The guidelines also include some advice to the private sector, namely to “ensure compliance with all Indian laws; adopt voluntary frameworks; publish transparency reports; provide grievance redressal mechanisms; [and] mitigate risks with techno-legal solutions.” Many of the safety-related recommendations rely on the AI Safety Institute (AISI), a framework that is in place in many countries, including India. While there is no physical institute in India, the government has designated a group of academic institutions brought together under the IndiaAI Mission as a virtual AISI.
A key differentiator from similar AI policies elsewhere is the emphasis the guidelines put on building infrastructure and making it accessible. The policy recommends that State governments “increase AI adoption through initiatives on infrastructure development and increasing access to data and computing resources.” At the same time, the recommendations echo other countries’ concerns around AI and intellectual property, and call for amendments to copyright law to address the issues emerging in that area. The guidelines also reiterate other India-specific priorities that the government has expressed, such as building AI models for Indian languages: one recommendation pushes for the “use of locally relevant datasets to support the creation of culturally representative models and applications”.
Also Read | Parliamentary panel suggests licensing requirements for AI content creators
Are the guidelines consistent with what the government is planning around AI?
The Union government has largely followed a hands-off approach to pre-emptive AI regulation, as is the case in most countries around the world, with one sharp exception: the issue of deepfakes. “Content authentication,” as the guidelines put it, is a pressing issue. In the weeks leading up to the guidelines, MeitY proposed rules that would require social media companies to label synthetic (AI-generated) images and videos.
There are other parts of the guidelines that are in line with what MeitY has already been doing: for instance, the IndiaAI Mission under the Ministry is already procuring Graphics Processing Units (GPUs) for a common compute facility and sharing access to that compute capacity with researchers and startups.
Another recommendation, to “[s]upport the integration of Digital Public Infrastructure (DPI) with AI with policy enablers,” also seems in motion: the Unique Identification Authority of India (UIDAI), which manages Aadhaar, easily India’s most recognisable example of DPI, has formed a committee this month to deliberate how to use AI to add value to the ID number.
While the guidelines are the result of deliberations by the government’s main AI policymakers (such as Additional Secretary Abhishek Singh), IT Secretary S. Krishnan said at the launch that if evolving circumstances demanded quick action outside the framework envisioned by this document, the government “won’t hesitate” to act quickly, such as by passing a stringent law.