India’s Regulation of AI and Large Language Models

Written by Abhishek Dey and Melissa Cyrill | Reading time: 6 minutes

Presently, India lacks a dedicated regulation for artificial intelligence (AI). We outline some of the advisories, guidelines, and IT rules that offer legal oversight for the development of AI, Generative AI, and large language models (LLMs) in India.


On March 1, 2024, the Indian government issued an advisory instructing platforms to obtain explicit permission from the Ministry of Electronics and Information Technology (MeitY) before deploying any “unreliable Artificial Intelligence (AI) models/Large Language Models (LLM)/Generative AI, software or algorithms” for users accessing the Indian internet. Furthermore, intermediaries and platforms are required to ensure that their systems do not facilitate bias or discrimination or compromise the integrity of the electoral process. They must also label all artificially generated media and text with unique identifiers or metadata to facilitate easy identification.

Generative AI, such as OpenAI’s ChatGPT (Chat Generative Pre-trained Transformer), refers to algorithms capable of generating various types of content, including audio, code, images, text, simulations, and videos. Recent advancements in this field have the potential to revolutionize how content is created. Competing Generative AI products include Google’s Gemini, Baidu’s Ernie Bot, Meta AI’s LLaMA, Anthropic’s Claude, and xAI’s Grok. Microsoft has also launched Copilot, based on GPT-4; the tech giant has a profit-sharing partnership with OpenAI.

Large language models are foundation models trained on vast datasets, enabling them to understand and generate natural language and other content for diverse tasks. LLMs are widely credited with popularizing Generative AI and are focal points for organizations seeking to integrate artificial intelligence into various business functions and applications. Prior to 2020, fine-tuning was the primary method for adapting a model to a specific task, but larger models such as GPT-3 can now be prompt-engineered to achieve similar results. These models are believed to acquire knowledge of the syntax, semantics, and ontologies present in human language data, along with any inaccuracies and biases in that data. Notable LLMs include OpenAI’s GPT series (e.g., GPT-3.5 and GPT-4), Google’s PaLM and Gemini, xAI’s Grok, Meta’s LLaMA family, Anthropic’s Claude models, and Mistral AI’s open-source models.
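To make the fine-tuning versus prompt-engineering distinction concrete, below is a minimal Python sketch of few-shot prompting, in which the task is specified entirely inside the prompt rather than by updating model weights. The helper function and example labels are hypothetical illustrations, not any specific provider’s API.

```python
# Minimal sketch of prompt engineering (few-shot prompting): the model is
# adapted to a task purely through the prompt, with no weight updates.
# build_few_shot_prompt and the example data are hypothetical illustrations.

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot sentiment-classification prompt from labeled examples."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for text, label in examples:
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    # The model is expected to continue the pattern for the final, unlabeled review.
    lines += [f"Review: {query}", "Sentiment:"]
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("Stopped working after a week; support never replied.", "Negative"),
]
prompt = build_few_shot_prompt(examples, "Setup was painless and it just works.")
print(prompt)  # This string would be sent to any LLM completion endpoint.
```

Before 2020, the same adaptation would typically have required gradient updates on a labeled dataset; with a sufficiently large model, the in-context examples alone steer the output.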

The Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, soon issued a clarification on X, saying that the advisory applied only to “significant platforms” and that seeking permission from MeitY was required only of “large platforms and will not apply to startups”. He added that the advisory was aimed only at untested AI platforms deployed on the Indian internet.

Following public criticism, the government issued a revised advisory that removed the requirement for platforms to submit action taken-cum-status reports but retained the immediate compliance obligation. The language was toned down: platforms are now simply required to label under-tested or unreliable AI models to caution users about potential inaccuracies. Social media intermediaries were directed to use consent pop-ups to explicitly notify users about the unreliability of AI-generated content. Measures to detect and label deepfakes and misinformation were preserved, while the concept of the “first originator” was dropped.

These advisories shed light on how the regulatory environment for AI and LLMs is developing in India.

The Indian government is actively investing in the artificial intelligence sector. Most recently, it sanctioned a substantial investment of INR 103 billion (US$1.25 billion) for AI projects over a period of five years. This funding will be allocated to diverse objectives, such as developing computing infrastructure and large language models and supporting AI startups. Additionally, a National Data Management Office will be established to coordinate with various government departments and ministries to improve the quality of government data and make it available for AI development and deployment. These investments aim to foster the creation of AI applications for the public sector.

AI regulatory landscape in India

Presently, India lacks a dedicated regulation for AI; instead, it has established a series of initiatives and guidelines aimed at the responsible development and deployment of AI technologies.

Below, we note key guidelines and strategies that inform India’s regulatory landscape for AI technology.

National Artificial Intelligence Strategy

In 2018, NITI Aayog launched the first national AI strategy, #AIforAll, intended as an inclusive approach to artificial intelligence. The strategy identified critical areas of national priority for AI innovation and deployment, including healthcare, education, agriculture, smart cities, and transportation. Since then, some of the strategy’s recommendations have been executed, including the creation of high-quality datasets to promote research and innovation, as well as the construction of legislative frameworks for data protection and cybersecurity.

Principles for Responsible AI

In February 2021, NITI Aayog drafted the Principles for Responsible AI as a continuation of the National Artificial Intelligence Strategy. This document examines ethical considerations surrounding the implementation of AI solutions in India, categorized into system and societal considerations. While system considerations primarily address decision-making principles, fair inclusion of beneficiaries, and accountability, the societal considerations concentrate on automation’s impact on job creation and employment.

The paper outlines seven overarching principles for the responsible governance of AI systems: safety and reliability; inclusivity and non-discrimination; equality; privacy and security; transparency; accountability; and protection and reinforcement of positive human values.

Operationalizing Principles for Responsible AI

In August 2021, NITI Aayog published the second segment of the principles for responsible AI, which focuses on putting into practice the principles derived from the ethical considerations explored in the first part. The document underscores the significance of government involvement in promoting responsible AI implementation in social sectors, in collaboration with the private sector and research organizations. It stresses the necessity of regulatory and policy actions, capacity enhancement, and encouraging ethical practices by instilling a responsible mindset regarding AI among private entities.

DPDP Act

The Digital Personal Data Protection Act, 2023 received the assent of the President of India on August 11, 2023. Once its provisions are brought into force, the Act will govern the processing of digital personal data in India, irrespective of the format in which the data was originally collected, and can be used to address some of the privacy concerns raised by AI platforms.

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021), issued by the Government of India under the Information Technology Act, 2000, serve as a framework to oversee various entities, including social media intermediaries, OTT platforms, and digital news media. The rules came into effect on May 26, 2021, and were amended on April 6, 2023.

Draft National Data Governance Framework Policy

MeitY released the draft National Data Governance Framework Policy (NDGFP) on May 26, 2022. The policy aims to modernize and enhance government data collection and management procedures. Its core objective, as outlined in the draft, is to cultivate an ecosystem conducive to AI- and data-driven research and startups in India by establishing a comprehensive repository of datasets.

Framing key standards

The Ministry of Electronics and Information Technology has instituted committees on AI tasked with delivering reports on AI development, safety, and ethical concerns. Similarly, the Bureau of Indian Standards, serving as India’s national standards body, has set up a committee dedicated to AI, which is in the process of proposing draft Indian standards for the field.

Rules against Deepfakes

Deepfakes are digitally manipulated media, including videos, audio, and images, created using AI. These digitally falsified media have the potential to harm reputations, fabricate evidence, and erode trust in institutions due to their hyper-realistic nature.

India currently lacks specific laws directly addressing generative AI, deepfakes, and AI-related crimes, although the government has said relevant legislation is in the works.

At present, various provisions within existing legislation offer both civil and criminal remedies. For instance, Section 66E of the Information Technology Act, 2000 covers deepfake crimes involving privacy violations, punishable by imprisonment of up to three years or a fine of up to INR 200,000. Section 66D targets the malicious use of communication devices or computer resources, with penalties including imprisonment and/or fines. Additionally, Sections 67, 67A, and 67B of the IT Act can be used to prosecute the publishing or transmission of obscene deepfakes. The IT Rules require social media platforms to remove such content swiftly; platforms that fail to do so risk losing their ‘safe harbour’ protection.

The Indian Penal Code provides further recourse for deepfake-related cybercrimes under Sections 509 (insulting the modesty of a woman), 499 (criminal defamation), and 153A and 153B (spreading hate on communal lines), among others. Notably, the Copyright Act, 1957 can address cases involving the unauthorized use of copyrighted material to create deepfakes, with Section 51 prohibiting such acts. Additionally, recent cases demonstrate law enforcement’s application of forgery-related sections in deepfake incidents.

Due diligence advisory for AI intermediaries and consequences for non-compliance

On March 15, 2024, MeitY issued a new advisory, replacing the previous advisory No. 2(4)/2023-CyberLaws-3 dated March 1, 2024. This advisory must be read with advisory No. 2(4)/2023-CyberLaws dated December 26, 2023; it highlights concerns regarding intermediaries and platforms, noting their frequent neglect of the due diligence obligations outlined in the IT Rules 2021. The advisory’s key directions are as follows:

  • Intermediaries and platforms must ensure that their use of AI models, LLM, Generative AI, software, or algorithms does not allow users to host, display, upload, modify, publish, transmit, store, update, or share any unlawful content as outlined in Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act 2000 or other applicable laws.
  • Intermediaries should ensure that their computer resources, whether through AI models, LLM, Generative AI, software, or algorithms, do not introduce bias or discrimination or compromise the integrity of the electoral process.
  • AI foundational models, LLM, Generative AI, software, or algorithms that are under-tested or unreliable, or any further development on such models, should only be made available to users in India after accurately labeling the generated output.
  • Users must be informed through terms of service and user agreements about the consequences of dealing with unlawful information, including access restrictions, account suspension or termination, and punishment under applicable laws.
  • Intermediaries facilitating the creation, generation, or modification of text, audio, visual, or audio-visual information that could be used as misinformation or deepfakes should label or embed such information with permanent unique metadata or identifiers. The metadata should also allow identification of the users or computer resources responsible for any changes made (a minimal sketch of such labeling follows this list).
  • Non-compliance with the IT Act 2000 and/or the IT Rules may lead to prosecution under the IT Act 2000 and other criminal laws for intermediaries, platforms, and their users.
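To make the labeling direction concrete, here is a minimal Python sketch that attaches a provenance record to a piece of generated text, using the model name, a timestamp, and a SHA-256 content hash as a unique identifier. The record structure and field names are assumptions for illustration; neither the advisory nor the IT Rules prescribe a specific schema.

```python
# Illustrative sketch of labeling AI-generated content with metadata, along the
# lines the advisory describes. The record format and field names are
# assumptions; no specific schema is prescribed by the advisory itself.
import hashlib
import json
from datetime import datetime, timezone

def label_generated_text(text: str, model_name: str, user_id: str) -> dict:
    """Wrap generated text in a provenance record for later identification."""
    return {
        "content": text,
        "metadata": {
            "ai_generated": True,                      # explicit AI-origin flag
            "model": model_name,                       # which system produced it
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "originator": user_id,                     # who requested the output
            # SHA-256 of the content serves as a unique, tamper-evident identifier.
            "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        },
    }

record = label_generated_text("Sample model output.", "example-llm-v1", "user-42")
print(json.dumps(record, indent=2))
```

In practice, a platform could embed such a record in image or video container metadata or store it alongside the content; the advisory leaves the exact mechanism to the intermediary.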
