Deepfake, Content Labeling & Safe Harbor Risks for Global Platforms: India’s 2026 AI Regulation

By Archana Rao | Reading time: 6 minutes

India has strengthened its AI regulation through amendments to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, effective February 20, 2026. The revised rules mandate prominent labeling of AI-generated content and introduce expedited takedown timelines as short as two to three hours.

Social media platforms and technology companies operating in India must proactively align their compliance systems with the new regulatory mandate to mitigate enforcement risk, monetary penalties, and potential legal proceedings.


The Ministry of Electronics and Information Technology (MeitY) has formally notified changes to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised framework introduces new compliance obligations for social media intermediaries, particularly in relation to artificial intelligence (AI)-generated content and expedited content takedown timelines.

While the central government has relaxed certain earlier proposals concerning AI labeling requirements, it has simultaneously imposed significantly stricter timelines for the removal of unlawful content.

The 2026 IT amendments, notified vide gazette notification number G.S.R. 120(E), come into force on February 20, 2026.

How India regulates AI-generated content under the IT Act

India does not regulate AI as a standalone technology. Instead, it regulates the outputs of AI systems when such outputs are hosted, transmitted, or enabled by digital intermediaries and violate Indian law.

Changes to the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, officially notified on February 10, 2026, expand compliance obligations around:

  • Synthetic or AI-generated content,
  • Deepfakes and impersonation,
  • Non-consensual sensitive imagery,
  • Misleading and harmful content,
  • Expedited removal timelines.

For foreign AI companies, generative AI platforms, social media intermediaries, and content-hosting services operating in India, compliance is now a product-level, real-time obligation.

Deepfake regulation and AI content labeling requirements

The latest rules in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, impose explicit labeling obligations. Online platforms must adhere to the following requirements:

  • Clearly and prominently label AI-generated or synthetically generated content in a manner visible to users.
  • Ensure that AI-related labels, watermarks, or metadata cannot be removed, altered, or suppressed.
  • Obtain user declarations where content has been created or materially altered using AI systems.
  • Implement reasonable technical measures to verify and track AI-origin information.

Although the earlier proposal requiring AI labels to occupy 10 percent of screen space has been withdrawn, the requirement of “prominence” remains legally enforceable. Platforms must therefore ensure that disclosures are conspicuous, accessible, and not designed in a way that dilutes visibility.

Compliance implication for platforms

These obligations extend beyond policy disclosures and require product-level implementation, including:

  • Preservation of backend metadata and watermark integrity,
  • Deployment of provenance-tracking mechanisms,
  • Maintenance of audit logs for regulatory review.

Once content qualifies as synthetic or AI-generated under the rules, labeling is mandatory.
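
To make the product-level nature of these duties concrete, here is a minimal sketch of how a labeling step might sit in a content-ingestion pipeline. All names, the record schema, and the SHA-256 integrity digest are our illustrative assumptions; the rules require prominent labels and tamper-resistant metadata but do not prescribe any particular format.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(content_id: str, media_bytes: bytes,
                            user_declared_ai: bool,
                            classifier_flagged_ai: bool) -> dict:
    """Attach a prominent AI label plus tamper-evident provenance metadata.

    Hypothetical sketch: field names and the digest scheme are assumptions,
    not prescribed by the IT Rules.
    """
    is_synthetic = user_declared_ai or classifier_flagged_ai
    record = {
        "content_id": content_id,
        "synthetic": is_synthetic,
        # Visible, user-facing label (the "prominence" requirement).
        "display_label": "AI-generated content" if is_synthetic else None,
        # Provenance metadata that downstream systems must not strip.
        "provenance": {
            "user_declaration": user_declared_ai,
            "classifier_flag": classifier_flagged_ai,
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    # Digest over media plus metadata so later tampering (label removal,
    # metadata edits) is detectable during audit-log review.
    payload = media_bytes + json.dumps(record, sort_keys=True).encode()
    record["integrity_sha256"] = hashlib.sha256(payload).hexdigest()
    return record
```

Each returned record would then be written to an append-only audit log so the platform can show a regulator when, and on what basis, a given item was labeled.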

Compressed takedown timelines

India has introduced some of the most aggressive content removal timelines globally for AI-related harms. The current legal mandates include:

  1. Non-consensual intimate imagery (including AI-generated deepfake imagery): 2 hours
  2. Other unlawful content (including AI-generated misinformation or impersonation): 3 hours
  3. Privacy or impersonation complaints: 24 hours
  4. Grievance resolution: 72 hours

These timelines apply irrespective of whether the content is AI-generated or manually created.
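
As a rough illustration of how these statutory clocks could be wired into a complaint-handling system, consider the sketch below; the category keys and the function are our own naming, and only the hour values come from the list above.

```python
from datetime import datetime, timedelta, timezone

# Statutory response windows under the amended IT Rules
# (hour values taken from the timelines listed above).
TAKEDOWN_SLA_HOURS = {
    "non_consensual_intimate_imagery": 2,
    "other_unlawful_content": 3,
    "privacy_or_impersonation": 24,
    "grievance_resolution": 72,
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act on a complaint."""
    return received_at + timedelta(hours=TAKEDOWN_SLA_HOURS[category])

# Example: a deepfake NCII complaint received now must be actioned within 2 hours.
received = datetime.now(timezone.utc)
print(takedown_deadline("non_consensual_intimate_imagery", received))
```

In practice, the computed deadline would drive queue prioritization and on-call escalation, since missing it can cost the platform its safe harbor.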

Operational risk

Failure to act within prescribed timelines may result in:

  • Loss of safe harbor protection,
  • Criminal liability exposure,
  • Blocking orders under Section 69A,
  • Regulatory enforcement actions.

For AI platforms, detection latency becomes a legal risk variable.

Legal basis for regulating AI content in India

Statutory foundation

AI-generated content in India is regulated under two laws:

  1. The Information Technology (IT) Act, 2000
  2. The IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021

The law does not prohibit AI systems; however, it regulates unlawful outcomes arising from AI-generated or synthetically generated information (SGI). Safe harbor protection under Section 79 of the IT Act, 2000, applies only if intermediaries comply with due diligence requirements.

Section 79 of the IT Act provides legal protection to intermediaries (referred to as “network service providers”) against liability for third-party content hosted or transmitted through their platforms. In simple terms, an intermediary, such as a social media platform, hosting provider, or online marketplace, will not be held legally responsible for unlawful content posted by users, provided that:

  • The unlawful act was committed without the intermediary’s knowledge; or
  • The intermediary exercised due diligence to prevent such violations.

This protection applies only to content created or uploaded by third parties, not to content created by the intermediary itself.

Section 79 does not grant blanket immunity. It protects platforms only when they act responsibly and in good faith. Once they gain knowledge of unlawful content and fail to act appropriately, the protection can be withdrawn.

Definition of SGI

SGI includes content created, altered, or manipulated using AI systems.

Regulatory scrutiny intensifies where AI outputs:

  1. Impersonate real individuals,
  2. Fabricate real-world events,
  3. Disseminate misinformation,
  4. Produce non-consensual intimate imagery,
  5. Harm women or children,
  6. Threaten public order or national security.

The framework excludes routine audio/video editing, quality-enhancement uses, and assistive AI tools that do not misrepresent identity or facts. 

The determining factor is not the use of AI but whether the output violates India’s IT laws.

Safe harbor conditionality and liability exposure

Safe harbor under Section 79 shields intermediaries from liability for user-generated content—provided they:

  • Follow due diligence norms.
  • Remove unlawful AI content within timelines.
  • Comply with court or central government directions.

Loss of safe harbor exposes platforms to:

  • IT Act criminal penalties (Sections 66C, 66D, 66E, 67, 67A, 67B),
  • Liability under the Bharatiya Nyaya Sanhita (misinformation, obscenity),
  • POCSO Act exposure (child exploitation content),
  • Civil claims and reputational harm.

For foreign companies, this shifts AI misuse from reputational risk to regulatory risk.

Proactive safeguard obligations for AI-enabled platforms

Platforms that enable AI content creation must deploy:

  • “Reasonable and appropriate” technical safeguards,
  • Systems to prevent impersonation and misrepresentation,
  • Rapid disablement tools,
  • Account suspension mechanisms,
  • Monitoring workflows for synthetic content misuse.

The compliance standard moves beyond reactive removal. Platforms are expected to implement risk-mitigation architecture.
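
Read in engineering terms, "risk-mitigation architecture" points toward a pre-publication gate rather than a purely reactive takedown queue. A minimal sketch follows, assuming hypothetical classifier scores and thresholds that nothing in the rules prescribes:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_id: str
    declared_ai: bool           # user declaration collected at upload time
    impersonation_score: float  # output of an assumed detection model, 0..1
    ncii_score: float           # non-consensual intimate imagery risk, 0..1

def safeguard_gate(upload: Upload) -> str:
    """Route an upload to a publish, label, hold, or block path.

    Thresholds are illustrative; a real deployment would tune them and
    pair automated routing with human review to meet the compressed
    statutory timelines.
    """
    if upload.ncii_score > 0.9:
        return "block_and_escalate"     # rapid disablement path
    if upload.impersonation_score > 0.8:
        return "hold_for_human_review"  # impersonation prevention
    if upload.declared_ai:
        return "publish_with_ai_label"  # mandatory labeling path
    return "publish"
```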

Additional requirements for large platforms (SSMIs)

Platforms classified as Significant Social Media Intermediaries (SSMIs) are subject to enhanced compliance requirements under India’s IT Rules. This classification typically applies to large platforms exceeding prescribed user thresholds in India. SSMIs must:

  1. Appoint a Chief Compliance Officer (resident in India).
  2. Appoint a Nodal Contact Person (24/7 law enforcement coordination).
  3. Appoint a Resident Grievance Officer.
  4. Publish monthly compliance reports.
  5. Enable traceability of message originators where legally mandated, particularly in cases involving serious offences.

This introduces potential personal liability for local compliance officers.

Enforcement trends relevant to AI platforms

Recent enforcement patterns include:

  1. Blocking of websites hosting child sexual abuse material,
  2. Directions to disable services facilitating non-consensual imagery,
  3. Platform bans (including OTT services),
  4. Advisories requiring strengthened moderation systems,
  5. Emphasis on deepfakes and AI misuse involving women.

The regulatory approach is executive-driven and enforcement-oriented.

Frequently asked questions (FAQs)

1. Does India regulate AI models directly?

No. The regulatory framework targets unlawful AI-generated outputs hosted or enabled by intermediaries.

2. Is labeling mandatory for all AI content?

Labeling is required where content qualifies as SGI. Platforms must ensure clear and prominent disclosure once content is identified as synthetically generated.

3. What happens if AI-generated deepfake content goes viral before removal?

If removal timelines (two to three hours) are missed, platforms risk losing safe harbor protection, exposing them to direct liability. Virality does not mitigate compliance obligations.

4. Do foreign AI companies fall under Indian law?

Yes. If a platform offers services to users in India or targets the Indian market, it must comply, regardless of where it is incorporated.

5. Does encryption protect platforms from liability?

Encryption does not override compliance obligations. SSMIs may face traceability requirements in certain cases.

6. How does data protection intersect with AI regulation?

Under the Digital Personal Data Protection (DPDP) Act, 2023:

  1. Children’s data requires parental consent.
  2. Tracking and targeted advertising toward children are restricted.
  3. Processing identifiable personal data through AI must align with lawful purpose and consent.

Deepfake generation involving identifiable individuals may trigger additional liabilities.

Why India’s 2026 AI regulation matters for global firms

Moderation must be near real-time

AI misuse detection systems must operate continuously and be tuned to Indian legal thresholds. Human-in-the-loop review processes must support compressed timelines.

Product design must embed compliance

To avoid regulatory violations, companies should implement:

  1. Synthetic content watermarking,
  2. Metadata integrity controls,
  3. Provenance verification,
  4. India-specific geofenced safeguards,
  5. Audit logging for regulatory review.

AI governance must be integrated at the architecture level.
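
Continuing the labeling sketch shown earlier, a metadata-integrity control might periodically re-verify stored records and flag any item whose AI label or provenance metadata appears to have been stripped or altered. This is again a hedged illustration built on the same assumed record schema, not a mechanism the rules mandate:

```python
import hashlib
import json

def verify_label_integrity(media_bytes: bytes, record: dict) -> bool:
    """Recompute the integrity digest and compare it with the stored value.

    A mismatch suggests the AI label or provenance metadata was removed
    or edited after labeling, which is exactly what the non-removability
    requirement targets; failures should be surfaced in audit logs.
    """
    stored = record.get("integrity_sha256")
    if stored is None:
        return False  # missing digest: treat as stripped and fail closed
    check = {k: v for k, v in record.items() if k != "integrity_sha256"}
    payload = media_bytes + json.dumps(check, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest() == stored
```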

Establish an India-specific compliance framework

Foreign companies should maintain local compliance officers (if they qualify as SSMIs) and develop India-dedicated moderation pipelines. They should also conduct periodic legal audits and maintain crisis-response playbooks for deepfake incidents.

Reassess risk exposure

AI misuse involving political misinformation, deepfake harassment, child exploitation, and public order disruption may trigger rapid regulatory escalation. Foreign companies should incorporate India-specific regulatory risk modeling into enterprise risk management frameworks.

Conclusion

India’s regulatory model does not ban AI-generated content but subjects it to strict accountability standards. Mandatory labeling, compressed removal timelines, proactive safeguard requirements, and conditional safe harbor collectively create a high-compliance environment.

For foreign AI companies, generative AI providers, and global digital platforms, India requires:

  • Real-time moderation capabilities,
  • Embedded AI governance mechanisms,
  • Local compliance infrastructure,
  • Executive-level regulatory oversight.

In practical terms, AI governance in India is no longer a voluntary ethical layer—it is a legal operating condition.

Digital compliance is an increasing area of regulatory scrutiny for multinational technology companies operating in India. Our experts provide end-to-end support on intermediary compliance, AI governance, data protection advisory, regulatory risk assessments, and safe harbor mitigation to ensure legally sound digital operations.

For tailored guidance on India’s IT Rules and AI regulation, contact our advisory team at: India@dezshira.com

About Us

India Briefing is one of five regional publications under the Asia Briefing brand. It is supported by Dezan Shira & Associates, a pan-Asia, multi-disciplinary professional services firm that assists foreign investors throughout Asia, including through offices in Delhi, Mumbai, and Bengaluru in India. Dezan Shira & Associates also maintains offices or has alliance partners assisting foreign investors in China, Hong Kong SAR, Vietnam, Indonesia, Singapore, Malaysia, Mongolia, Dubai (UAE), Japan, South Korea, Nepal, The Philippines, Sri Lanka, Thailand, Italy, Germany, Bangladesh, Australia, the United States, the United Kingdom, and Ireland.

For a complimentary subscription to India Briefing’s content products, please click here. For support with establishing a business in India or for assistance in analyzing and entering markets, please contact the firm at india@dezshira.com or visit our website at www.dezshira.com.