Deepfake, Personality Rights and Corporate Liability in India in 2026

Written by Archana Rao | Reading time: 4 minutes

India is moving from fragmented personality-rights enforcement toward a more explicit deepfake governance regime through the 2026 IT Rules amendments, while liability exposure continues to arise simultaneously under privacy, intermediary, consumer protection, IP, employment, and cyber laws.


India's artificial intelligence governance framework entered a decisive enforcement phase in 2026, marked by dedicated rules on deepfakes and synthetically generated information (SGI) in the updated Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2026.

The regulatory shift marks India’s first dedicated operational framework governing AI-generated content, deepfake dissemination, synthetic impersonation, and intermediary accountability. The 2026 IT amendment rules, notified on February 10, 2026, became operational on February 20, 2026.

For businesses operating in India, deepfakes now create material exposure across corporate governance, affecting brand integrity, cybersecurity, advertising liability, data protection, investor relations, and platform compliance.

Dedicated deepfake compliance framework

India’s IT Amendment Rules, 2026, regulate “synthetically generated information” (SGI), covering AI-generated or manipulated audio, video, image, and text content. Notably, the rules apply not only to social media intermediaries but also to platforms, publishers, AI-enabled services, and entities distributing or hosting synthetic content.

The IT Amendment Rules, 2026, introduce several operational obligations, including:

  1. Mandatory labelling requirements for AI-generated content
  2. Disclosure obligations for synthetic media
  3. Expedited takedown timelines for unlawful deepfakes
  4. Metadata and provenance expectations
  5. Enhanced due diligence obligations for Significant Social Media Intermediaries (SSMIs)
  6. Additional accountability standards linked to intermediary safe harbor protections

One of the most commercially critical changes is the tightening of takedown timelines.


Corporate liability emerging across multiple business functions

For businesses operating in India, the expansion of deepfake-related liability has become unavoidable.

Advertising and marketing

AI-assisted advertising campaigns may inadvertently reproduce protected likenesses, voices, or recognizable personality traits without authorization.

Brands using synthetic influencers, AI avatars, or cloned voices now face heightened exposure under personality rights, misleading advertising rules, and consumer protection laws.

Media and entertainment

Studios, OTT platforms, production houses, and talent agencies increasingly need contractual clarity regarding:

  1. AI-generated performances
  2. Voice cloning rights
  3. Digital replicas
  4. Post-production synthetic modifications
  5. Posthumous exploitation rights

The distinction between licensed digital enhancement and unlawful synthetic impersonation is becoming commercially important.

Financial services and corporate communications

Deepfake-enabled fraud schemes targeting treasury functions, payment approvals, and investor communications have become a major governance concern globally.

For regulated entities, insufficient controls may eventually trigger not only cybersecurity liability but also governance and disclosure scrutiny.

HR and recruitment functions

Synthetic candidate impersonation, AI-generated credentials, and manipulated interview content are creating new compliance and verification challenges.

Businesses increasingly need stronger authentication and verification procedures during recruitment and remote onboarding.

Legal intersectionality on deepfake governance

India’s deepfake and synthetic media framework operates through multiple interconnected legal regimes. Depending on the nature of the violation, businesses may face simultaneous exposure under cyberlaw, intellectual property, privacy, defamation, and contractual liability frameworks.

| Legal framework | Relevant sections / provisions | Legal exposure |
|---|---|---|
| IT Act, 2000 | Section 66C (identity theft); Section 66D (cheating by impersonation using computer resources); Sections 67 and 67A (publishing or transmitting objectionable content); Section 69A (blocking powers); Section 79 (intermediary safe harbor) | Identity impersonation, fraudulent synthetic communications, unlawful digital content dissemination |
| Digital Personal Data Protection Act, 2023 (DPDPA) | Sections relating to lawful processing, consent, data fiduciary obligations, reasonable security safeguards, and breach notification obligations | Unauthorized use of facial data, voice cloning, misuse of personal identifiers |
| Copyright Act, 1957 | Section 14 (exclusive rights of copyright owner); Section 51 (copyright infringement); Section 57 (moral rights of authors and performers) | Unauthorized AI-generated reproduction of protected audio, video, images, performances, or artistic works |
| Trade Marks Act, 1999 | Section 29 (trademark infringement) | Synthetic brand endorsements, deceptive AI-generated advertising, impersonation of brands or personalities |
| Consumer Protection Act, 2019 | Provisions governing misleading advertisements, unfair trade practices, and consumer deception | Deepfake advertisements, manipulated endorsements, fraudulent promotional content |
| Bharatiya Nyaya Sanhita (BNS), 2023 | Provisions relating to defamation, cheating, forgery, identity fraud, and cyber-enabled deception | Synthetic impersonation, manipulated reputational attacks, fraudulent communications |
| Defamation law | Civil defamation principles and criminal defamation provisions under the BNS | Reputational harm caused by fabricated or manipulated AI-generated content |
| Indian Contract Act, 1872 | Licensing and endorsement agreements; confidentiality clauses | Breach of AI licensing restrictions, misuse of digital likeness rights, unauthorized synthetic replication |
| Cybersecurity and fraud regulations | Reserve Bank of India (RBI) cybersecurity directions, sectoral cybersecurity frameworks, CERT-In reporting obligations | Voice cloning fraud, executive impersonation scams, synthetic phishing attacks |

Source: Ministry of Electronics and Information Technology, GoI

Commercial significance

A single deepfake incident, such as an AI-generated CEO statement, synthetic celebrity endorsement, or manipulated investor communication, may simultaneously trigger:

  1. Regulatory enforcement
  2. Intermediary compliance scrutiny
  3. IP claims
  4. Consumer protection investigations
  5. Privacy violations
  6. Cybercrime proceedings
  7. Contractual disputes

What businesses operating in India should prioritize

1. Establish enterprise AI governance policies: Organizations should implement internal governance frameworks governing synthetic media generation, approval, disclosure, and monitoring.

2. Conduct AI vendor due diligence: Third-party AI providers, marketing agencies, and platform vendors should face clear contractual obligations regarding compliance, indemnity, consent verification, and content authenticity.

3. Develop deepfake incident response protocols: Businesses should establish escalation and crisis-management systems for impersonation attacks, misinformation campaigns, and AI-enabled fraud.

4. Strengthen identity verification controls: Companies should reassess executive approval systems, payment authorization protocols, and recruitment verification processes.

5. Review advertising and influencer practices: Marketing teams should ensure AI-assisted campaigns do not unlawfully exploit protected identities or create deceptive synthetic endorsements.

6. Maintain transparent disclosure practices: Proactive disclosure and labeling mechanisms may reduce future litigation and regulatory exposure even where not expressly mandated.
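The incident-response point above (point 3) can be sketched as a simple escalation-routing table. The incident categories and team names below are hypothetical examples for illustration; the rules do not prescribe any particular internal structure:

```python
# Illustrative escalation routing for deepfake incidents.
# Categories and owning teams are hypothetical, not prescribed by the 2026 rules.
ESCALATION = {
    "executive_impersonation": ["security_ops", "legal", "board_secretariat"],
    "synthetic_endorsement": ["marketing_compliance", "legal"],
    "payment_fraud_attempt": ["treasury", "security_ops", "cert_in_reporting"],
}

def route_incident(category: str) -> list:
    """Return the teams to notify for a given incident category.

    Unrecognized categories fall back to the security operations team.
    """
    return ESCALATION.get(category, ["security_ops"])

print(route_incident("executive_impersonation"))
```

The design point is simply that escalation paths should be decided and documented before an incident occurs, so that a synthetic CEO video or cloned-voice payment request is routed to legal and security teams immediately rather than ad hoc.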


Strategic implication for India Inc.

India’s 2026 deepfake framework signals a broader transformation in digital regulation. The country is moving toward a governance model where AI-generated content is treated as a matter of economic trust, commercial accountability, and cybersecurity resilience.

As India strengthens its position as a global digital economy, technology services hub, and AI adoption market, regulators are increasingly prioritizing authenticity, traceability, and platform responsibility.

For businesses, the implications are immediate.

Deepfake governance is no longer a future-facing policy discussion. It is now an operational compliance requirement with direct consequences for legal exposure, enterprise risk management, and corporate reputation.

About Us

India Briefing is one of five regional publications under the Asia Briefing brand. It is supported by Dezan Shira & Associates, a pan-Asia, multi-disciplinary professional services firm that assists foreign investors throughout Asia, including through offices in Delhi, Mumbai, and Bengaluru in India. Dezan Shira & Associates also maintains offices or has alliance partners assisting foreign investors in China, Hong Kong SAR, Vietnam, Indonesia, Singapore, Malaysia, Mongolia, Dubai (UAE), Japan, South Korea, Nepal, the Philippines, Sri Lanka, Thailand, Italy, Germany, Bangladesh, Australia, the United States, the United Kingdom, and Ireland.

For a complimentary subscription to India Briefing’s content products, please click here. For support with establishing a business in India or for assistance in analyzing and entering markets, please contact the firm at india@dezshira.com or visit our website at www.dezshira.com.