MeitY Notifies New Amendments to IT Rules on Synthetic Media


The Ministry of Electronics and Information Technology (MeitY) on February 10 notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, explicitly bringing synthetically generated information, including deepfakes, within the scope of the IT Rules’ due diligence framework. The amendments regulating synthetic media will come into force on February 20, 2026, giving platforms a 10-day compliance window.

The changes expand due diligence obligations for intermediaries, introduce new definitions, shorten takedown and grievance timelines, and lay down a framework for how platforms must handle synthetic audio-visual content online.

What are the changes in the IT Rules for Synthetic Media?

1. Synthetic media is no longer outside the IT Rules obligations

The amendments introduce a statutory definition of “synthetically generated information” (SGI) and insert Rule 2(1A). This provision clarifies that references to “information” used to commit an unlawful act also include SGI. The clarification applies across key due-diligence provisions, including Rule 3(1)(b), Rule 3(1)(d), and Rules 4(2) and 4(4).

As a result, synthetic media no longer sits outside the IT Rules’ obligations: the amendments explicitly bring SGI within the same unlawful-content compliance framework that already governs other forms of illegal online information.

2. What counts as synthetically generated information

The notification first defines “audio, visual or audio-visual information” broadly. It covers any audio, image, photograph, graphic, video, moving visual recording, sound recording, or similar content, with or without accompanying audio, whether created, generated, modified, or altered through any computer resource.

It then defines “synthetically generated information” as such audio-visual content that a computer resource artificially or algorithmically creates or alters in a manner that appears real, authentic, or true, and depicts or portrays an individual or event in a way that is, or is likely to be perceived as, indistinguishable from a natural person or a real-world event.

Notably, the definition focuses on how real the content appears to an ordinary viewer, rather than merely on whether artificial intelligence tools were involved in its creation.

3. Routine editing and accessibility uses are excluded

The SGI definition expressly excludes certain categories of content. The rules do not treat audio-visual content as SGI where it arises from:

  • routine or good-faith editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression that does not materially alter, distort, or misrepresent the substance, context, or meaning of the underlying content;
  • routine or good-faith creation or presentation of documents, presentations, PDF files, educational or training materials, or research outputs, including illustrative, hypothetical, draft, template-based, or conceptual content, where such creation or presentation does not result in any false document or false electronic record; or
  • use of computer resources solely to improve accessibility, clarity, quality, translation, description, searchability, or discoverability, without generating, altering, or manipulating any material part of the underlying content.

These carve-outs ensure that everyday digital editing and assistive uses of AI do not automatically trigger compliance obligations meant for deceptive synthetic media.

4. The rules distinguish between prohibited SGI and labelled SGI

Rule 3(3)(a) divides synthetically generated information into two categories.

Under Rule 3(3)(a)(i), intermediaries offering computer resources that enable or facilitate SGI must not allow users to create or share SGI that violates any law in force. For all other SGI, Rule 3(3)(a)(ii) allows such intermediaries to host the content, provided they clearly label it as synthetically generated and attach provenance information, where technically feasible.

This distinction determines whether an intermediary must block synthetic content outright or allow it to remain online with disclosures.

5. Platforms must deploy technical measures to prevent unlawful SGI

Rule 3(3)(a)(i) requires intermediaries that offer computer resources enabling or facilitating SGI to deploy reasonable and appropriate technical measures, including automated tools, to “not allow” unlawful SGI. This includes SGI that:

  • contains child sexual exploitative and abuse material or non-consensual intimate imagery, or is obscene, pornographic, paedophilic, invasive of privacy (including bodily privacy), vulgar, indecent, or sexually explicit;
  • results in the creation, generation, modification, or alteration of any false document or false electronic record;
  • relates to the preparation, development, or procurement of explosive material, arms, or ammunition; or
  • falsely depicts or portrays a natural person or real-world event by misrepresenting, in a manner likely to deceive, a person’s identity, voice, conduct, action, statement, or an event as having occurred, with or without the involvement of a natural person.

By using the phrase “not allow,” the rule creates an expectation of proactive technical controls rather than purely reactive takedowns.
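
To illustrate what such a proactive control could look like in practice, here is a minimal sketch of a pre-upload gate that runs generated content through automated checks before publication. The rules do not prescribe any particular tooling; the category names and classifier functions below are hypothetical placeholders, not a mandated implementation.

```python
# Illustrative sketch only: the rules require "reasonable and appropriate
# technical measures, including automated tools" but prescribe no tooling.
# The classifier callables are hypothetical stand-ins for whatever detection
# an intermediary actually deploys.

from dataclasses import dataclass

# Categories of SGI that Rule 3(3)(a)(i) requires intermediaries to "not allow".
PROHIBITED_CATEGORIES = (
    "csam_or_ncii",             # child sexual abuse material / non-consensual intimate imagery
    "false_document",           # false documents or false electronic records
    "weapons_or_explosives",    # preparation, development, or procurement of explosives, arms, ammunition
    "deceptive_impersonation",  # false depiction of a real person or real-world event
)

@dataclass
class GateDecision:
    allowed: bool
    reasons: list

def pre_upload_gate(content: bytes, classifiers: dict) -> GateDecision:
    """Run hypothetical automated classifiers before the content is published.

    `classifiers` maps each prohibited category to a callable returning True
    when the content is flagged for that category.
    """
    hits = [cat for cat in PROHIBITED_CATEGORIES if classifiers[cat](content)]
    # Block outright if any prohibited category is flagged; otherwise the
    # content moves on to the labelling/provenance path under Rule 3(3)(a)(ii).
    return GateDecision(allowed=not hits, reasons=hits)
```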

6. Platforms must label and attach provenance to permitted SGI

For SGI that does not fall within the prohibited category, Rule 3(3)(a)(ii) requires intermediaries to:

  • prominently label the content in a manner that is easily noticeable and adequately perceivable, or, in the case of audio content, provide a prominently prefixed audio disclosure; and
  • embed permanent metadata or other appropriate technical provenance mechanisms, to the extent technically feasible, including a unique identifier that identifies the intermediary’s computer resource used to create, generate, modify, or alter the content.

Rule 3(3)(b) further requires intermediaries not to enable the modification, suppression, or removal of these labels or provenance markers.

These provisions aim to enable traceability of synthetic media without mandating a single technical standard.
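
As a rough illustration of the kind of provenance record Rule 3(3)(a)(ii) contemplates, the sketch below builds a label-plus-metadata object containing a unique identifier and a reference to the generating computer resource. The field names, schema, and the idea of a JSON record are assumptions made for illustration; the rules mandate no particular format or standard.

```python
# Illustrative sketch only: the rules speak of "permanent metadata or other
# appropriate technical provenance mechanisms" without fixing a format.
# The field names here are assumed, not a prescribed schema.

import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_record(content: bytes, resource_id: str) -> str:
    """Return a JSON provenance record for a piece of labelled SGI.

    `resource_id` is a hypothetical identifier for the intermediary's
    computer resource used to create, generate, modify, or alter the content.
    """
    record = {
        "synthetically_generated": True,            # the user-facing label flag
        "unique_identifier": str(uuid.uuid4()),     # per-item identifier
        "generating_resource": resource_id,         # identifies the tool/resource used
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

# Example: the record could be embedded in the media file or attached as a
# sidecar, depending on the format and what is technically feasible.
print(build_provenance_record(b"<generated media bytes>", "example-genai-tool-v1"))
```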

7. Platform “awareness” triggers mandatory action

Rule 3(1)(cb) sets out the steps an intermediary must take once it becomes aware of a violation involving SGI that falls within the labelling-and-disclosure category.

The rules treat an intermediary as becoming aware when it identifies a violation of its own accord, receives actual knowledge through a lawful notice, or receives any grievance, complaint, or information under the rules.

Once awareness arises, the intermediary must take expeditious and appropriate action. This may include disabling access to the content, suspending or terminating the relevant user account without vitiating evidence, disclosing the user’s identity to a victim-complainant in accordance with applicable law, and reporting the matter to authorities where mandatory reporting obligations apply.

Even where a platform initially permitted SGI to be published with labels, failure to act after awareness is established can amount to a due diligence failure.

8. Significant social media intermediaries must seek declarations and verify before publishing

Rule 4(1A) requires a significant social media intermediary, prior to the display, upload, or publication of information, to:

  • ask users to declare whether information is SGI;
  • deploy appropriate technical measures (including automated tools or other suitable mechanisms) to verify the accuracy of the declaration, having regard to the nature, format, and source of the information; and
  • where declaration or verification confirms SGI, ensure it is clearly and prominently displayed with an appropriate label or notice.

The proviso retains the “knowingly permitted, promoted, or failed to act upon” standard for due diligence failure.

Collectively, this creates a pre-publication obligation that goes beyond notice-and-takedown for large platforms.
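
A minimal sketch of how this pre-publication flow could fit together is shown below: collect the user’s declaration, run an assumed automated verifier, and attach a prominent label whenever either signal indicates SGI. The `looks_synthetic` verifier and the return structure are stand-ins chosen for illustration; nothing here reflects a mandated implementation.

```python
# Illustrative sketch of the Rule 4(1A) flow: declaration -> verification -> label.
# `looks_synthetic` is a hypothetical stand-in for an SSMI's automated checks.

def looks_synthetic(content: bytes) -> bool:
    """Placeholder for an automated verification tool (assumed, not specified)."""
    return False  # a real deployment would analyse the content here

def prepare_for_publication(content: bytes, user_declared_sgi: bool) -> dict:
    """Decide whether the item must carry a prominent SGI label before display."""
    verified_sgi = looks_synthetic(content)
    # Label if the user declares the content as SGI or verification flags it.
    needs_label = user_declared_sgi or verified_sgi
    return {
        "publish": True,
        "label": "Synthetically generated information" if needs_label else None,
        "declaration": user_declared_sgi,
        "verified": verified_sgi,
    }

# Example: a user declares an upload as SGI, so it is published with a label.
print(prepare_for_publication(b"<uploaded media>", user_declared_sgi=True))
```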

9. Platforms must issue enhanced quarterly user notices

Rule 3(1)(c) requires intermediaries to inform users at least once every three months that platforms may remove content, suspend or terminate accounts, and report offences that require mandatory reporting, and that unlawful content may attract legal liability.

Rule 3(1)(ca) adds an SGI-specific notice for intermediaries offering computer resources under Rule 3(3). This notice must warn users that violations may lead to content removal, account suspension or termination without vitiating evidence, disclosure of identity to a victim-complainant in accordance with law, and reporting to authorities where required.

The rules, therefore, convert user-facing deterrence into a formal compliance obligation.

10. What platforms can do when violations occur

Rule 3(1)(ca)(ii) lists the consequences platforms may impose for violations. These include immediate disabling or removal of content, suspension or termination of user accounts without vitiating evidence, disclosure of the violator’s identity to a victim or their representative in accordance with law, and reporting to authorities where mandatory reporting applies.

The emphasis on preserving evidence aligns platform enforcement with criminal investigation requirements.

11. Takedown and grievance timelines are tightened

The notification tightens existing timelines as follows:

  • the 36-hour timeline under Rule 3(1)(d) is reduced to three hours;
  • the 15-day timeline under Rule 3(2)(a)(i) is reduced to seven days;
  • the 72-hour timeline under the first proviso to Rule 3(2)(a)(i) is reduced to 36 hours; and
  • the 24-hour timeline under Rule 3(2)(b) is reduced to two hours.

The rules also require that police-authority intimations come from officers not below the rank of Deputy Inspector General of Police, specifically authorised by the appropriate government through a written order.

These changes significantly narrow discretion and accelerate response obligations for platforms.

12. Safe harbour is protected for compliant and proactive removals

Finally, Rule 2(1B) clarifies that intermediaries do not violate Section 79(2)(a) or (b) of the Information Technology Act when they remove or disable access to content, including SGI, in compliance with the rules. This protection extends to actions taken through reasonable and appropriate technical measures, including automated tools.

The provision seeks to reassure platforms that proactive moderation carried out in accordance with the rules will not, by itself, jeopardise intermediary safe harbour.
