India has officially moved to regulate AI-generated content, tightening the screws on digital platforms and significantly accelerating content takedown timelines.
In a major regulatory update, the Central government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, formally bringing AI-generated and synthetic content within the country’s intermediary compliance framework. The amendments, notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, will come into effect from February 20, 2026.
Issued under the rule-making powers of the Information Technology Act, 2000, the changes mark one of India’s most decisive steps yet to address the risks posed by deepfakes, synthetic media, and AI-powered misinformation.
IT Rules for AI Content
At the heart of the amendment is a clarification that the term "information", as used in provisions covering unlawful acts under the IT Rules, now explicitly includes synthetically generated information. This effectively extends intermediary due diligence, enforcement, and takedown obligations to AI-generated content, placing new responsibilities on social media platforms, content hosts, and digital intermediaries.
The rules introduce a formal definition of synthetically generated information, covering audio-visual content that is artificially or algorithmically created, modified, or altered using computer resources in a way that appears real or authentic—and is likely to be mistaken for a real person or real-world event.
However, the government has carved out important exceptions. Routine and good-faith activities such as editing, formatting, transcription, translation, accessibility enhancements, educational or training material, and research outputs are excluded—provided they do not result in false or misleading electronic records.
Mandatory labelling of AI-generated content
One of the most consequential changes is the introduction of mandatory labelling for AI-generated content.
Intermediaries that allow the creation or dissemination of such content must now ensure it is clearly and prominently labelled as synthetically generated. Where technically feasible, platforms are also required to embed permanent metadata or provenance markers—such as a unique identifier—enabling traceability of the computer resource used to generate or modify the content.
Crucially, intermediaries are prohibited from enabling the removal, suppression, or alteration of these labels or metadata, closing a loophole often exploited to disguise deepfakes.
New obligations for social media platforms
Significant social media intermediaries face additional duties. Users will now be required to declare whether content is AI-generated before it is uploaded or published.
Platforms must deploy suitable technical measures, including automated verification tools, to check the accuracy of these declarations. If content is confirmed to be AI-generated, it must be displayed with a clear and prominent notice indicating its synthetic nature.
Stronger user warnings and compliance messaging
The amendments also strengthen user-notification requirements. Platforms must now inform users at least once every three months that violating platform rules, privacy policies, or user agreements could lead to immediate suspension or termination of access, removal of the offending content, or both.
Users must also be explicitly warned that unlawful activities may attract penalties under applicable laws, and that offences requiring mandatory reporting—including those related to child protection and criminal procedure—will be reported to appropriate authorities.
Takedown timelines drastically reduced
Perhaps the most disruptive change for platforms is the sharp reduction in compliance timelines.
The amended rules significantly tighten enforcement deadlines under Rule 3 of the IT Rules:
Compliance with lawful takedown orders has been reduced from 36 hours to just 3 hours
Grievance redressal timelines have been cut from 15 days to 7 days
Urgent complaints must now be acted upon within 36 hours, down from 72 hours
Certain specified content removal complaints must be addressed within 2 hours, compared to the earlier 24-hour window
Intermediaries are also required to act expeditiously once they become aware of violations involving synthetically generated information—whether through complaints or their own detection mechanisms. Actions may include disabling access to content, suspending user accounts, and reporting cases to authorities where legally mandated.
Safe harbour protection clarified
Importantly, the government has clarified that removing or disabling access to AI-generated or synthetic content in compliance with the IT Rules will not jeopardise safe harbour protections under Section 79(2) of the IT Act—offering platforms legal certainty as they step up enforcement.