The Ministry of Electronics and Information Technology (MeitY) is officially tightening its grip on the digital landscape. Under the new IT Rules, social media intermediaries are responsible for identifying and labeling all “Synthetically Generated Information” (SGI), including deepfake videos, synthetic audio, and algorithmically altered images.
The 3-Hour Ultimatum
The most dramatic feature of the 2026 amendment is how sharply it accelerates the removal of unlawful content. Platforms now have three hours to act when legally ordered by a court or government agency to delete illegal or deceptive AI content, down from the 36-hour window that applied before. The change reflects the government’s urgency in preventing dangerous deepfakes from going viral.

Labeling and Metadata Requirements

The rules establish a foundation for transparency:
- Clear disclosures: Everything produced or modified by artificial intelligence must carry a “clear, prominent and visible” label. An earlier proposal suggested a watermark covering 10% of the screen; the final rule leaves the design to the platform, provided the disclosure is clearly indicated.
- Persistent metadata: Platforms must embed permanent metadata and unique identifiers in synthetic content. These digital ‘fingerprints’ must be tamper-resistant and traceable back to the source, so that bad actors cannot simply strip the labels.
- User declarations: Users must declare whether their content is AI-generated before uploading it, and platforms must then employ automated tools to verify these assertions.
- Accountability and Safe Harbour: The government has also made clear that “Safe Harbour” protection, the immunity that shields platforms from liability for user-generated content, is now conditional. An intermediary that fails to put reasonable technical measures in place to detect AI content, or that misses the three-hour takedown window, can lose this immunity and face prosecution under Indian law.
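The “persistent metadata” requirement above can be sketched in code. The rules do not prescribe a mechanism, so the scheme below is purely illustrative: it binds a provenance label to the content bytes with an HMAC, so that editing either the content or the label invalidates the tag. All field names, the key, and the `source_id` value are hypothetical, not taken from the rules or any real standard.

```python
import hashlib
import hmac
import json

# Illustrative secret; a real platform would use a managed signing key.
PLATFORM_KEY = b"platform-signing-key"

def make_sgi_label(content: bytes, source_id: str) -> dict:
    """Attach a provenance label whose tag breaks if content or label change."""
    label = {
        "sgi": True,                  # declared synthetically generated
        "source_id": source_id,       # traces back to the originating tool/user
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(label, sort_keys=True).encode()
    label["tag"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return label

def verify_sgi_label(content: bytes, label: dict) -> bool:
    """Re-derive the tag; any edit to content or metadata invalidates it."""
    claimed = dict(label)
    tag = claimed.pop("tag", "")
    if hashlib.sha256(content).hexdigest() != claimed.get("content_sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

video = b"...synthetic video bytes..."
label = make_sgi_label(video, source_id="gen-tool-42")
assert verify_sgi_label(video, label)           # intact content passes
assert not verify_sgi_label(b"edited", label)   # tampered content fails
```

Production systems would more likely rely on an emerging provenance standard such as C2PA rather than a home-grown scheme, but the tamper-evidence principle is the same.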
Quarterly Warnings
To keep the public informed, platforms must also notify their users at least once every three months about the use of AI on the service and the legal ramifications of spreading fake synthetic media.
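The amendment effectively sets two compliance clocks: a three-hour takedown window from receipt of a lawful order, and a quarterly user-notification cycle. A minimal sketch of how a compliance system might track them is below; the function names and the 90-day approximation of “every three months” are assumptions, not text from the rules.

```python
from datetime import datetime, timedelta, timezone

# The two deadlines the 2026 amendment imposes on intermediaries.
TAKEDOWN_WINDOW = timedelta(hours=3)    # from receipt of a legal order
NOTICE_INTERVAL = timedelta(days=90)    # "every three months", approximated

def takedown_deadline(order_received: datetime) -> datetime:
    """Latest time by which ordered content must be removed."""
    return order_received + TAKEDOWN_WINDOW

def next_notice_due(last_notice: datetime) -> datetime:
    """When the next quarterly user notification must go out."""
    return last_notice + NOTICE_INTERVAL

order = datetime(2026, 3, 1, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(order))  # 2026-03-01 12:00:00+00:00
```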