Beware: Posting Online Content

Effective February 20, 2026, any harmful or unlawful synthetically generated information posted online will attract penalties under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Rules”). “Synthetically generated information” (“SGI”) includes audio, visual, or audiovisual content created or altered with computer resources in a way that appears authentic or indistinguishable from reality. The Rules reflect the government’s intent to regulate emerging technologies such as deepfakes and AI-generated media, while carving out exceptions for basic editing, formatting, or accessibility improvements.

Under the Rules, SGI is explicitly recognized as “information” when it is used for illegal purposes. Intermediaries are now required to deploy reasonable and appropriate technical measures, including automated tools and other suitable mechanisms, to prevent, detect, label, and, where necessary, disable access to unlawful SGI. Any person generating, creating, modifying, or disseminating harmful or unlawful SGI is liable to punishment in accordance with the provisions of the Act and any other applicable laws in force.

The Rules also tighten compliance timelines. For instance, intermediaries must now respond to specified takedown requests within three hours, down from thirty-six hours, and the time for addressing user grievances has been reduced from fifteen days to seven. Moreover, intermediaries that enable the creation or sharing of SGI must inform users of the legal consequences of misuse at least once every three months.

Finally, the Rules require intermediaries to ensure that any SGI created, generated, modified, or altered using a computer resource is clearly and prominently disclosed as such. Visual material must carry a visible label, ensuring that the disclosure is easily noticeable and clearly perceivable to users. Audio content must be prefixed with a clear and prominent audio disclosure so that listeners can immediately identify it as synthetically generated.
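The Rules prescribe what the disclosure must achieve, not how a platform should implement it. Purely as an illustration, a platform might pair the user-facing label with a machine-readable disclosure record attached to each piece of SGI. The sketch below is an assumption of one possible approach; every field and function name is hypothetical and not taken from the Rules:

```python
import json

def build_sgi_disclosure(content_kind: str, tool_name: str) -> dict:
    """Build a machine-readable disclosure record for synthetically
    generated information (SGI). Field names are illustrative only,
    not mandated by the Rules."""
    if content_kind not in {"visual", "audio", "audiovisual"}:
        raise ValueError("unknown content kind: " + content_kind)
    # The Rules require a visible label for visual content and a
    # prefixed audio disclosure for audio content; this record would
    # accompany whichever user-facing form the platform renders.
    return {
        "synthetically_generated": True,
        "content_kind": content_kind,
        "generating_tool": tool_name,
        "user_facing_label": "This content is synthetically generated.",
    }

record = build_sgi_disclosure("visual", "example-image-model")
print(json.dumps(record, indent=2))
```

A record like this could also feed the automated detection and labeling tools the Rules expect intermediaries to deploy, since downstream systems can check a single boolean flag rather than re-analyze the media.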

Additionally, intermediaries must, to the extent technically feasible, adopt appropriate technical mechanisms, including unique identifiers, to ensure traceability of the computer resource used to create, generate, modify, or alter any content. Together, these amendments represent a significant regulatory move to tackle the risks posed by AI-driven content manipulation, balancing innovation with accountability and user protection.
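The Rules leave the form of the unique identifier open. One plausible scheme, offered here only as an assumption and not as anything the Rules prescribe, is to derive the identifier by hashing the content together with metadata about the generating computer resource, so that identical inputs always map to the same traceable ID:

```python
import hashlib

def sgi_trace_id(content: bytes, tool_id: str, tool_version: str) -> str:
    """Derive a stable identifier tying a piece of SGI to the computer
    resource that produced it. The scheme is illustrative only."""
    h = hashlib.sha256()
    # Separate fields with a NUL byte so distinct (tool, version,
    # content) triples cannot collide by concatenation.
    h.update(tool_id.encode("utf-8"))
    h.update(b"\x00")
    h.update(tool_version.encode("utf-8"))
    h.update(b"\x00")
    h.update(content)
    return "sgi-" + h.hexdigest()

trace_id = sgi_trace_id(b"generated-video-bytes", "example-gen-model", "1.0")
print(trace_id)
```

Because the hash is deterministic, the same content produced by the same tool always yields the same identifier, while any change to the content or the tool metadata yields a different one, which is the property a traceability requirement needs.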