Beware: Posting Online Content
Effective February 20, 2026, any harmful or unlawful synthetically generated information posted online will attract penalties under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ("Rules"). "Synthetically generated information" ("SGI") includes audio, visual, or audiovisual content created or altered using computer resources in a manner that appears real or indistinguishable from reality. The Rules reflect the government's intent to regulate emerging technologies such as deepfakes and AI-generated media, while carving out exceptions for basic editing, formatting, or accessibility improvements.
Under the Rules, SGI is explicitly recognized as "information" when used for illegal purposes. Intermediaries are now required to deploy reasonable and appropriate technical measures, including automated tools and other suitable mechanisms, to prevent, detect, label, and, where necessary, disable access to unlawful SGI. Any person generating, creating, modifying, or disseminating harmful or unlawful SGI shall be liable to punishment in accordance with the provisions of the Act and any other applicable laws in force.
The Rules also tighten compliance timelines. For instance, intermediaries are now required to respond to specific takedown requests within three hours, instead of thirty-six hours. The time for addressing user grievances has also been reduced from fifteen days to seven days. Moreover, intermediaries that allow the creation or sharing of SGI must make users aware of the legal consequences of misuse, at least once every three months.
Finally, the Rules mandate that intermediaries ensure any SGI created, generated, modified, or altered using a computer resource is clearly and prominently disclosed as such. Visual material must carry a visible label so that the disclosure is easily noticeable and adequately perceivable to users. Audio content must be prefixed with a clear and prominent audio disclosure so that listeners can immediately identify the content as synthetically generated.
Additionally, intermediaries shall adopt appropriate technical mechanisms, to the extent technically feasible, including the use of a unique identifier, to ensure traceability of the computer resource used to create, generate, modify, or alter any content. Together, these amendments represent a crucial regulatory move to address the risks posed by AI-driven content manipulation, balancing innovation with accountability and user protection.