The Government of India has notified sweeping amendments to the country’s digital intermediary framework, significantly tightening compliance obligations for social media platforms and online intermediaries. The revised rules, issued by the Ministry of Electronics and Information Technology, introduce a sharply reduced content takedown timeline and new disclosure requirements for artificial intelligence-generated material. The changes will come into effect on February 20, 2026.
The amendments modify the existing Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, a regulatory framework that governs how digital platforms operate in India. The updated provisions reflect the government’s growing concern over the rapid spread of misinformation, deepfakes, and synthetic media content, particularly in an era of advanced generative AI tools.
One of the most significant changes is the reduction of the takedown window for unlawful content. Social media intermediaries will now be required to remove or disable access to content within three hours of receiving a valid order from a competent court or designated government authority. Previously, platforms were given up to 36 hours to comply. The government has stated that the compressed timeline is intended to limit the virality of harmful content, especially material that could threaten public order, national security, or individual reputations.
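To illustrate what the compressed timeline means operationally, the sketch below models a takedown order with a hard three-hour deadline from the moment of receipt. It is a minimal illustration only; the class, field names, and identifiers are hypothetical and not drawn from any platform's actual compliance systems (Python 3.10+ assumed for the type syntax).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Compliance window under the amended rules: three hours from receipt.
TAKEDOWN_WINDOW = timedelta(hours=3)

@dataclass
class TakedownOrder:
    """A takedown order from a court or designated government authority.

    All names here are hypothetical; no real platform API is implied.
    """
    order_id: str
    content_id: str
    received_at: datetime  # when the platform received the valid order

    @property
    def deadline(self) -> datetime:
        # The content must be removed or disabled before this moment.
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.deadline

# Usage: an order received now must be actioned within three hours.
order = TakedownOrder("ORD-001", "POST-12345", datetime.now(timezone.utc))
print("Deadline:", order.deadline.isoformat())
print("Overdue:", order.is_overdue())
```

Under the previous 36-hour window, such deadline tracking could run on daily review cycles; a three-hour window effectively requires round-the-clock escalation.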
The amended rules also formally define "synthetically generated" or AI-generated content. This covers audio, video, and other visual material that has been artificially created or manipulated to appear authentic. The government has clarified that the intention is to address malicious deepfakes and misleading synthetic media while distinguishing them from routine digital editing or creative transformations that do not misrepresent reality.
Under the new framework, platforms must ensure that AI-generated or synthetic content is clearly labeled. Where technically feasible, identifiers or metadata should be embedded to signal that the material is not organically produced. In addition, intermediaries are required to obtain user disclosures at the time of upload if the content has been generated or significantly altered using artificial intelligence tools. The aim is to enhance transparency for users and help them make informed judgments about the authenticity of what they see online.
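As a rough illustration of how upload-time disclosure and labeling could fit together, the sketch below builds a metadata record for an upload the user has declared as AI-generated. The field names, flag, and label text are assumptions for illustration; the amended rules do not prescribe a specific metadata schema.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    """An incoming upload with the user's disclosure captured at submission."""
    content_id: str
    filename: str
    declared_ai_generated: bool  # the disclosure collected at upload time

def build_label_metadata(upload: Upload) -> dict:
    """Build an illustrative metadata record for the uploaded content.

    The 'synthetic' flag and label text are hypothetical; they stand in
    for whatever identifiers a platform chooses to embed.
    """
    meta = {
        "content_id": upload.content_id,
        "synthetic": upload.declared_ai_generated,
    }
    if upload.declared_ai_generated:
        # A user-visible label signaling the material is not organically produced.
        meta["label"] = "AI-generated content"
    return meta

# Usage: an upload declared as AI-generated receives a visible label.
print(build_label_metadata(Upload("POST-67890", "clip.mp4", declared_ai_generated=True)))
```

In practice, platforms may pair such declared disclosures with embedded provenance signals where technically feasible, since self-reporting alone cannot catch undeclared synthetic content.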
The rules further strengthen grievance redressal obligations. Platforms must acknowledge user complaints within a shorter timeframe and act more promptly on verified grievances. They are also expected to periodically inform users about prohibited content categories and the legal consequences of uploading unlawful material. Prohibited categories include content that violates criminal law, impersonates individuals, spreads obscenity, or incites violence.
Government officials have framed the amendments as a necessary step in adapting regulatory systems to technological change. With generative AI tools becoming more accessible and capable of producing highly realistic synthetic content, policymakers argue that the potential for misuse has increased substantially. The tighter compliance regime is intended to balance technological innovation with accountability and user safety.
At the same time, the revised rules have sparked debate among digital rights advocates and industry stakeholders. Critics have raised concerns that a three-hour compliance window may place operational strain on platforms, particularly those managing high volumes of content across multiple languages. Some experts have cautioned that accelerated takedown timelines could incentivize over-removal of content in order to avoid liability, potentially affecting lawful speech.
Under India’s Information Technology Act, intermediaries benefit from “safe harbour” protections, which shield them from liability for third-party content provided they comply with due diligence requirements. The amended rules reiterate that adherence to mandated timelines and procedural safeguards is critical to retaining these protections.
Industry observers expect major global platforms operating in India to update their internal moderation systems, compliance workflows, and AI detection tools in the coming weeks. Companies may also need to invest in enhanced monitoring infrastructure and automated flagging systems to meet the shortened deadlines.
For Indian users, the immediate impact will likely be greater visibility of AI labels and potentially faster removal of harmful or misleading content. The government has positioned the amendments as part of a broader effort to create a safer digital ecosystem, particularly as artificial intelligence becomes more embedded in everyday online interactions.
As the February 20, 2026 implementation date approaches, the amended rules mark a pivotal moment in India’s digital governance landscape. They signal a stronger regulatory stance toward platform accountability while reopening longstanding debates about the balance between free expression, technological innovation, and state oversight in the world’s largest democracy.
