
Centre Tightens IT Rules: AI Content Must Be Labelled, Platforms Given 3 Hours to Remove Violations
New Delhi: The Union government has amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing stricter norms for artificial intelligence–generated content and faster compliance timelines for digital platforms.
Under the revised framework, any content created or altered using AI tools must carry a clear disclosure: users posting such material will be required to declare whether it has been generated or modified with artificial intelligence, signalling a push for transparency in a rapidly expanding digital ecosystem.
The amendments also impose stricter compliance obligations on intermediaries, directing them to remove specified categories of unlawful or harmful content within three hours of receiving official notice, a sharp tightening from the 36-hour window under the existing rules.
Further, social media platforms must explicitly outline in their terms of service and user agreements the consequences of sharing unlawful material, including post removal, account suspension, or termination.
The updated guidelines are slated to take effect on February 20.
However, the stricter timelines have sparked debate within the tech and legal community. Technology law specialist Akash Karmakar of Panag & Babu told Reuters that enforcing a three-hour takedown window may be impractical, arguing it leaves little room for due diligence or assessment before compliance.
The changes are likely to heighten friction between authorities and major social media companies, including Elon Musk-owned X, which have faced frequent takedown directives in recent years, a trend that has drawn criticism from digital rights advocates concerned about potential curbs on online expression.
