Technology

India Tightens Rules on AI-Generated Content: Offensive Material Must Be Removed Within 3 Hours

The Indian government has introduced major amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, targeting fake and misleading content generated using AI and deepfake technologies. The new rules take effect on February 20, 2026. Under the updated regulations, social media platforms and other digital intermediaries bear greater responsibility for handling AI-generated content. The government has explicitly defined “synthetically generated information” as AI-created audio, video, images, or other content that could mislead viewers into believing it is real. Routine edits, color correction, translations, and document preparation are exempt, provided they do not create misleading records.

A key change is the sharp reduction in the timeframe for removing flagged content. Platforms must now take down unlawful or misleading AI content within three hours, compared with the earlier 36-hour window. Additionally, requests for law-and-order-related information may only be issued by officers of Deputy Inspector General (DIG) rank or above. Platforms are also required to remind users every three months of the rules and the potential legal consequences under the IT Act, the Bharatiya Nyaya Sanhita, 2023 (which replaced the Indian Penal Code), the POCSO Act, the Representation of the People Act, and laws against obscene depictions of women.

AI-generated content must be clearly identified using technological tools, labeled appropriately, and embedded with permanent digital identifiers or metadata that cannot be stripped out. These amendments mark a significant step by India toward regulating AI-generated content and curbing the spread of misleading digital information.