India's New AI Content Rules: What Platforms and Users Must Know

India Takes the Lead on AI Content Regulation

India just became one of the first major economies to comprehensively regulate AI-generated content on social media platforms. On February 10, 2026, the central government notified sweeping amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, commonly known as the IT Intermediary Rules, marking a watershed moment in digital governance. These rules, effective February 20, 2026, establish clear definitions for synthetic media, mandate labeling requirements, and impose strict compliance timelines that will reshape how platforms like Instagram, YouTube, and Facebook operate in the world's largest democracy.

Signed by MeitY Joint Secretary Ajit Kumar under notification G.S.R. 120(E), these amendments represent the culmination of months of deliberation. The government initially released a draft in October 2025, collected public feedback until November 13, 2025, and notably softened some provisions after receiving industry input from the Internet and Mobile Association of India (IAMAI).

Understanding Synthetically Generated Information

The new rules introduce a precise legal definition for what the government calls Synthetically Generated Information (SGI). This encompasses audio, visual, or audio-visual content that has been artificially created, modified, or altered using computer resources to appear real or authentic while depicting people or events in a deceptive manner.

The scope is deliberately broad, covering deepfakes that swap faces onto different bodies, AI-generated voiceovers that mimic real people, and manipulated videos that show events that never occurred. However, the rules include important exemptions that address industry concerns raised during the consultation period. Routine technical edits like color correction, image compression, format conversion, and conceptual content within documents fall outside the SGI definition.

This nuanced approach reflects the government's recognition that not all digital manipulation constitutes harmful synthetic media. The focus remains squarely on content designed to deceive viewers about real people or events, rather than legitimate creative or technical processes.

Platform Obligations and Compliance Requirements

Social media intermediaries face three primary obligations under the new framework. First, they must implement clear labeling systems for all SGI before users can engage with the content through likes, shares, or comments. This proactive identification requirement places the burden on platforms to develop sophisticated detection mechanisms.

Second, platforms must deploy automated tools to identify and block illegal SGI, particularly content involving child sexual abuse material or deepfakes created with malicious intent. The rules recognize that manual moderation alone cannot handle the scale of content uploaded daily across major platforms.

Third, compliance timelines have been dramatically tightened across multiple categories. Platforms now have just 3 hours to respond to certain government takedown orders, compared to the previous 36-hour window. Standard removal requests must be processed within 7 days instead of 15, while urgent cases require action within 12 hours rather than 24.

Platforms must also send quarterly reminders to users about community guidelines and potential penalties, while updating their terms of service to reference the Bharatiya Nyaya Sanhita, 2023, India's new criminal code that replaced the Indian Penal Code.
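The mechanics of these obligations can be made concrete with a small sketch. The code below is purely illustrative: the function and field names (`Upload`, `engagement_allowed`, `removal_deadline`) are hypothetical, not drawn from the rules or from any real platform API, and it only models two of the requirements described above, gating engagement on an SGI label and computing takedown deadlines from the tightened windows.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Upload:
    """Hypothetical record for a piece of uploaded content."""
    content_id: str
    is_sgi: bool                   # flagged by detection tools or user declaration
    sgi_label_applied: bool = False

def engagement_allowed(upload: Upload) -> bool:
    """SGI must carry a visible label before likes, shares, or comments open."""
    if upload.is_sgi and not upload.sgi_label_applied:
        return False
    return True

# Illustrative mapping of the amended takedown windows described in the rules.
TAKEDOWN_WINDOWS = {
    "government_order": timedelta(hours=3),   # was 36 hours
    "urgent": timedelta(hours=12),            # was 24 hours
    "standard": timedelta(days=7),            # was 15 days
}

def removal_deadline(received_at: datetime, category: str) -> datetime:
    """Latest time by which the platform must act on a request."""
    return received_at + TAKEDOWN_WINDOWS[category]
```

For example, a government order received at midnight on the effective date would, under this sketch, require action by 3:00 a.m. the same day, while an unlabeled deepfake upload would remain closed to engagement until a label is applied.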

User Responsibilities and Legal Implications

Users uploading content will encounter new declaration requirements when posting material that involves AI generation or significant synthetic elements. The rules establish a framework where misrepresenting AI-generated content as authentic could trigger penalties under the Bharatiya Nyaya Sanhita or, in cases involving minors, the Protection of Children from Sexual Offences (POCSO) Act.

This creates a dual responsibility system where both platforms and users share accountability for synthetic content. While platforms must detect and label SGI, users bear responsibility for honest disclosure during the upload process. The government's approach acknowledges that effective regulation requires cooperation from all stakeholders in the digital ecosystem.

The penalties reflect the serious nature of potential harms from malicious deepfakes, particularly non-consensual intimate imagery or content designed to influence democratic processes. By referencing existing criminal statutes, the rules leverage established legal frameworks rather than creating entirely new penalty structures.

Industry Impact and Global Implications

The evolution from draft to final rules reveals significant industry influence on policy development. The government initially proposed rigid labeling thresholds, such as requiring labels when SGI comprised more than 10% of visual content area. After extensive feedback from IAMAI and individual platforms, these mechanical standards were replaced with more flexible, context-sensitive approaches.

This regulatory framework positions India as a potential model for other democracies grappling with AI-generated content. The European Union's Digital Services Act and AI Act share similar concerns but take different approaches to platform accountability and content moderation.

For global platforms, India's rules create new compliance costs but also provide regulatory clarity in a market of 1.4 billion people. Companies that successfully adapt their systems for India may find themselves better positioned for similar regulations elsewhere.

The February 20 implementation date gives platforms just ten days to deploy new systems, update user interfaces, and train content moderation teams. This tight timeline suggests the government views synthetic media regulation as an urgent priority, likely driven by concerns about AI-generated content's potential impact on India's democratic processes and social fabric.

As these rules take effect, they will test whether comprehensive AI content regulation can balance innovation with harm prevention, setting precedents that will influence digital governance globally for years to come.

Source

Times of India