India's new AI content rules, explained: What's AI-generated content, what social media platforms must do now, and what users need to know

India has officially regulated AI-generated content, mandating clear labels on platforms like Instagram and YouTube. Users must now declare AI use, with significant penalties for misrepresentation. Platforms face stricter takedown timelines and must actively block illegal synthetic media, ensuring transparency for the public.

The central government on February 10 notified amendments to the IT intermediary rules that bring AI-generated content under formal regulation for the first time. Filed as G.S.R. 120(E) and signed by MeitY Joint Secretary Ajit Kumar, the changes amend the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, and take effect from February 20.

Here's what the new rules actually say, and what they mean for social media platforms and users.

So what counts as 'synthetically generated' content?

The gazette notification defines synthetically generated information (SGI) as any audio, visual or audio-visual content that is artificially or algorithmically created, modified or altered using a computer resource.

The key qualifier: it must appear real or authentic, and depict people or events in a way that could be mistaken for genuine.

That covers deepfake videos, AI-generated voiceovers, face-swapped images—basically anything where a machine has done the work to make something look or sound real. The language also catches content that "portrays" individuals or events, which means even AI-generated images of fictional scenarios involving real people could fall under this definition.

But the government has carved out exemptions too. Routine editing—colour correction, noise reduction, compression, transcription, translation, accessibility tweaks—doesn't count, as long as it doesn't distort the original meaning. Same for illustrative or conceptual content in documents, research papers, PDFs, presentations or training materials. The notification also specifically excludes content created for "hypothetical, draft, template-based or conceptual" purposes.

Your office PowerPoint with a stock AI illustration? Not SGI. A deepfake of a politician giving a speech they never gave? Squarely within scope.

Every platform that touches AI content now has obligations

Any intermediary that enables or facilitates the creation or spread of SGI must label it. Not in fine print. Not buried in metadata no one checks. The label must be clear, prominent and unambiguous—visible on the content itself.

Platforms must also embed persistent metadata and unique identifiers into SGI, to the extent technically feasible, so it can be traced back to the intermediary's system.

Once those markers are applied, the rules bar platforms from enabling their removal or tampering. That closes a real gap—previously, a label could exist in theory but vanish the moment someone downloaded and re-uploaded the file.
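The notification doesn't prescribe a particular technical format for these markers. As a minimal sketch only, the hypothetical snippet below shows one way a platform could attach a disclosure label and a platform-assigned identifier to a generated image, using Python's Pillow library to write PNG text chunks. The key names, function and domain are illustrative assumptions, not anything mandated by the rules; real deployments would more likely use provenance standards such as C2PA content credentials and watermarking that survives re-encoding.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical example: attach a provenance label and identifier
# to an AI-generated image before it is served to users.
def tag_synthetic_image(src_path: str, dst_path: str, content_id: str) -> None:
    image = Image.open(src_path)

    meta = PngInfo()
    # Human-readable disclosure label (key names are illustrative, not mandated).
    meta.add_text("sgi_label", "Synthetically generated content")
    # Unique identifier tracing the file back to the intermediary's system.
    meta.add_text("sgi_content_id", content_id)
    meta.add_text("sgi_generator", "example-platform.invalid")

    image.save(dst_path, pnginfo=meta)

if __name__ == "__main__":
    tag_synthetic_image("generated.png", "generated_labelled.png", "sgi-2026-0001")
```

A marker like this can be stripped by a simple re-save, which is exactly the gap the tamper-resistance language above is aimed at.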

Big platforms face the toughest ask

Significant social media intermediaries—Instagram, YouTube, Facebook and the like—get additional obligations under the new Rule 4(1A). Before any upload goes live, they must require users to declare whether the content is synthetically generated.

They must then deploy automated tools to verify those declarations. If a declaration or technical check confirms the content is AI-made, the platform must display it with a visible label or notice.

Miss this, and there's a liability hook built in: if a platform knowingly permits or promotes unlabelled synthetic content, it's deemed to have failed due diligence. That's the kind of language that can cost a platform its safe harbour protection.

One notable rollback: the October 2025 draft had proposed that visual labels cover at least 10% of the display area, and that audio markers play during the first 10% of a clip. Industry bodies like IAMAI called that rigid and unworkable. The final version drops the threshold. Labels are still mandatory—just not a giant watermark plastered across a tenth of your screen.

Three-hour takedowns and quarterly user warnings

The amendments compress existing timelines sharply. For lawful government orders in certain cases, platforms now get three hours to act, down from 36. Other windows have been cut from 15 days to seven, and from 24 hours to 12.

Platforms must also use automated tools to actively block SGI that violates the law. The rules name specific categories: child sexual abuse material, obscene or pornographic content, false electronic records, content related to explosives or weapons, and deepfakes that misrepresent real people or events with intent to deceive.

There's a user-facing obligation too. Intermediaries must now warn users at least once every three months—through their terms, privacy policy or other means, in English or any Eighth Schedule language—about penalties for misusing AI content. Consequences range from account termination to mandatory reporting to law enforcement under the BNS or POCSO Act.

The gazette also updates the legal plumbing. References to the Indian Penal Code have been swapped out for the Bharatiya Nyaya Sanhita, 2023, reflecting the new criminal law framework.

What changes for users?

Here's what it means for most of you reading this—the social media users. For you, the most visible shift will be labels. AI-generated posts, reels, videos and audio clips on major platforms will now carry a disclosure tag—something you'll see before you engage with the content. The idea is simple: you should know what's real and what's machine-made before you like, share or forward it.

There's also a declaration step coming. If you upload content on a platform like Instagram or YouTube, you may be asked to confirm whether it was created or altered using AI tools.

Misrepresenting that declaration isn't just a terms-of-service issue anymore—it could attract penalties under the BNS or POCSO Act, depending on the nature of the content.

Platforms are also required to send periodic reminders—at least once every three months—about the rules around AI content and the consequences of violating them. Expect to see these show up in updated terms of service, privacy policies or in-app notifications.

The draft rules were first published in October 2025, with public feedback invited until November 13 after an extension. The final notification is now live, and platforms have until February 20 to comply.
