MeitY Moves to Label AI-Generated Content: India’s Big Push Against Deepfakes

India’s IT ministry proposes mandatory labelling for all AI-generated content to combat deepfakes. Here’s what the new draft rules mean for users, creators, and platforms.

Team TICE

It started as a futuristic fantasy — the idea that artificial intelligence could create videos, voices, and faces indistinguishable from reality. But what once seemed like harmless tech magic has now spiraled into a growing nightmare: deepfakes. And India is finally taking a decisive step to rein it in.


In a landmark move, the Ministry of Electronics and Information Technology (MeitY) has proposed mandatory labelling for all AI-generated content, aiming to curb the rapid spread of deepfakes and restore some much-needed trust in the digital ecosystem.

On Wednesday, MeitY released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, inviting public feedback on the proposed changes by November 6.

The Rising Deepfake Dilemma

In recent years, India’s digital spaces have turned into a battleground for truth. Deepfake technology, which allows the creation of hyper-realistic but entirely fabricated videos, audio clips, and images, has evolved faster than the regulations meant to contain it.


What began as playful internet experiments has swiftly become a dangerous tool for misinformation, character assassination, and identity theft. In fact, over the past year, deepfake clips featuring some of India’s biggest stars — Amitabh Bachchan, Akshay Kumar, Hrithik Roshan, Aishwarya Rai Bachchan, Anil Kapoor, and Abhishek Bachchan, among others — have been circulated online, forcing these celebrities to seek legal protection against the misuse of their likeness.

The issue reached a boiling point in 2023, when a deepfake of actress Rashmika Mandanna went viral: a clip of another woman entering a lift, with Mandanna’s face convincingly superimposed onto it. The video sparked outrage, conversations, and ultimately calls for government intervention.

Even Prime Minister Narendra Modi had sounded the alarm earlier, cautioning that the rise of deepfakes poses a “new crisis” for society.


“A very big section of society does not have a parallel verification system,” he had warned, highlighting how easily misinformation can now masquerade as truth.

What the Draft Rules Say

MeitY’s proposed amendments mark one of India’s strongest regulatory steps yet to counter the deepfake menace.

Under the draft, the ministry defines “synthetically generated information” as any content that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that appears reasonably authentic or true.

To put it simply — if it looks real but was made by AI, it must be clearly labelled.

Here’s how the new labelling mandate is designed to work:

  • Visual content (like images or videos) must carry a visible label that covers at least 10% of the display area.

  • Audio content must include a similar disclosure during at least the first 10% of its duration.

This means that whether you’re scrolling through Instagram Reels, watching a YouTube clip, or listening to a podcast — if what you’re hearing or seeing was generated by AI, you should know it upfront.
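To make those thresholds concrete, here is a minimal sketch in Python of how such a compliance check could be expressed. The function names and example figures are illustrative assumptions only; the draft rules prescribe the 10% thresholds, not any code or API.

    # Minimal sketch of the two 10% thresholds described above.
    # Function names and parameters are illustrative, not taken from the draft rules.

    def visual_label_compliant(label_w: int, label_h: int,
                               frame_w: int, frame_h: int) -> bool:
        """True if a visible label covers at least 10% of the display area."""
        return label_w * label_h >= 0.10 * (frame_w * frame_h)

    def audio_disclosure_compliant(disclosure_end_sec: float,
                                   total_duration_sec: float) -> bool:
        """True if the disclosure runs through at least the first 10% of the clip."""
        return disclosure_end_sec >= 0.10 * total_duration_sec

    # A 640x360 label on a 1920x1080 frame covers about 11% of the area, so it passes;
    # a 30-second clip needs its disclosure to last at least the first 3 seconds.
    print(visual_label_compliant(640, 360, 1920, 1080))   # True
    print(audio_disclosure_compliant(3.0, 30.0))          # True

In other words, on a full-HD frame a 640x360 label just clears the bar, and a 30-second audio clip would need its disclosure to run through at least the first three seconds.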

Accountability for Platforms

The onus won’t fall solely on content creators. Major social media companies — classified as “significant social media intermediaries” under Indian law — will also have new responsibilities.

They will be required to:

  • Ask users to declare whether the content they upload is AI-generated.

  • Use technical tools to detect and verify such declarations.

  • Clearly mark or label verified AI-generated content with a visible notice.

In essence, the government wants to make synthetic content traceable and transparent — ensuring that platforms, not just individuals, are accountable for what circulates online.
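As a rough illustration of that workflow, the sketch below assumes a hypothetical upload record carrying the user’s declaration and a score from some AI-content detector. Every name, field, and threshold in it is invented for the example; the draft does not mandate any particular tool, scoring system, or data format.

    # Illustrative sketch only: the draft asks platforms to collect a user
    # declaration, use technical tools to verify it, and visibly label the result.
    # All names and thresholds below are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        content_id: str
        declared_synthetic: bool   # the uploader's own declaration
        detector_score: float      # score from an assumed AI-content detector, 0.0 to 1.0

    def needs_label(upload: Upload, threshold: float = 0.8) -> bool:
        """Flag content the user declares as synthetic, or that the detector flags."""
        return upload.declared_synthetic or upload.detector_score >= threshold

    def label_metadata(upload: Upload) -> dict:
        """Metadata a platform might attach to flagged content."""
        return {
            "content_id": upload.content_id,
            "label": "Synthetically generated information",
            "reason": "user_declaration" if upload.declared_synthetic else "automated_detection",
        }

    clip = Upload(content_id="reel-001", declared_synthetic=False, detector_score=0.93)
    if needs_label(clip):
        print(label_metadata(clip))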

Why It Matters

With 954.4 million Internet users and over 95% of India’s villages connected to 3G/4G networks as of March 2024, India is one of the largest and fastest-growing digital populations on the planet.

That also makes it a prime target for misinformation. A single deepfake video, viewed by millions in minutes, can distort reality, influence public opinion, or destroy reputations.

MeitY’s move, therefore, is not just about content moderation — it’s about restoring digital trust in an era where truth itself is under siege.

India’s initiative echoes a growing international consensus on the need for deepfake regulation.

  • In the United Kingdom, the Online Safety Act (2023) has already criminalised the sharing of AI-generated intimate images.

  • In the United States, the Take It Down Act mandates that platforms must remove non-consensual explicit deepfake content within 48 hours of notification.

By proposing these labelling norms, India is aligning itself with global best practices while tailoring the approach to its massive and diverse online ecosystem.

Towards a Responsible AI Future

The draft amendments are a clear signal that India is moving towards a framework of responsible AI governance — one that balances innovation with accountability.

The government isn’t outlawing AI creativity; it’s simply asking for transparency. In a world where even the most discerning viewer can be fooled by digital forgeries, a label could make all the difference between truth and manipulation.

The proposal now awaits public feedback until November 6, after which MeitY will finalise the amendments. If implemented, these rules could mark a turning point in India’s digital policy — and perhaps, the beginning of a more transparent internet age.
