Let’s understand the rules.
What is proposed?
Under the new rules, all synthetic media, including AI-made videos, images, text, and audio, must carry a clear, permanent label identifying it as artificial. The aim is to make it easy for users to spot and report fake content circulating online.
Additionally, against the backdrop of a fight with Elon Musk’s X, MeitY has amended the rules so that directions to digital intermediaries, including social media platforms, internet service providers, and search engines, to remove unlawful content are issued only by certain senior officials.
What is synthetic media?
Per the rules, “synthetically generated information” is anything created, modified, or altered by algorithms or computer tools to look or sound real. AI-made videos, manipulated audio clips, and digitally altered photos all fall under this definition, so long as they appear real to most people.
Who has to follow the rules?
The biggest burden falls on major social media platforms, i.e., companies with over 5 million users, and on any service that enables the creation or sharing of deepfakes and synthetic content.
Google-owned YouTube; Meta-owned Facebook, Instagram, Threads, and WhatsApp; and X (formerly Twitter), Snap, LinkedIn, and ShareChat will now have to obtain user declarations on whether uploaded content is synthetic, deploy automated tools to verify those declarations, and ensure synthetic content is clearly marked with appropriate labels, failing which they will be deemed non-compliant.
Platforms must:
- Label synthetic content with visible or audible marks covering at least 10% of its duration or screen area.
- Embed permanent metadata identifying artificial content (a rough sketch of these two labelling duties follows this list).
- Ask users uploading material if it is synthetic, and let users report suspected deepfakes.
- Not remove or alter these labels from digital content.
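To make the first two duties concrete, here is a minimal sketch, assuming the Pillow imaging library, of how a platform might stamp a visible mark covering at least 10% of a still image’s area and embed a machine-readable “synthetic” flag. The label text, banner placement, and metadata key are illustrative assumptions, not anything prescribed by the rules.

```python
# A minimal sketch of the two labelling duties for a still image:
# (1) a visible mark covering at least 10% of the screen area, and
# (2) embedded metadata identifying the content as artificial.
# The label text and the "synthetic-media" key are hypothetical.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

LABEL = "AI-GENERATED"
MIN_COVERAGE = 0.10  # the rules require at least 10% of the screen area

def label_synthetic_image(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGBA")
    w, h = img.size

    # A full-width banner of height 0.10 * h covers exactly 10% of the area.
    banner_h = max(1, int(h * MIN_COVERAGE))
    banner = Image.new("RGBA", (w, banner_h), (0, 0, 0, 200))
    draw = ImageDraw.Draw(banner)
    draw.text((10, banner_h // 3), LABEL, fill=(255, 255, 255, 255))

    # Composite the visible mark onto the bottom of the image.
    img.alpha_composite(banner, (0, h - banner_h))

    # Embed metadata flagging the content as synthetic (a PNG text chunk here).
    meta = PngInfo()
    meta.add_text("synthetic-media", "true")  # hypothetical key name
    img.convert("RGB").save(dst_path, "PNG", pnginfo=meta)

label_synthetic_image("upload.png", "upload_labelled.png")
```

For video and audio, the same idea would apply over duration rather than area; real deployments would more likely rely on a provenance standard such as C2PA than on an ad hoc PNG text chunk.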
Why now?
With the advent of generative AI tools, creating deepfakes and synthetic content has become cheaper and easier. There have been multiple cases of manipulated videos and audio causing reputational harm, political misinformation, and fraud, forcing the government and regulatory bodies to take notice.
Any penalties?
If platforms violate or ignore the rules, they risk losing their “safe harbour” legal immunity in India, which means they can be held liable for user-generated content that breaks the law.