Deepfakes, Consent, and the Law: How to Catch Up with AI Reality?
In a world shaped by algorithms, we regularly see the results of deepfakes: hyper-realistic synthetic videos, audio, and images generated by sophisticated artificial intelligence. This technology has shattered the foundational belief that "seeing is believing," transforming into a powerful tool capable of producing anything from benign entertainment to targeted defamation and non-consensual exploitation. The central legal crisis here is not just about privacy, but about identity and consent itself.
The Crisis of Non-Consensual Likeness
The core problem lies in the complete removal of consent. Much of the deepfake content in circulation is non-consensual explicit material (often targeting women), but the problem extends far beyond what is commonly called "revenge porn"; the technology is now used for:
1. Political Disinformation - fabricating speeches by politicians to influence elections or spread propaganda has become common practice.
2. Financial Fraud - cloned voices and videos, an AI-powered extension of "vishing" (voice phishing), are now used to trick family members and employees into wiring money.
3. Reputational Harm - inserting an individual's likeness into compromising, embarrassing, or outright false situations that destroy their personal or professional credibility.

In all these instances, the victim's face, body, or voice, their very persona, is stolen and deployed against them. The harm is immediate, global, and often irreversible by the time the content can be taken down.
Legal Remedies:
Existing laws were never designed to handle this form of synthetic, AI-generated identity theft, forcing courts and lawmakers to rely on a patchwork of inadequate legal provisions:
1. Information Technology Act, 2000:
Section 66D: Cheating by posing as someone else using a computer or communication device can result in a fine of up to ₹1 lakh and three years in prison.
Section 66E: Capturing, publishing, or transmitting images of a person's private area without their consent is punishable by imprisonment of up to three years, a fine of up to ₹2 lakh, or both.
Section 67: Publishing or transmitting obscene material online carries, for a first offence, imprisonment of up to three years and a fine of up to ₹5 lakh; for subsequent offences, the penalty is up to five years and ₹10 lakh.
Section 67A: Posting or sharing sexually explicit content online carries a five-year jail sentence and a fine of ₹10 lakh for the first offence and a seven-year sentence and ₹10 lakh for subsequent offences.
Section 67B: Creating, sharing, or possessing sexual content featuring minors is punishable by five years in prison and a ₹10 lakh fine for the first offence and seven years in prison and a ₹10 lakh fine for subsequent offences. A child is any individual under the age of eighteen.
Section 69A: The government may direct websites or platforms to block online content on grounds of national security, public order, or sovereignty. An intermediary that fails to comply with such an order faces imprisonment of up to seven years and a fine.
2. Bharatiya Nyaya Sanhita, 2023:
Section 356: Defamation covers the use of words, writing, pictures, gestures, or digital media to publish imputations that damage another person's reputation. Producing or sharing deepfake images, videos, or audio that falsely depict someone in an offensive or degrading manner is therefore defamatory, since it harms that person's dignity and reputation.
Punishment: Imprisonment up to 2 years, or fine, or both.
However, Indian law does not yet specifically recognize or define “deepfakes” as a distinct offence. Existing provisions under the IT Act, 2000 and the Bharatiya Nyaya Sanhita, 2023 are being applied by analogy, often inadequately, to address harms caused by such synthetic AI-generated content.
Relevant Case Law:
Ankur Warikoo & Anr. v. John Doe & Ors. (Delhi High Court, 7 August 2025)
The Delhi High Court stepped in to protect well-known personal finance educator Ankur Warikoo from AI-driven identity theft. Fraudsters had used deepfake videos of Warikoo to promote questionable investment schemes and lure individuals into fake WhatsApp groups. The Court granted a John Doe injunction restraining any unauthorized use of Warikoo's name, image, or voice, particularly through deepfake or other AI technology. It also directed social media platforms such as Meta to take down the offending posts within 36 hours and to disclose the perpetrators' identities. The decision recognized that deepfakes can not only harm an individual's reputation but also deceive and defraud the public.
The Emerging Federal and Global Response
Recognizing the failure of the existing framework, lawmakers have begun to enact deepfake-specific legislation, focusing tightly on the non-consensual aspect:
1. Targeted U.S. Federal Legislation (the TAKE IT DOWN Act) - one of the most significant steps in the United States has been to criminalize the knowing publication of non-consensual intimate imagery, including AI-generated deepfakes. The Act directly addresses the most malicious misuse by establishing criminal penalties and, crucially, requiring online platforms to operate a notice-and-takedown process for victims.
2. Global Transparency Mandates (EU) - the European Union has taken a different approach: under its AI Act, deepfake content must be disclosed as AI-generated. This emphasizes transparency and accountability and helps users distinguish synthetic media from authentic media.
3. State-Level Action - states such as California and Texas have enacted laws, particularly targeting election-related deepfakes, that provide civil recourse for victims of non-consensual synthetic media.
Conclusion: The Future of Consent
The legislative landscape is now moving towards a new legal definition of consent, one tied to the individual rather than to the implied-consent model of traditional contract law, and progressing towards a concept of digital property rights: a federal framework that protects a person's image, voice, and likeness from unauthorized digital replication, giving individuals genuine ownership and control.
The crux of the problem lies in ensuring that consent is not merely a formality buried within terms of service but a legitimate, enforceable right that governs how our deepest and most personal data is used by machines. The law is trying to catch up, but until strong federal standards are adopted, the burden will continue to fall on the victims.