Deepfake Sexual Exploitation in India: A Policy Framework for Prevention, Takedown, and Platform Accountability
Author: Anahat Kaur
Abstract

Generative artificial intelligence has rapidly transformed deepfake technology from a niche academic and entertainment tool into a powerful, accessible instrument of sexual exploitation. Today, widely accessible AI systems can create and circulate sexually explicit content featuring real individuals without their consent, often from only a handful of publicly available images.
In India, this problem is amplified by high internet penetration, widespread social media use, and the absence of laws that recognise synthetic sexual abuse as a distinct and serious crime. Existing cybercrime provisions, such as Sections 66E and 67A of the Information Technology (IT) Act and related Indian Penal Code (IPC) provisions, cover some of the relevant conduct but do not address the unique harms, scale, and methods of deepfake sexual exploitation. Specific legislative gaps remain around the intent to harm, the distribution of non-consensual synthetic media, and the very creation of such media without explicit consent. Consider a victim who attempts to prosecute the creator of a malicious deepfake under existing privacy provisions: the case may fail to proceed because the law contains no provisions tailored to synthetic media. This gap underscores the urgency of new legislation designed specifically to counter deepfake sexual exploitation effectively and comprehensively.
This white paper provides a focused analysis of the technological enablers, patterns of abuse, legal and institutional gaps, and global regulatory advancements that must inform a tailored Indian policy response. Following this detailed analysis, the paper outlines concrete and actionable measures in four key areas:
Criminalisation
Digital platform supervision
Rapid removal protocols
Enforcement mechanisms specifically designed for the Indian legal and socio-cultural context
In doing so, this white paper offers a clear, strategic roadmap for addressing deepfake sexual exploitation through both comprehensive legislative and institutional reform.
1. Introduction
"When falsity acquires realism, harm acquires permanence."
India is a burgeoning digital society comprising nearly a billion internet users, with social platforms deeply entrenched in everyday communication, employment, education, and civic engagement. At the same time, the universality and public availability of personal images, videos, and biometric data online create a highly fertile foundation for AI-powered misuse. Generative AI can now produce hyper-realistic synthetic media that distort, impersonate, or completely fabricate sexual content involving identifiable, non-consenting persons.
Deepfake sexual exploitation differs fundamentally from traditional cyber harassment or "revenge porn" in several key respects: it is fully automated, immensely scalable, highly persistent, and often indistinguishable from genuine material to the naked eye. Once created and circulated, synthetic sexual media can be replicated, re-uploaded, and shared across private encrypted and public channels far faster than existing, reactive legal processes can respond. By some estimates, thousands of deepfake files may be generated every minute, well beyond the takedown capacity of standard law enforcement mechanisms. This mismatch between the rate of upload and the pace of enforcement explains why traditional remedies struggle to keep up, and it places significant pressure on regulatory frameworks and legal systems to adapt far more swiftly.
In India, deepfakes initially drew public attention primarily in political and celebrity contexts, but substantial evidence suggests the primary victims are everyday private individuals—most notably women, students, and minors—who lack visibility, financial resources, or legal recourse. According to the National e-Governance Division, over 90% of deepfakes globally are pornographic, a trend equally present in India, disproportionately affecting women and contributing heavily to gendered online violence (National e-Governance Division, 2025).
Figure 1 aggregates key empirical indicators demonstrating the prevalence, sexualised nature, and disproportionate impact of deepfake technologies in India. Data clearly shows widespread exposure among Indian internet users, a dominant concentration of pornographic and gender-targeted deepfake content, and India's unfortunate position as one of the largest sources of traffic to sexually explicit deepfake platforms globally.
Sources: McAfee Survey (2024); Georgetown Institute for Women, Peace & Security; The Yomiuri Shimbun (2024)

Yet, Indian law currently does not specifically define or categorize deepfake sexual exploitation, leaving a critical and dangerous gap in legislative protections and implementation mechanisms.
This paper explicitly treats deepfake sexual exploitation as a distinct form of digital identity harm that violently implicates fundamental constitutional guarantees of dignity, equality, and privacy under Article 21 and Article 14 of the Indian Constitution.
2. Nature and Technological Underpinnings of Deepfake Abuse
Creating deepfakes involves sophisticated AI and deep learning methods, chiefly neural networks such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), combined with facial recognition algorithms. VAEs enable realistic face swaps by compressing faces into a compact latent representation and reconstructing them, so that the target face seamlessly mimics the source's expressions and subtle movements.
GANs operate as a dual-network system: a "generator" that creates synthetic data and a "discriminator" that attempts to detect fakes. This adversarial training continues iteratively until the discriminator can no longer distinguish the generator's output from genuine data. These highly accessible, open-source technologies allow rapid, low-cost production of manipulated content, often targeting women by generating entirely unauthorised intimate imagery.
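The adversarial dynamic can be made concrete with a toy sketch. The Python example below is illustrative only, not a production GAN: real systems train deep neural networks on image data, whereas here a one-parameter "generator" plays against a simple logistic "discriminator" over a one-dimensional distribution. The generator's output drifts toward the real data until the discriminator can no longer reliably tell the two apart.

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0        # "real" data: Gaussian samples centred on 5
LEARNING_RATE = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

# Discriminator: logistic score sigmoid(w*x + b); 1 means "looks real"
w, b = 0.1, 0.0
# Generator: a single learnable value emitted as the "fake sample"
g = 0.0

for _ in range(2000):
    # Discriminator step: push score(real) toward 1 and score(fake) toward 0
    xr, xf = real_sample(), g
    pr, pf = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += LEARNING_RATE * ((1 - pr) * xr - pf * xf)
    b += LEARNING_RATE * ((1 - pr) - pf)
    # Generator step: push score(fake) toward 1, i.e. fool the discriminator
    pf = sigmoid(w * g + b)
    g += LEARNING_RATE * (1 - pf) * w

# The generator's output migrates toward the real distribution
print("generator output %.2f vs real mean %.1f" % (g, REAL_MEAN))
```

The equilibrium of this game is exactly the point where fake and real samples are statistically indistinguishable, which is why mature GAN output defeats both human viewers and naive detectors.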
Figure 2 shows how a sexual deepfake is created and spread: collecting a target's images, training an AI model, generating fake media, and finally circulating it online to cause reputational harm.

Deep learning models have become so advanced that they can synthesize realistic, fluid facial movements from merely a single static image, highlighting the terrifying ability of deepfake technology to convincingly alter original visuals.

Source: Live Science, reporting on Samsung AI Center (Moscow) research on animating still portraits using deep learning (2019).
The core risks are firmly rooted in extreme accessibility and automation. Whereas earlier forms of rudimentary image manipulation required significant technical skill, modern deepfake generators enable lay users to upload just a handful of photos and produce hyper-realistic explicit content within minutes. This rapid democratisation of harmful capability has catastrophically exacerbated non-consensual sexual exploitation across the internet.
3. Patterns and Impact of Deepfake Sexual Exploitation in India
3.1 Victimisation Profiles
Deepfake sexual exploitation in India manifests diversely across several insidious contexts:
Non-consensual sexual videos and images of private individuals circulated on social media and messaging platforms.
Extortion and threatened distribution of deepfake content to coerce payment or compliance.
Blackmail and sextortion, particularly targeting women who fear social stigma and honour-based repercussions if any explicit content, even fabricated, becomes public.
A comprehensive 2023 open-source investigation found that deepfake pornography views numbered in the tens of crores across dedicated websites, underscoring the massive scale of circulation and unchecked consumption (Security.org, 2025).
3.2 Psychological, Social, and Economic Consequences
Victims consistently report profound, long-lasting psychological trauma, including severe anxiety, depression, social withdrawal, and reputational damage. Deepfake content leads directly to diminished self-esteem, tangible loss of employment opportunities, educational disruption, and severe social stigmatisation.
"On the evening of October 6, 2025, a second-year student at IIIT-Naya Raipur opened her phone to a message from an unknown number: a photograph of her own smiling Instagram selfie, grafted onto a naked body in a degrading pose. The caption read: 'Delete your account by tomorrow or the entire campus gets this.'" (The Hindu, 2025)
Figure 4 illustrates the psychological, social, and economic consequences of technology-facilitated sexual abuse in India. It depicts a rise of 118% in cybercrime complaints where women are the victims, 10% of helpline cases involving AI-generated sexual abuse, 70% of victims reporting social withdrawal and isolation, and 32% facing disruption to education or employment.

Information Source: NCRB; National Cyber Crime Reporting Portal; Meri Trustline Helpline (RATI Foundation); The Hindu; The Guardian (India reports).
The reported 118% rise in cybercrime complaints against women reflects a sharp comparative increase observed between the baseline year 2020 and the reporting period ending in 2024, based on data compiled by the National Crime Records Bureau and the National Cyber Crime Reporting Portal. This period corresponds directly with accelerated digital adoption and the growing accessibility of generative AI tools, contextualising the sharp escalation depicted in Figure 4. Qualitative and institutional reports further document severe psychological distress, social withdrawal, and tangible economic consequences including educational disruption and loss of employment.
In the Indian cultural context, where images and personal reputation are deeply linked to notions of familial honour, social standing, and community respect, the repercussions of synthetic sexual exploitation extend beyond individual victims. Allegations or the mere existence of explicit synthetic content, even when fabricated, can result in collective shame, ostracism, and lasting damage to family relationships. This dynamic is intensified by societal norms that place a premium on women's chastity and honour, which can lead to victim-blaming, forced isolation, or withdrawal from educational and employment opportunities. As a result, victims may face not only psychological trauma but also enduring consequences for their social integration and future prospects within their families and communities.
3.3 Lived Realities of Technology-Facilitated Sexual Abuse in India
A Delhi-based journalist who has faced years of online rape threats noted that the emergence of deepfake tools has fundamentally altered the nature of risk. Earlier forms of harassment depended on the existence of real images; today, perpetrators require only publicly available facial data from press conferences or social media to fabricate sexually explicit content.
This pattern mirrors earlier forms of digitally mediated targeting in India. In 2021, the "Sulli Deals" application and its 2022 successor "Bulli Bai" used synthetic representations to publicly harass Muslim women. More recently, activists from marginalised communities in Uttar Pradesh have reported the use of morphed or AI-generated imagery to suppress political participation and protest (The Hindu).
Globally, 96 per cent of deepfake videos are non-consensual sexual content aimed at women, according to the Georgetown Institute for Women, Peace & Security. In the first 10 months of 2025 alone, one in 10 calls to India's Meri Trustline helpline were about AI-generated abuse, the RATI Foundation reported (The Hindu).
These lived realities underscore why deepfake sexual exploitation cannot be addressed solely through existing obscenity or privacy provisions, but requires recognition as a distinct, identity-based digital harm within Indian law.
4. Legal and Constitutional Landscape in India
India's cybercrime regime offers several proximate legal tools:
Section 66E (Violation of Privacy): Criminalises publishing intimate images without consent.
Sections 67 and 67A (IT Act): Penalise transmission of obscene and sexually explicit material, including deepfakes involving explicit sexual content.
IPC provisions: Sections such as 354C (voyeurism) and defamation laws may be invoked in related contexts (ANM Global, 2024; Legal Service India, 2025).
However, these statutes were not drafted with deepfake technology in mind and lack explicit reference to non-consensual synthetic imagery. Courts and enforcement agencies have struggled to apply them coherently to AI-generated content, resulting in inconsistent outcomes (Chambers.com, 2025).
According to the Times of India, the National Cybercrime Reporting Portal saw an increase in complaints, recording 61,525 cases between January and June 2025, compared with 55,267 during the same period in 2023 (Times of India, 2025). However, without a specific legal category for deepfake sexual exploitation, it remains difficult to determine the true scale of such incidents or address them effectively. The report also notes that India's Supreme Court and High Courts are starting to address cases involving synthetic media. For example, the Bombay High Court ordered the removal of AI-generated deepfake content that violated personality rights, underscoring judicial recognition of harm even when not sexual in nature (Times of India, 2025).
4.1 Evidentiary Barriers in Prosecuting Deepfake Sexual Exploitation
A further enforcement challenge arises from evidentiary barriers in prosecuting deepfake sexual exploitation. Indian courts require digital evidence to comply with certification standards under Section 65B of the Indian Evidence Act, which presuppose identifiable authorship, control over the originating device, and verifiable data integrity.
In deepfake cases, content is often generated anonymously, distributed across platforms, and detected through probabilistic AI forensic tools rather than direct source attribution. This creates practical difficulties in admitting AI-generated or AI-detected material as evidence, even where harm is demonstrable, underscoring the need for clearer statutory recognition of synthetic media forensics.
5. Platform Behaviour and Regulatory Risks
Current intermediary liability rules under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, require due diligence and takedown of harmful content. However, enforcement remains reactive.
Platforms often rely on user complaints rather than proactive detection, and deepfake sexual content can spread across private and encrypted channels before removal is initiated. Recent Indian draft regulations propose requiring AI-generated content to be clearly labelled, compelling platforms to develop technical detection and verification mechanisms. Nevertheless, platforms' content moderation systems have struggled with non-consensual deepfake removal, often classifying such material under general harassment or abuse categories rather than explicit sexual exploitation, reducing visibility and response priority.
5.1 Intermediary Liability and Safe Harbour Limitations
The need for rapid removal of non-consensual synthetic sexual content highlights important gaps in how intermediary liability is currently regulated under Section 79 of the Information Technology Act. At present, online platforms are protected by safe harbour provisions that generally require them to act only after receiving formal notice or gaining actual knowledge of unlawful content.
In cases of deepfake sexual exploitation, this approach can be inadequate. Such content often spreads quickly across multiple platforms, causing serious harm to victims within a matter of hours. Addressing this challenge may therefore require a limited and carefully designed reassessment of safe harbour protections. Introducing narrowly defined obligations for faster platform responses, triggered when there are credible indications of non-consensual synthetic sexual content, could enable earlier intervention while still respecting due process, proportionality, and protections against over-removal.
5.2 Right to Be Forgotten and Continuing Digital Harm
Non-consensual synthetic sexual content also implicates the constitutional "Right to be Forgotten" as an extension of the right to life, dignity, and privacy under Article 21. Unlike conventional offences, deepfake sexual exploitation constitutes a continuing harm, as repeated circulation and re-uploading perpetuate reputational and psychological injury. Effective takedown and de-indexing mechanisms therefore function not merely as regulatory tools, but as constitutional safeguards designed to prevent ongoing violations of personal dignity.
6. Comparative and International Perspectives
International institutions acknowledge the specific harms of deepfake sexual exploitation. For instance, the Meta Oversight Board has recommended clearer rules on non-consensual deepfake intimate content, noting disparities in removal and enforcement across regions.
Similarly, global surveys show that significant portions of the public underestimate the harm of sexual deepfakes, and that victimisation is widely underreported: in one study, 7% of respondents had been victims, but only half reported the incidents, citing embarrassment or distrust in reporting systems (Security.org, 2025).

In India, although comprehensive official statistics on non-consensual deepfake sexual exploitation are not yet available, multiple indicators point to a substantial domestic reporting gap that likely exceeds global averages. A 2024 McAfee survey found that three out of four Indians have encountered deepfake content, yet no systematic reporting mechanism exists that distinguishes deepfake sexual exploitation from broader cybercrime statistics, suggesting that many incidents remain uncaptured in official data (Tech Law Forum NALSAR).
Furthermore, helpline analyses show that roughly 10% of AI-related abuse calls involve deepfake-style threats and explicit content, indicating significant unreported victimisation that does not translate into formal legal action. High-visibility cases involving celebrities occasionally result in FIRs, but ordinary victims rarely achieve similar legal recognition, underscoring how cultural stigma, technical opacity, and a lack of dedicated reporting categories contribute to a profound reporting gap in India's context (The Guardian).
These patterns demonstrate the limitations of existing content policies and underscore the need for legally enforceable, rights-based approaches rather than platform discretionary moderation alone.
7. Policy Directions for India
To address deepfake sexual exploitation as a distinct digital harm, India's policy response should be guided by the following four recommendations:
First, clarify and define legal offences relating to synthetic sexual content.
Second, establish clear platform obligations and strengthen enforcement standards.
Third, expand dedicated victim support and reporting mechanisms.
Fourth, enhance institutional capacity and forensic expertise.
7.1 Penal Amendments
Amend the Bharatiya Nyaya Sanhita (BNS) and IT Act to include explicit provisions criminalising the creation, dissemination, and possession of non-consensual synthetic sexual content.
Define deepfake sexual exploitation as an independent offence recognising intent, non-consent, and distribution as elements.
7.2 Platform Obligations and Enforcement
Mandate clear and enforceable takedown timelines for synthetic sexual content, with penalties for failure to comply.
Require platforms to deploy advanced detection systems and report metrics on detection and removal actions to MeitY.
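Reporting such metrics to MeitY presupposes a shared way of classifying complaint outcomes against a deadline. The sketch below is purely illustrative: the 24-hour SLA, field names, and the fixed "current time" are assumptions for the example, not figures drawn from any existing Indian rule. It classifies each complaint as met, breached, or pending and derives a compliance rate of the kind a platform might be required to disclose.

```python
from datetime import datetime, timedelta

# Hypothetical SLA for the sketch; actual timelines would be set by regulation.
TAKEDOWN_SLA = timedelta(hours=24)

def takedown_metrics(complaints, now):
    """complaints: dicts with 'reported_at' and 'removed_at' (None if still up)."""
    met = breached = pending = 0
    for c in complaints:
        if c["removed_at"] is not None:
            if c["removed_at"] - c["reported_at"] <= TAKEDOWN_SLA:
                met += 1               # removed within the deadline
            else:
                breached += 1          # removed, but too late
        elif now - c["reported_at"] > TAKEDOWN_SLA:
            breached += 1              # still online past the deadline
        else:
            pending += 1               # inside the window, awaiting action
    total = len(complaints)
    return {"met": met, "breached": breached, "pending": pending,
            "compliance_rate": met / total if total else None}

sample = [
    {"reported_at": datetime(2025, 6, 29, 9), "removed_at": datetime(2025, 6, 29, 15)},
    {"reported_at": datetime(2025, 6, 27, 9), "removed_at": datetime(2025, 6, 29, 9)},
    {"reported_at": datetime(2025, 6, 30, 10), "removed_at": None},
]
print(takedown_metrics(sample, now=datetime(2025, 6, 30, 12)))
```

Codifying the metric itself, and not only the deadline, is what makes "report metrics to MeitY" auditable rather than discretionary.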
7.3 Victim Support and Reporting Mechanisms
Create a dedicated reporting and tracking mechanism on the National Cyber Crime Reporting Portal for deepfake sexual exploitation complaints.
Provide victims with accelerated legal and psychological support services.
Victim Anonymity and Shielded Identity Reporting: To address the high rate of social withdrawal among victims, reporting systems should incorporate shielded identity mechanisms that allow victims to initiate complaints and takedown requests without immediate public disclosure of personal details. Anonymised or pseudonymised reporting can reduce fear of stigma, retaliation, and secondary victimisation, particularly in culturally sensitive contexts. Ensuring confidentiality at the initial stages of reporting is essential for increasing reporting rates and enabling timely intervention.
In a digital universe where reality is malleable, courts must be able to punish offenders and award damages to victims. Criminalising the creation and spread of deepfakes is pivotal to deterring potential offenders, and law enforcement authorities must have up-to-date tools and expertise to deal with technologically advanced crimes.
7.4 Capacity Building and Forensics
Establish national standards for synthetic media forensics and equip state cybercrime units with tools and training.
Effective enforcement requires clear responsibility among institutions involved in responding to deepfake sexual exploitation. The National Cyber Crime Reporting Portal should function as the primary point of contact for victims, allowing complaints to be filed, categorised, and tracked, while providing victims with visibility on the status of their cases.
CERT-In (Indian Computer Emergency Response Team) should support this process by handling the technical aspects of enforcement, including verifying whether reported content is synthetically generated, conducting digital forensic analysis, and coordinating with online platforms to enable faster takedown and containment.
Beyond basic watermarking, India should adopt internationally recognised content provenance frameworks such as the Coalition for Content Provenance and Authenticity (C2PA) standards. C2PA enables verifiable metadata trails that indicate whether content has been generated or altered using AI, facilitating early detection and forensic verification.
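The provenance idea behind C2PA can be sketched with standard-library primitives. The example below is not the C2PA specification, which uses certificate-based signatures over structured manifests embedded in the media file; it merely shows the core mechanism the standard relies on: a signed manifest binds provenance claims to the exact bytes of a file, so any alteration of either the media or the claims becomes detectable. The key and claim names are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical shared secret; C2PA uses X.509 certificates

def make_manifest(media_bytes, claims):
    """Bind provenance claims to the media's exact bytes and sign the record."""
    record = {"sha256": hashlib.sha256(media_bytes).hexdigest(), "claims": claims}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(media_bytes, record):
    """Check both the signature and that the media bytes are unaltered."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    sig_ok = hmac.compare_digest(record["signature"], expected)
    hash_ok = record["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    return sig_ok and hash_ok

image = b"\x89PNG...original pixels"
manifest = make_manifest(image, {"generator": "ai-model-x", "ai_generated": True})
print(verify_manifest(image, manifest))              # unaltered media verifies
print(verify_manifest(image + b"tamper", manifest))  # altered media fails
```

A forensic examiner can thus answer "was this file declared AI-generated at creation, and has it been modified since?" without guessing from pixels alone.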
7.5 Censorship Strategy
A censorship strategy would restrict publishers and intermediaries from disseminating misleading or harmful synthetic content to the public, curbing the spread of deceptive material at the point of distribution by narrowing the channels through which deepfakes can be shared. Any such restriction would need to be narrowly tailored and proportionate to withstand constitutional scrutiny of free-expression rights.
7.6 Multilateral Collaborations
The government can adopt standardised regulations to govern deepfakes, drawing from frameworks in the US, China, and the EU. Measures like watermarking and source labelling can help trace content in cases of privacy breaches.
Supporting research in institutions like the IITs to develop detection technologies will enable early and effective responses to this growing challenge.
8. Conclusion
Deepfake sexual exploitation accentuates a fundamental challenge for India's digital governance: when the very notion of visual truth is eroded, legal and institutional systems must adapt not reactively but structurally.
Potential scalable enforcement models include the utilisation of AI-driven detection technologies to automatically identify and flag deepfake content. The establishment of inter-agency task forces could also be beneficial, allowing for a coordinated approach involving law enforcement, cybersecurity experts, and technology firms to streamline takedown processes and improve real-time responses to deepfake proliferation.
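One scalable building block for such detection is hash matching against a registry of content already confirmed as non-consensual, so that re-uploads are flagged automatically rather than re-litigated complaint by complaint. The sketch below uses exact SHA-256 matching for simplicity; deployed systems rely on perceptual hashes (such as Meta's PDQ or Microsoft's PhotoDNA) that survive re-encoding and resizing. The class and method names are illustrative.

```python
import hashlib

class TakedownRegistry:
    """Flags re-uploads of content previously removed after a validated complaint.

    Exact SHA-256 matching only catches byte-identical copies; production
    systems would substitute a perceptual hash robust to re-encoding.
    """

    def __init__(self):
        self._known = set()

    def register(self, media_bytes):
        # Record a confirmed non-consensual item once its removal is ordered
        self._known.add(hashlib.sha256(media_bytes).hexdigest())

    def should_block(self, upload_bytes):
        # Check a new upload against every previously removed item
        return hashlib.sha256(upload_bytes).hexdigest() in self._known

registry = TakedownRegistry()
registry.register(b"fake-video-bytes")
print(registry.should_block(b"fake-video-bytes"))    # re-upload is flagged
print(registry.should_block(b"new-unrelated-clip"))  # unseen content passes
```

Shared registries of this kind are what would let an inter-agency task force turn one successful takedown into automatic suppression of every subsequent re-upload across participating platforms.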
The future of India's digital society depends on whether policymakers can integrate technological understanding, constitutional values, and enforceable standards into law and enforcement practices. Recognising synthetic sexual exploitation as a standalone offence, strengthening platform obligations, and enhancing institutional capacity will not only safeguard individual dignity but also fortify trust in India's digital public infrastructure.
Recognising these harms as gendered, identity-based, and irreversibly transmitted is necessary for a 21st-century legal ecosystem that protects individuals, not just punishes publishers.
References
National e-Governance Division. (2025). Deepfakes in India: Legal Landscape, Judicial Responses, and a Practical Playbook for Enforcement. https://negd.gov.in/blog/deepfakes-in-india-legal-landscape-judicial-responses-and-a-practical-playbook-for-enforcement/
Legal Service India. (2025). Deepfake in India: The Tussle between Technology and Law. https://www.legalserviceindia.com/legal/article-18939-deepfake-in-india-the-tussle-between-technology-and-law-.html
ANM Global. (2024). Deepfakes and IT Act Legal Provisions. https://anmglobal.net/deepfakes-article/
Reuters. (2025, October 22). India proposes strict rules to label AI content, citing growing risks. https://www.reuters.com/business/media-telecom/india-proposes-strict-it-rules-labelling-deepfakes-amid-ai-misuse-2025-10-22/
IMPRI Impact and Policy Research Institute. (2025). Deepfakes’ Violence And The Law’s Silence: A Feminist Call For Digital Justice. https://www.impriindia.com/insights/deepfakes-violence-laws-digital/
Chambers.com. (2025). How India is Challenging Deepfakes. https://chambers.com/legal-trends/controlling-deepfakes-in-india
Security.org. (2025). Deepfake Statistics: Facts, Trends, and Victim Impact. https://www.security.org/resources/deepfake-statistics/
U.S. Department of Homeland Security. Increasing Threats of Deepfake Identities. https://www.dhs.gov/sites/default/files/publications/increasing_threats_of_deepfake_identities_0.pdf
Centre for Public Policy Research. Deepfakes, Doxxing and Digital Abuse. https://www.cppr.in/articles/deepfakes-doxxing-and-digital-abuse
https://dash.harvard.edu/server/api/core/bitstreams/0aa9cfdb-daf2-429e-9487-05e3c4c732b5/content
https://repository.nls.ac.in/cgi/viewcontent.cgi?article=1992&context=nlsir
https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=2245&context=jss
