Last Updated: November 06, 2025, 13:57 IST
AI-powered deepfakes are no longer targeting only celebrities: over 96% of all deepfake videos online are pornographic, and nearly all feature women without their consent

Content flagged by users is frequently dismissed as not violating "community guidelines", especially when the nudity is not technically "real" (Image: Generated)
Radhika Sagar (name changed), a 32-year-old English teacher from Vadodara, had never posed for, stored, or shared a nude photo. Yet there it was: her face seamlessly blended onto the body of a stranger, shared in full portrait mode across a WhatsApp group, her reputation stripped in seconds by an image she never took. The pictures were not real, but the consequences were.
For Radhika, the nightmare did not begin on a dark web forum or with a malicious hack. It began on a perfectly ordinary Wednesday, with a WhatsApp notification. Her experience is far from isolated: a startling new report by Tattle and Rati, Make It Real, reveals that 92% of women reporting deepfake abuse to a survivor helpline were not public figures but ordinary citizens. This is no longer a problem limited to actresses or activists. Deepfake abuse is a crisis that has quietly breached the lives of women across India, with artificial intelligence accelerating the scale and intimacy of violation at a pace that law, society, and tech platforms have failed to fully recognise.
What Is The Deepfake Crisis?
A deepfake is a digitally altered video, image, or audio clip created using artificial intelligence to mimic a person’s likeness or voice in a way that looks or sounds real, even though it never happened. The name comes from “deep learning," the AI technique used to produce these convincing fakes. While the technology can be used for creative or harmless purposes, it is increasingly misused for harmful acts such as non-consensual sexual content, identity scams, or spreading false information.
Digital manipulation is not new, but artificial intelligence has revolutionised how fast, simple, and convincing it has become. What earlier took hours of editing now takes seconds through online AI tools, many of which require only a photo upload to generate sexually explicit, non-consensual content.
The 2025 report draws from real cases submitted to Meri Trustline, a helpline started by Rati Foundation. Even more troubling is that in most situations, the perpetrator and survivor did not know each other offline. The technology makes proximity irrelevant. Digital strangers now have unprecedented access to manipulate and circulate sexualised versions of a woman’s image without her consent, and at a level of realism that blurs the lines between fabrication and truth.
Studies from the past three years reveal a similar global pattern. A 2023 analysis by Sensity found that 96% of deepfake content circulating online was pornographic, with an overwhelming majority targeting women. In India, the technology has rapidly entered messaging apps, anonymous forums, and private chatrooms, leaving little digital evidence of the original creator. This makes reporting and removal even more difficult.
Do Online Platforms Rank Safety Below Copyright?
Radhika recalls trying to report the deepfake image to the platform where it was first shared. “They asked me if I could prove it was not me. I had to relive the same trauma, not knowing if they believed me." The image remained online for days. Radhika eventually traced the source to X (formerly Twitter), where it took her more than 10 days to get the photo taken down altogether.
The report makes clear that platforms have become one of the weakest links in protecting women from AI-driven abuse. The issue is not only policy but process. On platforms such as X, Instagram, or WhatsApp, reporting content that appears artificially altered still involves opaque systems that often require survivors to prove that the image is fake. Content flagged by users is frequently dismissed as not violating “community guidelines", especially when the nudity is not technically “real".
Furthermore, memes, GIFs, or filters that sexualise images often sit in what the report calls a “policy grey zone", where platforms hesitate to assign harm because the format appears humorous or altered.
One of the most revealing observations in the report is that copyright frameworks often work faster than safety frameworks. When women reported the images under copyright takedown laws instead of abuse, the content was removed more quickly. This suggests that platforms take intellectual property more seriously than bodily autonomy.
Legal researchers and cyber safety experts have criticised this foundational gap.
Are There Laws In India That Punish AI Abuse?
Contrary to public perception, India does not lack legal provisions that cover image-based sexual abuse. The Information Technology Act, Indian Penal Code, and even the Indecent Representation of Women Act can be applied to pursue action in deepfake cases. Yet survivors face a justice system poorly trained to understand AI-based manipulation.
According to the report, survivors often encounter two main barriers: police disbelief and institutional delay. Many police stations in India still lack cyber forensic units or dedicated gender-based cybercrime cells. Survivors are sometimes asked to “prove" that the content is fake or are blamed for “sharing" explicit content.
In one of the cases documented by Rati, police officers advised a survivor to “stay off the internet" rather than file a complaint. Others asked whether the woman had “encouraged" the attention. Lack of cyber forensic capacity further complicates matters.
Such responses reinforce a culture where the burden of proof sits on women, not technology. “Access to justice, not law, is the gap," the report notes. What this really means is that until the legal education of police, judges, and enforcement bodies catches up with emerging technologies, women will continue navigating a justice process that does not fully comprehend their harm.
Radhika’s own experience confirms this. When she approached local authorities, she felt judged more than supported. “It felt like I was being asked to defend my own decency. The offender was invisible. I became the accused."
Does AI Deepfake Abuse Affect Women More?
Deepfakes are not just a technological trend; they are a cultural signal. They reveal who society believes has a right to occupy digital space without fear. And it is often not women. For many perpetrators, it is less about sexual gratification and more about control. By placing a woman’s face on a nude body, the abuser performs a symbolic act, stripping her autonomy and weaponising her likeness.
This matters because it alters the stakes of participation. As deepfake cases rise, women like Radhika hesitate before posting a photo, before applying for a public-facing job, before speaking up online. “Every photo I upload now feels like a risk," says Radhika. “Even a simple classroom picture. I wonder, will someone use this again?"
A 2022 study published by the Centre for Internet and Society (CIS) found that young women in India are increasingly withdrawing from public digital spaces due to fears of harassment, doxxing, or image-based abuse. Deepfakes, in their uncanny realism, intensify that fear. The psychological impact is often long-term, affecting not just professional and social life but core identity and a sense of safety.
What Would Real Safety Look Like in the Age of Deepfakes?
The report does not suggest a quick fix. It challenges both platforms and policymakers to rethink protection through accountability and transparency. The recommendations include building rapid-response systems for content removal, incorporating AI forensic tools for real-time detection, and centring survivors in the design of safety policies.
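For readers curious what image forensics involves in practice, below is a minimal illustrative sketch (in Python, using the Pillow imaging library) of Error Level Analysis, one classic pre-AI forensic signal: an image is recompressed at a known JPEG quality, and regions that degrade unevenly suggest the frame does not share a single compression history. This is not the tooling the report endorses, the filename and threshold are hypothetical, and modern deepfake detectors rely on trained neural classifiers, for which a heuristic like this is at best a crude first pass.

```python
# A minimal sketch of Error Level Analysis (ELA), a classic image-forensics
# heuristic. Illustrative only: it cannot reliably identify AI-generated
# media on its own, and real detection systems use trained classifiers.
import io

from PIL import Image, ImageChops


def ela_score(path: str, quality: int = 90) -> float:
    """Recompress the image and measure how unevenly it degrades.

    Regions that were pasted in or synthesised often recompress
    differently from the rest of the frame, raising the difference signal.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known JPEG quality, then reload the compressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Pixel-wise absolute difference between original and recompressed copy.
    diff = ImageChops.difference(original, recompressed)

    # Average the difference across all pixels and channels; higher values
    # suggest parts of the image do not share one compression history.
    pixels = list(diff.getdata())
    return sum(sum(px) for px in pixels) / (len(pixels) * 3)


if __name__ == "__main__":
    # "suspect_image.jpg" is a placeholder; the cutoff is arbitrary and
    # would need calibration against known authentic and edited samples.
    score = ela_score("suspect_image.jpg")
    print(f"ELA score: {score:.2f} (higher = more likely edited)")
```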
Technology alone cannot solve this crisis. What is needed is a coordinated ecosystem: tech platforms that respond without delay, police units trained in digital evidence, legal procedures that do not shame victims, and public awareness that recognises deepfakes as violence, not entertainment.
For now, the frontline of resistance is being led by survivor-driven initiatives like Meri Trustline, which document cases, provide legal support, and push for visibility. Their work signals that this is not a fringe problem, but a rising threat shaping digital life for Indian women.
For Radhika, justice is not a court sentence but restoration of dignity. “I just want to feel safe going online again. Not paranoid each time I see a camera." The real challenge in the era of deepfakes is not just detecting what is fake. It is listening to what is real.
First Published: November 06, 2025, 13:57 IST