Asianet Newsable

Over 24 million people visit websites that enable them to use AI to undress women in photos, reveals study

In September alone, a staggering 24 million people visited websites offering AI "undressing" services, according to Graphika, a company specializing in social network analysis.

First Published Dec 8, 2023, 7:20 PM IST

Apps and websites that use artificial intelligence (AI) to digitally "undress" people in photos, predominantly women, have surged in popularity in recent months, drawing the attention of researchers, privacy advocates, and the general public. These so-called "nudify" services have capitalized on popular social networks for marketing, fueling a concerning rise in non-consensual pornography driven by advances in AI technology. This article explores the widespread implications of the trend, the legal and ethical challenges it poses, and the urgent need for comprehensive regulations to protect individuals from the harmful use of AI-generated content.

According to a report from Bloomberg, Graphika, a social network analysis company, disclosed that a staggering 24 million people visited undressing websites in September alone. The trend's growth is evident in the number of links advertising undressing apps, which has risen by more than 2,400 percent on platforms such as X and Reddit since the beginning of the year. These services rely on AI technology that digitally undresses people in images, often taken from social media without the subject's consent or knowledge.

The proliferation of "nudify" services raises serious legal and ethical concerns, as it involves the creation and dissemination of explicit content without the consent of the individuals depicted. Some advertisements even suggest the services could be used for harassment, inviting customers to create nude images and send them to the person who has been digitally undressed. While Google has taken a stance against sexually explicit content in ads and actively removes violative material, X and Reddit have yet to respond to requests for comment.

Privacy experts are sounding the alarm about the increasing accessibility of deepfake pornography enabled by advances in AI. Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, notes a shift towards ordinary people using these technologies on everyday targets, including high school and college students. Victims may never learn that manipulated images of them exist, and those who do face challenges in seeking law enforcement intervention or pursuing legal action.

The United States currently lacks a federal law explicitly prohibiting the creation of deepfake pornography. However, a recent case in North Carolina marked the first prosecution under a law banning the deepfake generation of child sexual abuse material. The absence of comprehensive regulations underscores the urgent need for legal frameworks that address the broader issue of non-consensual and harmful use of AI-generated content.

In response, TikTok and Meta Platforms Inc. have taken initial steps to address the issue. TikTok warns users that content associated with terms like "undress" may violate its guidelines, while Meta Platforms Inc. has blocked keywords related to undressing apps. However, broader industry-wide action and collaboration will be necessary to effectively combat the proliferation of AI-generated deepfake pornography.

The rise of AI-generated deepfake pornography poses significant threats to individual privacy, consent, and overall digital security. As technology evolves, urgent action is needed to establish comprehensive regulations that address the legal and ethical challenges associated with the non-consensual use of AI-generated content. Industry stakeholders, policymakers, and advocacy groups must work collaboratively to protect individuals from the harmful impacts of deepfake technology and ensure a safer digital landscape for all.
