Crackdown on Deepfakes: YouTube takes down 1,000 AI-driven celebrity scam ads

By Team Asianet Newsable  |  First Published Jan 27, 2024, 9:12 AM IST

The action comes after an investigation by 404 Media revealed an advertising ring responsible for creating the misleading videos. YouTube is actively combating AI-driven celebrity scam ads and has updated its policies to address AI-generated content portraying deceased minors or victims of violence.


In a significant move, YouTube has removed more than 1,000 scam advertisement videos that used AI technology to feature celebrities such as Taylor Swift, Steve Harvey, and Joe Rogan. The Google-owned platform reaffirmed its commitment to tackling AI-driven celebrity scam ads and said it is devoting resources to the fight against deceptive content.

The removals followed an investigation by 404 Media that uncovered an advertising ring using AI to create misleading ads promoting Medicare scams. These videos collectively amassed nearly 200 million views, triggering numerous complaints from both users and the celebrities featured.

YouTube, fully aware of the problem, assured users that it is actively working to prevent the spread of celebrity deepfakes, a term for AI-generated content that convincingly portrays individuals, often famous personalities, in deceptive scenarios.

This recent move aligns with YouTube's broader initiative to combat AI-generated content that realistically depicts deceased minors or victims of well-documented violent events. The platform has updated its harassment and cyberbullying policies accordingly.

Reports have surfaced regarding the misuse of AI technology by content creators to recreate the likeness of deceased or missing children, providing them with a simulated "voice" to narrate the details of their deaths. YouTube has explicitly stated that any content violating this policy will be promptly removed, and creators will be notified via email.

In a separate development, sexually explicit deepfake content featuring Taylor Swift went viral on X (formerly Twitter). A single post garnered over 45 million views and 24,000 reposts before being taken down after approximately 17 hours. 404 Media's investigation traced the origin of these explicit images back to a Telegram group specializing in the sharing of AI-generated images of women.

According to cybersecurity firm Deeptrace, a staggering 96% of deepfakes are pornographic, with the majority targeting women. These incidents underscore the growing challenges platforms face in curbing the misuse of AI technology and the need for vigilant measures to protect users and celebrities from such deceptive content.
