
AI experts and leaders call for tighter regulations to combat deepfakes; open letter gains over 750 signatures

By Team Asianet Newsable | First Published Feb 22, 2024, 1:25 PM IST

A coalition of artificial intelligence experts and industry leaders, among them pioneering figure Yoshua Bengio, has penned an open letter advocating for increased regulation of deepfake production, citing potential risks to society. The letter, spearheaded by Andrew Critch, an AI researcher at UC Berkeley, highlights the current prevalence of deepfakes, which frequently feature sexual content, fraudulent activity, or political disinformation.


"Today, deepfakes often involve sexual imagery, fraud, or political disinformation. Since AI is progressing rapidly and making deepfakes much easier to create, safeguards are needed," the group said in the letter.

📢 Today, an Open Letter titled "Disrupting the Deepfake Supply Chain" was released by Andrew Critch with Dr. Joy Buolamwini's name signed to it. Moments before the press release, it was made known to her that there were organizations affiliated with the letter that…

— Algorithmic Justice League (@AJLUnited)

Deepfakes are lifelike but artificial images, audio, and videos generated by AI algorithms; recent technological advances have made them increasingly difficult to distinguish from content produced by humans.

Titled "Disrupting the Deepfake Supply Chain," the letter proposes regulatory measures to address deepfakes. These recommendations include the complete criminalization of deepfake child pornography, imposing penalties on individuals knowingly involved in the creation or dissemination of harmful deepfakes, and mandating that AI companies take steps to prevent their products from generating harmful deepfakes.

✍️ I just signed “Disrupting the Deepfake Supply Chain” on https://t.co/j5HPjikezW, because we need new laws and regulations to protect us from the harms of deepfakes.

Learn more and add your signature here: https://t.co/EYLdITMkoK pic.twitter.com/FzABxg7ddw

— aifray (@aifray)

By Thursday afternoon, the open letter had garnered signatures from more than 750 individuals spanning diverse sectors, including academia, entertainment, and politics.

The list of signatories featured prominent figures such as Steven Pinker, a psychology professor at Harvard University, two former presidents of Estonia, researchers from Google and DeepMind, and an expert from OpenAI.

Since Microsoft-backed OpenAI introduced ChatGPT in late 2022, impressing users with its ability to hold human-like conversations and perform a range of tasks, ensuring that AI systems do not harm society has become a key concern for regulators.

Prominent figures, including Elon Musk, have issued numerous warnings about the risks associated with AI. Notably, a letter Musk backed last year called for a six-month halt in the development of systems more capable than OpenAI's GPT-4 model.

