Explained: How deepfakes are made, and how to spot one
Girish Linganna explains deepfake technology, which uses machine learning and face swapping to create realistic but fake videos. Detecting deepfakes is challenging, making critical thinking essential.
A ‘deepfake’ video of actor Rashmika Mandanna recently went viral online. Investigation revealed that the original video featured social media influencer Zara Patel. The Indian government responded swiftly, cracking the whip on social media platforms and sternly warning them of the legal implications of promoting deepfakes and the penalties that could follow.
According to reports, the Ministry of Electronics and Information Technology has cited Section 66D of the Information Technology Act, 2000, dealing with ‘punishment for cheating by personation by using a computer resource’. Consequently, individuals convicted of this offence could face imprisonment for a maximum of three years and a fine of up to Rs 1 lakh.
An FIR was filed against unknown persons on November 10 in the Rashmika Mandanna case at the Delhi Police special cell’s Intelligence Fusion and Strategic Operations (IFSO) Unit, invoking sections 465 (punishment for forgery) and 469 (forgery to harm reputation) of the IPC, besides sections 66C and 66E of the IT Act.
Understanding Deepfake Videos
Deepfakes are artificial videos created using computer software, machine learning (ML) and face swapping. These videos are made by combining different images to create new footage that shows events, statements, or actions that did not really occur. The outcomes can appear very believable. Deepfakes stand apart from other types of false information because they are extremely challenging to detect as untrue.
How Do Deepfakes Function?
Machine Learning is a part of artificial intelligence (AI). It teaches computers to recognize patterns and make decisions without specific programming for each task. This is done through algorithms that get better with time as they analyse and interpret data.
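As a toy illustration of ‘algorithms that get better with time as they analyse and interpret data’, the sketch below (with invented values, not drawn from any deepfake system) fits a straight line to noisy data points by gradient descent. With each pass over the data, the model’s error shrinks -- the improvement-through-data the paragraph describes.

```python
import numpy as np

# Toy "learning from data": fit y = w*x + b to noisy points by gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 200)  # hidden pattern: w=3, b=1

w, b = 0.0, 0.0          # start knowing nothing
lr = 0.1                 # learning rate
errors = []
for epoch in range(200):
    pred = w * x + b
    err = pred - y
    errors.append(float(np.mean(err ** 2)))
    # Gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))   # ends up close to the true 3.0 and 1.0
print(errors[0] > errors[-1])     # the error fell as the model "learned"
```

The same loop -- predict, measure error, nudge parameters -- underlies the far larger models used in deepfake generation.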
The fundamental idea behind deepfake technology is the identification of faces. If you have used Snapchat, you may be familiar with features such as face swap, or filters that change or enhance your facial appearance. Deepfakes are similar but have an even more realistic appearance. Fake videos can be made using an ML technique called a ‘generative adversarial network’, or GAN.
Deepfake technology maps human faces based on specific ‘landmark’ points, which are the unique topographical features of the face -- for instance, the ear lobes, the tip of the nose, or the crinkles at the edges of the eyes. In deepfake technology, these spots are used to understand and recreate facial expressions accurately.
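To make the landmark idea concrete, here is a small hypothetical sketch: given the same handful of landmark points on two faces, it estimates the affine transform that maps one set onto the other by least squares -- the kind of alignment step a face-swapping pipeline relies on. The coordinates below are invented purely for illustration.

```python
import numpy as np

# Hypothetical 2-D landmark coordinates (eye corners, nose tip, mouth corners...)
# for one face; real systems detect dozens of such points automatically.
src = np.array([[30., 40.], [70., 40.], [50., 60.], [35., 80.], [65., 80.]])

# Pretend the second face is the first one rotated, scaled and shifted.
theta, scale, shift = 0.2, 1.1, np.array([12., -5.])
R = scale * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + shift

# Estimate the affine transform mapping src landmarks onto dst by least
# squares: solve [x y 1] @ A ~ dst for the 3x2 matrix A.
X = np.hstack([src, np.ones((len(src), 1))])
A, *_ = np.linalg.lstsq(X, dst, rcond=None)

mapped = X @ A
print(np.allclose(mapped, dst))  # the recovered transform aligns the landmarks
```

Once landmarks on two faces are aligned like this, one face’s expressions can be re-rendered onto the other’s geometry.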
For instance, a GAN could be fed hundreds of photographs of a world-renowned personality -- say, Beyoncé, the widely popular American singer. The GAN would then use this information to create a brand-new image that resembles Beyoncé without being an exact replica of any single photo.
In simpler terms, the GAN learns from the pictures to generate a new image that resembles Beyoncé. The same flexible technique can also generate new audio from existing recordings, or fresh text from existing text.
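The ‘adversarial’ part of a GAN refers to two models trained against each other: a generator that produces forgeries and a discriminator that tries to tell them apart from real samples, each improving in response to the other. The toy below sketches this in one dimension with hand-derived gradients -- an illustration of the principle under simplifying assumptions, not how production deepfake systems are built.

```python
import numpy as np

# Toy 1-D GAN: "real data" cluster around 4.0; the generator starts near 0.0
# and, by competing with the discriminator, learns to produce samples the
# discriminator can no longer easily tell apart from real ones.
rng = np.random.default_rng(42)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = wg*z + bg and discriminator D(x) = sigmoid(wd*x + bd),
# both deliberately tiny so the gradients can be written by hand.
wg, bg = 1.0, 0.0
wd, bd = 0.5, 0.0
lr, batch = 0.05, 64
start_mean = bg  # the generator's initial output centre

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)      # samples of the "true" data
    z = rng.normal(0.0, 1.0, batch)
    fake = wg * z + bg                      # the generator's forgeries

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    dr, df = sigmoid(wd * real + bd), sigmoid(wd * fake + bd)
    wd += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    bd += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: push D(fake) toward 1, i.e. fool the critic.
    df = sigmoid(wd * fake + bd)
    wg += lr * np.mean((1 - df) * wd * z)
    bg += lr * np.mean((1 - df) * wd)

fake = wg * rng.normal(0.0, 1.0, 1000) + bg
print(round(float(np.mean(fake)), 1))  # has drifted from 0 toward the real mean
```

Scaled up from one number to millions of pixels, this same contest is what lets a GAN produce faces that never existed.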
Spotting Manipulated Content
While assessing the authenticity of videos or images online, employing critical thinking is crucial. Key questions to consider include:
* Who shared this video? With what intent?
* What is the content’s original source?
* When and where was the video captured?
* Does the person in the video make unexpected statements?
* Whose agenda does the video serve, and who stands to gain from it?
Ensuring Net Info Authenticity
In an era of technological progress, information overload and instantaneous online news dissemination, fact-checking information on the Internet has become increasingly burdensome. Employing critical thinking skills is crucial for confirming the authenticity of online content.
Whether examining a video, photo meme, or article, the following considerations from First Draft can aid in the verification of online information:
Provenance: Are you looking at the original account, article, or piece of content?
Source: Who is the creator of the account, article, or original content?
Date: What is the creation date?
Location: Where was the account initiated, the website formed, or the content captured?
Motivation: What drove the establishment of the account or website, or the capturing of the content?
Deepfake Tech Getting Better, Fast
Deepfake creation is a young technology, and it is improving fast. It is already tough to tell whether a video is real or fake. Advancements in such technologies raise clear social, moral and political concerns. Fake videos can worsen the challenge of false information online, besides eroding trust in news and sources of information.
As the wheels of history have rolled on, audio and video recordings and still photos have provided a basis for our understanding of past events. The overwhelming presence of video evidence notwithstanding, several well-recorded events are mired in doubts in some quarters -- for instance, man’s historic 1969 landing on the Moon, the September 11 terror attacks on America, or the events surrounding the Holocaust. If deepfakes erode our trust in videos, challenges may mount in disseminating information, giving rise to an overwhelming number of conspiracy theories.
A major worry about deepfakes and the false information they help disseminate is how these may influence democracy, in general, and elections, in particular. A study from University College Cork (UCC) found that people remember fake news more than real news. The study showed that voters might develop false memories from made-up news, especially if it suits their political inclinations. This suggests that voters could be influenced in forthcoming elections, such as the 2020 US presidential race.