How Deepfake technology is a threat to the world!

What is a Deepfake, and why is it called a Deepfake?

Let’s start with a simple definition: a deepfake is an image, audio clip, or video in which a person’s face or voice has been digitally altered so that they appear to say or do something they never did. The word “deepfake” combines two terms. “Deep” refers to deep learning, a branch of A.I. in which models learn patterns from large datasets with little human intervention; “fake” refers to the realistic-looking but fabricated media that results, for example when one person’s face is swapped into a video or paired with someone else’s audio.

Let’s also quickly touch on the concept of “shallow fakes”. The goal is the same as with deepfakes, but the technique is different: shallow fakes are made without machine learning, using ordinary video-editing software to alter existing media. Both can be misused.


What do Researchers say about Deepfake Technology?

According to researchers at University College London, deepfakes are the most dangerous form of crime enabled by artificial intelligence. The researchers noted that deepfake content is worrying for several reasons. One of the most significant is that it is hard to detect: detection systems need extensive training and have to succeed every single time, while a malicious actor only needs to succeed once to cause damage.

The way people conduct large parts of their lives online creates an ideal environment for this kind of crime, because digital media is easy to manipulate and can be used to damage reputations. The research was led by Dr. Matthew Caldwell, who authored the study on this emerging threat.

Ben Colman of Reality Defender warns that convincing deepfakes can now be created by anyone, even without special skills or equipment.

In March of last year, a video circulated around the world that appeared to show Ukrainian President Volodymyr Zelenskyy ordering his soldiers to surrender to Russian forces; it was quickly denounced by the president.

Deepfake video of the Ukrainian President

How are Deepfakes used in both Positive and Negative ways?

Every coin has two sides; likewise, most technology is built for good purposes, but criminals always find ways to misuse it for their own benefit. Deepfakes are dangerous for many reasons, but there are also some good uses for them.

In 1997, Christoph Bregler, Michele Covell, and Malcolm Slaney created a video-editing program that modified existing footage so that the person on screen appeared to mouth the words of a different audio track. It is widely regarded as the earliest precursor of deepfake technology.

Deepfakes can digitally “bring back” loved ones, something most often seen in the film industry. In Fast & Furious 7, for example, Paul Walker was digitally recreated on screen, and there are plenty of other examples in film. In education, imagine learning mathematics from Srinivasa Ramanujan himself; it would motivate us to learn more. Deepfakes can also lower the cost of any video campaign.

But the misuse of the technology outweighs the good, and that is why it is a threat to the world. Criminals use deepfake videos to scam people and to spread misleading news through the mouths of politicians and public figures in order to destabilize nations. One of the most common abuses is pornographic content: according to the 2019 Deeptrace report, 96% of the deepfake videos found online were pornographic. This severely damages the reputations of celebrities and other prominent figures. Criminals also create so-called revenge porn with deepfakes, which attracts thousands of searches on Google every day.

How can Deepfakes be dangerous?

This technology is no longer expensive or limited to the film industry; advances and ease of use mean that almost anyone can create a deepfake video within minutes. Public figures have always faced criticism, but recent developments in deepfake technology have led to some disturbing uses that even ordinary people may find hard to comprehend.

An investigation reported by MIT Technology Review into a deepfake bot on the messaging app Telegram found that a majority of its surveyed users (64%) wanted to “undress” women they know in real life. The technology allows intimate content to be fabricated without any face-to-face interaction, making it easy for perpetrators to exploit victims. Deepfakes have been used to create explicit content for revenge porn and cyberbullying, and this rise in malicious behavior is due in part to how easily the tools can produce realistic images of people.

In a well-known case on Reddit in 2017, a subreddit called r/deepfakes was created primarily to share pornographic content featuring public figures, with users swapping celebrities’ faces onto adult actors. Although Reddit banned the community in 2018, the reputations of those celebrities had already been damaged.

Celebrities are not the only ones to suffer; politicians have too. One of the first political victims was Donald Trump, when a Belgian political party released a deepfake video of Trump giving a speech calling on Belgium to withdraw from the Paris climate agreement. Trump never gave such a speech. From big personalities to ordinary people, anyone can suffer from the misuse of this technology.

How is Deepfake content made, and who makes it?

As we have already discussed, deepfakes rely on deep learning. The classic approach uses an autoencoder (sometimes a variational autoencoder): a neural network that learns to compress images of a person’s face into a compact representation and then reconstruct them. For a face swap, a single shared encoder is trained on footage of both people, while each person gets their own decoder. Because the shared encoder captures expressions and poses common to both faces, decoding one person’s encoded frame with the other person’s decoder maps the first person’s expressions onto the second person’s face.
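To make that concrete, here is a minimal sketch, in PyTorch, of the shared-encoder, two-decoder idea. The layer sizes, the 64×64 image resolution, and the variable names are illustrative assumptions, not a production face-swap pipeline.

```python
import torch
import torch.nn as nn

# Minimal sketch of the classic face-swap autoencoder idea: one shared
# encoder learns a common face representation, and each person gets their
# own decoder. Layer sizes and the 64x64 image size are illustrative.

class Encoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct person A's face
decoder_b = Decoder()  # would be trained to reconstruct person B's face

# Training (not shown) minimizes reconstruction error:
# decoder_a(encoder(face_a)) should match face_a, and likewise for B.
# The "swap" happens afterwards: encode a frame of person A, then decode
# it with person B's decoder so A's expression appears on B's face.
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a real video frame
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                   # torch.Size([1, 3, 64, 64])
```

In practice the networks are convolutional and are trained on many thousands of aligned face crops per identity, but the swap trick is the same.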

In addition to autoencoders, another technique has become popular: Generative Adversarial Networks (GANs). A GAN pairs a generator with a discriminator. The discriminator tries to spot the generator’s mistakes, and the generator learns to correct them, which is exactly what makes the resulting fakes harder and harder for deepfake detectors to decipher.

GANs are therefore also used to create deepfakes outright, learning from data how to produce realistic images that mimic the originals. The technique can be remarkably accurate, often producing results that are indistinguishable from the real thing.
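As a rough illustration of that adversarial tug-of-war, the sketch below (again PyTorch, with toy fully connected networks and random tensors standing in for real face images) alternates a discriminator step with a generator step. The network sizes, learning rates, and step count are assumptions chosen only to keep the example small.

```python
import torch
import torch.nn as nn

# Toy GAN sketch: the discriminator learns to label images real or fake,
# while the generator learns to produce images the discriminator accepts.
# Random tensors stand in for a real face dataset; sizes are illustrative.

latent_dim, img_dim, batch = 64, 28 * 28, 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw score; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.rand(batch, img_dim) * 2 - 1   # placeholder "real" images
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: push real toward 1 and fake toward 0.
    d_loss = loss_fn(discriminator(real), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator score the fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(f"final losses  D: {d_loss.item():.3f}  G: {g_loss.item():.3f}")
```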

Audio deepfakes, for their part, often rely on enough background noise to obscure the tell-tale signs of a synthesized voice.

Nowadays there are many applications for making deepfakes, and they are easy even for beginners. FaceApp, for example, is a photo-editing app with built-in AI features; one only needs to upload an image or video, and within minutes the result is ready. A large amount of deepfake software can also be found on GitHub. Most of it is made for entertainment purposes, but criminals often repurpose it for malicious ends.

Deepfake creators may be political groups, social media users, visual-effects experts, criminals, or ordinary people like us. Deepfake technology is available to anyone, regardless of technical expertise, which means even your friends and family can create deepfakes using publicly available tools.

How to Spot Deepfake media content?

Some deepfake content is difficult to detect with the naked eye, but some can be spotted if we look a little closer. Researchers at the University at Albany published a study showing that early deepfakes could be identified by abnormal blinking, although newer fakes have largely fixed this. Other giveaways include bad lip-synching, facial features that are misplaced or misshapen, patchy skin color, flickering around the edges of the face, teeth and clothing that render poorly, and inconsistent lighting. It is also worth checking whether the person’s voice matches their appearance and whether the eyebrows fit the shape of the face.
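As a toy illustration of the blinking cue mentioned above, the sketch below assumes you already have one “eye openness” score per video frame (for example, an eye aspect ratio produced by any facial-landmark library) and simply counts blinks to flag an implausibly low blink rate. The thresholds are illustrative assumptions, and, as noted, newer deepfakes often defeat this particular check.

```python
# Toy blink-rate check: given one "eye openness" value per video frame
# (e.g. an eye aspect ratio from a facial-landmark library), count blinks
# and flag clips that blink far less often than a typical person.
# The threshold values below are illustrative assumptions.

def count_blinks(eye_openness, closed_threshold=0.2):
    """Count transitions from open eyes to closed eyes."""
    blinks, eyes_closed = 0, False
    for value in eye_openness:
        if value < closed_threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif value >= closed_threshold:
            eyes_closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate is implausibly low for a real person."""
    minutes = len(eye_openness) / (fps * 60)
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# Synthetic demo: 60 seconds of mostly open eyes with a single blink.
frames = [0.35] * (30 * 60)
frames[900:905] = [0.1] * 5            # one blink around the 30-second mark
print(looks_suspicious(frames))        # True -> worth a closer look
```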

That covers what we can see by looking closely, but what if we cannot detect a fake on our own? There is software for that too. Deepware has built a deepfake detection tool that can be accessed at https://scanner.deepware.ai/

There are many other tools available as well, but some of them are not as effective as they need to be, since they must be trained on thousands of examples before they can reliably detect a fake.
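To give a sense of what “training on thousands of examples” means in practice, here is a hedged sketch that fine-tunes a pretrained image model (torchvision’s ResNet-18) as a binary real-versus-fake frame classifier. The frames/real and frames/fake folder layout, the hyperparameters, and the tiny number of epochs are assumptions for illustration; real detection research uses far larger datasets and much more careful evaluation.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Sketch of training a real-vs-fake frame classifier by fine-tuning a
# pretrained ResNet-18. Assumes a hypothetical folder layout:
#   frames/real/*.jpg  and  frames/fake/*.jpg
# Real deepfake-detection work uses far larger datasets and careful splits.

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: fake, real

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                  # a token number of passes
    for images, labels in loader:
        loss = loss_fn(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```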

How have Big Celebrities become victims of Deepfakes?

We may all have seen in the news the famous Barack Obama deepfake video that circulated around the world. We have also discussed the Donald Trump video circulated by a Belgian socialist party, and then there is the Tom Cruise TikTok account, even though Tom Cruise says he has no TikTok account.

Read this article to learn more about the case: https://www.abc.net.au/news/2021-06-24/tom-cruise-deepfake-chris-ume-security-washington-dc/100234772

Would you believe me if I said that this is not Tom Cruise and that it is not his account? That is the power of deepfakes: anyone would say it is Tom, but in reality it is not.

Combating Deepfakes with Technology

Although deepfakes are getting more realistic over time, we are not entirely defenseless; many companies, including startups, are developing technology to detect them. Operation Minerva takes a more automated approach: its algorithm compares potentially fake videos against known examples of copyrighted content, such as previously identified revenge porn, and by recognizing these matches it can help keep doctored videos out of the public sphere. Another company, Sensity, has developed a detection platform that identifies deepfakes and other synthetic videos created with AI; users receive an email notification whenever they are viewing something that bears suspicious signs of artificial media. Notably, this detection relies on the same machine-learning techniques used to create the fakes in the first place.

It is everyone’s duty to come forward and help combat this emerging threat. With this I conclude my article; I hope it has introduced you to this new technology and the threat it poses, how it can be misused, and how it can affect each one of us. Protecting our privacy is the only way we can defend ourselves from malicious actors.

References to read more:

https://www.youtube.com/channel/UCKpH0CKltc73e4wh0_pgL3g

https://arxiv.org/pdf/1806.02877.pdf

https://en.wikipedia.org/wiki/Deepfake

https://arxiv.org/abs/2001.00179

https://ieeexplore.ieee.org/document/9010912

https://arxiv.org/abs/1809.00888

https://arxiv.org/abs/1911.00686

The article is written by Kunal Das
Cover pic: Deep Deka

Kunal Das, aka Cyber Delta, is a passionate cybersecurity enthusiast who has dedicated his life to the field of information security. He holds a number of certifications, including Certified Network Security Specialist (CNSS), CompTIA IT Fundamentals (ITF+) by Infosec, CompTIA Cybersecurity Analyst (CySA+) by Infosec, Autopsy Basics and Hands-on by Sleuth Kit, and the Trace Labs OSINT Foundations Course, among many others.
