This Artificial Intelligence (AI) approach can spot Deepfake videos of famous people using facial, gestural and vocal mannerisms

Recent technological advances in artificial intelligence (AI) are a double-edged sword. While AI has benefited humanity in countless ways, from improving healthcare to providing personalized, more interactive experiences, it also comes with drawbacks. One detrimental effect is the rise of deepfakes, or synthetically generated media. Deepfakes (a blend of “deep learning” and “fake”) are AI-generated media in which a person in an existing image or video is replaced with someone else’s likeness. This is done using robust machine learning techniques to produce audio and visual content that can easily fool a general audience. Since their introduction a few years ago, deepfakes have improved dramatically in quality, sophistication, and ease of generation. The most common deep learning techniques for producing them involve training generative architectures such as autoencoders or generative adversarial networks (GANs).
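To make the autoencoder-based approach concrete, the classic face-swap layout uses one shared encoder and two person-specific decoders: encode a face of person A, then decode it with person B's decoder. The sketch below illustrates only the architecture with random, untrained weights and invented dimensions; it produces no real imagery and is not the researchers' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights stand in for trained parameters in this toy sketch.
    return rng.normal(0.0, 0.1, size=(n_in, n_out))

def forward(x, w):
    return np.tanh(x @ w)

# Face-swap autoencoder layout: one shared encoder, two identity-specific decoders.
DIM, LATENT = 64, 16          # invented sizes for illustration
enc = layer(DIM, LATENT)      # shared encoder
dec_a = layer(LATENT, DIM)    # decoder trained on person A's faces
dec_b = layer(LATENT, DIM)    # decoder trained on person B's faces

face_a = rng.normal(size=(1, DIM))   # stand-in for an image of person A
z = forward(face_a, enc)             # shared latent representation
swapped = forward(z, dec_b)          # decoding with B's decoder yields the "swap"

print(z.shape, swapped.shape)
```

In a trained system, the shared encoder learns identity-agnostic facial structure, while each decoder learns to render one person's appearance, which is what makes the swap possible.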

Deepfakes have attracted a lot of attention because of their potential use in large-scale fraud, non-consensual pornography, and defamation campaigns. As the technology advances, it is becoming ever harder to tell whether a video is real. Deepfakes become more dangerous still when we consider how they could be weaponized against world leaders during election seasons or in times of armed conflict. One such case occurred recently, when Russian actors produced a deepfake video purporting to show Volodymyr Zelenskyy, the President of Ukraine, saying things he never actually said. According to reports, the video was created to help the Russian government persuade its people to believe state propaganda about the invasion of Ukraine.

To protect world leaders from deepfakes, researchers from the Johannes Kepler Gymnasium and the University of California, Berkeley have created an AI application that can determine whether a video clip of a famous person is authentic or a deepfake. As described in their paper published in Proceedings of the National Academy of Sciences, the researchers trained their system to recognize the distinctive body movements of specific people and use them to judge whether a video is genuine.

The duo pursued an identity-based strategy in their new system. They trained it on several hours of real video footage to identify the specific facial, gestural, and vocal traits that distinguish a world leader from an impersonator. Beyond body markings or facial features, people have many distinctive qualities, including the way they move. In Zelenskyy’s case, for example, the Ukrainian president tends to raise his left hand while arching his right eyebrow. Data of this kind was essential for training the deep learning system to study the subject’s physical movements across numerous recordings.
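The core of an identity-based approach is comparing a clip's measured mannerisms against a behavioral profile learned from authentic footage. The researchers' actual system learns a classifier over many facial, gestural, and vocal features; the sketch below is only a minimal nearest-centroid stand-in with invented feature names and numbers, to show the idea of flagging clips whose mannerisms stray too far from the learned profile.

```python
import statistics

# Hypothetical per-clip behavioral features, e.g. left-hand-raise rate,
# an eyebrow-motion statistic, a speech-rhythm measure. All values are
# invented for illustration only.
real_clips = [
    [0.82, 0.61, 0.40],
    [0.78, 0.65, 0.38],
    [0.85, 0.58, 0.43],
]

def centroid(clips):
    # Average each feature across the authentic reference footage.
    return [statistics.mean(col) for col in zip(*clips)]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

PROFILE = centroid(real_clips)
THRESHOLD = 0.3  # in a real system, tuned on held-out authentic footage

def is_deepfake(clip_features):
    # Flag a clip whose mannerisms deviate too far from the person's profile.
    return distance(clip_features, PROFILE) > THRESHOLD

print(is_deepfake([0.80, 0.62, 0.41]))  # close to the profile -> False
print(is_deepfake([0.20, 0.95, 0.90]))  # mannerisms way off   -> True
```

The design choice here mirrors the article's description: the detector models one specific person, so an impostor or synthesis artifact shows up as a statistical mismatch rather than requiring a general-purpose fake detector.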

They noted that, over time, the algorithm became adept at identifying behaviors that humans are unlikely to notice. The pair tested their method on multiple deepfake videos alongside authentic videos of various people. The results were impressive: the method was 100% successful at distinguishing genuine videos from fake ones, and it also established that the Zelenskyy video was a fake.

Although the team’s study focuses heavily on Zelenskyy, they stress that the methodology can be applied to any high-profile figure for whom enough original video footage is available. The researchers do not plan to release their classifier publicly, to guard against counterattacks; instead, in an effort to combat misinformation fueled by deepfakes, they have made it available to credible media and government organizations.

Check out the Paper and the reference article. All credit for this research goes to the researchers on this project. Also, don’t forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.

Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about machine learning, natural language processing, and web development, and enjoys deepening her technical knowledge by taking part in challenges.
