"Deepfake" is a fashionable word: it refers to computer-generated images (photos and videos) that imitate human appearance. Sometimes they mimic a real person, such as a celebrity or an acquaintance; sometimes they are entirely invented faces. Deepfakes can be a way to create a new persona online, but in politics, cybersecurity, counterfeiting, and border control they are a serious concern.
However, the human brain appears to be surprisingly good at recognizing fake faces. Scientists at the University of Sydney found that when brain activity was measured with electroencephalography (EEG), fakes could be identified 54% of the time.
When participants had to make the judgment consciously by observation, however, they were correct only 37% of the time. In other words, the brain responds differently when viewing deepfakes, but the observer's conscious mind cannot interpret that signal as reliably.
Thomas Carlson, the lead author of the study, said: “The fact that the brain can detect deepfakes means that current deepfakes are flawed. If we can learn how the brain detects them, we could use this information to create algorithms that flag possible deepfakes on digital platforms like Facebook and Twitter.”
The scientists are already planning follow-up studies to find ways to exploit these brain signals. They suggest that EEG headsets could be used in settings where security is essential, and that the same insights could also help take down deepfakes on social networks.
Recently, a deepfake video of Ukraine’s President Volodymyr Zelensky appeared on social media, urging his troops to surrender to Russian forces. Against the backdrop of Russia’s ongoing invasion of Ukraine it was shocking, especially since Ukraine is in fact preparing for a long war. President Zelensky had no intention of surrendering, but some people may have been fooled by the deepfake.
It is also worth noting that, although still imperfect, deepfakes will improve, and recognizing false visual information will only become harder. More efficient and effective ways of countering them must be found. Perhaps brain scans hold some of the keys to better security and better monitoring of social media content.