Seeing is believing… but should it be?
You’re scrolling through your newsfeed when you come across a video of the US President and the UK Prime Minister locked in a fierce break-dancing competition. After a beat, you realise that the video isn’t actually documenting a real event – it’s a fake. If fake videos stopped at harmless comedy, they wouldn’t be a serious issue, but the danger extends far beyond that. So how do fake videos work, and how can you spot them?
Fake it ’til you make it
Until recently, convincing computer-generated video was incredibly difficult to produce. However, artificially intelligent tools are now so widely accessible that it is fairly easy to morph a face onto a different body and construct a fictional but frighteningly real narrative. In fact, using the Snapchat app, anyone can do it. It’s rudimentary, but it’s possible… and that’s the scary part. At one end of the scale are the amusing videos of politicians and famous faces; at the other is damaging, inappropriate, and damning content that could be used as a weapon.
Deepfakes are the newest form of AI-generated fake content. Images of a subject are fed to a machine learning algorithm, which uses the different angles and expressions to build up a digital doppelganger. That copy can then be manipulated to perform any act… Yes, any.
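Real deepfake tools use deep convolutional networks trained on thousands of frames, but the core trick – one shared encoder that learns “a face”, plus one decoder per subject – can be sketched with a toy linear autoencoder. Everything below (the random “face” data, sizes, learning rate) is illustrative, not a real pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "faces": flattened 8x8 patches for subjects A and B.
# (Stand-ins for real photo datasets of each person.)
faces_a = rng.normal(size=(200, 64))
faces_b = rng.normal(size=(200, 64))

latent_dim = 16
enc = rng.normal(scale=0.1, size=(64, latent_dim))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(latent_dim, 64))  # decoder for subject A
dec_b = rng.normal(scale=0.1, size=(latent_dim, 64))  # decoder for subject B

lr = 1e-3
for _ in range(500):
    # Train both decoders against the SAME encoder, one subject at a time.
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                 # encode to the shared latent space
        err = (z @ dec) - faces         # reconstruction error
        grad_dec = z.T @ err / len(faces)
        grad_enc = faces.T @ (err @ dec.T) / len(faces)
        dec -= lr * grad_dec            # in-place: updates dec_a / dec_b
        enc -= lr * grad_enc

# The "swap": encode subject A's frames, decode with subject B's decoder,
# producing B-styled reconstructions of A's poses and expressions.
swapped = (faces_a @ enc) @ dec_b
print(swapped.shape)  # (200, 64)
```

Because the encoder is shared, it learns pose and expression common to both subjects, while each decoder learns one person’s appearance – which is why decoding A’s latent codes with B’s decoder transfers the face.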
Reputation damage is just one potential consequence of fake videos. They could, in time, lead to the fabrication of events that never happened. One especially ironic example is a deepfake of Barack Obama issuing a ‘warning’ about fake videos. It’s already hard enough to trust mainstream media, but what if every image you saw had the potential to be completely untrue?
Escaping the fakes
Thankfully, there are currently various ways to detect fake content. DARPA’s MediFor (Media Forensics) programme was founded in 2016 in response to the rising prevalence of fake content. MediFor’s aim is to build an automated system that can spot fake videos, assigning visual content an ‘integrity rating’ based on three key criteria. First, the system looks for any digital fingerprints, like a watermark or background noise. It then searches for physical glitches, and finally compares what is shown against wider media data.
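MediFor’s internal scoring isn’t public, so as a purely hypothetical sketch, the three criteria could be combined into a single rating like this – the check names, scores, and weights are all assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Checks:
    """Hypothetical per-axis scores in [0, 1]; 1 = looks authentic."""
    digital: float   # digital fingerprints: watermarks, sensor/background noise
    physical: float  # physical glitches: lighting, shadows, geometry
    semantic: float  # consistency with wider media coverage of the event

def integrity_rating(c: Checks) -> float:
    # Weighted combination; the weights are invented for this sketch,
    # not MediFor's actual formula.
    score = 0.4 * c.digital + 0.3 * c.physical + 0.3 * c.semantic
    return round(score, 2)

# A clip with clean fingerprints but implausible content scores low overall.
print(integrity_rating(Checks(digital=0.9, physical=0.8, semantic=0.2)))  # 0.66
```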
Even if you don’t have advanced software to hand, fake videos are by no means perfect yet. There are a number of physiological cues – like blinking or breathing – that are natural parts of human interaction, but that the generating algorithms often fail to reproduce. That’s why, for the moment, we can tell that fake videos are fake.
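The blinking cue is concrete enough to sketch. One published approach measures the eye aspect ratio (EAR) from six eye landmarks per frame; the ratio drops sharply when the eye closes, so a face that shows no EAR dips over many seconds probably never blinks. The landmark coordinates below are made up for illustration:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2D landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    # Two vertical eye-lid distances over the horizontal eye width;
    # p1 and p4 are the eye corners, p2/p3 upper lid, p5/p6 lower lid.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Illustrative landmarks: an open eye is tall, a closed eye is flattened.
open_eye   = [(0, 0), (1, 2), (2, 2), (3, 0), (2, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]

print(eye_aspect_ratio(*open_eye) > eye_aspect_ratio(*closed_eye))  # True
```

A detector would track this ratio frame by frame and flag footage whose EAR never crosses a blink threshold.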
There has also been pressure on the websites that host deepfakes to remove the misleading videos. In January, popular uploading site Gfycat began to actively delete ‘objectionable’ deepfakes using an AI-powered solution. When challenged about its commitment to removing deepfakes, Gfycat admitted that videos could slip under the radar. So, while there is a movement to combat deepfakes, it is still struggling to contain the problem.
Being aware of fake videos and knowing how to spot them is necessary in a digital world where seeing is no longer believing. Artificially intelligent software isn’t just capable of creating amusing and inappropriate videos – it could redefine reality. Despite efforts to fight fake videos, the real question is whether detection can keep pace with fakes that mimic reality ever more accurately.
Read more about the darker side of AI in our free weekly newsletter.