Science
Deepfakes look more and more real
It started with Hollywood stars suddenly appearing in porno films, but now “deepfake” videos are impacting politics too. “It’s making people scared and uncertain.”
Vincent Bongers
Thursday 15 November 2018

“President Trump is a total and complete dipshit!” The words of Barack Obama in a video that emerged in April; the images seem to be a clip from an official speech. Obama’s voice is not quite right, but the movements of his head, shoulders and mouth give the impression that you are indeed watching the former statesman.

However, the video’s producers are quick to dispel the illusion. It was made by comedian and film director Jordan Peele, who used software to make Obama move and to imitate his voice. It’s a “deepfake” video – and this one was intended as a warning.

“If I show deepfakes to an audience, they always laugh at first”, says Jeanine Reutemann from Switzerland. She is a researcher at the Centre for Innovation in The Hague. “But when they’ve finished saying ‘wow, that’s cool’ they go quiet and you can actually see them think: ‘Oh shit, what and whom can I trust now?’ The technology behind the videos has been around for a while, but the big difference now is that the software is available everywhere and it’s easy to use.” Just download a program like FakeApp, follow the tutorial and you’re good to go.

Sex, of course, was a major driving force behind the first fake videos. “Deepfakes first took off in the porno industry. You could find videos of film stars like Emma Watson in hard-core sex scenes.”

By sampling a person’s voice, processing it and then generating speech from text you write yourself, it is possible to put words in his or her mouth, and to mimic his or her movements, with a large degree of accuracy. “The more a person is in the public eye, the easier it is to capture and use his or her voice and movements.”
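How little effort this now takes can be sketched in a few lines of code. The example below is purely illustrative: it assumes the open-source Coqui TTS library and its XTTS v2 voice-cloning model, neither of which is mentioned in the article, and it stands in for whatever tooling a deepfake maker might actually use. The three steps match the ones described above: sample, process, generate.

```python
# Illustrative sketch only – the article names no specific tool.
# Assumes the open-source Coqui TTS library ("pip install TTS") and its
# XTTS v2 voice-cloning model; the file names below are hypothetical.
from TTS.api import TTS

# Step 1 – “sampling a person’s voice”: a short recording of the target
# speaker, for example clipped from a public speech.
REFERENCE_CLIP = "speaker_sample.wav"  # hypothetical recording

# Step 2 – “processing it”: the model extracts the speaker’s vocal
# characteristics from the reference clip.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Step 3 – generating speech from text of your own: any sentence you
# type is spoken in the cloned voice.
tts.tts_to_file(
    text="Words the speaker never actually said.",
    speaker_wav=REFERENCE_CLIP,
    language="en",
    file_path="cloned_speech.wav",
)
```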

At the moment, you can still tell whether a video is fake or not, says Reutemann. “For example, if you follow the movements of the mouth and hands very closely. But the makers’ skills and the technology are improving all the time, so at some point we won’t be able to tell what’s real and what’s not. The trouble is, we don’t know when that moment will arrive. And deepfakes are only a symptom of something much bigger. People are scared and uncertain because they are already being confronted with fake news, hoaxes and abuse of social-media data.”

Reutemann believes that it’s important that reliable media, such as the BBC, and organisations like Amnesty International play a part in verifying images. “For instance, Amnesty has a lab that studies images and videos of possible human-rights violations and tries to find out whether the images are legitimate. But it is very difficult to distinguish genuine news from fake news in the gigantic stream of information.”