I write this now that the latest FaceApp trend has died down, while I eagerly await its next resurgence.
FaceApp isn’t really new. It has been around since 2017. However, it has gotten eerily better at simulating what people look like as they age, become younger,
and so on. In the middle of 2019, your social media feed, like mine, was filled with pictures of people comparing their current selves with older, younger, whatever-else-have-you selves. If you somehow lived under a rock and haven’t heard of this app, you can still download it and see for yourself. You may find it quite fun, even if you’re a few months late and the trend has all but faded from memory.
This FaceApp resurgence came on the heels of another social media trend a few months earlier: posting photos taken ten years apart. Ten-year challenge, anyone? That was quite fun to see too. I’m pretty sure the FaceApp developers drew inspiration from that trend, or vice versa.
While these trends seem like innocent, clean fun, they carry obvious implications from an AI perspective. The conspiracy theorist in me agrees with the many who suspect this could somehow be used against us, or for some other nefarious agenda. It becomes even more interesting given that the developers of FaceApp are from Russia. It makes for a good Tom Clancy plot, doesn’t it? Something only the likes of Jack Ryan can unravel...
To twist the plot a bit more, have you heard of DeepFake? If not, here’s a quick 101. DeepFake is a portmanteau – a blending of the words – (1) deep learning, a term widely used in technology circles for how AIs are developed to learn on their own, and (2) fake. This newer, and to my mind even scarier, trend has the capability not only to alter images and swap faces, but also to create fabricated yet realistic-looking images and videos.
Case in point is the controversy that Zao, a very popular face-swapping app in China, is embroiled in right now. It started innocently enough: users of the app could swap their faces with popular actors in blockbuster movies simply by uploading their pictures to the app. Zao then creates simulated video clips featuring the users’ faces instead of the actual actors’. In 30 seconds, users can get deepfaked.
What worries me beyond this particular example is how freely available deepfake technology can be abused. It has been a growing concern because it can manipulate images to swing, say, the electorate’s views in an election by fabricating a politician’s actions. More worrisome is the possibility of this technology gaining widespread free use and application – which, when abused, can amount to bullying and deceit of the highest order. Just imagine how it could be used for pornography, crime, economic manipulation, and the like. If people’s identities are anyone’s fair game, then who’s to say what is real anymore?
Ironically, the only way to address this is for technology to catch up to deepfakes and create even faster-learning algorithms that can distinguish the real from the deepfaked.
While several start-ups are already trying to do this, there have been no major inroads. Recently, Facebook and Microsoft joined the fray with the Deepfake Detection Challenge, which offers $10 million in research grants and rewards and is supported by various top universities. It remains to be seen whether this will move the needle.
In the meantime, we all need to be extra cautious and mindful of the fine print of every new app we download and use, lest we become victims of deepfakes – then, now, and in the future.
Joy Santamarina is a consulting principal in the APAC region specializing in the telecommunications, media, and technology industry. Send feedback to firstname.lastname@example.org