EPFL develops solution for detecting deepfakes
Deepfakes – fake videos produced with artificial intelligence to look real – pose a growing challenge. That's why, for the past two years, an EPFL research group has been teaming up with the Swiss startup Quantum Integrity to develop a deepfake detection solution. The team has been awarded an Innosuisse grant starting on 1 October, with deployment possible as early as next year.
Barack Obama insulting Donald Trump, Trump accusing Obama of stealing, and Mark Zuckerberg making worrying claims about how Facebook uses personal data – fake videos like these, made disturbingly realistic by artificial intelligence, have been spreading across social media over the past few years. Until recently there were still some tell-tale signs that a video had been fabricated, such as eyes that never blink. But today these phony videos – commonly known as deepfakes – have become so realistic that it's all but impossible to spot them with the naked eye. As a result, scientists, entrepreneurs and government agencies – driven by worries about the consequences of misinformation and motivated by the challenge of weeding out the fakes – are tackling the problem head on.
Beefing up an existing deepfake detection solution
EPFL's Multimedia Signal Processing Group has been working with Quantum Integrity, a startup based at EPFL Innovation Park, on a deepfake detection solution for the past two years. The research team has already completed two pilot tests and recently obtained a grant from Innosuisse, Switzerland's innovation agency, to accelerate development of the software. The project will start on 1 October. "Quantum Integrity already markets fake detection software. Our role is to make the software more powerful so that it can be used more widely," says Touradj Ebrahimi, head of the Multimedia Signal Processing Group. His group provides expertise in multimedia signal processing, while the startup brings its many years of experience in detecting fake images. Ebrahimi is affiliated with the Center for Digital Trust (C4DT) and coordinates its "Digital Information" domain. "Detecting image and video forgery to fight malicious manipulation is clearly one of the applications where artificial intelligence helps restore trust," says Olivier Crochat, Executive Director of the C4DT.
A game of cat and mouse
The game between deepfake creators and the experts who try to catch them is one of cat and mouse. And the deepfakers tend to be one step ahead, since they can generate an almost unlimited amount of new content that the experts must then try to detect. As soon as word gets out that deepfakes can be identified because people's mouths don't move naturally, for example, a malicious programmer will develop an algorithm to fix that flaw. This is the vicious circle that Prof. Ebrahimi at EPFL and Quantum Integrity CEO Anthony Sahakian aim to break.
Prof. Ebrahimi's team is keeping the details of its technology under wraps for now, since many other research groups have started working on the same problem and the team has a two-year head start on the competition. Ebrahimi will only say that "the detector also relies on artificial intelligence to keep pace with the latest tricks used by deepfake artists."
The risk of commercial fraud
For most people, the biggest fear about deepfakes is that they will be used to steal their identity. But the threat actually runs much deeper – fraudulent content can also be used to deceive manufacturers, insurers, and even customs officials. For instance, goods can be digitally added to or removed from a photo of a cargo ship before it leaves the dock, or transactions could be approved using counterfeited photos.
The technology developed by EPFL and Quantum Integrity uses a machine-learning algorithm trained on a huge database that the startup has built up over the years. This gives their technology an advantage over competing systems. The goal is to eventually create a website where people can upload content and find out immediately whether it has been falsified.
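The article does not disclose how the detector works, so the following is only a hypothetical sketch of the general approach it describes – training a binary classifier on a labelled database of real and fake samples, then using it to flag uploaded content. All names, feature choices, and data here are illustrative assumptions, not the EPFL/Quantum Integrity method.

```python
import math
import random

# Hypothetical sketch only: a tiny logistic-regression "fake detector"
# trained by stochastic gradient descent on labelled feature vectors.
# Real deepfake detectors use deep networks on image/video data.

def train_detector(samples, labels, epochs=200, lr=0.1):
    """Fit weights w and bias b so that sigmoid(w.x + b) predicts 'fake'."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of "fake"
            g = p - y                         # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def is_fake(x, w, b):
    """Classify a feature vector: True if the model says 'fake'."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy training data: "fake" samples have larger values in the first feature.
random.seed(0)
real = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(50)]
fake = [[random.uniform(2, 3), random.uniform(0, 1)] for _ in range(50)]
w, b = train_detector(real + fake, [0] * 50 + [1] * 50)
```

The upload-and-check website described above would amount to running something like `is_fake` (with a far more sophisticated model) on each submitted file.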
A threat for social networks
The research team's deepfake detection solution comes at a perfect time, because phony content is becoming increasingly common on social media. A Chinese app called Zao, introduced in late August and currently available in just a few countries, can produce astonishingly realistic deepfakes from just a few photos. Facebook and Microsoft recently announced that they would invest $10 million in a new deepfake detection program, with more details to come in December. Facebook also launched the Deepfake Detection Challenge in association with several US universities to find the best solution to what could turn out to be a real problem for social media. EPFL's solution could be a candidate for the Challenge, since it spans a variety of applications, from industrial photos to facial videos. "But we'd need to know more about the evaluation criteria and the terms for entering the Challenge, especially as far as intellectual property rights are concerned," says Prof. Ebrahimi.