“Deepfake generation and detection is like an arms race”

Peter Grönquist and Yufan Ren in Singapore © AI Singapore / EPFL 2022

Two EPFL computer scientists have taken home a major prize in Singapore’s Trusted Media Challenge, a five-month-long competition aimed at cracking the code of deepfakes.

Although they exploded onto the scene only four years ago, deepfakes now seem to be everywhere. While the technology has opened up incredible possibilities in film and advertising, it is also notorious for its use in fake news and fraud, raising questions about the damage this kind of manipulation might cause now and in the future.

Recently, Peter Grönquist and Yufan Ren, who work out of the Image and Visual Representation Lab (IVRL) in the School of Computer and Communication Sciences, took part in a global challenge to improve the identification of deepfakes. Participants were given datasets of real and fake videos with audio (approximately 4,000 real clips and 8,000 fake clips) on which to train and test their models, and Grönquist and Ren used them to build an AI model that estimates the probability that any given video is a fake.

“Our approach was to build an algorithm that separated the video, the audio, and the synchronization of the two, and to look at all of these modalities separately. Taking one step at a time made it much easier to look at the data and analyze it,” said Grönquist. “But there were challenges. We had a 9-second time limit per video clip to predict whether it was a deepfake or not, so there was no way to look at the whole 90-second video frame by frame. We had to do a lot of fine-tuning on our algorithm for it to be super-efficient,” he continued.
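The article does not describe the team’s model in any detail, but the general idea of scoring each modality separately and then combining the results into a single probability can be sketched as follows. Everything in this snippet, including the Clip fields, the weights, and the fake_probability helper, is a hypothetical illustration, not the team’s actual code.

```python
# Purely illustrative sketch (not the team's actual model): fuse per-modality
# anomaly scores into a single "probability of fake". The scores themselves are
# stand-ins; a real detector would compute them with learned models applied to
# sampled frames, the audio track, and audio-video synchronization features.
from dataclasses import dataclass


@dataclass
class Clip:
    video_score: float  # hypothetical visual-artifact score in [0, 1]
    audio_score: float  # hypothetical audio-artifact score in [0, 1]
    sync_score: float   # hypothetical audio-video mismatch score in [0, 1]


def fake_probability(clip: Clip, weights=(0.5, 0.25, 0.25)) -> float:
    """Combine the three modality scores into one probability (weighted average)."""
    w_video, w_audio, w_sync = weights
    p = (w_video * clip.video_score
         + w_audio * clip.audio_score
         + w_sync * clip.sync_score)
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability


if __name__ == "__main__":
    suspicious = Clip(video_score=0.9, audio_score=0.4, sync_score=0.7)
    print(f"Estimated probability of fake: {fake_probability(suspicious):.2f}")
```

In practice, meeting a 9-second-per-clip budget would mean sampling a subset of frames rather than scoring every frame of a 90-second video; the weights above are arbitrary and would normally be learned or tuned on the training data.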

“The journey was really interesting. We led the competition for the whole first part of the challenge, so midway through we needed to keep an eye on the leaderboard to see if another team was going to be able to catch us. Also, in addition to the time limit there was a submission limit, meaning we couldn’t just test and try our algorithm over and over again; we really needed to think about what we wanted to test with the limited time and budget,” added Ren.

Grönquist and Ren ultimately placed second, winning a prize of more than CHF 100,000, which they will spend on further research and possibly a start-up venture focused on deepfake detection. They are already in talks with several companies to explore the different possibilities open to them.

“We’re passionate about this topic. It’s a continuing arms race between generation and detection, and we need to keep developing new detection technologies every time someone designs new deepfake generation technology. The technology moves so fast on the generation side that even if today we have a solution to detect, say, 98% of deepfake videos, we can’t be assured that our solutions will work for future technologies,” explained Grönquist.

Speaking from the award ceremony in Singapore, Ren said the team was thrilled with the result and very grateful for the support of Professor Sabine Süsstrunk, Head of the Image and Visual Representation Lab. “This would never have been possible without her or the IVRL. She suggested this from the get-go, she motivated us to get started and do this, and she also provided technical feedback along the way. I’m sure our achievement would never have happened without Sabine and we are very thankful for that. In the beginning we were expecting to fail, so it was hard to believe when the final result came in and we had been so successful!” he concluded.


Author: Tanya Petersen

Source: Image and Visual Representation Laboratory IVRL

This content is distributed under a Creative Commons CC BY-SA 4.0 license. You may freely reproduce the text, videos and images it contains, provided that you indicate the author’s name and place no restrictions on the subsequent use of the content. If you would like to reproduce an illustration that does not contain the CC BY-SA notice, you must obtain approval from the author.