Deepfakes' challenge to trust and truth
From personal abuse and reputational damage to the breakdown of democratic politics through the manipulation of public opinion, deepfakes are increasingly in the spotlight for the harm that they might cause to individuals and society as a whole. Tanya Petersen discussed these issues with Aengus Collins, deputy director and head of policy at EPFL’s International Risk Governance Center.
Will deepfakes become the most powerful tool of misinformation ever seen? Can we mitigate, or govern against, the coming onslaught of synthetic media?
Our research focuses on the risks that deepfakes create. We highlight risks at three levels: the individual, the organizational and the societal. In each case, knowing how to respond means first understanding what is at risk and for whom. And it's important to note that these risks don't necessarily involve malicious intent. Typically, if an individual or an organization faces a deepfake risk, it's because they've been targeted in some way – for example, non-consensual pornography at the individual level, or fraud against an organization. At the societal level, however, one of the things our research highlights is that the potential harm from deepfakes is not necessarily intentional: the growing prevalence of synthetic media can stoke concerns about fundamental social values such as trust and truth.
Can we prioritize, and if so, how and where should we focus our energy to avoid harm from deepfakes?
In our research we have suggested a simple framework involving three dimensions: the severity of the harm that might be caused, the scale of the harm, and the resilience of the target. This three-way analysis suggests that individual and societal harms should be the priority. Organizations tend to be more resilient: many will have existing processes and resources that can be redirected toward potential deepfake risks. For individuals, by contrast, the severity can be very high. Think about the potential lasting consequences for a woman targeted by non-consensual deepfake pornography, and the resilience required to deal with that. In terms of societal impacts, worries are rising about dramatic risks, such as the undermining of elections, but there is also the risk of a quieter process of societal disruption: a low-intensity, low-severity process that nevertheless leads to systemic problems as deepfakes chip away at the foundations of truth and trust.
But computers don't have values, so are deepfakes a technical problem or a fundamental societal problem brought to the surface by scale and accessibility?
At this stage the two are inextricable, and I don't think it works anymore to say simply that it's a human problem or a technical problem. Finding a common vocabulary or frame of reference for shaping the impact of technology on societal values is one of the biggest challenges for both policymakers and developers of technology. Of course, technology is a tool, but values can affect or distort the making of the tool in the first place. I think we see that tension quite prominently at the moment in debates around AI and bias.
This mix of technology, societal values, the interaction between the two, the biases of tech developers and globalization is incredibly complex. Where should we begin in thinking about the governance of deepfakes, and is it even possible?
It is incredibly complex. Innovation is moving at an unprecedented pace and the policy process is struggling to keep up. There’s no simple lever we can pull to fix this, but there is quite a bit of work being done to make the regulatory process more agile and creative. Also, even though it can take time for policymakers to get to grips with emerging technologies, they can subsequently move quite quickly. For example, there has been a lot of movement on data protection in recent years, and developments with AI and social media platforms may be starting to come to a head. Policymakers are catching up and are starting to draw some lines in the sand. Maybe some of these precedents will help us to avoid the same mistakes with deepfake technology.