“Don’t believe what you see or hear”: the security and ethical implications of Deepfakes
The rise of synthetic media and deepfakes (an artificial intelligence technique that combines and superimposes existing images and videos onto source images or videos) is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable. Even today, we trust a phone call from a friend or a video clip featuring a known politician simply because we recognize their voices and faces.
Previously, no commonly available technology could have synthetically created this media with comparable realism, so we treated it as authentic by definition. With the development of synthetic media and deepfakes, this is no longer the case. Every digital communication channel our society is built upon, whether that be audio, video, or even text, is at risk of being subverted.
This public discussion will examine the ethical and security implications of the exponential proliferation of deepfakes. In particular, we will discuss the findings of one of the first reports on the state of deepfakes, recently published by Deeptrace, which clearly highlights how women are the primary victims of this technology, notably through applications such as Deepnude.
More information on the report: https://deeptracelabs.com/mapping-the-deepfake-landscape/
Speakers:

- Dr. Giorgio Patrini, CEO, Chief Scientist and Co-founder, Deeptrace
- Ms. Anne-Marie Buzatu, Executive-in-Residence at the Geneva Centre for Security Policy and Director / Co-founder of Security and Human Empowerment Solutions
- Dr. Jean-Marc Rickli, Head of Global Risk and Resilience at the Geneva Centre for Security Policy
- Ms. Fleur Heyworth, Head of Gender and Inclusive Security at the Geneva Centre for Security Policy