Do the notes taken by the interpreters at the recent Helsinki summit include the words “Snowden” and “swap”? We could ask the Russians to check their (assumed) audio recording and let us all know whether Presidents Trump and Putin discussed such a prospect during their long private chat. Trump wrong-footing his own country’s intelligence community by delivering their most-wanted, Edward Snowden, seems precisely the trolling that Putin would enjoy.
What else might leak soon, in the form of audio of the authentic voices of two familiar public figures speaking to each other through the only other people in the room, the US interpreter and her Russian counterpart? What other mischief could be coming in this dawning era of astonishingly realistic “deep fakes”?
Artificial intelligence is becoming more proficient at using genuine audio and video to help create fake audio and video in which people appear to say or do things they have not said or done. Two examples: celebrities apparently reading their own tweets aloud, and fabricated video of Barack Obama. Some developers indicate awareness of the ethical implications.
The issues are analysed in a new draft paper, Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, by two US law professors, Robert Chesney and Danielle Citron. They unflinchingly yet constructively explain the potential harms to individuals and societies – for example, to reputations, elections, commerce, security, diplomacy and journalism – and suggest ways the problem can be ameliorated through technology, law, government action and market initiatives. The paper reflects and respects both experience and scholarship, a style familiar from the Lawfare blog that Chesney co-founded. The specifics in the paper are mostly American, but its relevance is global.

Deep fakes are aided by the quick, many-to-many spread of information, especially on social media, and by human traits such as our biases, our attraction to what is novel and negative, and our comfort in our filter bubbles. [Continue reading…]