This June, in the political battle leading up to the 2024 US presidential primaries, a series of images were released showing Donald Trump embracing one of his former medical advisers, Anthony Fauci. In a few of the shots, Trump is captured awkwardly kissing the face of Fauci, a health official reviled by some US conservatives for promoting masking and vaccines during the COVID-19 pandemic.
“It was obvious” that they were fakes, says Hany Farid, a computer scientist at the University of California, Berkeley, and one of many specialists who examined the pictures. On close inspection of three of the photos, Trump’s hair is strangely blurred, the text in the background is nonsensical, the arms and hands are unnaturally placed and the details of Trump’s visible ear are not right. All are hallmarks, for now, of images produced by generative artificial intelligence (AI).
Such deepfake images and videos, made with ‘deep learning’ AI tools such as text-to-image generators, are now rife. Although fraudsters have long used deception to make a profit, sway opinions or start a war, the speed and ease with which huge volumes of viscerally convincing fakes can now be created and spread — paired with a lack of public awareness — is a growing threat. “People are not used to generative technology. It’s not like it evolved gradually; it was like ‘boom’, all of a sudden it’s here. So, you don’t have that level of scepticism that you would need,” says Cynthia Rudin, an AI computer scientist at Duke University in Durham, North Carolina.
Dozens of systems are now available for unsophisticated users to generate almost any content for any purpose, whether that’s creating deepfake Tom Cruise videos on TikTok for entertainment; bringing back the likeness of a school-shooting victim to create a video advocating gun regulation; or faking a call for help from a loved one to scam victims out of tens of thousands of dollars. Deepfake videos can even be generated in real time on a live video call. Earlier this year, Jerome Powell, chair of the US Federal Reserve, had a video conversation with someone he thought was Ukrainian President Volodymyr Zelenskyy, but who was in fact an impersonator.
The quantity of AI-generated content is unknown, but it is thought to be exploding. Academics commonly quote an estimate that around 90% of all internet content could be synthetic within a few years [1]. “Everything else would just get drowned out by this noise,” says Rudin, which would make it hard to find genuine, useful content. Search engines and social media will just amplify misinformation, she adds. “We’ve been recommending and circulating all this crap. And now we’re going to be generating crap.”
Although a lot of synthetic media is made for entertainment and fun, such as the viral image of Pope Francis wearing a designer puffer jacket, some is agenda-driven and some malicious — including vast amounts of non-consensual pornography, in which someone’s face is transposed onto another body. Even a single synthetic file can make waves: an AI-generated image of an explosion at the US Pentagon that went viral in May, for example, caused the stock market to dip briefly. The existence of synthetic content also allows bad actors to brush off real evidence of misbehaviour by simply claiming that it is fake.
“People’s ability to really know where they should place their trust is falling away. And that’s a real problem for democracy,” says psychologist Sophie Nightingale at Lancaster University, UK, who studies the effects of generative AI. “We need to act on that, and quite quickly. It’s already a huge threat.” She adds that this issue will be a big one in the coming year or two, with major elections planned in the United States, Russia and the United Kingdom.