Many years before ChatGPT was released, my research group, the University of Cambridge Social Decision-Making Laboratory, wondered whether it was possible to have neural networks generate misinformation. To achieve this, we trained ChatGPT’s predecessor, GPT-2, on examples of popular conspiracy theories and then asked it to generate fake news for us. It gave us thousands of misleading but plausible-sounding news stories. A few examples: “Certain Vaccines Are Loaded With Dangerous Chemicals and Toxins,” and “Government Officials Have Manipulated Stock Prices to Hide Scandals.” The question was, would anyone believe these claims?
To find out, we created the first psychometric tool for measuring this, which we called the Misinformation Susceptibility Test (MIST). In collaboration with YouGov, we used the AI-generated headlines to test how susceptible Americans are to AI-generated fake news. The results were concerning: 41 percent of Americans incorrectly thought the vaccine headline was true, and 46 percent thought the government was manipulating the stock market. Another recent study, published in the journal Science, showed not only that GPT-3 produces more compelling disinformation than humans, but also that people cannot reliably distinguish between human and AI-generated misinformation.
My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you likely won’t even realize it. In fact, you may have already been exposed to some examples. In May of 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image showing a large cloud of smoke, which caused public uproar and even a brief dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction and use AI to boost their political attacks. [Continue reading…]
The New Hampshire attorney general’s office on Monday said it was investigating reports of an apparent robocall that used artificial intelligence to mimic President Joe Biden’s voice and discourage voters in the state from coming to the polls during Tuesday’s primary election.
Attorney General John Formella said the recorded message, which was sent to multiple voters on Sunday, appears to be an illegal attempt to disrupt and suppress voting. He said voters “should disregard the contents of this message entirely.”
A recording of the call reviewed by The Associated Press features a voice similar to Biden’s and employs his often-used phrase, “What a bunch of malarkey.” It then tells the listener to “save your vote for the November election.”
“Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again,” the voice mimicking Biden says. “Your vote makes a difference in November, not this Tuesday.”
It is not true that voting in Tuesday’s primary precludes voters from casting a ballot in November’s general election. Biden is not campaigning in New Hampshire and his name will not appear on Tuesday’s primary ballot after he elevated South Carolina to the lead-off position for the Democratic primaries, but his allies are running a write-in campaign for him in the state. [Continue reading…]