Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots

“Annie Newport” and Nina Jankowicz write:

Scientists, policy experts, and artists have been concerned about the unintended consequences of artificial intelligence since before the technology was readily available. With most technological innovations, it’s common to ask whether the invention could be weaponized, and there has been no shortage of experts warning that AI is being used to spread disinformation. Little more than two years after the public release of AI language models, there are already documented cases of malign actors using the technology to mass-produce harmful and false narratives at a previously infeasible scale. Now, an apparent attempt by Russia to infect AI chatbots themselves with propaganda shows that the internet as we know it may be changed forever.

The self-iterating and widespread nature of artificial intelligence is a perfect medium for a novel abuse of the technology vis-à-vis disinformation. This can be done in two ways. The more familiar harmful uses for AI are external to the technology: they spread falsehood by instructing AI models to mass-produce false narratives—for example, using AI to quickly craft thousands of articles containing selected disinformation, then publishing those articles online. But disinformation can also be dispersed via the internal corruption of large language models themselves. This phenomenon—which we have dubbed “LLM grooming” in a new report—is poised to take the internet and digital disinformation into a dangerous new era.

Our report details evidence that the so-called “Pravda network” (no relation to the propaganda outlet Pravda), a collection of websites and social media accounts that aggregate pro-Russia propaganda, is engaged in LLM grooming with the potential intent of inducing AI chatbots to reproduce Russian disinformation and propaganda. Since we published our report, NewsGuard and the Atlantic Council’s Digital Forensic Research Lab (DFRLab)—organizations that study malign information operations—confirmed that Pravda network content was being cited by some major AI chatbots in support of pro-Russia narratives that are provably false. Left unaddressed, these false narratives could plague nearly every piece of information online, undermining democracy around the world. [Continue reading…]
