The ability of Internet users to spread a video of Friday’s slaughter in New Zealand marked a triumph — however appalling — of human ingenuity over computerized systems designed to block troubling images of violence and hate.
People celebrating the mosque attacks that left 50 people dead were able to keep posting and reposting videos on Facebook, YouTube and Twitter despite the websites’ use of largely automated systems powered by artificial intelligence to block them. Clips of the attack stayed up for many hours and, in some cases, days.
This failure has highlighted Silicon Valley’s struggles to police platforms that are massively lucrative yet also persistently vulnerable to outside manipulation despite years of promises to do better.
Friday’s uncontrolled spread of horrific videos — a propaganda coup for those espousing hateful ideologies — also raised questions about whether social media can be made safer without undermining business models that rely on the speed and volume of content uploaded by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.
“It’s an uncontrollable digital Frankenstein,” said Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology.