What Facebook did for Chauvin’s trial should happen all the time
On Monday, Facebook vowed that its staff was “working around the clock” to identify and restrict posts that could lead to unrest or violence after a verdict was announced in the murder trial of the former Minneapolis police officer Derek Chauvin. In a blog post, the company promised to remove “content that praises, celebrates or mocks” the death of George Floyd. Most of the company’s statement amounted to pinky-swearing to really, really enforce its existing community standards, which have long prohibited bullying, hate speech, and incitements to violence.
Buried in the post was something less humdrum, though: “As we have done in emergency situations in the past,” declared Monika Bickert, the company’s vice president of content policy, “we may also limit the spread of content that our systems predict is likely to violate our Community Standards in the areas of hate speech, graphic violence, and violence and incitement.” Translation: Facebook might turn down the dial on toxic content for a little while. Which raises some questions: Facebook has a toxic-content dial? If so, which level is it set at on a typical day? On a scale of one to 10, is the toxicity level usually a five—or does it go all the way up to 11?
This is not the first time Facebook has talked about reducing the amplification of inflammatory posts to make its platform a better and safer place. In the run-up to and the aftermath of the 2020 presidential election, Facebook talked about the “break glass” measures it was taking to limit the spread of misinformation and incitements to violence in the United States. Such steps had previously been reserved for “at-risk countries” such as Myanmar, Ethiopia, and Sri Lanka. Now these exceptional measures may be deployed in Minneapolis, which Facebook has “temporarily deemed to be a high-risk location” because of the Chauvin trial.