Why Facebook can’t fix itself
When Facebook was founded, in 2004, the company had few codified rules about what was allowed on the platform and what was not. Charlotte Willner joined three years later, as one of the company’s first employees to moderate content on the site. At the time, she said, the written guidelines were about a page long; around the office, they were often summarized as, “If something makes you feel bad in your gut, take it down.” Her husband, Dave, was hired the following year, becoming one of twelve full-time content moderators. He later became the company’s head of content policy. The guidelines, he told me, “were just a bunch of examples, with no one articulating the reasoning behind them. ‘We delete nudity.’ ‘People aren’t allowed to say nice things about Hitler.’ It was a list, not a framework.” So he wrote a framework. He called the document the Abuse Standards. A few years later, it was given a more innocuous-sounding title: the Implementation Standards.
These days, the Implementation Standards comprise an ever-changing wiki, roughly twelve thousand words long, with twenty-four headings—“Hate Speech,” “Bullying,” “Harassment,” and so on—each of which contains dozens of subcategories, technical definitions, and links to supplementary materials. The full document lives on an internal software system that only content moderators and select employees can access. The document available to Facebook’s users, the Community Standards, is a condensed, sanitized version of the guidelines. The rule about graphic content, for example, begins, “We remove content that glorifies violence.” The internal version, by contrast, enumerates several dozen types of graphic images—“charred or burning human beings”; “the detachment of non-generating body parts”; “toddlers smoking”—that content moderators are instructed to mark as “disturbing,” but not to remove.
Facebook’s stated mission is to “bring the world closer together.” It considers itself a neutral platform, not a publisher, and so has resisted censoring its users’ speech, even when that speech is ugly or unpopular. In its early years, Facebook weathered periodic waves of bad press, usually occasioned by incidents of bullying or violence on the platform. Yet none of this seemed to cause lasting damage to the company’s reputation, or to its valuation. Facebook’s representatives repeatedly claimed that they took the spread of harmful content seriously, indicating that they could manage the problem if they were only given more time. Rashad Robinson, the president of the racial-justice group Color of Change, told me, “I don’t want to sound naïve, but until recently I was willing to believe that they were committed to making real progress. But then the hate speech and the toxicity keeps multiplying, and at a certain point you go, Oh, maybe, despite what they say, getting rid of this stuff just isn’t a priority for them.”
There are reportedly more than five hundred full-time employees working in Facebook’s P.R. department. These days, their primary job is to insist that Facebook is a fun place to share baby photos and sell old couches, not a vector for hate speech, misinformation, and violent extremist propaganda. In July, Nick Clegg, a former Deputy Prime Minister of the U.K. who is now a top flack at Facebook, published a piece on AdAge.com and on the company’s official blog titled “Facebook Does Not Benefit from Hate,” in which he wrote, “There is no incentive for us to do anything but remove it.” The previous week, Guy Rosen, whose job title is vice-president for integrity, had written, “We don’t allow hate speech on Facebook. While we recognize we have more to do . . . we are moving in the right direction.”
It would be more accurate to say that the company is moving in several contradictory directions at once.