[T]he problems with Mr. Trump’s presence on Facebook — the lies, the propaganda, the incitements — are not just Trump problems. They’re Facebook problems (and to be fair, Twitter problems).
Manipulation, misinformation, fear and loathing are endemic to today’s social media platforms, whose engagement-driven algorithms are built to spread whatever messages tap into users’ viscera and provoke a quick “like” or an angry comment. Yet the platforms have delegated much of the work of moderating this content to overwhelmed contractors and fallible artificial intelligence software. The tide of hogwash and bile may recede when a super-spewer such as Mr. Trump is deplatformed. But the dynamics that enabled him endure.
It is those underlying dynamics, and not solely Mr. Trump’s right to use the platform, that any truly independent oversight of Facebook would address. Last month, the U.S. Senate began deliberating over how social media algorithms and design choices mold political discourse. While its hearing was inconclusive at best, it at least served notice that they’re a topic of potential regulatory interest.
Facebook endowed the Oversight Board with a measure of autonomy. It funded the board with an irrevocable trust, promised operational independence and pledged to treat its content decisions (though not its policy recommendations) as binding. Yet it did not empower the board to watch over its products or systems — only its rules and how it applies them.
That’s why some communication scholars have dismissed the board as a red herring, substituting a simulacrum of due process in certain high-profile cases for substantive reform. While the term “oversight board” suggests accountability for the institution it oversees, this board’s function is essentially the opposite: to shift accountability for Facebook’s decisions away from the company itself. The board’s power to adjudicate individual content decisions may be real, but it’s a power that Mr. Zuckerberg never wanted in the first place.