Why Facebook really, really doesn’t want to discourage extremism
Steve Rathje, Jay Van Bavel and Sander van der Linden write:
Our findings may reflect the fact that, more and more, political identities are driven by hating the opposition more than loving one’s own party. Out-party hate has been increasing steadily over the past few decades, researchers find, and is at its highest level in 40 years. Out-group hate also predicts whom we vote for more strongly than in-party love does. In much the same way, whom we hate captures more attention online than whom we love.
In a recent detailed article, technology writer Karen Hao described how Facebook’s “content-recommendation models” promote “posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.” Since out-group animosity is very likely to go viral, a social media business model that tries to keep us engaged so it can sell advertising ends up rewarding politicians, brands and hyperpartisan media companies for attacking their enemies.
More important, under this business model, social media companies are unlikely to look for ways to reduce animosity on their platforms.
For example, in November, the New York Times reported that Facebook declined to make permanent a tweak to its news feed algorithm that reduced the amount of harmful content shown to users. Why? The change also reduced how often people opened the app.
Such decisions might be helping Facebook’s bottom line: Despite years of controversy, Facebook recently reached a $1 trillion market value.
Facebook has also recently denied that a problem even exists, and has issued employees a “playbook” for responding to accusations that it polarizes discussions online. The talking points include the claim that there is little evidence Facebook causes polarization, though a recent controlled experiment found a significant reduction in political polarization when Americans logged off Facebook for a month. [Continue reading…]