More internal documents show how Facebook’s algorithm prioritized anger and posts that triggered it

Nieman Lab reports:

As if there wasn’t enough Facebook news to digest already, another deep dive from The Washington Post this morning revealed that Facebook engineers changed the company’s algorithm to prioritize and elevate posts that elicited emoji reactions — many of which were rolled out in 2017. More specifically, the ranking algorithm treated reactions such as “angry,” “love,” “sad,” and “wow” as five times more valuable than traditional “likes” on the social media platform.
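To make that reported weighting concrete, here is a minimal, purely illustrative sketch. The function and variable names are invented for illustration, and the only detail taken from the reporting is the five-to-one weighting of reaction emoji relative to likes; this is not Facebook's actual ranking code.

```python
# Illustrative only: weights reflect the reported 5x value of reaction emoji
# versus likes; everything else here is an assumption for the example.

REACTION_WEIGHT = 5  # "angry", "love", "sad", "wow", etc.
LIKE_WEIGHT = 1      # a traditional "like"

def engagement_score(likes: int, reactions: int) -> int:
    """Score a post by a weighted count of its interactions."""
    return LIKE_WEIGHT * likes + REACTION_WEIGHT * reactions

# A post with 100 likes scores 100, while a post with 100 angry reactions
# scores 500, so the post that provokes reactions ranks higher in the feed.
print(engagement_score(likes=100, reactions=0))   # 100
print(engagement_score(likes=0, reactions=100))   # 500
```

Under a score like this, a post that provokes strong reactions outranks one that merely collects likes, which is the dynamic the Post's reporting describes.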

The problem with this plan for boosting engagement: the posts most likely to draw those reactions were also more likely to contain misinformation, spam, or clickbait, and the change made them more likely to show up in users' feeds. One Facebook staffer, whose name was redacted in a dump of documents shared with the Securities and Exchange Commission by whistleblower and former Facebook employee Frances Haugen, had warned that this might happen; they were proven right.

According to the Post, “The company’s data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.”

More on that:

That means that for three years Facebook systematically amped up some of the worst content on its platform, making it more prominent in users’ feeds and spreading it to a much wider audience. The power of the algorithmic promotion undermined the efforts of Facebook’s content moderators and integrity teams, who were fighting an uphill battle against toxic and harmful content.

This isn’t the first time that “anger” has reared its ugly head as a useful metric. Back in 2017, a report found that hyper-political publishers were especially adept at provoking the anger of their readers. [Continue reading…]
