Facebook System to Combat Disinformation That Wound Up Promoting Questionable Content is Peak Facebook

Yet another challenge for the social network

[Image: Facebook logo. Trying to limit harmful content on social media can go very wrong. Jakub Porzycki/NurPhoto via Getty Images]

It’s hard to believe that there once was a time when Facebook was regarded as a relatively low-key space where one could go to wish friends a happy birthday and see photographs of dogs being adorable. But in online years, that might as well be a century ago; the social network that inspired The Social Network is more likely now to come up in conversations about online radicalization and frustrating designs.

Facebook has taken some steps to address the problem of harmful or toxic content on its platform — but, as a recent article shows, a bug may well have made the issue worse instead of better.

Writing at The Verge, Alex Heath has more details on the issue, which came to light via an internal report documenting the efforts of a group of engineers within Facebook to address the problem. From October 2021 through March 2022, Heath writes, the engineers found that “Facebook’s systems failed to properly demote probable nudity, violence, and even Russian state media.”

Had the system worked correctly, it would have minimized the reach of these posts. As Heath writes, “the News Feed was instead giving the posts distribution, spiking views by as much as 30 percent globally.”

The bug relates to Facebook’s system of downranking, which the social network intended as a way to reduce the impact of more polarizing posts and to flag posts that might violate Facebook’s content policies. Unfortunately, the bug caused an uptick in the very types of posts the system was designed to suppress. It’s since been fixed — but news of the issue is unlikely to make anyone’s concerns about harmful content on Facebook subside.
