After cracking down on harmful content and bringing support for hardware security keys on mobile devices, Facebook now says it’s tackling misinformation across its apps.
In a blog post today, Facebook’s VP of Integrity Guy Rosen explains the challenges the company is dealing with across its apps, starting with fake accounts:
Let’s start with fake accounts. We take a hard line against this activity and block millions of fake accounts each day, most of them at the time of creation. Between October and December 2020, we disabled more than 1.3 billion of them. We also investigate and take down covert foreign and domestic influence operations that rely on fake accounts. Over the past three years, we’ve removed over 100 networks of coordinated inauthentic behavior (CIB) from our platform and keep the public informed about our efforts through our monthly CIB reports.
To combat misinformation on its social network, Facebook works with a global network of more than 80 independent fact-checkers, who review content in more than 60 languages.
When fact-checkers rate a post as false, Facebook applies a warning screen to it. “We know that when a warning screen is placed on a post, 95% of the time people don’t click to view it,” Rosen writes. “We also notify the person who posted it and we reduce the distribution of Pages, Groups, and domains that repeatedly share misinformation.”
Facebook also highlights the hubs it has created over the years, such as the COVID-19 Information Center, the Climate Science Information Center, and the US 2020 Voting Information Center.
Rosen also says that the company has no “financial interest in turning a blind eye to misinformation” and points to changes in the News Feed ranking system meant to connect people to “meaningful posts from their friends and family.”