Facebook today announced four ways it will work to block or reduce the spread of misinformation and hate speech on its platform. The tactics range from tweaking how its News Feed algorithm works to expanding its partnership with the Associated Press, academics, fact-checking experts, and more.
As reported by Axios, Facebook shared the updates on its efforts to block and reduce fake news and hate speech during a press event today at its headquarters. Here’s what Facebook said it is planning to do:
- Expand its partnership with the Associated Press to “debunk false and misleading video misinformation and Spanish-language content appearing on Facebook in the U.S.”
- Adjust the News Feed algorithm to reduce the rank of sites that link out much more widely than they are linked to (a rough sketch of this kind of signal follows the list).
- Reduce the reach of Facebook Groups whose members repeatedly share misinformation, and hold Group administrators more accountable for violations of Facebook’s community standards.
- Open up a consultation process with “a wide range of academics, fact-checking experts, journalists, survey researchers and civil society organizations” to explore the benefits and risks of involving “groups of Facebook users pointing to journalistic sources to corroborate or contradict claims made in potentially false content.”
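Facebook hasn’t published how the new News Feed signal is actually computed, but the idea of comparing a site’s outbound links against the links pointing back to it can be illustrated with a toy scoring function. Everything in the sketch below, including the names, the threshold, and the penalty curve, is a hypothetical stand-in rather than Facebook’s real implementation.

```python
# Illustrative only: Facebook has not published its ranking code. This sketch
# assumes a simple ratio signal -- a domain that links out far more often than
# it is linked to gets its posts' rank scores scaled down. All names and
# thresholds here are hypothetical.

from dataclasses import dataclass


@dataclass
class DomainLinkStats:
    outbound_links: int  # links from this domain out to the wider web
    inbound_links: int   # links from the wider web pointing to this domain


def link_gap_penalty(stats: DomainLinkStats,
                     ratio_threshold: float = 10.0,
                     max_penalty: float = 0.5) -> float:
    """Return a multiplier in (0, 1] applied to a post's base rank score.

    A domain that links out far more widely than it is linked to has its
    rank scaled down, but never below `max_penalty` of the original score.
    """
    # Avoid division by zero for domains with no inbound links at all.
    ratio = stats.outbound_links / max(stats.inbound_links, 1)
    if ratio <= ratio_threshold:
        return 1.0  # balanced link profile: no penalty
    # Scale the penalty with how far the ratio exceeds the threshold,
    # capped so a post is never zeroed out entirely.
    overshoot = min(ratio / ratio_threshold, 1 / max_penalty)
    return max(max_penalty, 1.0 / overshoot)


# Example: a domain with 5,000 outbound links but only 40 inbound links
# would have its posts' rank scores roughly halved.
print(link_gap_penalty(DomainLinkStats(outbound_links=5000, inbound_links=40)))
```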
Notably, Facebook said it catches the vast majority of child exploitation, terrorist, nudity, and violent content. Hate speech, however, remains much more difficult to police, with only 52% of it caught by the platform.
The company said it catches 99% of both child exploitation and terrorist propaganda, as well as 96% of nudity and 97% of graphic violence, compared with just 52% of hate speech. That figure is up from only 23% at the end of 2017.