Twitter currently uses a combination of human review and AI to identify problematic tweets. Where a tweet violates the company’s policies, it is removed.
But the company is now taking a novel approach to tweets that don’t technically break the rules but are ‘unhealthy’ or detract from the conversation …
Twitter’s own blog post didn’t reveal too much about how this would work, but Slate had more details. Effectively, Twitter is crowdsourcing the work of identifying trolls by looking at how people respond to their tweets.
Its software will look at a large number of signals […] such as how often an account is the subject of user complaints and how often it’s blocked and muted versus receiving more positive interactions such as favorites and retweets. The company will not be looking at the actual content of tweets for this feature—just the types of interactions that a given account tends to generate.
For instance, Harvey said, “If you send the same message to four people, and two of them blocked you, and one reported you, we could assume, without ever seeing what the content of the message was, that was generally a negative interaction.”
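The heuristic Harvey describes can be sketched as a simple weighted tally of interaction types, without ever inspecting tweet content. The function names, signal weights, and threshold below are all hypothetical; Twitter has not published its actual scoring model.

```python
# Hypothetical sketch of a content-blind interaction score.
# Weights and threshold are invented for illustration only.

def interaction_score(interactions):
    """Score an account by the reactions its tweets generate.

    `interactions` is a list of strings such as "block", "report",
    "mute", "favorite", or "retweet". Negative signals raise the
    score; positive ones lower it. Tweet content is never examined.
    """
    weights = {
        "block": 2.0,
        "report": 3.0,
        "mute": 1.0,
        "favorite": -1.0,
        "retweet": -1.0,
    }
    return sum(weights.get(kind, 0.0) for kind in interactions)


def looks_negative(interactions, threshold=2.0):
    """Flag an account whose interactions skew negative overall."""
    return interaction_score(interactions) >= threshold


# Harvey's example: the same message sent to four people draws
# two blocks and one report, so the account is flagged.
print(looks_negative(["block", "block", "report", "favorite"]))
```

In this sketch, Harvey's example scores 2 + 2 + 3 − 1 = 6, well above the illustrative threshold, so the account would be treated as generating negative interactions.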
Once a problematic account has been identified, Twitter applies a kind of shadow-ban: its tweets are relegated to the bottom of a thread behind a ‘Show more replies’ link and excluded from normal search results. All of its subsequent tweets are treated the same way until its ‘score’ improves.
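The demotion itself amounts to a two-tier sort of replies. Here is a minimal sketch of that behavior, assuming a hypothetical set of flagged accounts and a simple list-of-dicts reply representation; none of this reflects Twitter's real data structures.

```python
# Hypothetical sketch of the 'Show more replies' demotion:
# replies from flagged accounts are moved behind the rest,
# preserving the original order within each tier.

def order_replies(replies, flagged_accounts):
    """Split a thread's replies into a visible tier and a demoted
    tier hidden behind 'Show more replies'."""
    visible = [r for r in replies if r["author"] not in flagged_accounts]
    demoted = [r for r in replies if r["author"] in flagged_accounts]
    return visible + demoted


replies = [
    {"author": "troll_account", "text": "bad-faith reply"},
    {"author": "regular_user", "text": "on-topic reply"},
]
print(order_replies(replies, flagged_accounts={"troll_account"}))
```

Because the flag attaches to the account rather than to individual tweets, every subsequent reply from a flagged account is demoted the same way until its score recovers.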
Twitter believes that the approach will be equally effective against both trolls and bots, and says it expects fewer than 1% of accounts to be affected.
Photo: Reuters/Kacper Pempel