You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day.

Twitter has updated its hateful conduct rules, which as of today ban dehumanizing language on the basis of religion.

Twitter shared the news in a blog post:

We create our rules to keep people safe on Twitter, and they continuously evolve to reflect the realities of the world we operate within. Our primary focus is on addressing the risks of offline harm, and research* shows that dehumanizing language increases that risk. As a result, after months of conversations and feedback from the public, external experts and our own teams, we’re expanding our rules against hateful conduct to include language that dehumanizes others on the basis of religion.

Twitter notes that tweets containing religion-based dehumanizing language must be reported before they will be removed. It shared four examples.


When reported, tweets containing this type of language that were sent before today will also be removed. However, Twitter won’t suspend any accounts for tweets that broke the rule before it took effect.

Twitter says this latest hateful conduct rule was decided on after hearing feedback from 8,000 Twitter users from 30 countries on how to improve the platform. The main themes from the feedback were:

  • Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.
  • Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”
  • Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.

Twitter also committed to maintaining its global focus going forward and to continuing to listen to user feedback.

We’ll continue to build Twitter for the global community it serves and ensure your voices help shape our rules, product, and how we work. As we look to expand the scope of this change, we’ll update you on what we learn and how we address it within our rules. We’ll also continue to provide regular updates on all of the other work we’re doing to make Twitter a safer place for everyone @TwitterSafety.

Read more about the policy change on Twitter’s blog post.
