Twitter is testing a new feature that labels misleading news, fake news, and lies from public figures and politicians with bright orange or red warnings. If Twitter decides to move ahead with the change, it could arrive as soon as March 5th.
As NBC News reports, the test surfaced in a leaked demo sent to the outlet.
Twitter confirmed to NBC that it is testing the feature but hasn't committed to rolling it out to all users. Either way, Twitter plans to launch a new effort to tackle misinformation on March 5th. This follows Twitter's ban on deepfakes and other synthetic/manipulated media announced earlier this month.
Twitter confirmed that the leaked demo, which was accessible on a publicly available site, is one possible iteration of a new policy to target misinformation it plans to roll out March 5.
Here's what the bright orange badges include, as shown in the demo:
The demo features bright red and orange badges for tweets that have been deemed “harmfully misleading,” in nearly the same size as the tweet itself and prominently displayed directly below the tweet that contains the harmful misinformation.
Examples of misinformation included a false tweet about whistleblowers by House Minority Leader Kevin McCarthy, R-Calif., a tweet about gun background checks by Sen. Bernie Sanders, I-Vt., and a tweet by an unverified Twitter account posting a doctored video of House Speaker Nancy Pelosi, D-Calif.
One version of the test includes a community incentive for users to contribute to the feature…
In one iteration of the demo, Twitter users could earn “points” and a “community badge” if they “contribute in good faith and act like a good neighbor” and “provide critical context to help people understand information they see.”
NBC notes that this aspect could help keep trolls and bad-faith actors from having an outsized impact.
The points system could prevent trolls or political ideologues from becoming moderators if they too often differ from the broader community in what they mark as false or misleading.
The central question put to community members is whether a tweet is "harmfully misleading."
In the demo, community members are asked whether the tweet is "likely" or "unlikely" to be "harmfully misleading." They are then asked to estimate, on a sliding scale of 1 to 100, how many community members will answer the same way, before explaining why the tweet is harmfully misleading.
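To make the incentive mechanics concrete, here is a minimal sketch of how a consensus-based points system like the one NBC describes could work: raters mark a tweet "likely" or "unlikely" to be harmfully misleading, a majority verdict is taken, and raters earn or lose points depending on whether they matched the community. Every function name, point value, and threshold below is an assumption for illustration, not Twitter's actual design.

```python
from collections import Counter

def consensus_verdict(ratings):
    """Majority vote among 'likely' / 'unlikely' ratings (hypothetical)."""
    counts = Counter(ratings.values())
    return counts.most_common(1)[0][0]

def update_points(points, ratings, verdict, reward=10, penalty=5):
    """Award points to raters who matched consensus; dock those who didn't.

    Reward/penalty values are made up; the real system's scoring is unknown.
    """
    for user, rating in ratings.items():
        if rating == verdict:
            points[user] = points.get(user, 0) + reward
        else:
            points[user] = points.get(user, 0) - penalty
    return points

# Three hypothetical raters evaluate one tweet.
ratings = {"alice": "likely", "bob": "likely", "carol": "unlikely"}
points = update_points({}, ratings, consensus_verdict(ratings))
print(points)  # {'alice': 10, 'bob': 10, 'carol': -5}
```

A scheme like this is what would let the system sideline users who "too often differ from the broader community": anyone whose points drop below some floor could simply lose moderator influence.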