Yesterday saw a Twitter API U-turn by the company’s owner, Elon Musk, after he had said that the API powering both third-party Twitter clients and automated posting bots would no longer be freely available.
Separately, a new report says that even the most common and easily-detected Child Sexual Abuse Material (CSAM) remains on the platform after Musk vowed that removing it was his #1 priority …
Twitter API to be chargeable
If you’re reading this story after clicking on 9to5Mac’s link on Twitter, you’re doing so thanks to an Application Programming Interface (API) which allows posts to be automatically tweeted as soon as they are published.
The same API powers countless bots that post links to everything from breaking news stories to fun things like which tech exec has just unfollowed another.
The API was also used by third-party Twitter apps, like Tweetbot.
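To illustrate the kind of integration at stake, here is a minimal sketch of how an auto-posting bot might assemble a request to Twitter’s v2 tweet-creation endpoint (`POST https://api.twitter.com/2/tweets`). The endpoint URL and JSON payload shape are Twitter’s documented v2 API; the function name and the placeholder access token are hypothetical, and a real bot would hold an OAuth 2.0 user token with the `tweet.write` scope.

```python
import json

# Real Twitter API v2 endpoint for creating a tweet.
API_URL = "https://api.twitter.com/2/tweets"

# Hypothetical placeholder -- a real bot would use a valid OAuth 2.0
# user access token, not an app-only bearer token.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def build_tweet_request(title: str, url: str) -> dict:
    """Assemble the HTTP pieces for auto-tweeting a newly published article."""
    return {
        "method": "POST",
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Content-Type": "application/json",
        },
        # Twitter's v2 create-tweet endpoint expects a JSON body
        # with a "text" field.
        "body": json.dumps({"text": f"{title} {url}"}),
    }

req = build_tweet_request("New story just published", "https://example.com/story")
print(req["url"])   # https://api.twitter.com/2/tweets
```

A publishing system would fire a request like this the moment an article goes live, which is why API pricing changes hit news bots and third-party clients so directly.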
The first sign of trouble came when those third-party apps stopped working, and it turned out that was intentional. Musk later said that the company would continue to offer APIs for use by auto-posting bots, but that this would be a chargeable service from February 9. The proposed costs would be unaffordable even for larger media organizations.
U-turn announced by Musk
The API announcement created uproar, as most of the developers behind popular Twitter bots said that they would no longer be able to offer the service. Others pointed out that free and immediate access to news and other timely information on Twitter was a huge part of its appeal, and that Musk was effectively removing a key reason for people to use the social network.
Musk yesterday tweeted a partial U-turn on the upcoming policy.
Responding to feedback, Twitter will enable a light, write-only API for bots providing good content that is free
Few seemed convinced it was enough, however.
Twitter failing to block CSAM
Back in November of last year, Musk said that removing CSAM was “priority #1.” More than two months later, a New York Times report says that he hasn’t even managed to block the most easily-detected CSAM.
Indeed, says the report, things have gotten worse, not better.
A review by The New York Times found that the imagery, commonly known as child pornography, persisted on the platform, including widely circulated material that the authorities consider the easiest to detect and eliminate.
After Mr. Musk took the reins in late October, Twitter largely eliminated or lost staff experienced with the problem and failed to prevent the spread of abusive images previously identified by the authorities, the review shows. Twitter also stopped paying for some detection software considered key to its efforts [presumably to cut costs].
Twitter claims it has become increasingly aggressive at tackling the problem, but an experiment by the paper throws this claim into considerable doubt. The paper quickly found well-known material, as did others.
“The volume we’re able to find with a minimal amount of effort is quite significant,” said Lloyd Richardson, the technology director at the Canadian center. “It shouldn’t be the job of external people to find this sort of content sitting on their system.”
Apple was forced to put its own CSAM-detection plans on hold after concerns were raised about the potential for abuse by repressive governments. Finally, in December, the company announced that it had permanently abandoned the plan.