Two academics from Princeton University say they know for a fact that Apple’s CSAM system is dangerous because they built one just like it.

They say the system they prototyped worked in exactly the same way as Apple’s approach, but they quickly spotted a glaring problem…

That is the same risk many have pointed to: A repressive government could force Apple to use a database of political images.

Assistant professor Jonathan Mayer and graduate researcher Anunay Kulshrestha write in the Washington Post that they had the same well-intentioned aim as Apple:

Our research project began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we’re also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.

We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn’t read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.

They got as far as a working prototype before calling a halt to the project.

After many false starts, we built a working prototype. But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook, and Twitter for not removing pro-democracy protest materials.
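The structural point the researchers make above — that the matcher itself has no idea what its database contains, so any database can be swapped in — can be sketched in a few lines. This is a deliberately simplified illustration, not Apple's actual system: real deployments use perceptual hashing and a private set intersection protocol rather than the exact SHA-256 lookup shown here, and all function names below are hypothetical.

```python
import hashlib

def build_database(known_images: list[bytes]) -> set[str]:
    """Build a match database from known content.
    Illustrative only: exact SHA-256, not the perceptual
    hashing a real system would use."""
    return {hashlib.sha256(img).hexdigest() for img in known_images}

def check_upload(image: bytes, database: set[str]) -> bool:
    """Flag an upload if it matches the database. Note the matcher
    never inspects *what* the database represents."""
    return hashlib.sha256(image).hexdigest() in database

# The same code works unchanged whatever the database holds:
# swap hashes of CSAM for hashes of political imagery and it
# flags dissident content instead.
csam_db = build_database([b"known-harmful-example"])
political_db = build_database([b"protest-poster-example"])

upload = b"protest-poster-example"
print(check_upload(upload, csam_db))       # False
print(check_upload(upload, political_db))  # True
```

The design flaw the researchers describe is visible in the sketch: nothing in `check_upload` restricts it to a specific category of content, so the only safeguard is whoever controls the database.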

The pair say they are baffled by Apple's decision to roll out the system despite having few answers to the risks they raised.

We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month.

That dialogue never happened. The week before our presentation, Apple announced it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices. Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.

Apple says it wouldn’t allow this kind of misuse, but as I’ve said before, it wouldn’t necessarily have any choice. A government can legally compel the company to comply with such a demand, and even in the US, the government has the power to prohibit Apple from disclosing that it is complying with such an order.

Others argue that this risk already exists: governments can already compel Apple to do whatever they like, and issue gag orders prohibiting it from disclosing what it is doing, including handing over complete access to iCloud data.

The specific risk I see here, however, is that governments now know Apple has this on-device scanning capability, and can see how trivially easy it would be to adapt it to their own ends.


About the Author

Ben Lovejoy

Ben Lovejoy is a British technology writer and EU Editor for 9to5Mac. He’s known for his op-eds and diary pieces, exploring his experience of Apple products over time, for a more rounded review. He also writes fiction, with two technothriller novels, a couple of SF shorts and a rom-com!
