
Opinion: The Apple CSAM scanning controversy was entirely predictable

Update: Within minutes of this piece being written, an interview was posted in which Craig Federighi admitted that Apple should have handled things differently.

One thing about the CSAM scanning controversy is now abundantly clear: It took Apple completely by surprise. Which is a surprise.

Ever since the original announcement, Apple has been on a PR blitz to correct misapprehensions, and to try to address the very real privacy and human rights concerns raised by the move …

CSAM scanning controversy

Let’s briefly recap the Child Sexual Abuse Material (CSAM) furore.

Apple was already on the back foot when the plans were leaked shortly before the company announced them. Cryptography and security expert Matthew Green tweeted the plans, calling them a bad idea.

I’ve had independent confirmation from multiple people that Apple is releasing a client-side tool for CSAM scanning tomorrow. This is a really bad idea.

The leak – which didn’t include details of the protections Apple has against false positives – meant that four concerns were raised ahead of the announcement.

Then there was the official announcement itself. This, honestly, was in large part responsible for mainstream media confusion – because it included two totally separate measures that were quickly and erroneously conflated in numerous reports:

  • CSAM scanning using digital fingerprints from known abusive materials
  • AI-powered detection of probable nudes in iMessages sent or received by children
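
To make the distinction concrete, here is a minimal sketch of why these are two entirely different mechanisms. The type names, functions, and threshold below are hypothetical, purely for illustration – this is not Apple’s actual implementation: the CSAM check compares a photo’s digital fingerprint against a database of fingerprints of already-known images, while the iMessage feature runs an on-device classifier over brand-new images, with no database and no reporting involved.

```swift
import Foundation

// Illustrative sketch only – names, functions, and the 0.9 threshold are
// hypothetical, not Apple's actual implementation.

// Measure 1: CSAM detection. A fingerprint ("hash") derived from each photo
// is compared against a database of fingerprints of *known* abusive images.
struct FingerprintDatabase {
    let knownHashes: Set<String>

    func matches(photoHash: String) -> Bool {
        // Pure lookup: no understanding of what the photo actually shows.
        knownHashes.contains(photoHash)
    }
}

// Measure 2: the iMessage feature. A machine-learning classifier estimates
// whether a *new, never-before-seen* image probably contains nudity.
protocol NudityClassifier {
    func probabilityOfNudity(in imageData: Data) -> Double
}

func shouldBlurIncomingImage(_ imageData: Data,
                             using classifier: NudityClassifier) -> Bool {
    // No database, no match against known material, no report to anyone –
    // just a warning shown on the child's device.
    classifier.probabilityOfNudity(in: imageData) > 0.9
}
```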

Security experts continued to raise concerns even after the announcement, as did many of Apple’s own employees.

The CSAM scanning controversy grew so noisy that other security experts had to call for calm, and a less heated debate.

Apple’s response

The company’s immediate response was to leak an internal memo that acknowledged that “some people have misunderstandings, and more than a few are worried about the implications”, and tried to balance this with a glowing thank-you note from the National Center for Missing and Exploited Children (NCMEC).

Four days after the announcement, Apple published a six-page FAQ attempting to address both misunderstandings and actual concerns about the measures.

As someone commented at the time, the fact that you need six pages of FAQs, and didn’t have them ready on the day of the announcement, is proof enough that the company made a hash of explaining things in the first place.

The company said that expansion outside the US would be cautious, and engaged in a series of background briefings and interviews to try to counter the negative publicity.

Apple’s mistakes

In my view, Apple made three fundamental mistakes here.

Naivety

Apple’s view seemed to be: Clearly everyone is appalled by CSAM, so clearly everyone should applaud measures taken to fight it. After all, other tech giants, like Facebook and Google, already routinely scan for CSAM in photo uploads and cloud storage, and Apple was doing this in a more privacy-conscious way. Everyone should have been happy.

That was naive because any steps taken to address this problem inevitably involve balancing privacy against protection. Every company needs to choose its own position on that spectrum, and no position on it is uncontroversial.

While other tech companies just get on with it quietly in the background, Apple made a big announcement – which was clearly going to generate a lot of media attention, both informed and uninformed.

Blindness to its own brand image

Any tech company should expect controversy if it makes a public announcement about this, but Apple should have been 10 times more aware. First, because it’s Apple: anything the company announces tends to be headline news.

But second, and more importantly, Apple has spent literally years touting privacy, privacy, privacy. It has put up huge billboards. It has run amusing ads. It has an entire privacy microsite. Its CEO talks about privacy in every interview and public appearance. The company attacks other tech giants over privacy. It fought the entire ad industry over a new privacy feature.

It would not be an exaggeration to say that, in recent years, privacy has been both a major selling point and a huge part of Apple’s brand image.

I am frankly stunned that Apple didn’t understand that any reduction of privacy, however minor it may be, and however good the reason, was going to create massive waves.

A non-response to the greatest threat

Apple has good protective measures against one risk: false positives. The company says that a report will only be triggered by matches on multiple photos, not just one. The company didn’t reveal how many matches are needed, but did claim that the threshold meant the risk of a false report was one in a trillion. Update: Federighi indicated that the number of matches required is 30.
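
To see why a multi-photo threshold makes a false report so unlikely, here is a rough back-of-the-envelope sketch. The per-photo false-match rate and library size below are illustrative assumptions – Apple has not published them; only the 30-match threshold comes from Federighi’s interview.

```swift
import Foundation

// Rough illustration only. The per-photo false-match rate and library size
// are assumptions, not Apple's published figures; the 30-match threshold
// is the number Federighi cited.
let perPhotoFalseMatchRate = 1e-6   // assumed chance one innocent photo falsely matches
let librarySize = 100_000           // assumed number of photos in a user's library
let threshold = 30                  // matches required before any human review

// Log of the binomial probability of exactly k false matches among n photos,
// computed in log space to avoid underflow.
func logBinomialPMF(k: Int, n: Int, p: Double) -> Double {
    let logChoose = lgamma(Double(n) + 1) - lgamma(Double(k) + 1) - lgamma(Double(n - k) + 1)
    return logChoose + Double(k) * log(p) + Double(n - k) * log(1 - p)
}

// Probability of reaching the threshold: sum the tail from k = 30 upwards.
// With such a tiny per-photo rate, the first few terms dominate.
var probabilityOfFalseReport = 0.0
for k in threshold...(threshold + 50) {
    probabilityOfFalseReport += exp(logBinomialPMF(k: k, n: librarySize, p: perPhotoFalseMatchRate))
}
print("P(≥\(threshold) false matches) ≈ \(probabilityOfFalseReport)")
// Under these assumed numbers the result is astronomically small – far below
// one in a trillion – which is the intuition behind Apple's claim.
```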

Some experts have taken issue with Apple’s calculation, but whatever the real number, the risk does seem to be extremely low – and it is further mitigated by a manual review at Apple. That means that if you have enough matches in your photos to trigger an alert, someone at Apple will look at the flagged photos to see whether or not they are CSAM. If not, no report is passed on to the authorities.

That still raises a small privacy risk – that someone at Apple ends up looking at what may be perfectly innocent photos – but there is zero risk that anyone’s life is going to be ruined by a false report to the authorities. Personally, I’m totally happy with that.

But the greatest risk remains:

Misuse by authoritarian governments

A digital fingerprint can be created for any type of material, not just CSAM. What’s to stop an authoritarian government from adding images of political campaign posters, or similar material, to the database?

So a tool that is designed to target serious criminals could be trivially adapted to detect those who oppose a government or one or more of its policies.
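
The reason this adaptation would be trivial is that the matching code itself has no idea what the fingerprints in its database represent. In a hypothetical sketch like the one below – not Apple’s code – whoever supplies the database decides what gets flagged; nothing in the scanning pipeline needs to change.

```swift
// Hypothetical sketch, not Apple's code: the scanner only compares hashes.
// It cannot tell whether the supplied database contains fingerprints of
// CSAM or fingerprints of protest posters – that choice rests entirely
// with whoever provides the database.
struct ScanResult {
    let flaggedPhotoIDs: [String]
}

func scanLibrary(photoHashes: [String: String],      // photo ID -> fingerprint
                 suppliedDatabase: Set<String>) -> ScanResult {
    let flagged = photoHashes
        .filter { suppliedDatabase.contains($0.value) }
        .map(\.key)
    return ScanResult(flaggedPhotoIDs: flagged)
}

// The very same function, handed two different databases:
// scanLibrary(photoHashes: library, suppliedDatabase: csamFingerprints)
// scanLibrary(photoHashes: library, suppliedDatabase: protestPosterFingerprints)
```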

Potential expansion into messaging

If you use an end-to-end encrypted messaging service like iMessage, Apple has no way to see the content of those messages. If a government arrives with a court order, Apple can simply shrug and say it doesn’t know what was said. 

But if a government adds fingerprints for types of text – let’s say the date, time, and location of a planned protest – then it could easily create a database of political opponents.

Apple’s response to this was that it would “refuse such demands.” As I said earlier, however, that is not a promise the company can make.

That statement is predicated on Apple having the legal freedom to refuse. In China, for example, Apple has been legally required to remove VPN, news, and other apps, and to store the iCloud data of Chinese citizens on servers owned by a government-controlled company.

There is no realistic way for Apple to promise that it will not comply with future requirements to process government-supplied databases of “CSAM images” that also include matches for materials used by critics and protestors. As the company has often said when defending its actions in countries like China, Apple complies with the law in each of the countries in which it operates.

What should Apple have done instead?

I’ll repeat my earlier point that doing anything at all is controversial – and, for that matter, so is doing nothing. There is no magic solution here.

But there is one very clear thing Apple should have done differently if it was going to make an announcement at all: It should have made the CSAM announcement first, addressed all of the issues around that, and then made a separate and later announcement about the iMessage feature. That way, the two wouldn’t have been conflated – and a huge chunk of the mainstream media concern about this would have been eliminated.

But it probably shouldn’t have made a CSAM scanning announcement at all.

Ironically, Apple’s best bet would have been to do something that is actually less transparent and less private, but also less controversial: scan photos on iCloud against the CSAM hashes, and simply note that in the iCloud privacy policy (observing that all cloud services do this), rather than make a song and dance about it.

That would be less controversial because everyone else does it, and because security experts already knew that iCloud isn’t private: Apple doesn’t use end-to-end encryption, which means it holds the keys, and we already knew that it cooperates with law enforcement by handing over iCloud data on receipt of a court order.

If it had started doing this, I think most security experts would simply have shrugged – nothing new here – and it wouldn’t have come to the attention of mainstream media.

There may be method in the madness

I did raise one possibility in my initial response.

Apple’s strong privacy messaging has meant that failing to use end-to-end encryption for iCloud backups is looking increasingly anomalous – most especially in China, where a government-owned company has access to the servers on which the backups are stored.

So one possibility is that this is the first step in a new compromise by Apple: It makes a future switch to E2E encryption for iCloud backups – which include photos and videos – but also builds in a mechanism by which governments can scan user photo libraries. And, potentially, messages too.

But if that is the case, Apple still messed up. It should have announced a switch to end-to-end encryption first, then explained that, in the light of this, it would have to change the way it scans for CSAM – and pointed out that the new method offers better privacy.

That’s my view – what’s yours? Do you agree that Apple messed up here, or do you think it handled things as well as was possible? Please take our poll, and share your thoughts in the comments.
