CSAM
Apple’s efforts to detect Child Sexual Abuse Material (CSAM).
What is CSAM?
While US federal law uses the term child pornography, the National Center for Missing and Exploited Children (NCMEC) explains why the term CSAM is preferred.
NCMEC chooses to refer to these images as Child Sexual Abuse Material (CSAM) to most accurately reflect what is depicted – the sexual abuse and exploitation of children. Not only do these images and videos document victims’ exploitation and abuse, but when these files are shared across the internet, child victims suffer re-victimization each time the image of their sexual abuse is viewed […]
While CSAM is seen and transmitted on computers and through other technology, these images and videos depict actual crimes being committed against children. The human element, children at risk, must always be considered when talking about this offense that is based in a high-tech world.
How is it usually detected?
CSAM is usually detected by cloud services like Google Photos, which scan uploaded photos and compare them against a database of known CSAM images. This database is provided by NCMEC and similar organizations around the world.
The actual matching process uses what’s known as a hash, or digital fingerprint. This is derived from key elements of the image, and is deliberately fuzzy so that it will continue to work when images are resized, cropped, or otherwise processed. This means there will sometimes be false positives: an innocent image whose hash happens to be a close enough match to a CSAM one.
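To make the idea concrete, here is a minimal sketch of fuzzy hash matching, assuming 64-bit perceptual hashes compared by Hamming distance. It illustrates the general technique only, not Apple’s NeuralHash or any specific vendor’s implementation:

```swift
// Minimal sketch of fuzzy hash matching (illustrative only, not NeuralHash).
// Two 64-bit perceptual hashes count as a match when they differ in only
// a handful of bits, i.e. their Hamming distance is below a cutoff.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

func isLikelyMatch(_ photoHash: UInt64, _ knownHash: UInt64, maxDistance: Int = 8) -> Bool {
    // A few flipped bits (caused by resizing, cropping, or recompression)
    // still count as a match; the looser the cutoff, the more innocent
    // images produce false positives.
    hammingDistance(photoHash, knownHash) <= maxDistance
}
```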
How is Apple detecting CSAM?
Apple made an announcement in early August 2021 about its own plans to begin scanning for CSAM.
Apple has chosen to take a somewhat different approach, which it says better protects privacy. The process works like this (a simplified sketch follows below):
- Apple downloads the CSAM database hashes to your iPhone
- An on-device process looks for matches with hashes of your photos
- If fewer than 30* are found, no action is taken
- If 30+ matches are found, low-resolution versions of your photos are manually examined by Apple
- If the photos are found to be innocent, no further action is taken
- If manual review confirms them as CSAM, law enforcement is informed
*Apple initially said only that there was a threshold of matching images, without revealing what it was, but Craig Federighi implied in an interview that this is 30 images.
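As a rough sketch of the decision logic described above: the real protocol reaches this outcome cryptographically, using private set intersection and threshold secret sharing, so that Apple learns nothing about an account below the threshold. The names below are illustrative, not Apple’s APIs:

```swift
// Simplified sketch of the threshold decision described above.
// Illustrative only: the real system produces this result cryptographically,
// without Apple seeing per-photo results below the threshold.
enum Outcome {
    case noAction       // fewer than `threshold` matches: nothing happens
    case manualReview   // low-resolution copies are reviewed by an Apple employee
}

func evaluate(matchCount: Int, threshold: Int = 30) -> Outcome {
    matchCount >= threshold ? .manualReview : .noAction
}

let outcome = evaluate(matchCount: 12)   // .noAction (below the 30-image threshold)
```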
What concerns have been raised?
Concerns have been raised by cybersecurity experts, human rights organizations, governments, and Apple’s own employees. The four main ones are:
- Accidental false positives could ruin someone’s reputation
- Deliberate false positives (aka collision attacks) could be created to achieve the same goal
- Authoritarian governments could add political posters and similar material to the database
- The same hash-based on-device searches could later be applied to iMessage
Additionally, because Apple simultaneously announced an entirely separate feature designed to detect nude photos in iMessages sent or received by children, many non-technical people conflated the two, thinking Apple was scanning their photos for nudes.
How has Apple responded?
Apple engaged in a flurry of rapid-fire PR activity designed to correct misapprehensions and address genuine concerns. This included a leaked internal memo, a series of background briefings, interviews, and a six-page FAQ.
Apple said that images were only scanned if they were synced with iCloud, so customers could opt out if they wished. It added that the risk of either accidental or deliberate false positives was statistically insignificant, as multiple matches were required before an account was flagged. Even then, an Apple employee would review the images before any report was made to law enforcement.
The company said it would roll out the feature on a country-by-country basis, and would refuse any government demand to add political images to the database – a promise it cannot realistically make.
Since then, things have gone completely quiet, with no sign of any move by Apple to actually launch CSAM scanning.
Why has this proven so controversial?
Google, Amazon, Facebook, and many other tech giants already routinely scan for CSAM and report instances to law enforcement. Apple is merely joining in, and trying to take a more privacy-focused approach by performing the actual comparison on-device. So why so much controversy?
In part, for the reason explained earlier: Apple’s mistake in simultaneously announcing two different features.
But the outrage was entirely predictable, given the years Apple has spent touting its privacy credentials.
The company has put up huge billboards. It has run amusing ads. It has an entire privacy microsite. Its CEO talks about privacy in every interview and public appearance. The company attacks other tech giants over privacy. It fought the entire ad industry over a new privacy feature.
Any risk that customer privacy might be compromised, however small the likelihood and however well-intentioned the reason, was bound to raise eyebrows.
Apple may not be able to keep its head down much longer, however, as a UK CSAM law could force the issue.