
Search warrant shows how Apple tackles child abuse images on iCloud and email

A search warrant issued on behalf of Homeland Security Investigations provides a glimpse into how Apple detects and reports child abuse images uploaded to iCloud or sent via its email servers, while protecting the privacy of innocent customers.

The first stage of detection is automated, using a system common to most tech companies…

For each child abuse image already detected by authorities, a “hash” is created. This is effectively a digital signature for that image, and tech companies can have their systems automatically search for images that match this hash.
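In rough pseudocode terms, that matching step might look something like the sketch below. This is purely illustrative: real systems rely on perceptual hashes such as Microsoft's PhotoDNA, which survive resizing and re-compression, rather than the plain cryptographic hash used here, and the list of known hashes comes from bodies like NCMEC rather than from the provider itself.

```python
import hashlib

# Hash values for images already identified by the authorities. These are
# placeholder strings; in practice the list is supplied to providers, and
# real systems use perceptual hashes rather than SHA-256.
KNOWN_IMAGE_HASHES: set[str] = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}

def image_hash(image_bytes: bytes) -> str:
    """Compute the 'digital signature' for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_image(image_bytes: bytes) -> bool:
    """True if an uploaded image matches a known abuse image."""
    return image_hash(image_bytes) in KNOWN_IMAGE_HASHES
```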

Forbes explains what usually happens when a match is detected.

Once the threshold has been met, that’s enough for a tech company to contact the relevant authority, typically the National Center for Missing and Exploited Children (NCMEC). NCMEC is a nonprofit that acts as the nation’s law enforcement “clearing house” for information regarding online child sexual exploitation. It will typically call law enforcement after being tipped about illegal content, often [prompting] criminal investigations.

However, Apple appears to go a little further, manually checking the images to confirm that they are suspect before then providing law enforcement agencies with the name, address, and mobile phone number associated with the relevant Apple ID.
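Combining the Forbes description with Apple's extra step, the overall flow might look roughly like the sketch below. The threshold value, the reviewer stub, and the exact data fields are assumptions made for illustration; all the warrant confirms is that Apple manually checks matched images and then supplies the name, address, and phone number tied to the Apple ID.

```python
from dataclasses import dataclass, field

REPORT_THRESHOLD = 1  # assumed for illustration; the real threshold is not public

@dataclass
class Account:
    apple_id: str
    name: str
    address: str
    phone: str
    matched_images: list[bytes] = field(default_factory=list)

def manual_review(image_bytes: bytes) -> bool:
    """Stub for the human confirmation step described in the warrant."""
    return True  # a reviewer would actually inspect the image here

def report_to_ncmec(account: Account, images: list[bytes]) -> None:
    """Stub: file a report that includes the account holder's details."""
    print(f"Reporting {len(images)} images for {account.apple_id} "
          f"({account.name}, {account.address}, {account.phone})")

def handle_hash_match(account: Account, image_bytes: bytes) -> None:
    """Record a hash match and escalate once the threshold is met."""
    account.matched_images.append(image_bytes)
    if len(account.matched_images) >= REPORT_THRESHOLD:
        # Apple's extra step: a person confirms the matched images are
        # genuinely suspect before anything is handed to NCMEC.
        confirmed = [img for img in account.matched_images if manual_review(img)]
        if confirmed:
            report_to_ncmec(account, confirmed)
```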

The process was revealed by the search warrant, which includes comments from an Apple employee.

The investigating officer published an Apple employee’s comments on how they first detected ‘several images of suspected child pornography’ being uploaded by an iCloud user and then looked at their emails.

‘When we intercept the email with suspected images they do not go to the intended recipient. This individual… sent 8 emails that we intercepted. [Seven] of those emails contained 12 images. All 7 emails and images were the same, as was the recipient email address. The other email contained 4 images which were different than the 12 previously mentioned. The intended recipient was the same,’ the Apple worker’s comments read.

‘I suspect what happened was he was sending these images to himself and when they didn’t deliver he sent them again repeatedly. Either that or he got word from the recipient that they did not get delivered.’

The Apple employee then examined each of these images of suspected child pornography, according to the special agent at the Homeland Security Investigations unit.
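The interception the employee describes, with matched attachments never reaching the recipient, can be pictured as a filter in the outgoing mail path. The sketch below reuses the matches_known_image helper from the earlier example; the quarantine_for_review function and the withholding behaviour are assumptions for illustration, since the warrant only confirms that flagged emails ‘do not go to the intended recipient’.

```python
def quarantine_for_review(sender: str, recipient: str,
                          images: list[bytes]) -> None:
    """Stub standing in for whatever internal review queue is actually used."""
    pass

def filter_outgoing_email(sender: str, recipient: str,
                          attachments: list[bytes]) -> bool:
    """Return True if the message may be delivered, False if it is withheld."""
    # matches_known_image is the hash-comparison helper from the earlier sketch.
    flagged = [a for a in attachments if matches_known_image(a)]
    if flagged:
        # The warrant says flagged messages never reach the recipient; here the
        # message is simply held back and queued for human review instead.
        quarantine_for_review(sender, recipient, flagged)
        return False
    return True
```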

Apple’s approach here seems ideal. It only examines images when they have been matched against the hash of a known image, so there should be a very low risk of Apple intercepting and viewing innocent images. Additionally, the company appears to make a manual check before reporting, which acts as a safeguard against a mistaken hash match and ensures it only hands over personal data for the owner of an Apple ID when it is appropriate to do so.

