
Apple details the ways its CSAM detection system is designed to prevent misuse

Apple has published a new document today that offers additional detail on its recently announced child safety features. The company is addressing concerns that the new CSAM detection capability could be turned into a backdoor, with specifics on the match threshold it’s using and more.

One of the more notable announcements from Apple today is that the system can be audited by third parties. Apple explains that it will publish a Knowledge Base article containing the root hash of the encrypted CSAM hash database. Users will also be able to inspect the root hash of the encrypted database on their device and compare it against the root hash in the Knowledge Base article:

Apple will publish a Knowledge Base article containing a root hash of the encrypted CSAM hash database included with each version of every Apple operating system that supports the feature. Additionally, users will be able to inspect the root hash of the encrypted database present on their device, and compare it to the expected root hash in the Knowledge Base article. That the calculation of the root hash shown to the user in Settings is accurate is subject to code inspection by security researchers like all other iOS device-side security claims.
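To illustrate what that comparison involves, here is a minimal Swift sketch. Apple has not published the exact root-hash construction, so the rootHash function below simply hashes the sorted database entries with SHA-256, and the entries themselves are invented placeholders:

    import Foundation
    import CryptoKit

    // Minimal sketch, not Apple's implementation: derive a single root hash
    // from the encrypted database entries by hashing them in sorted order.
    func rootHash(of databaseEntries: [Data]) -> String {
        var hasher = SHA256()
        for entry in databaseEntries.sorted(by: { $0.lexicographicallyPrecedes($1) }) {
            hasher.update(data: entry)
        }
        return hasher.finalize().map { String(format: "%02x", $0) }.joined()
    }

    // Placeholder data standing in for the encrypted database shipped with the OS.
    let onDeviceEntries = ["entry-a", "entry-b", "entry-c"].map { Data($0.utf8) }
    let publishedRootHash = rootHash(of: onDeviceEntries) // value from the Knowledge Base article

    // The on-device check: recompute the root hash and compare it to the published value.
    print(rootHash(of: onDeviceEntries) == publishedRootHash ? "database matches the published root hash"
                                                             : "mismatch: on-device database differs")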

This approach enables third-party technical audits: an auditor can confirm that for any given root hash of the encrypted CSAM database in the Knowledge Base article or on a device, the database was generated only from an intersection of hashes from participating child safety organizations, with no additions, removals, or changes. Facilitating the audit does not require the child safety organization to provide any sensitive information like raw hashes or the source images used to generate the hashes – they must provide only a non-sensitive attestation of the full database that they sent to Apple. Then, in a secure on-campus environment, Apple can provide technical proof to the auditor that the intersection and blinding were performed correctly. A participating child safety organization can decide to perform the audit as well.
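Stripped of the cryptography, the core of that audit is a set-equality check: the shipped database must contain exactly the hashes attested by every participating organization, with nothing added and nothing removed. A simplified Swift sketch, which omits the blinding step and uses invented hash values:

    import Foundation

    // Simplified auditor check, not Apple's implementation: the shipped database
    // must equal the intersection of the organizations' attested hash sets.
    func databaseMatchesAttestations(shippedDatabase: Set<String>,
                                     attestations: [Set<String>]) -> Bool {
        guard let first = attestations.first else { return shippedDatabase.isEmpty }
        let expected = attestations.dropFirst().reduce(first) { $0.intersection($1) }
        return shippedDatabase == expected   // no additions, removals, or changes
    }

    // Invented hash values for illustration only.
    let orgA: Set = ["h1", "h2", "h3"]
    let orgB: Set = ["h2", "h3", "h4"]
    print(databaseMatchesAttestations(shippedDatabase: ["h2", "h3"], attestations: [orgA, orgB]))       // true
    print(databaseMatchesAttestations(shippedDatabase: ["h2", "h3", "h5"], attestations: [orgA, orgB])) // false: an added entry is caught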

Apple also addressed the possibility that an organization could include something other than known CSAM content in the database. Apple says the database included in iOS will be generated with at least two child safety organizations that are not under the control of the same government:

Apple generates the on-device perceptual CSAM hash database through an intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions – that is, not under the control of the same government. Any perceptual hashes appearing in only one participating child safety organization’s database, or only in databases from multiple agencies in a single sovereign jurisdiction, are discarded by this process, and not included in the encrypted CSAM database that Apple includes in the operating system. This mechanism meets our source image correctness requirement.
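That rule is easy to picture in code. The sketch below is illustrative only – the types and values are hypothetical, not Apple’s implementation – and shows how hashes supplied by only one organization, or only by organizations in a single jurisdiction, would be discarded:

    import Foundation

    // Hypothetical types for illustration; a real database would hold perceptual hashes.
    struct ProvidedDatabase {
        let jurisdiction: String   // the sovereign jurisdiction the organization operates in
        let hashes: Set<String>    // perceptual hashes supplied by that organization
    }

    // Keep a hash only if organizations in at least two distinct jurisdictions provided it.
    func buildOnDeviceDatabase(from providers: [ProvidedDatabase]) -> Set<String> {
        var jurisdictionsPerHash: [String: Set<String>] = [:]
        for provider in providers {
            for hash in provider.hashes {
                jurisdictionsPerHash[hash, default: []].insert(provider.jurisdiction)
            }
        }
        // Hashes seen in only one organization, or only within one jurisdiction, are dropped.
        return Set(jurisdictionsPerHash.filter { $0.value.count >= 2 }.keys)
    }

    let providers = [
        ProvidedDatabase(jurisdiction: "US", hashes: ["h1", "h2", "h3"]),
        ProvidedDatabase(jurisdiction: "UK", hashes: ["h2", "h3", "h4"]),
        ProvidedDatabase(jurisdiction: "US", hashes: ["h1", "h5"]),
    ]
    // Prints a set containing only h2 and h3, the hashes backed by two jurisdictions.
    print(buildOnDeviceDatabase(from: providers))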

Apple also offers new details on the manual review process that is performed once an account exceeds the match threshold:

Since Apple does not possess the CSAM images whose perceptual hashes comprise the on-device database, it is important to understand that the reviewers are not merely reviewing whether a given flagged image corresponds to an entry in Apple’s encrypted CSAM image database – that is, an entry in the intersection of hashes from at least two child safety organizations operating in separate sovereign jurisdictions. Instead, the reviewers are confirming one thing only: that for an account that exceeded the match threshold, the positively-matching images have visual derivatives that are CSAM. This means that if non-CSAM images were ever inserted into the on-device perceptual CSAM hash database – inadvertently, or through coercion – there would be no effect unless Apple’s human reviewers were also informed what specific non-CSAM images they should flag (for accounts that exceed the match threshold), and were then coerced to do so.
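In other words, Apple is describing a two-stage gate: an account has to exceed the match threshold before anything is surfaced, and a report is made only if human reviewers confirm that the visual derivatives are CSAM. A rough Swift sketch with hypothetical names and an illustrative threshold:

    import Foundation

    // Hypothetical types for illustration only.
    struct MatchedImage {
        let matchedDatabaseEntry: String
        let reviewerConfirmedCSAM: Bool   // outcome of human review of the visual derivative
    }

    // Nothing reaches reviewers until the threshold is exceeded, and database
    // membership alone never triggers a report: reviewers must confirm CSAM.
    func shouldReport(matches: [MatchedImage], threshold: Int) -> Bool {
        guard matches.count > threshold else { return false }
        return matches.contains { $0.reviewerConfirmedCSAM }
    }

    // An account with many matches that reviewers do not confirm as CSAM
    // (e.g. non-CSAM hashes somehow inserted into the database) produces no report.
    let suspectMatches = (1...40).map { MatchedImage(matchedDatabaseEntry: "h\($0)", reviewerConfirmedCSAM: false) }
    print(shouldReport(matches: suspectMatches, threshold: 30))   // false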

You can find the full document published by Apple today, titled “Security Threat Model Review of Apple’s Child Safety Features,” right here.


