Apple explains how iPhones will scan photos for child-sexual-abuse images

August 5, 2021

Shortly after reports today that Apple will start scanning iPhones for child-abuse images, the company confirmed its plan and provided details in a news release and technical summary.

“Apple’s method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind,” Apple’s announcement said. “Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.”

Apple provided more detail on the CSAM detection system in a technical summary and said its system uses a threshold “set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”

The changes will roll out “later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey,” Apple said. Apple will also deploy software that can analyze images in the Messages application for a new system that will “warn children and their parents when receiving or sending sexually explicit photos.”

Apple accused of building “infrastructure for surveillance”

Despite Apple’s assurances, security experts and privacy advocates criticized the plan.

“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not only in the US, but around the world,” said Greg Nojeim, co-director of the Center for Democracy & Technology’s Security & Surveillance Project. “Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services.”

For years, Apple has resisted pressure from the US government to install a “backdoor” in its encryption systems, saying that doing so would undermine security for all users. Apple has been lauded by security experts for this stance. But with its plan to deploy software that performs on-device scanning and share selected results with authorities, Apple is coming dangerously close to acting as a tool for government surveillance, Johns Hopkins University cryptography Professor Matthew Green suggested on Twitter.

The client-side scanning Apple announced today could eventually “be a key ingredient in adding surveillance to encrypted messaging systems,” he wrote. “The ability to add scanning systems like this to E2E [end-to-end encrypted] messaging systems has been a major ‘ask’ by law enforcement the world over.”

Message scanning and Siri “intervention”

In addition to scanning devices for images that match the CSAM database, Apple said it will update the Messages app to “add new tools to warn children and their parents when receiving or sending sexually explicit photos.”

“Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages,” Apple said.

When an image in Messages is flagged, “the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo.” The system will let parents get a message if children do view a flagged photo, and “similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it,” Apple said.

Apple said it will also update Siri and Search to “provide parents and children expanded information and help if they encounter unsafe situations.” The Siri and Search systems will “intervene when users perform searches for queries related to CSAM” and “explain to users that interest in this topic is harmful and problematic, and provide resources from partners to get help with this issue.”

The Center for Democracy & Technology called the photo-scanning in Messages a “backdoor,” writing:

The mechanism that will enable Apple to scan images in Messages is not an alternative to a backdoor—it is a backdoor. Client-side scanning on one “end” of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have cautioned against client-side scanning because it could be used as a way for governments and companies to police the content of private communications.

Apple’s technology for analyzing images

Apple’s technical document on CSAM detection includes a few privacy promises in the introduction. “Apple does not learn anything about images that do not match the known CSAM database,” it says. “Apple can’t access metadata or visual derivatives for matched CSAM images until a threshold of matches is exceeded for an iCloud Photos account.”

Apple’s hashing technology, called NeuralHash, “analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value,” Apple wrote.
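NeuralHash itself is a neural-network-based perceptual hash and Apple has not published its internals, but the key property it describes — visually similar images producing the same hash, unlike a cryptographic hash — is shared by classic perceptual hashes. A toy "average hash" sketch illustrates the idea (the image data here is invented for the example):

```python
# Toy perceptual hash ("average hash") illustrating the property Apple
# describes for NeuralHash: near-identical images yield the same value.
# NeuralHash is a neural-network hash; this is only a simplified analogue.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the image's mean.
    return "".join("1" if p >= mean else "0" for p in flat)

image = [
    [ 10,  20, 200, 210],
    [ 15,  25, 205, 215],
    [200, 210,  10,  20],
    [205, 215,  15,  25],
]
# A re-encoded copy: every pixel brightened slightly, as transcoding might.
brighter = [[min(255, p + 5) for p in row] for row in image]

# The brightness pattern relative to the mean is unchanged, so the
# perceptual hash is identical even though the pixel data differs.
assert average_hash(image) == average_hash(brighter)
```

A cryptographic hash such as SHA-256, by contrast, would produce a completely different digest for the brightened copy, which is why perceptual hashing is used for this kind of matching.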

Before an iPhone or other Apple device uploads an image to iCloud, the “device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.”

Using “threshold secret sharing,” Apple’s “system ensures that the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content,” the document said. “Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.”
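Apple has not published the exact secret-sharing construction, but "threshold secret sharing" conventionally refers to schemes like Shamir's, where a secret is split into shares and any number of shares below the threshold reveals nothing. A minimal sketch of that primitive, assuming a Shamir-style scheme:

```python
# Minimal Shamir threshold secret sharing sketch: the secret (think of a
# decryption key) is recoverable only once `t` shares exist. This is the
# standard primitive; Apple's exact construction is not public.
import random

random.seed(0)        # deterministic output for the example
P = 2**127 - 1        # a Mersenne prime; all arithmetic is in GF(P)

def make_shares(secret, t, n):
    # Random polynomial of degree t-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation of the polynomial at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = make_shares(secret, t=3, n=5)
assert reconstruct(shares[:3]) == secret   # threshold met: secret recovered
assert reconstruct(shares[:2]) != secret   # below threshold: nothing learned
```

In Apple's description, each matching image effectively contributes a share, so the vouchers stay unreadable until the account crosses the match threshold.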

While noting the one-in-one-trillion probability of a false positive, Apple said it also “manually reviews all reports made to NCMEC to ensure reporting accuracy.” Users can “file an appeal to have their account reinstated” if they believe their account was mistakenly flagged.

User devices to store blinded CSAM database

User devices will store a “blinded database” that allows the device to determine when a photo matches a picture in the CSAM database, Apple explained:

First, Apple receives the NeuralHashes corresponding to known CSAM from the above child-safety organizations. Next, these NeuralHashes go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users’ devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database.
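Apple's blinding step uses elliptic curve cryptography, whose details are in its technical documents. The underlying idea — exponentiating a group element by a server-held secret so that the original value can't be recovered without that secret — can be sketched in a multiplicative group mod a prime instead (all names and constants here are illustrative):

```python
# Sketch of the blinding step. Apple uses elliptic curves; this stand-in
# uses the same idea in a multiplicative group mod p, where undoing the
# blinding would require either the server secret or a discrete log.
import hashlib

P = 2**255 - 19          # illustrative prime modulus
SERVER_SECRET = 0x5eed   # known only to the server in the real system

def to_group(neural_hash: bytes) -> int:
    # Map a NeuralHash onto a group element via a cryptographic hash.
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def blind(neural_hash: bytes) -> int:
    # Server-side blinding: raise the element to the secret exponent.
    return pow(to_group(neural_hash), SERVER_SECRET, P)

# Build the blinded table; the slot position depends only on the
# NeuralHash, as in Apple's description. Hash values are invented.
csam_hashes = [b"hash-of-known-image-1", b"hash-of-known-image-2"]
blinded_table = {to_group(h) % 1000: blind(h) for h in csam_hashes}
# A device holding only `blinded_table` cannot recover to_group(h), and
# therefore learns nothing about the underlying image hashes.
```

The table positions let a device look up the right slot for any photo it hashes, while the blinded values themselves stay opaque without the server secret.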

An iPhone or other device will analyze user photos, compute a NeuralHash, and look up “the entry in the blinded hash table.” The device “also uses the blinded hash that the system looked up to obtain a derived encryption key” and uses that encryption key “to encrypt the associated payload data.”

Combined with other steps, this ensures that only images matching the CSAM database will be decrypted, Apple wrote:

If the user image hash matches the entry in the known CSAM hash list, then the NeuralHash of the user image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and using the server-side secret, can compute the derived encryption key and successfully decrypt the associated payload data.

If the user image doesn’t match, the above step will not lead to the correct derived encryption key, and the server will be unable to decrypt the associated payload data. The server thus learns nothing about non-matching images.

The device doesn’t learn about the result of the match because that requires knowledge of the server-side blinding secret.

Finally, the client uploads the image to the server along with the voucher that contains the encrypted payload data and the cryptographic header.
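The match-dependent decryption Apple describes can be sketched end to end. This is a toy model, not Apple's construction: the "blinding" is a keyed hash, the cipher is a simple XOR stream, and all names and hash values are invented. It shows only the property the quoted passage claims: a matching hash lets the server re-derive the payload key, while a non-match leaves the payload unreadable.

```python
# Toy model of the voucher flow: a match means the server-derived key
# equals the key the device used, so decryption succeeds; a non-match
# yields a different key and garbage. Illustrative only.
import hashlib

SERVER_SECRET = b"server-blinding-secret"

def blinded(h: bytes) -> bytes:
    # Stand-in for the elliptic-curve blinding: a keyed hash.
    return hashlib.sha256(SERVER_SECRET + h).digest()

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher; XOR with the key is its own inverse.
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

csam_hash = b"known-csam-neuralhash"
table = {csam_hash[:4]: blinded(csam_hash)}   # position -> blinded hash

# Matching image: the device's table lookup returns the right blinded
# hash, which it uses as the payload encryption key.
key = table[csam_hash[:4]]
voucher = {"header": csam_hash, "payload": xor(b"visual derivative", key)}

# Server side: re-derive the key from the header and the server secret.
server_key = blinded(voucher["header"])
assert xor(voucher["payload"], server_key) == b"visual derivative"

# Non-matching image: the lookup misses (or hits a wrong slot), so the
# device's key differs from what the server derives, and decryption fails.
other_hash = b"innocent-photo-hash"
key2 = table.get(other_hash[:4], b"\0" * 32)
voucher2 = {"header": other_hash, "payload": xor(b"visual derivative", key2)}
assert xor(voucher2["payload"], blinded(voucher2["header"])) != b"visual derivative"
```

Note one simplification: in this toy version the device could tell whether its lookup hit, whereas in Apple's scheme the blinding ensures the device itself never learns the match result.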

As noted earlier, Apple has published the technical summary. Apple also published a longer and more detailed explanation of the “private set intersection” cryptographic technology that determines whether a photo matches the CSAM database without revealing the result.
