Last month, I wrote about Apple’s well-intentioned but profoundly unwise plan to begin scanning photos on user devices for Child Sexual Abuse Material (CSAM). The announcement provoked a degree of outcry from privacy advocates and security researchers that seems to have caught the company by surprise—prompting Apple to press pause on the rollout in order to “collect input and make improvements before releasing these critically important child safety features.”
While this is good news, the wording suggests they remain committed to moving ahead with the features they’d announced—including the photo scanning tool—perhaps with some tweaks to account for outside input. For the other components in the suite of child protection measures—which affect the Messages app and Siri searches—that approach makes sense. But the chief problem with the CSAM scanning tool is the underlying concept, not the details of its implementation. The details of the cryptographic architecture are on the whole quite ingenious: The system gets an A in engineering, but fails political economy.
Apple seems to have been unprepared for the degree of backlash to their announcement in part because they really have put substantial effort into attempting to develop a mechanism that detects child abuse imagery in a privacy-preserving way. Running part of the detection process on the user’s device is very likely a prelude to enabling full end-to-end encryption for files stored in the cloud—the architecture makes very little sense otherwise—and avoiding the server-side scanning of images that many other cloud providers routinely perform. And while they’re clearly aware governments around the world will seek to leverage this new capability, the company seems confident they’ve designed the system in a way that will enable them to resist such requests. Here’s what their Frequently Asked Questions document says about the possibility of governments seeking to co-opt the system to search for content other than child abuse images:
Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. […] The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design.
This is, I think, the key to their confidence that the system they’ve built would resist efforts to hijack it for other types of surveillance. In their dispute with the FBI several years back, when they resisted efforts to compel them to weaken the encryption on a terrorist shooter’s iPhone, their legal argument hinged on the premise that they could not be compelled to rewrite their iPhone operating system (iOS) to accomplish what the FBI was demanding. Because the list of hash values used to scan for child abuse imagery is embedded in the operating system, with updates to the list pushed out as part of their regular operating system updates, they appear to believe that they could successfully resist demands to repurpose their CSAM scanning tool using similar logic: Searching for other types of content would require them to push out new operating system code to millions of users. A related argument holds that the scanning tool therefore changes little about the threat landscape: If we assume governments are prepared to compel developers to push out compromised operating systems, couldn’t they order the inclusion of the underlying spyware as well, whether or not Apple introduces their own CSAM detection algorithm?
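To make the mechanics they’re relying on concrete, here is a deliberately oversimplified sketch of the kind of on-device check the FAQ describes: one fixed list of known-image fingerprints shipped with the operating system, consulted before a photo is uploaded to iCloud Photos. This is my own illustration, not Apple’s code. The real system uses a perceptual hash (NeuralHash) rather than an ordinary cryptographic hash, and a blinded matching protocol far more elaborate than a set lookup, but the sketch captures why changing what gets searched for would have to ride along with an operating system update.

```swift
import Foundation
import CryptoKit

// Stand-in for the hash list that, per Apple's FAQ, is "stored in the operating
// system of every iPhone and iPad user": one fixed list for all users, changed
// only through regular OS updates. The entries here are arbitrary placeholders.
let bundledKnownHashes: Set<String> = [
    "f2ca1bb6c7e907d06dafe4687e579fce76b37e4e93b7605022da52e6ccc26fd2",
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
]

// Reduce a photo to a fingerprint. Apple's actual design uses NeuralHash, a
// perceptual hash meant to survive resizing and re-encoding; SHA-256 stands in
// here only to keep the example self-contained.
func fingerprint(of photoData: Data) -> String {
    SHA256.hash(data: photoData)
        .map { String(format: "%02x", $0) }
        .joined()
}

// On-device check run before a photo is uploaded to iCloud Photos. In the real
// protocol the comparison is cryptographically blinded, so neither the device
// nor Apple learns the result until an account crosses a match threshold.
func shouldFlagForReview(_ photoData: Data) -> Bool {
    bundledKnownHashes.contains(fingerprint(of: photoData))
}
```

The feature that matters for Apple’s argument is that no individual user’s device can quietly be handed a different list or different matching code: Everyone gets the same list, delivered the same way iOS itself is, which is what the FAQ means when it says targeted attacks against specific individuals are not possible under this design.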
Here’s why I remain unpersuaded.