Last month, I wrote about Apple’s well-intentioned but profoundly unwise plan to begin scanning photos on user devices for Child Sexual Abuse Material (CSAM). The announcement provoked a degree of outcry from privacy advocates and security researchers that seems to have caught the company by surprise—prompting Apple to press pause on the rollout in order to “collect input and make improvements before releasing these critically important child safety features.”
While this is good news, the wording suggests they remain committed to moving ahead with the features they’d announced—including the photo scanning tool—with perhaps some tweaks and modifications to account for outside input. For the other components in the suite of child protection measures—which affect the Messages app and Siri searches—that approach makes sense. But the chief problem with the CSAM scanning tool lies in the underlying concept, not the details of its implementation. The cryptographic architecture is on the whole quite ingenious: the system gets an A in engineering, but fails political economy.
Apple seems to have been unprepared for the degree of backlash to their announcement in part because they really have put substantial effort into attempting to develop a mechanism that detects child abuse imagery in a privacy-preserving way. Running part of the detection process on the user’s device is very likely a prelude to enabling full end-to-end encryption for files stored in the cloud—the architecture makes very little sense otherwise—and avoiding the server-side scanning of images that many other cloud providers routinely perform. And while they’re clearly aware governments around the world will seek to leverage this new capability, the company seems confident they’ve designed the system in a way that will enable them to resist such requests. Here’s what their Frequently Asked Questions document says about the possibility of governments seeking to co-opt the system to search for content other than child abuse images:
Apple would refuse such demands and our system has been designed to prevent that from happening. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. […] The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under this design.
This is, I think, the key to their confidence that the system they’ve built would resist efforts to hijack it for other types of surveillance. In their dispute with the FBI several years back, when they resisted efforts to compel them to weaken the encryption on a terrorist shooter’s iPhone, their legal argument hinged on the premise that they could not be compelled to rewrite their iPhone operating system (iOS) to accomplish what the FBI was demanding. Because the list of hash values that would be used to scan for child abuse imagery is embedded in the operating system, with updates to the list pushed out as part of their regular operating system updates, they appear to believe that they could successfully resist demands to repurpose their CSAM scanning tool using similar logic: Searching for other types of content would require them to push out new operating system code to millions of users. On this reasoning, the scanning tool adds little new risk: if we assume governments are prepared to compel developers to push out compromised operating systems, after all, couldn’t they order the inclusion of the underlying spyware as well, whether or not Apple introduces their own CSAM detection algorithm?
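To make the mechanics Apple is relying on concrete, here is a deliberately simplified sketch in Swift. The names, the placeholder hash function, and the data layout are my own illustration, not Apple’s implementation (which uses a perceptual hash called NeuralHash and reveals matches only server-side, after a threshold is crossed); the point is simply that the matching logic is generic, while the thing that determines what gets flagged is an opaque data table shipped with the operating system.

```swift
import Foundation

// Illustrative sketch only; not Apple's code. The real system uses a
// perceptual hash called NeuralHash and blinds match results until a
// server-side threshold is crossed.
struct HashDatabase {
    // Opaque fingerprints shipped inside the OS image; in Apple's design
    // these come from NCMEC and other child safety organizations.
    let knownHashes: Set<Data>
}

// Placeholder digest standing in for a real perceptual hash function.
func perceptualHash(of photo: Data) -> Data {
    Data(photo.prefix(16))
}

// The matching code is entirely generic: nothing here "knows" it is looking
// for child abuse imagery. Point the database at different fingerprints and
// the same code hunts for different content.
func flagForUpload(_ photo: Data, against database: HashDatabase) -> Bool {
    database.knownHashes.contains(perceptualHash(of: photo))
}
```

Retargeting a system like this means changing the contents of `knownHashes`, not the code around it, which is exactly the distinction the legal argument turns on.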
Here’s why I remain unpersuaded.
First, Apple implies that governments—or at least the U.S. government—would be deterred by the inability to conduct “targeted” searches: A compelled scan would have to run on the devices of all users. This is not in itself enormously reassuring, as they’re effectively saying “but this could only be used for dragnet searches.” As was reported back in 2016, the American intelligence community apparently obtained an order from the Foreign Intelligence Surveillance Court compelling Yahoo to run just such a blanket search server-side, using custom software to scan all incoming messages for a particular “selector” associated with an intelligence target. There is, moreover, a line of precedent associated with drug-sniffing dog searches that blesses sweeping but highly particularized searches, on the theory that a dog’s sniff only uncovers the presence or absence of contraband, in which an individual lacks a “reasonable expectation of privacy.” It is, unfortunately, unclear under current doctrine whether American courts would regard a scan for specific contraband content that turns up no match as a “search” within the meaning of the Fourth Amendment. And, of course, a court might provide for “targeting” server-side, when the results of the on-device scans are processed, in much the same way “minimization procedures” are used to filter the results of large-scale intelligence collection.
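To see how “targeting” could be layered onto a blanket scan, consider another hedged sketch. Everything in it (the report type, the notion of account-level selectors) is hypothetical rather than a description of Apple’s pipeline or of any actual order; it shows only that particularity can be imposed after the fact, as a filter on results every device has already produced.

```swift
import Foundation

// Hypothetical illustration, not a description of Apple's pipeline or any
// actual order: the on-device scan runs identically for every user, and
// "targeting" is a filter applied afterward, out of the user's sight, to
// whatever the scans report back.
struct MatchReport {
    let accountID: String   // whose device produced the match
    let matchedHash: Data   // which fingerprint it matched
}

// Analogous to a minimization procedure: the dragnet is universal, but only
// matches tied to designated selectors are retained for further action.
func minimize(_ reports: [MatchReport], selectors: Set<String>) -> [MatchReport] {
    reports.filter { selectors.contains($0.accountID) }
}
```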
Moreover, courts care about how practically onerous an order is, as well as the degree of relative intrusion a proposed search entails. Information already divulged to a company or other “third party” receives less Fourth Amendment protection on the theory that users have waived that “reasonable expectation of privacy” in the information provided. Thus a court that would indeed blanch at the prospect of ordering a software developer to install new and untested spyware on the devices of millions of users might very well look differently at a proposal to merely alter the parameters of spyware already running on those devices. Nor would I be overly confident courts would view this as a demand to “rewrite the operating system.” While an updated hash list is indeed delivered as part of an operating system update, adding new hash values to the list provided by child safety organizations does not entail writing new code.
None of this is meant as an endorsement of the lines of legal reasoning I’ve just sketched. Apple would be right to forcefully argue the opposite. But implementing their CSAM detection system makes it significantly more plausible that an argument along those lines might seem persuasive to, for instance, a FISA Court judge. Ultimately the security of the system, at least against government attackers, depends less on the code than on Apple’s ability to prevail in a legal battle after having altered the terrain in the government’s favor. As Apple’s FAQ implicitly acknowledges, the existence of a preexisting on-device scanning system invites the very sort of demands they say they would refuse. Their ability to make that refusal stick legally is unclear, however. And while security researchers might be able to detect the covert installation of new spyware, there would, by design, be no way to determine from the outside what types of content a scanning system baked into the operating system is looking for. To my mind, that makes the risk unacceptable.
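To make that opacity concrete: a hash database of this kind is, from the outside, just a collection of fixed-length byte strings. The sketch below, again my own illustration with an invented format, shows how little an inspector can recover from it.

```swift
import Foundation

// Illustration of the auditing problem, using an invented format. All an
// outside researcher can learn by inspecting the shipped table is its size
// and shape; nothing in the data reveals what imagery the entries fingerprint.
func audit(_ shippedDatabase: Set<Data>) -> String {
    let lengths = Set(shippedDatabase.map { $0.count })
    return "\(shippedDatabase.count) opaque entries, byte lengths \(lengths)"
}

// Two tables built from entirely different source material are
// indistinguishable in kind: just sets of fixed-length digests.
let hashesFromChildSafetyGroups: Set<Data> = [
    Data(repeating: 0x4f, count: 16), Data(repeating: 0x11, count: 16)
]
let hashesFromSomeInteriorMinistry: Set<Data> = [
    Data(repeating: 0xa3, count: 16), Data(repeating: 0x27, count: 16)
]
print(audit(hashesFromChildSafetyGroups))    // e.g. "2 opaque entries, byte lengths [16]"
print(audit(hashesFromSomeInteriorMinistry)) // identical in shape
```

Unless an auditor already holds the source images a fingerprint was derived from, there is no way to tell what that fingerprint targets; the opacity is inherent in the approach, not a defect that review could engineer away.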
Finally, it is worth considering broader political and cultural dynamics. Here is a scenario that strikes me as fairly likely, should Apple implement the system they’ve described. In the first months after it begins operating, a significant number of pedophiles are reported as a result of the CSAM scanning system. It soon becomes common knowledge that people in possession of child abuse imagery who back those files up to iCloud are likely to be caught, and pedophiles respond by deactivating iCloud backup, storing their images elsewhere, or taking other countermeasures. The number of reports drops off precipitously. But we have now normalized the idea of running routine scans adverse to the interest of the user on that user’s own device. It seems entirely plausible that at this point we begin hearing demands from law enforcement for legislation that would eliminate “loopholes” that permit users to “circumvent” such scans. You can call this a slippery slope argument, but the slope does indeed seem fairly well greased, and the argument would be quite similar in form to the perpetual demand for encryption backdoors. Perhaps something on the model of the CSAM tool even becomes the basis of a “moderate” compromise proposal in the backdoor debate: “If you won’t let us break drive encryption, at least let us co-opt this preexisting system to hunt for specific content.”
If implemented precisely as designed, the tool Apple has developed is indeed highly privacy-protective, though its benefits are likely to diminish as awareness of it becomes more widespread. But technological architectures have effects beyond their initial implementations. They alter the legal and political landscape in ways that are difficult to foresee, and invite applications that may be quite different from what Apple intends or desires.