Has the thorny problem of providing law enforcement with access to encrypted data without fatally compromising user security finally been solved? That’s the bold thesis advanced by a piece at Wired that garnered an enormous amount of attention last week by suggesting that renowned computer scientist Ray Ozzie, formerly a top engineer at Microsoft, had developed an “exceptional access” proposal that “satisfies both law enforcement and privacy purists.” Alas, other experts have been conspicuously less enthusiastic, with good reason. It’s worth saying a few words about why.


In one sense, the attention garnered by Ozzie’s proposal, which he’s dubbed “CLEAR,” is somewhat odd: There isn’t much here that’s fundamentally new. A few novel wrinkles notwithstanding, Ozzie’s proposal is a variant on the very old idea of “key escrow,” which involves device manufacturers holding on to either a master key or a database of such keys that can be used to decrypt data at the request of law enforcement. The proposal is limited to providing “exceptional access” to data “at rest” on a device, such as a smartphone, in the physical custody of law enforcement. Ozzie suggests that when a user creates a passcode to encrypt the data on a device, the passcode itself should be encrypted using the device manufacturer’s public key, which is hardcoded into the cryptographic processor embedded in the device. Then, when law enforcement wishes to access such a device in their possession, pursuant to a valid court order, they activate a special law-enforcement mode which permanently renders the device inoperable (or “bricks” it) and displays the encrypted user passcode. This can then be sent to the manufacturer, which, upon validating that it has received a legitimate request from a real law enforcement agency with a valid warrant, uses its own private key (corresponding to the public key baked into the phone) to decrypt the original passcode and provide it to the requesting agency.
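To make the mechanics concrete, here is a minimal sketch of that flow in Python, assuming RSA-OAEP as the wrapping scheme (Ozzie’s write-up doesn’t prescribe a particular algorithm); the function names and toy passcode are illustrative, and a real device would perform these steps inside its secure cryptoprocessor rather than in ordinary application code.

```python
# Illustrative sketch only: a toy model of the CLEAR-style flow described above,
# using RSA-OAEP from the "cryptography" package. The scheme and names are
# assumptions for illustration, not Ozzie's actual specification.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The manufacturer generates a key pair; the public half is baked into devices.
vendor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
vendor_public_key = vendor_private_key.public_key()

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def wrap_passcode(passcode: str) -> bytes:
    """On the device: encrypt the user's passcode under the vendor's public key."""
    return vendor_public_key.encrypt(passcode.encode(), OAEP)

def unwrap_passcode(wrapped: bytes) -> str:
    """At the manufacturer, after validating the legal request: recover the passcode."""
    return vendor_private_key.decrypt(wrapped, OAEP).decode()

# When the user sets a passcode, the device stores only the wrapped copy.
wrapped = wrap_passcode("correct horse battery staple")
# "Law enforcement mode" would brick the device and display `wrapped`;
# the manufacturer then unwraps it and hands the passcode to the agency.
assert unwrap_passcode(wrapped) == "correct horse battery staple"
```

The point of the sketch is simply that the device never needs the manufacturer’s private key; everything hinges on keeping that one private key secure, which is where the trouble starts.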


In its broad outlines, this isn’t fundamentally much different from proposals that crypto experts have considered and rejected for decades. So why has CLEAR (and Wired’s article on it) generated so much interest? A substantial part of it simply comes down to who’s offering it: Ozzie has a stellar reputation, and he is offering a solution where most security experts have simply been urging governments to abandon the idea of building a police backdoor into cryptosystems. This feeds into the seemingly widespread conviction among law enforcement types that computer scientists are really just ideologically opposed to such backdoors, and stubbornly refusing to work on developing technical solutions. Many, moreover, may not really understand why experts tend to say such backdoors can’t be built securely, and therefore believe that Ozzie’s proposal does represent something fundamentally new: The “golden key” that all those other experts pretended couldn’t exist. But, of course, cryptographers have long known a system along these lines could be built: That was never the technical problem with law enforcement backdoors. (There’s perhaps some fairness to the complaint that privacy advocates haven’t always been sufficiently clear about this in arguments aimed at a mass audience, which may contribute to the impression that Ozzie’s proposal represents some significant breakthrough.) Rather, the deep problem—or, more precisely, one of several deep problems—has always been ensuring the security of that master key, or key database.

That brings us to the second reason for the appeal of Ozzie’s proposal, which is essentially a rhetorical point rather than a novel technical one. Software developers and device manufacturers, Ozzie notes, already hold “master keys” of a sort: The cryptographic signing keys used to authenticate new software updates. The way your iPhone already confirms that a new version of iOS is really a legitimate update from Apple, and not malicious code written by hackers impersonating the company, superficially resembles Ozzie’s proposal in reverse. Apple uses its own private key to sign the update, and your phone confirms its authenticity using the corresponding public key baked into its cryptoprocessor. That all-important private key is typically kept on an expensive piece of machinery called a Hardware Security Module (HSM), designed to make it possible to use the secret private key to sign new updates but (in theory, at least) impossible to copy the key itself off the device. The existence of that key does, of course, represent a security risk of a sort, but one we generally consider acceptable—far less risky than leaving users with no good way to distribute authenticated security updates when bugs and vulnerabilities are discovered. Thus the argument, in effect, becomes: If it’s not a wildly unacceptable risk for developers to maintain signing keys stored on an HSM, then surely it’s equally acceptable to similarly maintain a “golden key” for law enforcement use.
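For comparison, here is an equally simplified sketch of the code-signing pattern that analogy leans on, using an Ed25519 key pair as a stand-in for whatever scheme a real vendor actually uses; the variable names and the toy “firmware image” are hypothetical, and in practice the private key would never leave the HSM.

```python
# Illustrative sketch only: the update-signing pattern described above,
# using Ed25519 from the "cryptography" package. Names are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The developer's signing key lives in an HSM; only the public half ships on devices.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

update_blob = b"...new firmware image bytes..."
signature = signing_key.sign(update_blob)  # done once per release, inside the HSM

def device_accepts(blob: bytes, sig: bytes) -> bool:
    """On the device: install the update only if the signature checks out."""
    try:
        verify_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

assert device_accepts(update_blob, signature)
assert not device_accepts(b"tampered image", signature)
```

Note that the private key is only ever exercised when a release ships, and any misuse shows up as a signed artifact in the wild, which is exactly the property the next section argues a decryption key lacks.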


This is, however, misleading in at least a couple of ways. First, as Stanford’s Riana Pfefferkorn has argued in a recent paper, the use case for signing keys—used to authenticate new software releases on perhaps a monthly basis—is very different from that of a decryption key that would almost certainly need to be accessed by human beings multiple times each day. An asset becomes inherently harder to secure the more routinely it must be accessed by legitimate users. Second, and perhaps more importantly, the value to an adversary of a decryption key is much higher, because it has far greater potential for clandestine use. The risks associated with stolen signing keys—and we should pause to note that signing keys do indeed get stolen on occasion—are mitigated by the fact that misuse of such keys is intrinsically public. A falsely authenticated piece of malicious code is only useful to an adversary if that code is then deployed on a target’s device, and there are a variety of mechanisms by which such a compromise is likely to be detected, at which point the key is revoked and its value to the adversary comes to an end. Decrypting stolen data, by contrast, has no such inherently public component. One can attempt to design an exceptional access system in a way that forces publicity about its use, but without getting too mired in the technical weeds, the fact that decryption doesn’t inherently require publicity means that in most cases this just gives an attacker the secondary problem of spoofing confirmation that their decryption has been publicly logged. Ozzie’s suggestion that law enforcement decryption should permanently brick the device being unlocked is one way of making it more difficult for an attacker to covertly extract data, but as Pfefferkorn notes, this “solution” has significant downsides of its own, given that many smartphones are quite expensive pieces of technology.


Why does it matter that a decryption key, with its potential for clandestine (and therefore repeated) use, is more valuable to an adversary? Because security is in many ways as much about economics as about the fine points of engineering. The same security system that would be excessive for the purpose of safeguarding my private residence might be pathetically inadequate for a bank or an art museum, for the obvious reason that there’s nothing in my house a rational adversary would dedicate hundreds of thousands of dollars’ worth of resources to stealing, while a heist of the bank or the museum might well yield returns that would justify such an investment. No security is perfect: Adequate security is security that would cost an attacker more to breach than the value they can expect to realize from that breach. Therefore, security that is adequate for an asset likely to be rendered useless as a result of being deployed is by no means guaranteed to be adequate for an asset that might be used many times undetected.
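A back-of-envelope sketch, with entirely made-up numbers, illustrates the asymmetry: the attacker’s expected payoff scales with how many times a stolen asset can be used before it is detected and revoked, so the bar for “adequate” security sits far higher for a covertly reusable decryption key.

```python
# Back-of-envelope illustration only; every figure here is invented for the example.
# Expected payoff of stealing a key = value extracted per use x expected uses
# before the theft is detected and the key is revoked.

def expected_payoff(value_per_use: float, uses_before_detection: float) -> float:
    """Rough expected value of a stolen key to a rational attacker."""
    return value_per_use * uses_before_detection

# A stolen signing key is effectively burned once its misuse surfaces in public.
signing_key_value = expected_payoff(value_per_use=500_000, uses_before_detection=1)

# A stolen escrow/decryption key can be used quietly, over and over.
escrow_key_value = expected_payoff(value_per_use=500_000, uses_before_detection=200)

# Security is "adequate" only if breaching it costs more than the prize is worth.
print(f"Attacking the signing key is rational below ${signing_key_value:,.0f} in cost")
print(f"Attacking the escrow key is rational below ${escrow_key_value:,.0f} in cost")
```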


There are also, of course, a host of other familiar objections one could raise to this or any other backdoor system. If the United States government gets such “exceptional access,” shouldn’t we expect other, nastier regimes to demand the same? Won’t even moderately sophisticated criminals simply cease relying on compromised hardware-based encryption and instead add a layer of software-based encryption sans backdoor, rendering the whole elaborate scheme ineffective?


Even if we restrict ourselves to the narrower security question, however, Ozzie’s proposal seems susceptible to the same response other key escrow systems face, and that response is as much about economics as technology: Any master key, any centralized mechanism for compromising millions of individual devices, is too valuable to reliably secure against the sort of adversaries likely to be most interested in acquiring it.