At a New America Foundation conference on cybersecurity Monday, NSA Director Mike Rogers gave an interview that—despite his best efforts to deal exclusively in uninformative platitudes—did produce a few lively moments. The most interesting of these came when techies in the audience—security guru Bruce Schneier and Yahoo’s chief information security officer Alex Stamos—challenged Rogers’ endorsement of a “legal framework” for requiring device manufacturers and telecommunications service providers to give the government backdoor access to their users’ encrypted communications. (Rogers repeatedly objected to the term “backdoor” on the grounds that it “sounds shady”—but that is quite clearly the correct technical term for what he’s seeking.) Rogers’ exchange with Stamos, transcribed by John Reed of Just Security, is particularly illuminating:

Alex Stamos (AS): “Thank you, Admiral. My name is Alex Stamos, I’m the CISO for Yahoo!. … So it sounds like you agree with Director Comey that we should be building defects into the encryption in our products so that the US government can decrypt…


Mike Rogers (MR): That would be your characterization. [laughing]


AS: No, I think Bruce Schneier and Ed Felten and all of the best public cryptographers in the world would agree that you can’t really build backdoors in crypto. That it’s like drilling a hole in the windshield.


MR: I’ve got a lot of world-class cryptographers at the National Security Agency.


AS: I’ve talked to some of those folks and some of them agree too, but…


MR: Oh, we agree that we don’t accept each other’s premise. [laughing]


AS: We’ll agree to disagree on that. So, if we’re going to build defects/backdoors or golden master keys for the US government, do you believe we should do so — we have about 1.3 billion users around the world — should we do for the Chinese government, the Russian government, the Saudi Arabian government, the Israeli government, the French government? Which of those countries should we give backdoors to?


MR: So, I’m not gonna… I mean, the way you framed the question isn’t designed to elicit a response.


AS: Well, do you believe we should build backdoors for other countries?


MR: My position is — hey look, I think that we’re lying that this isn’t technically feasible. Now, it needs to be done within a framework. I’m the first to acknowledge that. You don’t want the FBI and you don’t want the NSA unilaterally deciding, so, what are we going to access and what are we not going to access? That shouldn’t be for us. I just believe that this is achievable. We’ll have to work our way through it. And I’m the first to acknowledge there are international implications. I think we can work our way through this.


AS: So you do believe then, that we should build those for other countries if they pass laws?


MR: I think we can work our way through this.


AS: I’m sure the Chinese and Russians are going to have the same opinion.


MR: I said I think we can work through this.

I’ve written previously about why backdoor mandates are a horrible, horrible idea, and in his question Stamos hits on some of the reasons I’ve pointed to. What’s most obviously disturbing here is that the head of the NSA didn’t seem to have even a bad response prepared for such an obvious objection—he had no serious response at all. China and Russia may not be able to force American firms like Google and Apple to redesign their products to be more spy-friendly, but if the American government does their dirty work for them with some form of legal backdoor mandate, those firms will be hard-pressed to resist demands from repressive regimes to hand over the keys. Rogers’ unreflective response seems like a symptom of what a senior intelligence official once described to me as the “tyranny of the inbox”: a mindset so myopically focused on solving one’s own immediate practical problems that the bigger picture—the dangerous long-term consequences of the easiest or most obvious quick-fix solution—is barely considered.

What we also see, however, is a hint as to why officials like Rogers and FBI Director James Comey seem so dismissive of the overwhelming consensus among security professionals and cryptographers that it’s not technically feasible to implement a magical “golden key” that will permit the “good guys” to unlock encrypted data while leaving it secure against other adversaries. No doubt these officials are asking their own experts a narrow, technical question and getting a narrow, technically correct answer: There is a subfield of cryptography known as “kleptography” that studies the design of “asymmetric backdoors.” The idea is that the designer of a cryptographic algorithm can bake into it a very specific vulnerability that depends on a lengthy mathematical key—one too large to guess and not easily reverse-engineered from the algorithm itself. Probably the most famous example is the vulnerability in the Dual Elliptic Curve algorithm NSA is believed to have inserted into a widely used commercial security suite. More prosaically, there is the method companies like Apple use to control what software can run on their devices: Their processors are hard-coded with the company’s public key and (in theory) will only run software signed with Apple’s corresponding private key.
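
To make that last, prosaic example concrete, here is a minimal sketch of the code-signing model, using Python’s cryptography package and Ed25519 signatures purely for illustration; Apple’s actual scheme differs in its details:

```python
# Minimal sketch of the code-signing model described above: a device
# trusts a hard-coded public key and runs only software whose
# signature verifies against it. Ed25519 is an illustrative choice.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The manufacturer holds the private key; only the public half ships
# on the device.
vendor_private_key = ed25519.Ed25519PrivateKey.generate()
HARDCODED_VENDOR_PUBLIC_KEY = vendor_private_key.public_key()

def sign_firmware(firmware: bytes) -> bytes:
    """Run once by the manufacturer before shipping an update."""
    return vendor_private_key.sign(firmware)

def device_will_run(firmware: bytes, signature: bytes) -> bool:
    """Run on the device: accept only vendor-signed code."""
    try:
        HARDCODED_VENDOR_PUBLIC_KEY.verify(signature, firmware)
        return True
    except InvalidSignature:
        return False

update = b"trusted firmware image"
sig = sign_firmware(update)
assert device_will_run(update, sig)
assert not device_will_run(b"tampered firmware image", sig)
```

The whole arrangement rides on the secrecy of a single private key, which is precisely the property a backdoor mandate would replicate at global scale, for data rather than code.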


So there’s a sense in which it is technically feasible to do what NSA and FBI would like. There’s also a sense in which it’s technically possible for a human being to go without oxygen for ten minutes—but in practice you’ll be in for some rude surprises unless you ask the follow-up question: “Will the person be left irreparably brain-damaged?” When Comey or Rogers get a ten-minute briefing from their experts about the plausibility of designing “golden key” backdoors, they are probably getting the technically accurate answer that yes, on paper, it is possible to construct a cryptographic algorithm with a vulnerability that depends on a long mathematical key known only to the algorithm’s designer, and which it would be computationally infeasible for an adversary to find via a “brute force” attack. In theory. But to quote that eminent cryptographer Homer Simpson: “I agree with you in theory, Marge. In theory, communism works. In theory.”
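
To put rough numbers on what “computationally infeasible” means on paper, here is some back-of-the-envelope arithmetic; the hardware figures are assumptions, deliberately tilted in the attacker’s favor:

```python
# Back-of-the-envelope arithmetic for "computationally infeasible":
# brute-forcing a 128-bit key space, generously assuming a billion
# machines each testing a trillion keys per second.
key_space = 2 ** 128                      # ~3.4e38 possible keys
guesses_per_second = 10 ** 12 * 10 ** 9   # 1e21 guesses/sec, combined
seconds = key_space / guesses_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years")              # on the order of 1e10 years
```

That is billions of years of searching, which is why the narrow, on-paper answer the experts give is perfectly true as far as it goes.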


The trouble, as any good information security pro will also tell you, is that real-world systems are rarely as tidy as the theories, and the history of cryptography is littered with robust-looking cryptographic algorithms that proved vulnerable under extended scrutiny or were ultimately impossible to implement securely under real-world conditions, where the crypto is inevitably just one component in a larger security and software ecosystem. A measure of adaptability is one virtue of “end-to-end” encryption, where cryptographic keys are generated by, and held exclusively by, the end users: If my private encryption key is stolen or otherwise compromised, I can “revoke” the corresponding public key and generate a new one. If some clever method is discovered that allows an attacker to search the “key space” of a cryptosystem more quickly than was previously thought possible, I can compensate by generating a longer key that remains beyond the reach of any attacker’s computing resources. But if a “golden key” that works against an entire class of systems is cracked or compromised, every system in that class is exposed at once—which makes it worthwhile for sophisticated attackers to devote enormous resources to compromising that key, far beyond what it would make sense to expend on the key of any single individual or company.
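
To see the flexibility point concretely, here is a minimal sketch of local key rotation, assuming Python’s cryptography package and X25519 keys as a stand-in for whatever a real end-to-end system would actually use:

```python
# Sketch of the flexibility end-to-end encryption affords: keys are
# generated and rotated entirely on the user's own machine, with no
# central authority involved.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import x25519

def generate_keypair() -> tuple[x25519.X25519PrivateKey, bytes]:
    """Create a fresh keypair locally; the private half never leaves."""
    private_key = x25519.X25519PrivateKey.generate()
    public_bytes = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.Raw,
        format=serialization.PublicFormat.Raw,
    )
    return private_key, public_bytes

# Initial keypair.
my_private, my_public = generate_keypair()

# Suppose my_private is compromised: I publish a revocation of
# my_public, stop using the old key, and start fresh. No vendor,
# government, or escrow database has to sign off.
my_private, my_public = generate_keypair()
```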


So maybe you don’t want a single master key: Maybe you prefer a model where every device or instance of software has its own corresponding backdoor key. This creates its own special set of problems, because now you’ve got to maintain and distribute and control access to the database of backdoor keys, and ensure that new keys can’t be generated and used without creating a corresponding key in the master database. This weak point—key distribution—is the one NSA and GCHQ are purported to have exploited in last week’s story about the theft of cell phone SIM card keys. Needless to say, this model also massively reduces the flexibility of a communications or data storage system, since it means you need some centralized authority to generate and distribute all these keys. (Contrast a system like GPG, which allows users to generate as many keys as they need without any further interaction with the software creator.) You also, of course, have the added problem of designing your system to resist modification by the user or device owner, so the keys can’t be changed once they leave the manufacturer.
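
A rough sketch of that per-device escrow model, with entirely hypothetical names rather than any real system’s API, makes the single point of failure plain:

```python
# Sketch of the per-device escrow model described above. Every device
# key must also be recorded in a master database, so the database (and
# the provisioning channel that populates it) becomes the high-value
# target. All names here are hypothetical.
import secrets

ESCROW_DATABASE: dict[str, bytes] = {}  # device_id -> escrowed key copy

def provision_device(device_id: str) -> bytes:
    """Manufacturer generates a device key AND escrows a copy.

    If this step is skipped or subverted, the backdoor regime fails;
    if the database leaks, every device is exposed at once.
    """
    device_key = secrets.token_bytes(32)
    ESCROW_DATABASE[device_id] = device_key
    return device_key

def lawful_access(device_id: str) -> bytes:
    """Whoever controls the database can recover any device's key."""
    return ESCROW_DATABASE[device_id]

key_on_device = provision_device("SIM-0001")
assert lawful_access("SIM-0001") == key_on_device
```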


As I’ve argued elsewhere, the feasibility of implementing a crypto backdoor depends significantly on the nature of the system where you’re trying to implement it. If you want backdoors in an ecosystem like Apple’s, where you have a single manufacturer producing devices with hardcoded cryptographic keys and exerting control over the software running on its devices, maybe (maybe) you can pull it off without too massive a reduction in the overall security of the system. Ditto if you’re running a communications system where all messages are routed through a centralized array of servers—assuming users are willing to trust that centralized hub with access to their most sensitive data. If, on the other hand, you want backdoors that are compatible with a decentralized peer-to-peer communications network that uses software-generated keys running on a range of different types of computing hardware, that’s going to be a much bigger problem. So when Mike Rogers asks his technical experts whether Apple could realistically comply with a mandate to provide backdoor access to encrypted iPhone data, they might well tell him it’s technically doable—but that doesn’t mean there wouldn’t be serious problems implementing such a mandate generally.
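
By way of contrast, here is what the decentralized case can look like in miniature: two peers derive a session key via ephemeral Diffie-Hellman (X25519 as an illustrative choice), leaving no long-lived key for any manufacturer to escrow in the first place:

```python
# Sketch of why decentralized, software-generated keys resist escrow:
# peers derive a shared secret on the fly, so no central party ever
# possesses a key that could be stored in a backdoor database.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ephemeral_session_key() -> bytes:
    # Each peer generates a throwaway keypair in software, on the fly.
    alice = x25519.X25519PrivateKey.generate()
    bob = x25519.X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public
    # key; both arrive at the same shared secret.
    alice_secret = alice.exchange(bob.public_key())
    bob_secret = bob.exchange(alice.public_key())
    assert alice_secret == bob_secret

    # Derive a symmetric session key; the keypairs can then be discarded.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"p2p session",
    ).derive(alice_secret)

session_key = ephemeral_session_key()
```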


In short, Rogers’ dismissive attitude in the exchange above seems like prime evidence that a little knowledge can indeed be a dangerous thing. He’s got a lot of “world-class cryptographers” eager to give him the—very narrowly, technically accurate—answer he wants to hear: It is mathematically possible to create backdoors of this sort, at least on certain types of systems. The reason the rest of the cryptographic community disagrees is that they’re not limiting themselves to giving a simplified five-minute answer to the precise question the boss asked, or finding an abstract solution to a chalkboard problem. In other words, they’re looking at the bigger picture and recognizing that actually implementing these solutions across a range of data storage and communications architectures—even on the dubious premise that the global market could be compelled to use broken American crypto indefinitely—would create an intractable array of new security problems. We can only hope that eventually one of the in-house experts our intelligence leaders actually listen to will sit the boss down long enough to break the bad news.