It’s gotten surprisingly little media attention thus far, but late last week the House Permanent Select Committee on Intelligence approved a bill to facilitate sharing and pooling of “cyber threat information” between private companies and government intelligence agencies—in particular, the übergeeks at the National Security Agency. It’s actually not a bad idea in principle. But the original draft was so broad that the White House felt compelled to express concerns about the lack of privacy safeguards—which should give you pause, considering how seamlessly President Obama has shifted from thundering against the Patriot Act to quietly embracing the ongoing kudzu growth of our surveillance state. A few encouraging tweaks were hastily added before the committee approved it, but the bill’s current incarnation still punches an enormous hole in the wiretapping laws that have, for decades, been a primary guarantor of our electronic privacy.


First, a bit of context. Whenever you send an e‑mail, start an IM chat, place a VoIP call, visit a web page, or download a file, your traffic passes through many intermediary networks, starting with your own broadband or wireless provider. While savvy users will protect their sensitive communications with encryption, our expectation of privacy when we use the Internet is also safeguarded by federal law, which generally prohibits network owners providing transit services to the general public from intercepting, using, or disclosing the contents of other people’s communications in any way beyond what’s needed to get the traffic from sender to recipient in the ordinary course of business. There are exceptions, of course: for law enforcement monitoring subject to a warrant, for emergencies, for consensual interceptions, and for monitoring that’s necessary to the protection of a provider’s own network. But the presumption against interception is strong and typically hard to overcome. (Non-public networks, like a corporation’s private intranet, are another story, of course.) Communications metadata—the information about who is talking to whom, and by what route—is less stringently regulated, but carriers are still barred from sharing that information with the government absent some form of legal process. The motivation for all of this is the understanding that heavily regulated carriers, which also often compete for lucrative government contracts, would be subject to government pressure to “voluntarily” share their customers’ data (especially if the sharing could be done secretly). Thus, the law ensures that the government will have to observe the niceties of judicial process before digging through citizens’ private communications, rather than relying on the “informal cooperation” of intermediaries.


This generally salutary arrangement does, however, create some difficulties in the cybersecurity context. Carriers and cybersecurity providers who have visibility on multiple private networks will often be in an optimal position to detect a wide array of attack patterns, involving both metadata (where are apparent attacks coming from? what timing patterns do they exhibit?) and contents (what characteristic “signatures” indicate the presence of viruses, malware, or mass phishing emails?). This is information that is highly valuable to share among providers—and, yes, with the government too—and that generally doesn’t implicate the kinds of privacy interests wiretap law is supposed to protect. But legislators (or rather, the staffers who actually draft these bills) are generally keen to craft “tech neutral” laws that aren’t bound too tightly to current technologies and vulnerabilities, and therefore won’t be obsolete in the face of new tech or new threats. Unfortunately, this often entails erring on the side of breadth, which in this case means creating a massive loophole to remove a minor obstruction—the legislative equivalent of blowing your nose with C‑4.
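For a concrete (and entirely hypothetical) sense of what that kind of sharing involves, here is a minimal Python sketch. The names (FlowRecord, KNOWN_BAD_DIGESTS) and the placeholder digest are my own invention, not anything drawn from the bill; the point is simply that the useful indicators are fingerprints and aggregates, not the contents of anyone’s correspondence:

```python
# Hypothetical illustration (not from the bill) of the kind of "cyber threat
# information" that is genuinely useful to pool: content signatures and attack
# metadata rather than the substance of anyone's communications.
import hashlib
from collections import Counter
from dataclasses import dataclass

# Content indicators: digests of known malware payloads or phishing templates.
# The value below is a placeholder, not a real sample hash.
KNOWN_BAD_DIGESTS = {"00000000000000000000000000000000"}

@dataclass
class FlowRecord:
    src_ip: str        # metadata: apparent origin of the traffic
    timestamp: float   # metadata: useful for spotting timing patterns
    payload: bytes     # contents: reduced to a hash, never shared verbatim

def extract_indicators(flows):
    """Boil raw traffic down to shareable indicators: which known-bad
    signatures appeared, and which sources sent them most often."""
    suspicious_sources = Counter()
    seen_signatures = set()
    for flow in flows:
        digest = hashlib.md5(flow.payload).hexdigest()
        if digest in KNOWN_BAD_DIGESTS:
            suspicious_sources[flow.src_ip] += 1
            seen_signatures.add(digest)
    return {
        "signatures": sorted(seen_signatures),
        "top_sources": suspicious_sources.most_common(10),
    }
```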


The bill provides that, “notwithstanding any other provision of law,” a company that provides cybersecurity services for its own networks or those of others may use “cybersecurity systems” to acquire “cyber threat information,” and share such information with any other entity, including the government. (One of the amendments introduced last week stipulates that the government may use and share that information only when one “significant purpose” of such use is the protection of national security or cybersecurity.) The crucial question, of course, is what counts as “cyber threat information.” That term is defined to encompass:

information directly pertaining to a vulnerability of, or threat to, a system or network of a government or private entity, including information pertaining to the protection of a system or network from—


(A) efforts to degrade, disrupt, or destroy such system or network; or


(B) theft or misappropriation of private or government information, intellectual property, or personally identifiable information.

The intention here is to cover the sort of information I talked about earlier—intrusion patterns and malware fingerprints. On a literal reading, though, it might also include Julian Assange’s personal IM conversations (assuming he ever had an unencrypted one), or e‑mails between security researchers. Moreover, one important purpose of this information sharing is to be able to distinguish malicious from benign traffic—which may mean combing through a big chunk of traffic logs surrounding a suspected or confirmed penetration attempt (and comparing those logs to others) in order to extract the hostile “signal” from the background noise. That makes it extremely likely that a substantial amount of wholly innocent, and potentially sensitive, information about ordinary Americans’ Internet activities will end up in the sharing pool. Many attacks will appear to originate from computers conscripted into malicious botnets by malware, unbeknownst to their owners, whose legitimate personal traffic could easily be swept in and shared as “cyber threat information” as well. The current proposal doesn’t require minimization or anonymization of personal information unless the companies sharing it impose such conditions themselves. Finally, “cybersecurity systems” is defined vaguely enough that one could even imagine a sysadmin with a vigilante streak reading it to include aggressive countermeasures, like spyware targeting suspected attackers. After all, “notwithstanding any other provision of law” includes provisions of (say) the Computer Fraud and Abuse Act that would place such tactics out of bounds.
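For a sense of what minimization might look like if it were required, here is a rough, hypothetical Python sketch; the field names, the allow-list, and the sample record are all invented for illustration, not drawn from the bill or any actual provider’s practice:

```python
# A rough sketch of the minimization the bill does not require: coarsening and
# stripping personal details from a log record before it is shared. All field
# names and values here are invented for illustration.
import ipaddress

SHAREABLE_FIELDS = {"timestamp", "dst_port", "signature_id", "src_net"}

def minimize(record: dict) -> dict:
    """Replace the source address with its /24 network prefix and drop any
    field (account names, URLs, message contents) not on the allow-list."""
    out = dict(record)
    if "src_ip" in out:
        net = ipaddress.ip_network(out.pop("src_ip") + "/24", strict=False)
        out["src_net"] = str(net)
    return {k: v for k, v in out.items() if k in SHAREABLE_FIELDS}

# A record tying a botnet-infected home PC's owner to her webmail and banking
# activity is reduced to a network prefix, a port, and a signature label.
print(minimize({
    "src_ip": "203.0.113.42",
    "timestamp": 1323907200,
    "dst_port": 443,
    "signature_id": "botnet-beacon-17",
    "account": "alice@example.com",
    "url": "https://bank.example/login",
}))
```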


Intelligence agencies are also empowered to share classified cyberintelligence with designated companies—and heaven help the firm that’s starved of that security information while its competitors have access to it. Another of the amendments added last week expressly bars conditioning such intelligence sharing on any particular company’s level of “voluntary” cooperation, and clarifies that the intelligence agencies may not “task” private companies with obtaining specific types of information for them. Which is nice, but seems awfully hard to enforce in practice. What we’ve already seen, unfortunately, is that cozy long-term collaborative relationships between carriers and intelligence agencies are breeding grounds for abuse, even when the law actually does prohibit the carriers from sharing information without legal process. It’s desirable to create legal space for limited cyberthreat information sharing—but it has to be done without creating a large and tempting backdoor through which the government might seek to use “voluntary information sharing” as a way to avoid getting a warrant or court order.