Information—you may have heard this one before—wants to be free, and on the global Internet, it flows more freely than ever. Governments are frequently less than sanguine about this fact—often for bad and censorious reasons, but also on occasion with perfectly valid motives, such as the desire to protect national security or the personal privacy of their citizens, which are in many cases overlapping interests. The past month saw developments on two fronts of the perennial struggle to reap the benefits of a borderless network while still maintaining a modicum of control over private data—and I believe it is illuminating to consider them together.

TikTok and WeChat

In June, President Biden formally revoked a pair of his predecessor’s ill‐​starred executive orders, which had sought to effectively ban the popular Chinese‐​owned social media apps TikTok and WeChat. Both orders had already been temporarily blocked by courts on First Amendment grounds. Biden’s replacement order, however, made clear that this was a prelude to developing a more systematic approach to foreign‐​owned apps, and directed the Department of Commerce (in consultation with other agencies) to prepare a report “recommending additional executive and legislative actions to address the risk associated with connected software applications that are designed, developed, manufactured, or supplied by persons owned or controlled by, or subject to the jurisdiction or direction of, a foreign adversary.”

The primary “risk” in question is that modern apps routinely collect enormous amounts of data about their users—often including real‐​world data like geolocation—which might be of significant intelligence value to a foreign power, especially if the app in question is used by a government employee or contractor, or a member of their family. This is a bit odd in the case of WeChat, however, given that it is primarily used to communicate with people in China… and it should not be surprising to anyone that unsecured communications to China are at risk of being intercepted by the Chinese government.

An additional justification cited is the possibility that these platforms will be subject to Chinese‐​directed censorship or serve as vehicles for Chinese propaganda, though this seems far less compelling: The logical endpoint of that argument, after all, would be that Americans should be forbidden from consuming foreign media or using platforms operated outside the United States. If Chinese propaganda is supposed to constitute a national security threat, it is unclear why it matters whether it comes via an app operated by a U.S. subsidiary or a website hosted entirely overseas.

The original orders, it must be noted, appeared to be motivated by more than pure national security concerns. Donald Trump had openly declared his desire to force the sale of TikTok to an American buyer, and hoped the United States government could somehow extract a cut from the transaction. Like the tariffs he had previously imposed pursuant to “emergency” executive powers, this was a not particularly subtle bid to use national security as a pretext for economic protectionism. This was particularly clear in the case of TikTok, which is owned by the Chinese firm ByteDance, but incorporated and headquartered in the United States. That should give the government plenty of options for addressing potential exfiltration of user data without compelled divestiture.

While rejecting the Trump administration’s slipshod approach—pick two popular apps and ban them by name!—Biden signaled that he regards the prospect of foreign governments’ access to American user data as a genuine national security risk demanding a policy remedy.

Schrems II

Just a few days earlier, the European Commission adopted a long-awaited set of updated “Standard Contractual Clauses” meant to govern the transfer of personal data between companies in the European Union and the United States. The updates are an attempt to comply with a ruling issued in 2020 by the Court of Justice of the European Union (CJEU) in a case known as Schrems II. (“Schrems” after Max Schrems, the Austrian attorney and privacy activist who initiated the case, and “II” because it was his second trip through the courts on this topic.) The E.U.’s General Data Protection Regulation (GDPR) stipulates that E.U. citizens’ personal data may only be sent abroad if, while on vacation, it will receive protection equivalent to what it would enjoy at home. Since U.S. privacy regulations are generally less stringent than Europe’s, an international agreement known as Privacy Shield had been hammered out, establishing a framework of privacy rules that American companies hoping to do business with Europe could commit to following. Some 5,300 American businesses were relying on Privacy Shield when the Schrems II ruling invalidated it.

The European Commission had deemed the combination of U.S. law and Privacy Shield “adequate” protection, but in Schrems II the CJEU disagreed, pointing to the broad authority the American government had been granted under §702 of the Foreign Intelligence Surveillance Act to collect the communications and other personal data of foreigners, with minimal judicial involvement and no meaningful means of legal redress for those whose information was handed over. The governing principle under E.U. law is that surveillance must be “necessary and proportionate” to the national security purposes it serves. American law falls short of that standard, the Court concluded, because it was “apparent that Section 702 of the FISA does not indicate any limitations on the power it confers to implement surveillance programmes for the purposes of foreign intelligence or the existence of guarantees for non-US persons potentially targeted by those programmes.”

Virtually nobody, of course, actually wants to shut down transatlantic data flows, with all the disruption to international business that would entail. Fortunately, the CJEU had left European companies an out: When a country’s data privacy laws are not “adequate,” companies can attempt to make up the difference by means of contractual restrictions—those “Standard Contractual Clauses”—that require the parties to take additional steps to mitigate privacy risks.

A few weeks after the European Commission issued its update, the European Data Protection Board released a set of recommendations to guide companies through a six‐​step process they should undertake before sending data to privacy danger zones like the United States. American companies cannot, of course, refuse to comply with lawful government demands for data, but the EDPB notes a number of mitigation measures that might nevertheless be taken. Data can be encrypted with a key held in the E.U., rendering it unreadable to both the U.S. custodian and the American government—though this tends to defeat the purpose of sending the data in the first place unless that purpose is backup storage. Companies can commit to challenging requests for European user data whenever possible. They can issue “warrant canaries”—regular public announcements that one has not received national security demands for user data, thereby providing effective notice of any demands when the canary fails to chirp on schedule, and circumventing the gag orders invariably attached to such requests.
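
To make the first of those mitigations concrete, here is a minimal sketch of what encryption under an E.U.-held key looks like in practice, written in Python using the widely available cryptography package. The function names and the backup scenario are illustrative assumptions on my part, not anything prescribed by the EDPB’s recommendations.

```python
# Minimal sketch: encrypt personal data in the E.U. before transferring it
# to a U.S. custodian, keeping the key on E.U. infrastructure only.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Generated and stored exclusively in the E.U.; the key never accompanies
# the data across the Atlantic.
eu_key = Fernet.generate_key()
cipher = Fernet(eu_key)

def export_record(record: bytes) -> bytes:
    """Encrypt a record in the E.U. before handing it to a U.S. provider.

    The U.S. custodian (and any government demanding data from it) sees
    only ciphertext; decryption requires the E.U.-held key.
    """
    return cipher.encrypt(record)

def repatriate_record(ciphertext: bytes) -> bytes:
    """Decrypt data back in the E.U., where the key resides."""
    return cipher.decrypt(ciphertext)

# Example: back up an E.U. user's record with a U.S. storage provider.
blob = export_record(b"personal data subject to the GDPR")
# ... store `blob` with the U.S. provider; only the E.U. side can read it ...
assert repatriate_record(blob) == b"personal data subject to the GDPR"
```

Because the American custodian holds only ciphertext, it has nothing intelligible to produce in response to a legal demand; by the same token, it can do nothing useful with the data beyond storing it, which is why this measure mostly suits backup and archival scenarios, as noted above.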

Needless to say, all this imposes a substantial burden on companies, as well as a fair amount of uncertainty. Alan Charles Raul, former vice chair of the (American) Privacy and Civil Liberties Oversight Board, has argued that the likely practical impact of Schrems II has been exaggerated, and that most companies should be able to comply with the strictures of the GDPR with minimal mitigation, because the data in question is not subject to §702. But this argument is not terribly persuasive.

First, Raul makes a surprising claim about §702 that is, unless I am badly misreading him, simply wrong. He claims that “surveillance under Section 702 … may not target communications of U.S. persons—including American companies—or persons reasonably believed to be in the U.S.” But this is not what §702 says: Raul has conflated targeting of a U.S. person—which is not permitted under §702—with collection of a U.S. person’s communications, which is allowed as long as the U.S. person is not the “target” (that is, the person about whom information is sought). If, for instance, the government “targets” a French citizen employed abroad by an American company that provides e-mail services to its employees, it does not matter that the target’s communications include messages to and from Americans, or that the data is transferred between two branches of an American company.

Second, Raul notes that §702 only covers demands to “electronic communication service providers,” which should leave many routine data transfers between European firms and their U.S. subsidiaries unaffected. This may be partly right—many companies engaged in cross-border data transfers are probably not “electronic communication service providers”—though Raul’s argument that the statute was not “intended” to cover firms that provide communication services to their own employees seems implausible. In any event, the statutory definition of that term includes substantial wiggle room, and questions about its precise scope are resolved in secret by the Foreign Intelligence Surveillance Court. Moreover, firms in the U.S. routinely subcontract data storage or other services to companies that are unambiguously “electronic communication service providers,” which means European companies will need to either expressly prohibit such further transfers or undertake mitigation.

Thinking about Cross‐​Border Data Flows

Most press coverage has treated these cases as belonging to quite different categories. The Executive Orders covering TikTok and WeChat are a national security story about assessing the risks of foreign ownership of technology companies. Schrems II is a case about consumer privacy regulation and its impact on businesses. But these frames mostly reflect a difference in emphasis. The underlying issue is the same: Personal data now routinely flows between legal jurisdictions, making the personal data of each country’s citizens highly vulnerable to collection by the intelligence services of other countries. It matters to the policy response whether one frames this in terms of personal privacy or national security—if one is particularly concerned about monitoring of government employees, banning them from using TikTok is a more elegant solution than banning TikTok—but both are fundamentally questions of extrajurisdictional information control. Thinking about the cases in tandem, a few common themes emerge:

Everyone underestimates the problem. Compared with Europe’s comprehensive approach to data protection, the American focus on the risks associated with particular foreign-owned companies may start to look a bit futile, like obsessively plugging the largest hole in a sieve. Yes, the Chinese government might hypothetically order ByteDance to obtain Americans’ user data from TikTok (though the U.S.-based company denies it would comply with such a request). Given how little control the United States exerts over the private sharing of non-content data, however, there’s no shortage of ways to skin that particular cat. The reality is that data from the apps and platforms we use flows routinely to companies other than their developers. One 2018 study by Oxford researchers found that the vast majority of apps in the Google Play store send some amount of data to third-party trackers—most to U.S.-based companies, but about 5% to trackers with corporate ties to China. And of course, that’s just looking at data flowing directly from the apps themselves to known trackers, not any subsequent sharing with companies that might be directly or indirectly foreign-owned, or purchases from data brokers.

If Europe’s data protection framework seems complex and onerous—which it is—this is at least partly a function of Europeans being somewhat more clear-eyed about the scale of the task they have set themselves. In one sense, the disproportionate panic around TikTok and WeChat seems almost quaint, since it implies that access to user data requires owning the company that does the initial collection, and that banning or forcing disgorgement of a few high-profile apps is all that’s needed to lock down Americans’ data.

Europe is more realistic about the scope of the problem: It demands that companies exporting personal information map out not just the initial transfer, but the full downstream flow of that data. But it makes wildly unrealistic assumptions about companies’ capacity to engage in sophisticated threat modeling and secure data against state intelligence agencies. Having completed their data mapping, companies are enjoined to “look into the characteristics of each of your transfers and determine whether the domestic legal order and/or practices in force of the country to which data is transferred (or onward transferred) affect your transfers.” That is, firms are expected to assess the risk that the National Security Agency will seek to acquire their data—pursuant to a highly classified process that legally bars recipients of orders from disclosing them—and then take steps to mitigate that risk, ideally without rendering the data useless in the process. We know that a joint U.S./British program known as MUSCULAR exfiltrated data for years from the U.K.-based data centers of tech titans like Google and Yahoo. The idea that smaller firms are going to prevent NSA from getting data it’s sufficiently invested in acquiring seems like a pleasant fiction designed to allow business to continue.

Everyone is a bit of a hypocrite. As many have noted, Schrems II employs a rather glaring double standard. European countries are no more restrained than the United States in their surveillance practices; in many respects they are less so, to the extent it’s possible to judge from public information. Moreover, they are often eager beneficiaries of U.S. surveillance, and engage in routine sharing of intelligence. The U.K., for instance—part of the E.U. until last year—has its own bulk Internet collection program, Tempora, and a surveillance statute, the Regulation of Investigatory Powers Act, which confers authorities as broad as anything in FISA. However, E.U. member states are afforded the discretion to make their own determinations about the appropriate balance between security and privacy. European institutions are inclined to defer to their judgments about what surveillance authorities are “necessary and proportionate” to their national security needs, even if those authorities are no less permissive than §702.

The United States, of course, engages in the very warrantless harvesting of foreign user data it finds intolerable when the roles are reversed. For decades, the country where the Internet was born has enjoyed an immense intelligence windfall: Digital traffic from around the globe flows across our pipes, and into the servers of our dominant tech companies. Indeed, we have repeatedly reformed our surveillance laws for the express purpose of exploiting this fact. We have (correctly) opposed data localization mandates in other countries as protectionism under the guise of national security. But when we find ourselves in the same situation as the rest of the world—albeit to a far smaller extent—we declare an emergency.

Of course, nobody ever said intelligence collection was supposed to be fair: One spies on one’s adversaries (and allies) while trying to prevent them from doing the same. But it was never reasonable to expect this serendipitous asymmetry to last forever. Our domestic surveillance debates have tacitly treated this historical artifact as a kind of law of nature, and we find ourselves unprepared for the discovery that it is not—and that Americans will increasingly expect to be able to avail themselves of innovative online services developed elsewhere.

Everyone is bad at cost‐​benefit analysis. Even if there are myriad ways China or other foreign adversaries can obtain Americans’ user data, one might argue that it’s nevertheless desirable to target low‐​hanging fruit like TikTok. And if that could be done with no cost, perhaps that would be correct. But the Trump administration had been poised to cut off access to two wildly popular services: A platform for free expression with millions of domestic users, and a messaging app that is vital to both businesses and individuals with connections to China. The orders revoked by Biden were stalled in the courts precisely because they would have resulted in massive encroachments on the First Amendment interests of Americans. This despite the fact that the military, intelligence community, and a slew of other federal agencies had already barred the apps from employees’ devices.

The E.U.’s updated “Standard Contractual Clauses” will, similarly, impose serious burdens on companies seeking to comply with the European Data Protection Board’s recommendations, even though many types of data transfers are not plausibly affected by authorities like §702—a fact the final recommendations mercifully recognize in a way earlier drafts did not—and the actual added protection Europeans’ data will receive is dubious at best. None of the elaborate new rules, after all, will prevent NSA from getting hold of messages individual citizens send to recipients in the United States.

In both cases, consideration of tradeoffs tends to be cut short by the invocation of values perceived as non-negotiable: “national security” on the one hand, and “the right to privacy” on the other. Asking whether the marginal quantity of the good being obtained is worth the purchase price comes to seem faintly sacrilegious.

Privacy policy is trade policy. Donald Trump’s demand for a “piece of the action” had he succeeded in forcing the sale of TikTok made the economic considerations underlying his “national security” order unusually overt, but they’re there even when they’re not quite so obvious. Data localization mandates and data protection regimes are—whether justifiable or not—trade barriers, whether they bar transnational data flows entirely or merely impose hefty transaction costs. Historically, the United States—by way of the U.S. Trade Representative—has vocally criticized stringent data protection regulations in precisely these terms. Even if the public rationales for these policies are sincere, they are prone to attract domestic constituencies for reasons unrelated to either privacy or national security, yielding a familiar “bootleggers and Baptists” dynamic.

Trade scholars have raised compelling doubts about whether the GDPR regime is compatible with Europe’s obligations under international trade treaties, though the CJEU has made clear that it will not recognize purported treaty obligations that conflict with its interpretation of the right to privacy enshrined in the E.U. Charter. Predictably, China has also complained that the Trump administration’s attempted app bans flouted World Trade Organization rules—though it is hard to envision the U.S. deferring to international trade bodies on a purported national security matter. The difficulty of adjudicating such disputes is, of course, compounded when the rationale for data flow restrictions turns in substantial part on the purported surveillance practices of intelligence services—and on risk assessments that may themselves depend on classified intelligence. In any event, it will be difficult to sustain broad norms of data openness if fresh exceptions are created whenever a major power discovers that other countries engage in intelligence collection.

An international problem is going to require an international solution. A patchwork of national and regional data protection regimes, whether couched in terms of national security or personal privacy, seems unlikely to satisfy anyone. Ad hoc bans on foreign-owned technology firms that make headlines will not meaningfully secure citizens’ data from hostile governments. Threatening companies with stiff fines unless they figure out how to fend off the NSA is doomed to produce either costly security pantomime or an even more costly disruption of international commerce. We need binding international norms for cross-border data flows.

This is hardly a novel suggestion: Recent trade agreements (and proposed trade agreements) have included provisions explicitly addressing international data transfers, and the World Economic Forum recently issued its own “Roadmap for Cross‐​Border Data Flows.” But typically the issue is framed in terms of consumer privacy and cybersecurity. State surveillance remains the elephant in the room.

What we need, in effect, are arms reduction treaties for intelligence surveillance, establishing shared (and consistent) norms of self‐​restraint and mechanisms for accountability and redress. At a minimum, countries could reciprocally extend to citizens of allied nations some measure of the procedural rights and safeguards that apply to surveillance of their own citizens (anemic as those often are). The United States might, for instance, agree to treat E.U. citizens as de facto “U.S. persons” for FISA purposes, requiring specific approval from the FISA Court before they could be targeted for collection.

There are obvious difficulties with this idea—beyond the inevitable horror it will inspire across intelligence agencies. Reductions in nuclear stockpiles can in principle be verified, albeit with difficulty; surveillance that can be verified is incompetent surveillance. It would be hard to imagine China or Russia agreeing to such a scheme, foolish to put much stock in it if they did, and in any event not much of a bargain given how promiscuously they spy on their own citizenry.

Neither is it wholly unthinkable, however: The Obama administration’s Presidential Policy Directive 28 established modest restraints on foreign intelligence collection in the wake of the Edward Snowden disclosures, motivated in part by a desire to placate outraged allies. “The leaders of our close friends and allies deserve to know that if I want to learn what they think about an issue, I will pick up the phone and call them,” Obama said at the time, “rather than turning to surveillance.” In a recent paper, law professors Ira Rubinstein and Peter Margulies propose the creation of an American “Algorithmic Rights Court” to field privacy complaints from E.U. citizens. Direct notice of FISA surveillance to foreign targets is unrealistic, except perhaps in extraordinary outlier cases, but where an intelligence sharing relationship already exists, limited information about monitoring of allied nations’ citizens could be shared with designated advocates authorized to petition the court or some suitably independent oversight body.

Needless to say, even if an approach along these lines might be suitable for allies, it seems less viable at present for China or Russia—though limited intelligence sharing already occurs even between adversaries. But the existence of wicked problems is no reason not to begin making progress on more tractable ones, and a set of norms, rules, and institutions developed initially for western liberal democracies might be incrementally extended over time. If even war is subject to international conventions recognizing universal human rights, it should not be too radical to envision surveillance constrained by recognition of a human right to privacy.