Shutting the Backdoor
“If a surveillance capability were quietly added into the core of the Internet and an attempt was made to keep it a secret, in some respects this would be antithetical to the overarching philosophy upon which the Internet was built.” Tom Cross, Black Hat 2010
The Internet and associated digital media have deeply penetrated all of society. We now live in the world of the “Internet of things”: by some estimates there are now 10 billion Internet-connected devices on the planet, networked together via common protocols. This common space is not only a forum for communications but also a major repository of data about each and every one of us: our daily habits, movements, social relationships, and even private thoughts. For government agencies whose mission is to enforce the law or gather intelligence, cyberspace has become an extraordinary asset and an object of security in its own right. As governments have ramped up digital surveillance programs, they have had to turn to the private sector, which controls not only the familiar services we all use (Google, Facebook, Microsoft) but also the vast majority of the physical infrastructure of cyberspace: the cell phone base stations, undersea fibre optic cables, Internet exchange points, and routers.
One of the ways governments have approached the private sector has been to compel, legally or informally, the development of special modifications to private companies’ technical systems to enable access to data, what I call the “back door” paradigm. Strictly speaking, backdoors refer to special methods of bypassing normal authentication procedures to secretly access computing systems. But I use the phrase in a broader sense, to refer to the paradigm of state-directed modification of and/or intrusion into communications infrastructure and services for security purposes. Examples of the backdoor paradigm range from lawful intercept mechanisms coded directly into software, to “splitters” that surreptitiously fork copies of data streams to alternative destinations, to other, informal, means of data sharing that may go on between the private sector and security services. The backdoor paradigm is not new, but the Edward Snowden/NSA revelations have cast a spotlight on it and underscored how deeply entrenched it has become in the post-9/11 era of Big Data surveillance. In what follows I lay out several concerns about the backdoor paradigm, foremost among them the way backdoors, introduced in the name of security, actually contribute to greater insecurity down the road. Shutting the backdoor is, therefore, an urgent public policy matter for all liberal democratic countries. Rather than sacrifice cyberspace at the altar of security, law enforcement and intelligence agencies should be encouraged to develop alternative modes of data collection strictly within the framework of the rule of law.
History of Backdoors
Built-in backdoors on communications equipment for law enforcement and intelligence agencies are not new. It is often said that espionage is the second oldest profession, and for as long as governments have been engaged in espionage they have sought to assert control over the means of communication. In the early 20th century, for example, telegraphic messages processed by Western Union and other companies, including those of foreign diplomatic communications, were copied and secretly given to US authorities, much as Verizon has reportedly done today. Codenamed “Shamrock,” the program required the companies to hand over to the NSA, on a daily basis, copies of all of the cables sent to, from, or through the US. During the Cold War, a variety of signals intelligence collection mechanisms, from specially equipped naval vessels to huge land-based antennas, intercepted stray radar and radio transmissions, even those that bounced off the surface of the moon.
The rise of the Internet and digital technology presented a new challenge to signals intelligence, but also a special opportunity. Digital media flowing through fibre optic cables do not unintentionally leak data into the atmosphere in the same way, making signals interception a more difficult task. At the same time, huge and exponentially growing volumes of digital data, processed and archived through the Internet’s physical points of control (e.g., switches, gateways, exchange points), presented irresistible targets for data mining and analysis, leading the government to seek access through cooperation with the companies that own and operate the infrastructure. Then, as now, the government sought to ensure access to data by deliberately weakening one type of security in the name of another. Occasionally, these efforts trickled into the public domain. During the 1990s, there was a cantankerous “crypto debate” over US government proposals to control the export of strong encryption to foreign jurisdictions in order to weaken the security of adversaries and make them more open to US surveillance. Critics at the time questioned the wisdom of such controls, arguing that restricting cryptography would end up hurting the US itself. As Bruce Schneier explains:
[t]he government deliberately weakened U.S. cryptography products because it didn’t want foreign groups to have access to secure systems. Two things resulted: fewer Internet products with cryptography, to the insecurity of everybody, and a vibrant foreign security industry based on the unofficial slogan ‘Don’t buy the U.S. stuff—it’s lousy.'
The US Communications Assistance for Law Enforcement Act (CALEA), passed in 1994 and still in effect, requires telecommunications carriers to build technical capabilities for lawful surveillance into all equipment manufactured or sold in the US. Although the law applies to telecommunications carriers, US government officials have sought to expand it to cover VOIP and other broadband digital services. During the “Clipper Chip” debate of the 1990s, the US government attempted to mandate the inclusion of special chips in telecommunications equipment that would allow the NSA to eavesdrop on voice traffic using an escrowed encryption key. While critics of the Clipper Chip managed to scuttle the proposal, the same basic backdoor philosophy has continued to drive government surveillance efforts up to and including the present time. 9/11 gave added impetus to these approaches, with a perceived “failure to connect the dots” driving ambitious plans, and a thriving new market, to gather as much information from as many discrete information sources as possible and then mine and analyze it.
The Snowden Revelations and the Backdoor Paradigm
Edward Snowden’s detailed leaks have opened a chasm into the otherwise secretive world of intelligence practices, revealing widespread monitoring of both US and international communications by the US NSA and some of its allied agencies, like the UK’s GCHQ. Among other revelations, the leaked documents have provided details on elaborate programs involving the cooperation of some of the Internet’s largest companies, like Google, Microsoft, Facebook, and Yahoo! as well as major tier 1 telecommunications companies, like Verizon and Global Crossing. Although the exact details are in dispute, the program codenamed PRISM appears to have involved a hiving off of customer data onto special servers to give law enforcement and intelligence agencies direct access. In the case of leaks involving Verizon, the company supplied the NSA with metadata on all domestic and international telephone call records. Metadata might best be described as everything but the content of the communications, meaning the endpoints of the transmission, the length and time of the call, and so on. Other leaks have shown how foreign owned telecommunication companies seeking to operate in the United States are approached by US legal officials, called “Team Telecom,” who may request similar data sharing arrangements, such as compliance with CALEA or archiving of data on US soil for national security investigations.
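To make the metadata/content distinction concrete, the sketch below models a single call-detail record as it might appear in such a program. Every field name and value here is invented for illustration, not drawn from the leaked documents:

```python
# A hypothetical call-detail record: every field is "metadata." The audio
# of the call itself (the "content") is deliberately absent, yet the
# fields alone reveal who spoke to whom, when, for how long, and where.
call_record = {
    "caller": "+1-416-555-0132",           # originating number (invented)
    "callee": "+1-613-555-0178",           # destination number (invented)
    "start_time": "2013-06-05T14:02:11Z",  # when the call began
    "duration_seconds": 347,               # how long it lasted
    "cell_tower_id": "TOR-021",            # approximate caller location
}

assert "content" not in call_record  # no words spoken are recorded
```

Aggregated over millions of such records, these seemingly innocuous fields support exactly the social-network and movement analysis that makes metadata so valuable to intelligence agencies.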
In the case of leaks concerning Microsoft, the company reportedly helped the NSA circumvent encryption on its email, cloud, and VOIP services, including providing access to videoconference streams over Skype without users’ knowledge.
Other leaked documents have shed light on a top secret program, codenamed “Bullrun,” whose objective is to weaken encryption standards worldwide, including by encouraging companies to insert deliberate vulnerabilities into their encryption algorithms known only to the NSA. Given that Canada’s own signals intelligence agency, the Communications Security Establishment of Canada (CSEC), has a long-standing special relationship with the NSA, it should come as no surprise that Canada would develop similar programs. According to recent Globe and Mail reports, “for nearly two decades, Ottawa officials have told telecommunications companies that one of the conditions of obtaining a licence to use wireless spectrum is to provide government with the capability to bug the devices that use the spectrum.” Documents obtained by The Globe also reveal that as part of these requirements, Ottawa has demanded companies scramble encryption so that it can be accessed by Canada’s law enforcement agencies.
The leaks have touched off vigorous public policy and security debates in the US and abroad. The scope and scale of surveillance highlighted in the leaked documents (which are themselves probably only a partial glimpse) call into question recent law enforcement lobbying efforts claiming that the FBI was at risk of “going dark” because of new communication technologies. In light of Snowden’s revelations about wholesale surveillance programs, as well as the abundance of information users willingly give out in the public domain about themselves, their friends, their interests, and their political preferences, the assertion seems ludicrous. The controversy has also raised questions about the degree and effectiveness of oversight and public accountability for backdoor arrangements. While it is true that all three branches of US government approved the programs associated with the Snowden leaks, the deliberations of the court that oversees them, the Foreign Intelligence Surveillance Court (FISC), are themselves secret. Moreover, it appears that many elected officials were either ignorant of, or deliberately misled by government officials about, the true scope and scale of the programs in question. The leaks have also raised questions about oversight of similar programs operating outside the United States, such as those of the UK’s GCHQ or Canada’s CSEC, both of which have arguably far less independent scrutiny of their activities. While European countries are subject to more stringent privacy protections, some of their signals intelligence programs have even less rigorous oversight than what appears to be the case in the UK and Canada, and certainly less than in the United States. Backdoors into massive and detailed databases of private information, such as appear to have been constructed in the US and allied countries, call for an urgent debate about proper checks and balances around national security in the world of Big Data.
Quite apart from these concerns about privacy and the potential abuse of unchecked power is an additional concern about the security implications of backdoors. Building backdoors into devices and infrastructure may be useful to law enforcement and intelligence agencies, but it also provides a built-in vulnerability for anyone else who seeks to exploit it, and in doing so actually contributes to insecurity for the whole of society that depends on that infrastructure. In 2013, a team of twenty computer security researchers issued a report, published by the US-based Center for Democracy and Technology, arguing that “mandating wiretap capabilities in endpoints poses serious security risks,” and that building “intercept functionality into ... products is unwise and will be ineffective, with the result being serious consequences for the economic well-being and national security of the United States.” As noted security expert Bruce Schneier points out with respect to the FBI’s claims, in terms that could be generalized to other security services:
The FBI believes it can have it both ways: that it can open systems to its eavesdropping, but keep them secure from anyone else’s eavesdropping. That’s just not possible. It’s impossible to build a communications system that allows the FBI surreptitious access but doesn’t allow similar access by others. When it comes to security, we have two options: We can build our systems to be as secure as possible from eavesdropping, or we can deliberately weaken their security. We have to choose one or the other.
MIT’s Susan Landau contributes an additional argument to these security concerns, which she labels the “time factor.” CALEA mandated that telecommunications equipment be built with law enforcement access capabilities, and it applied retrospectively to older technologies. But it never factored in that the back doors could become vulnerable over time, as attack capabilities progressed and knowledge of the back doors spread. These concerns were amply demonstrated in a 2010 Black Hat presentation by IBM researcher Tom Cross on security vulnerabilities connected to the lawful intercept capability in Cisco routing products (which were based on standards set by the European Telecommunications Standards Institute), in which he showed how the lawful intercept mechanism could be exploited to enter the systems and eavesdrop on communications without leaving a trace.
There have been several real-life demonstrations of these types of vulnerabilities that should give us pause. Recent political scandals in Greece and Italy, in which prominent officials and business people had their phones secretly tapped and the information used for purposes of blackmail and slander, were enabled by poorly designed lawful access back doors in cell phone infrastructure. The Shodan search engine, a project that scans the Internet for connected devices, has vividly demonstrated the number of easily accessible systems, some critical, that can be reached from the wider Internet. Some of these access points exist because of vulnerabilities in the code, but in other cases back doors have been built in and then forgotten about, or poorly maintained, by the engineers. In one case, the Canadian company RuggedCom, whose industrial grade equipment is used to control everything from power utilities to military installations and traffic control systems, shipped devices with a back door hard coded with a single username (“factory”) and a password that was automatically generated from the MAC address of the device, which itself could easily be found using Shodan.

In 2008, Citizen Lab researchers discovered that the Chinese version of the popular VOIP product Skype (called TOM-Skype) had been coded with a special surveillance system: whenever certain keywords were typed into the chat client, data would be sent to a server in mainland China (presumably to be shared with China’s security services). Upon further investigation, it was discovered that the server on which the chat messages were stored was not password protected, allowing for the download of millions of personal chats, many of which included credit card numbers, business transactions, and other private information.
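The structural flaw in that hard-coded “factory” account can be sketched in a few lines. The vendor’s actual password-derivation algorithm is not reproduced here; the hash below is a hypothetical stand-in that preserves the essential point, namely that the password is computed entirely from a value (the MAC address) that the device itself exposes to the network:

```python
import hashlib

def backdoor_password(mac: str) -> str:
    # Hypothetical derivation: any deterministic function of the public
    # MAC address has the same flaw, whatever the vendor actually used.
    return hashlib.md5(mac.lower().encode()).hexdigest()[:8]

def login(username: str, password: str, device_mac: str) -> bool:
    # Undocumented vendor account checked before normal authentication.
    if username == "factory" and password == backdoor_password(device_mac):
        return True
    return False  # (the legitimate credential check is elided here)

# The MAC address can be read off the network, or found via Shodan, so
# any attacker can reproduce the "secret" password for a given device:
mac = "00:0a:dc:12:34:56"
assert login("factory", backdoor_password(mac), mac)
```

There is no secret here at all: the “key” to the back door travels with the lock.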
By definition, back doors engineered for lawful interception are engineered vulnerabilities by a different name. In these and likely numerous other undiscovered cases, those vulnerabilities offer a direct point of access for exploitation. Building insecurities into the communications infrastructure that surrounds us may be a shortcut for law enforcement and intelligence, but is it one worth making relative to the vulnerabilities that are created for all of society?
Not only are back doors a concern for civil liberties and infrastructure security, they also set a bad precedent for, and help legitimize, the very same practices abroad. The TOM-Skype example may look amateur in comparison to programs like the NSA’s PRISM, but it is a local variation on a common theme. No doubt one implication of Snowden’s revelations will be the spurring on of numerous national efforts to regain control of information infrastructures through national competitors to Google, Verizon, and the other companies implicated, not to mention the development of national signals intelligence programs that attempt to duplicate the US model. Already prior to the revelations, numerous companies faced complex and, at times, frustrating national “lawful access” requests from newly emerging markets. Many countries of the global South lack even basic safeguards and accountability mechanisms around the operations of security services, and their demands on the private sector could contribute to serious human rights violations and other forms of repression. For example, India, the United Arab Emirates, Saudi Arabia, and other countries have all demanded that Canada’s BlackBerry put in place lawful interception and monitoring capabilities, compliance with which could begin to affect the nature of the BlackBerry product itself. Although the company rarely discloses anything about these agreements, a spokesperson admitted that it had developed a monitoring system for its consumer devices for the government of India and will even train technicians in interception back in Canada. (In India, the government requires all telecommunications companies to make data available to security agencies without a warrant or other basic safeguards.) What was once a signature feature of the BlackBerry product (uncrackable encryption) has been compromised in the name of opening up new markets.
Alternatives to Backdoors
If the backdoor paradigm is questionable on civil liberties, economics, security, and foreign policy grounds, what are the alternatives? As long as we live in a dangerous world with real threats, intelligence agencies are going to be an important element of liberal democratic government. Likewise, without agencies to enforce the law, the very basis for the exercise of human rights, including privacy, would quickly diminish.
Elsewhere, I have argued that civil society needs to develop a cyber security strategy, and part of that strategy must involve a response to the back door paradigm on security grounds. Rather than building in insecurities by design, a better strategy would emphasize the opposite: encouraging the widespread use of state-of-the-art encryption systems, as well as the adoption of standards such as “https by default” and “two-factor authentication.” Another component of such a strategy would be to emphasize the security benefits of open source software, which provides greater assurance that companies have not built in special back door privileges by design and that customers get what they sign up for in their terms of service. In line with this approach would be regulations around deletion of stored data and proposals like the “right to be forgotten,” which, although typically seen in a rights-based framework, have security implications insofar as they restrict the copies of stored data that might be exploited for nefarious purposes.
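Of the standards mentioned above, two-factor authentication is perhaps the easiest to make concrete. The sketch below implements the time-based one-time password (TOTP) scheme of RFC 6238, the algorithm behind most authenticator apps, using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-based one-time password (RFC 4226): HMAC-SHA1 over the
    # big-endian counter, dynamically truncated to a short numeric code.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    # Time-based variant (RFC 6238): the counter is the number of
    # 30-second intervals elapsed since the Unix epoch.
    t = time.time() if timestamp is None else timestamp
    return hotp(secret, int(t // step))

# The RFC test vectors use the ASCII secret "12345678901234567890":
assert totp(b"12345678901234567890", timestamp=59) == "287082"
```

Because the code changes every 30 seconds and is derived from a secret shared only between user and service, a stolen password alone is no longer enough to log in. This is the opposite of the backdoor paradigm: security added at the endpoints rather than subtracted from them.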
As for legitimate lawful access requests, the group of experts involved in the CDT study argue that “law enforcement’s use of passive interception and targeted vulnerability exploitation tools creates fewer security risks for nontargets and critical infrastructure than do design mandates for wiretap interfaces.” In a world of “Big Data” in which so much of our information is routinely given away as part of our daily life, law enforcement and intelligence agencies need to find ways to work within this universe as it exists, rather than drill holes in it from the inside-out in ways that undermine confidence and create additional risks for all of society. Those lawful access provisions that are still required should be infrequent and strictly controlled with rigorous oversight and public accountability provisions. Direct tapping of entire services wholesale should be eliminated. Not only will this protect civil liberties and prevent the concentration of power in unchecked hands, it will ensure that we are not doing more to undermine our own security in an overzealous surveillance quest.
The communications environment we have created around us is complex and expanding, and the security issues surrounding it are serious and important. The backdoor paradigm is symptomatic of a larger trend that privileges intelligence and security agencies, builds borders around cyberspace, and undermines checks and balances on concentrations of power. It is not only bad for civil liberties, it is bad for security, and it sets dangerous precedents that legitimize practices abroad we ostensibly oppose. Law enforcement and intelligence agencies are necessary and important to liberal democracy, but there is more than one way for them to go about their missions. In the world of Big Data, in which so much personal information is readily abundant, methods of “connecting the dots” other than drilling holes into our communications infrastructure must be explored. Government surveillance practices need radical re-thinking, beginning first and foremost with a reinforcement of the basic checks and balances that have, for too long, been sidestepped in the name of security.