Active Resistance against Passive Surveillance

So I complain about a lull in the news about the more-or-less complete compromise of the Internet by the National Security Agency et al, and then this goes and happens.

One of my old standard interview questions for people applying for jobs with some responsibility for information security was “are you paranoid?” When the lighting was good, and my eyes bugged out just right, this could be a little scary. It’s time to retire this question, I think, because the answer would seem to be “no, I am clearly not paranoid enough”, unless the applicant shows up to the interview in a tin-foil hat.

Okay, I wouldn’t be too alarmist about it: the theoretical underpinnings of public-key cryptography are probably secure, so some implementations of some protocols using some ciphers are probably still trustworthy. And there is nothing new to the revelation that the NSA spends ridiculous amounts of money every year trying to break cryptosystems both in theory and in practice: this is, after all, exactly what the NSA was chartered to do. But there is now a great deal of public uncertainty about which communications channels are safe. Add to this the fact that the NSA has an asymmetric ability to compromise physical infrastructure on American soil and/or owned by entities subject to American jurisdiction, and the United States just became a much less attractive place to do any kind of business which involves (1) any communications network and (2) any sort of secrecy whatsoever (i.e., most kinds of business). This effect will not be immediate. But I fear it will prove to be devastating.

As for Schneier’s call to make the upcoming IETF meeting in Vancouver about engineering resistance to pervasive surveillance into the Internet, well, I’m doing my part. The outcome of this work will probably be an exploration of exactly what information radiates off IETF protocols, leading to recommendations to protocol designers to:

  1. Use transport-layer security everywhere. We’ve been moving in this direction for quite some time. There are serious problems with the certificate authority system — stemming in large part from the decision to include support for the CA business model in the design requirements — which current work in DANE is attempting to address.
  2. Use end-to-end security for protocols which use multiple hops at the application layer. SMTP is the big one here, and here we have a problem: S/MIME and PGP protect only the message payload, leaving headers (addresses, subjects, etc.) unprotected. Addresses need to be left in plaintext to route messages to their destinations, so for electronic mail, anyway, there are design changes that need to be made.
  3. Resist protocol fingerprinting by adding randomness to metadata everywhere we can: to interpacket times (as in the SSH timing hack) and to packet and flow sizes (as used, e.g., by the snack Skype detector). There’s lots of work to do here.
  4. Add indirection to the network where possible to make it difficult to associate network addresses with physical locations, organizations, or individuals. Tor does this, but has the problem of directing all anonymized traffic through a set of exit nodes, which represent tempting targets for state security services. So there’s probably lots of work to do here, too.
  5. Design for end-to-end as opposed to closed services. The IETF does this reflexively anyway, but I think we have an opportunity here to advocate publicly for an end-to-end Internet, as an effort such as PRISM would be rather pointless if most communications among people, by volume, hadn’t already moved to single points of failure.
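To make the first recommendation concrete, here is a minimal sketch of what “transport-layer security everywhere” looks like from the client side using Python’s standard `ssl` module: build a context that verifies certificates and hostnames and refuses old protocol versions, with no plaintext fallback. The function names and the host/port parameters are placeholders of my own, not from any particular protocol spec.

```python
import socket
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Build a client context that verifies certificates and hostnames."""
    # create_default_context() enables certificate verification and
    # hostname checking by default -- opting out should have to be loud.
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    return context

def fetch_tls_version(host: str, port: int = 443) -> str:
    """Connect over TLS only; there is deliberately no plaintext path."""
    context = strict_tls_context()
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```

Note that even a strict client like this still trusts the system’s CA store, which is exactly the weakness the DANE work mentioned above is trying to address.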
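The header problem in the second recommendation is easy to see in code. Below is a sketch (with made-up addresses and a stand-in ciphertext blob, not real PGP output) of what a PGP-protected message looks like on the wire: the body is opaque, but everything an SMTP relay — or a passive observer — needs to see stays in cleartext.

```python
from email.message import EmailMessage

# Compose a message whose body is an encrypted blob. The blob here is a
# placeholder, not actual PGP ciphertext.
msg = EmailMessage()
msg["From"] = "alice@example.org"     # visible to every relay
msg["To"] = "bob@example.net"         # needed for routing -> plaintext
msg["Subject"] = "quarterly numbers"  # leaks even when the body doesn't
msg.set_content(
    "-----BEGIN PGP MESSAGE-----\n"
    "...ciphertext...\n"
    "-----END PGP MESSAGE-----\n"
)

# What a passive observer on any hop sees includes all of the headers:
wire = msg.as_string()
assert "bob@example.net" in wire
```

The addresses have to stay readable for SMTP to route the message at all, which is why fixing this requires design changes to the mail architecture rather than just better payload encryption.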
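The third recommendation can also be sketched briefly. The following toy functions (my own illustration, not any deployed protocol) pad each message up to a randomly chosen size bucket and add random jitter before sending, so that packet sizes and interpacket times reveal less about which protocol is running:

```python
import os
import random
import struct
import time

# Candidate padded sizes, in bytes. A real design would tune these per
# protocol; messages larger than the biggest bucket need fragmentation,
# which this sketch omits.
BUCKETS = [256, 512, 1024, 2048]

def pad_message(payload: bytes) -> bytes:
    # Prefix the real length, then fill with random bytes up to a
    # randomly chosen bucket that fits, so the wire size is decoupled
    # from the payload size.
    framed = struct.pack(">I", len(payload)) + payload
    bucket = random.choice([b for b in BUCKETS if b >= len(framed)])
    return framed + os.urandom(bucket - len(framed))

def unpad_message(padded: bytes) -> bytes:
    (length,) = struct.unpack(">I", padded[:4])
    return padded[4:4 + length]

def jittered_send(send, payload: bytes, max_jitter: float = 0.05) -> None:
    # Randomize interpacket timing (cf. the SSH timing attack) before
    # handing the padded payload to the real transport.
    time.sleep(random.uniform(0.0, max_jitter))
    send(pad_message(payload))
```

The tradeoff mentioned in the closing paragraph is visible right here: every padded byte is wasted bandwidth and every jittered send is added latency, paid for in exchange for a flatter traffic profile.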

Working to change the Internet to actively resist an adversary intent on widespread, pervasive surveillance will be hard work. It will involve tradeoffs for latency and bandwidth. It will make life easier for those who wish to stay hidden from authority, and will make the job of those authorities – even those with a legitimate interest in protecting the security of their citizens – more difficult. But if the Internet is to continue to form the basis of a trustworthy global communications network, it is necessary.