A couple of months ago, I posted about leaving academia. Two weeks ago, I joined Google as a Site Reliability Engineering (SRE) manager. I’ll be working to keep bits of Google’s technical infrastructure running smoothly, at least once I’ve learned enough about how it works and what all the various switches and levers do to be dangerous. The past two weeks have been a deluge of new things to learn, but I’ve finally got my head far enough above water to reflect on things a bit.

Hitting DNS with a Sledgehammer (for Fun and Profit)

About three years ago I started working part-time (20%) on SCION, a secure, available future Internet architecture. Since I wasn’t around much, I was given a nice easy project that wasn’t on anyone’s critical path: designing the naming system for SCION (up to that time it had been assumed SCION would just use DNS with new RRTYPEs to handle the new address families it introduces). After a few months of part-time thinking about (and rejecting) blockchains and distributed hash tables, I arrived at the design of RAINS, whose recursive acronym ostensibly stands for “RAINS, Another Internet Naming System”, but is really a comment on the weather in Zürich in November.

m11y and o11y

Looking back over the arc of my career in pseudoacademia, especially over the last three years of digging into transport stack evolution with the MAMI project, there are a few bits of work I’m especially happy to have been a part of. One of these is the inclusion of the spin bit in the QUIC transport protocol. The spin bit was conceived as the minimum useful explicit signal one could add to a transport protocol to improve measurability; the benefit, relative to the overhead, is IMO well worth it.

On the Security Ratchet

The IETF uses Jabber for instant messaging during working group meetings, as does the IAB for its own teleconferences and meetings. Since I didn’t really feel like shopping around for a Jabber account, and XMPP integration with Google Talk shut down in the middle of the decade, I decided a few years ago to run my own server, which I pretty much only use for connecting to IETF conference rooms and for chatting with IETF folks as a backchannel during meetings.

Leaving Academia

I always love going to Schloss Dagstuhl, a retreat for computer scientists in the middle of nowhere in Saarland, Germany. It’s a little difficult to get to, but the train ride (Wallisellen to Saarbrücken via Zürich and Mannheim) is a nice, slow way to step back from whatever context-switching overhead is dominating my days at the moment and start thinking about the theme of the workshop. Last October, I went to what’s probably my last Dagstuhl seminar for a while, spending three days around the billiard table and in the wine cellar figuring out whether there’s anything to be done about Encouraging Reproducibility in Scientific Research of the Internet.

And yet, it spins

I’m writing today from Berlin, after an excellent Passive and Active Measurement conference and a very long but fruitful week in London for IETF 101, which, for me, came to be dominated by the Spin Bit. The spin bit is an explicit signal for passive measurability of round-trip time, currently possible in TCP but not in QUIC due to the lack of acknowledgment and timestamp information in the clear. It’s an example of a facility designed to fulfill the principles for measurement as a first-class function of the network stack we laid out in an article published last year.
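The mechanism is simple enough to sketch: the server reflects the last spin value it saw, the client sends back the inverse, so an on-path observer sees the bit toggle once per round trip, and the time between toggles estimates the RTT. A minimal, hypothetical sketch of the observer side — the packet timestamps and extracted spin values are assumed inputs; real QUIC header parsing is out of scope here:

```python
def rtt_samples(packets):
    """packets: iterable of (timestamp, spin_bit) pairs for one flow,
    as seen by a passive on-path observer.
    Yields one RTT estimate per spin-bit transition after the first."""
    last_spin = None
    last_edge = None
    for ts, spin in packets:
        if last_spin is not None and spin != last_spin:
            if last_edge is not None:
                # Edges are spaced roughly one round-trip time apart.
                yield ts - last_edge
            last_edge = ts
        last_spin = spin
```

A real implementation would also need to handle reordering and flows that go idle, both of which distort edge spacing; this sketch assumes an in-order, continuously-sending flow.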

What does it mean to trust the Internet?

Tomorrow, I’ll take part in a panel discussion at ETH Zürich, entitled “Internet and Trust”. From the flyer for the discussion: “The Internet relies on so many layers of trust that one is sometimes surprised that [it] actually works”. This is true, but I suppose that’s a property of any system of sufficient complexity, when viewed by someone who understands it well enough to know how much bubble gum and duct tape is used to hold it together.

Live, via Internet, from the hammock

Internet architecture and Internet-centered research being a global enterprise, I spend between four and seven weeks a year on the road, depending on which year, your definition of road and your definition of week, and a fair amount of time in teleconferences in various timezones in the time in between. One of the fixtures in my calendar is the thrice-annual meeting of the Internet Engineering Task Force (IETF), taking place right now in Chicago.

Making the Internet Safe for ECN

I’m off to New York in a couple of weeks to present a paper at PAM (which I mentioned here, though sadly the flashy automated demo I was hoping to build was a bit optimistic). The question: “is it safe to turn on ECN on client machines by default, completing the end-to-end deployment of a simple fifteen-year-old protocol to give us a better way to signal network congestion than simply dropping packets on the floor?” The answer is: “define safe.” Our key findings:
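For the unfamiliar, the signal itself lives in the two low-order bits of the IP TOS/Traffic Class byte, per RFC 3168: endpoints mark packets as ECN-capable, and a congested router flips the mark to “congestion experienced” instead of dropping. A minimal sketch of decoding those codepoints — the raw byte is an assumed input:

```python
# RFC 3168 ECN codepoints, carried in the two low-order bits of the
# IP TOS / IPv6 Traffic Class byte.
ECN_NAMES = {
    0b00: "Not-ECT",  # sender is not ECN-capable
    0b10: "ECT(0)",   # ECN-capable transport
    0b01: "ECT(1)",   # ECN-capable transport
    0b11: "CE",       # congestion experienced, set by a router in lieu of a drop
}

def ecn_codepoint(tos_byte):
    """Return the ECN codepoint name for an IP TOS/Traffic Class byte."""
    return ECN_NAMES[tos_byte & 0b11]
```

The “is it safe” question is then whether paths and middleboxes actually deliver packets carrying ECT marks intact, rather than dropping or bleaching them.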

On Repeatable Internet Measurement: Part Two

The issues identified in part one of this post led to yet another search for solutions to the problem of making (especially passive) measurement repeatable. Of course, this has been done before, but I took as an initial principle that the social aspects of the problem must be solved socially, and worked from there. What emerged was a set of requirements and an architecture for a computing environment, together with a set of associated administrative processes, that allows analysis of network traffic data while minimizing risk to the privacy of the network’s end users as well as ensuring spatial and temporal repeatability of the experiment. For lack of a better name I decided to call an instance of a collection of data using this architecture an analysis vault. The key principle behind this architecture is that if data can be open, it should be; if not, then everything else must be.