Geekery

On Repeatable Internet Measurement: Part Two

The issues identified in part one of this post led to yet another search for solutions to the problem of making (especially passive) measurement repeatable. Of course, this has been done before, but I took as an initial principle that the social aspects of the problem must be solved socially, and worked from there. What emerged was a set of requirements, and an architecture for a computing environment together with a set of associated administrative processes, that allow analysis of network traffic data while minimizing risk to the privacy of the network’s end users and ensuring spatial and temporal repeatability of the experiment. For lack of a better name, I decided to call a collection of data managed under this architecture an analysis vault.

The key principle behind this architecture is that if data can be open, it should be; if not, then everything else must be.
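
To make that principle a bit more concrete, here is a minimal sketch, in Python, of how an analysis vault’s boundary might behave. Everything in it is my own illustration rather than a settled design, and all the names are hypothetical: the raw flow records stay inside the vault, the analysis code is the open, repeatable artifact, and only its aggregate output is released, after a deliberately crude release check standing in for administrative review.

    # Hypothetical sketch of an analysis vault boundary; names are
    # illustrative, not part of any published design.
    from dataclasses import dataclass, field
    from typing import Callable, Iterable

    @dataclass(frozen=True)
    class FlowRecord:
        """One passively observed flow; stands in for raw traffic data."""
        src: str         # source address (sensitive: never released)
        dst: str         # destination address (sensitive: never released)
        bytes_sent: int

    @dataclass
    class AnalysisVault:
        """Holds sensitive records, runs analyses, releases only aggregates."""
        _records: list[FlowRecord] = field(default_factory=list)

        def load(self, records: Iterable[FlowRecord]) -> None:
            self._records.extend(records)

        def run(self, analysis: Callable[[Iterable[FlowRecord]], dict]) -> dict:
            """Run an openly published analysis function inside the vault.

            The function sees the raw records, but only its returned
            summary, after a release check, is handed back to the analyst.
            """
            result = analysis(self._records)
            return self._release_check(result)

        def _release_check(self, result: dict) -> dict:
            # Crude stand-in for administrative review: refuse any result
            # that repeats an address seen in the raw data, as a proxy for
            # "no per-user information leaves the vault".
            addresses = {r.src for r in self._records} | {r.dst for r in self._records}
            for value in result.values():
                if isinstance(value, str) and value in addresses:
                    raise PermissionError("release policy: raw identifiers may not leave the vault")
            return result

    def mean_flow_size(records: Iterable[FlowRecord]) -> dict:
        """An example analysis that is safe to release: a single aggregate."""
        records = list(records)
        return {"mean_bytes": sum(r.bytes_sent for r in records) / len(records)}

    vault = AnalysisVault()
    vault.load([FlowRecord("10.0.0.1", "10.0.0.2", 1200),
                FlowRecord("10.0.0.3", "10.0.0.2", 800)])
    print(vault.run(mean_flow_size))   # {'mean_bytes': 1000.0}

The point of the sketch is the shape of the interface rather than any of its details: the parts that can be open (the analysis code and its aggregate result) are, and the part that cannot (the traffic data itself) never crosses the boundary.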

On Repeatable Internet Measurement: Interlude

Part one of this post painted a somewhat bleak picture of the state of Internet measurement as a science. The dreariness will continue later this month in part two. And yet there seems to be quite a lot of measuring the Internet going on. It can’t all be that bad, can it?

A Media Policy for the 17th Century

I’ve been reading Tom Standage’s “Writing on the Wall” of late, which I can heartily recommend. It’s less subtle than “The Victorian Internet”, which counts among my favorite books of all time, but that was written before Twitter, and Twitter’s made us all less subtle, I think. What strikes me about his new book is not his thesis — that the “social media revolution” is nothing really new, just the application of new technology to our apparently instinctive love of gossip — but how well it illustrates that much of the present public policy debate over new media technology is very, very old.