Friday, January 29, 2010

cookies by many different names

Cookies are great, and everyone loves them (chocolate chip are my favorite), but if we leave the Internet to its own devices, it could drive itself into a state of utter deception where other technologies are secretly used in place of cookies for tracking and identification purposes.

Having spent the past two days submerged in various privacy discussions, I've started thinking deeply about cookies and tracking again. The fundamental privacy concern about HTTP cookies (and other varieties like Flash LSOs) is that such a technology gives a web server too much power to connect my browsing dots. Third-party cookies exacerbate this problem -- as do features like DOM storage, Google Gears, etc.

Come to think of it, cookies aren't unique in their utility as dot-connectors: browsing history can also be used. A clever site can make guesses about a user's browsing history to learn things such as which online bank was recently visited. This is not an intended feature of browsing history; it came about simply because such a history exists.
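To make the dot-connecting concrete, here's a rough sketch of that history-snooping trick: style visited links distinctively, then ask the layout engine what color a guessed link ended up. The URLs and the color value below are just illustrative placeholders.

// Assumes the page stylesheet contains: a:visited { color: rgb(255, 0, 0); }
function probablyVisited(url) {
  var link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  var color = window.getComputedStyle(link, null).getPropertyValue("color");
  document.body.removeChild(link);
  return color == "rgb(255, 0, 0)"; // the :visited color means "in history"
}

// Probe a few guesses (purely illustrative URLs)
var guesses = ["http://bank.example.com/login", "http://webmail.example.com/"];
var hits = guesses.filter(probablyVisited);
alert("probably visited: " + hits.join(", "));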

But wait, cookies, Flash LSOs, DOM storage, and browsing history aren't uniquely useful here either! Your browser's data cache can be used like cookies too! Cleverly crafted documents can be injected into your cache and then re-used from the cache to identify you.
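Here's a rough sketch of how that can work (all values made up): the server plants a unique cache validator on the first visit, and the browser dutifully echoes it back on later visits, cookies or no cookies.

First visit -- the server tags the cached resource:

HTTP/1.1 200 OK
ETag: "visitor-8271"
Cache-Control: private, max-age=31536000

Later visit -- the browser revalidates its cached copy and identifies itself:

GET /tracker.gif HTTP/1.1
If-None-Match: "visitor-8271"

HTTP/1.1 304 Not Modified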

In fact, all state data created or manipulated in a web browser by web sites has the potential to be a signal for tracking or other dot-connecting purposes. Even if the state change seems to be write-only, there could be other features that open up a read channel in the other direction (e.g., the CSS history snooping trick mentioned above -- or timing attacks).

Stepping back and thinking about these dot-connecting "features" in the context of the last couple of days' privacy discussions has me wondering whether there's a way we can better understand client-side state changes in order to holistically address this arbitrary spewing of identifying information. I think the first step toward empowering users to better protect themselves online is to understand what types of data are generated by or transmitted by the browser, and what can be used for connecting the dots. After we figure that out, maybe we can find a way to reflect this to users so they can put their profile on a leash.

But while we want to help users maintain as much privacy as possible while browsing, we can't forget that many of these dot-connecting features are incredibly useful, and removing them might make the Web much less awesome. I like the Web; I don't want it to suck, but I want my privacy too. Is there a happy equilibrium?

How useful is the Web with cookies, browsing history, and plug-ins turned off? Can we find a way to make it work? There are too many questions and not enough answers...


Friday, November 20, 2009

update on HTTPS security

Version 2.0 of my Force-TLS add-on for Firefox was released by the AMO editors on Tuesday, and it incorporates a few important changes: it supports the Strict-Transport-Security header introduced by PayPal, and it has an improved UI that lets you add and remove sites from the forced list. For more information, see my Force-TLS web site.
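For the curious, the header itself is just a response header served over HTTPS, roughly like this (the max-age value here is arbitrary):

Strict-Transport-Security: max-age=15768000; includeSubDomains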

On a similar topic, I've been working to actually implement Strict-Transport-Security in Firefox. The core functionality is in there, and if you want to play with some demo builds, grab a custom-built Firefox and play. These builds don't yet enforce certificate integrity as the spec requires, but aside from that, they implement STS properly.

The built-in version performs an internal redirect to upgrade channels -- before any request hits the wire. This is an improvement over the way version 1 of Force-TLS hacked up the HTTP protocol handler, and it doesn't suffer from the subtle bugs that can pop up when a channel's URI is mutated through an nsIContentPolicy. I'm not sure add-ons can completely trigger the proper internal redirect, since not all of the HTTP channel code is exposed to scripts; an add-on would need to replicate some of the functions compiled into nsHttpChannel, opening up the possibility of obscure side effects if the add-on gets out of sync with the binary's version of those functions.
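For contrast, here's a rough sketch (not Force-TLS's actual code) of the nsIContentPolicy-style hack that version 1 relied on; isForced() stands in for a hypothetical lookup against the user's forced-host list, and the XPCOM registration as a content-policy category is omitted:

var forceTLSPolicy = {
  // Called before each load; contentLocation is the nsIURI about to be fetched
  shouldLoad: function(contentType, contentLocation, requestOrigin,
                       context, mimeTypeGuess, extra) {
    if (contentLocation.scheme == "http" && isForced(contentLocation.host))
      contentLocation.scheme = "https"; // mutate the URI in place -- the hack
    return Components.interfaces.nsIContentPolicy.ACCEPT;
  },

  shouldProcess: function() {
    return Components.interfaces.nsIContentPolicy.ACCEPT;
  }
};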

Edit: The newest version of NoScript does channel redirecting by setting up a replacement channel in a really clever way -- pretty much the same as my patch. It replicates some of the internal-only code in nsHttpChannel, though, so it would need to be updated in NoScript if for some reason we change it in Firefox.


Friday, October 02, 2009

CSP Preview!

Brandon Sterne and I released a preview of Firefox with Content Security Policy features built in. There are still little bits of the specification that aren't yet ready (like HTTP redirection handling), but most of the core functionality should be there.

If you'd like to play around with this pre-release version of Firefox (very alpha, future release) that has CSP built in, download it here! You can test it out at Brandon's demo page.

In case you're not familiar with CSP, it's a content-restriction system that allows web sites to specify what types of content can be embedded in their pages and where that content can be loaded from. It's very similar to something called HTTP Immigration Control that I was working on in grad school, so I'm very excited to be part of the design, specification, and implementation -- hopefully a big step towards securing the web.
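To give a flavor of it, a site using this preview might send a policy header roughly like the following (directive details may still shift as the spec settles): it permits content from the page's own origin, images from anywhere, and scripts only from one trusted host.

X-Content-Security-Policy: allow 'self'; img-src *; script-src trusted.example.com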

Previously: Shutting Down XSS with Content Security Policy and CSP: With or Without Meta?

Update: The old download link expired. New one should have a much longer lifetime (here).


Thursday, September 17, 2009

notawesome

While discussing privacy and Firefox 3.5 with Chris a couple of weeks ago, we stumbled upon the thought that people might want to select which bookmarks show up in the automatic suggestions of Firefox 3's Awesome Bar. This whole discussion really started with some public metrics and commentary in the blogosphere.

In mid-August, Ken Kovash wrote about the reasons users gave for not upgrading from Firefox 2 to Firefox 3.0. The number one reason was, surprisingly, the Awesome Bar. Without going into detail, the gist was that people didn't want certain bookmarks showing up when they start typing URLs.

Perhaps the settings weren't obvious enough, but users can set the Awesome Bar to search only bookmarks, only history, both, or neither (Alex Faaborg discussed this in June, in fact).

Here's the use case: Bob bookmarks a couple porn sites, then during a public presentation, he starts typing "www" in the URL bar. His porn sites show up in the suggestion list, and everyone in the audience gasps.

The work-arounds I see for this are:

  1. Use a separate browser for "private" sites.
  2. Use a separate Firefox profile for browsing "private" sites.
  3. Use Private Browsing when browsing "private" sites (but then you can't bookmark the sites).
  4. Turn off bookmarks and/or history searching for the Awesome Bar.

But maybe this isn't good enough for everyone. Some folks might want to just hide a couple of bookmarks from the awesome bar. We need a way to make certain bookmarks "not awesome" so they won't show up.

Enter bookmark tags... you can add tags to bookmarks to find them easily. Why not tag bookmarks with "notawesome", then somehow hide those from the awesome bar search?

On a whim, I hacked together a quick addon to do this: notawesome!
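The core idea can be sketched roughly like this -- this is not the add-on's actual code, just an illustration of using the Places tagging service to collect the URIs that should be excluded from suggestions (the filtering itself is left out):

// Rough sketch: gather every URI tagged "notawesome" so the suggestion
// list can skip them.
var taggingService = Components.classes["@mozilla.org/browser/tagging-service;1"]
                               .getService(Components.interfaces.nsITaggingService);

var notAwesome = taggingService.getURIsForTag("notawesome"); // array of nsIURI

function isNotAwesome(uriSpec) {
  return notAwesome.some(function(uri) { return uri.spec == uriSpec; });
}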

Lifehacker picked up on this (I don't know how they found it buried in AMO), and apparently some folks find it useful.

To those 800 people already using it: thanks for trying it out, and thanks for your comments! I'll see if I can find some time to make it better. If anyone else wants to hack on it, let me know...


Thursday, August 06, 2009

inheriting XPCOM across languages

I've been working on an add-on for Firefox 3.* recently, and I came across a situation where I wanted to do a little XPCOM component inheritance. Basically, there's an HTTP protocol handler in Firefox that is used in a variety of places, mainly in the creation of URIs and connections over HTTP. I wanted to modify the HTTP protocol handler so that it could "filter" each HTTP URI before a connection is created, and maybe upgrade it to HTTPS if necessary (ForceTLS: see the AMO listing, my site, and the blog entry).

Anyhow, since there can be only one HTTP protocol handler, I have to somehow modify it, and since it's written in C++, I basically have to write my own from scratch to deploy it in an add-on.

But wait, there's got to be an easy way. Here's a thought: create a really basic component, capture a reference to the existing HTTP protocol handler, register the new one as the HTTP protocol handler, and for all method calls and property accesses on my handler, delegate back to the original protocol handler. In js-pseudocode:

myHandler.aPropertyAccessed = function(propName, context) {
  // imaginary hook: called on any property access
  if (typeof this[propName] === 'undefined')
    return gOldHandler[propName];
  return this[propName];
};

myHandler.aFunctionCalled = function(fname, args) {
  // imaginary hook: called on any method call
  if (typeof this[fname] === 'function')
    return this[fname].apply(this, args);
  return gOldHandler[fname].apply(gOldHandler, args);
};


But of course it's not that easy, because JavaScript has no general property-access or function-call hooks like that (the way Python does). So instead I had to take to playing with prototypes, aided of course by my JS guru Ben:

// "@mozilla.org/network/protocol;1?name=http"
var kCID = "{4f47e42e-4d23-4dd3-bfda-eb29255e9ea3}";
var gOldHandler = Components.classesByID[kCID]
.getService(Ci.nsIHttpProtocolHandler);

function MyHandler() {}
MyHandler.prototype = {
//custom methods and overridden stuff here
}
MyHandler.prototype.__proto__ = gOldHandler;


But this didn't work because of XPCOM and QueryInterface: the JS object gOldHandler may support other interfaces, but their methods aren't exposed on the JS instance until it has been QI'ed to them. So I had to do something a bit more elaborate:


// Given two instances, copy in all properties from "base"
// and create forwarding methods for all functions.
// (The parameter is called "base" because "super" is a reserved word in JS.)
function inheritCurrentInterface(self, base) {
  for (let prop in base) {
    if (typeof self[prop] === 'undefined') {
      if (typeof base[prop] === 'function') {
        (function(prop) {
          self[prop] = function() {
            return base[prop].apply(base, arguments);
          };
        })(prop);
      } else {
        self[prop] = base[prop];
      }
    }
  }
}

function MyHandler() {
  // grab initial methods (nsIHttpProtocolHandler)
  inheritCurrentInterface(this, gOldHandler);
}

MyHandler.prototype = {
  QueryInterface: function(aIID) {
    gOldHandler.QueryInterface(aIID);
    inheritCurrentInterface(this, gOldHandler);
    return this;
  },

  newURI: function(spec, originCharset, baseURI) {
    var uri = gOldHandler.newURI.apply(gOldHandler, arguments);
    // ... do my stuff here ...
    return uri;
  }
};


Essentially, I have to import the functions and variables from the old HTTP protocol handler, and every time my instance (which is replacing the old protocol handler) is QI'ed, I have to QI the old one and re-import all its properties. This is because the old handler was also an nsIObserver and who knows what else.

I implemented my own newURI method by wrapping the one in the old handler and manipulating the URI that comes out of it. Because this is manually defined, it won't be shadowed by functions imported by the inheritCurrentInterface() calls.

The only lingering XPCOM question I've got is what to do with getInterface(). I think because of the inheritCurrentInterface() implementation, getInterface will get imported with the appropriate functionality when it's needed, but I'm not sure.

So I guess the next step is to try and figure out how to provide a JS library that makes this all a lot easier. I'd like some syntax like:

Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

var MyService = XPCOMUtils.extendService(kCOMPONENT_CLASS_ID);
MyService.constructor = function(foo) {
  // do something with foo
};
MyService.prototype = {
  // override methods here
  componentMethod: function(a, b, c) {
    super.componentMethod(a, b, c);
  },
};

// then the rest of XPCOMUtils init stuff...


That syntax may not be workable, but something like it would be nice.


Tuesday, April 21, 2009

roll your own EV

In working on a project recently, I found myself wanting to become an EV-SSL certificate authority (EV means Extended Validation). Lofty goals, yes, but really I just wanted to play with EV certificates and see if a couple of things were feasible. I'll post what happens as I figure it out.

Anyway, I needed to find a way to get a browser to accept a root CA that I created, and then get the browser to trust that root CA to issue EV certificates. This is harder than it sounds; regular SSL root certificates can be added easily to any browser, but EV root certs can't. This protects users from accidental or malicious installation of EV root certs -- but unfortunately it also protected me from easily doing it.

Turns out, Firefox will let you "test" some CA certs as EV authorities, but you have to get your hands on a debugging build. Not only that, but unless you want to maintain a fresh CRL or OCSP server, you'll have to modify the source code. Sounds daunting, but it really isn't too bad. I've documented the whole process here, and I'll summarize in this blog post.

1. Create an EV-SSL Certificate Authority, and make an EV cert. This sounds fancy, but it basically means: create a certificate authority, then issue a cert carrying a specific policy OID (a rough sketch of the relevant openssl.cnf bit follows this list). The differences between regular CAs and EV CAs are minimal, except in how the browser decides to classify them. In short, this should do the trick:
./CA.pl -newca

openssl req -config ./openssl.cnf -new -keyout newkey.pem \
-out newreq.pem -days 30

openssl ca -config ./openssl.cnf -policy policy_anything \
-out newcert.pem -infiles newreq.pem
Details here.

2. Tame Firefox. This involves patching the Firefox source code to perform lazy freshness checks on certificates (there's a patch for that here) and setting it up to accept externally defined EV root authorities (you list them in a text file). Then you must compile the source in debug mode to enable it. Details here.

3. Install your CA and go. You have to extract the base-64 encoded subject and serial number out of your CA certificate by installing this patch, compiling the NSS tools, and running the pp tool on your root certificate. Once you've got that data, put it, the EV policy OID of your choice, and the CA cert fingerprint in a file called "test_ev_roots.txt". That text file goes in your Firefox profile directory. Once that's set up, you run Firefox, install the root CA as a regular SSL trusted authority, and you're ready to go. Details here.
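As promised in step 1, the policy OID ends up in the cert via the certificatePolicies extension in openssl.cnf; roughly something like the fragment below. The section name and the OID here are made up for illustration -- a real EV OID would come from the CA's own policy arc.

# x509 extensions section used when signing the EV cert
[ v3_ev ]
certificatePolicies = 1.3.6.1.4.1.99999.1.1.1
basicConstraints = CA:FALSE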

Summary. It's not impossible to install a root certificate and get Firefox to consider it an EV root, but it is surely difficult (and that's a good thing). The instructions presented in this post are only a summary and are not intended to be the details, which can be found here.

Edit: I guess I should explain that EV means Extended Validation; basically, a more thorough check is performed by a certificate authority before issuing an EV certificate. [EV on Wikipedia]
