Cross-Site Request Forgery (aka CSRF or XSRF) is a dangerous vulnerability present in just about every website. The issue is so pervasive and so fundamental to the way the Web is designed to function that we've had a difficult time even reporting it as a "vulnerability". That's also a main reason why CSRF does not appear in the Web Security Threat Classification or the OWASP Top 10. Times are changing, and it's only a matter of time before CSRF hacks its way into the mainstream consciousness. Chris Shiflett (principal of OmniTI) and I were speaking about this today and about how best to convey the issue's importance. CSRF may in fact represent an industry challenge far exceeding that of Cross-Site Scripting (XSS).
Dare we speak of The Dangers of Cross-Domain Ajax with Flash?
Volume- Nearly every feature on every website is vulnerable to CSRF. When/if we begin reporting CSRF issues, it's going to be on the order of dozens per website, thousands when counting open source and commercial web applications (look out, Bugtraq), and in the millions when speaking on a Web-wide scale.
Identification- Finding CSRF is very difficult to automate with current scanning technology and by and large must be performed manually. Therefore, what would be considered a comprehensive vulnerability assessment becomes more time-consuming and expensive.
Hard to Solve- This is the really bad part about CSRF: it's much more difficult to fix, at least relative to the one- or two-line fixes we're used to with XSS or SQL Injection. CSRF solutions may require CAPTCHAs (blech), session tokens, flow control, etc. These solutions require many more lines of code, and a proper implementation is harder to get right. Imagine having to inform a developer that they're going to have to put CAPTCHAs or session tokens on every one of a hundred forms. Ugh.
Where we go from here
We are looking ahead to a serious and wide-reaching, yet-to-be-exploited vulnerability. The bad guys will eventually figure out how to monetize it, and our solutions are sorely lacking. For those in the industry who want to make a significant difference, THE FIELD IS WIDE OPEN. We need generic and innovative technology solutions for both CSRF identification and defense.
This post reminded me of a paper I read recently by Thomas Schreiber entitled "Session Riding - A Widespread Vulnerability in Today's Web Applications", where he does a pretty good job of explaining this class of attacks.
Chris Shiflett also makes mention of this paper on his blog.
Session Riding reminds you of CSRF because Session Riding is another name for CSRF, as is XSRF and "One Click Attacks".
I have helped clients implement fixes for XSRF and the problem isn't that hard to address with a little code. The trick is to implement the solution in the libraries that generate forms in pages.
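The library-level fix described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual implementation; the function names, the `csrf_token` field name, and the HMAC-over-session-id scheme are all assumptions:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would live in config,
# not in source code.
SECRET_KEY = b"replace-with-a-real-server-side-secret"

def make_csrf_token(session_id):
    # Derive a deterministic per-session token that the centralized
    # form-generation library can embed in every form it renders.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def hidden_field(session_id):
    # What the form library would emit inside each <form> it generates.
    return ('<input type="hidden" name="csrf_token" value="%s">'
            % make_csrf_token(session_id))

def is_valid(session_id, submitted_token):
    # Check the submitted token on every state-changing POST,
    # using a constant-time comparison.
    return hmac.compare_digest(make_csrf_token(session_id), submitted_token)
```

Because the token is generated and checked in one place, the hundred forms on the site get protection without touching each one by hand, which is the point about putting the fix in the libraries rather than in every page.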
Note that this blog protects against XSRF for comments by using a CAPTCHA.
Hard/easy is a relative term, and I should have spent more time on that. What I'm saying is that XSRF is more difficult to fix than XSS, for instance. Also, adding CAPTCHAs everywhere on a website is not exactly a viable alternative.
Conceptually the XSRF fix is simple, but as you well know from the field, nothing is ever THAT easy. Forms and websites are all over the place: no centralized libraries, strange dependencies, template weirdness, and not to mention people telling you, DON'T TOUCH IT!
Jeremiah, I am kinda shocked reading this blog entry.
CSRF is a kinda OLD topic, and to call it a new attack on the rise sounds so "shiflett"-like. People have been cheating with CSRF vulnerabilities in voting scripts for ages. And fun attacks like avatar images in bulletin boards that log out the user reading your thread are also nothing new (same for URLs that add swearwords to another person's guestbook). These attacks have been used by kiddies for ages, and it is not very intelligent to believe they have not been used by blackhats just as long. (Embedding URLs that change admin passwords or add admin users is also quite old...)
The second thing is: defending against CSRF is simpler than defending against XSS. This comes from a very simple calculation: the amount of output of a web application is far bigger than its number of forms. Therefore, for XSS you have to protect many more entities.
Additionally, protection against CSRF can be done transparently by parsing the HTML output, adding security tokens to the forms, and then dropping all POST variables at script start if the token is invalid. You could also kill all POST variables if the request fails a referrer check.
Both solutions can be implemented transparently as an extension to PHP or to your httpd. Such transparent protection is NOT possible for XSS, which shows that protecting against CSRF is simpler.
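The transparent-filter idea could be sketched like this (in Python rather than as a PHP or httpd extension, purely for illustration; the `__token` field name and the regex-based rewriting are assumptions, and a production filter would need a real HTML parser):

```python
import re

def inject_tokens(html, token):
    # Rewrite the outgoing HTML, adding a hidden token field to every form
    # just before its closing </form> tag.
    field = '<input type="hidden" name="__token" value="%s">' % token
    return re.sub(r"</form>", field + "</form>", html, flags=re.IGNORECASE)

def filter_post(post_vars, expected_token):
    # At request start, drop ALL POST variables when the token is
    # missing or invalid, so the application never sees the forged data.
    if post_vars.get("__token") != expected_token:
        return {}
    cleaned = dict(post_vars)
    del cleaned["__token"]
    return cleaned
```

Because both halves operate on the request/response boundary, the application code itself never has to change, which is what makes the approach "transparent".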
And saying that CSRF cannot be found by current scanning techniques is also quite confusing... What is the problem in resending POST requests found on a site a) with a Referer header that points to a different site (to detect referrer checks) and b) with token variables, detected by content or by name, sent back empty? All blackbox scanning techniques use this approach to detect SQL injection and similar vulnerabilities...
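The probe strategy described above could be sketched as a generator of request variants; the heuristic token-name hints and the attacker URL here are illustrative assumptions, not part of any real scanner:

```python
def csrf_probes(form_fields, site_origin):
    # Given the fields of a discovered form, yield (headers, fields) pairs
    # that replay the form in ways designed to expose CSRF defenses.
    token_hints = ("token", "csrf", "nonce")

    # Probe (a): legitimate field values, but a Referer pointing off-site,
    # to learn whether the application performs referrer checking.
    yield {"Referer": "http://attacker.example/"}, dict(form_fields)

    # Probe (b): same-site Referer, but any field that looks like a
    # security token sent back empty, to test token enforcement.
    blanked = {name: ("" if any(h in name.lower() for h in token_hints)
                      else value)
               for name, value in form_fields.items()}
    yield {"Referer": site_origin}, blanked
```

Whether the responses to these probes can be classified reliably (without false positives) is exactly the point of contention in the comments that follow.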
(And this kind of trick has bypassed token and CAPTCHA protections advertised by Mr. Shiflett and his "consortium" in the past.)
It seems like you're accusing Jeremiah of calling CSRF a new attack, but I don't see where he makes that claim. In fact, he references an article of mine that was published more than 3 years ago. Why would he do that if he wanted to present this as a new discovery? You should read more carefully before making ad hominem attacks. Raising awareness is a good thing.
More importantly, CSRF is a generic label. There have been a number of new exploits discovered over the years that still fit into the CSRF category. Jeremiah even mentions one example.
Checking Referer may seem like an effective safeguard, but it's not. Amit Klein has published some recent research on this topic that's worth reading.
Your scanning technique isn't very thorough, and it still has the potential to identify many false positives. How to effectively scan for CSRF is probably best discussed in a separate blog post. I wish it were as easy as you suggest.
Lastly, if you're aware of a "trick" that bypasses the use of a one-time token in forms, please disclose it. I'm quite familiar with your history of making hollow claims, so hopefully you can understand my skepticism.
Wow, the comments on this blog are becoming very similar to Slashdot. :)
Stefan Esser: As Chris confirmed, I make no claim that CSRF is new. I've also been in the industry long enough to have seen this attack before.
Your claims, which seem to lack a thorough understanding of the problem, leave me unconvinced that CSRF can be detected by automated means. With Jesse's input, I'm warming up to the idea that CSRF may be easier to solve than XSS. However, that part might not matter so much moving forward. We'll see.
And isn't that what it's all about: a meeting of the minds to improve security? There is no need to attack me or others; that approach helps nothing. But that might not matter to you, depending on your motives.
This is late to chime in on the topic, but CSRF prevention techniques, at least as they are now and for the foreseeable future, do not prevent CSRF. They are deterrents. But hear me out...
Putting a $300 Medeco lock on your front door does not make you more secure. You'll certainly feel more secure, and indeed they're probably the finest company making practically unpickable locks; I really recommend them. Unfortunately these locks have a fatal weakness: they can be unlocked from the inside. By my count, there are 11 other points of entry into my house, 5 of which aren't viewable from the street. Once someone gets inside and unlocks the deadbolt, an awesome Medeco lock has provided zero additional protection.
Likewise, one-time tokens and CAPTCHAs are like a front door lock on forms. They too have the fatal flaw that they can be unlocked from the inside, that is, read from the same domain, thanks to XMLHttpRequest. So the "trick" that bypasses these is to run a script under the same origin policy.
As Stefan Esser pointed out, "The amount of output of a web application is far bigger than the number of forms." Finding an XSS hole on a major website is often not very difficult; just pay a visit to RSnake's forums if you have any doubts: http://sla.ckers.org/forum/read.php?3,44 . SQL injection is also an alternative sometimes. Once a malicious script is running under the same domain, it can freely read the one-time tokens and export CAPTCHAs to be answered by other people or machines. You're kidding yourself if you believe it's any more than a deterrent.
That being said, I still lock my door when I leave the house for more than an hour or so. But I'm not foolish enough to believe that this makes me safe from burglars. So yes, it's no longer as easy as taking candy from a baby, but taking candy from a toddler isn't very tough either.
Never too late for a good comment. Maluc, you make a great point, and explained it much better than I have been able to. XSS and CSRF work hand in hand, rarely exist separate from each other, and serve different purposes. That's the key.
Good Stuff. Probably unlike you guys, I went to school to learn a bit more about information security. Out here in the real world, I've taken a position with a state agency in the US which has some amount of web application work being done. All this Web App Sec stuff is new to me, and I've found your webpages to be informative and helpful. I'm going to check out the links that were mentioned (ha.ckers, sla.ckers, shiflett, rsnake), but if you all have any other suggestions on good books or sites to get a better understanding of WWW-related security concerns, I'd appreciate it. I'll check this board from time to time for any responses.
-- chemicality (aim: distilledchem)
Thanks for the kind words. Yeah, in fact I went to school for electronics engineering, rather than CS or InfoSec. A while back I posted about "Keeping up on web application security"; it could be exactly what you're looking for. Good luck!
One Click Attacks, Session Riding... and just let me add yet another buzzword I have read about: "hostile linking".
IMHO, apart from manually recrafting code to filter potentially dangerous user input (thus primarily addressing the XSS problem) and investing in intelligent state/session handling, users' online habits must change. For example, it's never too late to actually LOG OFF by pressing the site's logout button rather than leaving the site loaded in the browser without actually surfing it for info. And of course there's the root of all evil, curiosity: clicking on a link just for the fun(!?) of it, just to see what lurks behind...
Good Job! :)