Monday, February 21, 2011

Top Ten Web Hacking Techniques of 2011

Update 02.14.2012: Open voting for the final 15 is now underway. Vote Now!


This post will serve to collect new attack techniques as they are published. If you think something should be added, please comment below and I'll add it to the list.

"Every year the Web security community produces a stunning amount of new hacking techniques published in various white papers, blog posts, magazine articles, mailing list emails, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and so on. Beyond individual vulnerability instances with CVE numbers or system compromises, we're talking about actual new and creative methods of Web-based attack. The Top Ten Web Hacking Techniques list encourages information sharing, provides a centralized knowledge-base, and recognizes researchers who contribute excellent work."

Current 2011 List
  1. Bypassing Flash’s local-with-filesystem Sandbox
  2. Abusing HTTP Status Codes to Expose Private Information
  3. SpyTunes: Find out what iTunes music someone else has
  4. CSRF: Flash + 307 redirect = Game Over
  5. Close encounters of the third kind (client-side JavaScript vulnerabilities)
  6. Tracking users that block cookies with a HTTP redirect
  7. The Failure of Noise-Based Non-Continuous Audio Captchas
  8. Kindle Touch (5.0) Jailbreak/Root and SSH
  9. NULLs in entities in Firefox
  10. Timing Attacks on CSS Shaders
  11. CSRF with JSON – leveraging XHR and CORS
  12. Double eval() for DOM based XSS
  13. Hidden XSS Attacking the Desktop & Mobile Platforms
  14. Rapid history extraction through non-destructive cache timing (v8)
  15. Lotus Notes Formula Injection
  16. Stripping Referrer for fun and profit
  17. How to upload arbitrary file contents cross-domain (2)
  18. Exploiting the unexploitable XSS with clickjacking
  19. How to get SQL query contents from SQL injection flaw
  20. XSS-Track as a HTML5 WebSockets traffic sniffer
  21. Cross domain content extraction with fake captcha
  22. Autocomplete..again?!
  23. JSON-based XSS exploitation
  24. DNS poisoning via Port Exhaustion
  25. Java Applet Same-Origin Policy Bypass via HTTP Redirect
  26. HOW TO: Spy on the Webcams of Your Website Visitors
  27. Launch any file path from web page
  28. Crowd-sourcing mischief on Google Maps leads customers astray
  29. BEAST
  30. Bypassing Chrome’s Anti-XSS filter
  31. XSS in Skype for iOS
  32. Cookiejacking
  33. Stealth Cookie Stealing (new XSS technique)
  34. SurveyMonkey: IP Spoofing
  35. Using Cross-domain images in WebGL and Chrome 13
  36. Filejacking: How to make a file server from your browser (with HTML5 of course)
  37. Exploitation of “Self-Only” Cross-Site Scripting in Google Code
  38. Expression Language Injection
  39. (DOMinator) Finding DOMXSS with dynamic taint propagation
  40. Facebook: Memorializing a User
  41. How To Own Every User On A Social Networking Site
  42. Text-based CAPTCHA Strengths and Weaknesses
  43. Session Puzzling (aka Session Variable Overloading) Video 1, 2, 3, 4
  44. Temporal Session Race Conditions Video 2
  45. Google Chrome/ChromeOS sandbox side step via owning extensions
  46. Excel formula injection in Google Docs
  47. Drag and Drop XSS in Firefox by HTML5 (Cross Domain in frames)
  48. CAPTCHA Hax With TesserCap
  49. Multiple vulnerabilities in Apache Struts2 and property oriented programming with Java
  50. Abusing Flash-Proxies for client-side cross-domain HTTP requests [slides]

Previous Winners

2010 - 'Padding Oracle' Crypto Attack
2009 - Creating a rogue CA certificate
2008 - GIFAR
2007 - XSS Vulnerabilities in Common Shockwave Flash Files
2006 - Web Browser Intranet Hacking / Port Scanning

Thursday, February 03, 2011

BINGO! for Application Security

In case you need something fun to do during an RSA 2011 or OWASP Summit 2011 presentation.



Wednesday, February 02, 2011

Web Browsers and Opt-In Security

The last decade has taught us much about computer and information security. We’ve learned the importance of Secure-By-Default because people rarely harden their “security” settings as standard practice. We’re also painfully aware that security is often a trade-off between functionality and usability, which requires that a balance be struck. Ideally that balance is set between the level of security a product claims to provide and the customer’s expectations. Operating systems and Web servers have taken a strong stance in support of Secure-By-Default. Web browsers, well, I think there is much room for improvement.

Let’s look at recent outcomes, shall we? According to CA Technologies, "Browser-based exploits accounted for 84% of the total actively exploited known vulnerabilities in the wild." Other industry reports support these findings, including: "Of the top-attacked vulnerabilities that Symantec observed in 2009, four of the top five being exploited were client-side vulnerabilities that were frequently targeted by Web-based attacks." 2010 wasn’t much different. This is typically the result of a combination of imperfect software and browser & plug-in patches not being kept up to date.

Even in this context, the browser vendors (Google, Microsoft, and Mozilla) should still be given a lot of credit for having vastly improved the overall security of their software over the last two or so years. They have adopted better development practices, publish regular and timely patches, include easy scheduled update mechanisms, and have added anti-malware/phishing features, sandboxes, and bounty programs. Collectively speaking anyway. All are great benefits that users receive automatically and/or enabled by default. That is, Secure-By-Default. But that's where it ends. Memory-handling issues aside (where these protections mainly focus), there are still many extremely devastating attack classes where users have practically zero ability to defend themselves.

I'm talking about Intranet Hacking, DNS Rebinding, Clickjacking (UI Redressing), Cross-Site Scripting, Cross-Site Request Forgery, CSS History Leaks, and Wi-Fi Man-in-the-Middle. I see these as being the most pressing. They break the back of the Same-Origin-Policy, the very foundation of browser security, and there’s evidence that most of these have been used maliciously in the wild. A malicious website can easily detect what websites a visitor is logged in to, see what sites they’ve recently visited, take over their online bank/email/social network/etc. accounts, or hack into their DSL router or corporate intranet. Or maybe the attacker wants to get the victim in legal trouble by forcing them to attack other systems, post spam, download illegal content, and so on.

Sure, an individual user can defend themselves with add-ons like NoScript, Adblock Plus, LastPass, Better Privacy and so on, of which I’m a fan and user. To reiterate though, this is in no way a demonstration of Secure-By-Default! Users have to first be aware, then download, install, and finally configure each add-on. The reality is most users don’t know these attacks are possible, let alone how easy they are to perform. Only the readers of this blog and the browser vendors themselves do. So from a 10,000ft view of Web security, if a protection feature is not enabled by default, then it doesn’t matter. Case in point...

To combat these issues, keep the security-minded elite mildly happy, and show that "something" is being done, there’s a mile-long list of well-intentioned security features that extremely few people outside our tiny Web security sphere have heard of, let alone implemented: HTTP Strict Transport Security, the SECURE cookie flag, httpOnly cookies, the X-FRAME-OPTIONS header, the Origin header, the Do-Not-Track header, disabling form AutoComplete, iframe security restrictions, Content Security Policy, privacy modes, hidden configuration settings, deleting browser data, cookie controls, LSO controls, etc. All of these are opt-in, invisible or buried several mouse-clicks deep in the GUI, and likely implemented differently. No wonder "The Need for Coherent Web Security Policy Framework(s)" was published.
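
To make "opt-in" concrete, here is an illustrative (and by no means exhaustive) set of response headers a site has to remember to emit explicitly before the browser will apply any of these protections. The header names are real, the values are made up for the example:

Strict-Transport-Security: max-age=31536000
X-Frame-Options: SAMEORIGIN
Set-Cookie: SID=abc123; Secure; HttpOnly

Leave any one of them out and the browser silently falls back to its permissive default.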

There are lots of competing arguments about why these things haven't been or shouldn't be formally adopted. My intention here is not to rehash those, but instead remind us all about the bigger picture. I mean, it is simply amazing how much we are able to do online with just a browser. We can shop, bank, pay bills, file taxes, share photos, keep in touch with friends and family, watch movies, play games, and so much more. Browsers are the most important connection we have to the Internet. And the “we” is a stunning two billion people strong. Clearly browsers play a vital role in online security. Everyone needs a Web browser that is not only fast and stable, but secure as well. Only it is difficult to say that they are (or have been)... secure. That needs to change, somehow, someway, and preferably soon.

Remote participation for the 2011 OWASP Summit

The OWASP 2011 Summit looks like it is shaping up to be quite an event! From across the globe, the top Web application security minds, practitioners, vendors, and influencers are showing up to help shape things to come. Check out the working sessions. As mentioned in an earlier post, I'm unable to attend due to a scheduling conflict. However, our own Arian Evans (VP, Operations) will be carrying the WhiteHat Security flag.

Fortunately for the rest of us, it looks like they are organizing a professional video/audio feed for remote participation. Dinis Cruz is asking those interested to fill out a form to help accommodate the broadcast scheduling. I did just that.

Tuesday, February 01, 2011

Do-Not-Track (How about piggybacking on the User-Agent?)

I think I’ve read just about every white paper, article, blog post, and tweet about Do-Not-Track (DNT), including the FTC’s recent 121-page preliminary staff report that thrust the concept into public consciousness. For those unfamiliar with what DNT is exactly, not to worry, it is really very simple.

The idea behind DNT is to provide online consumers, those sitting behind a Web browser, an easy way to indicate to third parties that they do not want to be "tracked" -- that they opt out. DNT would hopefully replace today's system of having to register with dozens of different provider websites to obtain “opt-out” cookies.

As the FTC pointed out, the opt-out cookie approach proved unscalable and could never have lived up to the spirit of its intent: consumer privacy. Adding insult to injury, anyone seeking to improve their privacy by deleting all their cookies would simultaneously delete their opt-out cookies too. They’d have to perform the opt-out registration all over again. No wonder the advertisers and tracking companies support this model.

The FTC report gave no real technical guidance on how DNT should be implemented. Not that they should have. What you must first understand about DNT is that in all models, there is NO real technical privacy enforcement. Basically the consumer is asking (via something buried invisibly in the HTTP protocol) anyone who is listening, “please do not track me.” It is then on the honor of tracking companies across the Internet to support the DNT system and comply with the request, even though they have no legal obligation to do so. Which is not to say DNT is without value. It would be helpful to have a legal remedy available when all technical self-protection mechanisms eventually break down.

Since DNT started making headlines, Google, Microsoft, Mozilla, and various browser plug-in developers have been experimenting with different approaches to DNT in their respective Web browsers. The one that seems to be getting the most traction at the moment is adding a special 'DNT' header to each HTTP request. For example:

"DNT: 1" - The user opts out of third-party tracking.

"DNT: 0" - The user consents to third-party tracking.

[No Header] - The user has not expressed a preference about third-party tracking.
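
As a rough, hypothetical sketch of the receiving end (my own illustration, not part of any proposal): a server-side application would read the header like any other. In a Python WSGI app it might look like this:

# Request headers show up in the WSGI environ with an HTTP_ prefix,
# so the proposed DNT header would arrive as HTTP_DNT.
def application(environ, start_response):
    dnt = environ.get("HTTP_DNT")    # "1" = opt out, "0" = consent, None = no preference
    honor_request = (dnt == "1")     # honoring it is entirely voluntary -- no technical enforcement
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"tracking disabled\n" if honor_request else b"tracking enabled\n"]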


At first glance this does appear to be the logical and superior model over all others I’ve seen so far. Then I got to talking with Robert “RSnake” Hansen about this and we came to a slightly different conclusion as to where DNT would best go. First, remember that there are a lot of great big powerful corporate interests that really don’t like DNT and what it represents. If effective and widely adopted, business models at odds with consumer privacy choice would be seriously threatened. Opponents of DNT will seek to confuse, sabotage, derail, downplay, and stall progress at every opportunity. The final accepted protocol must be resilient to a large portion of the Internet being hostile to its very existence.

DNT data must be able to traverse the Internet to its destination unaltered and be logged on the other end (the Web server) for auditing / statistical purposes. If DNT ends up being a new HTTP request header, such headers, like most others, are rarely logged and never by default. It would be far too easy for a tracking company to ignore DNT headers and claim they never received them. Proving otherwise would be difficult for a plaintiff.
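
To see why, consider what explicitly logging such a header takes. A rough sketch of an Apache configuration (my example, not anything shipped by default):

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{DNT}i\"" dnt_combined
CustomLog logs/access_log dnt_combined

The stock combined format only captures the Referer and User-Agent request headers; any other header, DNT included, is simply dropped unless someone deliberately adds it.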

An alternative is piggybacking DNT onto an already well-established header: one that no one in the connection stream would typically think of touching and that is already widely logged -- by default. The User-Agent header would make an ideal candidate. Imagine something like this with the DNT preference tacked onto the end:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 DNT: 1

Simple. Easy. Logged.
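
To illustrate the "Logged." part (a hypothetical sketch, using the piggybacking convention above): because the default combined access log already records the User-Agent verbatim, the DNT flag could be pulled straight out of logs servers are keeping today:

import re

# One line of an ordinary combined-format access log; the last quoted
# field is the User-Agent, with the hypothetical DNT flag tacked on.
line = '1.2.3.4 - - [01/Feb/2011:10:00:00 -0800] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; en-US; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 DNT: 1"'

user_agent = line.rsplit('"', 2)[-2]                 # grab the final quoted field
match = re.search(r'DNT:\s*([01])\s*$', user_agent)
dnt = match.group(1) if match else None              # "1", "0", or None if absent
print(dnt)                                           # prints: 1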

Now if we can just encourage the browser vendors to enable it by default. :)