My Attribute-Based Cross-Site Scripting post stimulated an interesting exchange in the comments. You can read it for yourself, but one key point is that this particular issue is not new. In fact, it’s been around for years, and only now are we figuring out how to scan for it more effectively through trial and error on real live websites. A couple of people picked up on this, and one blogger asked a particularly relevant question:
“If current web application scanners can't find an issue which is around for 5 years now, aren't they f*** useless?”
The frustrated tone of the question is obvious. As someone who’s been on the customer side of VA solutions, I understand where they’re coming from. We all know web application scanners don’t (and never will) find every vulnerability, but it’s imperative to know what they do check for and how well. That’s the point being made: just because a tool says it “checks” for something doesn’t mean it’s any good at “finding” it. This is a key piece of information for completing an assessment efficiently without wasting time on overlapping work.
I’ve talked about this lack of knowledge in my web application scan-o-meter post and invited others to comment, including the scanner vendors. Their marketing teams are very good at generating a big list of “we check for this,” so I set the dials where I believed the state of the art to be. Nothing really came of it from their side. My conclusion is they simply don’t know. While they can test their products in the lab, they’re unable to measure real-world capabilities at any kind of scale, in the environments where their customers actually use them, the way WhiteHat does. Hence their technology improves painfully slowly, resulting in frustrated questions like the one above.
The bottom line is that automated scanning is important to the vulnerability assessment process. But it doesn’t help anyone when technology capabilities are withheld from customers. I’m hopeful the next set of web application VA solution reviews will shed light on this from an independent source.
1 comment:
To add to this -- I know of one piece of production software with *three* different ways of handling tags, all of them vulnerable to attribute-based XSS. SPI WebInspect, IBM AppScan, and Cenzic Hailstorm are all unable to identify these issues.
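For anyone who hasn't seen the class up close, here's a generic toy illustration in Python (a made-up example, not the production software above): the payload never uses a < or > character, which is exactly why tag-focused checks sail right past it.

    # Generic toy example of attribute-based XSS, not the production
    # software mentioned above. The payload contains no < or >, so a
    # filter that only strips tag characters lets it straight through.
    template = '<input type="text" value="{}">'
    payload = '" onmouseover="alert(document.cookie)'
    print(template.format(payload))
    # Renders as:
    #   <input type="text" value="" onmouseover="alert(document.cookie)">
    # The injected quote closes value="" and smuggles in an event-handler
    # attribute: script execution without a single <script> tag.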
I'm sure they could. Some of it is not that hard, if you think about the problem properly. But the point here is: they don't.
We do a pretty good job at WhiteHat of making sure our Sentinel platform can detect these kinds of issues, but the most accurate probes also generate a lot of noise (if, say, the software is substituting string-enquoting characters but not manipulating your match string).
It's taken a while to try different approaches and refine this to a high level of accuracy, and it actually required some conditional-logic changes to our engine to really nail this down tight.
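To make the noise problem concrete, here's a toy sketch (the URL, parameter, and marker are hypothetical stand-ins, and this is a deliberate simplification, not Sentinel's actual logic):

    # Toy sketch of the false-positive problem described above. TARGET,
    # PARAM, and MARKER are hypothetical, not real engine internals.
    import urllib.parse
    import urllib.request

    TARGET = "http://example.test/search"  # hypothetical endpoint
    PARAM = "q"                            # hypothetical reflected parameter
    MARKER = "zqx9"                        # unlikely-to-collide match string

    def fetch(payload):
        qs = urllib.parse.urlencode({PARAM: payload})
        with urllib.request.urlopen(TARGET + "?" + qs) as resp:
            return resp.read().decode("utf-8", errors="replace")

    def naive_check():
        # Noisy: fires even when the app rewrites our quote to &quot;
        # or ', because the match string itself still reflects untouched.
        return MARKER in fetch('"' + MARKER)

    def stricter_check():
        # Quieter: only fires when the double quote itself survives
        # directly before the marker, i.e. we actually broke out of
        # the attribute value.
        return ('"' + MARKER) in fetch('"' + MARKER)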
I'd be embarrassed if I had to defend the marketing claims about this stuff, because most vendors offer flat-out hype as opposed to substance.
I love it when people pop out their technobafflegab and buzzword-compliant speak, and you hear from all the scanner vendors about how their scanner "properly builds and interprets the DOM" and has "new AJAX scanning" and "Web 2.0" support and so on. Because while that is cool and all (and yes, we have a DOM-based parser too)...
You can beat almost any scanner at its own game with some basic regex, by writing smarter tests. :)
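Case in point (the marker and sample responses below are made-up stand-ins): one regex is enough to confirm a probe reflected inside a quoted attribute value, no DOM construction required.

    # Toy illustration: confirm the probe landed inside a double-quoted
    # attribute value using nothing but a regex. MARKER and the sample
    # bodies are hypothetical stand-ins.
    import re

    MARKER = "zqx9"

    def reflected_in_attribute(body):
        # Matches <tag ... attr="...MARKER -- i.e. the probe sits inside
        # a double-quoted attribute value of some tag.
        pattern = r'<\w+[^>]*=\s*"[^"]*' + re.escape(MARKER)
        return re.search(pattern, body) is not None

    print(reflected_in_attribute('<input type="text" value="zqx9">'))  # True
    print(reflected_in_attribute('<p>zqx9</p>'))                       # False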
Cheers
-ae