Tuesday, February 09, 2010

Where's WhiteHat? Re: Scanner Comparisons

Last week Larry Suto published “Analyzing the Accuracy and Time Costs of Web Application Security Scanners,” which reviewed black box website vulnerability scanners: Acunetix, IBM AppScan, BurpSuitePro, Cenzic Hailstorm, HP WebInspect, NTOSpider, and Qualys WAS (a Software-as-a-Service offering). This research was meant to build upon Larry’s initial October 2007 study of the market. Several people have asked what I thought of the paper and why WhiteHat Security wasn’t covered. I’m happy to discuss both questions.

First off, the website vulnerability management space is in desperate need of side-by-side comparisons, as they are few and far between. This represents a challenge for organizations building application security programs that want to create a product short-list to evaluate internally. For a variety of reasons, independent and publicly available detailed technical reviews of website vulnerability management products/solutions are unlikely to come from the expected sources, such as trade magazines and industry analysts. One of the main inhibitors is that many of these firms do not have a Web application testbed that would allow for an accurate, fair comparison. Thankfully, researchers such as Larry Suto help fill the void by investing personal time and sharing their results. Other independents should be encouraged to do the same; I would recommend the “Web Application Security Scanner Evaluation Criteria (WASSEC)” as a starting point if you wish to do so. Also, Larry confirmed he was not compensated by any vendor for his work.

During the latter stages of Larry’s research, WhiteHat Security was offered the opportunity to be included. In order to take part, we were asked to assess six vendor-hosted “test websites” with WhiteHat Sentinel, identify vulnerabilities, and report our findings. The testing process was designed to evaluate desktop black box scanner technology using a “Point-and-Shoot” and “Trained” methodology -- something not well-suited to a SaaS offering like Sentinel. What we do and how we do it is very different from the scanning tools, yet we are still often seen as an alternative. After much consideration, we did not feel that the testing methodology would provide an apples-to-apples evaluation environment for Sentinel. Additionally, and equally important, finding vulnerabilities is only the first piece of the overall value our customers receive, so we politely declined to participate.

*Not to mention that, as a strict rule, we never touch any website without express written authorization.*

The report states, consistent with my expectations, that significant amounts of human/expert time are required for scanner configuration, vulnerability verification, scan monitoring, etc. to enable the aforementioned products to function proficiently. Behind the scenes, Sentinel is no different, except that we perform all of those activities and more for our customers as a standard part of the annual subscription. Doing so means our engineering time has a hard cost to us and staff resources are dedicated to serving customers.

The report confirms that scanner performance varies wildly from site to site, so it’s best to test where they will be deployed. We agree, and provide mechanisms for our customers to evaluate Sentinel first-hand for exactly that reason. We want customers to know exactly what they can expect from us on their production systems, not on a generic test website. Finally, where Sentinel really excels is in areas where the report did not (could not) focus. Scalability is one of these. Whether we are tasked with one site, 10, 100, 1,000 or more -- Sentinel is capable of handling the volume.

Over the years I’ve written extensively about black box scanner efficiencies and deficiencies: Automated Scanners vs. Low-Hanging Fruit, Automated Scanner vs. The OWASP Top Ten, posts shedding light on duplication rates, what scanners are and are not good at finding (complete with a scan-o-meter), business logic flaws, why crawling matters a lot, how much essential human time is required, and even what it takes to make the best scanner. So the scanners’ overall poor showing came as no surprise. Most missed nearly half of the vulnerabilities -- on test websites, no less, where the vulnerabilities are designed to be easily found. Imagine how they would perform in the real world! It gets much uglier and more dangerous out there, I assure you.

I’d highly recommend everyone interested in Web application security read Larry’s report for themselves and get familiar with the current state of the art in black box scanning products. Much improvement could be made through additional R&D. Remember, keep an open mind, but at the same time take the results with a grain of salt until you test on your own systems. Some vendors will say Larry’s work wasn’t perfectly scientific, that it contained data errors, that the results were misinterpreted, that the tools were misconfigured, that they couldn’t reproduce the results, etc. All of that may be true, but so is the fact that Larry appears to have conducted a deeper and fairer analysis than the average would-be user does during typical evaluations.

Here are the three takeaways:
  1. Scanner results will vary wildly from website to website. For best results, I highly recommend that you evaluate them on the sites where they are going to be deployed. Better still, if you know of vulnerabilities in those sites ahead of time, use that information for comparison purposes (see the sketch after this list).
  2. The significant human/expert time required for proper scanner set-up, ongoing configuration, vulnerability verification, etc. must be taken into consideration. Which vulnerabilities you scan for is just as important as how you scan for them. Estimate about 20-40 man-hours per website scanned.
  3. Scanner vendors should take into consideration that Larry Suto is certainly more sophisticated than the average user. So if he couldn’t figure out how to run your tool “properly,” take that as constructive feedback.
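
To make that baseline comparison concrete, below is a minimal sketch in Python. It assumes the known vulnerabilities and the scanner's findings have each been exported to a CSV file with url, parameter, and vuln_class columns; the file names and column names here are hypothetical, and real scanner export formats will differ.

    # Minimal sketch: score a scanner's findings against a baseline of
    # known vulnerabilities. File names and CSV columns are hypothetical.
    import csv

    def load_findings(path):
        """Load a set of (url, parameter, vuln_class) triples from a CSV file."""
        with open(path, newline="") as f:
            return {(row["url"], row["parameter"], row["vuln_class"])
                    for row in csv.DictReader(f)}

    baseline = load_findings("known_vulns.csv")     # vulns you already know exist
    reported = load_findings("scanner_output.csv")  # what the scanner reported

    detected = baseline & reported   # known vulns the scanner found
    missed = baseline - reported     # known vulns the scanner missed
    extra = reported - baseline      # new findings or false positives

    rate = 100.0 * len(detected) / max(len(baseline), 1)
    print(f"Detection rate: {len(detected)}/{len(baseline)} ({rate:.0f}%)")
    for vuln in sorted(missed):
        print("MISSED:", vuln)
    print(f"Unmatched findings to verify by hand: {len(extra)}")

Note that exact tuple matching is deliberately strict; in practice you will likely need to normalize URLs and vulnerability class names before comparing, since no two scanners report them identically.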

14 comments:

Anonymous said...

I think you guys passed on an excellent opportunity to put your money where your mouth is. The Sentinel service you offer is very expensive, and I for one have used it along with most of the tools on the list with only marginally better results. You did identify some stuff others missed, but the cost of the service does not offset the results.

Anonymous said...

I second what anonymous above just said. As someone who has previously used Sentinel as well as three of the tools in the report, the results were not much different, and in some cases Sentinel missed fairly major (and simple) vulns the tools detected.

Jeremiah Grossman said...

@Anonymous1, first, thank you for being a customer -- each is valued highly. You may be right that it was a missed opportunity, but here is where I'm at...

Many fall into the vulnerabilities-per-dollar trap when comparing black box scanners (or any assessment for that matter). If a scanner finds nothing or very little, perhaps because there is not much to be found, often the results are perceived to have little value. This is certainly not the case and I've blogged on the subject before.

Instead, black box scans/assessments should measure the hackability of a website given an attacker with a certain amount of resources, skill, and scope. This is what we do: simulate an attacker, complete an in-depth assessment, continuously (unlimited), at large scale, and without our customers needing to hire additional staff. This is how a similar solution should be valued. So I’d counter that Sentinel rates cheaper than running scans in-house once you include all the technology and human resource costs required for such a program.

So in this case, even if we had found more vulnerabilities, it would have demonstrated very little of Sentinel’s core value proposition. Besides, we are not a desktop scanner and don't want to be seen as anything similar -- yet another reason why we declined.

@Anonymous2, your feedback is well taken. Missing vulnerabilities happens to everyone, no matter what technology or methodology is used. That will never change.

In our case you’ll find our engineering team more than happy to receive reports when issues are reportedly missed. Root-cause analysis will be performed and we’ll improve the technology. We regularly analyze reports from the aforementioned scanners, so we know very well how good they are relative to us.

Anonymous said...

@Jer - well said. I think it's difficult to compare the two models (SaaS and standalone) in the same light; they have very different value propositions. Likewise, I could see there being a whole other category that Larry didn't cover at all, which is scale. At the high end of the market, a minor configuration hassle can suddenly explode into a nightmare when you have to do it over and over 1,000 times. There's a big hidden cost there too, which I don't think most people realize.

Anyway, I respect your decision, and hope you can figure out a way to work your scanner into future tests.

-RSnake

blak3x said...

I think, as Jeremiah mentioned, although such comparisons can be helpful and shed some light on what to buy, the best solution is to test the scanner or SaaS against the site it will actually be testing. As we've seen, each solution will behave differently on each target. Therefore, in a way, it is kind of "useless" trying to blame one or the other, or to argue about which is the best.

Good post Jer

lotusebhat said...

Much food for thought.

kuza55 said...

I am not a fan of blackbox testing products, or security vendors of any kind, but I (and people I have worked with) have used Acunetix, and no-one has ever had problems figuring out how to enter credentials. Seriously, you can even find script kiddies who can manage it.

He claims that if a scanner popped up a window asking for login details, he would provide them; this seems a really odd feature to rely on. What if a scanner popped up questions related to other things -- would he answer them?

Either this is a ploy to favor scanners with that (useless, I might add) feature over others (probably one of several such decisions that end up favoring his preferred scanner over all the others), or he has no idea what the fuck he is doing.

AppSec said...

Jeremiah:
I think SaaS in these tests is a catch-22.

But by not getting into it, WhiteHat is left vulnerable to missing out on some prospective clients (I'm not saying this as a positive or a negative). When I started in this industry, the first thing I looked for was something like this (granted, it didn't exist). It is possible that I would have tried to find out more information, but I don't know.

The flip side: seeing that these were all publicly available test sites, I would have questioned whether you'd already analyzed them before the review was even conducted and spent more time than disclosed!

The latter, of course, is one of the joys of being in the security industry :-).

Jeremiah Grossman said...

@AppSec... yah, pretty much. Actually, I hadn't even considered your last point. People might have claimed that if we did well, our process was somehow unfair to the rest.

Anonymous said...

I'm a bit disappointed - a great opportunity to prove WhiteHat's (claimed) superiority, but you decided to avoid an honest comparison made by an independent analyst.

Considering all the smack talk about HackerSafe some years ago, the least you could do was to participate in this scanner review.

I'm not defending HackerSafe (McAfee deserves to die in fire), hell no, but I'm asking you to put your money where your mouth is.

Let's face it - all the eye-candy reporting, compliance graphs, mitigation help and executive summaries are absolutely worthless if a product/service can't find the vulns.

Jeremiah Grossman said...

@anonymous, tell ya what: next time there is a "review" that includes McAfee Secure, Qualys WAS, Cenzic Click2Secure, HP AMP, etc., we'll take part. Until then, we are not a scanner and do not want to be grouped with them in the market.

Drazen Drazic said...

Wondering what people were expecting to see from the results? Seriously. Taken for what they are, with known limitations, were there any surprises? A pointless exercise to a degree? Maybe, but at least it highlights those "limitations" to the market. But does anyone outside our industry really read this stuff anyway?

DD

Jeremiah Grossman said...

@Drazen I think there is a huge disparity in the general market about what these tools are really capable of. So, if nothing else, it provides another recent reference for would-be users. And yes, a lot of people have read it and will continue to do so.

Unknown said...

Hi, thanks for the info, but in fact I was surprised to see that the www.gamasec.com web application scanner SaaS was not on the list of web scanners that you compared. After trying different other online website scanners, we chose the www.gamasec.com scanner for our website, and we were pleased with the results and the recommendation reporting.

Have a look; it is an interesting SaaS web scanner.