Most of us understand and accept that Web application vulnerability scanning tools (black and white box analysis) don’t find everything, but that’s OK since they add value to SDLC processes regardless. Consistency and efficiency are good wherever we can get them. The problem is that heated (aggressive/defensive) ideological debates often break out whenever people who don’t get that come into contact with those discussing scanner capabilities. Sometimes, though, we manage to get past all that and have open, collaborative conversations that isolate specific technical limitations, theorize ways to overcome obstacles or improve processes to compensate, and generally move the state of the art forward. This, after all, is what security is all about: process, not product. That’s where Rafal Los’s two-part posts come in.
Static Code Analysis Failures
Hybrid Analysis - The Answer to Static Code Analysis Shortcomings
Don’t let the titles fool you into thinking these posts are anti-static-analysis. Rafal points out certain scanner shortcomings as a premise for putting forth ideas on how to improve the technology by combining capabilities. Of course we’re all free to agree or disagree; that’s kind of the point. Hopefully he’ll add a third installment that digs deeper into how Hybrid Analysis might function. Seems like an interesting line of research.
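For what it’s worth, here’s a rough sketch of how I imagine a correlation step in a hybrid tool might work. This is purely my own illustration (the class names and fields are made up, not Rafal’s design or any vendor’s API): take each static finding (sink, tainted parameter, vulnerability class) and only call it “confirmed” when a dynamic scan reproduced an issue of the same class on the same parameter.

# Hypothetical sketch of hybrid (static + dynamic) finding correlation.
# All class and field names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class StaticFinding:
    file: str          # source file where the tainted sink lives
    sink: str          # e.g. "SQL query", "HTML output"
    parameter: str     # tainted input parameter traced by data flow
    vuln_class: str    # e.g. "sqli", "xss"

@dataclass
class DynamicFinding:
    url: str           # page the black-box scanner attacked
    parameter: str     # HTTP parameter that triggered the issue
    vuln_class: str    # e.g. "sqli", "xss"

def correlate(static_findings, dynamic_findings):
    """Split static findings into dynamically confirmed vs. unconfirmed."""
    confirmed, unconfirmed = [], []
    for s in static_findings:
        match = any(
            d.vuln_class == s.vuln_class and d.parameter == s.parameter
            for d in dynamic_findings
        )
        (confirmed if match else unconfirmed).append(s)
    return confirmed, unconfirmed

# Toy usage example
static = [StaticFinding("login.php", "SQL query", "username", "sqli")]
dynamic = [DynamicFinding("/login.php", "username", "sqli")]
print(correlate(static, dynamic))

The interesting design question is what to do with the unconfirmed bucket: treat it as lower-priority static noise, or feed it back to the dynamic scanner as a hint about where to attack harder.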
5 comments:
Jeremiah - I first want to say thanks for the nod, and second - yes, absolutely, I'm working on a third article that details some of the inner workings of the "hybrid analysis" concept and how it's absolutely real in a technology we're developing and researching.
Thanks again, great summary.
Several months ago I read a book by Gary McGraw of Cigital called "Software Security: Building Security In" (no underline tags allowed during posting). I have found the following two quotes to be both very insightful and relevant to this topic:
"Doing code review alone is an extremely useful activity, but given that this kind of review can only identify bugs, the best a code review can uncover is around 50% of the security problems. Architectural problems are very difficult (and mostly impossible) to find by staring at code."
"Put in more basic terms, application security testing tools are 'badness-ometers'. They provide a reading in range from 'deep trouble' to 'who knows', but they do not provide a reading into the 'security' range at all. Most vulnerabilities that exist in the architecture and the code are beyond the reach of simple canned tests, so passing all the tests is not that reassuring."
Hey Andrew, I can agree with most of that. It's just that "scanning tools" = badness-o-meters commonly get confused with modern VA (scanning + human), which I believe can measure more things. Either way, passing simple canned tests on a custom website is definitely not that reassuring. I wonder what Gary would say to passing custom tests on a custom website. More badness? Who knows.
Well, AFAIK there is a good point to hybrid tools, such as reducing false positives. But for me there is a huge limitation in such an approach.
Just because you can find a weakness through a sink-to-source flow doesn't mean you will be able to exploit it correctly with a black-box-based tool.
So, I guess in that sense, the tool adds some false positives too (since it wouldn't be able to exploit the vulnerability).
I think that problem is really tied to the limitations of black-box testing itself (and especially of web app scanners, which are not that original with their attack vectors).
Also, the hybrid approach cannot be applied to lots of weakness classes... and I guess this is a concern now, and why such tools are not that developed (even though, I would say, it is technically not that difficult for people in the tool industry).
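To illustrate the exploitability point above with a hypothetical example (the function, names, and scenario are made up, not taken from the posts or any tool): static analysis can legitimately trace user input into a SQL sink, yet an unauthenticated black-box scanner may never reach that code path, so the dynamic half of a hybrid tool cannot confirm the finding even though it is real.

# Hypothetical example: a real injection that a static sink-to-source trace
# flags, but that sits behind an admin check an anonymous black-box scan
# never passes, so dynamic confirmation fails.

def export_report(params: dict, user: dict) -> str:
    table = params.get("table", "")            # source: attacker-controlled input
    if not user.get("is_admin", False):        # gate the scanner cannot satisfy
        return "403 forbidden"
    # Sink: string-concatenated SQL (statically detectable injection)
    return "SELECT * FROM " + table + " LIMIT 10"

# An anonymous scan never reaches the sink:
print(export_report({"table": "users; DROP TABLE users"}, {"is_admin": False}))
# An admin session would, which is why the static finding is still valid:
print(export_report({"table": "users; DROP TABLE users"}, {"is_admin": True}))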
@andrew: security problems ARE bugs.