At WhiteHat we launch thousands (sometimes millions, and everything in between) of customized attacks on our customers’ websites in an effort to find any, and hopefully all, vulnerabilities before the bad guys exploit them. After performing vulnerability assessments (VA) on hundreds of websites each week, your team becomes extremely experienced and proficient at the process while uncovering bucketloads of issues. Experience, consistency, and efficiency are key in webappsec VA. The one thing that’s always on my mind, though, is the ever-present risk of missed vulnerabilities: so-called false negatives. What does anyone (enterprise or vendor) do about those?
Just like developers and bugs, assessors are human and make mistakes, so inevitably business logic flaws will get missed. Scanning technology is imperfect and will fail to find technical vulnerabilities (XSS, SQL Injection, etc.), or even certain links for that matter. This is an issue, but not the core problem. The real issue is there’s no way to know for sure how many vulnerabilities are actually in a real, live production website. Meaning, there’s no real way for any vulnerability assessment solution to measure itself against the true vulnerability total, because that total is unknown. (Please don’t tell me canned web applications, because that’s not the same thing.) We can only measure against the next best solution (bake-off) or the next best bad guy (incident). The results generated are simply not the same, or as GOOD, as measuring against the true vulnerability total, which would be ideal.
7 comments:
Yeah Jeremiah, you are completely correct in saying that you don't know how many vulns are in any given system. In Software Engineering (traditional systems), there are some metrics you can use for estimation (including fault injection and blind 2nd testing), but with a black-box system (which most of the time webappsec is) many of these techniques break down.
Anyway, in my experience with webappsec, there are some "pointers" - very unscientific pointers, but some help nonetheless :)
Generally I've found (and this is supported in "traditional" software engineering) that vulns congregate in both similar areas and similar parts of a system. So, for example, if you've found one XSS vuln, you are likely to find more. Also, if one particular part of a system has been found to have a number of vulns, it's "usual" for others to be in there as well (at least bounded by the functionality - i.e. no point in looking for file upload vulns if the app doesn't support uploads there).
So, that's my yard-stick on if I should continue hunting in a particular area, or on the system as a whole. As you say, I doubt that you could write an algorithm around this, but as a human you certainly get a "feel" for when enough is enough.
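That "feel" for when enough is enough resists a real algorithm, as MikeA says, but the rule of thumb could be caricatured as a stopping heuristic. The function name and the thresholds below are entirely made up for illustration:

```python
# Sketch of the informal "vulns cluster" heuristic: keep testing an area
# while the recent hit rate justifies the effort. All thresholds are
# hypothetical; a real assessor tunes this by experience, not constants.

def keep_hunting(finds_in_area, tests_in_area, min_tests=20, hit_rate=0.05):
    """Return True while an area of the app still deserves attention."""
    if tests_in_area < min_tests:
        return True  # haven't looked hard enough yet to give up
    # Vulns congregate: a decent hit rate so far suggests more remain.
    return finds_in_area / tests_in_area >= hit_rate

keep_hunting(3, 10)   # True: under the minimum effort, keep going
keep_hunting(0, 50)   # False: plenty of tests, nothing found
keep_hunting(4, 50)   # True: 8% hit rate, keep digging here
```

The point of the sketch is only that the decision depends on both effort spent and yield so far, which is exactly the judgment call described above.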
Hey Jeremiah, I totally agree with you on this too. Whenever performing a VA, I am always concerned about whether I can find vulnerabilities, and it seems like I don't know where to stop. When I find something, it makes me want to explore more, and there is no way of stopping. But I guess what's most important is to find the most important flaws in the system and present them to the customer. There's no point trying to find all the vulnerabilities when you don't know how many are present in the application.
http://hackathology.blogspot.com
1. Quote: "The real issue is there’s no way to know for sure how many vulnerabilities are actually in a real-live production website"
Jeremiah - isn't this true for ANY system, and any type of VA? When performing network-level VA, do you have an exact number (or an upper bound) of the number of vulnerabilities in that system?
I don't think this is only true for web application security.
2. Since VA is just one of the building blocks of a good security solution, you shouldn't feel frustrated when you don't (or can't) find all the issues. Like I have said before, good security is achieved by adding layers of protection, and VA is just one layer. If you educate developers, create more secure development frameworks, and put a well-configured web application firewall in front of the app, you'll end up with a much more secure solution.
So - while the theoretical question of security coverage is very interesting, you have to always stop and remind yourself why you are doing the VA in the first place...
Related to what Ory said, there is a slight difference between network VA and webapp VA. At least in a network/service VA scan we tend to map versions and such, so that if we don't detect a vulnerability now, we can store some info that lets us detect it later.
In the webapp world we don't have any such luxury.
One way I'm planning on addressing this problem is measuring how well my scanner products detect manually detected vulnerabilities I know about on my site. In those cases I can at least judge the quality of the scanner, and if it isn't properly detecting a known vulnerability, inform the scanner vendor so they can add logic to detect it.
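That measurement is easy to make concrete: keep a ledger of manually confirmed vulnerabilities and check which of them each scanner rediscovers. A minimal sketch, with hypothetical pages and vuln types standing in for real findings:

```python
# Sketch: judge a scanner by how many manually confirmed vulnerabilities
# it rediscovers on the same site. All data below is hypothetical.

def detection_rate(known_vulns, scanner_findings):
    """Fraction of known (manually found) vulns the scanner also reported."""
    if not known_vulns:
        return 0.0
    return len(known_vulns & scanner_findings) / len(known_vulns)

# Manually confirmed issues, keyed by (page, vuln type).
known = {("/search", "xss"), ("/login", "sqli"), ("/profile", "xss")}
# What the scanner reported against the same site.
reported = {("/search", "xss"), ("/contact", "xss")}

rate = detection_rate(known, reported)  # 1 of 3 known vulns rediscovered
missed = known - reported               # candidates to report to the vendor
```

The `missed` set is the actionable part: each entry is a known vulnerability the scanner failed to flag, which is exactly what you would send back to the scanner vendor.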
@MikeA: I've noticed the exact same thing from my experience as well. Good to know I'm not the only one that sometimes does VA by "feel". WebApp VA ESP. :)
@hackathology: Precisely. At some point you figure you've exhausted all avenues of attack and any more time spent is simply not worth the effort. Still a judgment call.
@Ory: Hey Ory, technically you're right when you consider all forms of software: the upper bound is an unknown. Though, as Security Retentive put it well, when you look at the network VA layer, at least there's a universe of well-known security vulnerabilities. That's the expectation people have of those solutions. To oversimplify, it's there or it's not; no one is expecting them to find a 0-day. The same cannot be said of custom WebApp VA. One problem is that I don't think that's widely understood.
"you shouldn't feel frustrated when you don't (or can't) find all the issues."
You're probably right. Personality-wise, it's just that I have a love-hate relationship with unsolvable problems. :)
@Security Retentive: Again, very well said. In your last paragraph, there is simply no substitute for that type of measurement and improvement.
One more thought on this thread.
One way I'm going to be doing metrics is code/path/feature coverage where I'm using a known-good (or presumed good) framework for certain things like input filtering, output escaping, string handling, etc.
Once I'm doing those things for a certain percentage of my website pages, then I can start focusing less on things like XSS and more on logic flaws, etc.
If I do come across a new type of XSS injection, etc., then I can fix it in one place and know I've covered x% of my code/site by doing it.
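The coverage metric described above boils down to a simple ratio: what fraction of the site's pages route their input filtering and output escaping through the vetted framework. A rough sketch, with all page names invented for illustration:

```python
# Sketch: track what fraction of site pages rely on a known-good
# (or presumed-good) framework for filtering/escaping. Hypothetical data.

def framework_coverage(all_pages, covered_pages):
    """Percent of pages whose input/output handling uses the vetted framework."""
    return 100.0 * len(covered_pages & all_pages) / len(all_pages)

pages = {"/", "/search", "/login", "/profile", "/contact"}
uses_framework = {"/", "/search", "/login", "/profile"}

pct = framework_coverage(pages, uses_framework)  # 80.0
```

Once that percentage is high enough, the remaining uncovered pages become the short list where XSS-style testing still pays off, freeing attention for logic flaws elsewhere, which is the shift described above.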
I'll always want to do automated scanning for certain types of defects from the website perspective, but obviously the goal is to prevent these things from going out the door in the first place.
Website vuln scanners are more part of the audit/assurance process than they are part of the secure development process in this regard.
Security Retentive, that was well said.
http://hackathology.blogspot.com