At WhiteHat we launch thousands (sometimes millions, and everything in between) of customized attacks on our customers’ websites in an effort to find any, and hopefully all, vulnerabilities before the bad guys exploit them. After performing vulnerability assessments (VA) on hundreds of websites each week, your team becomes extremely experienced and proficient at the process while uncovering bucketloads of issues. Experience, consistency, and efficiency are key in webappsec VA. The one thing that’s always on my mind, though, is the ever-present risk of missed vulnerabilities: so-called false negatives. What does anyone (enterprise or vendor) do about those?
Just like developers and their bugs, assessors are human, make mistakes, and will inevitably miss some business logic flaws. Scanning technology is imperfect and will fail to find certain technical vulnerabilities (XSS, SQL Injection, etc.), or even certain links for that matter. This is an issue, but not the core problem. The real issue is that there’s no way to know for sure how many vulnerabilities actually exist in a real, live production website. Meaning, there’s no way for any vulnerability assessment solution to proactively measure itself against the true vulnerability total, because that total is unknown. (And please don’t tell me canned web applications, because that’s not the same thing.) We can only measure against the next best solution (a bake-off) or the next best bad guy (an incident). Neither yields results as GOOD as measuring against the true vulnerability total, which would be far more ideal.