Vulnerabilities identifiable in an automated fashion, such as with a scanner, can be loosely classified as “low-hanging fruit” (LHF) -- issues that are easy, fast, and likely for the bad guys to uncover and exploit. Cross-Site Scripting, SQL Injection, Information Leakage, and so on are some of the most typical forms of website LHF. Some organizations approach website vulnerability assessment (VA) by basing their programs around a scanner focused primarily on LHF. The belief is that by weeding out LHF vulnerabilities, break-ins become less likely as the organization's security posture rises above the lowest common denominator. They realize they're not going to catch everything, but in return they expect this VA testing depth to be “better than nothing,” to perhaps meet PCI compliance requirements, and, very importantly, to be low-cost.
Unfortunately, things often don't turn out that way. Due to shortcomings in Web application scanning technology, the LHF scanner strategy is highly unlikely to achieve the desired result. First of all, Web application scanners can help tremendously with locating LHF vulnerabilities, no question. However, scanners DO NOT identify all the LHF on a given website, not by a long shot. No doubt many of you have come across URLs laden with account_id parameters where all one needs to do is rotate the number up or down to access someone else's bank/mail/CMS account. What about admin=0 parameters begging for a true value? Money transfers with a negative amount?
We could go on all day.
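To make the point concrete, here is a minimal sketch of the kind of trivial parameter tampering just described -- rotating numeric ids, flipping admin flags, negating amounts. The URLs and parameter-name heuristics are hypothetical illustrations; a real test would replay each variant in an authenticated session and compare the responses.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical parameter-name hints; a real tool would use a broader list.
def tamper_variants(url):
    """Generate trivially tampered versions of a URL's query string:
    rotate numeric ids up/down, flip admin-style flags, negate amounts."""
    parts = urlparse(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    variants = []
    for key, value in params.items():
        if not value.lstrip("-").isdigit():
            continue
        n = int(value)
        candidates = [n + 1, n - 1]        # rotate ids up and down
        if "amount" in key.lower():
            candidates.append(-abs(n))     # negative money transfer
        if "admin" in key.lower():
            candidates = [1]               # admin=0 begging for a true value
        for c in candidates:
            mutated = dict(params, **{key: str(c)})
            variants.append(urlunparse(parts._replace(query=urlencode(mutated))))
    return variants
```

Each variant is something an attacker can try by simply editing the address bar -- no tooling required.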
The point is these are not ninja-level AES crypto padding reverse-engineering attacks, nor are they edge cases. No scanner, no HTTP proxy, not even view-source is required to spot them. Just a Web browser. How much easier does it get!? Oh, right -- CSRF vulnerabilities. CSRF remains one of the most pervasive website vulnerabilities and also among the easiest to find. That is, to find by hand. Web application scanners have a REALLY hard time identifying CSRF issues -- false-positive and false-negative city -- without a lot of manual assistance.
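To illustrate how little machinery manual CSRF hunting requires, here is a rough sketch that flags POST forms containing nothing that looks like an anti-CSRF token field. The token-name hints are assumptions, not a complete heuristic, and a human still has to judge whether a flagged form actually performs a state-changing action -- which is exactly why scanners struggle here.

```python
from html.parser import HTMLParser

# Assumed name fragments that suggest an anti-CSRF token field.
TOKEN_HINTS = ("csrf", "token", "nonce", "authenticity")

class FormAuditor(HTMLParser):
    """Collect the actions of POST forms with no token-looking input field."""
    def __init__(self):
        super().__init__()
        self.in_post_form = False
        self.has_token = False
        self.action = ""
        self.findings = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and attrs.get("method", "get").lower() == "post":
            self.in_post_form, self.has_token = True, False
            self.action = attrs.get("action", "")
        elif self.in_post_form and tag == "input":
            name = (attrs.get("name") or "").lower()
            if any(hint in name for hint in TOKEN_HINTS):
                self.has_token = True

    def handle_endtag(self, tag):
        if tag == "form" and self.in_post_form:
            if not self.has_token:
                self.findings.append(self.action)  # candidate CSRF target
            self.in_post_form = False
```

A dozen lines of parsing gets you a candidate list; confirming exploitability is the manual part.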
Secondly, the output from commercial and open source Web application scanners is routinely inconsistent from one product to the next. As can be seen in Larry Suto's “Accuracy and Time Costs of Web Application Security Scanners [PDF]” study, products including Acunetix, AppScan, Hailstorm, and others report different LHF vulnerabilities in both type and degree -- even on the exact same website. One scanner may technically find more vulnerabilities than another, but that does not necessarily mean it found everything the others did. It is also not uncommon for the same scanner to produce varied results across successive scans of the same website. This lack of comprehensiveness and consistency in Web application scanner LHF detection presents a hidden dilemma.
Suppose your LHF strategy says: run a scanner, find what you can, and fix whatever it finds. Great, but consider what happens if your everyday LHF adversary runs a different scanner across your website than you did -- which is entirely likely. Anyone may buy/download/pirate one of the many scanners available on the market. They might also do some manual hunting and pecking. Most important, though, are the favorable odds that they'll uncover something your scanner missed. Scanners possess very different capabilities with respect to crawling, session-state management, types of injection, and vulnerability-detection algorithms. Individual scans can be configured to run under different user accounts with varied privilege levels, or instructed to fill forms in a slightly different way. All of this presents a slightly different attack surface to test, which results in missed vulnerabilities and general inconsistency.
While the LHF scanner strategy is in place, often none of this is readily apparent. That is, until someone from outside the organization -- a business partner, customer, or security researcher -- reports a missed vulnerability from a simple scan they ran. This makes it appear that you don't test your websites well enough, which is at least slightly embarrassing, but usually nothing truly serious results. What really nails organizations in the end, and we've all likely seen it many times, is when a website is compromised and management/customers/media ask why. Saying “we are PCI compliant” is just asking for the rod. As is “we take security seriously; our top-of-the-line scanner didn't find the vulnerability” -- all while the website is defaced with profanity, infecting visitors with malware via some mass SQL Injection payload, or being defrauded by a cyber criminal. Now the organization is sending out disclosure letters, figuring out who to blame or sue, and answering to an FTC investigation.
Maybe doing nothing was the better idea after all, because at least you didn't pay good money to get hacked just the same. Anything worth doing is worth doing right. Web application security included.