The answer varies depending on the importance of each website and the security needs of the organization. Beyond the everyday network noise, in my experience the average attacker targeting custom web applications uses a web browser, an HTTP proxy, Google, and perhaps some specially crafted scripts. At this layer, I think the odds are that attackers aren’t using vulnerability scanners, either open source (because no decent ones exist) or commercial (it’s faster for them to find the vulnerability or two they need by hand). The main variable in a bad guy’s success is his level of persistence and cognitive skill rather than the capability of his tools. This is an important benchmark to understand.
The logic works like this: the more thorough and frequent the VA process, the fewer attackers in the world possess the time and ability to penetrate the system. Depth of testing needs to outpace the perceived skill of the vast majority of bad guys; otherwise, what’s the point? There are the casual researcher types who just want to see if they can find an XSS issue on domain X. A level above them are the more dedicated and skilled attackers willing to invest several weeks or months to defraud a system. Frequency of VA should also match the change rate of the application. For example, a web application that changes every week doesn’t match up well with an annual two-week engagement.
At some point VA reaches a level of diminishing returns, and after thousands of assessments we have a good idea of best practices and what due diligence represents. We understand security can never be 100%: at some point mistakes will be made, bad guys get lucky, or they’re talented and VERY persistent. I created the following diagrams to illustrate these concepts. The diagrams are not meant to be literal measures but to visually describe the fundamental concepts.
[Diagram: Time/Difficulty vs. Total Vulnerability (%)]

Now we can overlay the estimated effectiveness of certain solutions onto the vulnerability coverage zones of certain adversaries.
[Diagram: Effectiveness/Skill vs. Persistence]

When do we stop looking?
As a biased VA vendor this will sound self-serving (but so be it): on websites where the code changes more than a few times per year and which require at least a modest degree of security, you never stop looking, because the bad guys certainly won’t. Applications are changing, and attack techniques are improving even when the applications aren’t. You want to enlist those who are AT LEAST as skilled as the pool of attackers, have a thorough methodology, and follow a consistent process. To borrow a quote from Bruce Schneier:
“Security is a process, not a product.”