The answer varies depending on the importance of each website and the security needs of the organization. Beyond the everyday network noise, in my experience the average attacker targeting custom web applications uses a web browser, an HTTP proxy, Google, and perhaps some specially crafted scripts. I think at this layer the odds are the attackers aren’t using vulnerability scanners of either the open source (because no decent ones exist) or commercial variety (it’s faster for them to find the vulnerability or two they need by hand). The main variable in a bad guy’s success is their level of persistence and cognitive skill rather than the capability of their tools. This is an important benchmark to understand.
The way the logic works is that the more thorough and frequent the VA process, the fewer attackers there are in the world possessing the time and ability to penetrate the system. Depth of testing needs to outpace the perceived skill of the vast majority of bad guys. Otherwise, what’s the point? There are the casual researcher types who just want to see if they can find an XSS issue on domain X. Then, a level above, there are the more dedicated and skilled attackers willing to invest several weeks or months to defraud a system. Frequency of VA should also match the change rate of the application. For example, a web application that changes every week doesn’t match up well with an annual two-week engagement.
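The idea of matching assessment frequency to release cadence can be sketched as a toy heuristic. The function name and the thresholds below are illustrative assumptions of mine, not figures from the post or any industry standard:

```python
# Toy heuristic: pick a maximum interval between vulnerability
# assessments based on how often the application changes.
# All thresholds are illustrative assumptions, not best-practice numbers.

def va_interval_days(deploys_per_year: int) -> int:
    """Suggest a maximum number of days between assessments."""
    if deploys_per_year >= 52:   # weekly (or faster) release cadence
        return 7                 # assess continuously/weekly
    if deploys_per_year >= 12:   # roughly monthly releases
        return 30
    if deploys_per_year >= 4:    # quarterly releases
        return 90
    return 365                   # mostly static site: annual review

print(va_interval_days(52))  # weekly deploys -> 7
print(va_interval_days(2))   # rarely changes -> 365
```

The point of the sketch is only that the assessment interval should shrink as the deploy rate grows; an annual two-week engagement maps to the bottom branch, which is a poor fit for a weekly-release application.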
At some point VA reaches a level of diminishing returns, and after thousands of assessments we have a good idea of best practices and what due diligence represents. We understand security can never be 100%, and at some point mistakes will be made, bad guys get lucky, or they’re talented and VERY persistent. I created the following diagrams to illustrate these concepts. The diagrams are not meant to be literal measures but to visually describe the fundamental concepts.
[Diagram: Time/Difficulty vs. Total Vulnerability (%)]

Now let’s overlay the estimated effectiveness of certain solutions onto the same vulnerability coverage zone of certain adversaries.
[Diagram: Effectiveness/Skill vs. Persistence]

When do we stop looking?
As a biased VA vendor, this will sound self-serving (but so be it): on websites where the code changes more than a few times per year and which require at least a modest degree of security, you never stop looking, because the bad guys certainly won’t. Applications are changing, and attack techniques are improving even where the applications aren’t. You want to enlist those who are AT LEAST as skilled as the pool of attackers, have a thorough methodology, and follow a consistent process. To borrow a quote from Bruce Schneier:
“Security is a process, not a product.”
Hey Jeremiah, I saw from the RSnake forum that your book had been released. Is it out on the bookshelves yet? I would love to get a copy and start exploring more from there. I am a network security guy, so my main focus will be in that area. As for web, I definitely want to explore more, but not so much in scripting and programming; just knowing what can be exploited and the methodology behind it is good enough for me. Of course, I know the basics, but I would love to go deeper. I will definitely grab a copy of it.
Hey hackathology, yeah, we're nearing completion and in the last stages of book development. Not sure exactly when it'll be on the shelves, but REALLY soon. The book will focus almost solely on XSS rather than the whole of webappsec, so it'll assume a lot of knowledge. You'll probably want to pick up another book or two in the space to get a feel for the topic first.
That's cool with me. I know XSS and how it works, which is why I would love to go deeper by buying this book. I need to show some love to you guys too. I read RSnake's forum; every day you see a different topic on XSS. It gets a little messy, but I still manage to read and understand it. This book will definitely benefit me.