Thursday, December 16, 2010

Why Speed & Frequency of Software Security Testing Matter, A LOT

The length of time between when a developer writes a vulnerable piece of code and when the issue is reported by a software security testing process is vitally important. The more time that passes in between, the more effort the development group must expend to fix the code. Therefore, the speed and frequency of the testing process, whether dynamic scanning, binary analysis, pen-testing, static analysis, line-by-line source code review, etc., matter a great deal.

WhiteHat Sentinel is frequently deployed in the Software Development Life-cycle, mostly during the QA or User Acceptance Testing phases. From that experience we’ve noticed three distinct time intervals (1 week, 1 month, and 1 year) from when code is written to vulnerability identification, where the effort to fix differs markedly. Below is what we are seeing.

The following focuses solely on syntax vulnerabilities such as SQL Injection, Cross-Site Scripting, HTTP Response Splitting, and so on. Semantic issues, also known as Business Logic Flaws, have a different impact on the environment.
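To make "syntax vulnerability" concrete, here is a minimal, hypothetical sketch (the function and table names are illustrative, not from any real system). The entire flaw is one line: user input concatenated directly into a SQL string, so a crafted value can rewrite the query's syntax.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # VULNERABLE: user input is spliced into the SQL text itself,
    # so the input can change the query's structure.
    query = "SELECT name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
conn.execute("INSERT INTO users VALUES ('bob')")

# Normal input behaves as expected...
print(find_user_vulnerable(conn, "alice"))        # [('alice',)]
# ...but a crafted input alters the WHERE clause and dumps every row.
print(find_user_vulnerable(conn, "' OR '1'='1"))  # [('alice',), ('bob',)]
```

A fix that is trivial the week the code is written, as the post argues, can cost a day or more a year later.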

When vulnerability details are communicated within ______ of the code being written:

1 Week (Less than 1 hour fix)
The developer who introduced the vulnerability is the same developer who fixes the issue. Typically the effort required ranges from just minutes to an hour because the code is still fresh in the developer's mind and they are probably still working on that particular project. The code change's impact on QA and regression is minimal given how new the code is to the overall system.

1 Month (1 to 3 hour fix)
The original developer who introduced the vulnerability may have moved on to another project. Peeling them off their current task carries an opportunity cost. While the remediation effort might be only 1 to 3 hours of development time, usually an entire day of their productivity is lost as they must reset their environment, re-familiarize themselves with the code, find the location of the issue, and fix the flaw. The same effort would be necessary if another developer were tasked to patch it. If the vulnerability is serious, a production hot-fix might be necessary, requiring additional QA & regression resources.

1 Year (More than 10 hour fix)
The original developer who introduced the vulnerability is at least several projects away by now, or completely unavailable. The codebase may have been transferred to a software maintenance group, which has fewer skills and less time to dedicate to “security.” Being unfamiliar with the code, another developer will have to spend a lot of time hunting for the exact location and figuring out the preferred way to fix it, if one exists. Ten or more developer hours is common. Then a significant amount of QA & regression will be necessary. Finally, depending on the release cycle, deployment of the fix might have to wait until the next scheduled release, whenever that may be.

What’s interesting is that the time and effort required to fix a vulnerability is subject not only to the class of attack itself, but to how long ago the piece of code was introduced. It seems logical that it would be; it's just a subject not usually discussed. Another observation is that the longer the vulnerability lies undiscovered, the more helpful it becomes to pinpoint the problematic line of code for the developer. That's especially true in the 1-year zone. Again, terribly logical.

Clearly, then, during the SDL it’s preferable to get software security test results back into developers' hands as fast as possible. So much so that testing comprehensiveness will be happily sacrificed, if necessary, to increase the speed and frequency of testing. Comprehensiveness is less attractive within the SDL when results only become available once per year, as in the annual consultant assessment model. Of course it'd be nice to have it all (speed, frequency, and comprehensiveness), but it'll cost you (Good, Fast, or Cheap: Pick Two). Accuracy is the real wild card, though. Without it, the entire point of saving developers time is lost.

I also wanted to briefly touch on the difference between the act of "writing secure code" and "testing the security of code." I don’t recall when or where, but Dinis Cruz, OWASP Board Member and the visionary behind the O2 Platform, said something a while back that stuck with me. Dinis said developers need to be provided exactly the right security knowledge at exactly the time they need it. Asking developers to read and recall veritable mountains of defensive programming do’s and don’ts as they carry out their day job isn’t effective or scalable.

For example, it would be much better if, when a developer is interacting with a database, they are automatically reminded to use parameterized SQL statements. When handling user-supplied input, pop-ups immediately point to the proper data validation routines. Or, how about printing to the screen? Warn the developer about the mandatory use of the context-aware output filtering method. This type of just-in-time guidance needs to be baked into their IDE, which is one of the OWASP O2 Platform’s design objectives. "Writing secure code” using this approach would seem to be the future.
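The two reminders above can be sketched in a few lines. This is a minimal illustration, not the O2 Platform's mechanism: a `?` placeholder keeps user data out of the SQL syntax, and escaping for the HTML-body context neutralizes markup before it is printed to the screen. Function names here are hypothetical.

```python
import sqlite3
import html

def find_user_safe(conn, username):
    # Parameterized statement: the driver binds the value as data,
    # so it can never alter the query's structure.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

def render_greeting(username):
    # Context-aware output filtering for the HTML-body context.
    return "<p>Hello, " + html.escape(username) + "</p>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

print(find_user_safe(conn, "' OR '1'='1"))  # [] -- the injection attempt is inert
print(render_greeting("<script>alert(1)</script>"))
```

Note that each fix is only one or two lines; the point of just-in-time guidance is to prompt for exactly this at the moment the risky code is being typed.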

When it comes to testing, as you might imagine, WhiteHat constantly strives to improve the speed of our testing processes. You can see the trade-offs we make for speed, comprehensiveness, and cost as demonstrated by the different flavors of Sentinel offered. The edge we have on the competition, by nature of the SaaS model, is that we know precisely which of our tests are the most effective or likely to hit on certain types of systems. Efficiency = Speed. We’ve been privately testing new service line prototypes with some customers to better meet their needs. Exciting announcements are on the horizon.
