Friday, May 08, 2009

Real-World website vulnerability disclosure & patch timeline

Protecting highly trafficked, high-value websites can be an interesting InfoSec job, to say the least. One thing you quickly learn is that you are under constant attack, by essentially everyone, with every technique they have, all the time. Noisy robots (worms & viruses) and fully targeted attackers are just par for the course. Sometimes the attackers are independent researchers testing their mettle, or the third-party security firm hired to report what all the aforementioned attackers already know (or are likely to know) about your current security posture.

When new Web code is pushed to production, it's a really good idea to have a vulnerability assessment scheduled immediately after to ensure security, separate from any SDL defect-reduction processes. PCI-DSS requires this of merchants. At this point it becomes a “find and fix” race between the good guys and the bad guys to identify any previously unknown issues. Below is a real-world website vulnerability disclosure and patch timeline from a WhiteHat Sentinel customer who takes security, well, very seriously. The website in question is rather large and sophisticated.
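But first, to make “scheduled immediately after” concrete: here is a minimal sketch of a post-push deploy hook that queues a scan. The scanner endpoint, token, and payload shape are hypothetical placeholders, not a real Sentinel API.

```python
# Hypothetical post-deploy hook: queue a vulnerability scan the moment new
# code lands in production. Endpoint, token, and payload are placeholders --
# NOT a real WhiteHat Sentinel API.
import json
import os
import urllib.request

SCANNER_API = "https://scanner.example.com/api/v1/scans"  # hypothetical
TARGET_SITE = "https://www.example.com"                   # site just deployed

def trigger_post_deploy_scan() -> None:
    payload = json.dumps({"target": TARGET_SITE, "profile": "full"}).encode()
    request = urllib.request.Request(
        SCANNER_API,
        data=payload,
        headers={
            "Authorization": "Bearer " + os.environ["SCANNER_TOKEN"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        print("Scan queued, HTTP status:", response.status)

if __name__ == "__main__":
    trigger_post_deploy_scan()
```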

* Specific dates and times have been replaced with elapsed time (DD:HH:MM) to protect the identity of those involved. Some exact dates/times could not be confirmed.


??:??:?? - New Web code is pushed to a production website
00:00:00 - WhiteHat Sentinel scheduled scan initiates
02:19:45 - Independent researcher notifies website owner of a website vulnerability
02:20:19 - WhiteHat Sentinel independently identifies (identical) potential vulnerability
* WhiteHat Sentinel scan total time elapsed: 00:26:19 (difference due to blackout windows)
02:21:24 - Independent researcher publicly discloses vulnerability details
02:23:18 - WhiteHat Operations verifies Sentinel discovered vulnerability (customer notification sent)
02:23:45 - Website owner receives the notifications and begins resolution processes
03:00:00 - WhiteHat Operations notifies customer of public disclosure
??:??:?? - Web code security update is pushed to production website
03:09:06 - WhiteHat Sentinel confirms issue resolved

Notice the independent researcher reported the exact same issue we found, but less than an hour before we found it! They could just as easily not have (disclosed). Also note the customer's speed of fix: under 12 clock hours, which is stellar considering most fixes take weeks or months. As you can see, the bad guys are scanning/testing just as hard, fast, and continuously as we are, which is a little scary to think about.
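For the curious, both gaps fall straight out of the DD:HH:MM stamps in the timeline above. A quick sketch (stamps copied from the timeline):

```python
# Quick sketch: parse the DD:HH:MM stamps from the timeline above and
# reproduce the two gaps called out in this post.
from datetime import timedelta

def parse_ddhhmm(stamp: str) -> timedelta:
    days, hours, minutes = (int(part) for part in stamp.split(":"))
    return timedelta(days=days, hours=hours, minutes=minutes)

researcher_report = parse_ddhhmm("02:19:45")  # researcher notifies owner
sentinel_finds    = parse_ddhhmm("02:20:19")  # Sentinel finds the same issue
owner_begins_fix  = parse_ddhhmm("02:23:45")  # owner begins resolution
fix_confirmed     = parse_ddhhmm("03:09:06")  # Sentinel confirms resolution

print(sentinel_finds - researcher_report)  # 0:34:00 -> behind by 34 minutes
print(fix_confirmed - owner_begins_fix)    # 9:21:00 -> fixed in under 12 hours
```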

4 comments:

Matt Presson said...

As an AppSec professional who works for a very large enterprise, I have to disagree with your statement "[w]hen new Web code is pushed to production, it's a really good idea to have a vulnerability assessment scheduled immediately after to ensure security." This seems to undermine any SDL process that the organization may have in place.

In my opinion, if your SDL process is of any merit then there should be nothing to find during your assessment. If the application being pushed handles "sensitive data" (using the definition of sensitive that is defined by the organization), then by all means security defects should be fully remediated before the application is loaded into production. In this scenario, an assessment becomes nothing more than busy work.

Taking a holistic approach to the entire process, I believe that the first, best thing an enterprise can do is to define a clear set of information security standards (which include application security standards) and distribute these to all developers. These standards should spell out everything that a developer should consider when developing an application. To take it one step further, it would also be good if the organization developed a set of best practices and examples for developers to refer to when writing code. In this manner, code reviews become increasingly simple and quick, because the code utilized for a given feature has been vetted by security professionals and is known to work securely under numerous scenarios. At this point, all that is really required in the code review is to verify that the example was properly followed/implemented, thereby giving developers more time to focus on scrutinizing their custom business logic for flaws.
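To illustrate, a purely hypothetical entry from such a best-practices library might look like the following, so a reviewer only has to confirm the pattern was followed:

```python
# Hypothetical entry from an internal best-practices library: fetch a user
# with a parameterized query so attacker-controlled input is never spliced
# into the SQL string (i.e., no SQL injection).
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # What the standard forbids:
    #   conn.execute("SELECT id, name FROM users WHERE name = '" + username + "'")
    # What the vetted example mandates -- bind the value as a parameter:
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchone()
```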

The next great step to take, in my opinion, is to gather information about the organization's standard SDLC process and modify it in such a way that it includes questions and processes that make both developers and business/marketing divisions think about security during the concept and design process. In this way, when a BRS/SRS comes down to the developer, security has been thought of - at least to some level - and hopefully there are development requirements that directly address the security concerns of the system. This modification provides at least a four-fold benefit. First, the security concerns and controls required to adequately protect the system are now documented somewhere. Second, the developer does not have the out of "I didn't put that [meaning that security feature] in there because it wasn't in the documentation." Third, if security is thought about during concept and design, many problems can be resolved early, which is always good. As most of the flaws caught during these steps are going to be architectural in nature, the benefit compounds exponentially as more and more functionality gets added to the final product. This is true due to the simple fact that architectural changes almost always touch every aspect of a system, and once you get to a certain point the only recourse you have to remediate is a complete re-write - which effectively nixes any chance of fixing the bug in the first place. Lastly, since the security controls are documented, auditing for the proper controls becomes A LOT easier.

All-in-all, if a large enterprise wants to be secure it has to embed security inside of the software development life cycle. Now, I do not necessarily think this constitutes forming a "Secure SDLC" but rather necessitates forming an "SDLC that considers Security".

Andy Steingruebl said...

While in theory this isn't necessarily a bad idea, Matt, when people push code to production, do they do post-push testing of their live site? Or, since their QA testing processes were "guaranteed" to find all of the problems, and the administrators who pushed it were guaranteed not to make any mistakes, should they just go home and not test the production system to see if things are working?

I think most people do a code push, and then they do some limited testing on their production site just to make sure it's up and running, their process didn't miss anything, no configurations got broken, etc.

Why wouldn't you want to include security testing in there just in case?
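Even something as small as this sketch would do it. The target URL is a placeholder and the two checks are merely examples of cheap security assertions to fold into that routine post-push smoke test:

```python
# Minimal post-push smoke test: confirm the production site is up, and run
# two cheap security checks while we're at it. Target URL is a placeholder
# and the checks are merely examples.
import urllib.request

TARGET = "https://www.example.com/"

def smoke_test() -> None:
    with urllib.request.urlopen(TARGET, timeout=10) as response:
        assert response.status == 200, f"site down? HTTP {response.status}"
        headers = response.headers
    # Functional check passed; now the "just in case" security checks:
    server = headers.get("Server", "")
    assert "/" not in server, f"Server header leaks a version: {server!r}"
    csp = headers.get("Content-Security-Policy", "")
    assert headers.get("X-Frame-Options") or "frame-ancestors" in csp, \
        "no clickjacking protection in response headers"

if __name__ == "__main__":
    smoke_test()
    print("post-push smoke test passed")
```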

Jeremiah Grossman said...

To piggyback onto Security Retentive...

"This seems to undermine any SDL process that the organization may have in place.?"

Or perhaps it measures the SDL's TRUE effectiveness which, from our vulnerability assessment measurements, varies greatly and never achieves 100% perfection.

"In my opinion, if your SDL process is of any merit then there should be nothing to find during your assessment."

I think this is a bit extreme. Just because you find vulnerabilities in production after an SDL cycle doesn't mean the entire process is worthless.

The fact of the matter is that strange things happen when going from test to production systems, even in "mirrored" environments. Again, things WE can and HAVE measured.

dre said...

What's the risk involved?