Inside an enterprise lives an IT security professional responsible for website security. He takes his job seriously, because if his employer’s websites get hacked, he’s the one who gets the late-night call from the boss. A big part of the job is educating developers on the importance of secure coding and informing business owners of Web security risks, because no amount of patching or firewalling will fend off an attacker with a browser. Yet even while doing everything within his power, he still lacks real control over the websites he’s responsible for: when vulnerabilities are found, he can’t fix them without developer involvement.
Does this situation sound familiar? I hear the frustration all the time. The problem is that when a vulnerability is identified, whether by a pen-tester, a developer, an outsider, or whomever, the options are few and painful:
1. Take the website down
2. Revert to an older version of the website/code (if it’s secure)
3. Stay up while exposed
Nothing is better than not having an issue in the first place, but vulnerabilities will crop up despite the best software development lifecycle. Option #1 is typically reserved for occasions where an incident has already occurred, and option #2 for when a production hot-fix was never back-ported to development and later gets overwritten. So far, history shows the vast majority choose option #3 and assume the risk rather than halt business, with fixes a long way off.
Clearly we need more options: something that allows vulnerabilities to be mitigated without impacting business operations.
This is important because 9 out of 10 (or more) websites have vulnerabilities, a result of being built by those who didn’t know or appreciate the severity of today’s attacks. Furthermore, I’d say most of the popular ecommerce websites were built either before prevalent vulnerability classes such as XSS, SQL Injection, and CSRF were discovered, or before they became common knowledge. Consequently, we’re burdened by 15 years of insecure website code already in circulation, and it’s extremely unlikely that code will be rewritten solely for “security reasons.” Over the coming decade these websites will be replaced naturally, as businesses pursue new goals and take advantage of emerging technologies and more secure development frameworks.
That means when you take an honest look at website security, there must be two different, but equal, strategies:
1. Security throughout the SDLC
2. Vulnerability Assessment + Web Application Firewall
Strategy #1 works best for websites soon to be built or undergoing a major rewrite or addition. A Web security program that combines executive buy-in, modern development frameworks, awareness training, and security baked into design and QA simply does wonders. On the other hand, this strategy is often difficult and expensive to retrofit onto an existing website where no such activity took place historically. Even after vulnerabilities are identified, it’s time-consuming to allocate personnel, QA/regression-test the fix, and schedule production releases. No matter how mature the SDLC, it takes at least days, sometimes weeks, even months, for issues to be resolved. This is where most organizations are today, and it will seriously challenge true PCI 6.6 compliance once the burden is realized.
That’s why strategy #2 works best for existing websites. It’s a technology integration in which vulnerability assessment results are imported directly into a WAF, creating a “virtual patch.” The integration closes the loop between vulnerability identification and mitigation, leaving the opportunity to address root causes as time and budgets allow. The challenge is that if the vulnerability data source is inaccurate, the virtual patches may cause the WAF to deny valid traffic, allow malicious traffic, or crash entirely. With verified data, the enterprise can fully realize its vulnerability assessment investment in real time, confidently place WAFs in blocking mode, and give IT professionals the control they’ve always lacked. And it would be about time, too!
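To make the virtual-patching idea concrete, here’s a minimal sketch of how an assessment finding might be translated into a WAF rule. The finding format and the regex patterns are my own illustrative assumptions, not any vendor’s actual export schema; the output follows ModSecurity’s SecRule syntax, as one example of a WAF rule language.

```python
# Hypothetical translation of a scanner finding into a "virtual patch".
# The finding dict and attack patterns below are illustrative only.

PATTERNS = {
    # Deliberately narrow signatures for the sketch; real virtual
    # patches would be tuned to the specific, verified vulnerability.
    "xss": "(?i)<script|javascript:",
    "sqli": "(?i)(union\\s+select|'\\s*or\\s+1=1)",
}

def virtual_patch(finding, rule_id):
    """Build a ModSecurity-style SecRule that blocks exploit traffic
    against one vulnerable parameter, leaving the rest of the site
    untouched until the root cause is fixed in code."""
    pattern = PATTERNS[finding["class"]]
    return (
        f"SecRule ARGS:{finding['param']} \"@rx {pattern}\" "
        f"\"id:{rule_id},phase:2,deny,status:403,"
        f"msg:'Virtual patch: {finding['class']} in {finding['param']}'\""
    )

# Example: a verified XSS finding in the "q" parameter of /search.
finding = {"url": "/search", "param": "q", "class": "xss"}
print(virtual_patch(finding, 100001))
```

The point of the sketch is the scoping: because the rule targets only the parameter the assessment verified as vulnerable, a trustworthy data source keeps false positives low enough to run the WAF in blocking mode, which is exactly why inaccurate scan data undermines the whole approach.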
If this solution sounds familiar, that’s because the idea has been tried before, just never successfully in web application security. Kavado tried in the 2002-2003 era, several more vendors attempted it with AVDL in 2003-2004, and I’m sure there were others, but ultimately all attempts proved unsuccessful for the reasons above. Only recently have vulnerability scanning and web application firewall technology matured to the point where the combined approach has finally become viable. As the current state of Web security becomes clear and business decisions need to be made, having a variety of options available is a great thing.