Why vulnerable code should be fixed even after WAF mitigation

Wednesday, July 08, 2009

Websites have vulnerabilities, vulnerabilities that are found by vulnerability assessment solutions and then communicated to Web Application Firewalls (WAFs) for virtual-patch mitigation. Given the extremely heightened activity of our adversaries, compliance requirements, the volume of existing vulnerabilities, and money/time/human-resource constraints, this approach is becoming more common every day. What also becomes common is the question management and development groups ask of IT Security: "If the vulnerability is patched by a WAF, then why do we need to fix the code?" A reasonable question, and one we need to be prepared to answer with something better than proclaiming, "Because it is the right thing to do!" Obviously this is unconvincing, as it provides no real business justification. Here are some ideas:

- Developers like to copy code, even insecure code, which may eventually lead to new vulnerable Web applications being launched outside of deployed WAF protection.
- WAFs, like code, are not perfect and cannot always compensate for complex encoding/decoding application interactions, which can open the door to bypassing security rules.
- A vulnerable Web application feature may be delivered now or in the future via XML APIs, Flash, an iPhone application, etc., and by extension live beyond WAF protection.
- WAFs tend to fail open, and when they do, it is preferable not to have vulnerabilities sitting as an active risk of exposure indefinitely.
- A WAF may not be positioned to protect against the insider threat.
- WAF rules are often exploit-focused rather than vulnerability-focused, so they may protect against some specific attack variants but miss others. For the same reason, non-exploitable vulnerabilities may continue to be reported by vulnerability assessment solutions.
- Fixing a vulnerability correctly in the code will often systematically resolve an entire class of issues, both now and in the future.
- Compliance or customer security standards may require that an application be tested without WAF protection.
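To make the exploit-vs-vulnerability point concrete, here is a minimal sketch (not from the original post; the regex rule, schema, and function names are hypothetical) contrasting a blacklist-style virtual patch, which blocks only the reported attack variant, with a root-cause code fix via a parameterized query:

```python
import re
import sqlite3

# Hypothetical exploit-focused WAF rule: blocks the one SQL injection
# payload that was reported, rather than the underlying vulnerability.
BLOCK_RULE = re.compile(r"'\s*OR\s*1=1", re.IGNORECASE)

def waf_allows(value: str) -> bool:
    """Return True if the (naive) virtual patch lets the input through."""
    return BLOCK_RULE.search(value) is None

# The known exploit is blocked...
assert not waf_allows("' OR 1=1 --")
# ...but trivial variants of the same vulnerability slip past the rule.
assert waf_allows("' OR 'a'='a")        # different tautology
assert waf_allows("'%20OR%201=1%20--")  # URL-encoded, if matched pre-decode

# Fixing the code removes the entire vulnerability class instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def lookup(name: str):
    # Parameterized query: user input is bound as data, never parsed
    # as SQL, so every injection variant above becomes harmless.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

assert lookup("alice") == [("s3cr3t",)]
assert lookup("' OR 'a'='a") == []  # treated as a literal name, no rows
```

The blacklist needs a new rule for every variant an attacker invents; the parameterized query needs nothing further, which is what "fixing it right takes the whole class off the table" means in practice.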
Posted by Jeremiah Grossman at 2:21 PM
You don't know how many times I have had to answer the question "Why do I need to address code changes if we have a WAF?"...
I have given them most of the same arguments you show in this, to no avail.
It is good to see that I am not crazy and that others think this way.
@Jimmy, did you happen to find any reasoning or line of discussion that did serve to convince?
Hey, as most everyone knows, I am a WAF supporter ;) Even so, I have to answer the same types of questions when we talk with prospects.
Jeremiah - you mention at the end of your opening paragraph that you are looking for "reasonable business justification" for still fixing code, however all of your ideas are technical/risk-based. I would suggest that users focus in on the contractual language that was in place for the development of the web application code. If there was no language for producing "secure code" then they should tackle that first. SANS has a good template to work from - http://www.sans.org/appseccontract/.
Basically - I believe that until developers are hit in the wallet (and start missing out on bonuses) by producing code that has either functional or security defects, things won't change. This goes back to the contract language.
Virtual patch management is sort of a hybrid between vulnerability, IDS and firewall rule management. Any network firewall admin will tell you that over time, the rule sets implemented can be quite complex and always stacking on rules ends up impacting performance later. I have worked with many customers with this process and I always advocate leveraging the customer's change management systems to track both virtual patches and the code changes themselves. If/when the code gets fixed, then you can remove a virtual patch from the WAF and "trim the fat" so to speak.
Another tactic that I have seen customers use, that I don't necessarily agree with on all fronts, is the whole "need to know" scenario. Their belief is that the developers don't need to know that virtual patching is even taking place. Their management is simply provided with vulnerability details and instructed to fix the code. Bringing up virtual patching as a compensating control only muddies the waters and removes much of the motivations for fixing the code.
Another reason is usability, is it not?
That WAF is not likely to produce a user-friendly response to the problematic input.
Any feedback on WH customers running scans with their defensive line down?
From the WAF vendor side of things, I've seen different customers asking us to:
- whitelist the scanner
- treat the scanner like other users
- blacklist the scanner
@Ryan, what can I say... very well said. The decision to fix or not fix IS risk-based IMHO. What I find many are challenged by is understanding and/or articulating the possible risks to the decision makers. Hence the reason for the list.
@Stephan, hmmm... that is an interesting one, but right on the line I think. Why give an informative error to someone who is attacking you in a known vulnerable spot? You probably wouldn't, but maybe not in every case.
@Sylvain, some do... some don't. When and why is always their call, and we don't really have great visibility into when this happens. We test whatever they give us in whatever state it happens to be. What we have seen is some who specifically blacklist our tests with regexes, which doesn't fix the issue, which is particularly frustrating.
What? Dre hasn't posted to this yet? Shocking!
The line that I use most is that it boils down to good vulnerability management, which is remediation of the root cause, not mitigation of the symptoms.
Since the root is the code, anytime that we find that we have an issue that the WAF is defending we should apply true root cause analysis and remediation.
I talk about the fact that code that is secure today is not secure 5 years down the road as new attack vectors are identified, and while you can throw something in front of the code to reduce the threat, the threat does not go away until the code is fixed.
I used to say that you can use duct tape to fix anything... but I wouldn't fly in a plane with duct tape on the wings... it's that way with vulnerable code... the WAF will fix it... but I don't want to fly with the code.
I like turtles^H^H^H^H^H^H^HNumber 7.
You could also use a scenario showing the WAF is just an added protection and wouldn't catch all.
Plus, there are times when code is so gruesomely erroneous that it affects performance. You can see these from the request headers. All sorts of things happen. Dev should still check these events. Pose as if you're helping them and not just fulfilling your targets.
#n+1: At some point, you may be able to stop paying the hardware+software+latency costs for your WAF.