Monday, June 16, 2008

Why most WAFs do not block

I found the reason elegantly described in "Economics and Strategies of Data Security," by Dr. Dan Geer.

“When you know nothing, permit-all is the only option. When you know something, default-permit is what you can and should do. When you know everything, default-deny becomes possible, and only then.”

To implement default-deny, Web Application Firewalls (WAFs) must know everything about a website at all times, even as it changes. That means programmatically documenting every expected request method, URL, parameter name/value pair, cookie, process flow, and so on, which makes default-permit deployments the rule rather than the exception. Some WAF policies, though, like HTTP protocol validation, can run in default-deny mode; the rest, well, not so much. Which is why putting in point rules (virtual patches) to defend against known vulnerabilities tends to work well in lieu of pure positive security models.
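To make the virtual-patch idea concrete, here is a minimal sketch in ModSecurity rule syntax (the rule ID, URL, and parameter name are hypothetical): a point rule that blocks a known SQL injection issue in a single parameter while the rest of the site stays default-permit.

    # Hypothetical virtual patch: the "id" parameter of /product.php is known
    # to be injectable, so deny any request where it is not a plain number.
    SecRule REQUEST_FILENAME "@streq /product.php" \
        "chain,phase:2,deny,status:403,id:100001,msg:'Virtual patch: id must be numeric'"
        SecRule ARGS:id "!@rx ^[0-9]{1,10}$"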

10 comments:

Rafal Los said...

OK Jeremiah - I'll play Devil's advocate for a second. I know of at least one vendor that has a very good "learning" engine for their application firewall, so that's not necessarily 100% of the case. WAFs are evolving - but the fundamental problem is you can't leave a piece of software/hardware to do your security because you don't understand your applications. What happens is when you start to understand that the application must be *understood* in order to be secured, you have an epiphany that then starts to drive a better mentality. The problem with that is... developers don't always understand the code they've written - from a business process perspective.

Intelligently assessing it - WAFs can be a short-term tactical solution - but not a long-term strategy. The foundation for security, as I've preached over and over, starts before the first line of code is ever written ... and that you can't bake into a WAF rule-base.

Jeremiah Grossman said...

> OK Jeremiah - I'll play Devil's advocate for a second.

If conversation and an exchange of ideas is sparked, perhaps that's good enough for here.

> I know of at least one vendor that has a very good "learning" engine for their application firewall, so that's not necessarily 100% of the case. WAFs are evolving -

Can that vendor claim to know everything about the applications of that website (or every website) all the time? I would guess not. Apps change and the WAF has to learn. In between, it knows less than everything.

> but the fundamental problem is you can't leave a piece of software/hardware to do your security because you don't understand your applications.

Perhaps not "application" security but people leave software/hardware around all the time that does their "security".

> What happens is when you start to understand that the application must be *understood* in order to be secured, you have an epiphany that then starts to drive a better mentality. The problem with that is... developers don't always understand the code they've written - from a business process perspective.

Ain't that the truth. There is just too much to know at some point.

> Intelligently assessing it - WAFs can be a short-term tactical solution - but not a long-term strategy.

Please continue that thought? Why not?

> The foundation for security, as I've preached over and over, starts before the first line of code is ever written ... and that you can't bake into a WAF rule-base.

But even that is imperfect and unable to adapt quickly to a changing environment. See, this is where I think WAFs play a larger role: securing the insecure.

Anonymous said...

Nice post with a good quote. I'm throwing in Marcus Ranum's take on Default Permit (The Six Dumbest Ideas in Computer Security)

The essence of both seems to be that it is incredibly stupid to be ignorant regarding your own website. I'd back that.

What you neglect is that default-deny gives you a very clear understanding of your website in an incredibly brief amount of time. At the expense of your users, though.

The only way I see is to invest time and effort into getting to know your application, and to do so without hurting your users (too much). Set up a test system that mirrors your site 100% and develop your ruleset there. Then start to move this ruleset to the production site and run it in detection mode (permit all for a start), then slowly move towards deny-all - and be sure to monitor your logs in real time!
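For illustration only, that rollout maps roughly onto ModSecurity's engine modes (the log path is an assumption, and the exact directives on your system may differ):

    # Learning phase: log rule matches, never block, and watch the audit log.
    SecRuleEngine DetectionOnly
    SecAuditEngine RelevantOnly
    SecAuditLog /var/log/modsec_audit.log

    # Once the ruleset has been tuned against real traffic, flip to enforcement:
    # SecRuleEngine On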

A website is a huge thing and it is incredibly difficult to understand all its aspects. But with a WAF, the patterns become visible in a surprisingly short period of time.

Jeremiah Grossman said...

I don't think Default Permit is dumb, at least when it comes to webappsec, because often it IS the only thing you can do. I also think Marcus probably meant that statement for network security rather than webappsec.

The biggest challenge for default-deny in WAFs is that it requires the website developers and whoever manages the WAF (ops) to be in lock step, which is extremely hard organizationally. Which is where "learning" comes in. It's getting better, but it's not instantaneous or perfect all the time.

Having said all of that, everyone knows I'm very pro-WAF. I just think we need to continue exploring their capabilities and limitations to use them as effectively as we can.

Anonymous said...

Oh, I agree that Marcus was talking about network security. But I believe we are seeing the same trend in webapp security that led to the default-deny policy in network firewalls: we are moving in the direction of default-deny for web applications. I'm sure it will take years, though.

It's definitely an organizational problem and one that is very costly to solve. But you get a very good understanding of your web applications out of it. And imagine if application software came with a whitelist ruleset for your WAF!

Default deny forces a close cooperation between the OPS guys and the developers / system administrators. I think that is a good thing.
If you are planning well, then you set up your learning process with an automated test suite of your application for maximum benefit. That's when you start to really know your application.

Otherwise I think your original post is entirely legitimate: as long as it does not make sense to do default-deny from an economic standpoint, nobody will do it.

Jeremiah Grossman said...

What can I say, I concur. :) The only thing I'm not so sure about is, "Default deny forces a close cooperation between the OPS guys and the developers / system administrators. I think that is a good thing."

Not sure there is a way around it, but do we really want these two groups talking? I mean, I'm not even sure they speak the same language. :)

Anonymous said...

;) I see what you mean. Maybe I've been daydreaming too much.

Lately, I got into a project where nobody talked HTTP. They all talked distributed proprietary framework only. And I felt like a human in an alien world - or the other way around.

Ryan Barnett said...

There are a few WAF capabilities that we are talking about here - negative security, positive security, learning and then finally blocking.

Starting at the end, I can say from firsthand knowledge that the vast majority of ModSecurity users do utilize blocking. How do I know this? From a recent user survey where I asked this exact question. So, one item may be that the majority of Whitehat Sentinel users are not ModSecurity users! This is actually something we are aiming to address with the recent Sentinel/ModSecurity virtual patch integration :) Blocking is subjective in that it is not necessarily a global setting, meaning block nothing/everything. Some rules may log only while other rules are set to block. The blocking rules may be for either high-severity attacks or attacks where we have rules that we are confident will have a low false positive rate.
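As a rough sketch of that per-rule granularity (the rule IDs and patterns here are made up for illustration), one high-confidence signature can block while a noisier, more generic check only logs:

    # High-severity, low-false-positive signature: block outright.
    SecRule ARGS "@rx union\s+select" \
        "phase:2,t:none,t:urlDecode,t:lowercase,deny,status:403,id:100030,msg:'SQL injection signature'"

    # Generic, noisier check: log for review but let the request through.
    SecRule REQUEST_HEADERS:User-Agent "@pm nikto nessus" \
        "phase:1,t:none,t:lowercase,pass,log,id:100031,msg:'Scanner user-agent observed'"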

Speaking of false positive rates, this is where targeted positive security is very useful. It may not be feasible to manually create positive security rulesets for the entire application; however, you can certainly create these rules for highly sensitive areas (login pages, admin functions, etc.) and for issues identified through scanning/pentesting.
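A minimal sketch of what such a targeted ruleset might look like, assuming a hypothetical /login.php page that should only ever see two parameters:

    # Positive security for one sensitive page: anything outside the expected
    # method, parameter names, and formats is denied.
    <Location /login.php>
        SecRule REQUEST_METHOD "!@streq POST" \
            "phase:1,deny,status:403,id:100010,msg:'Unexpected method on login page'"
        SecRule ARGS_NAMES "!@rx ^(username|password)$" \
            "phase:2,deny,status:403,id:100011,msg:'Unexpected parameter on login page'"
        SecRule ARGS:username "!@rx ^[a-zA-Z0-9_.-]{1,32}$" \
            "phase:2,deny,status:403,id:100012,msg:'Username fails whitelist check'"
    </Location>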

As for negative security, it is useful for blocking; however, you need to be confident that you have addressed evasion issues. This means that you are looking at all of the possible attack vectors and countering evasion attempts aimed at sloppy regexes or poor normalization features.
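For example (an illustrative rule, not a production signature), normalization transformations help the same regex catch URL-encoded, entity-encoded, or mixed-case variants of an attack:

    # Decode and lowercase the input before the signature is applied, so
    # %3Cscript, <SCRIPT>, and &lt;script&gt; variants all hit the same rule.
    SecRule ARGS "@rx <script" \
        "phase:2,t:none,t:urlDecodeUni,t:htmlEntityDecode,t:lowercase,deny,status:403,id:100003,msg:'XSS signature in request arguments'"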

One of the biggest challenges facing webappsec, from an operational security perspective, is that the Dev/InfoSec/Ops folks aren't sharing data. If the Dev teams are using security frameworks (Struts, ASP.NET, etc.), then they can share their input points during a Threat Modeling process. It is at this time that all parties can focus in on critical areas for scanning/WAF protection rules. The issue is: how many organizations are actually doing this?

Anonymous said...

I think the title should have been "Why most WAFs do not block on everything". As you mentioned in the post, most WAFs do block by default on HTTP protocol violations (at least most of them do) and are configured to block against specific known vulnerabilities of the protected application. I might add that there are a few more issues that can be (and actually are) blocked by default, like some forms of data leakage (detailed error messages, Google hacking, etc.). I work for a WAF vendor (if you haven't guessed that by now) and our statistics from a recent customer survey show that most of our customers deploying a WAF use it for blocking.
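As an illustrative sketch of that outbound/data-leakage case (the error strings and rule ID are just examples), a response-phase rule can stop detailed database errors from reaching the client:

    # Requires response body inspection to be enabled.
    SecResponseBodyAccess On
    SecRule RESPONSE_BODY "@rx (ODBC Error|ORA-[0-9]{5}|You have an error in your SQL syntax)" \
        "phase:4,t:none,deny,status:500,id:100020,msg:'Detailed database error leaked in response'"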
Anyway, WAFs can surely benefit from vulnerability data provided by scanners (or pentest services). At the same time, vulnerability scanners can benefit from information that a WAF learning engine provides.
- Amichai

Jeremiah Grossman said...

Taking Ryan's and Amichai's comments into consideration, it would appear that the WAF world is going to continue to be very granular. With network firewalls you basically block everything and manage by exception, but with WAFs, not so much.

WAFs will have positive security models for some areas of the website that we're really sure about, where we block 100% of the time. Other areas we won't (block) because they might change a lot.

Then we'll also have negative security models where we block 100% of the time based on well-known attack signatures and/or vulnerabilities in the website. On the other hand, for the more generic classes of attack we'll probably decide to alert rather than block.

Would this be more or less a fair estimation?

Now if we could only describe the situation in simple terms. :)

@Amichai, phase 2 for WH will be the bi-directional chatter. For now we're just trying to prove out that VA->WAF can work.