Thursday, December 02, 2010

Website Monocultures and Polycultures

Way back in 2003 a group of highly respected security pros released a controversial yet landmark paper, “CyberInsecurity: The Cost of Monopoly.” The text offered two lessons: software monocultures are dangerous, and Microsoft, the largest of the monocultures, is the most dangerous. At the time this conclusion sparked an industry firestorm. More recently Bruce Schneier and Marcus Ranum faced off on the monoculture debate, which got me thinking about whether these ideas apply equally to website security. You see, monoculture and polyculture theories are generally based upon commonly understood network- and host-layer security behaviors, behaviors very different from those of website security.

Before diving in, let’s first establish a baseline on the fundamental assumptions about software monocultures and polycultures. Monocultures, meaning environments where all systems are identical, are at elevated risk of systemic, widespread compromise because every node is vulnerable to the same attack. For example, a single exploit (a zero-day is not necessarily even required) is capable of ripping through the entire ecosystem. The benefit of a monoculture, however, is that the consistency of the connected nodes allows for easier management by IT. Manageability makes keeping patches up to date less difficult and, by extension, raises the bar against targeted attacks and random opportunistic worms.

In polycultures, exactly the opposite is true. In the event of a worm outbreak, again possibly leveraging a zero-day vulnerability, a polyculture would be more resilient (survivable) by virtue of the diversity in the ecosystem: not all nodes are vulnerable to the same issue. The downside, of course, is the increased difficulty and cost to IT of managing a polyculture, including keeping security patches up to date. Targeted attacks and random opportunistic worms are therefore more likely to succeed in a polyculture environment, but only against a limited portion of it.

One’s tolerance for risk and the threat agent one is most concerned about, targeted or random opportunistic, dictate whether a monoculture or a polyculture environment is preferable. We know where the authors of the aforementioned paper fell, and in 2003, the era of extremely large worm infections including SQL Slammer, Blaster, Nimda, Code Red, and so on, it is not hard to see why. Today I would hazard a guess that most people in the infosec industry still agree with their conclusion -- monocultures are dangerous, polycultures are more survivable. When thinking in terms of networks composed of Windows, Linux, BSD, OS X, and so on, all of this sounds reasonable, but when the context is switched to websites things get trickier.

First of all, it doesn't matter what OS underlies a target site when SQL Injection, CSRF, XSS, AuthN, and AuthZ attacks are employed against it. Second, save for mass SQL Injection worms that take advantage of one-off Web application flaws, website compromises are primarily targeted. That is, the attacker is to some degree sentient. Lastly, and this is where the monoculture vs. polyculture question comes in, a network's attack surface consists of its many hosts, while a website's is really its collection of discrete Web applications (or the inputs to those applications), applications written in one or more programming languages.
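For illustration only, here is a minimal sketch of why the underlying platform is invisible to this class of attack: the exploit is just data inside an HTTP request, so the very same probe gets sent whether the backend is Windows/.NET or Linux/PHP. (Python, the URL, and the parameter name below are all hypothetical, chosen just to keep the sketch concrete.)

```python
import requests  # assumes the third-party 'requests' package is installed

# The same classic probe strings are sent regardless of the server's OS or
# language; only the application's handling of the input decides the outcome.
probes = {
    "sqli": "' OR '1'='1",                  # SQL Injection test string
    "xss":  "<script>alert(1)</script>",    # reflected XSS test string
}

target = "https://example.com/search"       # hypothetical target URL
for name, payload in probes.items():
    resp = requests.get(target, params={"q": payload}, timeout=10)
    # Naive check: does the payload come back unmodified (possible XSS),
    # or does the response differ in a way that hints the query broke?
    print(name, resp.status_code, payload in resp.text)
```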

I’m suggesting that a “website monoculture” is one where all the Web applications are written in a single programming language and development framework. Pick Python, Perl, PHP, .NET, Java, Ruby, etc. But one and only one. Conversely, a “website polyculture” is one where there’s some mix of languages and frameworks. Of course, the manageability aspects of a multi-language website mirror those of a multi-OS network: sites using a single, consistent language are easier for an organization to code securely and keep secure. Here’s where it gets interesting: I’m not sure how common a website monoculture really is on the Web.

Any website worth hacking is NOT a static network or host component, but a mass of Web applications updated with some frequency over years -- daily, weekly, monthly, etc. Functionality may be merged with or migrated from other websites, built by different teams of developers, with each section of code created in whatever programming language was popular at the time. A website is not AN application, but many hyperlinked together. That's why you’ll often see websites using some classic ASP alongside .NET, Java mixed in with ColdFusion, perhaps Perl intermingled with PHP, and many other combos. Case in point: the WhiteHat Security 9th Website Security Statistics Report showed that websites exhibiting file extensions from one language often had a substantial number of vulnerabilities in pages with extensions from another.

It is also important to point out that when one host on a network is compromised, the attacker may attempt to exploit another flaw in another host and leapfrog across. This is a quite common scenario, because the first compromised machine is usually not enough to achieve the end goal. Websites, on the other hand, are different. One exploited bug in a website tends to give the attacker exactly what they wanted: server-side data or end-user compromise. It is rather uncommon to need to exploit one Web application flaw in order to exploit another to achieve the goal. Just one XSS allows someone to hack all users; one SQL Injection pilfers all of the data. So no real compartmentalization exists on a website, and therefore there’s nothing to be gained security-wise, that I can see, from a polyculture website.
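As a toy illustration of the “one bug, all the data” point, here is a minimal sketch of the classic vulnerable pattern. (SQLite, the table layout, and the function name are hypothetical, used only to keep the example self-contained.)

```python
import sqlite3

# Throwaway in-memory database standing in for the site's backend store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?, ?)",
               [(1, "alice", "s3cret"), (2, "bob", "hunter2")])

def lookup(user_id: str):
    # Vulnerable pattern: user input concatenated straight into the query.
    query = "SELECT username FROM users WHERE id = " + user_id
    return db.execute(query).fetchall()

print(lookup("1"))  # normal use: [('alice',)]
print(lookup("0 UNION SELECT username || ':' || password FROM users"))
# One injectable parameter returns every row:
# [('alice:s3cret',), ('bob:hunter2',)]
```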

So if website attacks are generally targeted, again except for SQLi worms, and it's easier to secure code written all in the same language, then we should be advocating monoculture websites. Right? Which is exactly the opposite of how the community seems to want to treat networks. I just found that to be really interesting. What I’m working on now inside WhiteHat is trying to find statistical evidence, in real terms, of how the security postures of the average monoculture and polyculture websites compare. I’m guessing monoculture websites are noticeably more secure, that is, have fewer vulnerabilities. But what would your theory be?

13 comments:

Chris Eng said...

"One exploited bug in a website tends to give the attacker exactly what they wanted, server-side data or end-user compromise."

I think that may be a slight oversimplification. For example, the attacker might not immediately target the application housing the data they're after. Rather, they'll go after the old crusty web app running classic ASP, and then use that as a springboard to reach hosts running other web apps.

Of course, if arguing for a monoculture also means you get rid of all those old crusty web apps altogether, or rewrite them on a modern platform, then yeah, monoculture probably benefits you (but at non-trivial cost).

Jeremiah Grossman said...

@Chris: I don't disagree that the scenario you describe is certainly possible; I've seen it happen. I'm just not sure how common this scenario is versus the one-shot-one-kill style. It would be nice to have more data.

I suppose my premise is that in network security, what we tend to have more of is monocultures, but risk management advocates polycultures. In website security it's exactly the opposite: what we have more of online is polycultures, but what we really want is monocultures. And, as you say, we can't have it because of the cost.

I'm also holding the theory that the relative security of a website has more to do with WHEN it was deployed than anything else.

Turk said...

Let’s take this conversation up a step by tossing in critical key systems and seeing if that changes anything. Websites aside, which threat agent is more concerning: targeted attacks or random/opportunistic worm attacks? Either threat agent can be devastating, but which would be worse and more likely? I am guessing targeted attacks. Plus, with poly comes the defensive ability to recognize targeted attacks against a mono baseline of “normal” traffic. Websites aside, I would propose poly implementations for key systems and mono for all else (medium to low criticality).

Now, with websites, would that same argument hold? I guess I am not yet buying that critical websites should be built as monocultures, but your data might convince me otherwise.

Thanks Jeremiah for the excellent conversation!

Dan Weber said...

What is more important to your website, confidentiality or availability?

If you have 100 different web server combinations all protecting the same confidential data, then the attacker has 100 ways of breaking in and gaining access to your confidential data.
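To put that intuition in rough numbers, here is a back-of-the-envelope sketch (the per-system breach probability is made up purely for illustration):

```python
# If each of n independently built systems guards the SAME secret,
# confidentiality fails as soon as ANY one of them is breached.
p = 0.02  # assumed, illustrative chance that a single system gets breached
for n in (1, 3, 10, 100):
    print(f"{n:>3} systems -> P(secret exposed) = {1 - (1 - p) ** n:.0%}")

# Availability is the mirror image: the service only goes dark if ALL n fail,
# so diversity helps availability while hurting confidentiality of shared data.
```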

I personally and explicitly mentioned this to one of the authors of the original monoculture paper and he blew me off.

Or, imagine that you have 3 connections to the Internet, all firewalled. Ask yourself if you want those three firewalls to be as similar as possible or as dissimilar as possible?

Turk said...

@Dan: I agree with you, good way to say it so simply.

Inversely, if you have 100 different web servers all protecting DIFFERENT confidential data, all under the same organization, then there is an argument for website polyculture.

Anonymous said...

I think there's another factor we're likely to find when looking at the respective security of mono vs. poly cultures.

In brief, low-skill shops might be prompted to stay all one way, e.g. "We only do Microsoft," whereas if a shop is super-pro they're likely to use whatever technology is best (and therefore have a more diverse flora of technologies).

Calandale said...

There's an aspect of monoculture you're just taking as a given. Three of the exploits are largely dependent on the existence of a monoculture: SQL Injection, CSRF, and XSS (SQL, HTTP, and JavaScript respectively).

The same underlying technologies that make these systems convenient to interoperate also allow attackers to use the same techniques against many different sites. Programming language choice is almost cosmetic compared to these communication choices.

Jeremiah Grossman said...

@Turk: thanks for the comment. I agree that these days the targeted attack is far more likely -- the Verizon DBIR reports have taught us that. My data will only be able to show the average severity posture of mono and poly sites and how they compare. I suppose the choice, as you outlined, comes down to which threat agent you are more worried about.


@DanWeber: I believe the original authors of the paper were more concerned about random/opportunistic worms, which would explain their conclusion. In websites, I think the opposite would be true.

@DanielMiessler: and a subtle mix of outcomes all in between. Still, how many monoculture websites do we really see out in the ether? Comparatively few, in my experience.

@Calandale: you know, that is an extremely good point that I hadn't considered. Something I'll have to think more about. In the meantime, I'll raise ya one: we're finding at WhiteHat that different filter-evasion techniques (for SQLi, XSS, etc.) have different success rates across different languages and platforms. Yet another variable that needs to be considered.

Jeremiah Grossman said...

via Scott Crawford:

1. SQL is arguably the most commonly targeted monoculture in your world.

2. Language of implementation may not be as significant as the implementation itself. I.e., it’s the design patterns, and not the high-level language in which they are coded, that are the target. Of course, languages also use common design patterns...so their prevalence may be a target as well.

Unknown said...

Based on my pen testing experience, I can't say that I believe in a monoculture for web applications.

I'll give an example. If you look at PHP, Java, ASP, or many of the other languages used for development...they all contain packages or libraries that help prevent vulnerabilities like XSS. And if they don't, a server-side function can be written to protect from such attacks.

What I have seen on a regular basis is that many clients can't even get the security right on a single application. That is to say, they might try to prevent XSS on several pages by correctly implementing server-side input validation, but they will forget to implement this check across all fields on these pages, or forget to secure some pages altogether.
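For illustration, here is a minimal sketch of the kind of centralized server-side helper described above, applied uniformly so no field gets missed. (Python and the function names are hypothetical; output encoding is shown as one flavor of such a defense.)

```python
import html

def clean(value) -> str:
    """Escape HTML metacharacters before reflecting user input into a page."""
    return html.escape(str(value), quote=True)

def render_profile(fields: dict) -> str:
    # Encode EVERY field in one place, so no individual page or input is missed.
    safe = {name: clean(value) for name, value in fields.items()}
    return "<ul>" + "".join(f"<li>{k}: {v}</li>" for k, v in safe.items()) + "</ul>"

print(render_profile({"name": "<script>alert(1)</script>", "city": "Boston"}))
# The script tag is rendered inert: &lt;script&gt;alert(1)&lt;/script&gt;
```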

That being said, and languages aside...I feel as though developers need to become consistent in ANY language before we can truly determine if there is a benefit to monoculture over polyculture for web apps.

I have assessed monoculture websites for customers that were all *.php, *.do, or even all *.aspx, and the security vulnerabilities found don't appear to be any better or worse than on websites that were polycultures. Why? Because if the developers don't know how to prevent XSS or SQLi, it doesn't matter which language they write vulnerable code in. If they have XSS in an ASP app, I would bet that the same developer also wrote his CFM app insecurely.

Eoslick said...

http://blog.snakeeyessoftware.com/2010/12/07/monolithic-os-vs-monolithic-web-app.aspx

Mr. Grossman: I always enjoy reading your blog; the above is my response. I think it's a little more complicated, and I didn't want to tie up a ton of comment space.

Jeremiah Grossman said...

@Eoslick: good post, thank you. Your suspicions are correct; we're indeed trying to find statistical proof of what makes a website more secure. Is it programming language? Web server type? Software development style? The particular industry? Regulation? Monoculture or polyculture? What?

We all have our own theories of course, but it's time to start proving some things. :)

Anonymous said...

We should not overlook the huge number of vulnerabilities found in some languages, especially PHP, compared to other languages with a better focus on security.

Aside from that, we should probably call "monoculture" the lack of basic security understanding across many web "developers".