Exploitation of just ONE software vulnerability is typically all that separates the bad guys from compromising an entire machine. The more complicated the code, the larger the attack surface; the more popular the product, the more likely that outcome becomes. Operating systems, document readers, Web browsers and their plug-ins are on today’s front lines. Visit a single infected Web page, open a malicious PDF or Word document, and bang -- game over. Too close for comfort if you ask me. Firewalls, IDS, anti-malware, and other products aren’t much help. Fortunately, after two decades, I think the answer is finally upon us.
First, let’s look at how Michael Howard, that visionary of software security practicality, characterizes the goal of Microsoft’s SDL: "Reduce the number of vulnerabilities and reduce the severity of the bugs you miss." Therein lies the rub. Perfectly secure code is a fantasy. We all know this, but we also know that the bugs we miss -- the unpatched vulnerabilities and the zero-days -- are the problem we deal with most often. Even welcome innovations such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) only seem to slow the inevitable, making exploitation harder but not stopping it entirely. Unless the battlefield itself is changed, no matter what is tried, getting hacked will always come down to just one application vulnerability. ONE. That’s where sandboxes come in.
A sandbox is an isolated zone designed to run applications in a confined execution area where sensitive functions can be tightly controlled, if not outright prohibited. Any installation, modification, or deletion of files and/or system information is restricted. The Unix crowd will be familiar with chroot jails; this is the same basic concept. From a software security standpoint, sandboxes give us a much smaller code base to get right. Better yet, realizing the security benefits of a sandbox requires no decision-making from the user. The protections are invisible.
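The mediation idea can be sketched in a few lines. This is a hypothetical toy, invented for illustration -- a real sandbox is enforced by the kernel (chroot, seccomp, Windows job objects), not by in-process checks -- but it shows the shape of the policy: resolve what the application is really asking for, then allow it only inside a confined area.

```python
import os

class FileSandbox:
    """Toy policy broker: allows file reads only under one directory.

    Hypothetical illustration only; real sandboxes rely on kernel
    enforcement, not checks inside the untrusted process.
    """

    def __init__(self, allowed_root):
        self.allowed_root = os.path.realpath(allowed_root)

    def open_read(self, path):
        # Resolve symlinks and ".." before checking, so traversal
        # tricks can't sneak a path outside the jail.
        real = os.path.realpath(path)
        inside = (real == self.allowed_root or
                  real.startswith(self.allowed_root + os.sep))
        if not inside:
            raise PermissionError("sandbox policy denies: %s" % path)
        return open(real, "rb")
```

Everything not explicitly permitted is refused -- default deny is what makes the small code base possible.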
Suppose you are tasked with securing a long-established and widely used application with millions of lines of insanely complicated code that’s deployed in a hostile environment. You know, like an operating system, document reader, Web browser or a plug-in. Each of these applications contains a complex supply chain of software, cross-pollinated code, and legacy components created long before security was a business requirement or anyone knew of today’s class of attacks. Explicitly or intuitively you know vulnerabilities exist, and the development team is doing its best to eliminate them, but time and resources are scarce. In the meantime, the product must ship. What do you do? Place the application in a sandbox to protect it when and if it comes under attack.
That’s precisely what Google did with Chrome, and recently again with the Flash plug-in, and what Adobe did with their PDF Reader. The idea is that an attacker would first need to exploit the application itself, bypass whatever anti-exploitation defenses are in place, then escape the sandbox. That’s at least two bugs to exploit rather than just one, and the second bug -- escaping the sandbox -- is obviously much harder than the first. In the case of Chrome, you must pop the WebKit HTML renderer or some other core browser component and then escape the encapsulating sandbox. The same with Adobe PDF Reader: pop the parser, then escape the sandbox. Again, two bugs, not just one. To reiterate, this is not to say breaking out of a sandbox environment is impossible, as elegantly illustrated by Immunity's Cloudburst video demo.
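Architecturally this is a broker pattern: the untrusted renderer or parser runs in a locked-down child process whose only way to touch the outside world is a narrow pipe to a trusted broker, and the broker applies policy to every request. A minimal sketch of that policy check, with the request names invented for illustration:

```python
import multiprocessing as mp

# Hypothetical policy: the only request this broker will service.
ALLOWED = {"ping"}

def handle(request):
    """Policy decision: anything not explicitly allowed is refused."""
    return "pong" if request in ALLOWED else "denied"

def broker(conn):
    """Runs in the trusted process; mediates every request the
    sandboxed child sends over its one pipe until told to quit."""
    while True:
        req = conn.recv()
        if req == "quit":
            break
        conn.send(handle(req))
```

Compromising the sandboxed child only buys the attacker the ability to speak this protocol. Anything off-policy comes back "denied," which is why a second, separate bug -- in the broker, the policy, or the kernel -- is needed to actually escape.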
I can easily see Microsoft and Mozilla following suit with their respective browsers and other desktop software. It would be very nice to see the sandboxing trend continue throughout 2011. Unfortunately though, sandboxing doesn’t do much to defend against SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, Clickjacking, and so on. But if we can get the desktop exploitation attacks off the table, perhaps then we can start focusing attention on the in-the-browser-walls attacks.
SELinux - reinvented again.
Or, longer ago still, with Multics:
"A Hardware Architecture for Implementing Protection Rings" and the Access Isolation Mechanism (AIM).
It's not SELinux reinvented again, it's the UID reinvented again.
Operating systems are supposed to isolate processes running as different UIDs (Unix) or SIDs (NT) from each other. Android makes use of this design capability, running each application under a distinct UID.
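The discretionary access check the kernel is supposed to enforce can be modeled in a few lines. This is a deliberately simplified sketch covering only the owner and "other" read bits; groups, the root override, and capabilities are omitted, and the function name is mine:

```python
def may_read(proc_uid, file_uid, file_mode):
    """Simplified Unix DAC check: may a process with proc_uid read a
    file owned by file_uid with the given permission bits?"""
    if proc_uid == file_uid:
        return bool(file_mode & 0o400)  # owner read bit
    return bool(file_mode & 0o004)      # world ("other") read bit
```

Android's per-app isolation leans on exactly this: each app gets its own UID and keeps private files at mode 0600, so the check fails for every other app's UID.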
Unfortunately, no kernel is strong enough to actually uphold this most basic guarantee, even though we have been depending on it for decades. Why would a new (likely more complex and featureful) sandboxing mechanism work where the old, simple one has proven unworkable?
Although we actually do have some code that is very strong (DJB's comes to mind), I do agree with you that we cannot really expect vulnerability-free code. Since experience shows we also can't expect sandboxing to work, aren't we basically doomed?
@Anonymous: "doomed," nah, everything still works fine. Sandboxing is largely about protecting known-broken code with something smaller and stronger. Guard an enclosed perimeter rather than all the doors within.
Recently I attended a talk by IBM Researcher Richard Gabriel called “Design Beyond Human Abilities”. Part of his talk discussed an idea from Martin Rinard (MIT) that since software cannot be made perfect, we should accept that software will be defective. And once we accept defective software, we can use it to our advantage, e.g. if we want faster runtime, one method would be to ignore half of the iterations. He did go on to clarify that there’s a difference between “forgiving” and “unforgiving” parts of the code, where you identify that which must always be correct vs. that which can be approximate/wrong (such as the color of a single pixel).
Here's his talk in PDF form (see page 17 for the Rinard reference):
Microsoft has had a sandbox in IE since IE7 on Windows Vista (2006 timeframe, see link below). By today's standards this is a very weak sandbox, but it is a sandbox all the same. Keep in mind that PMIE was retrofitted onto an existing browser whereas, in Chrome's case, Google was able to design their browser with a sandbox in mind. It is fair to say that Adobe Reader should have had the same problems as IE, but I think the compatibility concerns from sandboxing Reader are significantly less than IE (hence the weakening of the sandbox in IE's case). Adobe also had significantly lower investment costs for sandboxing Reader because they were able to pick up the sandbox that Google had already developed.
In my opinion software dev teams should first invest in FULLY adopting exploit mitigation technologies (DEP/ASLR) before focusing resources on other areas. What we see today is that many software companies (even Microsoft in some cases) have not fully enabled mitigations -- these are the weak spots that attackers generally go after today (such as non-ASLR DLLs).
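"Fully enabled" is something you can check for. On Linux, for instance, a binary only participates in ASLR if it was built position-independent, which shows up as object type ET_DYN (rather than ET_EXEC) in the ELF header; the Windows analogue for the non-ASLR DLLs mentioned above is the DYNAMICBASE flag in a PE's DllCharacteristics. A sketch of the ELF-side check -- the function name is mine:

```python
import struct

ET_EXEC, ET_DYN = 2, 3  # ELF object types: fixed-address vs position-independent

def elf_supports_aslr(header_bytes):
    """Return True if an ELF image is position-independent (ET_DYN),
    i.e. eligible for ASLR. Expects at least the first 18 bytes."""
    if header_bytes[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    # EI_DATA byte selects byte order: 1 = little-endian, 2 = big-endian.
    endian = "<" if header_bytes[5] == 1 else ">"
    (e_type,) = struct.unpack_from(endian + "H", header_bytes, 16)
    return e_type == ET_DYN
```

Auditing every shipped module this way is cheap, which makes the lingering non-randomized DLLs all the more inexcusable.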
After fully adopting mitigations I agree that product teams should invest in sandboxing. The reason I prioritize this second is because, as I'm sure you will agree, allowing an attacker to execute code is a much weaker defensive position than preventing reliable code execution in the first place. It is fair to say that exploit mitigations won't stop everything, but so far when properly enabled they appear to have had a significant impact on exploit development cost and feasibility.
You need at least 2 bugs to exploit, but not necessarily in the same application.
Some sandbox designs allow you to hop into another sandbox. For example, code running in the IE7/8/9 sandbox can open a handle to the Adobe Reader X sandbox and inject code in that sandbox. So you could escape by exploiting one bug in IE and one bug in Reader. Makes the attack surface larger.
Jim - thanks for your comment! :)