If you ask the average expert what organizations should do about Web security, you'll almost universally hear what has become a religious commandment: "Thou shalt add security to the application from the beginning. Blessed are those who develop secure code." Amen. I am a loyal follower of the security-in-the-SDLC church, and I humbly try to practice in my own code what I preach to others. The problem is that code security by itself will NOT deliver us unto the pearly gates of Web security so many people wish for. There are other issues at play.
As an information security professional, my responsibility is helping organizations mitigate the risk of their websites being compromised. If the process requires rewriting some insecure code, great, let's do it. That responsibility also means being open to solutions such as Web application firewalls, configuration hardening, patching, system decommissioning, obscurity, a lucky rabbit's foot, etc. Anything and everything should be used to our advantage, because the odds are stacked in the bad guys' favor. Lest we forget, the bad guys need only exploit a single weakness.
At WhiteHat we assist the effort by rapidly identifying Web application vulnerabilities and helping to get them fixed before attackers exploit them. We also invest significant R&D in analyzing website vulnerability data, matching it against publicized incidents, measuring the benefits of various security strategies, and ascertaining which best practices provide the most bang for the buck in a given situation. Software security proves to be one of those things that's difficult to measure; however, there are a few things we do know for sure about it.
Important as it is, the SDLC process can't always take into consideration unknown attack techniques, current techniques we don't fully appreciate and therefore ignore, or the massive amount of old insecure code already in circulation that we depend upon. Think 165 million websites, with mountains of new code being piled on top all the time. How do we defend our code against attacks that don't yet exist? And once these techniques are disclosed, it's obvious we can't instantaneously update all the world's Web-based code (far, far from it). As an industry we fail to recognize these SDLC limitations, as a result don't prepare for them, and inevitably pay a heavy price. A sin of omission.
Only a short time ago we didn't know that integer and heap overflows were exploitable and something to worry about. Code inspected and declared clean was suddenly vulnerable, even though not a single line had changed. The same happened in webappsec with Cross-Site Scripting (XSS), ignored for years until the bad guys loudly demonstrated its potential. The same is happening with Cross-Site Request Forgery (CSRF), HTTP Response Splitting, and hundreds of other attack variants. Now the vultures are circling null pointer attacks. Secure code is only secure, if there is such a thing, for a period of time impossible to predict. We can't future-proof our code, and I'll guarantee new attack techniques are on the way, with the existing ones often becoming ever more powerful.
On the horizon are clever and evilly lucrative uses for timing attacks, passive intelligence gathering, application DoS, CSRF, and several other rarely explored examples I plan to present at Black Hat USA (if accepted). And that's not to mention vulnerabilities that have nothing at all to do with the code: Crossdomain.xml, Predictable Resource Location, Abuse of Functionality, and a dozen other issues. Lately I've also been noticing in our data a link between a website's security posture and when it was actually launched/built - as much as or more than the technology in use. Newer websites, developed after an attack class became mainstream, appear to stand a higher chance of being immune. If true, this would make a lot of sense to me, more than developers suddenly having learned the virtue of input validation.
Secure coding best practices, even if implemented perfectly, mostly account only for the attack techniques we're currently aware of; once something new comes up, we have a big problem because of the scale of the Web. That's why XSS, SQL Injection, and CSRF are biting us in the ass so hard. For years we didn't fully understand what they could do, or effectively get the message out to where anyone would care. Now significant portions of the Web are vulnerable, we just don't know where exactly, and even if we did, are we really going to go back line-by-line? We're in a spot where hundreds of thousands of pages are being infected with JavaScript malware. I don't expect this to end anytime soon; if anything it will get worse, because the bad guys have a lot of green field to work with.
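To make the "line-by-line" problem concrete, here is a minimal sketch (hypothetical code, not drawn from any specific incident or WhiteHat finding) of the kind of one-line flaw the mass SQL injection worms hunt for, next to the parameterized fix. The table and column names are made up for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER, description TEXT)")
    conn.execute("INSERT INTO products VALUES (1, 'widget')")

    def get_product_vulnerable(product_id):
        # Attacker input is concatenated straight into the query, so a crafted
        # value rewrites the SQL; on databases that permit stacked statements,
        # this is how worms UPDATE text columns to carry a <script> tag.
        return conn.execute(
            "SELECT description FROM products WHERE id = " + product_id
        ).fetchall()

    def get_product_safer(product_id):
        # Bound parameter: the driver treats the value strictly as data.
        return conn.execute(
            "SELECT description FROM products WHERE id = ?", (product_id,)
        ).fetchall()

    # UNION-based injection pulls back data the page never meant to expose.
    print(get_product_vulnerable("0 UNION SELECT sqlite_version()"))
    print(get_product_safer("1"))

Multiply that one bad line by millions of legacy pages written before anyone cared, and "just go fix the code" stops being a schedule and starts being a wish.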
My point is that we need to look at Web security in a new way and accept that code (or developers) will never be perfect, or even close to it. To compensate we need solutions, including Web application firewalls (virtual patches), wrapped around our code to protect it. Some might call this approach a band-aid or a short-term fix. Whatever; I call it realistic. Just ask those who are actually responsible for securing a website and they'll tell you the same thing. We need nimble solutions/products/strategies that help us identify emerging threats, react to them faster, and adapt better to a constantly changing landscape. Now, when a vulnerability or a new attack class shows up, IT Security should have a fourth option to offer the business, one that buys the developers time to fix the code (a rough sketch of option 4 follows the list):
1. Take the website off-line
2. Revert to older code (known to be secure)
3. Leave the known vulnerable code online
4. Vulnerability Mitigation (“virtual patch”)
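For the sake of concreteness, here is a rough sketch, in Python/WSGI terms, of what option 4 amounts to. The parameter name ("q") and the signature are hypothetical stand-ins; a real WAF or virtual-patch rule would be far more precise and would normally be derived from the vendor's rule set or the scanner's findings.

    import re
    from urllib.parse import parse_qs

    XSS_SIGNATURE = re.compile(r"<\s*script|javascript\s*:", re.IGNORECASE)

    def virtual_patch(app, param="q"):
        # Wrap the unchanged (still vulnerable) WSGI app. Only requests whose
        # "q" parameter matches the exploit signature are refused; everything
        # else flows through exactly as before.
        def patched(environ, start_response):
            values = parse_qs(environ.get("QUERY_STRING", "")).get(param, [])
            if any(XSS_SIGNATURE.search(v) for v in values):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch"]
            return app(environ, start_response)
        return patched

The point isn't the regex. The point is that a rule like this can be deployed in hours, in front of code nobody has had time to touch, and then removed once the real fix ships - which is exactly the breathing room the fourth option is supposed to buy.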