Friday, May 07, 2010

Replacing Happiness with Pride (Rugged)

Developers are blissfully ignorant of how insecure the code they write is. To oversimplify, an application security specialist's job is to remove a developer's bliss, their happiness. Happiness is not something a person will let go of willingly unless an equitable replacement is offered. If this is what it takes, no wonder application security is so challenging. Perhaps that is what the Rugged Software movement is all about: replacing happiness with pride.

You know the drill -- an application security specialist sits down with a group of developers. The developers know that anytime "security" comes around they'll be asked to do more work. They must resist new tasks, or revenue-generating features will be placed on the back burner, product deadlines will slip, and their bosses will be upset. They'll probably have to sit through training programs when they could be doing important work. And for what?! To make sure nothing unexpected happens. The developers feel that this person, this ASS, is supposed to be the one responsible for "security" anyway, not them. They are doing someone else's job.

The ASS starts by going over the results of a recent penetration test, which turned up a number of reportedly high-risk security vulnerabilities. Right there, the first stage of Web security grief begins -- Denial. The developers are thinking there is no way their code is exposed to something called Cross-Site Scripting or SQL Injection. They ask for proof, and the ASS happily complies with ready-made proof-of-concept code. The document.cookie alert strings are confusing and unimpressive, but extracting raw database content is rather disconcerting. Enter the Anger stage.
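
To make that second, scarier demo concrete, here is a minimal sketch of the kind of proof-of-concept an assessor might show: a classic SQL injection pulling back rows the code never intended to expose. The schema, table name, and payload are illustrative, not taken from any actual assessment.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", "s3cret"), ("bob", "hunter2")])

    # Vulnerable: user input concatenated straight into the SQL string.
    payload = "nobody' OR '1'='1"
    query = "SELECT name, secret FROM users WHERE name = '" + payload + "'"
    print(conn.execute(query).fetchall())   # every row -- raw database content

    # Safe: a parameterized query treats the payload as literal data.
    print(conn.execute("SELECT name, secret FROM users WHERE name = ?",
                       (payload,)).fetchall())  # [] -- no rows match

Seeing every row come back from a single quote and an OR clause tends to end the Denial stage quickly.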

But why all the fuss? Why don't developers write secure code? Wait, strike that. "Why should developers write secure code?" There, that's the question the application security industry needs to be answering, and answering convincingly. Secure code is NOT implicit, it's explicit. Meaning, code cannot be considered even remotely secure unless someone specifically asks for it, in "the requirements," and then it must be developed smartly and tested thoroughly. If secure code isn't explicitly asked for, you almost certainly won't get it.

To further emphasize the point, read any software end-user license agreement (EULA) and you'll notice software makers directly state that there is no warranty and no guarantee regarding the performance of their product, which includes security, and at the same time they disclaim all liability should any errors occur. Therefore, unless a new and profound legal precedent is set regarding the enforceability of these EULA provisions, secure code being explicit, rather than implicit, is unlikely to change. I'm not holding my breath.

Why might developers want to develop, or learn to develop, "secure code"?

Perhaps these skills, backed by formal training and certification, may make them more attractive to employers and lead to promotions, bonuses, etc. I submit that while this may happen occasionally, it is the exception rather than the rule. Instead, learning iPhone development, HTML 5, Ruby, Python, Ajax, or Flash is far more financially rewarding. Don't take my word for it; next time you attend a security conference, try to find an actual developer. It'll be like playing Where's Waldo. Clearly, they are not seeking out application security on their own.

OK, that's the carrot -- what about the stick?

If a website is hacked due to shoddy code -- or maybe just a vulnerability spotted by a customer -- how often is the offending developer singled out and punished (written up, given a stern talking-to, docked pay, fired, etc.)? Rarely, in my experience. Now, I'm not necessarily advocating any of these, just citing the facts. On the other hand, what a developer knows for certain is that if a product doesn't ship on time, there will be real consequences, which incentivizes skipping a security stage in the SDL.

Maybe Adrian Lane has it right about what really leads to secure code: "the techniques that promote peer pressure, manifesting itself through fear or pride, are the most effective drivers we have." Through peer pressure, if a developer can feel proud about good work or embarrassed when not, then real change can be effected.

Thursday, May 06, 2010

Ceding the desktop security battle, almost the war

Fresh from the FS-ISAC conference in lovely St. Pete, Florida, one predominant theme was that financial institutions must assume the client -- their customers, rather -- is compromised (infected with malware) and must continue doing business anyway. Given the threat landscape, this is a reasonable operating parameter. The prevalence of man-in-the-browser attacks forces FIs to make very tough business decisions. If a client PC infection is detected, do they continue to allow transactions with the customer while trying to detect and minimize fraudulent ones? Further, are the FIs obligated, legally or ethically, to inform the customer of the infection? Or do they suspend all transactions and incur the support costs of helping the customer fix their PC before allowing money to move?

These are very challenging questions with no single correct answer, but what really concerns me is the premise itself. If we operate with this assumption, that the client is compromised (again, not unreasonable), then the good guys have ceded victory in the desktop security battle. With over 1 billion people on the Internet, that is no small loss. What's worse, there are signs that the loss of the home network could be permanent.

Botnets are starting to target and infect routers and DSL modems. Scary, and a possible trend. Think about what this could mean. Should this problem become pervasive, it won't matter if PCs are disinfected, swapped out, or replaced with iPads; the bad guys are still in control because they own the network below. They'll own DNS, the routers in between, and so on. There are effectively no defensive countermeasures to protect home routers and DSL modems, which are not exactly secure to begin with, or to detect if they've been compromised.
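
For a sense of why owning DNS below the endpoint is so powerful, here is a minimal sketch of one of the few checks a defender can even attempt: comparing what the system resolver (typically the home router) returns against an independent public resolver. It assumes the third-party dnspython package; the hostname and the 8.8.8.8 reference server are illustrative choices, not recommendations.

    import socket
    import dns.resolver  # third-party "dnspython" package

    HOST = "www.example.com"  # hypothetical site to spot-check

    # What the machine's configured resolver -- usually the router -- says.
    local_ips = {socket.gethostbyname(HOST)}

    # What an independent, well-known resolver says.
    ref = dns.resolver.Resolver(configure=False)
    ref.nameservers = ["8.8.8.8"]
    ref_ips = {rr.address for rr in ref.resolve(HOST, "A")}

    if local_ips.isdisjoint(ref_ips):
        print("Mismatch -- possible DNS tampering:", local_ips, ref_ips)
    else:
        print("Answers agree:", local_ips & ref_ips)

Even this is weak evidence either way: sites behind CDNs legitimately answer differently from different resolvers, and malware that owns the router could lie to both queries. Which is rather the point.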

I know this is a little FUD, but not exactly implausible.

Time to start blogging again...

No doubt many have noticed that I've been on a blogging hiatus. Between attending to literal life-and-death personal matters, an overwhelming work schedule, and taking some much-needed time off -- blogging was put on hold for a while. Now, finally, I can see some light at the end of the tunnel, and my responsibilities are becoming more manageable.

Over the last couple of weeks I've been putting a lot of time into our 9th WhiteHat Website Security Statistics Report, "Which Web programming languages are most secure?" The full report is available (reg. required).

I always love doing this report to see how Web security is changing, but this time around it was even more exciting. For years the industry has been conditioned to believe that the selection of a development technology is one of the most important decisions affecting website security. However, the empirical data behind the comparison of development languages / frameworks from our latest report paints a very different picture. The bottom line is that there just isn't a large measurable difference in the security postures from language to language or framework to framework -- specifically Microsoft ASP Classic, Microsoft .NET, Java, Cold Fusion, PHP, and Perl. Sure, in theory one might be significantly more secure than the others, but when deployed on the Web, it's just not the case.
Introduction
"Security-conscious organizations make implementing a software security development lifecycle a priority. As part of the process, they evaluate a large number of development technologies for building websites. The assumption by many is that not all development environments are created equal. So the question often asked is, “What is the most secure programming language or development framework available?”

Clearly, familiarity with a specific product, whether it is designed to be secure-by-default or must be configured properly, and whether various libraries are available, can drastically impact the outcome. Still, conventional wisdom suggests that most popular modern languages / frameworks (commercial & open source) perform relatively similarly when it comes to an overall security posture. At least in theory, none is markedly or noticeably more secure than another. Suggesting PHP, Java, C# and others are any more secure than other frameworks is sure to spark heated debate.

As has been said in the past, “In theory, there is no difference between theory and practice. But, in practice, there is.” Until now, no website security study has provided empirical research measuring how various Web programming languages / frameworks actively perform in the field. To which classes of attack are they most prone, how often and for how long; and, how do they fare against popular alternatives? Is it really true that popular modern languages / frameworks yield similar results in production websites?"
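
The excerpt's point about secure-by-default versus proper configuration is easy to demonstrate. Below is a minimal sketch: the same untrusted input is dangerous or harmless depending entirely on whether output encoding happens, whether by framework default or by developer discipline. The variable names are illustrative.

    import html

    comment = '<script>alert(document.cookie)</script>'  # untrusted input

    # No output encoding: a classic stored Cross-Site Scripting bug.
    unsafe_page = "<p>%s</p>" % comment

    # Explicit output encoding neutralizes the payload.
    safe_page = "<p>%s</p>" % html.escape(comment)

    print(unsafe_page)
    print(safe_page)
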
This type of data is likely to stir up emotion within the industry because many people are extremely attached to their development language / framework. They have strong convictions about its perceived security performance and opinions on why their choice is the best for others too. At the end of the day, this report shows that no one language / framework is vastly more secure than another... none is so special that it stands out. The first step to improving application security is to focus less on the technology and more on creating an executive-level mandate. Unless we bridge the gap between perception and reality, the problem will never be properly addressed.