Developers are blissfully ignorant of how insecure the code they write is. To oversimplify, an application security specialist's job is to take away a developer's bliss -- their happiness. Happiness is not something a person willingly gives up unless an equitable replacement is offered. If that is what it takes, no wonder application security is so challenging. Perhaps that is what the Rugged Software movement is all about: replacing happiness with pride.
You know the drill -- an application security specialist sits down with a group of developers. The developers know that anytime "security" comes around they'll be asked to do more work. They must resist these new tasks or revenue-generating features will be placed on the back burner, product deadlines will slip, and their bosses will be upset. They'll probably have to sit through training programs when they could be doing important work. And for what?! To make sure nothing unexpected happens. The developers feel that this person, this ASS, is supposed to be the one responsible for "security" anyway, not them. They are being asked to do someone else's job.
The ASS starts by going over the results of a recent penetration test, which turned up a number of reportedly high-risk security vulnerabilities. Right there, the first stage of Web security grief begins -- Denial. The developers are thinking there is no way their code is exposed to something called Cross-Site Scripting or SQL Injection. They ask for proof, and the ASS happily complies with ready-made proof-of-concept code. The document.cookie alert strings are confusing and unimpressive, but extracting raw database content is rather disconcerting. Next comes the Anger stage.
But why all the fuss? Why don't developers write secure code? Wait, strike that. "Why should developers write secure code?" There, that's the question the application security industry needs to be answering, and answering convincingly. Secure code is NOT implicit, it's explicit. Meaning, code cannot be considered even remotely secure unless someone specifically asks for it, in "the requirements," and then it must be developed smartly and tested thoroughly. If secure code isn't explicitly asked for, you almost certainly won't get it.
To further emphasize the point, if you read any software end-user license agreement (EULA) you'll notice that software makers directly state there is no warranty and no guarantee regarding the performance of their product, which includes security, and at the same time they waive all liability should any errors occur. Therefore, unless a new and profound legal precedent is set regarding the enforceability of these EULA provisions, secure code being explicit, rather than implicit, is unlikely to change. I'm not holding my breath.
What are the reasons why developers might want to develop, or learn to develop, "secure code?"
Perhaps these skills, with formal training and certification, may make them more attractive to employers and lead to promotions, bonuses, etc. I submit that while this may happen occasionally, it is the exception rather than the rule. Instead, learning iPhone development, HTML 5, Ruby, Python, Ajax, or Flash is far more financially rewarding. Don't take my word for it: next time you attend a security conference, try to find an actual developer. It'll be like playing Where's Waldo. Clearly, they are not seeking out application security on their own.
OK, that’s the carrot, what about the stick?
If a website is hacked due to shoddy code -- or maybe just a vulnerability is spotted by a customer -- how often is the offending developer singled out and punished (written up, given a stern talking-to, pay cut, job loss, etc.)? Rarely, in my experience. Now, I'm not necessarily advocating any of these, just citing the facts. On the other hand, what a developer knows for certain is that if a product doesn't ship on time there will be real consequences, which incentivizes skipping a security stage in the SDL.
Maybe Adrian Lane has it right about what really leads to secure code: "the techniques that promote peer pressure, manifesting itself through fear or pride, are the most effective drivers we have." Through peer pressure, if a developer can feel proud of good work or embarrassed by bad work, then real change can be effected.
15 comments:
A developer should be found and punished for the decisions of the organization. Interesting. Sounds like management speak.
When a construction job is behind schedule, do you think the architect or super finds a nail pounder to blame?
Wouldn't it be more effective to concentrate on how organizations can support and foster secure software development practices through every level of the organization?
Congrats on the Slashdot front-page quote, http://it.slashdot.org/story/10/05/07/1459223/The-Desktop-Security-Battle-May-Be-Lost
@Anonymous "Wouldn't it be more effective to concentrate on how organizations can support and foster secure software development practices through every level of the organization?"
That was exactly the point of the post: how to make application security in everyone's best interest.
I've come to the conclusion that web application security has to be done without the developers.
Possibility one: you force them to use just a handful of (secure) methods for common tasks like reading user input and printing output. So no longer "$_GET['foo']" and "echo $foo", but "SECGET('foo')" and "SECECHO('foo')" (a rough sketch follows below).
Possibility two: you put a WAF in front of the application. So you admit that the software will certainly have a security hole and try to fix it afterwards.
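A minimal sketch of what possibility one might look like in PHP. The SECGET/SECECHO names come from the comment above; the bodies are one assumed interpretation -- parameters treated as plain text, output destined for an HTML context -- not a reference implementation.

<?php
// Sketch: a handful of secure methods replacing raw $_GET and echo.
// Assumption: parameters are plain text and output lands in HTML.

// Read a request parameter as a plain string (never an array),
// stripping control characters other than tab and newline.
function SECGET(string $name, string $default = ''): string
{
    $value = $_GET[$name] ?? $default;
    if (!is_string($value)) {            // e.g. ?foo[]=1 arrives as an array
        return $default;
    }
    return preg_replace('/[^\P{C}\t\n]/u', '', $value) ?? $default;
}

// Echo a value HTML-escaped, so untrusted data cannot break out of markup.
function SECECHO(string $value): void
{
    echo htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

// Usage: instead of  echo $_GET['foo'];
SECECHO(SECGET('foo'));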
Look at all the security blogs and literature -- for ten years there has been no sign of web application security improving.
Jeremiah, as always, it is interesting to hear what you have to say about the problems of software security. As you know, I have a different perspective, coming from the development side of the issue rather than the application security side. For me, security, reliability, and robustness are all reflections of quality. All of this is important to our customer -- it's our responsibility as professional developers to take our customer's requirements seriously and do what needs to be done.
In the scenario you describe, where the unfortunate and embattled application security specialist confronts resistance, denial and anger from the developers, it’s clear that security is not considered by the team (and likely their customer and their management) as part of the quality problem. A security review shouldn’t be confrontational – it certainly isn’t in my shop. Just like a QA engineer reporting a problem isn’t confrontational: in fact, the developers are more likely to be embarrassed than defensive in these cases.
What are the reasons why developers might want to develop, or learn to develop, "secure code?"
Because it is part of their job. Because it was made clear that it was part of their job. The problem isn't so much with developers, except insofar as they don't understand some of the technical issues -- but that is what static analysis tools, defensive programming training, fuzzing, reviews, and other technical practices in a secure SDLC are for.
The problem lies more with managers: development managers, project managers, product managers, product owners, sponsors… whoever sets the goals and drives the priorities, whoever it is that is paying for the work and decides how their money is to be spent.
Get these people to attend security conferences, make the conferences relevant to them.
BTW, don’t expect the “Rugged Software movement” to be much help in this. I don’t see the point in it, nor do most of my colleagues – one of them, a leading thinker in the agile development community, thought that the Rugged announcement was a joke:
http://www.improvingwetware.com/2010/02/28/the-onion-has-written-a-software-manifesto
Rugged doesn’t add any clear value to the problems at hand, it doesn’t have the support of the development community, and it certainly isn’t moving anywhere. I ranted about this a while back when I first read the announcement, and, to my disappointment but certainly not my surprise, nothing has happened with Rugged since.
http://swreflections.blogspot.com/2010/02/and-now-we-need-to-be-rugged.html
In fact, yours is the first post in months that has referenced Rugged, and I think it’s safe to assume that it won’t make much of an impact on how software gets built.
@Jim, stellar comment, thank you. Clearly your development team has something the vast majority of other groups do not: a "security mandate." In that sense, your approach is dead on. I have only seen such a culture a handful of times personally.
I'd wager that if you asked most developers at most organizations whether "security" is part of their job, they'd probably say no.
I'm curious, was there an event where your organization placed security into the quality zone as a necessary requirement? Where did the motivation come from? Because such a culture is very rare.
@Jim,
I agree 100% with 90% of what you're saying. I think it absolutely comes down to managers to set the security agenda for an organization, especially development. We've seen amazing recoveries take place in companies where the CEO publicly says "Security is now our top priority."
But as for the other 10%, I think we do see security influence coming from other places as well. Did we need a management awakening to get devs to stop using sprintf? No, it came from viral influence and from vendors like Microsoft disabling support.
As for customers demanding security, I would be interested to hear more about your experiences. I assume customer expectations fall in line with uptime and robustness. It appears that, in the recent past, companies such as McAfee have not lost customers because of an incident. So, I like the idea of incorporating security into the pitch for quality.
While I strongly agree with your description of the problem's origins, I also strongly disagree with the proposal to finally make developers "responsible".
Developers just do their job, and by this I mean they do what they are asked to. In no way should they be held responsible for insecure code unless secure code was explicitly asked of them.
Let's jump to the next level: project managers and CTOs. What are their incentives? At the ISVs I have recently worked with on appsec engagements, none of them had direct incentives related to software security and reliability.
Incentives for these people are (too) often schedule-oriented instead of quality-oriented. They want to deliver ASAP. This is what I call a negative incentive: the people managing developers actually have no incentive to expect developers to code securely.
The next incentive comes from upper management. In a typical ISV, you have two strong departments: marketing and customer support/services.
If you consider the known fact that secure coding increases overall software quality, thus reducing overall defects, then you quickly arrive at another negative incentive: the vast majority of managers in an ISV have no direct interest in producing secure and reliable code.
What do you think would happen to these departments should the company start delivering secure and reliable software?
Finally, consider the revenue distribution at an established ISV. How much money actually comes from selling the product and how much comes from supporting it?
From what I see, the appsec industry is not targeting the right people when trying to bring security into software. The real stakeholders are ISVs' customers with strong security needs: banks, insurance companies, public organizations, and so on.
They are still unable to correctly define their security requirements and expectations when buying software from an ISV.
The appsec industry needs to focus on ISVs' customers, helping them to:
- define security requirements
- identify the quality/security metrics that must be enforced and their minimum acceptable levels
- define the penalties that apply when one of these metrics cannot be verified on the customer side
- define the penalties that apply when a security issue is raised
- define the penalties that apply when the customer is exposed to a security incident facilitated by insecure software
If you manage to get software customers to transition toward such "smart" contracts and SLAs, then ISVs will have a direct financial incentive to build security into their products.
They will then ask how.
And that's where appsec professionals enter the game, in a win-win situation.
Holding developers responsible for insecure code while letting their managers come down on them for project delays is, from my humble point of view, a huge error.
It seems that a very good first step is to set forth some reasonable expectations of security within the actual requirements documentation. It's not reasonable to expect a developer to write code to a purpose which is not set forth in the documentation -- and so long as that is all we're doing, we shall be faced with coders (at best) following a set of standards.

The person who best knows how to break their code is the one who wrote it, which is why unit tests are so effective. If the specifications state that 'x shall be the case', a good software engineer will try to think of why it might NOT be so, in order to make certain that their code does not fail at a critical time. Specifications which are not formally defined (and this goes for the class and even method level, not just the application as a whole) lead to sloppy code conforming to some vague idea of what it should do. Such practices lead to the developer slapping whatever seems necessary onto a given piece of code. In the end, good security practices start with good software engineering practices.
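As a small, hypothetical illustration of that negative-case mindset, here is a sketch in PHP; the function, spec, and test names are invented for this example, and it assumes PHPUnit is available.

<?php
// Spec (invented for this sketch): "a username shall be 3 to 20
// characters, letters and digits only."
use PHPUnit\Framework\TestCase;

function isValidUsername(string $name): bool
{
    return preg_match('/^[A-Za-z0-9]{3,20}$/', $name) === 1;
}

class UsernameTest extends TestCase
{
    // Confirming the spec holds is the easy half.
    public function testWellFormedNameIsAccepted(): void
    {
        $this->assertTrue(isValidUsername('alice01'));
    }

    // The valuable tests deliberately ask why it might NOT hold.
    public function testEmptyOverlongAndHostileNamesAreRejected(): void
    {
        $this->assertFalse(isValidUsername(''));
        $this->assertFalse(isValidUsername(str_repeat('a', 21)));
        $this->assertFalse(isValidUsername("bob'; DROP TABLE users;--"));
    }
}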
Jeremiah,
There was no single event that made security important for us. We build software for the financial industry, and our investors and customers expect that what we build will be reliable and secure. It's been a journey, and there is always pressure to deliver, forcing design and implementation trade-offs. This is why I am especially interested in those areas where security and reliability and robustness and general design/code quality overlap: where building a system in a careful, reliable and robust way also helps to ensure that it is secure. The bigger this overlap is, the more value we get out of doing things right.
I agree with Marisa that grass-roots participation from developers is important, and developers shouldn't be ignored. They need to understand the problems better and how they can help. But starbuck is right in that the customers and sponsors, the people who spend the money, need to demand that software is written better: not only so that it is secure, but so that it runs correctly in the first place. We're back to a basic quality problem then, and I believe that quality can be managed to a high-enough level. I hope that Erich is not right and that security does not have to be done without (and in spite of) developers.
@Erich
Possibility two isn't a "real" solution. If the WAF fails there are only two options:
1) It fails closed (as any secure/security product should) and you end up with an outage and lost sales/revenue/whatever.
2) It fails open and you end up being breached, because your base application is insecure, because your devs didn't write secure code.
I've come to the conclusion, that web application security has to be done without the developers.
What's odd is that it's perfectly normal to have a QA team that verifies every new feature, but it's somehow strange to have a Security Assurance team that verifies every bit of new code. We have a dedicated developer whose job is to review everyone else's code for security weaknesses, reporting bugs as they go along, and to develop additional security features to integrate with the code base. We treat security issues found this way exactly like bugs. Some may prevent shipping, and the person who introduced the problem has to fix it. Forcing all your developers to take on the entire security topic from A to Z is not realistic; it's too much knowledge. A dedicated expert who boils everything down to a few simple tools and bug reports for the developers -- that's realistic.
@kingthorin
No, a WAF is definitely not a 'real' solution. I could say it's better than leaving it to the mercy of the developers whether the application is (kind of) secure.
But IMHO, in most cases security is all about making sure that user input is checked by type and length -- I'd claim it is as simple as that, at least for web applications. There are more attack vectors, that's true. But with this simple type & length check in front of 'any' web application you get rid of 80% of classical attacks. Check the NVD for Joomla! vulnerabilities, for example: lots of SQL injections and XSS through parameters which should be just integers.
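A rough sketch of that kind of type & length gate in PHP; the helper names here are invented for illustration, not taken from any framework, and output encoding is still needed on top of it.

<?php
// Sketch: reject parameters that are not the expected type and length
// before they reach application code. Helper names are hypothetical.

// Return the parameter as an int, or null if it is not a plain integer.
function param_int(string $name): ?int
{
    $raw = $_GET[$name] ?? null;
    if (!is_string($raw) || !preg_match('/^-?\d{1,10}$/', $raw)) {
        return null;
    }
    return (int) $raw;
}

// Return the parameter as a string no longer than $max, or null otherwise.
function param_str(string $name, int $max = 64): ?string
{
    $raw = $_GET[$name] ?? null;
    if (!is_string($raw) || strlen($raw) > $max) {
        return null;
    }
    return $raw;
}

// Usage: an id that "should be just an integer" never reaches the SQL
// layer as anything else.
$articleId = param_int('id');
if ($articleId === null) {
    http_response_code(400);
    exit('invalid parameter');
}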