Update: 07.23.2009: As Andrew explains, he got caught up in the moment and really didn't mean what he said (read below). Apologies accepted and I hope to continue working with him in the community. Thanks.
Update: 07.22.2009 - Two great follow-up comments by Security Agent and Jim Bird really dig into the meat of the issue I was trying to get at. I'd say they offer better insights, more eloquently stated, than my original posts!
As any reader here knows, I don’t shy away from discussing hot button issues, questioning conventional wisdom, or suggesting controversial ideas. I’ve found doing so is highly rewarding as it affords others an opportunity to share differing points of view, which furthers our collective understanding. 99% of the time criticisms are positive. However, Andrew van der Stock made a comment near the beginning of the OWASP Podcast #32 on my “Mythbusting, Secure code is less expensive to develop” post, which is completely false and out of line. I’ve long considered Andrew a well-respected Web security expert and colleague, so these words caught me by surprise (0min / 50sec).
“Jeremiah has a particular service model that encourages folks to model bad programs and he needs more bad programs to be modeled.”
Andrew: This shows a complete lack of understanding of what I’m personally all about, the value WhiteHat Security offers, and the current security posture of the Web. First, I would NEVER do something like that! Secondly, our business model directly encourages us to help customers improve themselves over the long-term. And lastly, do you really think the Web is so secure that I would need to encourage more vulnerable code to ensure job security!? Please.
Fortunately, the rest of the podcast provides for some very interesting conversation between Jim, Andrew, Boaz, Jeff and Arshan.
My original point was that the ROI of an investment in software security cannot live in a vacuum. As one example, organizations justify adding security to an SDLC in an effort to help prevent vulnerabilities, which reduces the risk of security breaches. Again, not getting hacked is the motivation. Today, through metrics, we are getting a stronger grasp of the types of issues websites are really vulnerable to and getting hacked by. As such we can start focusing our efforts and reconsidering conventional wisdom. So my question: "Is secure code less expensive to develop?" Once again, TO DEVELOP, as opposed to finding & fixing vulnerabilities in late-stage code or after production release. I knew this was going to be a controversial subject; even questioning the belief is heresy to some, but I felt it needed to be asked just the same.
Given all the numbers I’ve studied to date I think the jury is still out. Perhaps the answer is in how you define “secure code.” At the end of the day though, and this is very important, when you take the costs and ramifications related to incident handling into account, that is what really justifies a software security investment -- not so much cheaper code.
Here is what I don't get though. Why do some have such an emotional attachment to the idea that secure code absolutely MUST be cheaper to develop? Sure it could be, but are organizations really that unwilling to pay extra for quality, secure code if that is what it takes? We pay a premium for quality in other products (Rolex, BMW, MacBook Pro, LOL). Why not software too!? Perhaps this belief exists because the aforementioned risk of compromise is simply too hard to quantify and build a business case around. If so, we should try to tackle that problem as well. Anyway, as stated, I remain open and interested in the thoughts of others.
Coming from a development background, a generalization I have noticed in the InfoSec community (NOTE: generalizations are usually bad, including this one, but let me apply it here anyway) is the tendency to see security as the most important aspect. Unfortunately this is a somewhat myopic view of business: most commercial web applications are built for a business purpose. The truth is security is just one minor factor in the overall business decision. Take the problem of phishing. I spoke with a security officer of a major bank 5 years ago; at the time they saw phishing as a considerable problem, but they were able to mitigate their costs in various ways (she wouldn't admit it or outright say so, but some of the costs were pushed onto the consumer) _WITHOUT_ completely resolving phishing. Let's be honest, there are ways to resolve phishing (read: rip out HTTP as it currently stands altogether) but the costs are deemed too high. The insecurity of an application is weighed against the cost of mitigating or resolving the issue; sometimes the 80/20 rule applies (resolve 80% of it by taking the lowest-hanging fruit with minimal effort).
> "At the end of the day though, and this is very important, when you take the costs and ramifications related to incident handling into account, that is what really justifies a software security investment -- not so much cheaper code."
I agree and this is where I would like to point out 3 factors that seem to be overlooked:
-the startup costs of building web applications are significant
-the failure rate of web applications is fairly high - although I admit I don't have numbers
-the early stages of a deployed web app often see slow uptake (there are contrary examples, such as marketing splashes made by large companies). This means there is less incentive to attack users of a new web app than the same web app later in time, once it has gained market share or a user base, and therefore there is time after the app goes live to assess the costs of adding security.
In other words, I suspect that one could argue that cases exist where you build the web app WITH LITTLE/NO SECURITY at the outset, then if/when the web app looks like it's going to succeed and/or users come onboard THEN you add security (even if adding that security now costs double or triple what designing it in from the outset would have). In poker you have to know your pot odds (can you tell Vegas is coming?); in business you have ROI and the cost of implementing the security measures.
Just offering a different perspective.
Jeremiah, perhaps your colleague Andrew misunderstood the point of your Mythbusting post. My reading of your post was that a secure SDLC doesn't come for free of course, and building secure software is going to cost more than building insecure software, unless you factor in the total cost of managing security incidents. I agree, building secure software isn't cheap. If you don't need to do it, you could save some money. It is a simple economic argument: how valuable is what I need to protect, how much should I be prepared to spend to protect it? If I do need to protect something, it is cheaper and smarter to invest up front, rather than running the risk of spending a lot more later. Your end-to-end cost argument was clear to me.
But I would argue that the cost of secure development is actually lower than many people would think. Coming from a software development perspective, a significant percentage of what the software security community considers to be security problems are basic reliability and quality problems. In Microsoft's SDL book, for example, the authors analyze security bugs assigned a CVE between 2002 and 2004, and state that 25% of them are basic reliability problems. I would argue that the number is higher. You can break the CWE list of common software weaknesses roughly in half:
1. problems in secure design, poor crypto, use of insecure libraries and APIs, insecure deployment and configuration, ... - real software security problems.
2. basic reliability / quality problems caused by crappy coding: sloppy or missing input validation (according to another Microsoft study, the direct or indirect cause of close to 50% of all security vulnerabilities alone), missing error handling, race conditions, etc.
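To make the "reliability half" concrete: a large share of the input-validation bugs in category 2 come down to accepting whatever the client sends instead of checking it against an allowlist of expected values. Here is a minimal, hypothetical Python sketch of that practice (the function and pattern names are illustrative, not taken from any of the tools or studies discussed here):

```python
import re

# Hypothetical allowlist for a username field: accept only values we
# positively expect, rather than trying to strip out "bad" characters.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,20}")

def validate_username(value: str) -> str:
    """Return the value unchanged if it matches the allowlist,
    otherwise raise ValueError before the value reaches any
    query, template, or log statement."""
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

Treating this as a basic quality check at every trust boundary, rather than as a specialized security activity, is exactly the point of the comment above: good software would do this anyway.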
Why is this significant? Because writing crappy software is more expensive than writing good software: any good software developer or project manager knows that it is more expensive to fix the problem later in the cycle, and most expensive once a bug has hit the field. So the cost of writing secure software isn't as high if you write good software in the first place.
The costs of writing secure software can and should be broken out into quality costs (writing good software in the first place, which should fix somewhere between 25% and 50% of security problems) and pure security costs: session and credential management, password management, authorization, secure APIs and secure libraries, crypto, attack surface reduction, secure configuration and deployment, etc. Why make writing software that works in the first place a security problem, and a security cost? The problem should be, and needs to be, attacked on both fronts: reliability and security. They go together like chocolate and peanut butter, or peanut butter and chocolate.
@SA and Jim, great follow-up commentary. Much appreciate that my message was getting through. And Jim, personally I prefer apple pie and ice cream. Just sayin'. :)
So, coming from an engineering background (then switching over to computer science post-grad), I think there are some missing assumptions going on here. There is a distinct separation between "secure code" and "knowing code is secure". By happenstance, a monkey could write secure code, but would they know it’s relatively secure? Secure code can be inexpensive, but it can also be ridiculously expensive.
Just like in manufacturing, you can make a gizmo that is cheap and low quality, and you can make one that is expensive and higher quality, and anywhere in between. There is a chance that a cheap, low-quality gizmo may in fact perform as well as, or better than, the expensive higher-quality one. However, you can say that, in general, the higher-quality one is going to perform better than the cheap one, due to the variations that are allowed in the two quality control processes. The same is true for software development (side note: I will not call it software engineering). With the right tools/processes you can say with some level of surety that the security of the code is within a certain variance; without those measurements you simply have no idea.
Please, a thousand apologies - I didn't mean for you to take it so strongly.
I partook in an ad hominem attack against WhiteHat instead of dealing with the post's contents.
The reality is all of us have more than enough work and jobs to go on without having to ambulance chase.
The issue I had (and probably failed to articulate) is that I believe secure coding done well is actually cheaper in the long run:
* Less cost due to not having to pay out on opportunity costs
* Less cost due to not developing the same non-functional requirements again and again (this one is provable)
* Less cost due to less fraud
* Banks (for one) are required to put money in the drawer to cover likely risks. If you have a risky application, and spend money to fix it, you can take money out of the drawer and use it for other things.
For one customer I've dealt with, fraud dropped vastly, and as they stopped (re-)developing authentication, authorization, logging, IDS, input validation, and output encoding (XSS protection was built into the tag library), and eliminated SQL injection (they simply had none, due to their data model), they saved heaps of cash. I bet it cost a pretty penny to do, particularly the move to a safer data model that eliminated SQL injection, but they saved and continue to save heaps every year due to investing (i.e. spending) in secure coding.
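The two practices described above, a data-access layer that rules out SQL injection and output encoding baked into the rendering layer, can be sketched in a few lines. This is an illustrative Python sketch under assumed table and function names, not the customer's actual Java tag library:

```python
import sqlite3
from html import escape

# In-memory database standing in for the application's data store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user(name: str) -> None:
    # Parameter binding: attacker input travels as data, never as SQL
    # text, so string-concatenation injection is structurally impossible.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

def render_name(name: str) -> str:
    # Encode on output, so stored markup cannot execute as HTML (XSS),
    # analogous to encoding built into a tag library.
    return "<span>" + escape(name) + "</span>"

# A hostile value is stored literally and rendered inert.
add_user("bob'); DROP TABLE users;--")
row = conn.execute("SELECT name FROM users").fetchone()
```

Because both protections live in shared infrastructure rather than in every page, developers stop (re-)implementing them, which is exactly the recurring saving described above.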
It's not free, but it's a hell of a lot cheaper than doing it the wrong way.
Again, a thousand apologies for targeting you instead of your arguments.
Even with a secure development lifecycle, you still have no guarantee that the end result (the production web site) is bug free, no?
I would like to see some statistics (Jeremiah?) where a company has a secure development lifecycle (whatever religion) or a maturity model (whatever religion), internal security testing, static code analysis, and external penetration testing, and still has vulnerabilities in the production web site.
@Andrew, thank you, I appreciate that. I'm sure no harm was intended. Apology accepted. Now, back to business.
Perhaps oddly, we are both very close in our beliefs, at least in one particular aspect. The value of secure code can be recognized in stopping or preventing fraud loss, which you stated. You've seen it, I've seen it. Is the return universal? I dunno, but let's move on.
Where we differ is on whether adding "security" increases development costs. Some activities arguably don't, because they focus on quality and thereby reduce support costs. Again, as you mentioned. But others among the myriad of activities, more eloquently enumerated in OpenSAMM and BSIMM, must, "I THINK", add cost to the overall process. One brief example: would organizations perform Threat Modeling as a code-quality exercise? You tell me.
@Erwin, we have quite a few customers on the roster, large and small, who fit that description. Very large brands you would recognize.
While they may or may not have open vulnerabilities currently, they basically all had at least one serious issue identified by us in the past. Unfortunately, for obvious confidentiality reasons, I can't be specific about the organization(s).
Perhaps in the next stats report I should focus on which sites have 0 vulnerabilities. Are they all brochure-ware, or is something special going on?