All around it was a stellar event; everyone I talked to had a good time and took something of value home with them. ~150 people attended on behalf of many different organizations, large and small, from banking, telecom, ecommerce, software development, healthcare, etc. The format favored enterprise speakers rather than outside experts, which made it less about the newest attacks/threats and more about how enterprises actually went about solving problem X. This was great because I don’t think we have to push as hard anymore to promote general webappsec awareness. In my opinion the early adopters are here, and we should be supporting them as mentors and evangelists. We need to continue facilitating knowledge exchange.
Judging from the feedback, my keynote was well received (whew). I was a little nervous because the material was largely new and also because I touched on some, well, sensitive and deeply held beliefs about the ways in which we approach web application security. I’ll post the slides and speech text as soon as I can. I also want to thank the SANS team, especially Carol Calhoun, for allowing me and WASC to participate. The sponsors (Breach, Cenzic, Core, HP, WhiteHat) were very generous and paid for a lot of the food, drinks, and evening entertainment. Also, thank you to the attendees who really made it all possible.
Before I forget them, here are several interesting things I noted down:
1) There was a lot of talk about how to effectively communicate with upper management on web application security issues. If the security guy manically informs the CIO, “OMG, we got 30 XSS and 10 SQLi issues!”, chances are they’re not going to know what you’re talking about or understand it well enough to make an informed business decision. However, if you are able to put vulnerability reports into a meaningful business context, you stand a better chance of influencing action in the right direction. For instance: we have 5 high-severity issues that, if exploited, could lead to X dollars lost or X number of users impacted. I don’t think anyone REALLY had good answers here on specifics, but it’s clear something better is needed.
2) The verbiage some people used when asking for advice on how to get programmers to develop more secure code was a little concerning. They used phrases such as “how do we strong-arm them?”, “we need to beat them on the head”, and “can we force them in some way?”. Ed Pagget from First American keyed in on this negativity right away. Basically he said nobody, including developers, likes to be manipulated or otherwise forced to do something, and this is exactly the wrong approach. I agreed, as this approach could easily backfire. Security people must work to establish productive working relationships with the business and its developers or nothing will change. If we believe developers (people) generally want to do the right thing if only empowered to do so, then let’s empower them in the best way possible.
3) Several people said that when they integrated white- and black-box scanner tools (AppScan, WebInspect, Hailstorm) into development, they disabled all but the most accurate rules. Apparently developers have a high tolerance for false negatives and a low tolerance for false positives -- perhaps in contrast to security folks. I guess this makes sense when you have to get some form of reliable security testing into an SDLC that’s managed by developers. But I’m left wondering how much security has actually been gained as a result. How much harder does it make it for the bad guy to find the next vuln?
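For the curious, the practical effect of that tuning, once scan results land in a developer-owned build, is something like the filter sketched below. Everything here (the Finding record, the rule IDs) is hypothetical and purely for illustration; each of the commercial scanners mentioned has its own policy/rule-tuning mechanism.

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch: keep only findings from rules the team has vetted as
// low-false-positive before anything reaches the developers' build report.
public class TrustedFindingsFilter {

    // Rule IDs the security team considers accurate enough to act on.
    private static final Set<String> TRUSTED_RULES =
            Set.of("SQL_INJECTION", "REFLECTED_XSS", "COMMAND_INJECTION");

    // Minimal stand-in for whatever a scanner actually exports.
    public record Finding(String ruleId, String url, String detail) {}

    public static List<Finding> filter(List<Finding> raw) {
        return raw.stream()
                  .filter(f -> TRUSTED_RULES.contains(f.ruleId()))
                  .toList();
    }
}
```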
4) Several enterprises described their investment in security “champions” inside development groups, as opposed to trying to tackle the entire group as a whole. For web security matters, developers can go to someone in their immediate vicinity to consult with and ask questions. This is actually quite clever, as that person acts as a mentor for the rest. You are effectively training the trainer.
5) I thought I’d have to do more “selling” on the VA+WAF subject, but overall people seemed highly receptive to the notion. I even had a short discussion with Gary McGraw and figured I’d have to spar at least a bit with him. Instead he basically said it’s a security solution that has a specific time and place, just like everything else. Indeed. When exactly that is, is the question we’re all trying to figure out and get comfortable with. Still, as Sharon Besser from Imperva picked up on, our time-to-fix metrics are less than desirable. We can and need to do better.
6) Several people were asked for their thoughts on the Microsoft SDL and OWASP ESAPI. Consensus on the SDL: great for enterprise/desktop software, not so good for web application development. Agile development sprint cycles are too fast for security to be built in the way it’s described. ESAPI: good ideas and lots of potential, but difficult to retrofit into existing projects. Also, to be effective, the alternative APIs must be ripped out to prevent developers from rolling back to the less secure code they’re more used to working with.
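To make that last point concrete, here’s a minimal sketch of what routing output through ESAPI instead of an ad-hoc alternative looks like in Java. The class and variable names are mine, and a real deployment needs ESAPI.properties configured on the classpath; the point is simply that once the hand-rolled path is gone, there’s nothing less secure to roll back to.

```java
import org.owasp.esapi.ESAPI;

public class GreetingRenderer {

    // Before: ad-hoc string concatenation, an XSS risk if "name" is attacker-controlled.
    //   return "<p>Hello, " + name + "</p>";

    // After: all HTML output goes through the vetted ESAPI encoder.
    public String render(String name) {
        return "<p>Hello, " + ESAPI.encoder().encodeForHTML(name) + "</p>";
    }
}
```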
8 comments:
I'm going to do something completely out of character and agree with everything you wrote in points 1-4 and 6. These are really great take-aways.
With particular regard to point 5 (which I disagree with), it looks like you need to do some work. I think Rich was saying that you need to do better! I'm going to have to disagree with Imperva, and in particular with your statistics on how long it takes to fix vulnerabilities. I think we need more data from different sources before any of this becomes conclusive, let alone theoretical.
Since I have my head in your number 6 every day, did anyone say anything about what does work? Sure, Microsoft SDL: good for fat apps, bad for web apps. Agile: good for web apps, but it's too fast for security. What does work for secure SDLC improvements in web applications? Did anyone have any suggestions?
RSnake and Rich Mogull's conversation would have been my favorite part of the summit. Luckily, I got to meet with Rich tonight to discuss things further. The concepts are dead right -- we need to speed things up and make them more secure (slow them down?). Can we do it? I think that we can. That's what my CPSL process and recent Software Security retrospective blog post are all about.
Jeremiah, on the issue of your first point, isn't it already the responsibility of the Information Technology or Security staff to put together such a report anyway? I thought it was an actual and annual duty to perform such risk-management exercises and then present the findings to the "superiors" in the company via a well-put-together document, or series of documents, in which both likelihoods and valuations were given weighted values. Does this not generally apply to Web Application Security, or at least not at the present time?
In webappsec, not typically. Website vuln reports might get folded into some larger risk report, but it's rare in my experience. Rare because no one knows exactly how to measure web app vulns from a loss/risk/value perspective. Well-established severity/threat ratings have not yet been developed. Corp auditors don't know what to ask for either, as no standards have been developed. Another reason is that in companies with anything over 5-10 websites, no one has a good idea of what those sites are, what they do, or how valuable they are to the business.
Obviously it's still a very immature space, but people are now starting to ask for these types of things and how to go about building them.
Obviously you have a lot more years and experience than I have in this field, but I have given it some thought, and wouldn't it be somewhat difficult to establish such ratings on a general scale? I mean, I am not too familiar with all of the possible open-source or proprietary applications used by various corporations' developers to put together their websites, but wouldn't each (risk-related) assessment have to be tailored to each individual company rather than follow a de facto standard?
New ideas and ways to look at things are incredibly valuable, whatever experience someone might have. So let me throw out some things that need to be measured and taken into consideration when trying to answer management’s question of "how are we doing?" And yes, each report and weighting would need to be tailored to each business. Here are several things we can take into consideration:
Vulnerability Severity: If exploited, how bad would it be?
Vulnerability Threat: How easy or difficult is the vulnerability to exploit?
Likelihood: What's the probability of the vulnerability being exploited? The difference between what’s possible and what’s probable.
Cost of mitigation/remediation: Before being exploited, how much time and money would be needed to reduce the risk by X% using X method?
Website Importance: How valuable is the website, and how much could be lost? Brand, intellectual property, money, etc.
Recovery Cost: If exploited, how much time and money would need to be spent to recover?
Security Comparison: Describing to the business how they are doing relative to others in their market vertical or the global average.
Is this what you were asking for?
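To make the weighting idea a bit more concrete, here is a rough back-of-the-napkin sketch of how a subset of those factors might roll up into a single per-website score. Nothing here is a standard; the factor names, 1-5 scales, and weights are placeholders each business would tune for itself (and things like mitigation cost and peer comparison would feed prioritization rather than the raw score).

```java
import java.util.Map;

// Rough sketch of a per-website risk score built from a subset of the factors above.
// The weights and 1-5 rating scales are placeholders; each business tunes its own.
public class WebsiteRiskScore {

    // Relative weight of each factor (chosen to sum to 1.0 for readability).
    private static final Map<String, Double> WEIGHTS = Map.of(
            "severity",       0.25,  // how bad exploitation would be
            "threat",         0.15,  // how easy the vuln is to exploit
            "likelihood",     0.20,  // probability someone actually tries it
            "siteImportance", 0.25,  // brand, intellectual property, revenue at stake
            "recoveryCost",   0.15); // time and money to clean up afterward

    /** Each factor is rated 1 (low) to 5 (high); returns a 0-100 score. */
    public static double score(Map<String, Integer> ratings) {
        double weighted = 0.0;
        for (Map.Entry<String, Double> e : WEIGHTS.entrySet()) {
            weighted += e.getValue() * ratings.getOrDefault(e.getKey(), 1);
        }
        return (weighted / 5.0) * 100.0;  // normalize to 0-100
    }

    public static void main(String[] args) {
        // Example: easily exploited high-severity issues on a revenue-critical site.
        double s = score(Map.of(
                "severity", 5, "threat", 4, "likelihood", 3,
                "siteImportance", 5, "recoveryCost", 4));
        System.out.printf("Risk score: %.0f / 100%n", s);  // prints 86
    }
}
```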
Yes, I'm sorry. I understood that portion perfectly, but was thrown off by the line "well-established severity/threat ratings have not yet been developed," which made me believe there were some factors that were even more issue-specific. I suppose I misinterpreted that segment of the reply. Oh well, in any event, thank you for clarifying.
Wish I could have been there.
In talking about web app vulns to management I usually make the company "reputation" and "loss of customer confidence" argument, since defacement, phishing, and malware downloads are the most likely real-world attacks. Putting in links to blog posts and news articles helps too.
About getting developers to develop securely: I am always sensitive to the fact that, as a former developer and dev manager, developers HATE being told how to write code. The way to get them to write secure code is to make them want to write secure code. So I appeal to ego and wallet: a coder who codes securely is a better coder than one who does not. And, "If you can put on your resume that you can code securely, you will beat out the guy who cannot put that on his resume." I think being able to say "I haven't had one of my sites hacked in x years" is a strong case for getting hired.
@anonymous, that last point was really interesting. You make it in their personal best interest to get better. Hmph, clever. Might be another reason to have more web security programming certifications as well. Let's not debate their "true" value, shall we. :)