It is simply impossible to schedule time to meet with everyone I’d like. So save for one or two, I’m not even going to try this year. Instead I’m going to try something more social and spontaneous: float between events, parties, presentations, and the WhiteHat booth to see who’s there and strike up interesting conversations. About what? Who knows! That’s the beauty of it, because the hallway track at Black Hat is particularly good. Highly recommended. And if anyone is so inclined to shoot some video for YouTube purposes, we can try that as well. Can't wait!
Tuesday
Black Hat USA 2009 Speaker Party (9pm)
Wednesday
OWASP Breakout Briefings (4:45pm)
Pwnie Awards (6pm)
* Can’t stay too long as I have to get to a customer appreciation dinner (1 of the 2 meetings)
Thursday
Securosis/Threatpost Disaster Recovery Breakfast @ Cafe Lago (8am)
Mo' Money Mo' Problems (11:15am)
Syngress Tweetup @ Seahorse Lounge (6pm)
Microsoft Party (after 9pm)
CEO of Bit Discovery, Professional Hacker, Black Belt in Brazilian Jiu-Jitsu, Off-Road Race Car Driver, Founder of WhiteHat Security, and Maui resident.
Monday, July 27, 2009
Wednesday, July 22, 2009
OWASP Podcast #32 pulls no punches
Update: 07.23.2009: As Andrew explains, he got caught up in the moment and really didn't mean what he said (read below). Apologies accepted and I hope to continue working with him in the community. Thanks.
Update: 07.22.2009 - Two great follow-up comments by Security Agent and Jim Bird that really dig into the meat of the issue I was trying to get at. I'd say probably better insights and stated more eloquently than my original posts!
As any reader here knows, I don’t shy away from discussing hot-button issues, questioning conventional wisdom, or suggesting controversial ideas. I’ve found doing so is highly rewarding, as it affords others an opportunity to share differing points of view, which furthers our collective understanding. 99% of the time criticisms are positive. However, Andrew van der Stock made a comment near the beginning of OWASP Podcast #32 about my “Mythbusting, Secure code is less expensive to develop” post that is completely false and out of line. I’ve long considered Andrew a well-respected Web security expert and colleague, so these words caught me by surprise (0min / 50sec).
“Jeremiah has a particular service model that encourages folks to model bad programs and he needs more bad programs to be modeled.”
Andrew: This shows a complete lack of understanding of what I’m personally all about, the value WhiteHat Security offers, and the current security posture of the Web. First, I would NEVER do something like that! Secondly, our business model directly encourages us to help customers improve themselves over the long-term. And lastly, do you really think the Web is so secure that I would need to encourage more vulnerable code to ensure job security!? Please.
Fortunately, the rest of the podcast provides some very interesting conversation between Jim, Andrew, Boaz, Jeff, and Arshan.
My original point was that software security ROI cannot be considered in a vacuum. As one example, organizations justify adding security to an SDLC in an effort to help prevent vulnerabilities, which reduces the risk of security breaches. Again, not getting hacked is the motivation. Today we are getting a stronger grasp, through metrics, on the types of issues websites are really vulnerable to and getting hacked by. As such we can start focusing our efforts and reconsidering conventional wisdom. So my question: “Is secure code less expensive to develop?” Once again, TO DEVELOP, as opposed to finding & fixing vulnerabilities in late-stage code or production release. I knew this was going to be a controversial subject; even questioning the belief is something some consider heresy, but I felt it needed to be asked just the same.
Given all the numbers I’ve studied to date I think the jury is still out. Perhaps the answer is in how you define “secure code.” At the end of the day though, and this is very important, when you take the costs and ramifications related to incident handling into account, that is what really justifies a software security investment -- not so much cheaper code.
Here is what I don’t get though. Why do some have such an emotional attachment to the idea that secure code absolutely MUST be cheaper to develop? Sure it could be, but are organizations really that unwilling to pay extra for quality secure code if that is what it takes? We pay a premium for quality in other products (Rolex, BMW, MacBook Pro, LOL). Why not software too!? Perhaps this belief exists because the aforementioned risk of compromise is simply too hard to quantify and build a business case around. If so, we should try to tackle that problem as well. Anyway, as stated, I remain open and interested in the thoughts of others.
Friday, July 10, 2009
Picks for BlackHat 2009

Day 1
The Laws of Vulnerabilities Research Version 2.0
Sniff keystrokes with Lasers /Voltmeters
Analyzing Security Research in the Media
There's a Fox in the Henhouse
Hacking Capitalism '09
Pwnie Awards
Day 2
Cloud Computing Models and Vulnerabilities - Raining on the Trendy New Parade
Mo' Money Mo' Problems *only because I have to be there. ;) *
Clobbering the Cloud!
Breaking the Security Myths of Extended Validation SSL Certificates
Reconceptualizing Security
Thursday, July 09, 2009
The Best of Application Security 2009 (Mid-Year)
Every year the application security industry receives a number of phenomenal research papers and other great contributions. Even for those dedicated to appsec as their primary job function it is challenging to stay up-to-date, which means resources that help track them become extremely valuable. As such, Ivan Ristic and I have been working on "The Best of Application Security", a list of the ten most remarkable contributions (in no particular order), published bi-annually and then combined at year end. Obviously some painful, but necessary, omissions had to be made. If readers disagree with the list, great! Please comment with your suggestions for consideration. Lastly, this effort is different from the annual Top Ten Web Hacking Techniques, which is solely dedicated to breaking stuff.
- Typing The Letters A-E-S Into Your Code? You’re Doing It Wrong!
- Mozilla Content Security Policy proposal
- Software Security Maturity Models (BS-IMM & OpenSAMM)
- Verizon 2009 Data Breach Investigations Report
- Vulnerability Prevention Cheat Sheets (XSS & SQL Injection)
- Creating a rogue CA certificate
- HTTP Parameter Pollution
- CWE/SANS TOP 25 Most Dangerous Programming Errors
- Socket Capable Browser Plugins Result In Transparent Proxy Abuse
- Anti-Clickjacking w/ Internet Explorer 8, NoScript and Safari 4.0
The Most (Potentially) Lucrative Vulnerabilities

Cross-Site Cooking enabled a website (e.g. http://www.example.com/) to set arbitrary cookies scoped to an entire foreign TLD such as *.com.pl or *.com.fr. The cookie value would then be sent to every website under those TLDs. This could be used to delete or overwrite stored preferences, session identifiers, authentication data, cart contents, etc. Now assume for a moment a similar browser bug existed where a website could set arbitrary cookies for generic *.com, *.net, *.gov, *.mil, or better yet perhaps just *. That cookie value would be sent to all sites under those TLDs, or in the latter case all sites. If such a bug existed it would seriously impact every website that does not reissue a session ID post-authentication. Forcefully load a (PHP|JSP|ASP)SESSIONID onto website visitors and then walk into any account you’d like! As bad as that is, defrauding affiliates and affiliate networks is another possibility.
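The session-fixation risk above hinges on the server continuing to trust a pre-authentication session ID. Here is a minimal sketch of the standard defense, reissuing the session ID at login. The in-memory store and function names are illustrative, not any particular framework's API:

```python
import secrets

# Toy in-memory session store; a stand-in for a real framework's sessions.
sessions = {}  # session_id -> {"user": ...}

def new_session():
    sid = secrets.token_hex(16)
    sessions[sid] = {"user": None}
    return sid

def login(sid, user):
    """Authenticate and *reissue* the session ID.

    If the pre-auth ID was forced onto the victim's browser (say, via a
    cross-site cooking style bug), it is invalidated here, so the
    attacker's copy of the ID never maps to an authenticated session."""
    sessions.pop(sid, None)          # discard the possibly attacker-known ID
    fresh = secrets.token_hex(16)
    sessions[fresh] = {"user": user}
    return fresh

# Attacker forces a known session ID into the victim's browser...
forced = new_session()
# ...but the victim logs in, and the server rotates the ID.
authed = login(forced, "alice")
assert forced not in sessions        # the forced ID is now worthless
assert sessions[authed]["user"] == "alice"
```

Sites that skip the rotation step leave `forced` valid after login, which is exactly what makes the cookie-forcing scenario an account takeover.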
For those unfamiliar, affiliate revenue is BIG business, generating money based upon cost-per-click and cost-per-conversion. Five and six figures per month is not unheard of, and commissions owed are largely tracked through the use of cookies. For a fraudster, imagine being able to simultaneously load your Amazon, eBay, Google, etc. affiliate cookie into tens or even hundreds of thousands of browsers in a single banner campaign. Anytime those users purchase something on those sites you get paid, because your affiliate cookie was received -- stepping on any others if they exist. Kaaachink! Websites receiving unexpected affiliate cookies would more than likely not even see or log it. The same goes for the user side of the connection. Plus, cookies carry no information about which website set them. All nicely invisible. The reality is we have no idea whether this has already happened. We do know that some browser plug-ins and ISPs do this sort of thing already (load up their affiliate cookies).
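To see why forced cookies translate directly into stolen commissions, consider a toy sketch of last-cookie-wins affiliate attribution, the model many networks used. The `aff_id` cookie name and the 5% rate here are hypothetical:

```python
def attribute_sale(cookies, sale_amount, rate=0.05):
    """Pay commission to whoever owns the current affiliate cookie.

    Last-cookie-wins: whichever party most recently set 'aff_id'
    (a hypothetical cookie name) collects the whole commission."""
    aff = cookies.get("aff_id")
    if aff is None:
        return None                      # organic sale, no commission
    return (aff, round(sale_amount * rate, 2))

# A legitimate referral sets the cookie...
jar = {"aff_id": "honest-site"}
# ...then a cookie-forcing bug in a banner campaign overwrites it
# across thousands of browsers at once.
jar["aff_id"] = "fraudster"
print(attribute_sale(jar, 100.00))       # ('fraudster', 5.0)
```

One overwrite silently redirects the honest referrer's commission, and neither the merchant nor the user sees anything unusual in the transaction itself.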
Other systems, such as (transparent) proxies, could be attacked in a similar way. I'm also keeping my eye on Cross-Origin Resource Sharing, Flash Cookies, and clever timing attacks. Oh, and all the new browser standards coming out. Guaranteed goodness inside. Happy hunting!
Wednesday, July 08, 2009
Why vulnerable code should be fixed even after WAF mitigation
Websites have vulnerabilities, vulnerabilities that are found by vulnerability assessment solutions and then communicated to Web Application Firewalls (WAFs) for virtual-patch mitigation. Given the extremely heightened activity of our adversaries, compliance requirements, the volume of existing vulnerabilities, and money/time/human resource constraints, this approach is becoming more common every day. What also becomes common is the question management and development groups ask of IT Security: “If the vulnerability is patched by a WAF, then why do we need to fix the code?” A reasonable question, and one we need to be prepared to answer with something better than proclaiming, “Because it is the right thing to do!” Obviously that is unconvincing, as it provides no reasonable business justification. Here are some ideas:
- Developers really like to copy code, even insecure code, which may eventually lead to new vulnerable Web applications launched outside deployed WAF protection.
- WAFs, like code, are not perfect and cannot always compensate for complex encoding/decoding application interactions, which could open the door to bypassing security rules.
- A vulnerable Web application feature may be delivered now or in the future via XML APIs, Flash, iPhone application, etc. and by extension live beyond WAF protection.
- WAFs tend to fail open, and when they do, it would be preferable not to have vulnerabilities as an active risk of exposure indefinitely.
- A WAF may not be positioned to protect against the insider threat.
- WAF rules are often exploit and not vulnerability focused, so may protect against some specific attack variants, but miss others. For the same reason, non-exploitable vulnerabilities may continue to be reported by vulnerability assessment solutions.
- Fixing a vulnerability in the code *right* will often systematically resolve an entire class of issues, taking them off the table both now and in the future.
- Compliance or customer security standards may require an application be tested without the WAF protection.
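As a concrete illustration of the class-level point, a parameterized query eliminates SQL injection for an endpoint outright, whereas a WAF signature only blocks the specific attack strings it happens to match. A sketch using Python's built-in sqlite3 module; the table and data are illustrative:

```python
import sqlite3

# Illustrative schema and data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name):
    # Parameterized query: the driver keeps user data out of the SQL
    # grammar entirely, closing the whole injection class rather than
    # one exploit variant a WAF rule happens to recognize.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

assert lookup("alice") == [("s3cret",)]
# A classic injection payload is treated as a literal name, nothing more.
assert lookup("' OR '1'='1") == []
```

Contrast this with a virtual patch, which must anticipate every encoding and variant of the attack string; the code-level fix makes the question moot.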