I have a love-hate relationship with PCI-DSS. Love it because it gives IT Security a firm lever to do something about web application security. Hate it because of the way the process has been implemented. No matter what, though, I remain generally optimistic and eager to read whatever clarification the council offers on the ambiguity of section 6.6. We all know the deadline is right around the corner. So when Standards Council General Manager Bob Russo took the time to comment on section 6.6 in a recent Information Security magazine article, I was keenly interested, because customers ask me questions about it daily.
The first thing we're told is that a draft (1.2 or 2.0) will be out for review in August, with the official version slated for September. Fortunately, Bob revealed the industry wouldn't be left hanging without official guidance before the June 30 deadline passes. They are going to "clarify a lot of this stuff," and the sooner they do, the better. One can only hope they do a good job, because so far I can't find an authoritative clue anywhere. BUT… here comes the kicker. Check out the last snippet:
"Personally, I'd love to see everyone go through on OWASP-based source-code review, but certainly, that's not going to happen," Russo said, referring to the expensive and time-consuming process of manual code reviews.”
Whoa, that's HUGE and should send a lot of people reeling. Bob Russo comes right out and says 6.6a is "source-code review," contrary to the belief in some quarters that black-box scanning/analysis may fit the bill. Typo/misquote? Unknown for sure. Secondly, and more astonishing, is his candor that the OWASP-based review process (what's that, exactly?) isn't going to happen anyway. I can only think the council did the math as I have: the source-code review method is simply too cost prohibitive at internet-wide scale. We're talking potentially billions in cost, not to mention too many vulnerabilities to fix anyway. The next bombshell…
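To make the back-of-envelope math concrete, here's a rough sketch. Both figures are my own illustrative assumptions, not numbers from the article or the council:

```python
# Back-of-envelope cost of manual source-code review at internet-wide scale.
# Both inputs below are illustrative assumptions, not published figures.
sites_in_scope = 1_000_000   # assumed count of web-facing apps handling card data
cost_per_review = 20_000     # assumed USD cost of one manual source-code review

print(f"${sites_in_scope * cost_per_review:,}")  # $20,000,000,000
```

Even if you quibble with either input by an order of magnitude, the total lands in the billions before a single re-review or fix.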
"So the application firewall is probably the best thing to do, but there needs to be some clarification around what it needs to do.”
To me this sounds like a WAF endorsement, and a dream come true for all the vendors out there. I can almost hear the PR machines gearing up for a marketing blitz before the impending "clarification" imposes any doubt. Good thing I've been getting well educated in this space and familiarizing myself with the players' technology. Everyone said I was crazy a year ago for exploring this route, but here we are.
5 comments:
i'm still saying "you're crazy" today.
as for PCI-DSS, this doesn't change anything or make anything new. but what about PA-DSS?
before you jump to conclusions about what i think - let me make it clear that i'm not "pro code review".
here are the minimum requirements we want in every audit standard for building software and things like "payment applications":
1) a defect-tracking system
2) a test specification and management tool (e.g. FitNesse + HtmlFixture if open-source; HP TestDirector or IBM RequisitePro if commercial)
3) a test environment that is separate from the development environment
4) a reasonable plan and schedule for testing. everything in mitre capec (and/or the wasc tc) needs to be tested for, with all errors accounted for (false positives, false negatives, and false-false positives, aka type III errors)
5) all bugs of a given priority are fixed before a release is made. how to prioritize bugs is a separate subject (*)
6) a code coverage analyzer, preferably done as white-box (i.e. with source code, although bytecode will do in a pinch)
7) a dynamic analysis tool, preferably done as white-box (i.e. with - at the very least - knowledge of the security high-points in the source code, design, and/or specifications)
(*) a bug is considered "critical priority" when there is no workaround for that particular bug. additionally, a bug is still considered "critical", even if a workaround is possible, when it can destroy, change, or conceal data (see the sketch below).
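to make that footnote concrete, a minimal sketch of the rule (the field names are mine, purely illustrative):

```python
# minimal sketch of the "critical priority" rule above: a bug is critical
# when no workaround exists, or when (workaround or not) it can destroy,
# change, or conceal data. field names are illustrative only.
from dataclasses import dataclass

@dataclass
class Bug:
    has_workaround: bool
    affects_data_integrity: bool  # can destroy, change, or conceal data

def is_critical(bug: Bug) -> bool:
    return (not bug.has_workaround) or bug.affects_data_integrity

# e.g. injection that alters rows stays critical even with a WAF workaround
assert is_critical(Bug(has_workaround=True, affects_data_integrity=True))
```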
whitehatsec does not provide any of the above, so i really don't see how you could be proud of yourself.
There you go again. You have your opinion(s), you like them, so I'll keep this really simple.
WhiteHat is not in the source code review business you so elegantly describe, nor am I in the WAF business. Should PCI endorse WAFs as the preferred website protection option, it doesn't mean anything to my business. Should they OK black-box testing by experts as an alternative to white-box, then I have an option to add value to the business, should my offering provide compelling ROI.
What you mistakenly call pride is just me seeing the writing on the wall as to what's to come, based upon the variables as I understand them. Be my guest and preach a "your way or the highway" approach all you like; the rest of us have business issues to attend to.
"Should they OK black-box testing by experts as an alternative to white-box, then I have an option to add value to the business, should my offering provide compelling ROI."
The PCI Council already did this for PCI-DSS. You're just not reading the right material!
As for "adding value", you can call it that if you want. I simply will not call it that. Dynamic analysis should be done as white-box, not black-box (no matter what the PCI Council says today or in the future). The only thing your solution does is black-box "feature testing", and is therefore always going to be sub-standard.
Nobody can say they've found most (or even some) of the bugs without a code coverage report and/or source code. Your solution's testing methods are ad hoc, are not supported by arguments or proofs, and the expertise provided is unmeasured/unproven.
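for what it's worth, producing such a coverage report is cheap. a minimal sketch using the open-source coverage.py package (the tool choice and module names are mine, purely hypothetical):

```python
# minimal sketch: measure which lines a test run actually exercises,
# using coverage.py (pip install coverage). module names are hypothetical.
import coverage

cov = coverage.Coverage()
cov.start()

import payment_app_tests     # hypothetical test module
payment_app_tests.run_all()  # hypothetical test entry point

cov.stop()
cov.save()
cov.report(show_missing=True)  # per-file listing of untested lines
```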
Most bugs come from problems in the requirements and in the design. That means manual/automated source code review and manual/automated functional testing are *useless*, especially when they're not measured.
"Be my guest and preach a 'your way or the highway' approach all you like; the rest of us have business issues to attend to."
I'll keep that in mind as I write all of these future audit standards. Since I'm currently empowered to do this, it really is "my way or the highway".
First off, let me state my bias. In 2002 I started a company, Ounce Labs, to look for problems in software source code. I did it because many of those problems are not knowable any other way. I didn't come to this problem with an existing product; we built a product because we thought the problem needed to be solved.
That caveat aside, a recommendation that organizations rely wholly on application firewalling neglects the enormous variety of risks that private information faces. An application firewall, no matter how good, cannot protect against a lack of cryptography (Section 3.4), the accidental storage of a CVC (Section 3.2), or insecure networking, bad authorization, access control, or logging (Sections 4, 8, 7, 10). These things can only be known by reviewing the source code. In addition, the assessment of high cost ignores the fact that new technologies are making analysis of applications at the source level extremely practical and cost-effective.
An application firewall is an excellent solution for protecting against knowable front-end attacks, and can be the only solution for applications where deeper analysis is not possible or permitted.
Having said that, source-level analysis is clearly still required: a majority of customer credit information exposures stem from issues with access control, authorization, and data storage/transmission. These problems are, and will continue to be, outside the capability of a firewalling technology to distinguish.
My bias is that I work for a company that sells application data security solutions… in other words, application firewalls and database auditing/activity monitoring products (Imperva). We are also an Ounce Labs partner. However, I'm going to disagree with some of Jack's comments (and also agree with a few).
Imperva has advocated for some time the position that 6.6 presents a false choice. "Code review or WAF" is a bad choice to present (and that statement would hold even if you add in "black-box pen testing," which may happen in the forthcoming clarification). Most security experts I've spent time with would agree that all three have their place.
We think the right question is how and when to use each. I'm biased, but I feel the WAF is the best first step, as it can provide an immediate solution for immediate threats. It can also provide new immediate mitigations as other methods uncover issues, or as new attack methods evolve. We also feel there is a great opportunity for a feedback loop in both directions between the WAF and code review and/or pen testing and/or scanning solutions. This is why we've started forming partnerships with companies in these areas (like Ounce).
An issue that hasn't been brought to the fore is what happens, from a PCI perspective, if an organization chooses code review (or, if the clarification allows it in the future, pen testing/scanning) and that review turns up an issue requiring a long fix cycle. The spec says the organization must "Ensure that all web-facing applications are protected…", and the famous choice of methods follows. However, code review and scanning don't technically fix an issue; they identify it. In my experience there are often issues that require significant lead time to fix. Again, I'm biased, but I feel the best approach is to use a WAF to mitigate the issue today, and then start the process of convincing your development team to fix it in the next release.
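To illustrate the "mitigate it today" idea, here's a minimal sketch of a virtual patch written as WSGI middleware. The parameter name and pattern are hypothetical, and a production WAF rule would be far more robust:

```python
# Minimal sketch of a "virtual patch": reject a known injection pattern on a
# specific vulnerable parameter until the real code fix ships.
# The parameter name ("acct_id") and the pattern are hypothetical examples.
import re
from urllib.parse import parse_qs

INJECTION = re.compile(r"('|--|;|\bunion\b|\bselect\b)", re.IGNORECASE)

def virtual_patch(app):
    def middleware(environ, start_response):
        query = parse_qs(environ.get("QUERY_STRING", ""))
        for value in query.get("acct_id", []):
            if INJECTION.search(value):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"blocked by virtual patch"]
        return app(environ, start_response)
    return middleware
```

The point isn't the regex; it's that a one-line rule can be deployed the same day the finding lands, while the real fix works its way through the release cycle.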
I think on the points above, I mostly agree with what I feel is Jack’s point – that a combined approach is best.
I would like to disagree with some other statements. Jack listed several sections that he feels application firewalls can’t address.
"An application firewall, no matter how good, cannot protect against a lack of cryptography (Section 3.4), the accidental storage of a CVC (Section 3.2), or insecure networking, bad authorization, access control, or logging (Sections 4, 8, 7, 10). These things can only be known by reviewing the source code."
Some of these a WAF actually can help with, though that would require a more specific discussion (a WAF can add cryptography to an application by rewriting HTTP to HTTPS, for instance; it can also add authorization and access control via integrations with SSO products).
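As a toy illustration of the HTTP-to-HTTPS rewrite (a real WAF does this at the network edge; this is only the idea, sketched as WSGI middleware):

```python
# Toy sketch of the HTTP->HTTPS rewrite a WAF can perform in front of an
# application that was never built to require TLS itself.
def force_https(app):
    def middleware(environ, start_response):
        if environ.get("wsgi.url_scheme") != "https":
            host = environ.get("HTTP_HOST", "localhost")
            location = "https://" + host + environ.get("PATH_INFO", "/")
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)
    return middleware
```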
Others, I agree an *application firewall* can't specifically address, but I disagree that source code review is the only way to address them. Take accidental storage of a CVC: a good database activity monitoring product will see it being stored and alert the admin. The same goes for logging: a database auditing tool can meet almost every part of section 10 for CC data that's stored in a database (as much of it is).
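As a rough sketch of the detection idea (a real database activity monitoring product is far more sophisticated; the heuristic and sample data here are hypothetical), flagging stored card numbers by pattern plus a Luhn check:

```python
# Rough sketch: flag database values that look like stored card numbers,
# using a digit-run pattern plus the Luhn check. A bare 3-4 digit CVC is too
# ambiguous to match alone, which is why real DAM tools also use context
# (column names, adjacency to a PAN, and so on).
import re

PAN_CANDIDATE = re.compile(r"\b\d{13,19}\b")

def luhn_ok(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def flag_row(row: dict) -> list:
    """Return (column, match) pairs that look like card numbers."""
    hits = []
    for col, val in row.items():
        for m in PAN_CANDIDATE.findall(str(val)):
            if luhn_ok(m):
                hits.append((col, m))
    return hits

print(flag_row({"note": "card 4111111111111111 kept on file"}))
# -> [('note', '4111111111111111')]
```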