Tuesday, June 19, 2007

PCI Certification doesn’t make a website harder to hack

Update 07.14.2007: As a result of this post a good discussion has emerged over at PCI Compliance Demystified. I clarified my points in a blog comment over there and duplicated the content here as well (see below).

I’ve posted often about PCI, with a particular interest in the question, “what web application vulnerabilities are ASVs required to identify?” For merchants and ASVs alike this is a very important question. For a website to pass PCI, what depth of testing must an ASV demonstrate to pass its entrance exam? Is the bar low enough for a network scanner, or will something more comprehensive be necessary? The answer benchmarks PCI’s level of security assurance for merchants and the rest of the industry.

More than a year ago MasterCard informed the ASVs that they’d drop 8 of the OWASP Top 10 from the scanning requirements, leaving only Cross-Site Scripting (XSS) and SQL Injection. Then, in a seemingly contradictory statement, they said, “...there are no plans to make any of the PCI Data Security Standard requirements less robust. Any future enhancements to the standard are intended to foster broad compliance without compromising the underlying security requirements of the current standard." Left in confusion, we didn’t know what to believe, so we waited for the answer.

Recently I looked up the newest PCI 1.1 documents, and in the Technical and Operational Requirements for ASVs it looks like we have the answer. Page 10 says the following:

Custom Web Application Check
The ASV scanning solution must be able to detect the following application vulnerabilities and configuration issues:
• Unvalidated parameters which lead to SQL injection attacks
• Cross-site scripting (XSS) flaws
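
To make those two remaining checks concrete, here’s a minimal, hypothetical Python sketch (my own illustration, not anything from the PCI documents) of the flaw classes an ASV scanner is required to detect: an unvalidated parameter concatenated into SQL, and a parameter reflected into HTML without output encoding.

```python
import html
import sqlite3

# -- SQL injection: unvalidated parameter concatenated into a query --

def find_user_unsafe(conn, name):
    # Unvalidated parameter pasted directly into the SQL string -- injectable.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn, name):
    # Parameterized query -- the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"  # classic probe a scanner might send
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- no user by that literal name

# -- Reflected XSS: echoing a parameter into HTML without encoding --

xss = "<script>alert(1)</script>"
unsafe_page = "<p>Hello %s</p>" % xss             # script runs in the browser
safe_page = "<p>Hello %s</p>" % html.escape(xss)  # rendered as inert text
print("<script>" in unsafe_page, "<script>" in safe_page)  # True False
```

Both flaws stem from the same root cause the requirement names, unvalidated (and unencoded) parameters, which is exactly what a black-box scanner probes for by injecting payloads like these and watching the response.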

How about that! PCI requires only 2 of the OWASP Top 10 and 2 of the 24 attack classes in the WASC Web Security Threat Classification, with absolutely no mention that the scanner has to be logged in during the scan. Great. So “technically” the PCI standard itself has NOT been watered down; that much remains the same. What has been lowered is the enforcement of web application security in PCI compliance, which is down to virtually nothing.

My concern is that merchants will get a clean bill of security health and never be informed that their websites are very likely riddled with unreported vulnerabilities that weren’t even tested for, XSS and SQL Injection included! I understand and appreciate the business challenges of cost and performance the PCI Council must consider, but come on, this sets a very dangerous precedent. Scans conducted like this will do NOTHING to make a website more secure or thwart anyone from finding the one vulnerability they need for exploitation.

Oh well, I guess the bright side is we have our answer and things could be improved upon later. How much later?

6 comments:

Ryan Barnett said...

Here is yet another example of the whole Compliance != Security problem with PCI. This is taken from the same document under the "General Requirements for Scanning Solutions" section -

The following are examples of some of the tests that are not permitted:
• Denial of service (DoS)
• Buffer overflow exploit
• Brute-force attack resulting in a password lockout
• Excessive usage of available communication bandwidth


While I understand why you wouldn't want an ASV to execute a DoS attack against your website, you also then need to realize that a PCI vulnerability report stating that you have no vulnerabilities does not mean that you don't have any vulnerabilities! You may well have problems in areas that ASVs were not allowed to check.

This is somewhat similar to the misunderstanding that many people had with the Center for Internet Security (CIS) benchmark documents, and specifically, with the reports generated from the Scoring Tools. Users (actually Management) were mistakenly equating a score of 10 to mean "you have no vulnerabilities." Actually, what a score of 10 meant was that you followed all of the steps outlined in the document. But what they were missing was that the document is a MINIMUM baseline guide and not all-encompassing. So scoring a 10 did not mean you were hack-proof.

Jeremiah Grossman said...

@Ryan, well said. With CIS's benchmarks, and I was part of the Apache Benchmark, the minimum isn't THAT low. At least it doesn't have to be. In the case of PCI, sheesh, I'm here wondering if it's even worth it, web application security wise. That level of testing isn't going to stop anyone on the sla.ckers.org board, let alone someone more determined to hack a website for financial gain.

Ben said...

This is a good article and raises some valid points re: scanning.

It's important to note, however, that the availability of a live web application cannot be compromised during a scan, which is why things like DoS and buffer overflows cannot be tested for. You can't blame PCI DSS for not wanting to take down a customer's web site!

The way that a merchant or Service Provider's web apps ARE validated against most of the OWASP Top Ten is through the PCI DSS validation exercise itself. For Level 1s especially, the security consultant will examine software development and secure coding practices for ALL web apps in scope of the DSS.

In my experience so far (so long as the consultant does their job, and the client doesn't lie) this is a pretty sound methodology for providing a fairly robust and secure web application.

Jeremiah Grossman said...

Hi Ben:

> You can't blame PCI DSS for not wanting to take down a customers web site!

Certainly not.

> The way that a merchant or Service Provider's web apps ARE validated against most of the OWASP Top Ten is through the PCI DSS validation exercise itself. For Level 1s....

I take this to mean the annual audits performed by QSAs. And yes, if everything is performed on the up and up, this would be a comprehensive approach to vulnerability assessment. What the annual assessment process doesn't take into consideration is the web application's rate of change. This is why I brought up the quarterly PCI scanning aspect.

In my experience the typical eCommerce website has code updates every two weeks, and more often than that wouldn't be unheard of. It's entirely possible, and even likely, for a QSA to honestly give a website a clean bill of security health, only for SQL Injection, XSS, or some other vulnerability to appear the next week due to a new code push. At WhiteHat we see this happen all the time. 2 weeks of security and 50 of uncertainty is not what I'd call robust.

Jeremiah Grossman said...

To clarify my post above:

As we know, the goal of PCI-DSS is to protect cardholder information and in doing so make websites more secure. Probably like everyone here, I’ve spent a lot of time studying the PCI-DSS from version to version, and the truth is I like it. A lot, in fact, and I’ve said so many times. What I don’t like is the implementation of compliance validation, specifically the ASV certification process, because it’s WAY behind what the standard requires. This is what I’m saying needs to be improved in order to overcome the compliance != security checkbox game. My point is that if Qualys/ScanAlert/Nessus/etc. scans (as good as they may be at network VA) can vouch for the web application security requirements of PCI-DSS, that’s sad. It defeats the purpose of ensuring a website or cardholder information is “secure”.

Sure, scanning custom web applications for vulnerabilities is tough, no doubt about it. I know this well because I’ve been building the technology to do so for almost a decade and have been in the webappsec field even longer. And yes, many scanners on the market suck. Scanners have been known to suck at crawling, suck at vulnerability identification, suck at false positives, suck at scale, suck at process, and consume more assessment time than they save. I’ve blogged about these challenges often, but also explained why scanning is essential to the assessment process. Point #2: just because many scanners suck, or the ones you’ve tried do, it doesn’t mean every vendor’s does, or that their technology and process haven’t matured.

Speaking for WhiteHat, our Sentinel Service (SaaS) is fully equipped to handle the size and complexity of the world’s largest websites (lots of them), verify all vulnerabilities, and complete a comprehensive assessment process with qualified personnel, on an ongoing basis, for less annually than the average one-time pen-test, which is above and beyond what the ASV standard SHOULD be. Again, I know this because we’ve been doing so for years for many of the biggest merchants on the Web. A number of them are also “PCI Compliant” through checkbox ASVs, but still get web application VA through us because they value REAL website security. Point #3: if merchants have already raised their own webappsec bar voluntarily, then perhaps the PCI Council should be able to as well.

I’m also under no illusions that the PCI Council has any plans to strengthen the ASV test to resemble anything like the best practice requirements they dictate in the PCI-DSS. Nor would I expect them to do so without ensuring there is an adequate number of vendors capable of providing the service. But consider this: a large number (the majority?) of ASVs passed their test using either Qualys or ScanAlert, so in essence the PCI Council is already reliant upon a small number of scanning vendors to handle the load. In any event, my crystal ball says the PCI Council will continue setting their webappsec bar to whatever Qualys and ScanAlert are capable of. Which means waiting until the end of 2007, when their technology reaches where web application vulnerability scanners were back in 2002. Remember when they didn’t even log in?

wilde1family said...

It's compliance that gets budget, not security; ergo compliance is more important than security.