Monday, October 08, 2007

Website Vulnerability Statistics (17 mo. and counting)

Update 10.20.2007: Additional links and press coverage

WhiteHat: 90% Websites Have Vulnerabilities
Study - 90 percent of all sites at hacking risk


It’s that time of the quarter where we get to release our WhiteHat Website Security Statistics Report (PDF) - the aggregate vulnerability data we’ve collected while assessing the custom web applications of hundreds of the largest and most popular websites on a continuous basis (weekly is typical). This data is also very different from that of Symantec, Mitre (CVE), IBM (ISS) X-Force, and others, who track publicly disclosed vulnerabilities in commercial and open source software products. WhiteHat’s report focuses solely on previously unknown vulnerabilities in custom web applications - code unique to that organization - on real-world websites. The full report is available, and here are some highlights if you want to skim:

Top Ten
1. Cross-Site Scripting (7 out of 10 websites)
2. Information Leakage (5 in 10 websites)
3. Content Spoofing (1 in 4 websites)
4. Predictable Resource Location (PRL) (1 in 4 websites)
5. SQL Injection (1 in 5 websites)
6. Insufficient Authentication (1 in 6 websites)
7. Insufficient Authorization (1 in 6 websites)
8. Abuse of Functionality (1 in 7 websites)
9. Directory Indexing (1 in 20 websites)
10. HTTP Response Splitting (1 in 25 websites)

1) Both technical and business logic flaws had a strong presence in the Top Ten, which means that by focusing on one type and not the other, significant issues will go unchecked and, by extension, unremediated.

2) There was an increase in technical vulnerabilities, including Cross-Site Scripting (XSS), Information Leakage, SQL Injection, and HTTP Response Splitting. This can be directly attributed to the discovery of new attack techniques and improvements in our vulnerability identification technology - not necessarily to the security of Web application software worsening.
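For anyone less familiar with the technical classes topping the list, here is a minimal, hypothetical sketch of the two most common vulnerable patterns - reflected XSS and SQL Injection - alongside the usual fixes. The function names and schema below are invented for illustration and are not drawn from any assessed site.

```python
import html
import sqlite3

# --- Cross-Site Scripting (reflected) ---
# Vulnerable pattern: user-supplied input echoed into HTML unescaped.
def search_results_vulnerable(query: str) -> str:
    return "<p>Results for: " + query + "</p>"  # <script>...</script> executes

# Fix: encode the input for the HTML context before echoing it.
def search_results_safe(query: str) -> str:
    return "<p>Results for: " + html.escape(query) + "</p>"

# --- SQL Injection ---
# Vulnerable pattern: user input concatenated directly into the query string.
def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    sql = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(sql).fetchall()  # "' OR '1'='1" returns every row

# Fix: bind the input as a parameter so it is treated as data, not SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    print(search_results_vulnerable("<script>alert(1)</script>"))
    print(search_results_safe("<script>alert(1)</script>"))
```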

3) HTTP Response Splitting, which has proved to be one of the industry’s most misunderstood and underestimated issues, took over the #10 spot from XPath Injection. Heavy second-quarter R&D efforts resulted in new checks being introduced and vetted across all websites.
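Because response splitting is so widely misunderstood, here is a rough, hypothetical sketch of the core flaw: user input copied into a response header without CR/LF filtering, which lets an attacker inject additional headers (or an entire forged response). The redirect handler below is invented for illustration.

```python
# Hypothetical illustration of HTTP Response Splitting (CRLF injection).
# The attacker controls the "next_url" value used to build a redirect.

def build_redirect_vulnerable(next_url: str) -> bytes:
    # CR/LF inside next_url terminates the Location header early and lets
    # the attacker append arbitrary headers of their choosing.
    return ("HTTP/1.1 302 Found\r\n"
            "Location: " + next_url + "\r\n"
            "\r\n").encode("latin-1")

def build_redirect_safe(next_url: str) -> bytes:
    # Minimal fix: reject CR and LF before the value reaches a header.
    if "\r" in next_url or "\n" in next_url:
        raise ValueError("illegal characters in redirect target")
    return ("HTTP/1.1 302 Found\r\n"
            "Location: " + next_url + "\r\n"
            "\r\n").encode("latin-1")

if __name__ == "__main__":
    # The injected CRLF smuggles a Set-Cookie header into the response.
    payload = "/home\r\nSet-Cookie: session=attacker-controlled"
    print(build_redirect_vulnerable(payload).decode("latin-1"))
```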

4) CSRF should be in the Top Ten, but isn't, since state-of-the-art scanning technology across the industry is extremely limited at identifying CSRF and most reported issues are found by hand. The challenge with CSRF is that it’s a valid request from the authenticated user. There is no “hack,” so to speak; only the behavior of the website can be used to identify an issue. Once again, scanners have little to no contextual reasoning to determine whether a CSRF attack worked or, if it did, how severe the impact would be.
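To make the “valid request” problem concrete, here is a minimal, hypothetical sketch of the standard synchronizer-token defense: a forged cross-site request rides the victim’s session cookie automatically, but it cannot supply the per-session token that the site’s own pages carry. The handler and field names are assumptions for illustration only.

```python
import hmac
import secrets

# Hypothetical synchronizer-token defense against CSRF.
# Without the token, a forged request is indistinguishable from a real one
# because the browser attaches the victim's session cookie on its own.

def issue_csrf_token(session: dict) -> str:
    token = secrets.token_hex(16)
    session["csrf_token"] = token   # stored server-side with the session
    return token                    # embedded in the site's own forms

def handle_transfer(session: dict, form: dict) -> str:
    submitted = form.get("csrf_token", "")
    expected = session.get("csrf_token", "")
    # Constant-time comparison; a missing or wrong token suggests CSRF.
    if not expected or not hmac.compare_digest(submitted, expected):
        return "403 Forbidden: missing or invalid CSRF token"
    return f"transferred {form['amount']} to {form['to_account']}"

if __name__ == "__main__":
    session = {}
    token = issue_csrf_token(session)
    # Legitimate submission from the site's own page carries the token:
    print(handle_transfer(session, {"amount": "100", "to_account": "12345",
                                    "csrf_token": token}))
    # Forged request from an attacker's page rides the cookie, not the token:
    print(handle_transfer(session, {"amount": "100", "to_account": "66666"}))
```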

5) The vast majority of websites (roughly 9 in 10) have had at least one Urgent- to High-severity vulnerability in the last two years, while nearly 30% have one or more critical vulnerabilities. This should have significant meaning for the PCI Council and credit card merchants, as websites with Urgent, Critical, or High severity issues would not pass the PCI compliance test.

6) The Retail vertical performed better than the others, including Financial Services, Insurance, Healthcare, and IT. We believe this is because the bulk of a retail website’s functionality is accessible without the need to log in, which means more external attackers are able to target these websites and spot weaknesses. This is in contrast to the other verticals, where the functionality sits behind a login screen, so once an attacker gets an account, considerably fewer people have tested those areas of functionality before them.

In the report I’ll attempt to compare platform technologies, vulnerability half-life by class and severity, and the average number of vulnerabilities per website by vertical.

11 comments:

Anonymous said...

Interesting statistics and nice Top Ten, Jeremiah. But you forgot about such holes as CAPTCHA bypass (which I've been working on a lot these last months). That is an Insufficient Anti-automation vulnerability, and there are a lot of such holes on the Internet (so it could be put in the top ten).

And from my statistics, Cross-Site Scripting holes are more widespread (8-9 in 10 websites). So you need to look more thoroughly ;-) (to find more of them).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Jeremiah Grossman said...

OK OK, I get the point! I didn't make this blog system. :)

About the Top 10 though, I didn't forget about anti-automation, but remember what my blog measures. It's not about the total # of vulnerabilities, it's about the likelihood of a website having a particular class of vulnerability. Not every website uses a CAPTCHA or needs anti-brute force solutions.
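To illustrate that distinction with made-up numbers (not taken from the report): prevalence is the fraction of websites with at least one instance of a class, not the raw number of findings.

```python
from collections import defaultdict

# Made-up findings: (site, vulnerability class). The numbers are invented
# purely to illustrate likelihood/prevalence vs. raw counts.
findings = [
    ("siteA", "XSS"), ("siteA", "XSS"), ("siteA", "XSS"),
    ("siteB", "XSS"),
    ("siteC", "SQL Injection"),
]
total_sites = 10  # sites assessed, including those with no findings

totals = defaultdict(int)          # raw number of findings per class
sites_affected = defaultdict(set)  # distinct sites with >= 1 finding per class

for site, vuln_class in findings:
    totals[vuln_class] += 1
    sites_affected[vuln_class].add(site)

for vuln_class in totals:
    likelihood = len(sites_affected[vuln_class]) / total_sites
    print(f"{vuln_class}: {totals[vuln_class]} findings, "
          f"but likelihood = {likelihood:.0%} of sites")
```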

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

Captcha bypass test.
Don't worry Jeremiah, I'll take care of your blog ;-).

Anonymous said...

I just showed you the power of my CAPTCHA bypass method and the real level of protection of Blogger's CAPTCHA. You got the point ;-).

It is in the context of my future Month of Bugs in Captchas (I'll write to you about it soon).

> Not every website uses a CAPTCHA or needs anti-brute force solutions.
Not every site, but many, and their number is growing. Anti-brute-force, anti-automation, anti-spam, and other protections are very relevant these days.

Anonymous said...

"9. Directory Indexing (1 in 20 websites)"

This stat really surprises me; of the app/website assessments I've done in the last year, I'd put this at about 85%.

"2. Information Leakage (5 in 10 websites)"

This also surprises me - only half of the assessments you do have information leakage? Does that include web server banners or only actual leakage of web app related data? What about things like issuing a 403, confirming a resource's existence but denying access to it?
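As a rough sketch of the status-code signal this commenter is describing - a 403 on a guessed path still confirms the resource exists, while a 404 does not - here is a hypothetical probe. The host and candidate paths are placeholders for illustration.

```python
import urllib.error
import urllib.request

# Hypothetical existence probe: an access-denied response still leaks that
# the guessed resource is present on the server.
CANDIDATES = ["/admin/", "/backup.zip", "/.svn/entries", "/logs/error.log"]

def probe(base_url: str):
    for path in CANDIDATES:
        url = base_url.rstrip("/") + path
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                status = resp.status
        except urllib.error.HTTPError as err:
            status = err.code
        except urllib.error.URLError:
            continue  # host unreachable, skip this path
        if status == 200:
            print(f"{path}: accessible (200) - direct exposure")
        elif status in (401, 403):
            print(f"{path}: exists but protected ({status}) - existence leaked")
        # a 404 tells the probe nothing about this path

if __name__ == "__main__":
    probe("https://example.com")
```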