Thursday, May 31, 2007

Blog interview with Ken from bloginfosec.com

During my travels I get to meet a lot of interesting people from all over the world with passions similar to my own (webappsec/aussie rules/jiu jitsu, the usual). On one such recent occasion I met Kenneth F. Belva after presenting at the 16th Annual NY Metro ISSA conference at the very posh and exclusive New York Athletic Club. After a few minutes of webappsec industry conversation I found out he reads my blog and is also a blogger himself (bloginfosec.com)! Cool. Ken asked if I'd like to do a follow-up, blog-style interview covering my thoughts about CSRF, XSS, their importance, solutions, safe surfing habits, etc. for his readers. The simple stuff that most developers, netizens, and website owners continue to grapple with. Sounded like fun. It's just been posted. Enjoy!

Tuesday, May 29, 2007

Web application scan-o-meter

The new OWASP Top 10 2007 has recently been made available. Excellent work on behalf of all the contributors. As described on the website, “This document is first and foremost an education piece, not a standard,” and it’ll do just that: educate. Last week I provided the project team with updated text (unpublished) that more accurately describes the current capabilities of “black box” automated scanners in identifying the various issues on the list. The exercise provided ideas for the remainder of this blog post: estimating how effective scanners are at finding the issues, organized by the OWASP Top 10.

In the past I’ve covered the challenges of automated scanning from a variety of angles, including technical vs. logical vulnerabilities, low-hanging fruit, and the OWASP Top 10, and on one occasion I threw down the gauntlet. Ory Segal (Director of Security Research, Watchfire) also weighed in with his thoughts via his shiny new blog that’s worth a read. Everyone agrees scanners find vulnerabilities, though most, including product vendors, admit they certainly don’t find everything. But that doesn’t explain nearly enough. Does this mean scanners find almost everything, or just a bit more than some? Where does the needle land on the web application scan-o-meter? Sure, scanners are good at finding XSS. How good? Scanners are bad at identifying flaws in business logic. How bad? This is a very limited understanding, and we could really use more insight into scanner capabilities with some quantification of capacity.

Going by my experience developing scanner technology for the better part of a decade, testing scanners written by many others, feedback from the surveys, personal conversations, and quality time huddled with Arian Evans (Director of Operations, WhiteHat Security), below are estimates of where I think state-of-the-art scanning technology has reached, speaking in relative averages. This is actually a harder exercise than you might think once all things are considered. I invite others to comment from their experience as well, whether they agree or disagree. Should any of the scan-o-meter readings be higher or lower? Is there anything I might not have considered?

Estimates are based on the completed automated results of a well-configured scanner that gets a good website/web application crawl and is able to maintain login state during testing. We’ll also focus only on vulnerabilities that are exploitable by a remote external attacker. In short, a best-case scenario.
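As a quick aside on what “maintain login state” actually demands of a scanner, here’s a minimal sketch under stated assumptions: the URLs, credentials, and the “Sign out” logged-in marker are hypothetical placeholders, not any vendor’s engine. The point is the re-authentication loop; without it, every test after a silent logout runs against the login page instead of the application.

```
# Minimal sketch of authenticated crawling. URLs, form fields, and the
# logged-in marker below are hypothetical, not a real scanner.
import requests
from urllib.parse import urljoin

BASE = "https://app.example.com"
session = requests.Session()  # persists cookies across requests

def login():
    # Log in once; the Session object keeps the resulting cookies.
    session.post(urljoin(BASE, "/login"),
                 data={"username": "scanner", "password": "secret"})

def still_logged_in(html: str) -> bool:
    # A scanner must notice when it has been silently logged out,
    # otherwise everything after that point runs unauthenticated.
    return "Sign out" in html

def fetch(path: str) -> str:
    resp = session.get(urljoin(BASE, path))
    if not still_logged_in(resp.text):
        login()  # re-authenticate and retry instead of scanning a login page
        resp = session.get(urljoin(BASE, path))
    return resp.text

login()
print(fetch("/account/settings")[:200])
```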

A1 - Cross Site Scripting (XSS)










A2 - Injection Flaws











A3 - Malicious File Execution











A4 - Insecure Direct Object Reference











A5 - Cross Site Request Forgery (CSRF)










Identification of CSRF turns out to be fairly easy; filtering out which issues we care about is where automation falls down. That’s what this meter reflects: purely automated results.
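To make that concrete, here’s a rough sketch (my own illustration, not any product’s logic) of naive CSRF “identification”: flag any POST form without a token-looking hidden field. The token name hints are common conventions, not a standard. Notice that both forms below get flagged, but only one of them guards a state-changing action; deciding which findings actually matter is the judgment call automation can’t make.

```
# Naive CSRF "identification": flag any POST form without a token-looking
# field. Field-name hints are common conventions, not a standard.
from bs4 import BeautifulSoup

TOKEN_HINTS = ("csrf", "token", "nonce", "authenticity")

def forms_without_tokens(html: str):
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for form in soup.find_all("form"):
        if form.get("method", "get").lower() != "post":
            continue
        names = [i.get("name", "").lower() for i in form.find_all("input")]
        if not any(hint in n for hint in TOKEN_HINTS for n in names):
            flagged.append(form.get("action", "(no action)"))
    return flagged

html = """
<form method="post" action="/transfer"><input name="amount"></form>
<form method="post" action="/search"><input name="q"></form>
"""
# Both forms get flagged, but only /transfer matters. That triage is
# exactly where purely automated results fall down.
print(forms_without_tokens(html))
```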

A6 - Information Leakage and Improper Error Handling










A7 - Broken Authentication and Session Management










A8 - Insecure Cryptographic Storage










A9 - Insecure Communications










A scanner’s challenge with this issue is putting it in terms of business expectations. Perhaps a website does not need or want SSL in certain places, and that’s OK.
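The mechanical half of the check is easy to automate; the business-expectation half isn’t. A sketch of the mechanical half, with placeholder URLs and a simple password-field heuristic of my own choosing: confirm that pages collecting credentials are served over HTTPS and submit to HTTPS. Whether a brochure page “needs” SSL is a question no script answers.

```
# Mechanical check only: is a login page served over HTTPS, and does its
# form post to HTTPS? Page URL and heuristics are illustrative placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def insecure_login_forms(page_url: str):
    resp = requests.get(page_url)
    soup = BeautifulSoup(resp.text, "html.parser")
    findings = []
    for form in soup.find_all("form"):
        if form.find("input", {"type": "password"}) is None:
            continue  # not a credential form, skip
        action = urljoin(resp.url, form.get("action", ""))
        if not resp.url.startswith("https://"):
            findings.append("login form served over HTTP: " + resp.url)
        if not action.startswith("https://"):
            findings.append("credentials posted to HTTP: " + action)
    return findings

print(insecure_login_forms("http://www.example.com/login"))
```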

A10 - Failure to Restrict URL Access

Friday, May 25, 2007

Intranet Hacking (Take 2) for BH USA 2007

I was just informed by BlackHat that my presentation (Hacking Intranet Websites from the Outside (Take 2) - "Fun with and without JavaScript malware") was selected! Woot! I have some good stuff planned (description below). As always it's an honor to be chosen amongst the industry's top experts. The selection committee has a really tough job wading through a ton of solid submissions. There's going to be a lot going on during the show this year; I can't wait. Presentations, book signings, vendor parties, the WASC meet-up, etc. Time to get working on my slides and demos. :)

Attacks always get better, never worse. The malicious capabilities of Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), coupled with JavaScript malware payloads, exploded in 2006. Intranet Hacking from the Outside, Browser Port Scanning, Browser History Stealing, Blind Web Server Fingerprinting, and dozens of other bleeding-edge attack techniques blew away our assumptions that perimeter firewalls, encryption, A/V, and multi-factor authentication can protect websites from attack.

One quote from a member of the community summed it up this way:

"The last quarter of this year (2006), RSnake and Jeremiah pretty much destroyed any security we thought we had left—including the "I'll just browse without JavaScript" mantra. Could you really call that browsing anyway?"
-Kryan

That's right. New research is revealing that even if JavaScript has been disabled or restricted, some of the now popular attack techniques—such as Browser Intranet Hacking, Port Scanning, and History Stealing—can still be perpetrated. From an enterprise security perspective, when users are visiting "normal" public websites (including web mail, blogs, social networks, message boards, news, etc.), there is a growing probability that their browser might be silently hijacked by a hacker and exploited to target the resources of the internal corporate network.

This year, new and lesser-known attack techniques such as Anti-DNS Pinning, Bypassing Mozilla Port Blocking/Vertical Port Scanning, sophisticated filter evasion, Backdooring Media Files, Exponential XSS, and Web Worms are also finding their way into attackers' arsenals. The ultimate goal of this presentation is to describe and demonstrate many of the latest Web application security attack techniques and to highlight best practices for complete website vulnerability management to protect enterprises from attacks.

You'll see:

  • Web Browser Intranet Hacking / Port Scanning - (with and without JavaScript)
  • Web Browser History Stealing / Login Detection - (with and without JavaScript)
  • Bypassing Mozilla Port Blocking / Vertical Port Scanning
  • The risks involved when websites include third-party Web page widgets/gadgets (RSS Feeds, Counters, Banners, JSON, etc.)
  • Fundamentals of DNS Pinning and Anti-DNS Pinning
  • Encoding Filter Bypass (UTF-7, Variable Width, US-ASCII)

People know about XSS!

I travel extensively speaking to audiences about various topics within the web application security world including vulnerability assessment, intranet hacking / JavaScript malware, statistics, Ajax, myths, best practices, etc. Almost always the content touches upon XSS. Before digging in too deeply technically, I usually ask how many know what Cross-Site Scripting (XSS) is by show of hands. The amazing thing is that recently, with groups of developers, IT folk, infosec professionals, and even CIO/CSO execs, almost always more than half the audience raises their hands! For those with 2-3 years of webappsec experience this may not sound like a big deal, but for those who have been around since, say, 2002, this is huge. Way back then almost no one knew what XSS was, and if they did, they didn't take it seriously. The message is certainly getting out there and knowledge of webappsec is certainly improving. And not a moment too soon. :)

webappsec is hard, because it takes so many

I was having an email conversation with Nick Sivo from Loopt in which he contrasted network and web application security in a way I found very insightful.

"I should mention that many of our hires have come straight out of school. I think it's quite
possible people who have worked in industry for a while have picked up security knowledge, but it would be useful if it were covered in school as a requirement. I think that the biggest problem with web-app security is that it hasn't yet reached the point where a few very knowledgeable people can secure something. This, I think, *has* happened on the network side.

I can hire 1-2 (we're small) people to configure all of my networking equipment to be secure. It's also possible to verify that things are configured appropriately in a finite amount of time. There are still risks like buffer overflows in IOS or IIS, but those seem to be few and infrequent. Most importantly, developers don't need to even think about the network issues, except in rare and well defined cases when they need access rules changed.

On the web-app side, a developer can't really introduce a buffer overflow or a network bug, because they're using C# and don't access things at that low a level. They can't do much to affect network security accidentally. However, they all need to be aware of, and can greatly influence web-app security. 1-2 people can't secure our web application alone. It has to be a joint effort. Even though we use ASP.net, stored procedures (with no dynamic SQL), and some self-built hardening modules (ASP.net plugins that run for every request), I can't be sure I've got even most things covered. A developer can forget to encode something, or forget to check input, or use an exponentially expensive regex to validate user input. For an application to be really, really secure, each developer needs to be able to look at their code and think of all the attack vectors (new ones are discovered weekly). It's nearly impossible to achieve that."

Nick is onto something here. Network security can be handled by a handful of (market-available) people at most any normal enterprise. Web application security, on the other hand, touches so many others (a skill-set that isn't widely available in the market) that it has a drastic impact on overall security. One mistake by any person and bam... well... you know the rest. Imagine that: even if an organization REALLY wants to improve their webappsec, they'll still have a hard time because the skill-set isn't yet out there. This will take time, but it's coming....
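One concrete example of the kind of mistake Nick mentions is the "exponentially expensive regex." The pattern below is a classic catastrophic-backtracking shape I've picked purely for illustration (not taken from anyone's codebase); it looks like harmless input validation, yet a short crafted string forces a backtracking engine into an effectively unbounded amount of work.

```
# Catastrophic backtracking: a validation regex a developer could
# plausibly write, and an input that makes it effectively hang.
import re

# Nested quantifiers like (a+)+ force the engine to try exponentially
# many ways to split the input before concluding there is no match.
pattern = re.compile(r"^(a+)+$")

safe = "a" * 25            # matches quickly
hostile = "a" * 25 + "!"   # the trailing '!' defeats the match...

print(bool(pattern.match(safe)))
# pattern.match(hostile) would take on the order of 2**25 backtracking
# steps in a backtracking engine; one crafted request ties up a worker.
```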

Monday, May 21, 2007

Aussie rules weekend

I love playing Aussie rules football (via the GGAFL). It's fast-paced, hard-hitting, continuous play with a ton of skill. Plus the guys out there are really cool. I've played a lot of different sports and there is nothing else really like footy. Think of it like soccer crossed with football and played sorta like rugby. Certainly not a game for the meek, especially when you play in the ruck. Take for example this morning's league email:

"During the second game on Saturday Oakland's newly appointed captain Benny Atkins was suffered a leg injury. Today he went to the doctor and was diagnosed with a fractured fibula. That was the lovely cracking sound we heard as he went down. In some ways it is a blessing as the fracture was clean and should fully heal in 5 to 6 weeks. Certainly better than the knee injury we all feared but disappointing just the same. "

It happens.

Friday, May 18, 2007

Month of Bugs (Search Engines)

MustLive is organizing a MoB for Search Engine Bugs starting in June.

"Purpose of this Month of Bugs is a demonstration of real state with security in search engines, which are the most popular sites in Internet. To let users of search engines and web community as a whole to understand all risks, which search engines bring to them. And also to draw attention of search engines’ owners to security issues of their sites.

During the month everyday will be publish vulnerabilities in most popular search engines of the world. Cross-Site Scripting vulnerabilities in particular. Everyday will be publish vulnerabilities in different engines (minimum one publication at a time, but there will be bonus publications also)."

The comments on RSnake's post answer several of the frequently asked questions.

XSS Book in stock on Amazon!

Our publisher, Syngress, has informed us that Amazon is finally shipping Cross Site Scripting Attacks. Woot! Of course I ordered a bunch for myself and am waiting for the copies to come in. With any luck several of the authors will be holding a book signing at Black Hat Vegas as well. Half of me is really excited to see the reviews while the other half is suffering from anxiety. :)

Monday, May 14, 2007

HTML 5 in the works

For those into bleeding-edge web application security research, HTML 5 is currently being worked on. Looks like the working group is adding a lot of functionality to the specification and, amazingly enough, paying attention to security. Right on. "XSS" can actually be found in several places in the document. Perhaps this is a good time to start theorizing and developing new attack techniques and defensive measures, and having a look at areas likely to suffer from implementation flaws when supported.

Risky Business Podcast

Last week I did a second podcast interview with Patrick Gray of ITRadio.com.au. We primarily chatted about “the ethics of Web application security research, and liability concerns for consumers who bank online.” The message is starting to get out there about the new issues we face in webappsec with respect to disclosure and discovery. The game is definitely changing as people become aware. Unfortunately there’s no easy answer to the challenges involved. We can only continue participating in the dialog and hope that common sense practices prevail.

Also, I must have missed the Risky Business RSS feed the first time around; they have some good-looking content available that I’ll be trying to catch up on.

Web Application Security for CIOs

Last week an article appeared in CIO, An Introduction to the Murky Science of Web Application Security, in which none other than Simson Garfinkel interviewed yours truly. Simson is a notable name in the information security arena with a reputation for being VERY direct with vendors, he authored the first book I ever read on web security, and most importantly he really knows his technology. Oh, did I mention that he admits he’s never been a fan of penetration testing? Going in I knew I had my work cut out for me, but I was excited by our first meeting and didn’t know quite what to expect.

What you might find interesting about the article are the descriptions of the types of vulnerabilities we routinely identify and the odd situations we encounter when doing so. For example, when vulnerabilities spontaneously open and close from scan to scan, or when they reopen months later for no apparent reason. Overall Simson did a really good job highlighting the more important aspects of the field in an easy-to-understand way that CIOs can digest. Then this quote really made my day:

"I’ve never been a big fan of penetration testing, but the two hours that I spent talking with Grossman convinced me that it’s a necessary part of today’s e-commerce websites. Yes, it would be nice to eliminate these well-known bugs with better coding practices. But we live in the real world. It’s better to look for the bugs and fix them than to simply cross your fingers and hope that they aren’t there."


Here come the "rolling" scanner reviews!

It’s been too long since the web application security industry had a good in-depth review of the various vulnerability assessment solutions available. And never have any in the past included software-as-a-service models like ours from WhiteHat. Network Computing's Strategic Security: Web Applications Scanners review plans to test products from Acunetix, Cenzic, N-Stalker, SPI Dynamics, Syhunt Technology, Watchfire and WhiteHat Security. Thankfully they have Jordan Wiens conducting the reviews rather than someone with extremely limited domain knowledge. For those who recall, Jordan is not your average journalist. I personally got to see him win Security Innovation's Interactive Testing Challenge web hacking competition. This should be really interesting to watch unfold!

THE TEST BED:
We chose three applications from volunteer organizations to test our Web app scanners. All are relatively simple Web apps in use for real-world functions, and were built using a variety of development tools and platforms.


Our first application was written in C# using Microsoft's ASP.net with Ajax (also known as ATLAS) and deployed on IIS 6.0. The second was developed using the LAMP stack (the combination of Linux, Apache, MySQL and PHP), and the third was written in Java and deployed with JBoss and Tomcat on Linux.


None of the applications has received a security audit, either at the source-code level or using external scanners. Throughout the process, all scanning applications will be leveled at the same applications--any changes to fix security vulnerabilities found in production systems will be left off test instances that are used for future scanning, to ensure that each product and service has the same potential vulnerabilities to find.


Note that no vulnerabilities were intentionally added or seeded into an application. The applications will be scanned exactly as they existed in the wild at the start of the review.

Friday, May 11, 2007

The Web certainly isn't over phish'ed

Recently RSnake interviewed a real live phisher who goes by the name “lithium” (followed up by Dark Reading). We can't verify any of the claims, but everything seemed reasonably believable. Well worth a read, especially the question at the end where RSnake gets right down to it:

"Are there any anti-phishing deterrents (tools or technology) that make life as a phisher harder?

Oh sure, There are many things that make pishing harder. But since Internet Explorer 7 and firefox 2 have implemented an antiphishing protection, Those two cause the most irritation. "

CSI Article, something for everyone

Sarah Peters (editor for the Computer Security Institute) published a great article entitled "AJAX and Hijacks - Web 2.0 is growing up. And we’re not ready". Sarah discusses the major issues within web application security in a clear and concise way (very hard to do), including JavaScript Hijacking, AJAX (in)security, CSRF, XSS, statistics, intranet hacking, and the ethical/legal debate surrounding vulnerability discovery and disclosure. The technical details are deep enough to convey the finer points without going overboard and losing the reader. Excellent stuff to send around to industry peers looking to get up to speed. Normally this is paid content only available with a CSI membership, but I asked them to open it up to a wider audience. With their permission we're able to host the content for free! Thanks CSI!

Chasing Vulnerabilities for Fun and Profit II

This post comes via our Spring 2007 newsletter which also includes various industry articles and events/conferences we'll be attending. Full download.


At WhiteHat Security we spend our time hacking the world’s largest and most popular websites (a really cool job). The issues we uncover, if exploited by an attacker first, could easily result in huge financial losses or devastating consequences for a consumer brand. Thousands of these issues can be chalked up to Cross-Site Scripting (XSS) and SQL Injection, but the most technically fascinating discoveries are flaws within application business logic. These discoveries include high-severity vulnerabilities through which we could purchase laptops for a dollar, see other customers’ order information, liquidate bank accounts, and more. These issues are interesting because no two are exactly alike, and challenging because they are often so complex that they must be found through expert analysis. And, as I wrote in “Chasing Vulnerabilities for Fun and Profit I,” as a spontaneous office activity a few of us race to find the first and best vulnerability in a never-before-seen website. Special praise is given to those who find the clever logical flaws.

More often than not, the first medium severity vulnerability is identified inside of two minutes if it is cross-site scripting, and 20 minutes if it is a high severity issue. The best and most competitive races occur when the website is relatively secure and no easy SQL Injection is present. These races last longer and the results become more interesting. To win, the WhiteHat Security Operations Team has to find a business logic flaw by determining what functionality is supposed to be accessible by a user and then trying to find holes that allow him to subvert the application design. The results depend on the target website and its set of functionality.

For example, the target on a particular day was a large e-commerce application service provider (ASP). The ASP provides a technology platform to sell shrink-wrapped software. Let’s say that a customer wants to buy a particular piece of software. He would click “buy” on the software vendor’s website and then be transported to the ASP’s website to complete the transaction. The ASP’s website is “skinned” identically to the software vendor’s in order to maintain brand identity. After the purchase, the customer downloads his application and license key, or may alternatively request that a CD be shipped to him.

Hundreds, perhaps thousands, of software vendors depend on this system for order processing. Imagine all of the data that the ASP is protecting. To buy software, a customer must provide his name, address, telephone number, credit card information, and expiration date. Plus, the ASP stores software license keys. This is all highly valuable data for identity thieves and software pirates. A compromise of this data would not only be devastating to the ASP, but also to all of the software vendors that depend on the systems. As one might expect, our goal was to see if we could compromise this data through the web application. Better that we discover the flaws before attackers exploit them.

The ASP informed us that they had taken the usual security precautions including SSL, firewalls, and patched systems. Their Web application security program called for annual Web application vulnerability assessments during which holes are identified and reports generated. The developers and security staff resolved the issues within a reasonable amount of time and business continued. As is common in most e-commerce operations, the security impact of weekly code updates was an unknown. With every new line of code, the risk of introducing new vulnerabilities increased, as did the possibility of jeopardizing the entire system. Also, the accuracy of the last assessment report decayed each week after its completion until it became useless.

At the time WhiteHat stepped in, the ASP was about 90 days and 12 code pushes since its last assessment--more than enough time for new issues to be introduced, and perhaps a few missed issues from the previous assessment to come to light. With the team’s browsers ready to go, the starting URL was called out and the speed typing commenced. Everyone was feverishly going for the quick XSS win. In usual fashion, everyone went after the search box, but to no avail. Someone at the ASP had found and closed that issue long ago. Any place that echoes user-supplied data will do.

At roughly 30 seconds in, Theodore “T.C.” Niedzialkowski (Senior WhiteHat Security Engineer), with fists in the air, shouted “scripting!”, as in Cross-Site Scripting. T.C. was quick to spot what has to be the second most common location for XSS, the “Not Found” page. Many 404 handlers do not properly HTML-encode the URL before echoing that a user’s request was not found. With the first contest over, Bill Pennington (WhiteHat vice president of services) and I, not wanting to be shut out, began the hunt for a high severity issue.
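For readers wondering what that 404 mistake and its fix look like, here's a minimal sketch of a hypothetical handler (not the ASP's actual code): the difference is simply whether the requested path is HTML-encoded before being echoed back into the page.

```
# Hypothetical 404 handler. The classic mistake is echoing the requested
# path into the page without HTML-encoding it first.
import html

def not_found_vulnerable(requested_path: str) -> str:
    # '<script>...</script>' in the path lands in the page verbatim.
    return f"<h1>Not Found</h1><p>No page at {requested_path}</p>"

def not_found_fixed(requested_path: str) -> str:
    # html.escape() turns < > & " into entities before echoing.
    return f"<h1>Not Found</h1><p>No page at {html.escape(requested_path)}</p>"

probe = "/missing<script>alert(document.cookie)</script>"
print(not_found_vulnerable(probe))
print(not_found_fixed(probe))
```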

WhiteHat’s Security Operations Team is NOT made up of Swordfish-movie, Hollywood-style hackers who hack via 3D AutoCAD. We are not the types to resort to script kiddie tools. The fact of the matter is that most scanners simply do not work fast enough on custom websites anyway, and they absolutely cannot identify complex logical flaws, at least not without massive upfront configuration. And, they certainly will not provide results within our timeframe of a few minutes. Most of the time someone will claim victory using Firefox, an HTTP proxy, and their gray matter.

Five minutes into the hunt the silence became eerie. No one was saying a word. Instead, we were all immersed in concentration, which meant that no one had yet hit pay dirt. Several of us were finding easy Information Leakage issues through which the system revealed internal paths and software version numbers in error messages and HTML comments. These are issues worth disclosing, but certainly not high severity issues. We tried testing credit card numbers, flipping product IDs, and swapping template parameter values, but nothing was working. There was no login to the system either, just a simple sessioning system for product purchasing. Not much to work with, and I was running out of application functionality real estate. I would have to start tracking back over my work. Someone had done good work on this system.

Suddenly, Bill cried “owned” – slang in our world for compromising a website. The rest of the team begrudgingly stopped typing and huddled around his laptop to see what he had found. He showed us how he was able to jump into different software vendor stores and access any customer order, complete with identity information and serial numbers. Game over. No doubt. But the question was how he was able to do this.

The issue turned out to be one of those complex and multi-layered business logic flaws. The interface to the Web application was driven by a URL parameter called “template.” The value of each template had a numeric ID. I had come across this and cycled through a few numbers, but nothing interesting came up. Bill did a fairly exhaustive rotation and one particular number turned up something revealing: a page entitled “customer order service center.” On this particular page there was a search form with an entry for an order number. Apparently this was the interface that customer support uses to track orders when there is a problem. With a valid order number, we could see whether it would bring up the order.

Bill then headed over to the store interface and started the purchase process. He checked his cookies and inside there was an “order” parameter with a long numeric string. He pasted the string back into the search box, but the search turned up empty, likely because he had not actually bought the items yet. So Bill deleted his cookies and entered the order process again to get a new number, and continued this process a few more times. By looking at five order numbers, we could see that the numbers would increment, but not exactly sequentially. From there, plugging nearby numbers into the order search form pulled up other customers’ records. In the summary meeting, we were told that the issue must have been in the software for years and never uncovered by anyone.
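Bill did all of this by hand with a browser and a proxy. Purely to illustrate the class of flaw (insecure direct object references behind roughly sequential IDs), here is what the same probe looks like scripted; every URL, parameter name, and marker string below is a hypothetical placeholder, not the ASP's application.

```
# Generic illustration of the flaw class: roughly sequential order IDs
# plus a lookup form that never checks ownership. URL, parameter name,
# and marker string are hypothetical placeholders.
import requests

LOOKUP = "https://orders.example.com/service-center/search"

def probe_orders(seen_order_id: int, spread: int = 50):
    hits = []
    for candidate in range(seen_order_id - spread, seen_order_id + spread):
        resp = requests.get(LOOKUP, params={"order": candidate})
        # A real check would parse the page; here we just look for a
        # marker string that only appears on a populated order record.
        if "Order Details" in resp.text:
            hits.append(candidate)
    return hits

# Starting from one order number observed in our own cookie, nearby
# values turn up other customers' records.
print(probe_orders(1048572931))
```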

In under ten minutes, we uncovered a handful of vulnerabilities ranging from low to high severity. T.C. took the speed hack prize, Bill the best hack, and I struck out. On any given day, any member of our Security Operations Team can take the prize. WhiteHat’s website vulnerability assessment methodology allows us to combine our experts’ strengths with the findings of our proprietary scanning technology to ensure that we provide timely, comprehensive vulnerability coverage of our customers’ sites, with the lowest possible false positives.

It is important to understand that it is not enough to find a handful of vulnerabilities. Hackers only need one small hole in order to get in and wreak havoc on a business. The security professional needs to be confident that he has a comprehensive and up-to-date overview of his website’s vulnerability status at any time. Most Web application vulnerability assessments require thousands of tests. So, an entirely manual process, while comprehensive, is neither practical nor desirable. Nor is a purely automated process going to work because scanners can only test for about half of all the vulnerability issues present in a website and completely miss critical business logic flaws. Therefore, the best approach to Web application security vulnerability assessment is an iterative solution that combines the best of both worlds, the automation of a scanner and the experience of an expert. The results are both timely and comprehensive. When we are not racing, this is what we do for a living. Every day.

Tuesday, May 08, 2007

Hawaii Five-O TV Show 1st Season Intro

A co-worker asked the WhiteHat founders (Lex Arquette and myself) "why" we founded the company. Anyway, being that we're both from Hawaii and that everyone here seems to be a comedian, he forwarded this along:

phishing solution, .bank tld, Riiiiiiiiight!

Mikko Hyppönen, Chief Research Officer of F-Secure, published an article entitled "Masters of Their Domain" (with /. coverage) suggesting a phishing solution in which financial institutions would be served from a reserved .bank TLD. Oh, and registration would also be expensive ($50,000) in order to keep phishers away. The logic goes that users would be assured that .bank sites are safe to conduct business with. OK, leaving aside browser vulnerabilities, potential flaws in the domain registration system (like the SSL cert system), and website vulnerabilities.... you can't be serious!?!?

The users who are getting phished are not those analyzing the domain name in the URL, reading the SSL certs, or even double-checking links before they click. The users who are getting phished are the same ones who would ignore a big red banner on the page that says "THIS IS A PHISHING WEBSITE!" And statistically that's A LOT of people, and a .bank TLD isn't going to help them.

We really need a place on the Web where stupid ideas go to die. I bet I could donate several of my own.

Report available for WASC's Distributed Open Proxy Honeypot Project

Ryan C. Barnett, WASC's Distributed Open Proxy Honeypot Project Lead, released his first Threat Report! This is wicked cool stuff. For those not familiar with this project:


“This project will use one of the web attacker's most trusted tools against him - the Open Proxy server. Instead of being the target of the attacks, we opt to be used as a conduit of the attack data in order to gather our intelligence. By deploying multiple, specially configured open proxy server (or proxypot), we aim to take a birds-eye look at the types of malicious traffic that traverse these systems. The honeypot systems will conduct real-time analysis on the HTTP traffic to categorize the requests into threat classifications outlined by the Web Security Threat Classification and report all logging data to a centralized location.”



That’s right: the ability to view, analyze, and measure real, live web attacks. Ryan has put in a lot of work and coordinated proxies in 7 locations around the world (Moscow, Russia; Crete, Greece; Karlsruhe, Germany; San Francisco, USA; Norfolk, USA; Falls Church, USA; Foley, USA). Time to start migrating away from the theoretical or hypothesized conversation about what the “bad guys” might be doing. Here is a taste of attacks found in the wild:

- SQL Injection Attacks
- Brute Force Attacks
- OS Command Injection
- Web Defacement Attempts
- Google-Abuses (Google-Hacking and Proxying for BannerAd/Click Fraud)
- Information Leakage


Obviously the more sensors that are available, the better the chance of capturing juicy data, and the group is already set to grow. And as Ryan notes in the report, there are a lot of interesting and challenging aspects to this project that could really use some good people to solve them. If you would like to contribute to this project, please contact Ryan Barnett RCBarnett_-at-_gmail.com.

Monday, May 07, 2007

Web Application Security Professionals Survey (May 2007)

Update (5/14/2007)
The results are in and the webappsec professionals have once again spoken! 62 respondents shared their opinions and in doing so presented interesting perspectives and insights into a larger world. Thank you to everyone who took the time to submit, and please let me know if you have any burning questions that need to be asked next time.

My Observations
1) According to Q7 and Q10, respondents are under no illusions about the current state of web security. It’s a sobering reality that no one is under the impression that things have improved significantly. Instead 71% believe web security has either stayed the same or perhaps only improved slightly during the last 12 months. And nearly 1/3 think the current state may have actually gotten worse as defensive measures have not kept pace with new attack techniques. This makes one wonder if the term “web security” has become an oxymoron. Fortunately about half of the respondents have reached the Acceptance stage (47%) of their psychological grief, while most of those who remain are fighting through Bargaining (27%) or Depression (14%).

2) Judging from Q11 and Q12, nearly 1/2 of all respondents believe financial institutions represent the most secure industry vertical (encouraging), and that the most secure websites are generally developed in either .Net (31%) or Java (28%). Virtually no love for PHP or my favorite language, Perl. :( We need to be careful about drawing conclusions from these statistics. A significant number of people admitted they were just guessing. Personal experience was often limited to only one or two verticals and platform languages; respondents hadn’t tested in-depth across the gamut. This means what works, what doesn’t work, and who’s doing the best job continues to be a blind spot. This is unfortunate because we’d like to learn from them. As we grow the pool of websites we assess, WhiteHat might be able to shed more light on this area through our Web Security Risk Report.

3) Q2 makes it pretty clear that respondents believe most developers are clueless when it comes to web security. Developers carry a heavy burden, as the vast majority of people consider them to be the first and only real line of defense. They routinely have unrelenting deadlines to meet, go without proper training, and lack technology assistance. Then we expect developers to know all the issues and not make mistakes. But if you take a step back, even we the experts have trouble keeping up with all there is to know. Unfortunately for them, until web security improves dramatically, developers are likely to continue being the whipping boy since there is no one else to blame. Oddly enough, the things that might significantly improve web security, such as modern development frameworks, have little to do with education. But hey, maybe they can take the credit for that anyway!

4) Q3 was surprising, as about 1/4 of respondents said they had a strong technical understanding of DNS-Pinning and Anti-DNS-Pinning. Further still, another 60% said they possessed at least some familiarity with the concepts. Impressive. I would have thought these numbers would be lower. Looks like a few good bloggers have really gotten the message out. On a similar question (Q6) I was also surprised by the relatively even split *snicker* across the Response Splitting range when it came to exploitability. There isn’t a great deal of consensus about the risks involved. I can’t help but think some 2/3 of us remain confused by the finer points of the attack. This would make sense considering recent mailing list threads.

5) As with the last survey, a high percentage of respondents (72%) are positive about Web Application Firewalls (WAFs) and optimistic enough to give them a look. My guess is people are figuring out that there are a lot of websites out there, more bugs than we can fix in a reasonable amount of time, and security guys need to gain some control besides hoping for help from some “clueless” developer. Or I could be wrong and this is just a defense-in-depth mindset, but that's doubtful considering that even RSnake recently had a change of heart. However, WAF vendors have a lot of proving themselves to do before they gain the trust of the masses, capitalize on the current optimism, and move beyond the early adopters.

Description
Several people have asked where the surveys have gone over the past several months. The answer is that I've been amazingly busy the last couple of months and simply haven't had the time. The survey helps us learn more about the web application security industry and the community participants. We attempt to expose various aspects of web application security we previously didn't know, understand, or fully appreciate. From time to time I'll repeat some questions to develop trends. And as always, the more people who submit data, the more representative the results will be. Please feel free to forward this email along to anyone that might not have seen it.

Guidelines
- Survey is open to anyone working in or around the web application security field
- Answer the questions in-line and if a question doesn’t apply to you, leave it blank
- Comments in relation to any question are welcome. If they are good, they may be published
- Email results to jeremiah __at__ whitehatsec.com
- To curb fake submissions please use your real name, preferably from your employer's domain
- Submissions must be received by May 14, 2007

Publishing & Privacy Policy
- Results based on aggregate data collected will be published
- Absolutely no names or contact information will be released to anyone, though feel free to self publish your answers anywhere

Last Survey Results
January 2007

Questions

1) What type of organization do you work for?
a) Security vendor / consultant (53%)
b) E-Commerce (9%)
c) Healthcare (0%)
d) Financial (9%)
e) Government (14%)
f) Educational institution (5%)
g) Other (please specify) (9%)



2) From your experience, how many web developers "get" web application security?
a) All or almost all (0%)
b) Most (0%)
c) About half (19%)
d) Some (59%)
e) None or very few (22%)


"The few that get it are the ones that have seen and played with WebGoat."


3) What is your technical understanding of DNS-Pinning and Anti-DNS-Pinning?
a) Strong (26%)
b) Some familiarity (60%)
c) I've heard of these (12%)
d) Eh? (2%)



"Some Googling and I am all fixed up here, but I had not heard of DNS-Pinning prior to taking this survey."



4) Do you click on links sent in email?
a) Never (27%)
b) Sometime (68%)
c) Always, I fear no link (5%)




"If I want to follow a link in email I copy and paste the link text and then visually make sure the link is what I think it is. I suppose this practice is susceptible to stored XSS attacks, but I never even do this much if I have the site in question is one where I can move my money around. "

"never. I check every link in status bar and copy-paste it to text file first."



5) Your recommendation about using web application firewalls?
a) Two thumbs up (13%)
b) One thumb up (59%)
c) Thumbs down (15%)
d) Profane gesture (10%)
e) No Answer (3%)



"They are a good short term measure, but there is nothing like having the application code written right in the first place to ensuring the security of any web application."

"They're not yet to the point where they help more than they hurt. A security aware development process and proper training can work much better. I'm open to the idea as they improve though."

"I do like the idea of WAF, since different independent layers of protection is what makes an application tend to be more secure. However what still is insufficient is the implementation of WAF we have at the time. I would like to point on this blog entry I wrote a few days ago, instead of re-phrasing it all again." http://christ1an.blogspot.com/2007/05/real-effectiveness-of-current-waf.html

"My exception: If inbound/outbound traffic from a hosted web application inside of a datacenter can be split so that inbound traffic (GET's, POST's) is unaffected by the WAF, and outbound traffic (Server responses) is protected - then I'm ok with implementation of a WAF. See: be conservative with what you send and liberal with what you receive. I do not believe or agree with RSnake's exception. Companies should not implement WAF's in order to buy time or because their website has 14,000 vulnerabilities. They should take down the website if it's
really that bad."

"Using a web application firewall can separate some security concerns from the business logic, but they are no substitute for good security practice. For instance, Amit Klein's algorithm to help mitigate the PDF XSS from the server is best implemented by a web application firewall, but its use is not a license to potentially open ones site to SQL injection by failing to used prepared statements on the assumption that the application firewall will provide adequate protection."

"Not as a catch all but as another synergistic layer of control. Additionally, I have recommended it to people as a stopgap measure where there site is completely riddled with obvious holes and the number of developer hours required to fix are very high."

"Despite potential shortcomings in WAF implementations, they're far more targeted towards the web application threat domain. Traditional IDS/IPS has not performed very well in this area."

"I am a purist that wants to see the code fixed but understand that that is not a viable solution for some/most clients. They could be the "savior" of web apps or as useless as your typical IDS. When implemented properly the are useful defense in depth i guess. "

"The products seem to be maturing. I think the hurdle now is now education. What they can and can not protect, and when they make sense in a deployment."



6) From your experience, what is the typical risk level of Response Splitting exploitability?
a) High (23%)
b) Medium (34%)
c) Low (43%)




"Severity ratings are highly dependent on individual factors. Rating systems (from the H/M/L basic ones to the complex CVSS, etc) are not scientific enough for my uses. In my opinion, all vulnerabilities are critical and require remediation. Prioritization during the remediation process should rarely be used because fixes should be done in parallel with as many resources as it takes to get each done as close to immediately as possible."

"On a scale of 1-10, where 10 is remote root-level compromise of a system, these would fall into the 4-6 range with things like XSS and SQL injection. It depends on the deployment. Web vulns may (or may not) have less of an impact to an Enterprise or hobbyist site than they would to a service that is used by millions of users (Amazon, MySpace, etc)."

"A related vulnerability that's probably easier to exploit is response header injection. I'd give that one a Medium."


7) How has the security of the average website changed in the last 12 months?
(Take into consideration new attack techniques and defense measures)
a) Way more secure (0%)
b) Slightly more secure (33%)
c) Same (38%)
d) Worse (29%)
e) No idea (0%)

"Many defense measures do not seem to be deployed, and many of the new attack techniques constitute a sub-set or refinement on old, known vulnerabilities which web developers should already have an inkling of an understanding about."

"The researchers are far out pacing the remediation in most sites. I also think that we haven't begun to see CSRF hit on a real scale yet. "

Worse
Considering the following:
- A new attack vector was discovered that bypass the built in .NET validation. This alone had a potentially global impact for any developer/company that relied soley on the built in validation. The patch won't be released until June 12th, just over 2 months after the public disclosure.
- PCI escalated XSS vulnerabilities to Level 4 (Critical).
- The mhtml vulnerability in IE 7 still hasn't been patched.
- The "onunload" entrapment vulnerability.
- You get the idea...




8) Do you plan to attend BlackHat Vegas of Defcon this year?
a) Yes (23%)
b) No (51%)
c) Maybe (26%)




" I'd like to attend Dinis Cruz's ASP.NET exploits training."


9) Are hacking contests, like Hack a Mac at CanSecWest, a good idea security-wise for the industry?
a) Yes (58%)
b) No (11%)
c) Somewhere in between (please describe: 1-2 sentences)




"The media attention I think is a good thing. Can be a needed reminder about security for people outside of the industry before the next real break in makes the front pages of the WSJ again."

"don't see much point other than showing off your back pocket 0days"

"It depends one the motive of the contest, sometimes it is good to give people an reason to look at a new technology. But the problem with many of these contests is how they are run in the background. The Hack a Mac contest is an example of how one of these contests can be run badly."

"They increase public awareness of an oft ignored problem, but do little to actually make things more secure in and of themselves."

"They generally don't UNCOVER new vulnerabilities - they usually just publicize existing vulnerabilities that are out there. By making an exploit public, they end up giving the exploit away for free to groups that otherwise might not have had it. (Of course, I understand the other side of the coin - full disclosure forces the vendor into fixing it.)"

"They aren't a bad thing, though there are better ways to accomplish the goal of improving security in the industry if some folks would be wiling to sacrifice the headlines and notoriety."

"The Hack a Mac at CSW served as a reality check for Mac users and pundits and so in my opinion served a greater purpose than most contests. This has a one-time value. In many cases these contests are launched as marketing campaigns to by vendors who want to prove that their product is "unhackable" and these are of no value whatsoever."

"On one hand, from my point of view it's never good if you have to break into something just for having a cool show, even if it's in an isolated environment but on the other hand sometimes it's the most efficient way to show (in this case) non-webappsec-professionals that we're not joking when we say that there are serious problems out there."

"To paraphrase Schriner it's security theater. But sometimes that's good. With any new client i use a little FUD magic tricks up front to get their interest up. That said using unsecured networks with 0-days on um isn't such a good idea. Hack a Mack you guys ROCK! :( "

"It depends on what side of the fence you are on regarding public disclosure. I can understand both sides of the argument. Yes, it makes an exploit publicly known before the vendor has a chance to create a patch for it. Yes, public disclosures to prompt vendors to be more responsive to releasing patches for discovered vulnerabilities."

"I could go on for a long time about this. I do think that public hacking both increases and decrease risk. In the very short term, it exposes issues for which fixes take time and then he consumer/customer is exposed to risk in the interim. But in the long term, 1) if the good guys don’t do it, the bad guys will and 2) vendors won’t otherwise be motivated to change/fix…this is just the way free market capitalism seems to work."

"Security by Obscurity sucks! Incentive for motivating people is always good and some people sometimes like the attention. Would Dino Dai Zovi have found the bug had it not been for the publicity? Maybe down the road but not sooner."


10) What is your stage of web application security grief?
a) Denial (5%)
b) Anger (8%)
c) Bargaining (27%)
d) Depression (14%)
e) Acceptance (46%)




"I'm far to academic in my thinking and everybody's just trying to recover their existing stuff rather than using good practice in engineering."

"Depression (if you catch me at the bar after work or a conference)"

"Acceptance. I was there from day one. Clients are always assumed to be evil. My pain point is in hiring. Most web developers I've seen couldn't construct an HTTP request/response if their life depended on it. They nearly always lack the technical knowledge to adequately defend themselves."

"Amazingly enough this the one part of my job that doesn't give me much grief. Anytime I find an issue, I open up a bug ticket and our developers make the recommended changes pretty rapidly. The anger part is in regards to the fact that simple problems continue to occur in new
applications even though they were highlighted as security issues and addressed as such in previous applications. I.e. Input validation..."

"It is not really grief anymore, it more a realisation that my job is going to be safe for sometime to come and my family will be provided for on the development mistakes of others.."

"The only stage I haven't been through is denial. I jump-started to the anger stage when my day-to-day existence became centered around trivial vulnerabilities in unused and unknown web applications. I've come all the way through to acceptance because there's a lot of smart people thinking about this space and bringing innovations to both attacks and defenses. Web application security is definitely more legit than it was 5 years ago. "



11) What is the most secure website industry vertical you encounter during vulnerability assessments?
a) Financial (47%)
b) E-Commerce (6%)
c) Healthcare (6%)
d) Government (0%)
e) Adult Entertainment (11%)
f) Gaming/Gambling (3%)
g) Don't know (14%)
h) Other (please specify) (14%)

"Overall, financial industries seem to be the most secure in my experiences. I believe that fear from regulatory compliance issues have prompted these companies to dedicate vast amounts of resources to deal with these problems. Healthcare definitely seems to be the worst, and I think that is a direct result of HIPAA lacking clarity in the spec and "teeth" in the enforcement."

"Not that they're perfect, but a few of them have really come a long way in creating mature processes. I feel like they have a head start on the other industries at this point."

"there are no most secure website industry in the web. There are a lot of sites with pdf files ;-)."

"From my point of view it has nothing to do with the industry a company is in, it has to do with popularity on the Internet. For example google.com is very secure if you compare it with other nearly unknown search engine sites. This is because that many people try to attack such big players and so the risk of a successful attack is much higher and that's why this companies have to do much more to stay secure, independent of the industry they're in."

"Those that conduct business via a verble agreement and a handshake. Other than that..."

"The pr0n industry is always ahead of the curve with tech."

"Most of the assessments I do are on finance based applications. They (the finance industry) are starting to understand (thanks to the TJX security breach), that one mistake could get them sued right out of business."

"I mostly just assess my own company's sites."

"I don't touch enough verticals to say in general"

"I really don't have much to go on here. The stuff I look at is mostly the global online services world such as what Windows Live or Google would provide. Having looked at financial applications about six years ago, I know at least they've gotten magnitudes better than those days, at least the ones I personally use."

"I've been Web app testing for coming on 7 years, back then my first web app test was a internet bank, those early adopters set a pace for the web app security race. I've found that many other industries are just playing catchup. Some are really close behind, but some are just plain lost, hell some companies took the wrong race course route and have ended up in the web app security equiv of downtown Baghdad fighting a loosing battle with the XSS insurgents.."

"Hmm...I'd like to say A because they were the first to really follow through on some security initatives (not that they already have the incentive of heading off potential exposure). However, it's possible that B can be the top because people (particularly companies) seem to be pretty aware of the fact that they need to transmit the credit card numbers securely or likely get burnt like all these companies have in recent years (i.e. TJX as the most recent victim). Then again, E and F could be the most secure (F because it really has one of the most to lose) because they tend to trailblaze new technologies (i.e. VCR and Internet). C is likely up and coming thanks to HIPAA but they are trying to go too far with leaving all patient records online. Ok this is long but I guess you could put me down as a G answer because I really don't know what is the best at the moment."



12) From your experience, what development technology is present in the most secure websites?
a) PHP (3%)
b) Java (28%)
c) ASP Classic (0%)
d) .Net (31%)
e) Cold Fusion (0%)
f) Perl (0%)
g) Don't know (19%)
h) Other (please specify) (19%)

"I simply have no clue ;) I've never spend much time with other languages than PHP and the only thing I can actually prove is that the latter is insecure. Probably all the others have very similar problems but there isn't that much on the news."

"What's about clean old HTML? These web applications are very secure as you know ;P"

"COMMON SECURITY SENSE and trust but VERIFY!"

"Java - If i had to pick one ... only because there are some tools in this space that make it hard to shoot your leg off. That said everything can be coded poorly and/or designed poorly. Garbage in Garbage out."

"Ruby or Rails"

"Ruby on Rails!!!!!!!!!!"

"A killer one again. I'd like to say B or F (although User Input without sanitization is a real killer for this language) but I really don't know...I'd have to say G."

"Some of the most secure applications I've tested have been .NET, but this doesn't mean the developer can't/didn't implement something incorrectly that could cause a major security issue. One of the sites I tested was considered by the companies developers and management to be "super" secure. It was for the most part, except the application allowed for uploading pdf files. The developer made three minor mistakes, but when put together those mistakes were huge. Mistake #1 the developer only validated the file type on the client (easy enough to bypass). Mistake #2 the directory the files were being upload to was in the application directory structure so files could be navigated to by changing the url. Mistake #3 the developer set the execute permissions in IIS on the uploads folder to match that of the application root (scripts and executables). It took about 2 minutes to discover this, upload an .asp file and dump the contents of the server's C:\ to my browser. Technology and frameworks are great, but when developers make small mistakes in implementation, they can result in huge issues."

"I know that our .Net sites are significantly more secure than our ASP Classic sites. However, I've seen an extremely secure ASP Classic site once (a bank that was using a purchased web app written in ASP Classic)."

"D - of course I'm biased and that makes up about 90% of what I see ;)"

"Python or Ruby. they are more secure for a lot of reasons, but the most strong one is that they are not as common as .NET or PHP apps."

"HTML, pure html. Even in this case it is possible to find vulnerabilities (in client-side scripts - javascript, etc.), and I had found many such holes, but pure html is the most secure variant."

"(maybe that's not fair to Java, but the .Net apps I've seen generally are not as complex so they've been a bit more secure)"

"This question sucks and is counter productive. People keep wishing to get magic answers about secure technologies. The problem is not in the technologies (at least not the decent ones... java/.net/php/etc. - I'm leaving out things like SSI) - the problem is with the coding. Saying that a certain technology is more secure is just going to make ppl to think they can slack their code security cause "java is secure" or whatever. bad, bad bad question."

"Other, I think the less well known stuff like Rails and Django have smaller, more technical, more security savvy user bases"

"h (can't say - all depends on programming approach and security awareness)"

(INSECURE) Magazine Issue 11

(INSECURE) Magazine Issue 11 is now available for download. Normally I don't do "check this out" posts without commentary, but in this case I'll make an exception. This mag is one of my personal favorites and always has some good quality content.

Contents:
  • On the security of e-passports
  • Review: GFI LANguard Network Security Scanner 8
  • Critical steps to secure your virtualized environment
  • Interview with Howard Schmidt, President and CEO R & H Security Consulting
  • Quantitative look at penetration testing
  • Integrating ISO 17799 into your Software Development Lifecycle
  • Public Key Infrastructure (PKI): dead or alive?
  • Interview with Christen Krogh, Opera Software's Vice President of Engineering
  • Super ninja privacy techniques for web application developers
  • Security economics
  • iptables - an introduction to a robust firewall
  • Black Hat Briefings & Training Europe 2007
  • Enforcing the network security policy with digital certificates

Friday, May 04, 2007

WASC Meetup at JavaOne (San Francisco 2007)

Update: Garrett Gee posted some pictures of the WASC meet-up!

WASC is organizing a Meet-Up during the JavaOne Conference (May 8-11 @ San Francisco Moscone Center). As usual this will be an informal gathering. No agenda, slide-ware, or sponsors. We're expecting maybe 10-20 like-minded webappsec people to share some food, drinks, and stimulating conversation. Everyone is welcome and it should be a really fun time!

We only ask two things:

1) RSVP by email ASAP, if you haven't done so already, so we can make the proper reservations: (contact ___ at ___ webappsec.org)

2) Everyone is encouraged to buy a beer for someone they didn't previously know.

Time: Thursday, May 10 @ 7:00pm

Place:
ThirstyBear (walking distance from the conference)
http://www.thirstybear.com/
661 Howard St San Francisco, CA 94105

Thursday, May 03, 2007

How to check if your WebMail account has been hacked

WebMail accounts are a popular target for malicious hackers, law enforcement conducting investigations, and rogue insiders. WebMail security is very important, perhaps even more so than your online bank account. If your WebMail is hacked, every web account associated with that address (via the send-an-email forgot-password system) could be compromised, including your bank. Phishing scams, password brute-force attacks, cross-site scripting exploits, and insufficient authorization vulnerabilities are all commonplace. And for the most part these attempts are impossible for normal users to detect or do anything about. The problem is that unless your password is changed without your knowledge, how can you tell if your account has been compromised? Fortunately there is a fairly simple way.

Normally when someone compromises a WebMail account they’ll pilfer through all your messages and save anything they’re interested in keeping. Unless the intruder is really dumb, and sometimes they are, they’ll change all the messages back to unread (bold) so you won’t notice their presence. What you can do ahead of time is set a kind of virtual silent alarm on your account. Here’s how:

1) Upload a tiny image somewhere online where you can see the logs of who accesses it. There are a lot of places that offer web space; it could come with your DSL provider, or a friend might have some to share. Once uploaded, NEVER share out the URL to the image. Hide it well, because no one should ever find it online by accident.

2) Send your WebMail account an email, containing the silent alarm image, with a juicy sounding subject line like “Your new online Bank password”, “Re: employee personnel files”, or “That’s it, we’re through!!!”. Anything an intruder wouldn’t be able to resist reading. Leave the email as unread in your inbox. This is your silent alarm email.

3) Hopefully this day will never come, but if an intruder were to ever break into your account and read your silent alarm email, their browser will unknowingly request the embedded image. By periodically checking the image logs, if there is ever any activity you’ll know something is up. The web server logs will contain the intruder’s IP address as well as the date/time of when they broke in and read the message.

Simple. This same process can also be used to protect your MySpace account through the messaging system. Enjoy!
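If you'd rather not eyeball raw access logs by hand, a tiny script can watch for the alarm for you. Below is a minimal sketch in Java, assuming an Apache-style combined access log; the log path and image name are placeholders you'd swap for your own.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Scans an Apache-style access log for any request to the hidden "silent alarm"
// image. Any hit means someone other than you read the bait email.
// The log path and image name below are placeholders -- use your own.
public class SilentAlarmCheck {

    public static void main(String[] args) throws IOException {
        String accessLog = "/var/log/apache2/access.log"; // hypothetical log location
        String baitImage = "/private/a1b2c3-alarm.gif";   // hypothetical hidden image path

        BufferedReader reader = new BufferedReader(new FileReader(accessLog));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.contains(baitImage)) {
                // Combined log format: the client IP is the first field and
                // the timestamp sits between the square brackets.
                String ip = line.split(" ")[0];
                String time = line.substring(line.indexOf('[') + 1, line.indexOf(']'));
                System.out.println("ALARM: image fetched by " + ip + " at " + time);
            }
        }
        reader.close();
    }
}

Run something like that from a cron job and mail yourself the output, and you have a poor man's intrusion alert for your inbox.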

Wednesday, May 02, 2007

Rain Forest Puppy breaks his silence

4 years after retiring from the public security scene, rain forest puppy (rfp) breaks his silence and agrees to an interview where he shares his thoughts. For those that haven't been around webappsec that long, rfp is one of the REAL pioneers of the industry who contributed a ton of cutting-edge research that we still use today. You'll also notice that he's a very humble guy who prefers to continue giving back rather than taking the credit he deserves. Welcome back rfp.

RSnake, the latest Web Application Firewall convert

Learning to Love WAFs. What can I say, but "wow". The hidden meaning behind this article says volumes about the current state of the web application security industry and where we are headed.

Battle of the Colored Boxes (part 2 of 2)

Coverage and comprehensiveness are key to effective vulnerability assessment. The more vulnerabilities identified and weeded out, the harder it is for the bad guys to break in. In web application security, black box testing is a fairly standard way to measure that difficulty and a common method for improving it. That’s why when Fortify recently published a new white paper entitled “Misplaced Confidence in Application Penetration Testing” (registration required), it immediately piqued my interest. Plus a title like that is bound to generate some controversy (score 1 for marketing). I highly recommend reading their paper first before moving on and having your opinions colored by mine.

Done reading? Good, let’s move on.

Fortify, a company specializing in white box analysis tools, performed a study measuring the percentage of code coverage achieved by black box scanners. They set up a few web application test beds, wrapped the security-relevant APIs with Fortify Tracer as a way to measure coverage, launched some commercial scanners (with and without manual configuration), and logged the results. A novel approach I hadn’t seen before in webappsec. They also surveyed what people “believed” they were getting in terms of coverage from black box scanners, but that wasn’t so much of interest to me as the “actual” measurements.

Here are the highlights from the paper that interested me:

- Our experiment found that penetration testing identifies key vulnerabilities during application runtime, but only reaches on average, between 20-30% of a given application’s potential execution paths

- Manual customization can improve tests, although this improvement is not significant: our experiments showed an increase in coverage of only 19%.

- Two sets of issues caused the majority of the “misses”. The first set involved exercising sources of input that are inaccessible through the Web, such as data read from the file system, Web Services, and certain database entries. The second, and more alarming area of missed coverage, came from areas that are accessible from the Web interface of the application, but are difficult to reach due to the structure and program logic of the application.


Upon first read this doesn’t look good for black box application penetration testing, or for scanners specifically. There are, though, several unexplained factors that could have significantly impacted the results, especially for the first two bullet points. Taylor McKinley (a Fortify Product Manager) was kind enough to indulge my curiosity.

1) Paper: “Many of the respondents used commercial and freeware automated tools, commonly referred to as web application scanners, as their primary mechanism to conduct their application penetration tests.”

Question: Does this suggest that the respondents didn’t complete the vulnerability assessment and just pressed “go” on the scanner?

Fortify: We’re just stating that many of the survey respondents used automated tools. We’re not specifying anything about how they use these tools. To address your question, which I believe is about whether we studied manual testing or automated testing, we studied automated testing with tools. We ran them out of the box first and then customized each one for the application we were attacking. In general, some pen testers most likely supplement their use of pen testing tools with manual efforts, however, if they don’t know what the tools are doing, its that much more difficult to supplement them in any reasonable way.


This means the results were based on scans and “configured” scans, not full penetration tests or vulnerability assessments with experts behind them, like those I am known to recommend. It would be interesting to see how a combo scanner / expert assessment would stack up.

2) Paper: “For our evaluation, we had an internal security expert conduct penetration tests against a test bed of five Web applications.“

Question: What were these web applications exactly? Were they demo or training web applications like WebGoat or SiteGenerator, or something like a message board, shopping cart, or real-world website, or what? This is an important aspect for context as well.

Fortify: One was our own internal test application, which is a 5MB order fulfillment application. WebGoat was another one; HacmeBooks was another. The last two are not well known but are representative of standard web applications in terms of size and functionality.

This is probably a fair enough test bed for this experiment.

3) Paper: “To address this shortcoming, we developed a tool that instruments an application like a code coverage tool, but specifically targets points where input enters the program and security-relevant APIs are used.“

Question: Can you elaborate more on how this is done?

Fortify: Fortify Tracer inserts its own Java bytecode into the Java bytecode of an application. It takes the .class files and, using aspect oriented technology, scans through the bytecode looking for vulnerable APIs. It also contains a set of rules defining what APIs are vulnerable, and what parameter to watch out for. Using aspect oriented technology Fortify Tracer has the ability to add code around, before, or after an API. When the aspect technology hits a particular API that is vulnerable it will insert Fortify’s code. This allows Fortify to analyze the data coming into or going out of an API.

Admittedly I don’t know enough about Java or this type of technology to say one way or the other whether this is solid enough for measuring code execution. A control case would have been nice…
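For readers as unfamiliar with the plumbing as I am, here's a rough, purely illustrative sketch of the general idea in annotation-style AspectJ. To be clear, this is my own guess at the mechanics, not Fortify's rule format or code; the class and method names are invented.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Illustrative only: wrap calls to a security-relevant API (here, JDBC query
// execution) so we can record whenever a penetration test actually exercises it.
// Assumes the AspectJ weaver; this is NOT Fortify Tracer's implementation.
@Aspect
public class SecurityApiCoverageAspect {

    @Around("call(java.sql.ResultSet java.sql.Statement+.executeQuery(java.lang.String))")
    public Object recordQueryCoverage(ProceedingJoinPoint jp) throws Throwable {
        String sql = (String) jp.getArgs()[0];
        // Note that attacker-controllable input reached this sensitive sink, and where.
        System.out.println("[coverage] executeQuery reached at "
                + jp.getSourceLocation() + " -- SQL: " + sql);
        return jp.proceed(); // run the original API call unchanged
    }
}

Tally up which wrapped APIs ever fire during a scan versus how many exist in the application, and you get a coverage percentage along the lines of what the paper reports.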

4) Paper: “The second, and more alarming area of missed coverage, came from areas that are accessible from the Web interface of the application, but are difficult to reach due to the structure and program logic of the application.”

Question: Was a “control” method done as part of the experiment? Meaning, what if a user or QA process interacted with the website in a normal and complete usage fashion? What would the execution percentage have been? This seems to me like a vital piece of data to use as a reference point.

Fortify: Very good point. These test applications were relatively small and we know them well so we felt very comfortable that a QA tool could hit the vast majority of the application. We don’t have the data on hand but I agree in retrospect that would have been a good thing to have addressed in the report. I can say that we would expect a full QA test to exercise more than 80% of the application.

5) Paper: “The tester conducted both fully-automated and manually-assisted penetration tests using two commercial tools and used Fortify Tracer® to measure the effectiveness of these tests against the test bed of applications.”

Question: There are a number of crappy commercial black box scanners on the market, so which did you use? It would be unfair of Fortify to have selected bottom-of-the-barrel scanners as representative of the entire black box testing market. This is another one of those missing vital pieces of data.

Fortify: I would like to divulge the names of these tools, but this is unfortunately not something we can do. However, short of telling you their names, is there anything I can do to convince you that you would not be disappointed? Maybe I can put it this way, we used two of the top three market leading tools. Does that suffice?

I think that’s fair enough to make some educated guesses about whose product(s) they used.

OK, we got some measurement concerns out of the way that should be considered if someone else decides they’d like to repeat the experiment. And I really hope someone does, this is good stuff. What’s also interesting is that if you take the combined total of the first two bullet point measurements (30% + 19%), this is about the coverage I said scanners are capable of testing for (~50%). Now, if you were to perform a full vulnerability assessment with an expert, would we have improved coverage to over 80% as mentioned in Q4? I don’t see why not. From that point of view the scanner / expert coverage doesn’t look so bad, at least not by an order of magnitude.

I think what Fortify is suggesting in the paper is not so much that black box scanners are bad or incomplete, but that their coverage will vary widely from one web application to the next. There is a lot of truth to this. Unless the end user is able to measure the depth of coverage, they’re unable to know the value they’re getting (or not). I think that’s fair. Until the technology matures to the point where the coverage doesn’t vary so greatly and we begin to trust it, we’ll have to measure. Let’s move on to the third bullet point.

6) Paper: “Two sets of issues caused the majority of the “misses”. The first set involved exercising sources of input that are inaccessible through the Web, such as data read from the file system, Web Services, and certain database entries.”

Question: How would you characterize the exploitability risk of these “misses” by external attackers?

Fortify: While the risk from internal hackers is more severe for these types of attacks, the threat from an external hacker is very real and needs to be addressed.

OK, let’s ask about that then…

7) Paper: “The first are those vulnerabilities that penetration testing is simply incapable of finding, such as a privacy violation, where confidential information is written to a log file, which potentially violates PCI compliance. Log files are particularly vulnerable to attack by hackers who recognize that they are often an easy way to extract information from a system.”

Question: Can you describe a plausible scenario of how an attacker might access this data though a web application hack?

Fortify: The basic premise here is that a log file isn’t a secure location, so you’re putting private data in a place that is not thoroughly protected. A hacker might be able to exploit some vulnerability that gives them access to various parts of your network, including the log files. This is also a major threat from an insider, who has a greater likelihood of being able to gain access to the log files for a particular application. I just spoke to someone who used to work for a major bank and he said this was a huge issue for them b/c employees, if they did the right thing, could gain access to all types of private data. In addition, having this type of vulnerability may be grounds for a failed audit, which could means fines or, at the extreme, having your ability to process credit cards shut down. Lastly, creating problems with log files is a great way to stymie forensics efforts.

I take this to mean they would agree that vulnerability “misses” due to functionality inaccessible through the Web are more of an insider threat. This is fine, but it’s important to understand that when comparing and contrasting software testing methodologies. So where does all of this leave us? Fortify is illustrating the limitations of black box testing and where white box tools can add value, nothing wrong with that. I think it’s safe to say they’d admit that certain vulnerabilities are beyond their coverage zone as well. They are a reasonable bunch.
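To make that last class of “miss” concrete, here's the flavor of code a source-level tool would flag but a black box scan could never see from the outside. The class, logger, and variable names are invented for illustration.

import java.util.logging.Logger;

// Illustrative only: the kind of privacy violation the paper describes, where
// confidential data lands in an application log that no remote scanner ever sees.
public class PaymentService {

    private static final Logger LOG = Logger.getLogger(PaymentService.class.getName());

    public void charge(String cardNumber, double amount) {
        // BAD: the full card number is written to the log -- invisible from the
        // outside, but exactly the sort of PCI problem described above.
        LOG.info("Charging card " + cardNumber + " for $" + amount);

        // Better: log only a masked form of the number.
        String masked = "****-****-****-" + cardNumber.substring(cardNumber.length() - 4);
        LOG.info("Charging card " + masked + " for $" + amount);
    }
}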

For myself, I can’t help but think we’re going about this measurement stuff the wrong way. We’re all busy fighting and comparing each other’s solutions down in the weeds with “who found more” and “I can find that while you can’t” nonsense. We’re missing the big picture, and that is… how do we keep a website from getting hacked? Isn’t that the point of everything we’re trying to do? That says to me that we must find a way to measure ourselves against the capabilities of the bad guys. How much harder does vulnerability assessment, whether black, white, or gray box, make it for them? In my opinion, that’s where this type of research should focus, and where it would provide the most value to website owners.