Tuesday, June 29, 2010

Full-Disclosure, Our Turn

Vulnerabilities in websites happen, especially the ever-pervasive Cross-Site Scripting (XSS). Essentially every major website has had to deal with XSS vulnerabilities, published publicly or otherwise. This includes security companies. No one is perfect, no website has proven immune, ours included. As experts in Web application security, and specifically XSS, yesterday even we took our turn. We hope to learn some lessons from the experience and share the details so others may do the same.

For background, the WhiteHat Security corporate website (www.whitehatsec.com) is static brochure-ware. No Web applications, no forms, no log-in, no user-supplied input where XSS can hide. Just the everyday HTML/JavaScript/CSS/Flash explaining what we do, why, and how. The website does not carry any sensitive data, although it does carry the WhiteHat Security brand, and our reputation is extremely important to us. Therefore any publicized vulnerability or defacement would obviously be embarrassing.

Monday afternoon @thetestmanager openly posted on Twitter an XSS vulnerability that reportedly affected www.whitehatsec.com:

"It really does happen to the best of us. XSS on WhiteHatSec http://bit.ly/cIDfEA If you take a feed to your site do you check their code?"

“By the way, that tweet was meant as a bit of fun and certainly not a poke at @jeremiahg or any of @whitehatsec The hole isn't in their code”

Upon seeing the tweet a few minutes after posting, I was a bit skeptical, being familiar with the limited attack surface, but I took the matter seriously and immediately sought to confirm or deny its validity. Fortunately/Unfortunately, the particular XSS issue was very straightforward to confirm. The all-too-familiar alert box was plain as day. What was more worrisome was the existence of a query string, something that’s not supposed to be on the website at all! It took a few more moments to isolate the source of the issue: a third-party-supplied JavaScript block (accolo.com) on our careers page that sourced in additional code remotely.

document.write("<script src='http://members.accolo.com/a02/public/CommunityJobs_include.jsp?isJSInclude=1&" + Accolo.paramsStr + "'><\/script>");

Third-party JavaScript includes are obviously extremely common on the Web. First order of business, remove the offending JavaScript code ASAP and then perform root-cause analysis.

Remediation time: ~15min from original time of disclosure.

When embedded in a webpage, Accolo’s third-party JavaScript code dynamically creates a DOM web-form where prospective interview candidates may enter their details. It is important to note that the form action points to an off-domain location, but various cosmetic interface settings are pulled from a synthetic query string on the hosting website (our website). Whew, the corp website still has no Web applications. However, when Accolo’s form setting values are pulled from the query string, they are document.write’ed to the DOM unsanitized, hence the source of the XSS vulnerability. All @thetestmanager had to do was insert a SCRIPT tag into one of the fake parameters to get a DOM-based XSS to fire. A solid find. After removing the code from our site we reported the issue to Accolo so they could remediate. More of their customers are likely similarly affected.
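To illustrate the pattern described above, here is a minimal, hypothetical sketch (the function and parameter names are illustrative, not Accolo's actual code): a cosmetic setting is read from the hosting page's query string and concatenated straight into markup with no sanitization.

```javascript
// Hypothetical reconstruction of the vulnerable pattern: a "cosmetic"
// form setting is pulled from the hosting page's query string and
// written into markup without any escaping.
function getParam(queryString, name) {
  // Naive query-string parsing, as many 2010-era widget scripts did it.
  var pairs = queryString.replace(/^\?/, "").split("&");
  for (var i = 0; i < pairs.length; i++) {
    var kv = pairs[i].split("=");
    if (kv[0] === name) {
      return decodeURIComponent(kv[1] || "");
    }
  }
  return "";
}

function buildFormHeader(queryString) {
  // The attacker-controlled value flows directly into HTML.
  var title = getParam(queryString, "formTitle");
  return "<h2>" + title + "</h2>";
}

// A crafted URL such as
//   ?formTitle=%3Cscript%3Ealert(1)%3C%2Fscript%3E
// yields markup containing a live SCRIPT tag:
var html = buildFormHeader("?formTitle=%3Cscript%3Ealert(1)%3C%2Fscript%3E");
// document.write(html) would then execute the injected script.
```

Passing that string to document.write() hands the injected SCRIPT tag straight to the HTML parser, which is exactly the DOM-based XSS that fired here.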

Some Obvious Questions & Lessons Learned:

1) Do you consider @thetestmanager to be “irresponsible”?
Of course not. Yes, we would have preferred a more private disclosure, but whaddaya gonna do. Name calling is unproductive. Things like this happen to everyone and you deal with them as best you can. We appreciate @thetestmanager taking the time to find the vulnerability and bringing it to our attention at all. Clearly someone less savory could have found it, not said a word, and done something worse.

2) Why was this vulnerability missed?
Among the many safeguards taken to protect the corp website, including daily network & web application vulnerability scans, we perform intensive static and dynamic security peer review by the Sentinel Operations Team on all “substantive” website changes. However, a process oversight occurred: marketing / HR requested a new website feature that did not meet the threshold for a substantive change, just a simple JavaScript include, so no review was requested or performed. The Sentinel Operations Team would have caught this issue. We’ve since updated the process and double-checked all third-party JavaScript code currently in use.

The Sentinel Service itself did not identify the issue due to a standard explicit-allow hostname configuration. Sentinel does not test links or forms bound to off-domain locations, such as third-party JavaScript includes, for legal and liability reasons. Since Sentinel didn’t follow / fill out the form, it did not see the attack surface required to detect the vulnerability. In a Web 2.0 world this scenario is becoming far more common; it is something we’re working to address with our customers who face the same challenge, and it speaks to a larger industry problem worth exploring.

A) To assess a third-party (by default): If an assessment includes third-party JavaScript systems, which are technically within the domain’s zone of trust, there are very real legal and liability considerations when network traffic is sent their way. Express written authorization to thoroughly and properly assess a third party is extremely difficult to obtain, but without it, prosecution is a consequence one faces in many countries.

B) No control, no visibility: Even if third-party JavaScript is assessed, organizations have very limited ability to exercise change control, or even visibility, when that code is updated -- almost always without their knowledge. An organization’s website might be fine one minute and vulnerable to XSS the next.

The bigger question then becomes, "how does an organization handle security for 3rd party includes & JS files"? For business reasons, the answer can't be "don't do it."

3) What should Accolo have performed to prevent the vulnerability?
Properly escape and perform input validation, in JavaScript space, on all URL parameter values before writing them to the DOM. Further, use appendChild() rather than document.write to help prevent secondary JavaScript execution. Since the JavaScript code did not need access to the customer’s domain, an IFRAME HTML include, rather than embedded JavaScript, would have been a better choice to achieve cross-domain separation.
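A minimal sketch of that advice (escapeHtml and buildFormHeader are illustrative names, not Accolo's API): HTML-escape every URL-derived value before it reaches the markup, so the same injected payload renders as inert text.

```javascript
// Sketch of the remediation: escape HTML metacharacters so attacker
// input is rendered as text, never parsed as markup.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

function buildFormHeader(queryValue) {
  // The injected SCRIPT payload now arrives as harmless text.
  return "<h2>" + escapeHtml(queryValue) + "</h2>";
}

// Safer still (browser-only, shown as a pattern rather than executed):
// skip string-built markup entirely and let the DOM treat the value
// as plain text.
//   var h2 = document.createElement("h2");
//   h2.textContent = queryValue;   // never parsed as HTML
//   container.appendChild(h2);
```

Assigning attacker-controlled strings via textContent and appendChild, as suggested above, bypasses the HTML parser entirely and is generally the safer default.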

4) How are you going to improve your operational website security posture?
On average we have roughly a single XSS vulnerability in production every 2-3 years. Remediation occurs within a couple of hours at the very most in every case. These metrics are consistent with the most security-proactive organizations found anywhere and far exceed industry averages (a consistent handful of XSS per year, ~50% remediation rate, and fixes taking months). Having said that, as described earlier, we’re improving our procedures to make sure things like this don’t slip by again.


S3Jensen said...

If only all companies were as open and honest about their issues. You're completely right: no company is perfect, and try as we might to prevent these types of attacks, business requirements can, and obviously do, occasionally open us up to attack. I agree private disclosure would have been more appropriate, but as you said, at least @testmanager brought it out in the open so it could be addressed and remediated in a timely manner.

Anonymous said...

Surely XSS is no threat in a site that doesn't have a login system.

Unknown said...

@Anonymous cookie/credential theft is not the only thing you can do with XSS. Think about defacement and/or redirection to malicious sites serving malware or fake AV.

All XSS should be fixed on any site, regardless of the site's original content, function, or intent.

Anonymous said...

No login page? What's this I see? A login page protected by Basic Authentication over HTTP? ServerTokens Full?

I see a lot more wrong with this picture than just a simple "process oversight".

$ curl -i http://www.whitehatsec.com/home/partnerportal/index.html
HTTP/1.1 401 Authorization Required
Server: Apache/2.2.8 (Ubuntu) mod_ssl/2.2.8 OpenSSL/0.9.8g
WWW-Authenticate: Basic realm="Partner Portal"

Jeremiah Grossman said...

@Mephisto, thank you for the kind words. For our part, whether or not the disclosure process was appropriate, @testmanager didn't introduce the XSS vulnerability, we did. Fix it, learn from it, and figure out how to do better. We tell our customers to do the same, and they expect nothing less from us.

@Anonymous1, if only that were the case. :)

@Question, that certainly fits within the requirements of our policy. Some websites, however, have a business requirement to be vulnerable to persistent XSS. LOL.

@Anonymous2, good catch, I should have been clearer with "web-form log-in", which is what I had in mind. The partner portal is essentially a dropbox for a variety of marketing literature (pdf, docs, xls, ppts) we make available to our resellers. While nothing in it is really "sensitive", we prefer it wasn't openly indexed by the search engines. We didn't need anything stronger than basic-auth over plain HTTP to protect such data; more Web app code to look after could potentially have reduced our security posture.

Secondly, as you've noticed, there are a few minor issues around that should be cleaned up. We've been in the process of doing exactly that. This experience was a subtle reminder not to get comfortable for even a moment.

Adam Baldwin said...

This is not meant to be a poke, but where is the WAF in all of this or the log monitoring that may have identified this before it went to a full disclosure situation? If one simply relies on assessments and disclosure for protection isn't there a large gap in there?

Anonymous said...

I'm wondering why session fixation is more important than XSS for most companies, which are not even using HttpOnly flags! :D
PS: Why don't you use a web application scanner to scan your website each month? At least you could find the obvious vulnerabilities with it ;)

TheTestManager said...

What Jeremiah hasn't said is that the speed with which the issue was fixed was quite amazing: about 15-20 minutes maximum.

I actually found the issue a few days before posting it, when Jeremiah had retweeted about some other security firms having XSS issues.

Like I said in my original post, it was not meant as a poke at either Jeremiah or WhitehatSec, who, along with a few other select security researchers, I really look up to, and I always look forward to reading their work.

I would normally have disclosed in a private manner, as I do virtually every day with numerous issues found in many different places.
Any disclosure may open you to legal action if you have tampered with either the URL or the rendering of the page in any way, so disclosing either privately or publicly is always walking a tightrope.
Looking back on it now, I wish I had done it in a private and therefore more responsible way. However, what is done is done.

Anyone disclosing issues to companies will know from experience that usually you are either ignored and the issue is left outstanding, or you are still ignored and the issue is fixed quietly without you ever being informed.
Rarely are you thanked or even acknowledged; only 5%-10% or so of businesses will want to discuss the issue further.

Some people think that Cross-Site Scripting or other security issues existing on a security site is a crime.

However, XSS exists in virtually every site, unless you're using flat static content. Escaping out of tags is child's play, and bypassing many XSS blockers or anti-XSS whitelists is again just a matter of trial and error.

The main point I wanted to make is that throughout the whole process, from Jeremiah reading my tweet disclosing the issue to this blog post, he has been nothing but open, and I think that should be commended.
We could do with more vendors like this.

Jeremiah Grossman said...

@Adam, not at all, these are the right questions to ask, we're doing the same internally.

Since there were "no web applications," there was little need for a WAF. Even so, given this was a DOM-based XSS, it's not guaranteed a WAF would have detected/stopped it anyway. That's one of the things I need to double-check.

With respect to log management, we have that in place, as well as other monitoring tools. However, we get attacked with ruthless regularity. The bad guys, researchers, partners, competitors, customers, us... are constantly probing our systems. In the case of XSS, for example, it's very difficult to know when one of the thousands of "tests" per day was actually successful.

@Anonymous3, not sure, is this in the context of the current blog post? We do scan the website with a web application scanner every day, but for the reasons I explained in the post, it wasn't found.

@TheTestManager, I said it in the post! And we thought the site WAS static, well it is static! Sort of. :) Thanks for the kind words. It was disappointing to think this was something we should have, could have, found. It does serve as a good reminder that we, especially us, need to be ever vigilant.

dunsany said...

What I liked best was your very thorough root cause analysis. I think that piece is the most overlooked part of vulnerability response in the industry. Bravo!

Jeremiah Grossman said...

I'd like to once again say thank you very much to everyone for the overwhelmingly positive response. We really didn't know what the reaction would be to the foible. It's extremely encouraging to know the security industry truly values and respects honesty and transparency.

@shiflett: Nice, honest post from @jeremiahg showing how XSS can get you, even if you know all about it: http://j.mp/whitehatxss

@mkoelm: RT @seccubus: RT @security4all: "Full-Disclosure, Our Turn" - http://is.gd/d96rl (via @jeremiahg) -> respect for being honest!! <- hear hear

@fabriciobraz: Very nice way to face any fault, including security RT @jeremiahg: "Full-Disclosure, Our Turn" - http://is.gd/d96rl

Alexis said...

You take a benign attitude to the actions of TheTestManager. However, you could certainly argue that if (s)he had done the same thing in the UK, he would be in breach of the UK Computer Misuse Act. He could then be looking at two years "doing porridge", as we say in the old world.


Jeremiah Grossman said...

@Alexis: Yes, perhaps it was illegal under UK law, but I also believe the law would be wrong, totally counterproductive, and I want no part of it. Remember Daniel Cuthbert? That law was applied to him and was completely uncalled for, and beyond that it had a chilling effect on the rest of the industry. No thanks.

l444t43 said...

In response to TestManager for this: >> Anyone disclosing issues to ....

What you're doing will definitely get you into trouble. You'll never manage to save all the vulnerable sites through your effort alone; they're being created daily. Whether they have vulnerabilities is not your concern, and whether they listen to your reports is not your concern.
I'm glad you have the good intentions to tell them, but think again. Save your time and learn by doing legal pentests.