Tuesday, February 20, 2007

Automated Scanners vs. Low-Hanging Fruit

Low-Hanging Fruit (LHF) are vulnerabilities that are easy to find and exploit. We certainly don't want these types of issues in our websites, especially when they can be quickly mitigated with a small amount of effort. In network security, scanning does the trick for LHF identification. Unfortunately, in website security, though scanning is absolutely vital, it's not that simple or sufficient. That's because LHF may fall into either technical vulnerabilities, which website vulnerability scanners can find, or business logic flaws, which they largely cannot.

Technical vulnerabilities, including Cross-Site Scripting (XSS) and SQL Injection, can be found in large supply by scanners and usually can be classified as LHF. For instance, when a website echoes user-supplied HTML, that's a dead giveaway of an XSS vulnerability. The same goes for SQL Injection and the notorious ODBC error messages dumping database statements. These instances are easy to spot and exploit. Yet as common as these issues are, they're not always classifiable as LHF.
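To make the "echoed HTML" giveaway concrete, here is a minimal sketch of the kind of check a scanner automates. The target URL and parameter name are hypothetical, and real scanners do far more than this:

```python
# Minimal sketch of the reflected-XSS probe a scanner automates.
# The URL and parameter name below are hypothetical placeholders.
import urllib.request
import urllib.parse

PROBE = '<xss-probe-12345>'

def echoes_unencoded(base_url, param):
    """Return True if the probe string comes back in the page unencoded."""
    query = urllib.parse.urlencode({param: PROBE})
    with urllib.request.urlopen(f"{base_url}?{query}") as resp:
        body = resp.read().decode('utf-8', errors='replace')
    return PROBE in body  # raw echo of user-supplied markup suggests XSS

if __name__ == '__main__':
    if echoes_unencoded('http://example.com/search', 'q'):
        print('Input reflected unencoded -- likely XSS low-hanging fruit.')
```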

New XSS issues in YahooMail, MySpace, Gmail, sla.ckers.org (heh), and other high-profile websites have become significantly harder to come by because so many people have already cherry-picked the easy stuff. Discoveries often rely on clever filter-bypass tricks (XSS Cheat Sheet), complex input encoding techniques (UTF-7 or US-ASCII), or sophisticated combinations of the two. SQL Injection exploits frequently have to be performed blind because helpful error messages are suppressed. These instances could be comfortably labeled Mid-tier or even (shall we say) Golden Apples, since they reside far out of the reach of scanners, and most humans for that matter.
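As a rough illustration of why encoded input slips past naive filters, here is what the textbook payload looks like once run through a UTF-7 codec (Python's built-in codec, used purely for demonstration):

```python
# Illustration only: the classic <script> payload rendered in UTF-7.
# A filter looking for literal angle brackets will not match this form,
# yet a browser coaxed into interpreting the page as UTF-7 will execute it.
payload = '<script>alert(1)</script>'
print(payload.encode('utf_7'))
# roughly: b'+ADw-script+AD4-alert(1)+ADw-/script+AD4-'
```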

Then we have business logic flaws like Abuse of Functionality and Insufficient Authentication/Authorization. These mostly require humans (security experts) to uncover them even when classifiable as LHF. For example, during the MacWorld 2007 Expo, several people discovered an easy (LHF) way to obtain free Platinum Passes (a $1,695 value with a chance to see Apple's CEO Steve Jobs up close). By viewing the source code of the sign-up web page, they found "hidden" Priority (Discount) Codes freely usable during registration. Unlike humans, scanners wouldn’t recognize the significance of Priority Codes, how to use them, what the page looks like when they're accepted/denied, let alone being able to pick up the badge to verify the attack succeeded.
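Here's a toy sketch, using a hypothetical registration page and field name, of why that is: a tool can trivially extract a "hidden" field, but it has no way to judge what the value means to the business:

```python
# Toy illustration with a hypothetical registration form: a scanner can
# easily extract "hidden" fields, but judging their significance takes a human.
import re

page_source = '''
<form action="/register" method="post">
  <input type="hidden" name="priority_code" value="PLATINUM2007">
  <input type="submit" value="Register">
</form>
'''

hidden_fields = re.findall(
    r'<input[^>]*type="hidden"[^>]*name="([^"]+)"[^>]*value="([^"]+)"',
    page_source)

for name, value in hidden_fields:
    # The tool stops here; it cannot know this value grants a free $1,695 pass.
    print(f'hidden field found: {name} = {value}')
```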

WhiteHat Security's engineers continually discover a wide variety of LHF business logic flaws in a majority of the websites they assess. The more sophisticated the business logic flaw, the more expertise is required to identify the vulnerability and its remediation. Anyone can find one or two business logic flaws, but it takes a team of experts to try to find them all, all of the time. That's a big reason why good, complete website vulnerability management is so hard to achieve.

From my experience, any class of attack can be LHF, Mid-tier, or Golden Apples. And any vulnerability identifiable in a purely automated fashion (by a scanner) can be classified as LHF, since anyone without much skill may buy/download a scanner, find a few technical vulnerabilities, and begin exploiting websites. Still, WhiteHat believes the goal of an effective website security program should be to find and manage all the vulnerabilities all the time. Weeding out the LHF can be a good first step. There's no reason to make exploiting websites that easy for the bad guys.

Podcast with ITRadio

A couple of weeks ago I recorded a podcast with Patrick Gray from ITRadio for a new online radio show, Risky Business #1. I talked, of course, about web application security issues, this time from an attack-evolution standpoint: specifically, how we've gone from email viruses to web worms (Samy Worm), and from IIS/Apache vulnerabilities to flaws in custom web applications (CSRF, XSS). The podcast ended up being a lot of fun to do!

Thursday, February 15, 2007

10 signs you’ve been in web application security too long

RSnake helped me put this little thing together....


  1. You sometimes use Lynx to surf the Web. Furthermore you know what Lynx is.
  2. You don’t find it humorous when someone says they’re a penetration tester.
  3. When you send someone a link they say, “do you really expect me to click on that?”
  4. You get annoyed when someone refers to an HTML Injection vulnerability as Cross-Site Scripting.
  5. You know that Slashdot uses a really strange URL format in their source code.
  6. You’ve gotten more than 10 emails from strangers asking you to help them hack their girlfriend's Hotmail account.
  7. Your handle begins with the first initial of your name followed by “Snake”.
  8. Web 2.0 websites don’t work in your browser unless you turn off all your security plugins.
  9. You know who Brendan Eich is and hate his guts.
  10. Someone is stupid enough to bet you a thousand dollars you can't maliciously use 30% of websites on the Internet.

Wednesday, February 14, 2007

Acunetix, NetworkWorld, and $1000, oh my!

Update 3: As could be predicted, the sla.ckers.org crew quickly uncovered a few XSS issues in the Network World website. Mr. McNamara and Mr. Snyder, please take some friendly advice -- don't go down the road saying you weren't vulnerable or that XSS is a non-issue. That'll just open up a whole new can o' worms, like it did for Scan Alert and F5.

Update 2:
Thomas Ptacek posted an equally hilarious parody of the situation.

Update: Check out RSnake's take on the challenge. Priceless. He even offers his assistance to win. :)

I love mornings like this when new and interesting things are happening. Check this out: Acunetix releases a report headlined, "70% of websites at immediate risk of being hacked!". For the last year they've been offering free web scans, covering some 3,200 sites. The wording was a bit sensationalist and I had a few questions about the data, but the numbers looked about right to me (like RSnake said, maybe a little low). At least I didn't think they were way off base, as they were somewhat similar to my stats. Here's where it gets interesting.

Network World's Paul McNamara loops in his "go-to" security guy, Joel Snyder, "a stalwart in the Network World Lab Alliance and senior partner at Opus One in Tucson, Ariz.", for an expert opinion. Snyder promptly calls the results a "crock" and issues a $1,000 challenge. Juicy.

"But the basics would be that an employee of the company (Acunetix) would need to get valuable personal information - like a credit card or social security number, not an e-mail or home address - from at least three of a random 10 of those 3,200 sites they tested."

A couple of things here before I go on:

1) I'm not certain how wise it is to ask a "network" security guy's opinion on "web application" security matters. Maybe he cross-trains.
2) There is certainly a difference between having a vulnerability and it being exploitable. Vulnerability Assessment vs. Risk Assessment.
3) There are many different types of data on websites worth protecting aside from CC's and SSN's (source code, trade secrets, insider info like unannounced press releases, etc.).

So, Nick Galea (CEO) and Kevin J. Vella (VP Sales and Operations) promptly fire back at Network World, accepting the challenge but changing the terms a little.

"We are willing to accept the challenge. However we feel that the subject of the challenge should be the Network World Web site, rather then - as Mr. Snyder suggested - an innocent third-party Web site. After all, making a wager with someone else's Web site would be unfair, and furthermore illegal."

"So we will accept the wager and perform a security audit on the Network World site and attempt to breach any vulnerabilities found. This should be a fair substitute, since we are assuming that considering Mr. Snyder's comments, Network World is confident that its Web site is secure and any data it holds is unbreachable."

Woohooo, game on! Good move and I agree with their assessment of the terms. But Network World and Snyder get a bit snarky and say:

"I think that they are missing the point. I am (as you are noting) challenging the conclusion, not the data. I believe that they think that they have found vulnerabilities. But there is a huge difference between that and turning a vulnerability into a breach."

Fair enough. The story is still developing and we'll see where it leads. They're now haggling over which websites they should hack and what should constitute a hack. Obviously not every website is important, but that doesn't mean it isn't at risk of being hacked. It just doesn't matter much if it is. So how many "important" websites are out there out of a pool of 100 million? I dunno, I guess 500,000 (?), and statistically I think most of them are hackable.

Interestingly enough, I'm hosting a webinar on this very subject this morning. :)

Tuesday, February 13, 2007

User education is still worth it

RSnake posted a very good read, Why User Education's a Bust:

“During one meeting I had the opportunity to debate the pros and cons of user education. For the most part I am against education, which might be surprising to a lot of people. Here's why staying away from education can save your company money and keep you more secure.”

I agree user education has not, will not, and cannot achieve the results that we’d all like to see. Yet I wouldn’t advocate closing the classroom either. User education doesn’t need to be comprehensive to be worthwhile. I think we need to reset our expectations and adjust our business practices toward something more reasonable. User education only needs to be capable of catching/preventing SOME of the stupid-easy attacks bad guys might try, providing just enough value to keep doing it. What I’d like to see is users begin viewing computer/Internet security the way they perceive ATM/Debit Card security. For example:

ATM/Debit Card | Computer/Internet
Don’t tell anyone your Debit Card PIN | Or your passwords
Don’t leave your wallet/purse (with card in it) unattended | Or your screen unlocked
Mask the keypad while you type in your PIN | Beware of shoulder surfers
Don't give card numbers over the phone, unless you have initiated the call | Beware of links in email out of the blue asking for your password

The list goes on with things we tend to do naturally. This won’t stop more sophisticated card-skimming attachments, fake machines, or massive theft of magnetic track data. These precautions are designed only to thwart a few simple attacks and help the user feel safer, which is another piece of added value. I think a growing percentage of users want to protect themselves from phishing/trojan scams, and we should continue assisting them. Then we can focus the bulk of our attention on more effective methods, as RSnake describes. Personally, I’d also be curious to hear RSnake’s thoughts about developer education, because I think a lot of the same principles may apply.

Monday, February 12, 2007

We need Web Application Firewalls to work

In the early '90s, network firewalls surfaced as the product everyone needed for defense against the dangers of being internet-connected. Host security purists countered, calling firewalls unnecessary because everyone should patch and harden their hosts. Proponents rejoiced because firewalls made life easier; their networks were vast, diverse, and largely beyond their control. Many years later almost every computer has a firewall in front of it (sometimes several) and some form of automated patching, so both solutions eventually won. History seems to be repeating itself with Web Application Firewalls (WAFs) and the “secure software” movement.

Generally, IT/Sec guys like the idea of WAFs, while secure-software purists argue for fixing the code, saying WAFs shouldn’t be viewed as cure-alls. Fair enough, but in my opinion neither should secure software. The reality is software has bugs and hence will have vulnerabilities. Modern development frameworks like ASP.NET, J2EE, and others have demonstrated big gains in software quality, but what about the vast majority of the world’s 100+ million websites already riddled with vulnerabilities? Is anyone actually claiming we should go back and fix all that code? Fixing them one at a time would be like trying to drain the ocean with a teacup.

What happens today is IT/Sec must compete for development resources against the revenue-generating features being pumped out every week. The same people, with responsibility but no authority, are also powerless to fix the issues the way they’re used to with patches or firewall rules. In web application security, IT/Sec, who used to have control, assumes a subservient role to the development group, who are not security experts. Developers say they need to be convinced why fixing XSS and SQL Injection is important. The typical result of the exchange is perpetually insecure websites, as the interests of the two parties are not in alignment. We need something that gives developers time and IT/Sec control. That’s where WAFs come in.
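For illustration only, the core idea of a WAF can be sketched as a request filter that drops obviously hostile parameters before they reach the application. This is a deliberately naive toy with made-up signatures, not how any commercial WAF actually works:

```python
# Deliberately naive toy of what a WAF does conceptually: inspect request
# parameters and block obvious attack patterns before the app sees them.
# Real WAFs use far richer rule sets, normalization, and anomaly scoring.
import re

ATTACK_PATTERNS = [
    re.compile(r'<\s*script', re.IGNORECASE),                      # crude XSS signature
    re.compile(r"('|\")\s*(or|and)\s+1\s*=\s*1", re.IGNORECASE),   # crude SQLi signature
]

def inspect_request(params):
    """Return (allowed, reason); block if any parameter matches a known pattern."""
    for name, value in params.items():
        for pattern in ATTACK_PATTERNS:
            if pattern.search(value):
                return False, f'blocked: parameter "{name}" matched {pattern.pattern}'
    return True, 'allowed'

print(inspect_request({'q': "' OR 1=1 --"}))      # blocked
print(inspect_request({'comment': 'nice post!'})) # allowed
```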

A good WAF is designed to block a lot of the most common and dangerous web application attacks. Why would anyone not want that? From what I’ve found, it’s not that the objectors don’t like what WAFs promise, it’s that the products don’t DO what they promise, or there is set-up and ongoing management overhead involved, which are completely valid concerns. Still, I think the web application security problem has simply gotten WAY too big to be fixable in the code without the help of WAFs. So two things need to happen:

1) WAFs are evolving technologies that MUST BE MADE TO WORK, or work a lot better, and we must see them work over and over again. Witnessing this will help build trust in these devices, which will lead to….

2) The web application security mindset maturing to the level of network security. No one views a Cisco PIX (firewall) as competing with BigFix (patch management) or overlapping with Qualys (VA). The same should go for WAFs, security in the SDLC (frameworks), and web application vulnerability assessments, respectively.

Friday, February 09, 2007

RSA, Did you see anything cool?

I must have been asked that question a dozen times during the show, and I had no good answer. It seems the attendees REALLY wanted to find the new hotness, but couldn't seem to find anything compelling. It got me thinking about security in general and that maybe our expectations are improperly set. I mean, aren't security products/solutions about making sure "nothing happens" (if we do our jobs well)? Of course that’s boring. It’s only when we demo live hacks and something does happen on screen that people begin to perk up.

Another thing that occurred to me is, “how can anyone make sense of all this stuff?!” There I was in a literal sea of hundreds of infosec companies, most of which I’d never heard of, doing my best to understand their value propositions while being peddled free software and toys by booth babes. There was tons of NAC, Identity Management, lots of webappsec, and gawd, anti-Malware/Spyware of every kind for every device. Whew! When speaking with a few vendors, they did their job well describing how they differentiate from their competitors: “We go faster, more in-depth, find more of the (un)known, and we focus on the data.” It all sounded somewhat interesting, but in the back of my mind I was thinking, “why do I need this?”

There is a lesson to be learned here by those of us in the web application security field, myself included, because outsiders probably feel the same way about our field. Everything we talk about, including XSS, CSRF, SQL Injection, technical, logical, and the other confusing terms, is all cool, but have we really described why this stuff is important to eliminate? I mean, really really. This might be what Sylvan has been driving at when asking how to prove our worth or value in some type of quantifiable terms. Answering the fundamental question, “why?”.

Web Hacking contest at RSA (2007)

Security Innovations hosted their Interactive Testing Challenge, which was essentially a web hacking competition. The whole format and presentation style was very well done, impressive even, especially the finals with live commentary. SI set up a banking website with a bunch of vulnerabilities. Contestants had 30 minutes to find 5 flaws to qualify for the next round. RSnake and I happened to stumble across it while wandering the show floor, but unfortunately he had to bail for a flight 10 minutes in, so we only got through 3 vulnerabilities. There’s always next year.

Big props go out to Jordan Wiens, contributing editor of Network Computing magazine, who won the whole thing! During an interview just before the final face-off, I found out Jordan is no ordinary reporter. No no! He has a B.A. in Mathematics, is a well-versed Unix admin, and has some solid web application security chops to boot. Watch out when being interviewed by this guy; he knows the tech.


Jordan wins his shiny new GPS!


The big-multi-screen display so the audience could follow the action.


The contestants getting their instructions from the ref just before the final face-off


The announcer asking the contestants how they feel about the upcoming challenge.

Thursday, February 08, 2007

WASC Meet-Up Rocked! (RSA 2007)

The meet-up was simply fantastic and way larger than anticipated. We were thinking maybe 20-30 people would show up for a nice-sized turnout, but instead we had about 60-70. There were lots of familiar names in attendance, with an amazing amount of webappsec thought power. Off the top of my head, in the room there was Mozilla, Imperva, WhiteHat Security, SPI Dynamics, Intel, Watchfire, Cenzic, Application Security Consulting, Bank of America, Proginet, NetContinuum, Citrix, SecTheory, Walmart, Federal Reserve Bank, ICSA Labs, etc. It was very cool getting a chance to catch up, hear what different people are working on, and brainstorm ideas for the future. That’s really what WASC is all about. WASC has a lot of work to do, and we’re also going to need some sponsorship for a private party next time.

Several people took pictures; please post your URLs.


Robert Auger (cgisecurity.com), Jeremiah Grossman (WhiteHat Security), Caleb Sima (SPI Dynamics), Billy Hoffman (SPI Dynamics), Arian Evans (WhiteHat Security), Erik Peterson (SPI Dynamics), RSnake (ha.ckers.org)

I pulled this one from Anurag's blog since it was a great group photo.


Erik Peterson (SPI Dynamics), Billy Hoffman (SPI Dynamics), Steve Orrin (Intel)


Arian Evans (WhiteHat Security), Caleb Sima (SPI Dynamics)

The exchange of fashion tips in addition to the finer points of automated scanning.


Erik Peterson (SPI Dynamics), Steve Orrin (Intel)

Erik's hiding something, I can tell.


Dawn van Hoegaerden (WhiteHat Security), RSnake (ha.ckers.org), Rachel Miller (SHIFT Communications)

Who says XSS doesn't work with the ladies.


Caleb Sima (SPI Dynamics), Robert Auger (cgisecurity.com)

Robert reaching for his tazer.


Billy Hoffman (SPI Dynamics), Arian Evans (WhiteHat Security)

"I have the world in my hand"...."dude, no you don't."


Daniel Veditz (Mozilla), RSnake (ha.ckers.org)

Sorry about those Mozilla sploitz, really.


Robert Auger (cgisecurity.com)

The Clark Kent pose. Check out the green press badge.


"Hackers Attacking the Internet" on FOX while we ate. We for once have an alibi since we were in fact eating lunch at the time.


Scott Parcel (Cenzic), James ? (SecTheory)

WebAppSec vs. Network Sec


Mark Kraynak (Imperva), RSnake (ha.ckers.org), Scott Parcel (Cenzic)

Hack, scan, and firewall. How bout that!


Eric ? (Adobe), RSnake (ha.ckers.org).

We're going to have to name RSnake's smile since it's identical in every photo. Maybe "Hacker Steel" or something.


RSnake (ha.ckers.org), Anurag Agarwal (myappsecurity.blogspot.com), Billy Hoffman's hands.

Even webapp hackers have to eat.


Brian Bertacini (Application Security Consulting), Anurag Agarwal (myappsecurity.blogspot.com)


Yes, I know. :)

Friday, February 02, 2007

Samy pleads guilty

As reported by SC Magazine, "The man responsible for unleashing what is believed to be the first self-propagating cross-site scripting worm has pleaded guilty in Los Angeles Superior Court to charges stemming from his most infamous hacking."

What Samy did was wrong and, whether he meant it or not, caused damage. The good thing is his sentence doesn't look outrageous, as has been seen in other cases: some probation and restitution. He'll be able to get on with his life, but it doesn't look good on a resume, that's for sure. Maybe in a couple years we'll see him at Defcon or BlackHat starting a consultancy like Kevin Mitnick. :)

If we could start all over...

A couple months ago I had lunch with Collin Jackson, Andrew Bortz, and Dan Boneh from Stanford University’s Security Laboratory, the same guys who created SafeHistory, SafeCache, and a bunch of other cool stuff. Their mission is conducting security research that everyone can benefit from. We talked all around web application security, how everything is completely broken, and brainstormed ideas of what might be done about it. Sometime during the conversation they asked something that caught me completely off guard: “if we could start all over, what would you do differently?”

Like most people, I’m always locked up in the gory details of the current environment and rarely afforded an opportunity to think in purely theoretical terms. Any proposed solution must take prioritization and resistance to migration into account, which makes real progress difficult. Now imagine for a moment that you could forget all about how many websites are vulnerable, what it would take to fix all the code, and dismiss any concerns about overhauling the security infrastructure of ANYTHING. You have a magic wand. For myself, it took a long while before I could piece together some core ideas (below).

This was a fun exercise, and before scrolling down to read my thoughts, I encourage everyone to try it. We might generate some crazy innovative ideas to draw from. And please don’t think that I have all the answers, I don’t. Just some ideas.

1. Complete language separation of JavaScript from HTML
2. Nuke Basic and Digest Auth for something way more secure, but just as simple.
3. HTTP stripped down and streamlined (no off-domain referers, no passive third-party cookies, native support for URL and cookie encryption)
4. Browsers only support well-formatted XHTML
5. Compilable web pages (HTML/JavaScript) into byte-codes
6. SSL certificates may contain trademarked logos that show up in the browser chrome
7. Browser integration of Secure Cache, Safe History, and Netcraft’s anti-XSS URL features in their toolbar
8. Implement Content Restrictions
9. Same-origin policy applied to the JavaScript Error Console
10. Restrict websites with public IP addresses from including content from websites with non-routable IP addresses (a rough sketch of this check follows below)
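As a rough sketch of what idea #10 might look like in practice, here's a hypothetical helper using plain name resolution (not an actual browser feature): a browser or proxy could refuse to fetch embedded resources that resolve to non-routable addresses when the parent page is public.

```python
# Rough sketch of idea #10: a public page should not be allowed to pull in
# content that resolves to a non-routable (private) address.
# Hypothetical helper only, not an existing browser feature.
import ipaddress
import socket
from urllib.parse import urlparse

def resolves_to_private(url):
    """True if the URL's host resolves to a private/loopback/link-local address."""
    host = urlparse(url).hostname
    if host is None:
        return False
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return addr.is_private or addr.is_loopback or addr.is_link_local

def allow_embedded_resource(parent_url, resource_url):
    """Block private-address resources when the parent page is publicly routable."""
    if not resolves_to_private(parent_url) and resolves_to_private(resource_url):
        return False
    return True

# Expected: False (a public page pulling content from an internal address)
print(allow_embedded_resource('http://example.com/', 'http://192.168.1.1/admin'))
print(allow_embedded_resource('http://example.com/', 'http://example.org/logo.png'))
```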

Don't trust server-side security

This morning I got a heads-up about a Websense alert regarding the Super Bowl XLI / Dolphin Stadium website. Apparently the popular website was silently defaced to include a hidden snippet of JavaScript that exploits MS06-014 and MS07-004, loading an NsPack-packed Trojan keylogger/backdoor. Very bad for any unpatched IE browsers that hit the website.
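For illustration, here's a rough sketch of the sort of check that flags a hidden, off-domain script or iframe injected into an otherwise trusted page. The heuristics are hypothetical and have nothing to do with how Websense actually detected this:

```python
# Rough sketch: flag hidden iframes or off-domain script includes in a page,
# the telltale pattern of this kind of silent defacement. Hypothetical logic,
# not how any vendor actually detects it.
import re
from urllib.parse import urlparse

def suspicious_includes(page_html, page_domain):
    findings = []
    # Hidden or zero-sized iframes are a classic drive-by-download marker.
    for m in re.finditer(r'<iframe[^>]*>', page_html, re.IGNORECASE):
        tag = m.group(0)
        if re.search(r'(width|height)\s*=\s*"?0"?', tag) or 'display:none' in tag:
            findings.append(f'hidden iframe: {tag}')
    # Script tags sourced from a different domain deserve a second look.
    for m in re.finditer(r'<script[^>]+src\s*=\s*"([^"]+)"', page_html, re.IGNORECASE):
        src_domain = urlparse(m.group(1)).hostname
        if src_domain and src_domain != page_domain:
            findings.append(f'off-domain script: {m.group(1)}')
    return findings

sample = '<iframe src="http://evil.example/x.html" width="0" height="0"></iframe>'
print(suspicious_includes(sample, 'dolphinstadium.com'))
```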

We've all seen this kind of thing happen before, and I'm sure it'll happen again and again and again, etc. What got me thinking was a piece of conventional wisdom we often hear: "Don't Trust Client-Side Security." Fair enough, but in this case the opposite is true. This was a popular and trusted website, not some hacker/warez/pr0n/serialz hangout spot. I think we need to start designing web browsers and safe-surfing habits around this concept:

Don't Trust Server-Side Security