Friday, December 28, 2007

10 Maui fun facts

Rather than share all my vacation details, which would likely make those of my readers suffering through the cold of winter jealous, I'll instead post a few fun facts about Maui. Things visitors are unlikely to read in any guidebook.

1) Temperature changes with elevation rather than with the seasons. At sea level it's 80-90F, from 500-5000ft it's 65-80F, and above that you are probably headed to the top of the Haleakala volcano (10,000ft). Snow/frost has been known to happen from time to time at the peak. I heard that on the Big Island (the larger island to the south) this winter someone surfed in the morning and snowboarded in the afternoon. I don't know of any other place in the world besides Hawaii where you can do that.

2) Seasons only impact the waves. In the winter the waves are on the north and west shores, and in the summer, the south. Freak swells happen from time to time, but not often. And speaking of waves - lifeguards are almost nowhere to be found. If they are around, it's normally for the surfers, not the tourists, who get towed back in by jet ski. The surfers are usually the ones who pull out drowning tourists, and they get upset when they have to because it means they missed a nice set.

3) Tourists can be found on either the west or south shores, as these are the drier and warmer sides of the island. Lots of hotels, resorts, shopping, and golf courses. I don't recall many residents who actually golfed, so that must be primarily a visitor thing. The rest of us are on the beach on the north shore.

4) 99% of the thousands of people who move to Maui every year catch something called rock fever, whose symptoms cause them to move back within the first 12 months. You see, Maui is a small place with not a lot to do for most mainlanders. There are no pro/college sports games, theme parks, nightclubs, or anything like that. Those who are not REALLY outdoorsy, the kind who enjoy the beach, hiking, fishing, hunting, and a lot of the same every day, will catch the fever. Their stay on Maui will have been just an extended vacation.

5) Maui residents culturally don't understand the concept of a 2-week vacation - you know, where you go somewhere to get away from it all. I had no idea what it was when I got my first job in California when I was 19. Accruing time off? WTF!? No one from Maui really does that. I mean, when you're from Maui, where are you really going to go? Oh right, Vegas, but that'll be a very special trip and only once in a great while. When employees need time off it probably means the waves are up and they are not going to show up anyway.

6) Dressed up is considered closed-toe shoes instead of slippahs (sandals), a button-down (aloha) shirt rather than a faded T, and unripped jeans or pants of some kind versus surf shorts. And that attire you probably only wear to a wedding, a funeral, or hmmm, not much else. The rest of the time it's casual wherever you are and whatever you are doing. I don't think anyone on the whole island actually owns a suit except for maybe the lawyers, and then it's only worn in court.

7) A good car is one that runs and is street legal - the rest is basically luxury. If a car doesn't have any rust or dents, that's considered mint. Lifted pickup trucks, hatchbacks, and minivans are the vehicles of choice. And driving distance is always measured in time, never miles, since it could take you 3 hours to go 15 miles depending on where you are.

8) Local food is NOT Hawaiian food – BIG difference. Local food is an odd fusion of ingredients inspired by the Portuguese, Japanese, Chinese, Filipinos, and of course the Hawaiians. It consists of a lot of Spam, sausage, chicken, and steak which has been breaded and deep-fried or baked in pools of teriyaki sauce, covered in gravy, then served with tons of white rice and macaroni salad. These dishes are referred to as plate lunches and yes, this stuff will kill you, but slowly, and it'll taste good. :). Hawaiian food, which I've never been fond of, includes poi, Lau Lau, and kalua pig & cabbage.

9) Lingo. Maui – well, Hawaii – has its own very unique dialect. Anything on the east side of the island is referred to as "upcountry", unless it's on the extreme backside, which is called "hana-side". When going to "town", that's almost always Kahului. Town names are rarely spoken and travel plans are typically described directionally, for example "going to the south, west, or north shore." And when you're on one side of the island and traveling to the other, you are going to the "other side". When someone yells at you and says "Eh Brah", that's the equivalent of "hey man". And when someone asks you if you want to go "grind", they're not asking you to dance provocatively, but instead whether you are hungry and want to eat – a lot. Oh, and don't try to blend in by speaking like the locals, it'll just make you look really dumb.

10) Yes, Hawaii is a state. This is for the many people across the U.S. whom I've had to convince that my Hawaii driver's license was valid and not a fake. Try renting a car in Alabama with one of these, I dare you.

So what did I miss?

Maui was a lot of fun, but more on that later. Today I've got to get back to digital reality - wade through mountains of email, unread RSS feeds, and unplayed voicemail. Looks like while I was away there was a lot of chatter about PCI section 6.6 and WAFs, which makes sense since the compliance date is only about six months away. Gary McGraw (DarkReading) and Joel Dubin (SearchSecurity) had some sage advice, but it was Ryan Barnett's words that really spoke to me. Ryan discusses vulnerability REMEDIATION with respect to PCI, which is all too often overlooked, and highlights some interesting verbiage. And since Ryan works closely with ModSecurity, it's fitting to pass along that Ivan Ristic just announced the RC for version 2.5 and it comes with some slick-sounding features.

Google's social network Orkut also took its turn dealing with the relatively new phenomenon of Web Worms. This worm spread to a reported 650,000 users, short of Samy's 1 million, but still enough to turn some heads. There was a lot of media and blog coverage, and the source code was made available for analysis. Amazing what a few lines of JavaScript can accomplish. What's still surprising to me in all of this is the relative infrequency of these attacks and that Web Worms have yet to carry a malicious payload. Enjoy it while you can; it won't last forever.

Ironically, while at the beach, I made Slashdot by sharing my personal "Web" surfing habits and discussing how to defend against CSRF attacks. Nah nah :) – this was the result of an interview I did some weeks back with Sarah D. Scalet of CSO and it was just recently posted. Gotta hand it to the Slashdot crowd for their consistency in NOT reading the story before commenting. The first person actually asks "How exactly is this strategy going to protect you from a keylogger?" and the conversation degrades from there. Sheesh. But Marcin (TSSCI) posted a nice little trick I haven't tested out yet to simultaneously run multiple Firefox profiles, which should have nearly the same effect I was going for.

And while sla.ckers.org and XSSed.com are outing vulnerable websites, other websites suffered some Web security related incidents. Hundreds of MD Web Hosting customers' websites were SE0wN3d, the F-Secure Forum was defaced by a Turkish group, adult-entertainment hoster Too Much Media Corp, which supports thousands of websites, was compromised, an Ohio court website was penetrated using Credential/Session Prediction where several victims suffered identity theft, and the Tucson police department website is having its fill of SQL Injection as well. All fun and games in webappsec land.

Friday, December 07, 2007

Maui Vacation 2007

We made it! 2007 was a busy year to say the least. 40 public appearances, 200 blog posts, about a dozen published articles, a book, 7468 sent emails, and who knows how many airline miles. This year has been fun with many memories, but also extremely tiring. Having just gotten back from Texas (*mmm, ribs*) and New York (*brrr*), my last business trip of the year, it's time to turn off the techno-stuff and vacation a little.

On Monday I'll be heading back home to Maui for a couple weeks with the family, as we do every year. During that time there will be no blog posts, email responses, or communication of any kind. Because you know I'll be busy at the beach (surfing the winter swell), playing with the kids, having a few BBQs, maybe jumping off a few waterfalls, and working in some Brazilian Jiu-Jitsu – the usual stuff. And if I can get rid of this pasty white complexion, that'll be good too. ;)

Merry Christmas everyone!

Tuesday, December 04, 2007

Full Disclosure is dead

Businesses must realize that full disclosure is dead, a contributed article I wrote for SC Magazine. This is nothing like my usual webappsec banter, nor is it the stereotypical FD talking points everyone has heard and debated a million times before. Instead I tried to articulate my current views on the subject of vulnerability disclosure, which are probably very different from most, and where I believe the industry is heading.

“Full Disclosure is dead. Let me explain why. The information security world has changed, even if some don't see it or are unwilling to accept it. Vulnerability disclosure discussions based upon ethics are morally antiquated and naïve at best considering today's cyber-security climate...”


One thing I forgot to mention is that many software vendors will try to capitalize on the fact that fewer vulnerabilities will get reported and say it's the result of "more secure software".

Tools, tools, and more tools

People love tools. For a guy, a freshly released pen-test tool can be a lot like getting your hands on a shiny brand-new toolset, even better if it's powered in some way. Hint, Christmas is coming soon. :) They're something you just can't wait to rip the box open on and start playing around with. So it's in this spirit that I point out a couple of new tools with some features that sound like a lot of fun.

1) PortSwigger released a new version of Burp Suite, the same great stuff plus a whole lot more. The new stuff includes the ability to analyze session token randomness, manual and intelligent decoding and encoding of application data, a utility for performing a visual diff of any two data items, and more. Nice!

2) Stefano Di Paola, of Minded Security, released SWFIntruder. SWFIntruder (pronounced Swiff Intruder) is the first tool specifically developed for analyzing and testing security of Flash applications at runtime. Most of us have been using odd types of decompilers for a while, but nothing purpose built for the task. For a first release, this one sounds like it has promise.

* And if you are looking for a resource that describes a whole lot more of the web application pen-test tools out there, look no further than Andre "dre" Gironda's post on "Why crawling doesn't matter." He intended the post to educate for a different purpose, but the content is a veritable encyclopedia of pen-testing tools and their capabilities. Many of the ones I hadn't even heard of sound cool as well.

Wednesday, November 28, 2007

It's not the size that counts, it's how you use it

I was reading Mike Rothman's warning about the dangers of TinyURL, the famous URL shortener. Mike's right of course. Someone could easily disguise a malicious URL and get you to click on a link that you wouldn't have otherwise, because it would have looked like an obvious XSS attempt by nature of the visible HTML tags. If you did click on the link you'd be redirected (301) to the malicious XSS URL and your session cookies to just about any website would have gone bye-bye, or worse.

What I think we're all missing though is that by just deciding not to use TinyURL we're really not solving the client-side (browser) problem. What about the many other identical services like doiop, notlong, and shorturl? We can't possibly memorize the names of all of them and remember not to click on their links. The point is if someone really wanted to get you to click on a malicious XSS link they will, no doubt. All they'd have to do is disguise and redirect it off any ol' unrecognizable domain they control and you'd never get a chance to spot the XSS payload beforehand.

So, here's my feature request for the browser vendors that I don't expect to be implemented until n + 1 versions from now, n being the version currently in beta. That roughly means Firefox 4 or Internet Explorer 9. The URL from any 3XX redirect should not be allowed to contain <> characters. Or perhaps something less restrictive, say maybe no HTML tags allowed. Would that break anything? Sure, it's not a perfect solution, but we have NOTHING now. Anyone want to try this out first by making a browser add-on?
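In the meantime, here's a rough sketch of the kind of check I'm proposing, written in Java as a standalone checker rather than an actual add-on. The class name, URL, and the simple regex are all my own placeholders, and an encoded payload (%3C) would obviously need decoding first.

import java.net.HttpURLConnection;
import java.net.URL;

// Rough sketch (not an actual browser add-on): fetch a shortened URL without
// auto-following redirects, then refuse to follow any 3XX Location header
// that appears to embed HTML markup. Naive on purpose.
public class RedirectSanityCheck {
    public static void main(String[] args) throws Exception {
        String target = args.length > 0 ? args[0] : "http://tinyurl.com/example";
        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        conn.setInstanceFollowRedirects(false); // inspect the redirect ourselves
        int status = conn.getResponseCode();
        if (status >= 300 && status < 400) {
            String location = conn.getHeaderField("Location");
            if (location != null && location.matches(".*<[a-zA-Z!/].*")) {
                System.out.println("Blocked: redirect target embeds HTML: " + location);
            } else {
                System.out.println("Redirect looks clean: " + location);
            }
        } else {
            System.out.println("No redirect (HTTP " + status + ")");
        }
    }
}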

How to tell when you are SE0wN3d?

For many successful attacks there are several ways to tell when something has been owned. When pages on your web server contain malware that's infecting visitors. Or perhaps when the web servers begin making outbound Internet connections. Databases may see huge CPU spikes and network usage from data going out the door due to a SQL Injection issue. Access to DB records that should NEVER be touched (honeytokens) is another good indication. Web users will tell you right away when their passwords are changed, money is missing, something of theirs has been defaced, or perhaps they have a new friend named Samy. The list goes on, but what got me thinking was the SE0wN3d hack that targeted the blog for former U.S. Vice President Al Gore's Inconvenient Truth movie.

In this case the standard IDS stuff would not have applied. No money or value was lost, no user accounts were hacked, no mysterious outbound connections, no malware payloads present - only a silent defacement containing an HTML link that no one was ever expected to see or click on. The SE0wN3d hack was used simply to boost the search engine rank of another website - and not even through the blog spam we're used to dealing with. So my original question stands, how did they find out? And for that matter, if your website/blog was hacked in this way, would you notice? How would you notice? Maybe many thousands of blogs are already hacked for this purpose and we don't realize it yet. For all I know this blog has been hacked to boost Andy, ITGuy to his #1 status on Google and the only way to tell would be through viewing source.

Hmm…. Andy? :)
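For anyone who wants to automate that source-viewing exercise, here's a crude sketch that downloads a page and flags anchor tags whose inline style makes them invisible. The class name and URL are placeholders, and regex-based HTML parsing is fragile, so treat it as illustrative only.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Crude sketch: fetch a page and report anchor tags carrying an inline style
// of display:none or visibility:hidden - a classic hiding spot for SEO spam links.
public class HiddenLinkSniffer {

    private static final Pattern ANCHOR =
            Pattern.compile("<a[^>]*href=[\"']([^\"']+)[\"'][^>]*>", Pattern.CASE_INSENSITIVE);
    private static final Pattern HIDDEN_STYLE =
            Pattern.compile("display\\s*:\\s*none|visibility\\s*:\\s*hidden", Pattern.CASE_INSENSITIVE);

    public static void main(String[] args) throws Exception {
        String page = fetch(args.length > 0 ? args[0] : "http://example.com/blog/");
        Matcher m = ANCHOR.matcher(page);
        while (m.find()) {
            if (HIDDEN_STYLE.matcher(m.group(0)).find()) {
                System.out.println("Suspicious hidden link: " + m.group(1));
            }
        }
    }

    private static String fetch(String address) throws Exception {
        StringBuilder sb = new StringBuilder();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(address).openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            sb.append(line).append('\n');
        }
        in.close();
        return sb.toString();
    }
}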

Introducing AntiSamy

Creating a cross-site scripting filter that allows user-submitted HTML/CSS to pass through, but does not allow malicious content (usually coded in JavaScript), is a lot like writing your own encryption algorithm. It's not something that you should be building yourself because it's extremely easy to make mistakes and it takes an army of smart people scrutinizing it before it's considered trustworthy. The difference is there hasn't been an alternative to rolling your own so far. That's why each Web Mail provider, social network, and blogging platform has its own implementation, which gets broken almost routinely. The MySpace Worm written by Samy is a prime example of how these filters can fail in spectacular fashion, which is where the AntiSamy project comes in.

The OWASP AntiSamy Project, developed by Arshan Dabirsiaghi, is an effort to create an open source (BSD licensed) API that'll allow user-submitted HTML/CSS, but severely limit the potential for malicious content (JavaScript) to get through. XML policy files define what a user is allowed to submit, which can either extend functionality or limit the attack surface. Sample eBay, MySpace, and Slashdot policies have already been created to use as a guide. Arshan was good enough to set up a live demo website of AntiSamy so people can play around with the various filter policies and test whether they can be bypassed. Of course Arshan also needs help making sure the API has enough usability so the average user can do what they need to.
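Judging from the project documentation, basic usage looks roughly like the sketch below. I haven't verified it against the latest release, and the tainted input and the policy file name ("antisamy-ebay.xml", one of the samples mentioned above) are just example values.

import org.owasp.validator.html.AntiSamy;
import org.owasp.validator.html.CleanResults;
import org.owasp.validator.html.Policy;

// Minimal AntiSamy sketch: load a sample policy, scan some user-supplied
// markup, and keep whatever survives the policy.
public class AntiSamyExample {
    public static void main(String[] args) throws Exception {
        String dirtyInput = "<div>Hi there<script>alert(document.cookie)</script></div>";

        Policy policy = Policy.getInstance("antisamy-ebay.xml"); // sample policy file
        AntiSamy antiSamy = new AntiSamy();
        CleanResults results = antiSamy.scan(dirtyInput, policy);

        System.out.println("Clean HTML: " + results.getCleanHTML());
        System.out.println("Errors    : " + results.getErrorMessages());
    }
}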

Personally I think this is a fantastic project and something that could really take off usage-wise - especially if versions are ported beyond Java to .NET and PHP. A lot of developers who are aware of the dangers of XSS are building more and more web applications that expect to take in dynamic user-supplied content (Web 2.0). This will give them an easy option to do so safely and securely. Excellent work Arshan!

Tuesday, November 27, 2007

My new blog goal, vanity searches

It's been six months since I achieved my blog goal of showing up first on Google when searching for "Jeremiah Grossman". After that I figured it was all downhill - that is until someone mentioned to me that my blog shows up on the first search results page for just "Jeremiah". OMG I thought, according to Google I'm now one of the most popular "Jeremiahs" in the world! How cool is that!?! Mom will be proud. :)

As it stands I am #6, but there are only three different Jeremiahs ranking higher. Of course the most popular among them is the prophet Jeremiah. That guy is going to be tough to beat. :) But hey, I need a new goal and I'm up for the challenge. All it takes is links these days. So in honor of my newfound mission, I've updated my blog tag line to:

“A page about me to show up first on Google when searching for Jeremiah”

A page about me to show up first on Google and it FINALLY has!

Inconvenient Truth blog, SE0wN3d!!1

You know website security is getting serious when not even the promotional blog for former U.S. Vice President Al Gore’s Inconvenient Truth flick is safe. According to an article by Robert McMillan of IDG News, “An Inconvenient Truth, has been hacked and is hosting links to Web sites hawking online pharmaceuticals.” Apparently the bad guys (probably better described as black hat SEOs) are attempting to boost their search engine rankings by invisibly linking to their websites entitled "Xanax On Line," "Viagra," and "Buy Valium Online".

One expert said it was probably an unpatched WordPress vulnerability, which is entirely likely, but it's hard to say for sure. It's also hard to say if the attack was targeted or if the blog was simply caught in the net of a mass WordPress exploit scan. The other question is, if the links were buried invisibly in the source, how did anyone notice? Anyway, it just goes to show that even seemingly low-value targets could in fact have significant value to someone else.

Tuesday, November 20, 2007

PayPal’s Vulnerability Disclosure Policy includes researcher protection

Update 11.28.2007: Security Retentive, a PayPal employee who was personally involved with crafting the language of their disclosure policy, responds to public comments.

Update 11.21.2007:
Kelly Jackson Higgins posted a story covering the post.


Very few website owners post how security researchers may contact them with regard to a discovered vulnerability. This seems odd since most experts believe the vast majority of websites contain them. Microsoft, Yahoo, Google, and now PayPal have their contact policies posted. For the rest of the Alexa 500, if they have a posted policy at all, it is extremely hard to find.

PayPal's newly posted vulnerability disclosure policy should be given special attention because they've done something unique and, in my opinion, very intelligent security-wise. If the security researcher follows their stated procedure, which is entirely reasonable, PayPal states plainly that they'll not engage in legal action! This should give the researcher confidence that nothing bad is going to happen to them as a result. I think this is a first, and if not, I've never seen it.

“To encourage responsible disclosure, we commit that – if we conclude that a disclosure respects and meets all the guidelines outlined below - we will not bring a private action or refer a matter for public inquiry.”

This indicates that PayPal understands that security researchers are the good guys, but that they are wary of disclosing website vulnerabilities for fear of legal issues. As a result researchers may choose to either not disclose AT ALL or anonymously post to a public forum, as has been done so many times on sla.ckers.org. That way they are personally protected, but the problem is that this puts PayPal and their customers in a bad position.

PayPal either doesn't get the vulnerability data they need at all, waiting for a bad guy to find the exact same thing and exploit it – or they have to run a fire drill once it's made public. And they might not see it published right away, causing a time lag. PayPal would much prefer the opportunity to handle the issue in a timely fashion and has no interest in pursuing the matter further because there is really no benefit to it. Legal entanglements only prevent the good guys from being responsible and don't deter the real bad guys one bit. And PayPal would know, given the nature of their business.

PayPal sets a good example, hopefully other website owners will take notice and do the same.

WhiteHat Security is Hiring

Normally I don’t post WhiteHat Security job listings here, but enough people ask me directly anyway so I figured why not. We’re constantly growing and have very specific hiring needs, specifically in our operations department where a lot of the magic happens. Arian Evans, Director of Operations, put together the content below describing the type of people we’re looking for and what the role entails. If it’s appealing to you, email us at wh-jobs _ at _whitehatsec.com with your resume and let us know.



Why Work For White Hat Security?

WhiteHat Security needs a few good people.

Do you want to become an expert at web application security?

Do you want to learn the ins and outs of general software security?

Do you want to become a legitimate hacker?

Would you like the challenge of testing your skills and your mettle by hacking some of the most important and famous software on the planet?

Would you like to see how some of the largest websites in the world actually work, and rethink your assumptions about the Internet?

If your answer to any of these is yes, then you want to work for WhiteHat Security, and we just may have a place for you.

What do you need to work for White Hat Security?

Ideally we look for some combination of the following:
  • Web software programming experience
  • Any software programming experience
  • Math skills
  • Strong reasoning & logic skills
  • Customer service and process management skills
You do not need a background in web application security, information security, or any kind of security to work at WhiteHat Security. That is what we do and if you are smart, we will teach you all of that quickly. All you need to bring to the table is the right aptitude, eagerness to learn, and willingness to fit in with and strongly support a team-oriented environment.

What do you get working for White Hat Security?

1. Sex Appeal -- Gain essential, in-demand software security skills

Application Security is one of the hottest new fields, full of potential for career & income growth, and flexibility. Combining application security knowledge with existing skills in software development, quality assurance, or networking/network security enables an individual to pursue either a specialized career in security, or a general career path in development or QA armed with a security toolkit that will set one apart from one's peers. The long and short of it is that recruiters will be beating down your door in a matter of years if you excel.

2. Work for the Industry Thought Leader in Application Security

Did you ever want to work for the "best" at something?

Even better -- do you want to work *with* the best at something?

Would you like a chance to become *one of* the best at something?

In the realm of scalable black box testing of web applications -- WhiteHat Security is #1. Many competitors have great ideas, cool flashy products, or some smart consultants, but WhiteHat Security is the only company on the planet assessing over 600 of the world's largest websites in production on a weekly basis.

3. WhiteHat Security is Customer Service

At WhiteHat, servicing our customers is number one. That is *what* we do. Did you ever work for a company that takes its customers for granted? Makes fun of them behind closed doors? Exhibits internal arrogance? That is the exact opposite of what we do at WhiteHat.

Would you like to work with really smart and appreciative clients at some of the largest companies in the world, and help them identify, understand, and solve some of their most challenging & difficult problems? That is what we do on a daily basis. If you want to help us succeed in our mission -- come work for us.

4. WhiteHat Security is a Team.

WhiteHat Security has a fun, team-focused environment. All management speak aside -- it is a great place to work closely with smart people that you can depend on, and who will need to be able to depend on you.

How can you beat that?

Where in the World is Jordan Wiens

OK, this is in no way security related, unless you count Jordan Wiens having Dell boxes stacked to the ceiling behind him while using a Mac as information leakage. Anyway, Jordan and I got to video chat yesterday and he was test driving Leopard's new iChat customized background functionality. Really funny stuff.




Monday, November 19, 2007

No one uses applescript anyway

Jordan Wiens posted a funny example on Apple's website of how security solutions sometimes hinder normal use. For example, when searching for "applescript" on search.apple.com the "script" part of the string is removed and a search for "apple" is performed instead. It's hard to tell what type of intermediary device this is or what exactly it's trying to do, but whatever it is, it doesn't like the word "script". This makes it impossible to search for applescript on their website.
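To illustrate what a naive blacklist like that ends up doing (I have no idea what Apple's intermediary device actually is; this is pure speculation that just reproduces the symptom):

// Speculative sketch of a keyword-stripping filter: it mangles legitimate
// queries like "applescript" while doing little against a real attacker.
public class NaiveScriptFilter {

    static String filter(String query) {
        return query.replaceAll("(?i)script", ""); // blindly remove the blacklisted word
    }

    public static void main(String[] args) {
        System.out.println(filter("applescript"));               // prints "apple"
        System.out.println(filter("<script>alert(1)</script>")); // prints "<>alert(1)</>"
    }
}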

Duplicates, Duplicates, and Duplicate Rates

Following Larry Suto's analysis of NTOSpider, IBM's AppScan, and HP's WebInspect, where he compared code coverage to links crawled and vulnerabilities found, some questioned the accuracy of his results. Personally I didn't get the criticism because with only two published reviews this year, we should be grateful to see data of any kind related to web application vulnerability scanners. I chalked the comments up as the standard scanner vendor or product reseller defense mechanism. Besides, Larry Suto is a professional source code reviewer and if he can't figure it out, what chance does anyone else have? Well, except for the vendors themselves, and this is where it gets interesting.

HP/SPI felt the issue important enough to research and respond to directly. Jeff Forristal (HP Security Labs) set up an environment to redo Larry's work and measure WebInspect v7.7 and v7.5 against two of the three websites. While everyone is encouraged to read the report and judge for themselves, a couple of things really stood out to me in the data charts (see below) - specifically false positives and "vulnerability duplicates". I've only talked about the problem of vulnerability duplicates briefly in the past when describing how customers eventually mature to a Quality phase where less equals more. Obviously people prefer not to see identical vulnerabilities reported dozens/hundreds of times.


If you look at the chart columns “Total # Findings”, “Raw False Positive”, and “Accurate # of Instances” - these compare what the scanner reported, to what was false, to what vulnerabilities were valid and unique. The two scanners reported nearly identical validated issues, 5 on the Roller website and 113/110 on OpenCMS. In the false-positive dept, WebInspect v7.7 did fairly well only having between 0% and 16% on the two websites, while v7.5 performed a little worse at 2% and 36%. But what you have to look closely at is the ratio of Total # of Findings to Accurate # of Instances (minus the falses) as this will measure the level of vulnerability duplicates.

On the Roller website v7.7 reported 40 unvalidated issues, with v7.5 reporting 55, all of which boiled down to 5 unique vulnerabilities. That means only 12% of v7.7's results were meaningful! For v7.5 it was 9%! On OpenCMS, the 1,258 unvalidated issues reported by v7.7 (3,756 for v7.5) came down to 113 unique vulnerabilities. Once again only 9% and 3% of the results were necessary. Shall we call this a Vulnerability Duplicate Rate? That's a lot of data to distill down and it must take a lot of time. For those who use these scanners, is this typical in your experience and expected behavior?
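To make the arithmetic explicit, the ratio I'm describing is simply unique validated findings divided by total reported findings. A quick sketch using the numbers above (taken straight from the charts):

// Back-of-the-envelope "meaningful results" ratio from the paragraph above:
// unique validated findings / total unvalidated findings reported.
public class DuplicateRate {

    static double signalRatio(int uniqueFindings, int totalReported) {
        return 100.0 * uniqueFindings / totalReported;
    }

    public static void main(String[] args) {
        System.out.printf("Roller  v7.7: %.1f%% meaningful%n", signalRatio(5, 40));     // ~12%
        System.out.printf("Roller  v7.5: %.1f%% meaningful%n", signalRatio(5, 55));     // ~9%
        System.out.printf("OpenCMS v7.7: %.1f%% meaningful%n", signalRatio(113, 1258)); // ~9%
        System.out.printf("OpenCMS v7.5: %.1f%% meaningful%n", signalRatio(113, 3756)); // ~3%
    }
}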

I know Ory is reading this… :), so can you give us an indication of what the accuracy rating might have been for AppScan?

Samy is my hero (AppSec 2007)

Update 11.21.2007: Robert McMillan, Senior Writer, for IDG News was in attendance for the conference. He put this piece of video footage interviewing Samy and highlighting the conference up on PC World. There is also a Swedish version if you prefer. :)

Garrett Gee took by far my favorite picture of OWASP & WASC AppSec 2007. :) That's me, Samy Kamkar (Samy Worm), and Robert "RSnake" Hansen just after Samy's speech. Wayne Huang (Armorize) also took some fantastic snaps and in both rolls you'll see the faces of many familiar names from our industry. An event to be remembered.

Friday, November 16, 2007

What should I blog about?

RSnake introduced me to this blogging thing about a year and a half ago. At first I hated him for it because after trying it out it felt like another job taking up too much time - something else I had to keep up with, and I wasn't very good at it (writing) anyway. Not wanting to quit, I decided to shift the goal to showing up #1 on Google for "Jeremiah Grossman" (mission accomplished). I also decided to view this blog as an outlet for whatever personal and Web application security issues happened to be on my mind. Whether or not anyone actually read it was something I forgot all about.

Then something changed. Traffic grew exponentially (10-20K uniques per month), the blog was slashdotted a couple of times, RSS subscribers went WAY up, and the media began to publish articles based upon the content. At the same time I started receiving a lot of extremely positive feedback via email, comments, and even face-to-face. People were coming up to me at conferences all over the world, whom I didn't previously know, but who knew of me from my blog. People were actually interested in what I posted! The blog became influential. Wow! So then I focused the content towards the areas I thought most important and that people wanted to learn about.

Some 300 posts later, present day, what I'm most interested in is the thoughts and everyday experiences of you all - the readers. My surveys and live online roundtables are a reflection of that. The fact is "experts" have a hard time adding value and being timely if we don't understand the challenges and the problems in the field. That's why I personally spend as much time as I can conversing with people in the enterprise whose job it is to protect websites. These are the people I want to help and whose lives I want to make better and easier. This is where we improve web application security overall.

So here's my request for all of you:

If you know of a topic that I'm glossing over, haven't talked about, or want me to dig deeper into – please comment below or email me. For example, at the AppSec 2007 conference, someone in the education industry wanted to know how they might comply with PCI-DSS by changing the way they do business and not having to collect credit card data. This solution would be an alternative to spending a lot of money protecting the data.

OWASP & WASC AppSec 2007


AppSec 2007 was a blast. 200+ people filled eBay's Town Hall enjoying nothing but web application security for two solid days (and nights). Can't get that any place else, and we even got some great press! If you weren't able to attend, sorry you missed out. Don't worry though - the conference slides will be posted on the websites shortly and within a couple of weeks the video will be made available too! This is fantastic because with just the slides a lot of the speakers' insights are lost.

Speaking of the speakers, a huge concentration of the brightest webappsec minds from literally all over the world were in attendance. pdp, Chris Wysopal, Ryan Barnett, Tom Stripling, RSnake, Shreeraj Shah, Arian Evans, Ofer Maor, Tom Stracener, Stefano Di Paola, and even Samy made an appearance. Judging from the feedback this took the event up to a new level. The web application expertise in the house was tremendous and few questions could go unanswered.

I personally got to meet and hang out with a lot of new people from Microsoft, Google, Cisco, eBay, PayPal, Oracle, Symantec, and even the U.S. Secret Service. Much of the time the hallway and after-party conversations are just as valuable to me as the conference content. Building relationships and learning more about people’s everyday webappsec challenges are my take away because these are the things I go home and try to later solve.

A lot of great pictures were taken, especially by Garrett Gee and Wayne Huang. I expect them to pop up online over the next week. When anyone posts their pics, please comment here or let me know so we can link to them. Pravir Chandra, Dave Wichers (and staff), Gunnar Peterson, Anurag Agarwal, Brian Bertacini, and the rest of the volunteers did a stellar job organizing the event, parties, and exhibits. As a result of their hard work everyone who attended had a really good time and learned a lot.

And finally a big thank you to all the sponsors who made the event possible: Aspect, Fortify, PayPal, eBay, OunceLabs, Breach, WhiteHat Security, IO Active, Art of Defense, Cenzic, AppliCure, Watchfire, Armorize, F5, Veracode, and Cisco. The vendor technology expo added that extra dimension of content to the event that many benefited from and something that people don't get to experience first hand elsewhere.

Friday, November 09, 2007

Live Online Roundtable (Episode 1)

WhiteHat Security wanted to try something different from the ordinary slide-ware Webinar. So yesterday we hosted a live and unscripted online Roundtable discussion complete with audience participation. Robert "RSnake" Hansen, CEO of SecTheory, Chris Paggen, senior manager, application delivery and network security business unit at Cisco, and Jordan Wiens, Security Beat Editor at Network Computing, joined in and offered their personal insights on the topics of vulnerability assessment, web application firewalls, and the payment card industry data security standard. But things were made even more interesting and entertaining when we learned that WebEx allowed us to draw on each other's pictures :)



A LOT of attendees showed up and we got a lot of positive feedback at the end, some showing up on blogs, which really made the event a success. This is something we'll definitely do again. In the meantime, you can download or replay the recording.

Wednesday, November 07, 2007

A whole lot of WEB hacking going on

First there were the QVC and OpenSocial incidents that I blogged about, but there are others, many others. And a lot of the references came via WASC's Web Hacking Incidents Database (WHID). While the industry won't have its own form of Blaster or Slammer to wake people up to the problem, maybe, like the old saying goes, we'll make up for it in numbers.


1) A pair of college students hacked into the PeopleSoft database of California State University, Fresno to change their grades. Looks like they used a tad bit of insider access to get the job done. For their trouble they face potentially 20 years in the pokey and 250K in fines. If convicted or they plead out, the charges will likely be reduced way down, but still. Wow! I wonder how this crime compares with that of a DUI.

2) Funny enough, Oracle is suing SAP for hacking their customer portal. According to the story, “Oracle accuses SAP of attaining the log-in information of recent or current Oracle customers and using it to download software and support materials from the Web site for the PeopleSoft and J.D. Edwards product lines. The materials allow SAP to tell Oracle customers that it can support their PeopleSoft and J.D. Edwards products while they transition to SAP products, Oracle said.”

3) Scarborough & Tweed recently disclosed that the personal data (name, address, telephone #, CC#, acct #) of 570 of their U.S. customers may have been compromised through the use of SQL Injection.

4) A couple of hackers that RSnake knew, Sirdarckcat and Kuza55, attempted to compromise him and his site in a prank gone wrong. They were not successful, but RSnake was right to be angry with them for trying. RSnake being the chill and understanding guy that he is, and the hackers taking full responsibility for their actions and expressing remorse, they were able to resolve the matter peacefully. All has been forgiven. It's really good to see how these guys were able to work things out without unnecessary escalation.

5) MustLive is actively running his Month of Bugs in CAPTCHAs. About one week in, Google, Blogger, reCAPTCHA, and Craigslist are the notables on his list. CLARIFICATION: With respect to reCAPTCHA, "The issue that was found was actually a drupal specific issue -- it applies equally to any Drupal CAPTCHA implementation. In fact, a patch for this issue has been available for months."

6) Art.com and Vertical Web Media disclosed that someone broke into their websites and nabbed customer names and credit card #'s. Neither said how it occurred.

7) A couple of other defacements took place (Chilean Presidency, Aberdeen City Council); again the method used wasn't disclosed, but they're worthy of a mention.

8) Internet bank Cahoot had an issue where a customer found that by guessing usernames and manipulating URLs they could get access to other accounts.

9) Ryan Barnett spotted some Blind SQL Injection in the wild through WASC's Distributed Open Proxy Honeypot Project. An interesting find! This particular project is going to teach us a lot about webappsec and what is really going on out there. Plus its data set posed unsolved challenges for people to dive into.

10) And while not recent, a really good video demonstration of how to take advantage of XSS on Facebook.




Sunday, November 04, 2007

OpenSocial, Hacked in under 45 minutes

Top blogger Michael Arrington posted about how a hacker was able to modify his personal Plaxo account profile as well as that of Plaxo's VP of Marketing John McCrea. The hacker, calling himself "theharmonyguy" and describing himself as "just an amateur", appears to have spotted a handful of clever insufficient authorization issues which allowed him to perform horizontal privilege escalation against fellow users.

Fortunately what theharmonyguy did was only a harmless prank that sought only to bring attention to the flaw. It shows there are people out there who are curious and looking for these types of issues. And if you read back to an earlier post, I directed people to a story in Insecure Magazine #13 called "Social engineering, social networking services: a LinkedIn example". According to the content, social network websites can be incredibly valuable targets for conducting personal reconnaissance and carrying out identity theft.

It's also interesting to watch how people who are not part of the web application security world react. Michael Arrington, in titling his post "First OpenSocial Application Hacked Within 45 Minutes", used outcome-based metrics to describe the incident without even knowing it. In the past I've referred to this as hackability and this is a perfect example of how I think website security and security solutions should be measured. The approach is simple, natural, and makes a lot of sense to everyone.

QVC Business Logic Flaw nets scammer $412,000

$412,000 is what a business logic flaw in QVC’s website allowed North Carolina woman, Quantina Moore-Perry, to scam them out of. The scam was brain dead simple. 1) Place an order 2) Quickly cancel the order 3) Wait for the products to arrive in the mail anyway 4) Sell off the goods on eBay. 5) Profit. I guess the cancellation system needs a bit more attention.

My guess is Moore-Perry, who has since pleaded guilty to wire fraud, was no "hacker" and found the issue by mistake. She probably legitimately ordered something at first, then for whatever reason canceled it, and the products arrived in the mail anyway. Instead of calling customer support she probably saw an opportunity to make a little cash.

According to TheRegister article, QVC only learned of the incident when an eBay buyer tipped them off. They became suspicious because the QVC packaging wasn't removed. Lazy crooks. The incident also begs the question, how many QVC customers (if any) have found the same issue and just gone unnoticed? Out come the auditors. I'm sure this issue isn't unique among eRetailers.

As I’ve been articulating over the last couple of months, business logic flaws like these can be incredibly damaging, are painfully common, and very difficult to identify. Obviously vulnerability scanners are not going to find these (unless they can check the mail too), IDS won’t spot them, and WAFs won’t block them. Basically this is because every part of the attack contains completely valid HTTP requests and responses. No crazy looking meta-characters like in XSS or SQLi and even the flow of the requests is natural.

At the same time, these types of issues can also be difficult for even a pen-tester to spot unless they know what to look for. Normally a pen-tester's scope of work stops short of "ordering" something on the website. That's also why I've been asking for and documenting as many of these real world examples as possible, because it helps raise awareness. The more we have to go on the better everyone's system design and vulnerability assessment processes will become.

Thursday, November 01, 2007

Live Online Roundtable Discussion (Nov 8)

We figured we'd try something different from the usual slide-ware presentations that everyone is used to with ordinary Webinars. With this roundtable there will be no slides, we'll have more speakers (4 of them), a whole lot more interesting dialog, and most importantly a way for the audience to ask questions so they can participate more interactively. Check out the formal announcement below for more details. And if you want to attend, make sure you register ASAP because there is a 120 person cap that is expected to fill up quickly.

"WhiteHat Security is hosting its first ever online roundtable discussion on website security. Jeremiah Grossman, WhiteHat Security founder and CTO, Robert “RSnake” Hansen, CEO of SecTheory, Chris Paggen, senior manager, application delivery and network security business unit at Cisco, and Jordan Wiens, Security Beat Editor at Network Computing, will take on today’s hot button issue of website security in a unscripted one-hour event. Please join us on Thursday, November 8th at 1:00 PM PST (4:00 PM EST) for this entertaining and educational event.


There will be very few prepared questions. The event will be audience interactive. Your questions or comments will drive the conversation. Hear website security experts discuss:"

  • Vulnerability management
  • Web application firewalls
  • What PCI means to website security
  • Web browser security

WASC meetup on Nov 8 in San Jose!

Anurag posted this to the Web Security Mailing List. Hope to see many of you there!

It's time for another WASC Meet-Up. As usual this will be an informal gathering. No agenda, slide-ware, or sponsors. Just some like-minded people from the security industry getting together to share their stories over beer. Everyone is welcome and it should be a really fun time!

Please RSVP by email ASAP, if you haven't done so already, so we can make the proper reservations: anurag dot agarwal at yahoo dot com

Time: Thursday, Nov. 8 @ 7:00pm

Place:

Duke of Edinburgh
10801 N Wolfe Rd
Cupertino, CA 95014-0618
Phone: (408) 446-3853


Friday, October 26, 2007

Why crawling matters

The first thing you do when pen-testing a …

… network: ping sweep
… host: port scan
… website: crawl

Each process begins by gathering intelligence and mapping out the attack surface as best as possible. Without a well-understood attack surface, generating results from a scan/pen-test/assessment or whatever is extremely difficult. I bring this up because there's been a lot of interesting discussion lately about evaluating web application vulnerability scanners through measuring code coverage and crawling capabilities. At WhiteHat Security we've spent years R&D'ing our particular crawling technology and have plenty of insights to share about the challenges in this particular area.

Crawl and code coverage capabilities are important metrics because automating as much of a website vulnerability assessment process as possible is a really good thing. Automation saves time and money, and both are in short supply. The better a website crawl (even if manually assisted), the more thorough the vulnerability assessment. If you've ever experienced a scanner spinning out of control and not completing for hours or days (perpetually sitting at 5%), or the opposite, when a report comes up clean 2 minutes after clicking scan, there's a good chance it's the result of a bad crawl. The crawler of any good web application scanner has to be able to reach every nook and cranny of a website to map the attack surface and isolate the injection points, or run the risk of false negatives.

Whatever you do, don't assume crawling websites is simple and that scanners are just mimicking old school search engine (Google, Yahoo, and MSN) technology. While much is similar, consider that everything those guys do is pre-login, and that's just the start of where things become difficult. Many issues routinely trip up crawling: login, maintaining login state, JavaScript links, malformed HTML, invalid HTTP messages, 200 OK'ing everything, CAPTCHAs, Ajax, RIAs, forms, non-standard URLs, and the list goes on. Search engines don't really have to worry about this stuff, and if they happen to miss portions of content they probably don't care a whole lot anyway. For example, they wouldn't be interested in indexing a bank website beyond the login form.

While WhiteHat's technology has become really good at JavaScript (Ajax) support and overcoming most of the challenges described, no one in the industry is able to guarantee that all links have been found or all application logic flows exercised. Password resets or account activation are often not linked to from the website itself; instead they're only visible in email. This means human assistance is often required to complete attack surface mapping. What we do with WhiteHat Sentinel is make the crawl data available to our customers for sanity checking. So if for some reason we've missed a link they can notify us and we'll add it directly. Then we investigate why the link was missed and if necessary make a code update. Most often it's because the accounts we were provided did not give us access to a certain section of the website and we had no idea it existed.

Then there is the matter of scale, as some websites contain millions of links, some even growing and decaying faster than anything can keep up with. Without a database to hold the crawl data, scanners tend to slow or die at about 100,000 links due to memory exhaustion. However, to use Amazon as an example, you don't need to visit every book in the index in order to map all the functionality. A scanner has to be intelligent enough to know when it's getting no more value from a particular branch and move on. Otherwise scans will go on forever when they don't need to. Ever wonder why there are so many configuration settings in the commercial scanner products around crawling and forced browsing? Now you know. Like I said in an earlier post, each scanning feature should be taken as an indication of an inability to overcome a technology obstacle without human assistance.
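To illustrate just the branch-budget idea (and nothing more - this is a toy, not how Sentinel or any commercial scanner works), here's a sketch of a crawler that caps how many pages it will fetch under any one path prefix before moving on. The names, the 25-page limit, and the regex-based link extraction are all simplifying assumptions.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy crawler: treat the first path segment as a "branch" and stop fetching
// a branch once it has used up its budget, so the crawl can move on.
public class BudgetedCrawler {

    private static final Pattern HREF =
            Pattern.compile("href=[\"'](http[^\"']+)[\"']", Pattern.CASE_INSENSITIVE);
    private static final int PER_BRANCH_BUDGET = 25; // max pages fetched per path prefix

    public static void main(String[] args) throws Exception {
        Deque<String> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        Map<String, Integer> branchCount = new HashMap<>();
        queue.add(args.length > 0 ? args[0] : "http://example.com/");

        while (!queue.isEmpty()) {
            String page = queue.poll();
            if (!visited.add(page)) continue;          // already seen this URL
            int seen = branchCount.merge(branchOf(page), 1, Integer::sum);
            if (seen > PER_BRANCH_BUDGET) continue;    // branch exhausted, move on

            for (String link : extractLinks(page)) {
                if (!visited.contains(link)) queue.add(link);
            }
        }
        System.out.println("Visited " + visited.size() + " unique URLs");
    }

    // The "branch" is the first path segment, e.g. /books/ or /music/.
    private static String branchOf(String url) throws Exception {
        String path = new URL(url).getPath();
        int slash = path.indexOf('/', 1);
        return slash > 0 ? path.substring(0, slash) : path;
    }

    private static Set<String> extractLinks(String page) {
        Set<String> links = new HashSet<>();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(page).openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher m = HREF.matcher(line);
                while (m.find()) links.add(m.group(1));
            }
        } catch (Exception ignored) {
            // unreachable or malformed pages are simply skipped
        }
        return links;
    }
}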

Moving forward, the one thing we're going to have to keep an eye on is scanner performance over time. The effectiveness of scanners could actually diminish rather than improve as a result of developer adoption of Web 2.0-ish technologies (Flash, JavaScript (Ajax), Java, Silverlight, etc.). Websites built with this technology have way more in common with a software application than with an easy-to-parse document. While it's possible to get some degree of crawl support with Ajax and the rest, it's not going to come anywhere close to the support for a well known data format like HTML. As a result, attack surface mapping capabilities could in fact decline, and by extension the results. Lots of innovation will be required in many areas to overcome this problem or simply keep up with the status quo.

Wednesday, October 24, 2007

Web Application Security Professionals Survey (Oct 2007)

NOTICE: I strongly recommend scrolling down to the survey results below first and coming to your own conclusions before reading mine.

Conclusions
According to the survey of 140 respondents, a record turnout, their opinion of the current security state of the Web is dismal. In a nutshell here is what the results say to me:
  • The vast majority of websites have at least one serious vulnerability
  • Many websites are being broken into, but no one knows about them and that’ll increase exponentially over the next few years
  • There is NO WAY the average user can protect themselves from being exploited
  • The standard mandated by the credit card industry, PCI-DSS, makes little difference to the security of a website
  • Web application vulnerability scanners miss just about as many of the most common issues as they find
Sounds like a recipe for disaster doesn’t it? But that ain’t all…
  • About half of respondents look for vulnerabilities in websites not their own from time to time, but are cautious about it for fear of legal issues.
  • And if they were to find vulnerabilities, it's a toss-up whether they'd inform the website owner, for the same reasons.
  • Interestingly, more than half said they would not sell the vulnerability data.
What we're dealing with in Web security is the result of a 15-year software development gold rush that occurred without fully comprehending or appreciating the security implications. Think about it: when World Wide Web development began in the early 90's there was no notion of XSS, SQLi, CSRF, business logic flaws, Response Splitting and all the rest. These attacks were unknown and didn't come to light until several years later, at the turn of the 21st century and beyond. Developers did not perform input validation in the beginning because there was no good reason for it! From their perspective, if these attacks didn't exist then what was the incentive to do input validation, output filtering, etc.? Which also means we're having to clean up after past mistakes that no one knew were made until quite recently. 100+ million insecure websites later, here we are.

Description
It's finally that time again where we stop hearing about what I think and start listening to what you think. The Web Application Security Professionals Survey helps us learn more about the web application security industry and the community participants. We attempt to expose various aspects of web application security we previously didn't know, understand, or fully appreciate. From time to time I'll repeat some questions to develop trends. And as always, the more people who submit data, the more representative the results will be. The best part is I've progressed from my normal email form to using Survey Monkey online.

Guidelines
- Open to anyone working in the web application security field
- If a question doesn’t apply to you or you don’t want to answer, leave it blank
- Comments in relation to any question are welcome. If they are good, they may be published
- Submissions must be received by Oct 31, 2007

Publishing & Privacy Policy
- Results based on aggregate data collected will be published
- Absolutely no names or contact information will be released to anyone, though feel free to self publish your answers anywhere

Results

  • Developer, Web Security interested
  • security auditor
  • Developer
  • hacker of course :)
  • Contractor to the Government
  • I work in the Network Security group for the Ohio State University
  • Quality Analyst
  • code reviewer
  • "Freelancer"
  • Internal sec
  • Consulting Systems Engineer (not a consultant, but title) Information Security
  • mere web developer
  • Security Conscious Developer

  • I don't actively look for them, but often they're sitting out there in plain sight.
  • look for != exploit. No tor+darknet_ip stuff, only something I'm comfortable doing with my own non-anonymous connection.
  • Alot of things pique my interest as I am surfing, and I will "poke" at them more than a normal user would.. However, I believe activities like this violate a code of ethics (CISSP), so I'd rather not disclose in the event that someday I decide to get that cert and accept the lame code of ethics.
  • Only those that are obvious without deviating from normal usage
  • It's rare, non intrusive and I usually have an association with the owner through a real life connection.
  • I love this industry, in my free time I look for vulnerabilities with and without permission. Sometimes you have to live on both sides of the tracks to understand better how they work.
  • Usually foreign, and usually not intrusive methods, XSS mainly
  • Jail bad.
  • I admit that I sometimes put a quote or paren into a form field e.g. O'Reilly or (408)343-8300. Converting POST's to GET's has become habit when sending somebody a URL with pre-populated information. The "Forgot my password" feature is often tested several times a day on most sites - but usually because I forgot my password. Sometimes I edit cookies, changing their expiration to a later date - and then protect them with CookieCuller so that the website can't force me to expire them (out of laziness). If I can create two accounts, sometimes I'll see if I can trade session id's, copy objects, steal objects, put objects, etc. It's not all the time.
  • Would never do ../ etc tbh, ala Dan Cuthbert. a search like jon'es or ">xss may sometimes be entered. And I dont believe either of those two vectors are illegal, under UK law anyhow.
  • Everyday and all the time. Just look at Month of Search Engines Bugs and future Month of Bugs in Capchas. Looking for vulnerabilities in web sites must be freely available (without any permissions). Bad guys will be looking for holes in any case, so no need to restrict good guys.
  • In fact, i often take a look at some compromised computers which scan my services. If these bots belongs to official and identiable organization, i try to tell them.
  • There are no true white or black hats, merely shades of gray.
  • I don't necessarily look for them, sometimes they just jump out on their own. In the past, I've only looked for vulnerabilitie within sites where I plan to do online purchases, to ensure any information I may provide is at least somewhat secure.
  • "Looking for vulnerabilities" is limited to a passive examination of the signs that usually indicate risk. Occasionally, I will test a site for cross-site scripting vulnerabilities via very obvious and limited tests.
  • By "look for vulnerabilities", I mean that I'll cause a web site to indicate, through its normal behavior, whether it's probably vulnerable.
  • I did once after finding one vulnerability by accident on my son's soccer leagues website.
  • MOre or less just following up on a error or interesting message that I found, not intentional pen testing or anything like that.
  • Very rarely
  • I don't have explicit permission from the site owners, but our group has reserved the right to do any scanning of any site or service at the university unannounced and at any time.
  • I just dont have the time to look at other people's sites, and dont really want to know what can be found (and you know we alway do)
  • Just for learning purposes, not doing any bad.
  • ey! i'm human
  • it's a tic
  • Usually if i'm buying something online i do a quick check just to make sure its not some lame shopping cart full of vulns.
  • Looking for vulnerabilities isn't necessarily the same as testing for vulnerabilities. Something like CSRF doesn't really need direct testing. Also, maybe my name really is ' or 1=1 --
  • I enjoy the challenge, and the hunt.
  • I may on occasion take a peek
  • i poke but never leave a trace that shows a malicious intent. Meaning if a good forensics was done. The investigator would be like "damn he just disappeared, I don't get IT"
  • We have a process in place to get permission to test applications within the company for vulnerabilities so that there are no issues with production and/or maintenance of the site.
  • Pretty rarely. I feel it's crossing the line if you haven't been hired to do it.
  • When looking at websites without permission, I am very careful about what I will and will not do. For example, I might throw a single quote into the querystring or analyze a site's session handling mechanism by viewing it with a proxy, but I won't do any serious tampering, especially with big financial websites. It can be pretty tempting though.
  • It can be quite tempting to conduct testing on applications accessible from the web that one is not legally allowed to play with. However, once a person reminds themselves of just how vague current laws are, that should help them see clearly ;). People should know the risks involved and how to mitigate them before moving forward on even the simplest manipulations.
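
For readers wondering why a stray quote or a "name" like ' or 1=1 -- is such a revealing probe, here is a minimal sketch (table and column names are made up) of what happens when user input is pasted straight into SQL versus passed as a parameter:

    # Hypothetical table/column names; only the two query patterns matter.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    probe = "' OR 1=1 --"

    # Naive string building: the probe rewrites the WHERE clause and matches every row.
    vulnerable_sql = "SELECT * FROM users WHERE name = '%s'" % probe
    print(conn.execute(vulnerable_sql).fetchall())      # all rows come back

    # Parameterized query: the probe is just an odd literal and matches nothing.
    print(conn.execute("SELECT * FROM users WHERE name = ?", (probe,)).fetchall())  # []

A plain O'Reilly-style apostrophe in the first pattern would simply throw a database error, which is exactly the tell these respondents are looking for.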


  • My mood.
  • The issue and who it is I'd be reporting to.
  • How it would affect me.
  • There isn't a good method of disclosure.
  • I disclose them as a private person.
  • I never had any legal problems after disclosing web vulnerabilities to the owner or even to the public; I assume this is because most of the time I did not overestimate the impact of security problems. What's more, I got my job because of such disclosures (some of them went public) in my current employer's websites.
  • anonymously
  • depends on the type of vulnerability
  • Depends if I am responsible for the site's security. If I find something in another site, I will report it only if I care whether something bad happens to the site.
  • Depends on the expected reaction of the owner. If I can safely assume that the owner is going to be appreciative, then I will disclose. If there is a chance the owner is going to overreact and start accusing me of trying to hack his website, etc, then no. Additionally, if the owner is highly unresponsive and the site is used by millions of people (e.g. Google), I would disclose publicly (on my blog) and send the blog article to the owner.
  • Dependent on any past history disclosing bugs to that party and potential harm to third parties (users of that service). Unless they have been responsive and friendly in the past or the bug could end up causing grandpa's insulin delivery to stop: Full Disclosure!
  • How serious is it?
  • i disclose it to the owner and never disclose it to the public. i don't need the media attention or reputation.
  • criticality of the vulnerability :)
  • Not my responsibility; it's the vendor's/site owner's responsibility to test their software before shipping/publishing. The only way they learn is by releasing exploits promptly.
  • If I'm satisfied I can prove it was not the result of specific testing (i.e. accidental discovery), I will report the issue
  • Depends on the severity of the issue, and depends on what I feel the user base is. If it's on a site like Facebook, it will be reported. If it's on a site that has hardly any customer base, then I wouldn't report it.
  • I believe it is the responsible way to disclose; diligence ends at notification when it is not my asset, and public disclosure does more harm than good in most cases because professionals are lazier than kids on IRC
  • If I perceived they would listen
  • On whether I think they will react badly to my disclosing it to them or not. Default to "no way" unless I know them or their policies.
  • I tried, but most of the time it does not reach the security professional; instead it just wastes my time with the help desk or a canned e-mail response
  • Small websites are a given. They don't have the resources a lot of the time and I always get nice feedback. Larger ones are tricky. A la the ../ in Adobe the other day: I know someone who did, and got no negative feedback from Adobe.
  • If I have permission, sure; otherwise it's a hornets nest.
  • I always disclose security holes to sites' owners. And in most cases I disclose it to the Internet community after informing the owner. It's my style, which I call Advanced responsible disclosure.
  • Seriousness and how to explain how I found it 'by accident'
  • If I could disclose it anonymously, I would.
  • Yes, I have done this in the past. One disclosure resulted in the site being taken down and patched immediately. Another resulted in a response basically stating "We don't care" and yet another resulted in threat of legal action.
  • It depends on how I thought they would respond. I'm not going to put my career at risk if someone may not react well.
  • Depends on the severity of the vulnerability.
  • On the website/company. I don't want to deal with stupid staff.
  • Generally not -- the upside is zero, and the downside risk is that I could be unfairly accused of bad behavior. But in instances where the web site owner has a good reputation, or I have a good relationship with them, I have reported issues.
  • My son's soccer league required participants to register and pay online. During the registration process, I accidentally found a vulnerability (I then performed a scan and there were plenty more). The league thought I was overly paranoid; I do not think they took my concern seriously.
  • Commercial v Non-commercial site. Open Source v Not. If it was a project page for an open source project (e.g. phpBB), absolutely. If it was Bloomberg's trading platform, depending on the severity (low to super-critical) it'd range from hell no to considering sending an anonymous message. People have been prosecuted for shit like that.
  • Depends if the owner had commissioned me to perform the analysis and produce findings.
  • On how severe the issue is and whether the way I found it was legit.
  • What the site is, how "risky" the issue is, etc....
  • Really just talking about university sites here since it's my job.
  • There's too much risk involved for very little gain.
  • On my relationship with the company, and perhaps my access to people who cared. It's just too much effort to go through the whole disclosure thing. May fire off an email to the security@..., but wouldn't expect (or push for) a response. I know that's wrong in a few ways, but companies that make it easy and a pleasure to report issues to are few and far between (yes, I'm looking at you Google!)
  • severity.
  • If I have time
  • On my relationship with the site, how much I care about its security. If I know the owner, I find it easier to report.
  • How well I know the company/owner of the website. I know cases where the guy who informed the owner was blamed as a hacker, because they didn't understand what happened. Additionally we have a new government regulation about hacking and using hacking tools, so the risk of getting into trouble is too high.
  • Is it funny?
  • type of organization, experiences in the past with owner, type of vulnerability
  • Depends on severity of bug
  • anonymously
  • Too great a risk of mis-interpretation by the site owner.
  • could get either me or my company in trouble
  • I've called several vendors and ordered over the phone and explained why i was doing so and what they needed to do to fix the issues.
  • Too many issues are found to disclose them all. I only disclose issues for which I am personally affected, or it is a high profile site, AND I think the site won't shoot the messenger.
  • Depends on if I like the owner.
  • Depends on whether they have a disclosure policy that waives my liability, and/or whether they do things like give credit to researchers which pretty much gives me immunity - even if they didn't intend it that way.
  • On the site and how I discovered it
  • Depends on what kind of testing revealed it. SQL injection & XSS are probably out, unless it's a site with a good reputation for dealing with researchers (Google, Yahoo, etc). CSRF, session handling, etc could be in.
  • My experience with contacting vendors has been very poor. Most of the time the vulnerabilities remain in place for an extended period of time, are ignored, or the companies' security representatives are more concerned about pursuing legal action.
  • Who the owner is, what their policy is on reported bugs, and how they have been known to deal with such reports in the past
  • Because no one reads their abuse@yada.com mail anymore.
  • I usually do this, if it requires minimal effort, if they put it in a help queue, then I don't bother pursuing it.
  • This is part of my job description
  • If it was significant enough.
  • On the perceived risk to the user community, the risk I perceive to myself by exposing my discovery to the site owner, and the perceived accidental nature of the bug - or in other words, is it sloppy programming, ignorance, or a bug.
  • Most sites are too lame/insignificant to even bother. Some solutions like WorldPay Select payment gateway (and lots of similar for-idiots gateways) are insecure by design and nobody cares.
  • Depends if I think it'll end up with me in Federal pound me in the ass prison or not.
  • risk/severity
  • If my personal data was at risk, then definitely. Otherwise I'd have to evaluate the risk of my disclosure being taken as an act of aggression. It's just not worth it sometimes.
  • depends on existing relationship; if none exists, depends on the response of the owner to previously published vulnerabilities.
  • A lot of people seem to be afraid to disclose issues, but I will almost always disclose issues that I've found, and have even been thanked for it a number of times. The key is to make it clear that you didn't hack into their site, you just noticed a security exposure and want to let them know about it so they can fix it.
  • May not be worth it for some web sites
  • Depends on potential reaction and legal position.
  • For fear of being reprimanded, whether by the company that employs me or the company I just attempted to help.
  • Fear of legal action.
  • Too much hassle


  • Depends on your definition of serious. XSS can be serious in the right context. If serious = complete 0wn4ge, maybe about half or less.
  • I'd say almost all, but there are many sites that just aren't worth hacking, so their vulnerabilities are not serious even if they have code injection.
  • If XSS or any other browser-side problem is counted as serious, then I don't know any safe website. If you mean server-side injections (from SQLI to RCE), then I think about 30% is highly vulnerable.
  • "serious" may be misleading. managing vulns is managing risk, such as the window of opportunity for an exploit. if the vuln is open for a longer period of time then it becomes more serious. i would not consider XSS or XSRF to be a serious vuln if it were patched in a reasonable time.
  • XSS is everywhere
  • If you consider XSS high. Can't remember not finding it for a long time. Apart from small sites that don't echo.
  • Caveat: All or almost all of web applications not necessarily websites. It's rather difficult to find WebAppSec vulnerabilities in HTML although not impossible.
  • Where XSS is included in what is considered "serious".
  • Almost all, but in my opinion, most of the 'big' sites have vulnerabilities that are hard to find.
  • I would say that 2 out of 3 websites I reviewed had significant security vulnerabilities.
  • Again, this is the university environment, where there are a huge number of sites and a wide range of developer abilities.
  • If we talk about serious vulnerabilities. But I would say "all or almost all", if we talk about non-serious vulnerabilities as well.
  • If you count XSS & CSRF as serious, just about everything except brochure-ware is vulnerable.
  • Most websites have some type of issue though the severity of such vulnerabilities is generally anything from mild to critical.
  • I would just say none because I am not experienced enough to count out the possibility. "Glass half full kinda guy"
  • somewhere, there is one
  • I would say that all sites would have some serious vulnerability that would impact the data's confidentiality, availability, and integrity. Some of the issues that seem to be overlooked but are easily resolved, in my opinion, are application-based DoS attacks.
  • Based on 6+ years of application pen testing, I'd say it's definitely above the 80% mark. If you consider XSS to be "serious" (I personally don't except in very specific circumstances) then it's more like 100%.
  • Security is getting better, and that is raising the bar. People are finally getting over the hump of SQL Injection (mostly). However more sites than I would like tend to be vulnerable to serious business logic flaws, and no one has really tried to get a grip on CSRF.

  • I stay away from this sort of thing.
  • Depends on the exploit. Last time I checked though, a site can have a blatant XSS hole, and still be PCI-DSS compliant.
  • Theoretically, there should be a difference, because compliance means that people are at least interested in and aware of their security.
  • "Hacker Safe!"
  • PCI-DSS covers attacks that I generally do not use, such as password strength, weak ciphers, etc.
  • My guess is that L1-3 Merchants who perform a code review do indeed have websites that are more difficult to exploit. L4 Merchants are only required to do the self-audit and have an ASV scan their external IP prefixes - so this won't change a thing. Because of this factor (and the fact that PCI DSS doesn't target web applications, only their infrastructure), it is difficult to answer this question.
  • Hackers will not play by the rules on PCI-DSS.
  • Tested one site which had been certified by a 'scanner'. Took maybe 10 mins more than usual to find an XSS.
  • It's harder to exploit such sites (it requires more time to find a security hole). But first try to find a site which is fully PCI-DSS compliant :-). Including XSS free and also UXSS free. I had occasions to find PCI-DSS compliant sites which had UXSS holes (and a certified auditor's site had such holes also).
  • I think it depends on who tested the site for compliance and how well the companies remediate the findings.
  • I do govt contracting so I do not get much exposure to PCI
  • After compliance tests, merchants will fix most issues. So we can say, in general, PCI compliance tests have raised the bar. But not always. For example good vulnerability assessment tools, such as QualysGuard IMHO, cannot test web applications for well known flaws.
  • All other things being equal, any standard is better than no standard at all.
  • You have to obtain the private encryption key, which may make it a little more difficult. It depends on how PCI-DSS is implemented. PCI-DSS does not readily address web application security or network security. As far as I am concerned it primarily addresses the need to encrypt credit card data if the data needs to be maintained. However, this does not mean that the key used to encrypt the data is protected.
  • I didn't have to look it up, but then I recognized it.
  • All depends on if the site has undergone a review before, and who by. If it's from some "crapscan" vendor, then they are always surprised at what is discovered
  • very little harder..
  • "PCI-DSS compliant" doesn't seem to be very consistent.
  • As I have not gotten a chance to enter a corporate environment yet I am not familiar with the details of a PCI compliant website. I have heard of the term, and have read documents on the matters, but because I do not yet hold any formal certification I have not had experience with this.
  • PCI-DSS doesn't put a lot of focus on specific web security at this point... A scan from an ASV barely has to scrape the surface.
  • The PCI standard does take a step in the right direction, but it truly does not reduce risk.
  • It is not really different when looking at XSS and CSRF, but is harder when dealing with poor protection schemes.
  • Most companies relied on automated scanner results at most to secure their websites for PCI. A lot of web application security holes can be missed by these scans.
  • I would say no noticeable difference; degree of security is a variable that directly correlates to the competency and thoroughness of the audit process.

  • Meet the new boss, same as the old boss. Same flaws are still there.
  • Used properly, AJAX can significantly reduce the attack surface and threat profile of a site. XSS can very easily be eliminated with a good framework/implementation. However, it adds new abstractions and arguably another layer of separation between the developer and the underlying communication and technology.
  • *Very* few new attacks. Mostly same old same old, only more of it.
  • AJAX is just some RPC subsystem; however, it has not taken on the security features of a good RPC mechanism, thus it reintroduces problems. Also, the same origin policy is extremely limited, so developers are always trying to roll their own solution for the mashup problem, causing even more problems.
  • Larger attack surface because they are generally more functional. The problem is the functionality and not Ajax.
  • Ajax and Ajax-like libraries are mostly designed to enhance users' "surf experience", thus extending capabilities and allowing new services. Their first goal is not to be secure but to deliver services.
  • People like to play "Buzzword Bingo" and start implementing technologies they don't fully understand. They don't take the time to understand the processes involved or how to fully secure their usage. In the end they just want to be able to say "We use AJAX, .NET, Java, "...
  • I wouldn't say the attack surface is actually larger, but it has gotten more complicated. This makes it harder for developers to intuitively tell whether they've created a vulnerability.
  • Well, duh. Increased complexity & increased attackable surface == increased risk.
  • There is now a lot more communication going on between the server and the clients, but there is less control, and less checking that the data is valid. Before, this became part of the web application development. Check everything that comes into your server. Now, the server side has got more holes, and client-side checking is non-existent, or trivial to bypass.
  • All "bleeding" edge technologies have holes, since Ajax is one of the newer kids on the block I expect to find silly things done that create problems. As the tech matures it'll get better.
  • It's just more requests you have to watch and modify. No real big change in the way you look at "traditional" apps
  • I don't think it makes a significant difference, except that it's another technique that abstracts a developer even further from how the web works and makes it more likely they will do poor threat modeling and have a vuln, especially an architectural one.
  • Without the necessary knowledge of sanitization and validation it is very simple to make programming mistakes that can allow for miscellaneous injections.
  • Not easier to make mistakes exactly... but it makes mistakes not seem like mistakes, i.e. it's harder to imagine the Ajax behaving as it shouldn't
  • Ajax is something that I am quite concerned about due to the movement of part of the business logic from behind the firewall to the client. There is a definite need for architectural guidance that current web applications lack.
  • New attacks are only with regard to attacking the framework itself or a roll-your-own Ajax implementation. For the most part it just creates a false sense of security on the part of the developer. It's all HTTP anyway so it doesn't require a significantly different approach.
  • It's like 1995 again. Ajax security is old skool exploits and REALLY dumb ideas
  • Javascript Hijacking wouldn't exist without JSON. User-tailored JSON responses would be rare without AJAX. (A sketch of the classic mitigation follows this list.)
  • It does not necessarily have a larger attack surface; it's just more common.
  • At most I would feel that Ajax just adds a layer of obfuscation over the problems currently present in web applications. With Ajax or without Ajax the attack surface is very similar in my opinion.
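
One response above mentions JavaScript hijacking of JSON. The classic mitigation from that era is worth sketching: make JSON responses unrunnable as a cross-domain script include by prefixing them, and have your own client strip the prefix before parsing. This is only an illustrative sketch in Python (the function names are mine, not any particular framework's):

    import json

    PREFIX = "while(1);"   # or ")]}'," - anything that breaks execution as script

    def render_json(data):
        # Server side: serialize and prepend the guard prefix.
        return PREFIX + json.dumps(data)

    def parse_guarded_json(body):
        # Legitimate client: strip the known prefix, then parse normally.
        if not body.startswith(PREFIX):
            raise ValueError("unexpected response format")
        return json.loads(body[len(PREFIX):])

    wire = render_json([{"user": "alice", "email": "alice@example.com"}])
    print(wire)
    print(parse_guarded_json(wire))

A hostile page that pulls the same URL in via a script tag just hangs or errors on the prefix instead of handing the array contents to the attacker.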

  • They're great at finding .bak and ws_ftp.log files!
  • Web app scanners are great for finding simple problems. I have never thought web app scanners have been useful in my work besides training QA teams to find some minimal number of issues. If they find some, they call me and I perform a full audit.
  • Authorization bypass: poor, this is for me still a big issue!
  • Web Services - Limited
  • Ajax support is not ubiquitous in scanners. Testing for XSS when source code is unavailable is their strongest asset. If scanners were to integrate "test spy" functionality similar to ImmunitySec's SQL Hooker - they could do more advanced SQL injection attacks. I think there is a future for logic flaws and CSRF integrated into scanners, but it obviously isn't there today. For forced browsing, a similar "spy" FileMon-like technique could be applied. HRS is performed by most scanners today, but unlike XSS - this is almost certainly better found with source code (code review, automated SCA, smart-fuzz tests, etc). The best reason to use a scanner over other testing methods is to increase time between findings. In the hands of an expert, scanners can be a great tool to verify working exploits in web applications - where other methods have a hard time verifying whether a vulnerability or exploit condition is real.
  • Useless for phishing attacks, DDOS attacks, among others
  • I've seen some WAFs break sites before. They do work in some limited situations at the moment.
  • Web application vulnerability scanners are lame. If you want real security ask for security audit by human professional, not by scanner.
  • You need a rating below Poor for some of those categories.
  • poor on authorization testing
  • Vuln scanners are nearly worthless, not because they need to do a better job spotting bugs, but because they need to do a better job quantifying their results. One thing I mean by "quantify" is this: A laundry list of potential vulnerabilities of types x, y & z, with associated high/medium/low risk ratings just doesn't cut it. An analyst still needs to review the list and consider the risk of each bug in the context of the application.
  • They are also extremely good at checking for policy violations, 508 accessibility, etc.
  • Never used them really, just did all my stuff manually.
  • This is a slightly loaded set. On one hand, I think that AppScan does a terrible job with detecting true blind sql injections (lots of false positives), but SQLiX does a great job detecting them and "exploiting" the flaw to prove its existence.
  • You are joking, right?! Most of the web app scanners IMO are pretty crap at finding anything but the low-hanging fruit. What they are useful for is doing a first pass at a large site so I don't have to test out obvious things on all the pages
  • We have bought several licenses of a web application scanner, but it is just one of the tools we use. It saves some time and work, but the most serious vulnerabilities we often find by hand.
  • There are some serious issues with web application vulnerability scanners, namely the amount of false positives and the number of false negatives. I have run quite a few of the major brands against the web applications of my company and have had to deal with reports going to the hundreds of pages of false positives while noticing that known vulnerabilities were not found.
  • Depends on the biases of the scanner devs. Some are really good at forced browsing, others have zero support. There's no one tool which does it all properly yet.
  • Nothing works well without customization.
  • Get a real list of what actually matters: data layer authorization - poor; encryption - poor; logging - poor; error handling - poor; SSL on backend - poor; concurrency - poor; authentication of backend connections - poor; authorization of backend connections - poor; validation and encoding of backend connections - poor; etc...
  • Web application scanners have a long way to go. Even for vulnerabilities that they should be good at finding, such as Non-Persistent XSS and HTTP Response Splitting, I've seen lots of missed vulnerabilities. They're useful tools but shouldn't be relied on as the sole means of testing the security of a web app.
  • Flash, who does this? All the RIA content is actually poorly understood by tools...

  • Worst Idea ever. As opposed to fixing the code, let's just install a device.
  • Most likely never.
  • most are so scaled back in config/deployment they are little more than simple proxies
  • Web application firewalls are a stupid idea... and so is PCI-DSS requiring a source code audit or web app firewall. I guess this will be great business for anyone who claims they have a web application firewall, but it's way too easy to circumvent.
  • about 0.1%
  • Only ecommerce and some financial websites implement a WAF. Many of those are only using an APIDS, or a WAF in view-only mode. I've never seen an Intranet website implement a WAF.
  • Can't say I've noticed very often.
  • Most of the companies do not invest in web application firewalls.
  • In my experience, it's vastly more common for app & database servers to be isolated and hardened than to be protected by an application firewall. And frankly, organizations that invest in hardening their application infrastructure just aren't going to realize much of a benefit from adding an app firewall to the mix. I suppose one could position app firewalls as an alternative to hardening, but that'd just be irresponsible.
  • Only 1/4 of web applications I reviewed had a web application firewall
  • It depends on the client. It also depends on the contract, some clients have us test both Test/Dev and Prod, in which case Prod is usually more protected for obvious reasons. I suppose this can provide some type of Delta which lets them know if their AppFW or other safeguards are doing their job well or not. Personally I've rarely seen a AppFW in use (maybe 1 in 30 clients).
  • I haven't seen one at my university. People have talked about implementing them a few times, but no one has to my knowledge implemented one.
  • From my experience web app firewalls are either not deployed or use rulesets that are too loose. Even when my pen tests have been noticed, it's never been before day three of the testing. I should emphasize that no techniques were ever used to hide the attack; I presume that none of the attacks would have ever been picked up if the testing wasn't so loud and noticeable.
  • Difficult to tell if it's the WAF blocking or the app.
  • never...
  • With the exception of some servers utilizing mod_security a majority of the time automated attacks are performed without issue.
  • The pen test process that we employ tests the applications from behind and in front of the firewall. There is a risk of relying only on testing through a firewall.
  • They "try"
  • When WAFs are deployed they are usually configured so loosely that they might as well not be there.
  • I tend not to use "standard" attacks, and the WAFs rarely if ever stop me unless I'm being stupid. I'd say less than 5% of my reviews have even mod_security in place let alone something pricey.
  • My perspective may be skewed, however; I only test from a black box perspective, so it is very difficult for me to tell what the client's architecture looks like on the other side.

  • I saw a real-life CSRF attack about a year ago on a small gaming portal. After over a week the developers were still sure it was some coding or database bug on their side removing user accounts. If IT people can't recognize CSRF, how can the average user protect himself against it? NoScript does a great job against XSS, but how many people are using NoScript?
  • Does NoScript count?
  • Automatic updates of os/browser helps as does phishing filters, but the average user often does not know how to use these or understand their purpose. Most users react with shock horror when they learn that people can spoof emails, an issue that is far older and more widespread, yet users remain uneducated. I don't see the trend changing now as opposed to the past. These issues will have to be solved by the vendor, not end users.
  • If you have ever worked tech support you would understand how many people have issues just navigating a PC.
  • People are even reluctant to download and deal with the minor inconvenience of NoScript...I know people who "used to use it"
  • The average web user doesn't know how to turn Javascript off. A significant majority of IT administrators and information security professionals run an outdated, vulnerable version of IE or Firefox with plugins and add-ons that are also outdated and vulnerable. I've never met any person in real life who claims to browse with security in mind, especially regarding the attacks named above.
  • But I'm not convinced attackers are using these vectors a great deal; not an expert on threat intel though. They make targeted attacks easier though.
  • A web user will be capable of protecting himself if he is taught (and even security-restricted): taught not to use Internet Explorer, and forced not to use it (restricted). As for other aspects of security, "not using IE" is a first step.
  • Since most web users do not develop their own tools but use those available on the Internet (blogs, CMSes and so on), their web site/application security level is directly linked to those tools' security.
  • Even those who know what they are doing have little hope of defending all browser attacks. There are just too many threat vectors to protect against while keeping the web usable.
  • It depends on what you mean. Some attacks require user interaction. In those cases, I'd change my answer. On the whole, the user is relying on a broken security model that inherently trusts the site they're visiting and anyone who is able to inject active content into it.
  • Today, least-privilege is the best bet for protecting yourself from client-side exploits, but very few people do that. (And of course there's really nothing users can do to protect themselves from XSS/CSRF.) Tomorrow, malware will have adapted itself to the least-privilege environment, and who knows where that will leave us.
  • There are a large number of individuals who believe that if they see a locked padlock on a website then everything is secure and there is nothing to fear. At work users are offered some protection by their network administrator; however, I find most users run with limited protection (they might have anti virus protection) as administrators on their own computer at home. I would say 90% of my friends and family have had their computer compromised. I then teach them to have one administrator account on their machine and to use a limited user account for the majority of their work. I also try to alert them to be more aware of what they are doing.
  • I browse with No Flash, Javascript, or Cookies, and most sites are damn unusable with no intelligent error message. Why do I need a cookie to save my damn zip code on Circuit City (contrived example but one I run into a lot)?
  • But it's getting a lot worse with Web 2.0.
  • The MySpace generation and the elderly make for a large portion of individuals who are not capable of deciphering possible malware and related activities. I tend to get phone calls from my grandfather asking why he received a screen (in reality a simple popup from the browser) stating that his computer has a virus, and that he needs to download the rogue software provided in the link in order to remedy the situation.
  • The tools that are out there that claim to help, NoScript for example, take away the ease of use that the average user wants... NoScript, no matter how beneficial, is outside the realm of the average user. Some things are getting better... More users are applying updates from Microsoft and more software auto-updates these days.... so things are getting better but not as quickly as they could be.
  • The big issue is that web applications are coded with the assumption that the user will not perform anything malicious. From the discussions I have had with development staff, this will continue, since the courses that teach coding and its associated methodology do not really include security.
  • NoScript!!
  • I've created anti-CSRF UserJS for my browser and it's more trouble than it's worth. And XSS? There's sooo much crappy code out there that it would have even more false positives.
  • 15 years on, and the gold standard empirical answer for this question is resoundingly "No!" And more to the point, nor should they have to know how to protect themselves.
  • I like to think I know something and even I can't really protect myself while at the same time using services I like.
  • Average? No way.
  • Not without sacrificing the ability to utilize available functionality on their favorite Web 2.0 applications.

  • Depends on the site, if it's a homebrew/smaller operation I usually avoid it.
  • Social networking
  • Online Banking
  • I have an account at one bank with only a couple hundred dollars in it. I use that bank/account for online purchases. All my other banking/credit cards are with other companies.
  • Depends more on the site
  • Regular old fraud is more of a concern. Too many vulnerable sites for the bad guys to hit all of `em. (That's what I tell myself, anyway.)
  • No online banking!
  • i just do it in a separate browser instance. don't forget MOZ_NO_REMOTE=1. :)
  • Business that requires SSN or other sensitive identity info.
  • Online purchases, registering on websites using valid email addresses, date of birth, etc.
  • I never browse email links, and look at a lot of stuff through wfetch
  • web surveys, financial, personal data
  • I won't put my full social security number into a form field.
  • No $$$ related transaction over the Web
  • Only use 2 sites. Don't think they are great, but I keep my CC data in one place. Willing to pay more for that too
  • Cannot comment because of privacy issues :)
  • I have to admit I watch these "important" HTTP/S transactions through a local HTTP proxy though.
  • Granny Porn is a major area of business that needs more protection. RSnake, Id and the rest of miscreants over at sla.ckers have cornered the market on it and the rest of us can't get our fix!
  • We do not conduct any financial transactions online and we keep all sensitive customer data off of our website. Instead, we send customers CDs with that information.
  • I refrain from performing financial transactions with smaller businesses who roll their own shopping cart and credit card authorization.
  • I do not provide any personal information online. If I have an e-commerce transaction, I use a special credit card that I only have a $500 credit limit on. The only problem is that the credit card company keeps trying to increase the limit without my approval.
  • I tend to avoid small mom-and-pop shops, unless they use a larger system like PayPal to handle credit cards.
  • Ask for too much info too fast (credit card for shipping charges?) or make me think your website is shit (unusable pages on OfficeDepot (example again)) with no javascript, and I get either turned off or too frustrated to patronize.
  • Well maybe it doesn't really stop me as much as I use a low limit CreditCard for online transactions etc....
  • Accessing Bank accounts online.
  • I could get pickpocketed in the metro; what's the difference?
  • Because if I lose money from my bank, I'll go back to them about it - it's not my problem!
  • Hell, that's why there's a credit card insurance. Let them pay for their own mistakes!
  • Social Networks, Auctions, Online Docs and Backup Services (Google Docs, .Mac etc.), Job Sites -- to name a few
  • Buying/selling
  • As stated, if the website is bunk I pass on using it.
  • I refrain from viewing bank account information, PayPal account data, and anything that may be used to identify me personally.
  • I refrain, but use an account that rarely has more than a small amount of cash
  • Trusting the new and unknown
  • Err, you mean personal or professional? Pro: spamming and obviously exploitable features.
  • Loan / credit applications. Buying pr0n. Clicking ads. Clicking "Remove me" in e-mails.
  • I provide a lot of fake information though.
  • Rarely use my real name on the internet. Don't open attachments. Don't click links in emails. Don't do widget sites (pageflakes, goowy, etc.)
  • firefox with noscript though
  • I think in most cases doing business online is at least as secure as doing it offline.
  • Had some trouble using PayPal lately... For websites such as auctions, I will never use that!
  • all kind of online stuff :)
  • I swim in shark infested waters, I live in germ infested environments, and I do business using bug infested software. It's a necessary evil at this point.

  • I'd sell it to practically anyone (excluding Russian Business Network)
  • Selling vulnerabilities is like washing car windows on the crossroads. The car owner did not ask for it, so why do you expect payment? Would you also sell drugs to the DEA?
  • I might think about it, but I probably wouldn't do it. If I was a starving college student; in a heartbeat.
  • no
  • would not sell
  • would sell for top dollar
  • No.
  • No. if the issue is broad enough i go through CERT. i don't need the media attention, fame, or money
  • No
  • no
  • no
  • I need the money.
  • no
  • NO - Inform the product company
  • Jeremiah Grossman
  • not in the selling biz
  • wouldn't sell
  • I'm thinking about this idea.
  • no
  • Not selling, telling the application owner first. Always.
  • No. I wouldn't sell it.
  • Wouldn't sell it, just report it to the owners.
  • no
  • I would offer it freely, selling it is getting closer to the line of extortion. If they don't want to buy it then what? Would you sell it to someone else?
  • Um... you need a NO option here.
  • No
  • I'd sell a vulnerability to any reputable purchaser. That definitely excludes anonymous purchasers.
  • I doubt I would find one, but a dollar is a dollar
  • you're missing vendor. or not
  • Never thought about it.
  • I won't sell it. I'll publish it for free on sites like SecurityFocus and milw0rm
  • not sure I would sell it, likely follow standard disclosure methods
  • Nope
  • No
  • No
  • No one
  • nope - I'd disclose it to the vendor
  • I would never sell a vulnerability.
  • If I felt that I could potentially earn some money from the information I would sell it to anyone willing to pay a nice sum.
  • hell no they should do their own damn work
  • Would not sell.
  • Depends on the application
  • No. Responsible disclosure.
  • Selling sploits is unethical
  • I'd notify vendor, give reasonable time to resolve, then publish
  • no way!
  • I wouldn't release it myself, but rather allow someone else to take the credit, and ultimately the liability.

  • It should at least be easy to find a contact point for vulnerability disclosure information.
  • I wouldn't trust what they posted.
  • Regarding sensitive or e-commerce websites
  • I'm all for public disclosure (after a fix, the better) but public
  • I also want a pony. I'm not holding my breath.
  • nah, as long as they fix it in a "reasonable" amount of time (as defined by me), then i'm fine with an implicit policy of fixing stuff fast.
  • just fix it, or prevent it.
  • I think each website should handle disclosure their own way, just as researchers do. However I wish more websites (companies) would adhere to the http://www.domain.com/secure/, security@domain.com, abuse@domain.com, etc recommended practices ( http://www.rfc-ignorant.org/rfcs/rfc2142.php ) and have these addresses reach a human.
  • Not every website has the user base or traffic to necessitate a vuln. policy.
  • More companies need to be held legally accountable for their programming and security practices
  • Why give script kiddies an easier target ?
  • Nobody cares about disclosure. If I had a website, it wouldn't have a vulnerability disclosure policy. Maybe there are some that should.
  • Useless. Again, hackers do not care what is disclosed or not.
  • Big sites: give a reference and a financial reward. Think that's fine. Time = money. Probably cheaper than getting a pen-tester to find it :)
  • Current disclosure policies are enough: full disclosure, responsible disclosure and my advanced responsible disclosure (and also not disclosure policy).
  • This attitude would change the fear of publicly dealing with security issues and may, in the long run, reassure customers, since it means security is a concern for the web site owner.
  • The purpose should be to protect the company and the company's customers. If a disclosure policy existed we would know who to contact in case a vulnerability was discovered. Doesn't mean they'll fix it, but at least they would know about it directly.
  • If the vulnerability puts its users at risk, then it should be disclosed. The question is then who makes this decision?
  • Caveat: Any website that accepts or retains sensitive customer information.
  • That would prevent big confusion about when to disclose vulnerability issues.
  • No -- the disclosure situation w/r/t web site owners is completely different from the mass-market software world. I elaborate on this point here: http://www.webappsec.org/lists/websecurity/archive/2007-10/msg00025.html
  • It'd be nice, but for rinky dink operations or mom and pop businesses or things like that it's kind of eh.
  • I think organizations should probably publish their recommended disclosure policy, but it need not be public and need not be on every web site. I'd like to see DNS extended to publish a 'policy URL' for each domain... the information would be there.
  • An address to which to report bugs
  • A vulnerability disclosure policy will give birth to the possibility of attackers guessing about new flaws based on older ones.
  • Maybe organizations as a whole should, but in my case it would be extremely redundant for all of the sites to have their own policy.
  • Once a policy is in place I would feel much more comfortable disclosing issues. This way you won't get the "what were you doing looking for that SQL injection issue" from the site admin.
  • Might be nice, but what difference does it make really? RFP's policy seems to be pretty standard, but few companies really take it seriously
  • A lot of websites are built by people who have no idea what they are doing (WYSIWYG); a vulnerability disclosure policy is beyond them. I am in favour of one at the hosting level.
  • depends on the site - if it's business related - yes.
  • I think there should be a standard disclosure policy that everyone can be happy with.
  • Out of all the websites I've looked for vulnerabilities within, only one offers $50 incentives for disclosing them directly to the company, and I've only been offered this once.
  • Website owners don't care about vulnerabilities within their code, so they're less likely to create a disclosure policy for their branch of sites. Researchers AND web admins should really follow something that is mutually agreeable, but that's very difficult to create. RFPolicy doesn't apply to an immediate vulnerability that any child with some desire to poke around can figure out. Web vulns have to be resolved QUICKLY and that's just something the regular SDLC doesn't provide room for. How are you going to get the QA done, performance testing, regression testing, etc in time because of one minor change due to an XSS or SQL Injection? Let alone a business logic flaw. We're doomed. :)
  • Because they all have it, and I have worked at very large companies that cover up attacks but fix them in time. People in charge are selfish because they only think of how it would reflect on them (their department) and never think about the customer.
  • Normally it seems that vulnerabilities are due to the lack of secure coding practices, lack of a defined SDLC, poor or no requirements, and lack of security testing using test cases derived from threat modeling. One instance of a vulnerability means, a lot of the time, that the vulnerability exists elsewhere within the application.
  • That question really needs some elaboration as to exactly what you mean.
  • Not each site...but big corporate sites, yes.
  • It'll never happen. And if it did, it wouldn't be followed.
  • Not each, but I think it's a good idea for larger sites
  • This would endorse hacking, and make it possible for anyone caught hacking to say they were just testing.
  • There should be one common FDP
  • Let your QA and Security have some help; it's a nice open-source-esque way of trying to approach the problem.

  • application audit logging is very poor so I doubt most even know when an event happens.
  • As developers become more aware of older attacks (e.g. XSS, SQI, etc) and how to protect against them, and more AppFW are deployed, newer classes of attacks will come out that few (if any) are protected against.
  • Increase exponentially? I would've liked an option that was a little less sensationalist. I think they'll increase over the next couple of years... incrementally.
  • It depends on the IDS that come out, what new attacks come out, etc.
  • Web site vulns will probably increase, but no one will exploit them (even less than nowadays)
  • There is a great gap in the developer education regarding security. I know we like to point our finger at PHP here with the website and books examples often being vulnerable to several attacks. As new frameworks and scripting languages gain momentum in the future we will continue to see repeats of this. The fact that we still see buffer overflow exploits today is a clear indication that we're not getting it.
  • All your internets are belong to us.
  • zone-h and xssed are clear cut statistical indicators; they can't even keep up
  • The WHID statistics are unbelievable, and CWE/CVE doesn't really cover incidents. Breaches in general are being announced more often (CA SB 1386).
  • Lack of hackers; no expert on this though.
  • As pentest tools get better, attacks won't need high level skills...
  • As new technologies get introduced so will new vulnerabilities.
  • I don't need a crystal ball. We are often called in after incidents to help clean up. Many of these cases are not disclosed except to a select few customers. Others are severely underplayed in the public announcement.
  • I would say that there are no security experts involved in 2 out of 3 reviews I perform. During these reviews, the individuals responsible for the web application did not know what to look for to determine if their application was being compromised. Additionally, the web applications were completely insecure. I feel that as the ease of exploiting web vulnerabilities by script kiddies increases, so will the attacks. Also, I feel that organized crime and cyber terrorists around the world will also increase their attacks on web applications to fund their operations.
  • Not a big decrease, but the low hanging fruit (SQL injection, XSS) should be reduced or eliminated as libraries and language built-in functions are made the default.
  • Money-driven industry
  • Open-source tools will get easier to use and therefore more and more people will be able to find these vulns. Script kiddies will become a problem in the web world whereas this stuff was too difficult for them before.
  • I don't think we hear about them, but at the same time I think that as assessments occur more often, developers start to adapt and reuse, and enforcement/governing bodies gain strength (i.e. PCI, etc), things will stay the same or even start to decline. I think we can draw a hard comparison w/ network VA/PenTest: we're much better at automated patching, system hardening, SysAdmin, etc today vs 10 years ago (not to say that we don't find things, just less, and addressed more quickly). The same thing will happen with WebApps; it's just a matter of time.
  • As the bar to attack web sites falls, it's only natural that the number of attacks will increase.
  • I see a lot of vulns in the things I look at, but I don't hear of that many attacks (which would lead me to ticking #1 I suppose). However, I'm not sure how many attackers there actually are out there, or their skill level. It's a limited resource thing - not like viruses/bots/etc where you write one and let it go
  • welcome to php worms.
  • I think the web app hacks will go up but visibility will go down, as vulnerabilities will be exploited for more malicious and stealthy attacks.
  • I've given up on evil haxors. They're not very ambitious.
  • Unfortunately there's no "They will increase steadily"... I think exponentially might be a little much but they will continue to grow... I had a call last night that went: "Our web server was hacked and hosting a phishing site... what can I do?"
  • My ball is hazy, but I don't think things will change much. The saddest thing about life is that the moment is always now, and that things, generally, stay the same....
  • They are getting increased attention both on the offensive and defensive sides. So I think they'll balance each other out.
  • Lots of websites have XSS or SQL Injection holes, but are just not worth hacking (unless it can be done automatically by a worm).
  • We'll see more Storm worm type attacks where the attackers are serious about monetizing their attacks rather than just for the fun and fame of it. These worm fleets will use large social networking sites as their patient zero point.
  • It's always the same kind of games...
  • I have learned so much in so little time, there has to be people out there doing it for personal gain.

  • 1) Decompiled a flash to determine the update process for customer details. Exploiting this would result in real world benefits (hijacking private club memberships'n'shit). Not a victimless crime, though.
  • I'm not sure if this counts for business logic: Some site had "remember me" option in the login form, which added a special value in the cookie. It was completely random and different for each user (as it should be), but it never changed. Once it was stolen by XSS attack, it was used to take over the account, even if the victim logged out or changed password. This was not a vulnerability alone, but it seriously boosted XSS attacks (the site was vulnerable to them).
  • Most are from my current employer. NDA agreements prevent me from telling..
  • Hmmm, the best stuff is the easiest: 1) Test credit card numbers are your friend 2) Persistent XSS->CSRF (POST form submission done with javascript) on an unnamed open source e-commerce platform. Admin views the page, *presto*, we have another admin. 3) filetype: sql
  • you'll see the advisory by the end of the year. :)
  • Accessing test.aspx logged me in as an administrator to the web application with full access. Test.aspx was used during development because the third-party authentication provider was not in production yet
  • No big deal but I got the attention of some people at work when i phished one of their user registration pages including SSNs and CC#s
  • rsnake elaborated on and linked to my story here - http://ha.ckers.org/blog/20070122/ip-trust-relationships-xss-and-you/
  • Well, a long day in the city & beer = bad memory! Often tight on time; usually privilege escalation or filter evasion are about as interesting as it gets. Bypassing file upload restrictions is always fun.
  • A nice example of a creative and clever web application hack is my space-hack filters bypass technique, which was introduced at the Month of MySpace Bugs and was shown at the Month of Search Engines Bugs.
  • Not creative or clever, just common. Reload/refresh is the most common destroyer of business logic and it's simple, and everyone does it e.g. using 'back'.
  • Complete web server compromise through a file upload process. Was able to upload a .asp file and execute it, which ultimately allowed me to navigate the C:, read config files, database connection strings, etc...
  • Most of the clever hacks include NDAs.
  • Story #1: http://www.rachner.us/blog/?p=8 Story #2: I cryptanalyzed a keystream-reuse issue in a single-sign on system, recovered the full keystream, and was thereby able to issue arbitrary credentials to myself for the entire portal.
  • In order to avoid the JavaScript security restrictions, I developed pages that allow the web service to redirect responses to local HTML page. Turns out that this redirection is capable of redirecting anywhere. So, I took two of those and pointed them at each other and found that I created a simple denial of service where the server sent requests to itself. Throw a bunch of these at the server from an anonymous connection and the server simply dies.
  • Not very good but... Went through the entire purchase process on a (very large) online lingerie retailer and got to the order confirmation page. My order id is in the querystring. Wonder... yup. I can view anyone's order.
  • I dunno if this is the most creative/clever, but it's the one that a lot of clients can relate to and immediately see the major business issue. A licensing application allows you to pay your annual fees and make a donation to a related charity. Let's say your annual fee is $64: you can make a charitable donation of -$63, reducing your bill to $1. (Can't reduce it to 0 or negative.) (See the donation-validation sketch after this list.)
  • E-commerce proprietary solution, PHP, 1.5 years of development, include($page.php) on the index.php with allow_url_include=On. Erf. (A rough analogue is sketched after this list.)
  • In financial apps where you can charge people's credit cards, don't use a session id that is stored in the cookie and validated using an SQL query... and if you do, make sure you can't pass % as the value ;|
  • Sorry, NDA's and all that.
  • One of the largest banks in my country: I managed to find a form that allows you to submit financial news. The form also allowed you to upload a picture with each story. I noticed that the file is uploaded to a certain directory, and managed to tamper with the filename/directory. I then proceeded to upload my own PHP script, which executed commands on the server. This was in 2002. To this day, my own HTML page, with my company's name/URL in it, is still in their root directory, and they haven't deleted it :-)
  • Our management wasn't aware of web application security, so I wasn't supposed to spend much time on this subject. Some months ago I found an XSS vulnerability in one of our most important websites. I was able to add an article saying that one of our biggest shareholders was selling his shares. Additionally I removed an iframe which holds the share price of our company and added my own iframe with a much lower share price. I was also able to send messages to any email address with any text through this website, because the recipient was stored in a hidden field in the mail form. I used this "function" to send my management an official-looking email which included a link to my fake article (through XSS). Guess what, I now have a team with some people and we do WebApp Security the whole day. A picture is worth a thousand words :-)
  • sorry fella's, contract says talking out of bed is a nono
  • All of my stories are olden-days network security exploits :)
  • CICS (http://en.wikipedia.org/wiki/CICS) command injection from a crappy Cold-fusion app, through a crappy Cold Fusion->TN3270 middle-ware environment, all the way to the mainframe. In all fairness, it would have been difficult to do without a little inside knowledge.
  • The Photobucket 0day I've released utilizes an eventhandler appended to a query string, which is then embedded into the HREF/link elements within the page. With a bit of user interaction the payload then initiates an AJAX request to the account settings area, and proceeds to log the full name, email address, miscellaneous account data, the cookie, and the mobile upload address as well as the PIN number to the account. With the mobile upload address and PIN number it is possible to upload arbitrary images to other individuals' accounts for ambiguous reasons such as attempting to get the account suspended, having the owner of the account be physically investigated for questionable material, or simply to frighten others. During the entire process the user never leaves the Photobucket.com domain, but using the DOM the page's content is changed to that of the actual "Photobucket Maintenance" page before a GET request is made to a third-party logging script through an image. Once the request has been sent the page then relocates to the most recent images that have been uploaded to the site, which should avert any suspicion on the part of victims.
  • Nothing interesting I can share, but one of my favourites is a website that performs concatenation every time it sees a space. I'm seeing more and more of this... so, for example, 'foo bar' becomes 'foobar', but what the programmers of these apps fail to note is the non-strict nature of HTML... tab and newline work just as well for splitting tags, and very few people remember to filter the tab character. (A tiny demonstration follows this list.)
  • One's already been published (MacWorld!), but I usually find some variation of a session logic flaw a couple of times a year. These range from something as simple as changing a hidden field to "User2" while logged in as "User1" and viewing that user's records (see the sketch after this list), to a very complex fifteen-step process using nearly every trick to show how a persistent XSS can really be fatal. Are we counting authentication control failures as logic flaws? I do, because they're fundamentally incorrect thought processes on the developer's part.
  • nothing I'm proud of!
  • http://www.elhacker.net/gmailbug/english_version.htm http://sirdarckcat.blogspot.com/2007/09/google-mashups-vulnerability.html Yeah, hacking google r0x
  • Where to start. I guess my favorite stories usually revolve around breaking cryptographic implementations where the developer has done something stupid -- either tried to design their own scheme or used a cryptographically strong cipher in a weak way. I've decrypted and manipulated data this way without ever actually cracking the keys, simply by recognizing patterns, and deducing how the cipher was being used. (BH 2006 slides are online if you want more detail).
  • My client's site had an XSS vulnerability. They also had an Exchange server set up with basic authentication protection. I created an XSS exploit that drew a login form on the website, then spammed the entire company with a notice that the IT department had added an Exchange login on the main website for ease of use. The exploit code actually pointed to my own box, which was set up to prompt with a basic authentication popup; the response was then fed into a decode script and output to a file (a sketch of that decode step appears after this list). It took less than 15 minutes after I sent the spam message before one of the IT project managers 'logged' into the site and passed me his credentials. A combination of XSS, social engineering, and information gathering (email address harvesting).
  • Asked by a newspaper for a technical opinion on the 2005 Polish presidential candidates' websites, I found an SQL injection on one of them that, by use of CHAR() (which bypasses PHP magic quotes), allowed injection of arbitrary content into the pages (see the sketch after this list). I haven't disclosed the details, but imagine how much fun it could have been :)
  • But then I'd have to kill you. Seriously, let's just say I got to see a fine film and had wonderful food after finishing my review.
  • I mailed you one... the Indian airline PNR bug
  • Inject SQL from an ASP.NET ViewState
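
A few of these are easier to picture with a little code. First, the wildcard session-id issue from the first response: a minimal sketch, assuming the lookup interpolated the cookie value into a LIKE comparison. The table, columns, and function names are hypothetical.

    # Hypothetical, deliberately vulnerable session lookup (Python/sqlite3 for illustration).
    import sqlite3

    def current_user(db, cookie_session_id):
        # BAD: the cookie value is pasted into the SQL and compared with LIKE,
        # so sending "%" as the session id matches some row in the table and
        # the attacker is logged in as whoever that row belongs to.
        row = db.execute(
            "SELECT user_id FROM sessions WHERE session_id LIKE '%s'" % cookie_session_id
        ).fetchone()
        return row[0] if row else None

    def current_user_fixed(db, cookie_session_id):
        # Better: exact comparison with a bound parameter.
        row = db.execute(
            "SELECT user_id FROM sessions WHERE session_id = ?", (cookie_session_id,)
        ).fetchone()
        return row[0] if row else None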
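The space-concatenation filter described above, reduced to a toy. The filter function is made up, but the bypass works because HTML treats tabs and newlines as whitespace inside a tag just like spaces.

    # Toy version of a filter that only strips spaces (hypothetical).
    def strip_spaces(value):
        return value.replace(" ", "")

    blocked = "<img src=x onerror=alert(1)>"
    bypass  = "<img\tsrc=x\nonerror=alert(1)>"   # tab/newline separate attributes too

    print(strip_spaces(blocked))  # "<imgsrc=xonerror=alert(1)>", the tag is broken
    print(strip_spaces(bypass))   # payload survives, since only spaces were removed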
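And the "change the hidden field to User2" flavour of logic flaw, as a sketch. The record store and field names are invented; the point is only that identity is read from the submitted form instead of the authenticated session.

    # Hypothetical record viewer that trusts a hidden form field.
    RECORDS = {"User1": ["User1's statement"], "User2": ["User2's statement"]}

    def view_records_vulnerable(form, session):
        # BAD: whoever edits the hidden "user" field sees that user's records.
        return RECORDS.get(form["user"], [])

    def view_records_fixed(form, session):
        # Better: identity comes from the authenticated session, never the form.
        return RECORDS.get(session["user"], [])

    session = {"user": "User1"}
    print(view_records_vulnerable({"user": "User2"}, session))  # leaks User2's data
    print(view_records_fixed({"user": "User2"}, session))       # still User1's data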
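The "decode script" in the Exchange/basic-auth story can be as small as this, since a Basic Authorization header is just base64 of user:password. The sample header value here is made up.

    # Decoding a harvested Basic auth header (illustrative).
    import base64

    def decode_basic_auth(header_value):
        # header_value looks like "Basic dXNlcjpwYXNzd29yZA=="
        scheme, _, encoded = header_value.partition(" ")
        if scheme.lower() != "basic":
            raise ValueError("not a Basic auth header")
        user, _, password = base64.b64decode(encoded).decode("utf-8").partition(":")
        return user, password

    print(decode_basic_auth("Basic dXNlcjpwYXNzd29yZA=="))  # ('user', 'password')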
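Finally, a rough illustration of why CHAR() sidesteps magic quotes in the Polish-candidates story: magic_quotes_gpc only escapes quote characters, backslashes, and NULs, and a payload that spells its strings with MySQL's CHAR() contains none of them. The query shape below is invented.

    # Why CHAR() slips past magic quotes (Python used just to build the strings).
    def magic_quotes(value):
        # Rough stand-in for PHP's magic_quotes_gpc / addslashes().
        for ch in ("\\", "'", '"'):
            value = value.replace(ch, "\\" + ch)
        return value

    def char_encode(text):
        # Spell a string as MySQL CHAR(...) so no quote characters are needed.
        return "CHAR(%s)" % ",".join(str(ord(c)) for c in text)

    payload = "999 UNION SELECT %s" % char_encode("<h1>defaced</h1>")
    print(magic_quotes(payload) == payload)  # True: nothing in it needed escaping
    print("SELECT body FROM news WHERE id = " + payload)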

  • dunking bar-boxing
  • masturbation
  • Drumming?
  • Hapkido. That's a Korean system; it can be compared to Russian Combat Sambo.
  • but only a little... and bits and pieces of others (notably Israeli self-defense and JKD)
  • Actually kung fu and sanda, but close enough.
  • Drunken West Texan Barroom Brawling :)
  • cliche
  • Mind Martial Arts: Tao.
  • Is Unreal Tournament a sport?
  • I took Krav Maga for a while; my primary method of self-defense is not being an offensive person, being large and strong, and being very aware of my surroundings
  • Pencak Silat Pertempuran
  • No, even Anurag Agarwal could kick my scrawny ass.
  • What? Do you want to use Kung Fu to fight against hackers? "The Art of War" may be better to use instead.
  • Wing Chun, and it beats blue belts hands down :p
  • Many years ago I was engaged in cybersport. Now I'm mainly practicing mouse-moving exercises ;-).
  • no
  • huh ?
  • Taijitsu, Aikido, Aikijujitsu
  • Aikido
  • Eating Doritos is an art form isn't it?
  • Judo mostly anymore; I haven't been able to compete since April, when I tore up my shoulder.
  • I hold a three star dragon belt in comp-fu.
  • None...what does this have to do with Web Application security?
  • Why is this applicable? No, but I should probably learn some form of martial art or combative sport. After all, there are several individuals whose lives were made miserable after I discovered web application vulnerabilities on their employers' websites. I am sure there may be a few individuals gunning for me.
  • nope
  • Just Script-Fu
  • I've trained up to my black belt, but I don't plan to teach (not on my own anyway), so I haven't bothered to test since it's so freakin' expensive. It's basically a whole mortgage payment :( $600US+ for the World TKD Federation black belt exam....OUCH! [That is, unless something drastic has changed in the last 3 years and I missed the news....unlikely.] I've also dabbled in Muay Thai, Kung Fu, and kickboxing over the years, because my instructor takes courses in all kinds of things and gets on a kick for one or another alongside TKD, but I've never done any of them separately or tested in them.
  • I practiced tai chi quan for over 2 years in my teens.
  • no, but I wakeskate, which is WAY cooler!
  • No
  • Char Siu Bao (the art of making Chinese baked pork buns)
  • where is perl ninjitsu?
  • I'm an ace with a pea-shooter, even if I say so myself
  • take it easy dude.
  • pwn!
  • Google-fu and whatever this guy does (the old one kicking ass at the beginning, but not at the end): http://www.youtube.com/watch?v=gEDaCIDvj6I
  • I've yet to get into one, but I'd like to say, "Bar-fighting".
  • "5 ward bees" fist fighting in the hood. :-)
  • No, but I do watch the Kung-Fu network :)
  • Skateboarding... You ever try to get time on a busy ditch ramp?
  • Neko jitsu - my cat tummy rub fu is awesome! I hardly notice the claws digging into the scar tissue any more
  • No
  • Trying to convince my boss...
  • what the hell?
  • LOL, I think I might be the only one who isn't, from what I can tell. I would love to pick up boxing, but there just aren't many gyms around where I live.
  • I am a 30th level Ninja in my homemade paper RPG.
Last Survey Results
May 2007