Thursday, August 30, 2007

HTTP RS and User ID & Password Mishandling

Below are a few snippets from our August 2007 WhiteHat Sentinel customer newsletter that we thought people might find interesting.


HTTP Response Splitting results are in:

Many customers have asked us which Web servers are vulnerable to HTTP Response Splitting attacks. While we are still assembling aggregate totals, here is an initial list of server types we’ve observed to be vulnerable:

1. Apache/2.x

2. Apache/1.x

3. IBM_HTTP_SERVER

4. Jetty/5.1.11

5. Lotus-Domino
6. Microsoft-IIS/5.0
7. Microsoft-IIS/6.0 (commonly, and mistakenly, believed to be invulnerable)
8. Netscape-Enterprise/4.1
9. Oracle-Application-Server-10g/Oracle-HTTP-Server
10. Resin/2.x
11. Websense Content Gateway (Traffic-Server/5.x)

Note that the problem of Response Splitting susceptibility is really with the application code. A common mistake is narrowly focusing on the server itself as the source of the issue, likely because a few servers have controls to strip this attack from HTTP GET requests. However, many of the HTTP RS attack vectors we’ve found use the HTTP POST verb as well.
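To make the application-code point concrete, here is a minimal sketch (function names and the payload are invented for illustration; Python stands in for whatever language the application actually uses) of how a redirect handler that copies user input into the Location header enables response splitting, along with the usual fix of refusing CR/LF before the value reaches the header:

```python
def naive_redirect(target):
    # Vulnerable: the user-supplied value is copied into the header
    # unmodified, so a CRLF sequence in the input terminates the
    # Location header and lets the attacker append arbitrary headers
    # (or an entire second response).
    return "HTTP/1.1 302 Found\r\nLocation: %s\r\n\r\n" % target

def safe_redirect(target):
    # Mitigation: strip CR and LF so attacker input can never start a
    # new header line. (Rejecting the request outright is even better.)
    cleaned = target.replace("\r", "").replace("\n", "")
    return "HTTP/1.1 302 Found\r\nLocation: %s\r\n\r\n" % cleaned

# Attacker-controlled redirect parameter smuggling an extra header:
payload = "/home\r\nSet-Cookie: session=attacker-chosen"
```

Because the injection rides inside a parameter value, it works the same whether that value arrives via GET or POST, which is why server-side filtering of GET requests alone is not a fix.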


The initial HTTP Response Splitting tests are not yet taking advantage of our more advanced testing engines that perform layered encoding attacks and malformed URI manipulations. Once this is fully incorporated, we expect to find even more variants than we do today.


Watch Out For...

User ID and Password mishandling: Using the user ID and password as session or authorization identifiers after the user has logged in is a dangerous practice with many security implications. Not only does any XSS vulnerability then give up the user’s credentials very easily, but the practice often indicates weak session handling and leads to a variety of information-leakage issues, such as passing the user ID and password off-domain via the HTTP Referer header or through non-SSL portions of the website.

So, don’t use the user ID and password as identifiers.


While enhancing our authentication and automation tests, we have been observing more and more Web applications that pass the user ID and the password around in things like hidden form fields or cookies. Sometimes the credentials are encoded, sometimes encrypted, and sometimes they are en clair (in plain sight).
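As a quick illustration of why "encoded" offers no protection (the field value here is made up for the example), Base64 in a hidden form field or cookie is reversed with a single call by anyone who can view the page source or the request:

```python
import base64

# A hidden form field that merely Base64-encodes "user:password"
# hides nothing; encoding is not encryption.
hidden_value = base64.b64encode(b"alice:hunter2").decode("ascii")

# Any observer recovers the credentials in one step.
recovered = base64.b64decode(hidden_value).decode("ascii")
```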


While the actual security implications of this are contextual to the specific application, this design pattern usually indicates the presence of one or more weaknesses.


WhiteHat Website Vulnerability Management Practice Tips


Q. What do I do if I see a user ID and password being passed around one of my applications?


A. First off – you need to identify why the credentials are being passed around in the application. If there is no reason, you can simply have your developers remove it.


Barring that, a common weakness we observe is the user ID and password being used as a form of session-token to maintain state in the application. This is a really bad idea on many levels. In addition to information leakage issues with the user credentials, this will introduce:


Session Fixation Weaknesses: The “session token” is predictable, fixed, and unchanging. This makes it a trivial task for an attacker to reverse-engineer and take over a “session.”


A Vast Attack Surface: In general, there is no notion of a “session.” You cannot log a user “in” or “out” when handling sessions in this fashion, since there is only “one” session for all eternity, making the attack surface very large once an attacker figures that out. Everything from XSS to CSRF becomes far, far worse when this weakness is present - not to mention that the attacker can impersonate the user directly and at will.


Multiple other issues from the WASC Threat Classification list

Another common pattern is checking credentials before making security decisions, such as verifying whether or not a transaction was really authorized by the user. Unfortunately, if the credentials are not entered by the user but instead supplied automatically via a hidden form field or cookie, an attacker can force the user’s browser to submit them automatically as well.

Monday, August 27, 2007

The Big Picture

At SANS, Ryan Barnett (Breach Security) and I talked a lot about the various suggested website security “best practices” such as SDLC, black box testing, source code reviews, WAFs, scanners, etc. - weighing their pros, cons, and costs. During the conversation Ryan commented that I tend to look at the landscape from a scalability perspective. I hadn’t noticed this before, but he’s absolutely right. I try to look at strategies and solutions with an eye for the big picture, considering how many people, companies, and websites a best practice may or may not benefit. If “best practice X” were rolled out on 1,000 or 10,000 or 100,000 websites…
  • What problem(s) does it solve and cause?
  • How much time and effort would be required for implementation?
  • Are there enough skilled people available?
  • What will it cost and who would be willing to pay?
  • How much more secure will it make a website?
Etc.

These data points help develop insights into where the industry is headed, the challenges we’ll face, and where innovation is most helpful. Plus, I find it fun. To illustrate what I mean, below are some speculative high-level numbers I’ve collected, which are used to calculate “best practice” costs. Hopefully my math is accurate.


Important Websites: 60,000
According to Netcraft there are roughly 128 million websites. Of course not all of these are “important” (containing sensitive data, brand sensitive, etc.). Netcraft also says there are over 600 thousand SSL websites, which is another useful metric, since why buy a certificate if the site isn’t at least somewhat important? There could still be important websites not using SSL, and neither figure accounts for intranet or access-restricted websites. But because the number is still so large, I decided to stay conservative and take only 10% of the SSL total and use that moving forward. Figuring out how to properly secure the top 60,000 websites would really be something.


Vulnerability Assessments
The standard one-time black box vulnerability assessment (with or without the aid of tools) conducted on a single website performed by a qualified Web security pen-tester.

Required man-hours per assessment: 40 hours
Bill rate: $250 (per hour)
Cost per website: $10,000 (40 * $250)
Max number of assessments per year per person: 40
* Estimates are based on the data collected from the Web Application Security Professionals Survey and sanity checked through other sources.

To perform a vulnerability assessment on 60,000 websites each year requires:

Total man-hours: 2,400,000
Qualified pen-testers: 1,500 (websites / 40)
Total cost: $600,000,000 (websites * $10,000)
*These numbers do not take into consideration that many websites change rapidly and may require multiple assessments per year.
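The assessment arithmetic above can be checked in a few lines:

```python
# Back-of-the-envelope check of the vulnerability assessment estimates.
websites = 60_000
hours_per_assessment = 40
bill_rate = 250                       # dollars per hour
assessments_per_tester_per_year = 40

total_hours = websites * hours_per_assessment              # 2,400,000
pen_testers = websites // assessments_per_tester_per_year  # 1,500
total_cost = websites * hours_per_assessment * bill_rate   # $600,000,000
```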


Source Code Reviews
The standard one-time source code review (with or without the aid of tools) conducted on a single website and performed by a qualified software security expert.

Required man-hours per source code review: 80 hours
Bill rate: $250 (per hour)
Cost per website: $20,000 (80 * $250)
Max number of Source code reviews per year per person: 20
*Estimates are based on data from the TechTarget article “Inside application assessments: Pen testing vs. code review” and sanity checked through other sources.

To perform a source code review on all 60,000 websites each year requires:

Total man-hours: 4,800,000
Qualified source code reviewers: 3,000 (websites / 20)
Total cost: $1,200,000,000 (websites * $20,000)
*These numbers do not take into consideration that many websites change rapidly and may require multiple source code reviews per year.



Web Application Developers: 300,000
It’s difficult to find a reference for the number of Web developers worldwide, so I figured I’d try JavaScript instead, since most if not all of them know the language. According to a “JavaScript: The Definitive Guide” advertisement, “300,000 JavaScript programmers” have taken advantage of the book. There are probably more Web developers out there who don’t know JavaScript, but let’s stay conservative. Imagine all the Web application code they’re churning out daily, let alone annually.

Secure Programming Training (2-day course)
* Based on SANS pricing information

Per Person: $2,000
Qualified trainers: 375
* Assuming 1 trainer is capable of conducting 20 classes per year with 40 students in each class.

To train all web application developers once per year:

Total cost: $600,000,000 (300,000 * $2,000)
*Does not account for any travel costs
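The training math works out the same way:

```python
# Back-of-the-envelope check of the secure programming training estimates.
developers = 300_000
cost_per_person = 2_000               # dollars, 2-day course
classes_per_trainer_per_year = 20
students_per_class = 40

total_cost = developers * cost_per_person                        # $600,000,000
trainers = developers // (classes_per_trainer_per_year
                          * students_per_class)                  # 375
```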


From these numbers many takeaways can be derived, but here’s one that stood out to me:

Clearly more code is being churned out than we have the ability to assess. That means vulnerabilities will be pushed to production no matter what, because the business is not going to wait around for security’s backlog. And if the bad guys need to find only one vulnerability, then we’re going to lose the battle. In fact, the only thing holding things together is that there isn’t yet a critical mass of bad guys with the skill set to fully exploit the opportunity. However, this will only last a short while, and the smart money says attacks will continue to increase.

Friday, August 17, 2007

Why aren’t more websites hacked?

I’ve given many presentations about website security statistics, most recently at SANS, stating that somewhere between 70% and 90% of websites have serious vulnerabilities. I dig into severity breakdowns, top ten lists, vertical industry comparisons, and more. After bearing witness to the data at hand - what the “bad guys” could do (or already have done) - attendees emerge from the Denial stage of Web security grief and enter Anger. Others remain skeptical, which is completely understandable, and curious about something particularly relevant. They ask, “If these statistics are accurate and the Web is so insecure, why aren’t websites hacked more often?” This is a darn good question!

Indeed....

Why aren’t the bad guys pillaging everything in sight?

First off, websites ARE getting hacked. A LOT! The Zone-H defacement archive clearly illustrates the size of the problem. Secondly, the public isn’t always made aware of every website hack, and the media doesn’t advertise every incident. Many profit-driven website hacks will never be made public because both the bad guy and the victim keep the incident confidential. The various state disclosure laws only apply to customers’ personal and private data, not to incidents that compromise source code, trade secrets, brokerage account access, etc. The point is we only know the best-case scenario based upon the published information.

However, these explanations aren’t satisfying, probably because we can’t measure the problem. Perhaps there is another possibility. Consider that Netcraft says there are roughly 128 million websites, with about 2 million more added per month. Those in the industry know the challenge of finding, hiring, training, and retaining web security people. Could it be there simply aren’t enough bad guys with the necessary skills and motivation to monetize web hacks? Could there be more than 2,000 such morally flexible people? I have no idea if this number is accurate or not, but it seems reasonable. This could explain why Web banks aren’t yet getting compromised hourly, and why MySpace and Facebook aren’t suffering Web worms daily.

Something to consider anyway.

How to make a website harder to hack

I mean, that’s what web application security is all about. We know websites will never be 100% secure, just like software will never be 100% bug free. We also know web application hacks are targeted. All we have to do is look at CardSystems, the U.N., MySpace, CNBC, UC Davis, Microsoft UK, Google, Dolphin Stadium, Circuit City, T-Mobile, and many other incidents to figure that out. Bad guys don’t hammer away at eComSiteA then mistakenly hack into WebBankB. It doesn’t work like that. The victim is the one they’re targeting in the browser URL bar. So instead we should approach website security in terms of time and difficulty, just as physical security has done for decades--with burglary resistance, fire resistance, alarm systems, etc.

For example, GSA-approved and U.L.-certified products such as a:

Class 5 vault door - “shall be resistant to 20 man-hours surreptitious entry, 30 man-minutes covert entry and 10 man-minutes forced entry.”


Class 150-4 hours container - “must maintain an interior temperature less than 150°F and an interior relative humidity less than 85% when exposed to fire as per the Standard Time Temperature Curve for 4 hours to 2000°F.”


These benchmarks make sense. The problem in web application security is that everyone so blindly and exclusively talks about “best practices” like the SDLC, input validation, threat modeling, PCI compliance, source code reviews, scanners, developer education, WAFs, and other topics. They forget the big picture. Do these “solutions,” or combinations thereof, actually make a website harder to hack? Yes! Well, probably. Err, maybe. And if so, how much harder? If the answer is unknown, then how do we justify their cost in time and money? Oh right, “compliance.” Still, imagine telling the CIO you just spent the last 6 months, countless company man-hours, and hundreds of thousands of dollars implementing “best practices” only to raise the bar by maybe 30 minutes!?

Judging from WhiteHat Security’s vulnerability assessment statistics on hundreds of websites, this is exactly what’s happening. Vendors basically attempt to dazzle customers with the most blinky red lights, buzzword-compliant banter, confusing diagrams, meaningless ROI calculations, and reams of FUD to distract people from their main objective: MAKE A WEBSITE HARDER TO HACK. Do yourself a favor the next time a vendor is hawking their wares at you and ask this simple question: “How much harder does your solution make my website to hack?” The answer might surprise you, if there is one.

Rather than continue ranting, I think it’s time the web application security industry began using this type of testing methodology so we can answer these questions in no uncertain terms. To do so we’d have to take into consideration the skill of the bad guys, the tools and techniques at their disposal, whether they are internal or external, the change rate of the website, and . . . anything else? The methodology probably wouldn’t need to be overly complicated. In fact, we might borrow ideas from physical security on how to set up the testing process. Imagine obtaining a measurable degree of security assurance.

BlackHat Encore Webinar Presentation

A lot of people were unable to make it to Black Hat this year and asked how else they might see the presentation RSnake and I gave, “Hacking Intranet Websites from the Outside (Take 2): Fun with and without JavaScript Malware.” So we decided to do an encore performance, webinar style. This means wherever you are in the world (relatively speaking), you can participate and perhaps ask a question of either RSnake or myself live. If you’re already well familiar with all the latest and greatest attack techniques discussed here, on RSnake’s blog, and elsewhere… you won’t see much that’s “new.” But if you have an hour to kill and want to see a few demos, why not... it's free! :)

Register here -
Date/Time:
Tuesday, August 21, 2007 at 11:00 AM PDT (2:00 PM EDT)

Monday, August 13, 2007

Two kickass Web security papers recently published

1) The first, out of the Stanford security lab: Protecting Browsers from DNS Rebinding Attacks by Collin Jackson, Adam Barth, Andrew Bortz, Weidong Shao, and Dan Boneh. Everything you wanted to know about DNS Rebinding (formerly known as anti-DNS Pinning) and probably a lot you didn’t. My favorite part was the real-world experiment they performed using Flash 9 advertisements - very spooky, very easy, and apparently highly effective stuff. And not to leave us wanting, the security lab guys also drafted a proposal for a long-term solution to DNS Rebinding attacks using Host Name Authorization (based upon reverse DNS lookups).

2) The second paper is from SensePost: It’s all about the timing…, by Haroon Meer and Marco Slaviero. Before they get to their real innovation, they provide upfront a detailed history of how Web-based timing attacks work. The paper would be a fantastic resource for that alone, and I’m going to have to go back, reread it a few more times, and commit it to memory. The real gem, though, is their Cross Site Request Timing attack. Hopefully I’m describing it correctly: basically, it’s a way to leverage victims’ web browsers to blindly perform brute force attacks (among others) against third-party websites. Like I said, I’m going to have to study this more, but I was thoroughly impressed by what I saw.

Web security is moving in the right direction

Despite the last post’s unpleasantness, this morning I woke with a general sense of excitement and optimism. Sure, we all know website and browser security is in an abysmal state - vulnerabilities can be identified in most important websites in under 20 minutes, and it’s almost impossible to protect yourself from a malicious web page. However, after every conference I attend where I get to talking with people, I get the sense that things are definitely looking up.

Industry groups (WASC and OWASP) are buzzing with activity, mailing list and message board posts are frequent and informative, browser vendors are engaging with the community and asking for public comment, programmers are using modern development frameworks and asking the right questions about secure software, and organizational budgets now have web security line items. These are all very good signs. And I think it all started with awareness. You can’t fix it if you don’t know what’s broken.

The only problem is progress never seems to come fast enough. It’s going to take years before measurable improvements are made. Any browser security architecture change probably won’t come for another full version or two (Firefox 4 and IE 9). It’s also going to take at least 7-10 years before the majority of important websites are replaced by those using modern technology and/or have remediated their current set of issues. This also means there is opportunity to make a real difference - perhaps a few clever people in the crowd have some bright ideas to speed up the process.

And for those already in Web security, this also means there is job security for quite some time. ;)

Friday, August 10, 2007

Putting up, then shutting up

It appears Billy Hoffman is at it again, trying to start trouble about something that isn’t there. Billy claims that RSnake and I stole Ed Felten’s work (from 2000), Timing Attacks on Web Privacy (browser history stealing), because we used a “timing attack” in our Black Hat presentation to do browser port scanning without JavaScript. We’re then accused of either willingly omitting Felten’s work or failing to do proper research on the topic up front. Finally Billy confronts us - well, RSnake - to “put up or shut up,” because he himself was outed for copying our last year’s research on JavaScript port scanning.

Before putting up: admittedly, over the last 7 years I’ve occasionally released stuff that others had previously published, which I had not known about. This is common for web security researchers due to the vast number of unresolved attack techniques and papers, and the liberal use of obscure terminology. Take CSRF, for example: how many “novel” papers and names have there been over the years? When such incidents are brought to my attention I’ve had no problem backtracking and quickly updating everything to cite the earlier work as the original source. Often people help out in the blog comments. In my experience, so has RSnake.

Back to Felten’s paper: it was not the first text introducing “timing attacks.” I don’t know what was, but I did find “Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems,” published 4 years prior to Felten’s. Felten’s paper also doesn’t cite any other timing attack paper, nor did it need to in my opinion. So to my mind RSnake and I were not compelled to reference Felten’s paper, because our browser port scanning technique used a completely different kind of timing attack and had nothing to do with browser history stealing. And we made NO claim to have invented timing attacks in general. Sheesh, so much drama.

Tuesday, August 07, 2007

Black Hat 2007 / Defcon 15 Round-up

Update 08.08.2007: RSnake posts his account of BH/DC and uploads some great pics.

My final account of Black Hat 2007 and Defcon 15 is not nearly as entertaining as my wife Llana’s. First of all, Black Hat is by far my favorite conference and I look forward to it every year. The talks and speakers are top notch (well, most are), the attendees come from all over the world with interesting stories to share, and there is always something going on day or night. This year’s show was bigger than ever, 4,000 strong, with security professionals, press, analysts, hackers, government employees, vendors, etc. Black Hat is totally worth every penny spent if only to meet the people there.

RSnake and I presented Hacking Intranet Websites from the Outside (Take 2) to a packed audience. We discussed the current theory and demonstrated several cool tricks including browser history stealing and port scanning (using only HTML/CSS), de-anonymization, and split VPN tunnel hacking (using some JavaScript Malware). For those who regularly read our blogs, most of these techniques should be familiar, but judging by the audience reaction they clearly were not. After the speech, many dozens of people crowded the stage and met with us afterwards to ask questions and congratulate us on a job well done. We also made lots of press, including NetworkWorld, E-Commerce News, InformationWeek, and InfoWorld.

In the aftermath, Richard Bejtlich and Michael Farnum moved along to the Depression stage of web security grief:

“Existing defenses are absolutely ineffective against current attacks. I am struggling to describe the importance of this insight. It does not matter if you are fully patched, "properly configured," not running Javascript, or adopting any number of other current defensive strategies if you use a Web browser that renders modern rich content. Almost none of the techniques described in the Black Hat talks relies upon exploiting vulnerable software. Almost all of them abuse inherent functionality for malicious reasons.”


Between our presentation and the DNS Rebinding talks, I think we really drove the point home that the Intranet is no longer off limits and browser security needs to be rethought. And soon! Now it’s the browser vendors’ turn, and with all the press I’m sure they’ve taken notice.

The slides and most of the PoC have been made available. Hopefully we’ll get the video soon and can post that as well.

Two great talks: Intranet Invasion With Anti-DNS Pinning by David Byrne and Attacking Web Service Security: Message Oriented Madness, XML Worms and Web Service Security Sanity by Brad Hill. I learned several things in each of these and the content was well presented.

Iron Chef Blackhat: OK, I have no idea how Brian Chess convinced me into being a judge without my knowing exactly what I was getting into up front. But there I was, sitting next to the two other judges, John Viega and the President of Hackistan. Behind us, several hundred people were thinking the same thing we were: “What the heck is going on here!?” If you’ve ever seen the cooking show Iron Chef then you should be familiar with the format. Two chefs face off for an hour, then show off their wares to the judges, winner take all. They even had the mannerisms, aprons, and chef hats. :)

During the hacking, the President was non-stop cracking jokes, saying sarcastic things like “this is blowing my mind,” making comparative references to Paris Hilton, and busting John’s Symantec chops for not ridding his computer of viruses. So damn funny. In the end the winner didn’t really matter. Both Iron Chefs showed well, people learned a thing or two about the VA process, and everyone seemed to really enjoy the show. Hopefully Brian and Fortify will keep this going. It was a lot of fun.

Side-channel conversations: There was a good bit of chitchat about BJJ and MMA. A lot of people in infosec train in various forms of martial arts. Makes sense, I guess. However, I was not prepared for Chris Hoff’s unprovoked attack. In front of PURE, Chris comes out of nowhere like the Blair Witch, hugs me, and says, “all I want to do is get in your butterfly guard, big boy.” I think Mike Rothman was standing there just as confused as I was. :) Then later there was talk about some Hacker MMA Smackdown event rumor I hadn’t heard about. RSnake had, and he immediately said in his best Tyler Durden voice, “I’d fight Erik Birkholz.” I kid you not. Ask the Mozilla guys, they were there! Gotta be on guard at all times around these infosec guys, sheesh.

CiscoGate and Predictable Resource Location: Jeff Moss gave an excellent and entertaining presentation about the timeline of events surrounding the Michael Lynn, ISS, and Cisco fiasco from a while back. One thing I found interesting: after a federal judge issued a TRO against Black Hat, they removed the offending files from the web server. Strike that - in their haste, they removed the links to the files and forgot to remove the originals. Someone took the opportunity to guess for the files on the web server for 8 straight hours until they finally found them, and shortly thereafter the files flooded to every corner of the web. Cisco complained in court that this violated the TRO, but the court saw it otherwise.

Sunday, August 05, 2007

On the road again...

I just got back home from Black Hat / Defcon and spent the last 24 hours detoxing and resting up for the weeks ahead. Starting at 6am tomorrow I’m off traveling an amazing amount to speak at several infosec conferences. What’s cool is I get to see more of the country, meet new people with similar interests, and converse about their various webappsec challenges. I'm looking to learn as well as teach. However, this also puts a serious crimp in my BJJ training. Maybe I’ll have time to work out in the hotels and perhaps visit some local academies in the area. I’ll bring my gi just in case. :) If you are attending any of these events, I’ll see ya soon!

Metricon 2.0
August 7 | Boston, MA

Lockdown 2007
University of Wisconsin-Madison
August 9 | Madison, WI

SANS What Works in Application Security Summit 2007
August 15 | Washington, DC

Tampa, Florida
August 23

NCA Conference 2007
September 6 | Seattle, WA

Los Angeles
September 7

ISACA Network Security Conference
September 10 | Las Vegas, NV

InfoSecurity New York
September 11 | New York, NY

IT Security World 2007
September 17 | San Francisco, CA

OWASP Taiwan
September 27 | Taiwan

OWASP/WASC Cocktail Party

Update 08.06.2007: Heather Cason from Breach confirmed over 300 people attended the party over 2 hours and sent over some pictures as well (see below). Apparently the club liked us so much they gave us some free champagne and a bartender show (which was hella cool).

The OWASP/WASC cocktail party was definitely a Black Hat highlight. Just about everyone in the webappsec world was in attendance having a great time. Somewhere around 300 people showed up, completely filling the Shadow Bar and leaving a line out the door. If the party were a conference, it probably would have been the biggest in webappsec history. The amount the industry has grown over just the last year is simply amazing, and I’m happy just to be a part of it. Personally, I got to meet several people I had only known online and many new names I had not. We shared (a lot of) drinks and interesting conversation. We gotta do this again next year!

I'll post some of my pictures shortly. If you have pictures of the party, please email them to me, flickr them, or post a link below.

Thanks so much to Breach Security for sponsoring the event and to Heather Cason for coordinating the logistics. We could not have done it without them. Also special thanks to Robert Auger, Anurag Agarwal, Jeff Williams, Tom Brennan, Kevin Overcash, and Ofer Shezaf for pulling together and helping out as well.


Leading up to the bar, tons of people inside and lining up to get in.


Dinis Cruz playing the doorman. No, just kidding. I think he was actually talking to me and explaining the fine art of shadow dancing, but I'm cut off on the right.


The Shadow Bar filled with webappsec geeks.


Nothing draws a crowd like free booze


The line out the door



John Terrill drinking and talking shop
Robert Auger in his classic Clark Kent pose, but Jeremy knows what's up.


Mozilla says critical patches in “Ten #$%*ing Days”

Update 08.06.2007: Window Snyder and Mike Shaver clarified the comment in RSnake's post. As expected, this was Mike's personal commitment to the remediation process and not Mozilla's official security policy. Still, that's plenty cool with me. Having seriously dedicated people on our side can be just as good as any lip-service policy statement.

Web browsers are under serious attack and the security of the Web hangs in the balance. Over 1 billion people use the Web, and no one but the most security-conscious power users have any chance at protecting themselves - and sometimes not even them. We REALLY need browsers to be significantly improved, because the security of the entire Web depends on it. Consider how many security/privacy extensions you have installed and the behavioral traits you’ve learned along the way. What percentage of the people on the Web do you think possess this level of knowledge?

During Black Hat, RSnake and I got to talking a lot with several members of the Mozilla security team. They are all really good guys and gal (Window Snyder) working to establish solid working relationships with the security community. This is a good thing, a very good thing, and something I’ve been asking them to do for a while. They’ve listened. Mozilla invited us to their cookies & milk after party, gave us some free T-shirts, and even bought us an expensive sushi dinner at Caesars. Can’t beat that! By their actions Mozilla clearly wants to engage with the industry, voice their commitment to improving the current situation, and hear the ideas of others. But here’s where it gets really interesting.

Apparently during one of the Black Hat after parties, RSnake and Mike Shaver, Mozilla’s Director of Ecosystem Development, discussed the speed of the browser’s critical patch release cycle. Mr. Shaver claimed Mozilla could roll critical patches out in ten days. RSnake quickly challenged the remark, at which point Mike whipped out his business card and hand-wrote “Ten #$%*ing Days” to demonstrate his seriousness. Now, I didn’t witness this take place firsthand, so I can’t say for sure how many drinks RSnake handed Mr. Shaver over the course of the conversation. This was, after all, late at night in Las Vegas during Black Hat, so anything is possible, but I’ll take RSnake at his word that Mike was coherent, though I’d expect Mozilla to provide clarification.

Whether or not you think Mozilla can make good on the claim, what I like is that there are people over there passionate and ballsy enough to stick their necks out, because ours already are. Also, it’s important to understand the difference between traditional vulnerabilities like buffer overflows, which Mr. Shaver was probably talking about, and long-standing attack techniques such as intranet port scanning and history stealing. These types of attacks, and the dozens of others like them, take longer to fix because they are more fundamental to the overall browser security model. Unfortunately, according to my sources, restricting intranet connections from public IPs will probably not come until Firefox version 3, with content restrictions even further out. Ugh.

Still, the good news is contacts and relationships are being made. Mozilla is listening and reading the research posted across the web security industry in blogs, mailing lists, and message boards. Progress will take time, but we’re moving forward. Perhaps those with the necessary desire, software development skills, and time can help Mozilla create what they need and kick-start the effort sooner, especially on the content restrictions front.

Saturday, August 04, 2007

Survived BlackHat 2007 and Defcon 15

I’m about to jump on my flight back to San Jose in a few minutes. Right now I am dead tired, my body aches, my mind hurts, and I’m overstimulated in every way. These are all very good signs of a solid back-to-back infosec conference week. The presentation RSnake and I gave was a complete success, which was mission priority #1. Right now I’m not coherent enough to post a solid wrap-up, so I plan to write several posts about the shows starting tomorrow. People I met, pictures, parties, clubs, presentations, events, embarrassing moments, etc. All in all, a simply amazing time.