Venture capitalist (Grossman Ventures https://grossman.vc), Internet protector and industry creator. Founded WhiteHat Security & Bit Discovery. BJJ Black Belt.
Friday, July 27, 2007
How to check if your WebMail account has been hacked (Redux)
Erik Larkin from PC World posted a great tutorial on how to “Set a Hacker Alarm on Your Web Mail Box” in 5 easy steps. This is based upon a trick I blogged a while back. We improved upon the original concept by using a free hit counter, OneStatFree.com, instead of having to host a web server yourself, which many are not able to do. Also, by including the “alarm” code as an HTML attachment, the trick will work even if your Web Mail account is set to “block all images” to prevent spam. This trick is useful not only for catching a hacker breaking into your Web Mail account, but perhaps also rogue customer service representatives or someone with physical access to your machine. Have fun!
Are web application scanners ***ing useless?
My Attribute-Based Cross-Site Scripting post stimulated an interesting exchange in the comments. You can read for yourself, but one key thing is that this particular issue is not new. In fact it’s been around for years and only now are we figuring out how to scan for it more effectively through trial and error on real live websites. A couple of people picked up on this and one blogger asked a particularly relevant question:
“If current web application scanners can't find an issue which is around for 5 years now, aren't they f*** useless?”
The frustrated tone of the question is obvious. As someone who’s been on the customer side of VA solutions, I understand where they’re coming from. We all know web application scanners don’t (and never will) find every vulnerability, but it's imperative to know what they do check for and how well. That’s the point being made. Just because a tool says it “checks” for something, it doesn’t mean it's any good at “finding” it. This is a key piece of information to efficiently complete an assessment and not waste time overlapping work.
I’ve talked about this lack of knowledge in my web application scan-o-meter post and invited others to comment, including the scanner vendors. Their marketing teams are very good about generating a big list of “we check for this”, so I set the dials where I believed the state-of-the-art to be. Nothing really came of it from their side. My conclusion is they simply don’t know. While they can test their products in the lab, they’re unable to measure real world capabilities on any kind of scale, where their customers actually use them, like WhiteHat does. Hence their technology improvement is painfully slow, resulting in frustrated questions like the above.
The bottom line is automated scanning is important to the vulnerability assessment process. But it doesn’t help anyone when technology capabilities are withheld from customers. I’m hopeful the next set of web application VA solution reviews will shed light on this from an independent source.
Get inbound links using XSS
We’ve known for a while that certain search engines will index XSS generated links, possibly improving Page Rank. Though real examples have been relatively limited, XSS News has some very interesting results from their experiments. Have a look.
See you at Black Hat!
RSnake posted the news and I’m excited to say he and I will both be on stage presenting Hacking Intranet Websites from the Outside (Take 2). This is our first time presenting together so this should be a lot of fun, and we’re both busy polishing up the slides and preparing the demos. Last year’s talk is going to be hard to top, but we plan to seriously push the limits of what’s possible by illustrating what happens when combining a variety of attack techniques, both with and without JavaScript.
More than likely the room will be standing room only. So if you’re going to Black Hat and want to get a good seat, it’s a good idea to camp the room. The talk before ours on Anti-DNS Pinning (DNS Rebinding) should be particularly relevant to the topic as well. PoC code and slides will be made available. The video will come later when we get it on DVD. And just after the presentation (during lunch), we’ll be signing XSS Attacks books in the pavilion.
Also, if you’ve RSVP’ed for the OWASP & WASC Cocktail Party, it’ll be later that evening at the Shadow Bar. This event is definitely the place to be with nearly 300 people signed up. It’s like the whole webappsec world is going to be in attendance!
Black Hat is going to rock, I can’t wait!
Tuesday, July 24, 2007
(INSECURE) Magazine Issue 12
(INSECURE) Magazine, one of my personal favorites, has released issue # 12. The magazine is free, no registration hassles, quality content, what more do you want! Oh, did I mention that Mirko Zorz interviewed me for one of the sections? :) Enjoy!
Contents:
- Enterprise grade remote access
- Review: Centennial Software DeviceWall 4.6
- Solving the keylogger conundrum
- Interview with Jeremiah Grossman, CTO of WhiteHat Security
- The role of log management in operationalizing PCI compliance
- Windows security: how to act against common attack vectors
- Taking ownership of the Trusted Platform Module chip on Intel Macs
- Compliance, IT security and a clear conscience
- Key management for enterprise data encryption
- The menace within
- A closer look at the Cisco CCNP Video Mentor
- Network Access Control
Monday, July 23, 2007
Attribute-Based Cross-Site Scripting
A couple of weeks ago I posted sections from one of our WhiteHat customer newsletters that focused on HTTP Response Splitting. Newsletters are one way we keep customers informed of important industry trends and improvements to the Sentinel Service. Judging from the blog traffic and comments, it was well received. So this time I’ll highlight Attribute-Based Cross-Site Scripting, which Arian Evans (WhiteHat’s Director of Operations) has been spending a lot of R&D time getting to work properly. Enjoy.
New Vulnerability Detection
Attribute-Based Cross-Site Scripting is one of the hardest types of Cross-Site Scripting to find in an automated fashion. Today, no desktop scanner does a good job at this; most don't even attempt it because false-positives skyrocket – except for the WhiteHat Sentinel Service. Naturally.
WhiteHat Sentinel implemented our second-generation attribute injections last week. Many of you have seen new XSS attack vectors being pushed on your sites, and for quite a few it is a result of these tests. The example we most often push is sourcing in JavaScript via an injected STYLE tag (attack vector for Internet Explorer).
Attribute injection is when user-controlled data lands inside of an HTML tag, or specifically a value inside of an HTML tag, where notorious characters like “<” and “>” may not be required for XSS exploitation. For example:
HTTP GET request (not actual Sentinel test - this is an example for exploitation):
http://www.domain.site/search/partner/index.cfm?sessionid=12345678901
&hid=%22+STYLE%3D%22background-image%3A+expression%28alert
%28%27Is_XSS_HERE%3F%29%29
Will result in this example tag in the HTTP Response:
<* td>
<* a href="/index.cfm?sessionid=12345678901&hid="" STYLE="background-image: expression(alert('Is_XSS_HERE?))">
<* img src="http://www.domain.site/images/topnav/logo.gif" width="274" height="83" border="0">
<* /a>
<* /td>
This is a perfect example of an XSS vulnerability in which the attacker wouldn't need HTML tags or meta characters like <>. All you need in this case is a double-quote, a colon, and some parentheses to begin your attack. From here the exploit can be carried out in many ways (e.g., malicious JavaScript). The ability to detect these issues accurately will grow exponentially with the advanced conditional logic currently being implemented into the Sentinel Service.
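To make the detection problem concrete, here is a minimal sketch (in Python) of how a scanner might probe for this class of issue. This is an illustration only, not how Sentinel actually works, and the target URL and parameter name in the usage comment are hypothetical. The idea is simply to submit a marker containing a double-quote and see whether it is reflected back unencoded.

import urllib.parse
import urllib.request

PROBE = 'whxss"probe'  # harmless marker containing a double-quote

def reflects_unencoded_quote(base_url, param):
    # Send the probe as the parameter value and fetch the response.
    query = urllib.parse.urlencode({param: PROBE})
    with urllib.request.urlopen(base_url + "?" + query) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    # If the probe comes back verbatim, the double-quote was not entity-encoded,
    # so an attribute-based injection (like the STYLE example above) may be possible.
    # A real scanner would also confirm the reflection lands inside an attribute value.
    return PROBE in body

# Hypothetical usage; only test sites you are authorized to assess.
# print(reflects_unencoded_quote("http://www.domain.site/search/partner/index.cfm", "hid"))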
WhiteHat Website Vulnerability Management Practice Tips
Q. How do I stop an XSS attack that lands in an HTML tag?
A. For most attribute-based attacks to work, the attacker needs at least single or double-quotes. Double-quotes are what is most often needed – from what we see at WhiteHat. You could try escaping, removing, or substituting single and double-quotes on input.
Alternately, you could encode any user-supplied data safely on output. This is the safest approach. Barring international-language sites, there are a minimum of four alternate encoding types for all Latin-ASCII code page characters: Unicode, Decimal, Hexadecimal, and Named. This can jump to 18 variants for something as simple as a double-quote if you factor in international-language code pages.
Q. How do I encode my output safely?
A. If you encode double-quotes as their named-entity references, you will remove most of your attribute XSS issues. If you encode single-quotes using Decimal (works across the most browsers) or named-entity reference, this should solve the problem, as well (by breaking the initial escape sequence the attacker needs to take over the tag and begin scripting).
A nice reference page for more on entity-encoding values can be found here:
http://www.crosswinds-cadre.net/?page=character_entities
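For illustration, here is a minimal sketch (in Python) of the output-encoding approach described above. It is not a complete or production-grade encoder, just the handful of substitutions the answer mentions (named entity for the double-quote, decimal entity for the single-quote).

def encode_for_attribute(value):
    # Encode the ampersand first to avoid double-encoding the entities below.
    return (value.replace("&", "&amp;")
                 .replace('"', "&quot;")   # named entity for double-quote
                 .replace("'", "&#39;")    # decimal entity for single-quote
                 .replace("<", "&lt;")
                 .replace(">", "&gt;"))

# Example: the STYLE-injection payload from above is neutralized once encoded,
# because the attacker's double-quote can no longer close the href attribute.
payload = '" STYLE="background-image: expression(alert(\'Is_XSS_HERE?\'))'
print('<a href="/index.cfm?hid=' + encode_for_attribute(payload) + '">')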
Q. What is this Unicode craziness you speak of?
A. A great place to start is here:
http://www.joelonsoftware.com/articles/Unicode.html
On top of Google!
As displayed in the blog title headline, when I first started posting one of the goals was to control the page that showed up first on Google when searching for my name (Jeremiah Grossman). Well, a co-worker mentioned to me that it had happened and I hadn’t even noticed. Woohoo, success! Launch the fireworks, release the balloons, and rain down the confetti! Thanks everyone for the links!
What's also cool is 111,000 search results are returned with my name in quotes, which is still a bit shy of Steve Jobs who has 31,500,000. Hey, I'm working on it. :)
Fox News, Directory Indexing, and FTP Passwords
Update 07.24.2007: Reportedly the ziffdavis FTP account contained the names, phone numbers, and email addresses of at least 1.5 million people. Also, while the details are undisclosed, some XSS and SQLi vulnerabilities have been found in the FOX site. FUN!
A 19 year old photography student (Gordon Lowrey) found that the Fox News website had Directory Indexing enabled (now disabled). Sure, it’s not a good practice (against PCI-DSS), but it’s typically not a big deal security-wise and it happens occasionally on other major websites. What made this one interesting is that the person navigated up the directory tree to the /admin/ folder, no password required, where inside was a curious bash shell script that’s still available.
#!/usr/local/bin/bash
filedir="/data/fnc01_docs/admin/xml_parser/zdnet"
workdir="/in"
importdir="f93kl"
ftpoptions="-vn"
logname="foxnews_\%1"
logpasswd="T1me\ 0ut"
hostip="ftp.g.ziffdavis.com"
date_string=`date +%y%m%d`
echo "start ftp..."
ftp $ftpoptions $hostip << -EOF-
user $logname $logpasswd
asc
prompt
cd $importdir
mget *$date_string*
close
-EOF-
mv Fox*.xml $filedir$workdir
echo "end ftp..."
It looks like Fox News is taking a feed from zdnet and inside is their FTP username and password. Oops. I’m not certain how much access this account granted, no way I’m testing it, but it certainly doesn’t look good. Maybe this account is used elsewhere, maybe it could write back to zdnet and push content to other outlets. Very bad. Hard to say for sure, but certainly not a headline the Fox News administrators want to wake up to. This also goes to show that not all website vulnerabilities are in the code and you never know who might give you a free pen-test.
Sunday, July 22, 2007
Status Update: Brazilian Jiu Jitsu
It’s been 4 months since I last posted about my Brazilian Jiu Jitsu training. Well, I’m lovin’ it. BJJ is extremely demanding physically and equally so mentally. I’m still only a blue belt, but if I continue training hard, purple may come in another year or so. One thing I’ve learned is the ability to fight hard and think smart while completely exhausted and in pain is what really separates people. As for myself, I have a slight shiner underneath my right eye, the cauliflower ear is almost healed up, and the ribs only hurt when I cough/sneeze/sit-up/laugh/yawn. But it’s all part of BJJ if you expect your game to improve.
My goal is to drop down to 221lbs / 100kilos (from 270lbs / 122.5kilos, originally at 300lbs) so I don't have to fight in the unlimited monster weight class. The training regimen consists of BJJ (Muay Thai mixed in) 4-5 days a week, circuit training 2-3 days, running 4, and a dash of Aussie Rules Football sprinkled in. Strength is up, stamina way up, and my BJJ/MMA skills are coming along nicely. I’ve lost about 20lbs / 9 kilos and am presently hovering around 250lbs / 113kilos. Weight loss has been a little slow due to increased muscle mass, but that’s OK since I’m visibly a lot trimmer. Everything is on track and in 6 more months I should be where I need to be.
The best part is I’m finally starting to hold my own in the academy. Takedowns have improved and I’m submitting people as often as I get submitted. And a couple of huge breakthroughs occurred recently. I tapped a BJJ brown belt from a visiting academy! I didn’t even know he was a brown belt until someone mentioned it. To me he was just an opponent like any other. And then this weekend my instructor (Tom Cissero) took a couple of us students to a Tae Kwon Do academy because they wanted to be introduced to BJJ. We taught a few moves, performed a sparring demo, and at the end their instructor asked/insisted we roll (spar) with some of their students. Two happened to be black belts in TKD, who fought full contact, and one guy obviously had some wrestling background. We took it a little easy on them, but tapped them all just the same.
Also, my academy is starting to put up a few moves on YouTube. Cool stuff if you’re interested.
Tuesday, July 17, 2007
Chasing Vulnerabilities for Fun and Profit III
I’ve written before about WhiteHat Security office events in which we race to find the first and best vulnerability in never-seen-before websites - the winner receiving company-wide bragging rights. Speed hack contests are also great for learning and testing one’s skills. They get the competitive juices flowing, typically finish in less than 20 minutes, and keep the day-to-day work fun! Lately, winning has proved to be extremely challenging, especially when you’re up against people like Bill Pennington, Arian Evans, and the entire Operations Team who does this stuff every day.
We ran two bouts last week. The first was a financial application, which was a little bit different, because it had a social networking aspect. We weren’t provided any usernames or passwords, couldn’t self-register without a special code; and, as a result, the attack surface was limited. This meant we could still probably find the first XSS fast, but the high-severity issue probably wasn’t going to be there. The domain was called out, fingers hit the keyboard, and we were off. Bill P. and I went immediately after XSS in the search fields, but struck out because of proper HTML encoding. Arian, who only sees filters as a challenge, busied himself with some crazy encoding attacks. The rest of the Operations Team were eagerly trying to take down the giants.
One, two minutes flew by with every avenue of attack closed off. Input sanity checked, output encoded, use of custom error messages, and directory indexing disabled. Not one to be easily frustrated, I was getting nervous because only 3 minutes in, application real estate was running out. Then just like that, at ~3 minutes 30 seconds, Daniel Herrera (security engineer) scored an XSS victory with a simple form-based injection in the user registration form. Something I had totally overlooked. Not wanting to be shut out completely, BillP and I found our own form-based XSS issues after 4 and 6 minutes time respectively. Too little, too late. No high severity issue was found after exhausting our options without valid accounts.
The second contest was a little bit different. This was not a customer website, but an Arian Evans’s special originally designed to test the capabilities of Web application vulnerability scanners. Arian pointed to the exact location of a parameter that was vulnerable to XSS. But, it was guarded by a complex blacklist that had to be bypassed in order to get a working exploit. This was our challenge. No accounts were necessary, so I figured this should be a fairly quick and easy exercise. To sweeten the deal, I put lunch on the line for the first person to bypass the filter and issue a JavaScript alert.
The green flag dropped and Arian assumed the role of referee/comic relief (which he plays very well). I hammered on the filter and noticed the blacklist denied keywords like “alert”, “javascript”, “document”, and “window”, and only allowed a limited set of “safe” HTML. I managed to get JavaScript to execute via an <* input src="http://aaaa/" onerror="a" type="image"> quickly enough. But, because of the blacklist, I couldn’t pop a window FTW. 2 minutes turned into 5, which turned into 10, then 15. Argh! I couldn’t get the filter to break. Meanwhile, Arian was teasing me for being so close and yet so far.
At ~15 minutes, Sarah Groark (Sr. Security Engineer) scored a very clever filter-bypass win using hex HTML entities:
<* BODY ONLOAD=javascript:alert('XSS');>
Decoded: <* body onload="javascript:alert('XSS');">
I had tried decimal HTML entities earlier to beat the filter, but gave up on it because the filter was immediately wise to it. Adding insult to injury, Sarah took a few minutes break to get some water and a yogurt. Shortly afterwards, I was overheard saying, “I got beat by a girl!?”. To which Sarah replied, “That’s because girls rule!”, and under the circumstances I was in no position to debate the point.
Three minutes later I did manage to complete my hack without using any encoding at all:
<* input%20type=image%20src=http://aaaa/%20onerror=x%3d'aler'%3bx%2b%3d't(\'arian_i5_0wn3d!!1\')'%3beval(x)%3b>
Decoded: <* input type="image" src="http://aaaa/" onerror="x='aler';x+='t(\'arian_i5_0wn3d!!1\')';eval(x);">
Clever? Sure. But, not fast enough.
You win some, you lose some I guess. Maybe all this press and presentation stuff is ruining my skills! ;) Hopefully, InfoWorld won’t take away my CTO award!
WebAppSec Twilight Zone
1) Cenzic is recasting their Hailstorm ARC product as a central reporting dashboard for competing commercial web application vulnerability scanners (AppScan, WebInspect, Fortify, etc.). A webappsec SIM? Organizations who buy copies of multiple scanners do so because during evaluation the reports were wildly inconsistent (the PCI Council noticed this as well). So some organizations buy it all to get “more coverage”. What seems odd is that Cenzic is placing less emphasis on the value of their vulnerability identification and repositioning around a central UI previously described by Network Computing as “An Unpretty Face”.
2) Watchfire (now IBM), bastion of scanner products, is now promoting the enterprise value of Software-as-a-Service (SaaS) for web application vulnerability assessment!!? Inconceivable! The equivalent would be if Qualys all of a sudden started offering a network scanner product and hyping the ROI of in-house scanning. I guess I should be flattered, since WhiteHat has been pioneering the model since 2003. The webappsec VA market is definitely moving towards services and the battle is going to be won in terms of ease of deployment/use/scale.
3) Google buys everything, even security companies apparently. So why would Google build their own black box fuzzing tool (it’s called Lemon) to scan their websites for XSS and SQL Injection? I mean, it’s not like they’re short of cash or anything. Maybe they decided rolling their own would be more effective than buying a commercial product. That would be telling. Or maybe, with the purchases of Watchfire and SPI Dynamics putting the stand-alone web application vulnerability scanner market in limbo, Google wants a piece of the action. I’m sure Chris Hoff would make me eat my words again if they did that. ;)
Thursday, July 12, 2007
SANS Interview: Thought Leaders in Software Security
Stephen Northcutt (President of the SANS Technology Institute) was kind enough to interview me as part of the Thought Leaders in Software Security Series. Already interviewed are several other stellar experts (Ryan Barnett, Dinis Cruz, Brian Chess, Caleb Sima) who are well worth the read.
Stephen asked some highly in-depth, diverse, and very timely questions, which covered a lot of webappsec ground. This was fun because opportunities to do so in a public forum are few and far between. Discussed were the recent scanner vendor acquisitions and what they mean to the market, attack trends, the XSS Attacks book, what makes WhiteHat unique, my thoughts on the SDLC and developer education, and he even offered me the chance to provide advice to other engineers turned executives.
Read the Interview
Wednesday, July 11, 2007
HTTP Response Splitting Revelations
HTTP Response Splitting (HRS) is one of those webappsec attacks that’s poorly understood, even among the experts (myself included), despite Amit Klein’s best efforts to show us the light. The common understanding is vulnerabilities are rare, severity is high, but the preconditions necessary for an attack lower the threat profile. Recently Arian Evans, WhiteHat’s Director of Operations, took a renewed interest in the complexities of HRS with the assistance of Amit (the original discoverer) and several customers. Arian added new checks to Sentinel, adjusted others, and ran scans across a few hundred websites to test the vulnerability identification rate. The results were eye opening to say the least. We sent out a customer newsletter last week talking a lot about our HRS R&D, which I thought others would be interested in.
HTTP Response Splitting is a little known, very complex, and frequently misunderstood website vulnerability.
The best way to think about Response Splitting is that it’s executed similarly to Cross-Site Scripting (XSS), but more powerful. Take a loose analogy of a written letter in an envelope. XSS targets the message inside the envelope, while Response Splitting targets not only the message inside the envelope, but the envelope itself.
There are several different variations of Response Splitting and many emergent behaviors that make accurate vulnerability identification challenging. WhiteHat has been investing a lot of R&D time perfecting the accuracy of our tests and started pushing the results to Sentinel users last week. WhiteHat Security plans to release a paper on this subject, breaking down the details of various conditions, implications, and how to measure them. In the meantime, the results of our testing contained a few surprises:
HTTP Response Splitting issues are far more widespread than expected.
Simply put: We’re finding it everywhere. The interesting thing about many Response Splitting vulnerabilities is that the Web application is not necessarily doing anything wrong compared to strict RFC specification. Though this doesn’t change the fact that the website is vulnerable to a malicious attacker taking control of it. Do not be surprised if we find this in a few places on your website(s).
HTTP Response Splitting issues are far more severe than expected.
HTTP Response Splitting issues can be very, very bad, especially on production sites if a caching server is present. If you are using an intermediary caching server/proxy/load-balancer there is a chance that one of several conditions could be true:
1. One-to-Many: One attack can target many users.
2. Persistence: HTTP Response Splitting attacks will be persistent.
3. Domino Effect: One vulnerability may be exploited to take over an entire site.
These issues only occur in very specific situations. We will actively notify you if we discover that those conditions are true. But, this can be tough to measure and we may not always know if the above worst case scenario conditions are possible. This evaluation may require some investigation on your end.
WhiteHat Website Vulnerability Management Practice Tips:
Q. How do I fix a vulnerability to HTTP Response Splitting?
A. Whew, tough question…you should likely take two approaches:
1. Input Validation: You can try to remove every CRLF (\r \n) from input. The problem you will have is that the CRLF is likely to be encoded in some fashion. It could simply be URI-escaped (%0d%0A) or some other Hex or Decimal encoding variant.
If you do not find all the encoded CRLF variants that your application is capable of decoding, you will still be vulnerable.
2. URI-escaping: If you properly escape the URI in every place it is output, like the HTTP Location Header, the CRLF will not be parsed by the browser.
The problem with this approach is that there are some conditions, like personalization cookies, that are not URI data but could be a Response-Splitting attack. We could take our personalization cookie that is name=WhiteHat and make that name=WhiteHat%0D%0A and craft our attack after that.
When your application goes to set that cookie with our name and subsequent attack, the HTTP Response Splitting attack will occur when it reaches the browser.
You would have to try and catch that on input validation, or write a special library to escape \r \n anywhere you found it in output that was potentially user-supplied data.
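As a rough illustration of the two approaches above, here is a minimal Python sketch, assuming user-supplied data is about to be placed in an HTTP Location header. The function names are made up for this example, and the cookie case described above would still need its own handling.

import urllib.parse

def strip_crlf(value):
    # Input validation: remove raw carriage returns and line feeds so the
    # value cannot terminate the current header and start new ones.
    return value.replace("\r", "").replace("\n", "")

def safe_location_header(user_supplied_path):
    # Output escaping: URI-escape the value so encoded CRLF variants (%0D%0A)
    # never reach the response as literal header-splitting characters.
    return "Location: " + urllib.parse.quote(strip_crlf(user_supplied_path), safe="/?&=")

# Example: an attempted split is neutralized; the injected "Set-Cookie" line
# is collapsed into the path and escaped rather than emitted as a second header.
attack = "/home\r\nSet-Cookie: session=evil"
print(safe_location_header(attack))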
Webinar: Cross-Site Request Forgery
For those interested in learning about Cross-Site Request Forgery (CSRF), WhiteHat is hosting a webinar on July 24, 2007 at 11:00 AM PDT. This is about the basics, ins and outs, and solutions in straightforward terms. If you want to attend, registration is free. Description is below:
Cross-Site Request Forgery (CSRF). Session Riding. Client-Side Trojans. Confused Deputy. Web Trojans. Confused? Every year, for the past several years, the exact same Web attack is discovered, analyzed, and subsequently renamed. Whatever it's called, it all means the same thing: An attacker is forcing an unsuspecting user’s browser to compromise their own banking, eCommerce or other website accounts without the real user’s knowledge.
Attackers have begun to actively exploit CSRF vulnerabilities across the Web. Why now? Because it's incredibly easy and the vast majority of websites are vulnerable to it. How do you stop an attack originating from a “real user,” who appears to be properly logged-in, and making a legitimate request - except that they did not intend to make the request?
Jeremiah Grossman will:
- Define Cross-Site Request Forgery
- Provide live, technical demonstrations
- Offer solutions to this growing problem
- Present strategies for complete website vulnerability management
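Since the webinar is about stopping forged requests from legitimately logged-in users, here is a minimal, framework-agnostic sketch (in Python) of the common synchronizer-token defense. The storage and function names are illustrative, not from any particular product.

import hmac
import secrets

SESSIONS = {}  # session_id -> CSRF token (stand-in for real session storage)

def issue_csrf_token(session_id):
    # Generate an unguessable token tied to the session and embed it in a
    # hidden field of every state-changing form the server renders.
    token = secrets.token_hex(16)
    SESSIONS[session_id] = token
    return token

def is_request_authentic(session_id, submitted_token):
    # A forged cross-site request carries the victim's cookies but cannot
    # read or guess the token, so it fails this check.
    expected = SESSIONS.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted_token)

# Example
sid = "abc123"
token = issue_csrf_token(sid)
print(is_request_authentic(sid, token))   # True: legitimate form submission
print(is_request_authentic(sid, "guess")) # False: forged request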
Zero-day auction
From the recent media coverage, slashdot’ing, list banter, and blog posts, many know about WabiSabiLabi’s new zero-day auction. Much of the conversation revolves around ethics, “is this good for the community?”, speaking in the best interest of consumers, software vendors, security researchers, and security vendors. The ethics are important, sure, but I don’t think they’ll have a whole lot to do with whether or not the marketplace will get off the ground.
For my part I believe vulnerabilities have value and where there is value there will be a marketplace for traders (above ground or under ground). The traders being security vendors (IDS/IPS/VA), security researchers, software vendors, and the black hats. Security vendors already buy vulnerabilities from researchers, through TippingPoint’s ZDI initiative for instance, and rumor has it black hats are opening their pocketbooks as well.
That means the only group not participating so far are software vendors, and of course they’re not going to want to pay for something they’ve always gotten for free. So it’s easy for them to say “IT’S BLACKMAIL!”. My question for them is, despite how they feel, do they have a responsibility to at least attempt to bid for a vulnerability to defend their customers? I think so. That and invest more into their SDLC so there is less to bid on.
Will auctions impact the value of a vulnerability by making its existence known? It should, probably both positively (causing a bidding frenzy) and negatively (letting others know where to look). Either way the bidders will have to adjust their price tolerance accordingly, but it shouldn’t put the kibosh on the whole idea. What I like about the idea is that it gives researchers more options. If they want to put their vulnerabilities on the open market, they can. If they choose full disclosure, OK. If they want to sell discreetly to TippingPoint or go to the software vendor directly, they have the choice.
For the time being I’ll stick to my original comment:
“All this would take is a couple of successful transactions, and it could cause a big shift in the way we traditionally think about the vulnerability disclosure process."
Tuesday, July 10, 2007
First multi-site XSS WebMail Worm (PoC)
Web Worms are quickly increasing in sophistication. This new proof-of-concept multi-site XSS WebMail Worm, with video, is capable of propagating across multiple WebMail providers using the exponential XSS technique. Sure, we knew it was theoretically possible before, but we’d never seen anyone actually do it. Really interesting stuff. For the moment the industry is still largely in the PoC stage, but rest assured it’s only a matter of time before payloads are made malicious. More and more people are experimenting.
Time to learn DNS-Pinning
Update 07.12.2007: Kelly Jackson Higgins from Dark Reading posted a story highlighting the new Anti-DNS Pinning demos set to be presented at BlackHat. It appears there are many notable industry experts piling on the research trying to figure out how deep the rabbit hole goes. It should be an interesting year.
Per the typical web security M.O., attack techniques we’ve known and ignored for years have a way of coming back around as new ways of using them are discovered. It happened with XSS, recently with CSRF, and now new life is being breathed into Anti-DNS Pinning. Anti-DNS Pinning is a very important issue, especially as it relates to intranet hacking, but it’s HIGHLY complicated and few people understand the nuances. In fact, only a few months ago was the first time I’d seen the term mentioned in the mainstream media.
Fortunately, learning about Anti-DNS Pinning is getting easier, as Christian Matthies and PortSwigger both posted freshly minted and extremely well-written white papers. The benefit of Christian’s is that he’s got a bit more data on anti-anti- and anti-anti-anti-DNS Pinning, while PortSwigger’s explores web proxy implications, which I had not seen anywhere else.
Also, if you are attending Black Hat USA 2007, make sure to catch David Byrne's Intranet Invasion With Anti-DNS Pinning.
Monday, July 09, 2007
Free Black Hat USA passes!
HNS is giving away 2 Black Hat USA passes. Entering is simple. All you have to do is send them an email with your contact information (for registration purposes) and they'll pick the winners at random. And as a privacy bonus, they promise to destroy all the information they collect once the contest is over. Check out the page for more info, but it looks well worth the 15 second investment.
Thursday, July 05, 2007
7 Deadly Sins of Website Vulnerability Disclosure
Someone you don’t know, never met, and didn’t give permission to informs you of a vulnerability in your website. What should you do? Or often just as important, what should you NOT do? Having security issues pointed out, “for free,” happens to everyone eventually (some more than others). People unfamiliar with the process often make poor judgment calls causing themselves more harm than good. We witness these missteps regularly, even amongst security vendors who should know better. I figured that if we document some of these mistakes, maybe we’d start learning from them. Then again, the original seven deadly sins certainly haven’t vanished. :)
Note: The content below should not be considered a substitute for an incident response plan.
Pride
It’s been said many times, all software has bugs. And as long as software has bugs it will contain vulnerabilities. It’s time we faced this fact. If you receive an email from a stranger declaring the discovery of a Cross-Site Scripting, SQL Injection, Cross-Site Request Forgery, Response Splitting, Content Spoofing, Information Disclosure, Predictable Resource Location, or some other esoterically named vulnerability in your website, there’s no need to get defensive. Even if you’ve never heard of or don't understand the type of attack, it doesn’t mean you’re dumb or your baby/code is ugly. It also doesn’t mean that it’s not a real issue with serious consequences undeserving of attention. Investigate each disclosure thoroughly, communicate openly, and fix issues promptly.
Wrath
Depending on the nature of the disclosure, some are tempted to fight back against the person disclosing a vulnerability by informing law enforcement and/or their attorney. While it’s not a bad idea to seek their counsel, only in cases of fraud, extortion, or other malice is taking the next step typically warranted. Investigations are difficult to open unless the financial or data loss is substantial. And, nasty legal threats only lend credence to the fact that something major has taken place. Criminal and civil actions have been known to backfire when the news breaks. Negative PR due to a “security issue” is not something any business owner desires, especially headline news about an issue that could have been quietly resolved with little to no media attention.
Desertion
A fresh security event is no time to start polishing the resume for Monster, CareerBuilder, or Craigslist. This is when your presence is critical and where you can end up learning the most. A vulnerability disclosure is not the end of the world (it’s instead painfully common). Then again, maybe a “free” vulnerability assessment reveals you’ve been severely negligent in your duties. It’s much better to take advantage of an outside vulnerability disclosure because it clearly demonstrates what might have happened “IF” exploited by someone with malicious intent. This should provide management with the motivation to prioritize initiatives previously suggested by the security team.
Blame
Your website, your responsibility. That’s how customers see it and that’s how any respectable business owner should treat the security of their website. No one appreciates when people point the finger at others or say it was only a *cough* honeypot when an event takes place. These excuses are transparent, unbecoming, and do nothing to build or rebuild customer confidence. When an organization acknowledges an issue (publicly or privately) and resolves the matter quickly, it says a lot about their integrity. Try not to shoot the messenger. Despite how you might feel at the time, when someone privately discloses a vulnerability in your website, in their mind they’re doing you a favor. Next time you might not get the chance to handle the incident proactively.
Arrogance
Dismissing or reprioritizing a vulnerability disclosure because a website uses SSL, is PCI compliant, or sports a HackerSafe logo is absurd. Compliance != Security. These credentials will not ward off the bad guys or prevent an incident from occurring. In fact, it might attract new attackers because it represents an interesting challenge. Compliance standards are typically a minimum baseline, and the skill of the average hacker (good, bad, or gray) easily outpaces the required security measures. Mandating security throughout the SDLC does not result in perfect code. Protecting a website is very difficult as you have to defend against all issues all the time. Someone on the outside only needs to find ONE issue to place the odds in their favor.
Sloth
Ignoring or flat-out denying that a vulnerability exists will not cause a disclosure to go away. In fact, it is more likely that the discloser will become annoyed and post the issue publicly, thereby alerting the media, a competitor, and perhaps a bad guy or two. The guys on sla.ckers.org have no problem fully disclosing a website vulnerability and making you the subject of ridicule. Make sure that when you fix a disclosed issue, you take the time to hunt for others just like it, because where there is one there is likely another. It’s embarrassing to have someone point out a flaw and have the same issue pop up again and again.
Uncommunicative
When a disclosee fails to communicate adequately with the discloser, the media, or customers - one thing people do very well is assume the worst. The disclosure may believe they are being ignored and proper action is not taking place, which forces them to consider full-disclosure. A reporter receiving zero responses to press inquiries might assume they’ve uncovered the next TJX and raise a large FUD flag over the company/website. Should customers catch wind of the matter and not feel properly informed, panic and revolt could easily cross their mind. Striking the right balance between open communication and discretion is difficult, but also crucial.
Note: The content below should not be considered a substitute for an incident response plan.
Pride
It’s been said many times: all software has bugs, and as long as software has bugs it will contain vulnerabilities. It’s time we faced this fact. If you receive an email from a stranger declaring the discovery of a Cross-Site Scripting, SQL Injection, Cross-Site Request Forgery, Response Splitting, Content Spoofing, Information Disclosure, Predictable Resource Location, or some other esoterically named vulnerability in your website, there’s no need to get defensive. Even if you’ve never heard of or don’t understand the type of attack, it doesn’t mean you’re dumb or that your baby/code is ugly. Nor does it mean the issue isn’t real, or that its consequences aren’t serious enough to deserve attention. Investigate each disclosure thoroughly, communicate openly, and fix issues promptly.
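For the unfamiliar, many of these reports boil down to something small and concrete. Here is a minimal, hypothetical sketch (in Python, with made-up function and parameter names; not any particular disclosure) of the kind of reflected Cross-Site Scripting flaw a stranger might report, and the output-encoding change that typically resolves it:

# Hypothetical sketch of a reflected XSS report and its fix.
# The handler and parameter names here are illustrative only.
import html

def search_results_vulnerable(query: str) -> str:
    # Vulnerable: the user-supplied "query" is echoed into HTML unmodified,
    # so a value like <script>alert(1)</script> executes in the visitor's browser.
    return f"<h1>Results for {query}</h1>"

def search_results_fixed(query: str) -> str:
    # Fixed: HTML-encode untrusted input before placing it in the page.
    return f"<h1>Results for {html.escape(query)}</h1>"

if __name__ == "__main__":
    payload = "<script>alert(1)</script>"
    print(search_results_vulnerable(payload))  # script tag reflected verbatim
    print(search_results_fixed(payload))       # rendered harmlessly as text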
Wrath
Depending on the nature of the disclosure, some are tempted to fight back against the person disclosing a vulnerability by informing law enforcement and/or their attorney. While it’s not a bad idea to seek their counsel, only in cases of fraud, extortion, or other malice is taking the next step typically warranted. Investigations are difficult to open unless the financial or data loss is substantial. And, nasty legal threats only lend credence to the fact that something major has taken place. Criminal and civil actions have been known to backfire when the news breaks. Negative PR due to a “security issue” is not something any business owner desires, especially headline news about an issue that could have been quietly resolved with little to no media attention.
Desertion
A fresh security event is no time to start polishing the resume for Monster, CareerBuilder, or Craigslist. This is when your presence is critical and where you can end up learning the most. A vulnerability disclosure is not the end of the world (it’s actually painfully common). Then again, maybe a “free” vulnerability assessment reveals you’ve been severely negligent in your duties. Either way, it’s far better to make the most of an outside vulnerability disclosure, because it clearly demonstrates what might have happened “IF” the issue had been exploited by someone with malicious intent. This should give management the motivation to prioritize initiatives previously suggested by the security team.
Blame
Your website, your responsibility. That’s how customers see it and that’s how any respectable business owner should treat the security of their website. No one appreciates it when people point the finger at others or claim it was only a *cough* honeypot when an event takes place. These excuses are transparent, unbecoming, and do nothing to build or rebuild customer confidence. When an organization acknowledges an issue (publicly or privately) and resolves the matter quickly, it says a lot about their integrity. Try not to shoot the messenger. Despite how you might feel at the time, when someone privately discloses a vulnerability in your website, in their mind they’re doing you a favor. Next time you might not get the chance to handle the incident proactively.
Arrogance
Dismissing or reprioritizing a vulnerability disclosure because a website uses SSL, is PCI compliant, or sports a HackerSafe logo is absurd. Compliance != Security. These credentials will not ward off the bad guys or prevent an incident from occurring. In fact, they might attract new attackers because they represent an interesting challenge. Compliance standards are typically a minimum baseline, and the skill of the average hacker (good, bad, or gray) easily outpaces the required security measures. Mandating security throughout the SDLC does not result in perfect code. Protecting a website is very difficult because you have to defend against all issues all the time, while someone on the outside only needs to find ONE issue to place the odds in their favor.
Sloth
Ignoring or flat-out denying that a vulnerability exists will not cause a disclosure to go away. In fact, it is more likely that the discloser will become annoyed and post the issue publicly, thereby alerting the media, a competitor, and perhaps a bad guy or two. The guys on sla.ckers.org have no problem fully disclosing a website vulnerability and making you the subject of ridicule. Make sure that when you fix a disclosed issue, you take the time to hunt for others just like it, because where there is one there is likely another. It’s embarrassing to have someone point out a flaw and then have the same issue pop up again and again.
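As a rough illustration of “hunting for others just like it”: once one instance is confirmed, even a crude source-code sweep for the same sink pattern can surface siblings of the reported flaw. The sketch below (Python, with an illustrative regex aimed at PHP-style echo-of-request-parameter code) is one quick way to make that pass; the pattern and file extensions are assumptions, and it is no substitute for a real assessment.

# Rough sweep for code that echoes request parameters directly -- the same
# pattern behind many reflected XSS findings. The regex is illustrative and
# tuned for PHP-style code; adjust it for your own stack.
import os
import re

SUSPECT = re.compile(r"echo\s+.*\$_(GET|POST|REQUEST)\[", re.IGNORECASE)

def sweep(root: str) -> list[tuple[str, int, str]]:
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".php", ".inc")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as handle:
                for lineno, line in enumerate(handle, 1):
                    if SUSPECT.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in sweep("./src"):
        print(f"{path}:{lineno}: {line}")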
Uncommunicative
When a disclosee fails to communicate adequately with the discloser, the media, or customers, one thing people do very well is assume the worst. The discloser may believe they are being ignored and that proper action is not taking place, which pushes them toward full disclosure. A reporter receiving zero responses to press inquiries might assume they’ve uncovered the next TJX and raise a large FUD flag over the company/website. Should customers catch wind of the matter and not feel properly informed, panic and revolt could easily cross their minds. Striking the right balance between open communication and discretion is difficult, but also crucial.
Evolution of Expectations
Arian Evans, who leads WhiteHat's operations team, is in charge of all customer-facing aspects of our Sentinel Service. He’s tasked with ensuring our customers can track their current security posture no matter how many websites they have, how big they are, or how often the code changes. As such, Arian has a front-row seat to ongoing vulnerability assessments conducted on the world’s largest websites, week after week, month after month, on a scale that no one else can touch. The other day, after a lot of customer interaction, he observed that an interesting progression consistently takes place: a set of phases in which a customer’s business needs mature and expectations evolve when it comes to website vulnerability management. Arian drafted the outline and I filled in the fine print.
1. Quantity phase -- where more is more
The first phase of website vulnerability management typically begins by comparing solutions, often in terms of the number of vulnerabilities discovered. Without the necessary expertise on staff to properly evaluate the results, the solution displaying the most blinky red lights unfortunately seems the most effective. And of course reports hundreds of pages in length must be more valuable than those only containing a few dozen (notice the sarcasm). That is, until the time comes to actually parse the data and fix issues whose very existence is questionable.
2. Quality phase -- where less is more
During the second phase many figure out that certain website vulnerability assessment solutions, like scanners, come with a high false-positive/negative rate and a large duplicate problem. Reporting the exact same XSS/SQLInj 10^5 times really doesn’t help an organization solve its problem. Soon afterwards it becomes clear that you can’t fix all vulnerabilities all at once. Eventually vulnerability remediation needs to be scheduled and prioritized in line with the general software release cycle. This is another reason why a big list of vulnerabilities, even if they’re all real, is nearly useless without accurate severity and threat ratings assigned.
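To make the “less is more” point concrete, here is a minimal sketch of collapsing a scanner’s duplicate findings into unique issues and ordering what’s left by severity, roughly the triage step described above. The finding fields and severity scale are assumptions for illustration, not any vendor’s actual report format.

# Collapse duplicate scanner findings (same class + same URL + same parameter)
# into unique issues, then order the survivors by severity for scheduling.
from collections import OrderedDict

SEVERITY_RANK = {"urgent": 3, "high": 2, "medium": 1, "low": 0}

findings = [
    {"class": "XSS", "url": "/search", "param": "q", "severity": "high"},
    {"class": "XSS", "url": "/search", "param": "q", "severity": "high"},   # duplicate
    {"class": "SQLi", "url": "/login", "param": "user", "severity": "urgent"},
    {"class": "XSS", "url": "/profile", "param": "name", "severity": "medium"},
]

def deduplicate(raw):
    unique = OrderedDict()
    for finding in raw:
        key = (finding["class"], finding["url"], finding["param"])
        unique.setdefault(key, finding)   # keep the first instance of each issue
    return list(unique.values())

def prioritize(issues):
    return sorted(issues, key=lambda f: SEVERITY_RANK[f["severity"]], reverse=True)

if __name__ == "__main__":
    for issue in prioritize(deduplicate(findings)):
        print(issue["severity"], issue["class"], issue["url"], issue["param"])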
3. Actionable phase -- how do I fix/improve things going forward with this data?
No matter how accurate the vulnerability assessment reports, fixing bug after bug after bug becomes tiring. The third phase encompasses a strong desire to get to the root of the problem. Customers begin asking why the same vulnerabilities keep popping up. Fortunately the data to address the problem may already be in hand, but only if you understand how to digest the reports. In our experience, if a website has more than a few vulnerabilities of one class (XSS/SQLInj, etc.) and not many of the others, chances are the development framework is lacking; a few configuration changes or a version update could go a long way. On the other hand, if vulnerabilities are spread out across several classes, the mistakes are probably due to a lack of developer education or motivation, and having developers attend a class on secure coding best practices tends to have the most positive impact.
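That framework-versus-education heuristic can be written down as a very small rule of thumb over the deduplicated findings. The threshold and wording below are my own assumptions for illustration, not a formal methodology.

# Toy root-cause heuristic: if findings pile up in one vulnerability class,
# suspect a framework/configuration gap; if they are spread across classes,
# suspect a broader developer-education gap. The 70% threshold is illustrative.
from collections import Counter

def suggest_root_cause(classes: list[str], dominance: float = 0.7) -> str:
    if not classes:
        return "no findings -- nothing to infer"
    counts = Counter(classes)
    top_class, top_count = counts.most_common(1)[0]
    if top_count / len(classes) >= dominance:
        return f"mostly {top_class}: look at the framework, its configuration, or its version"
    return "spread across classes: consider secure-coding training and developer motivation"

if __name__ == "__main__":
    print(suggest_root_cause(["XSS", "XSS", "XSS", "XSS", "SQLi"]))
    print(suggest_root_cause(["XSS", "SQLi", "CSRF", "Info Disclosure"]))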
4. Consistency phase -- how do I do this consistently across time, because my software is always changing, without spending a zillion hours doing it?
Websites change, a lot! Responsibility for the security of more than, say, 3 websites (not to mention dozens or hundreds) easily becomes a full-time job, even if all the reports are perfectly accurate and prioritized. So the fourth phase has a lot to do with time management. Keeping up with website login credentials, scanner configuration, and simply pushing “go” every week is time consuming to say the least. Oh, and who is going to complete the remainder of the assessments that scanners miss? Doling out fix-it tasks to various development groups and keeping track of remediation status should not be overlooked. At this point the most important thing to the customer is not who finds more vulnerabilities (though that’s still important), but how well the solution helps manage the entire website vulnerability management process and keeps the organization in sync.
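As a sketch of the bookkeeping that phase four is really about, the fragment below models a weekly scan roster and the remediation status of open issues across many sites. Everything here, the site names, statuses, and intervals, is hypothetical; the point is simply that the management overhead, not the finding of vulnerabilities, becomes the bottleneck.

# Minimal bookkeeping model for phase four: which sites are due for a scan
# this week, and where each open issue stands in remediation.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Site:
    name: str
    last_scanned: date
    scan_interval: timedelta = timedelta(days=7)
    open_issues: dict = field(default_factory=dict)  # issue id -> remediation status

    def due_for_scan(self, today: date) -> bool:
        return today - self.last_scanned >= self.scan_interval

sites = [
    Site("store.example.com", date(2007, 6, 25), open_issues={"XSS-17": "assigned"}),
    Site("blog.example.com", date(2007, 7, 1), open_issues={"SQLi-03": "fix verified"}),
]

today = date(2007, 7, 2)
for site in sites:
    flag = "SCAN DUE" if site.due_for_scan(today) else "ok"
    print(f"{site.name}: {flag}, open issues: {site.open_issues}")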
Monday, July 02, 2007
30 days, 104 Search Engine Vulnerabilities
MustLive completed his Month of Search Engines Bugs (MOSEB) project and generated some interesting results. First let’s take a look at the targets, the who’s who of search:
Meta, Yahoo, HotBot, Gigablast, MSN, Clusty, Yandex, Yandex.Server (local engine), Search Europe, Rambler, Ask.com, Ezilon, AltaVista, AltaVista local (local engine), MetaCrawler, Mamma, Google, Google Custom Search Engine (local engine), My Way, Lycos, Aport, Netscape Search, WebCrawler, Dogpile, AOL Search, My Search, My Web Search, LookSmart, DMOZ (Open Directory Project), InfoSpace, Euroseek, Kelkoo, Excite.
Results of the project: 44 of the 104 disclosed vulnerabilities have been fixed.
I’m actually a little impressed that so many got fixed so fast. Is this a result of diligence on the part of the search engine vendors? For some I’m sure it was. For others, did the risk of negative press speed remediation? More than likely. I guess Full-Disclosure will live on for web security, just maybe not so much in the US. Ukrainians certainly don’t seem to be deterred.