Dr. Nick: "With my new diet, you can eat as much as you want, any time you want!"
Marge: "And you'll lose weight?"
Dr. Nick: "You might! It's a free country!"
Dr. Nick Riviera (The Simpsons)
A common approach to vulnerability assessment (VA) is going after the so-called “low-hanging fruit” (LHF). The idea is to remove the easy stuff, making break-ins more challenging without investing a lot of work and expense. Nothing wrong with that, except that eliminating the low-hanging fruit doesn't really do much for website security. In network security the LHF/VA strategy can help because that layer endures millions of automated, untargeted attacks using “well-known” vulnerabilities. Malicious attacks on websites are targeted, exploit one-off zero-day vulnerabilities, and are carried out by real live adversaries.
Let’s say a website has 20 Cross-Site Scripting (XSS) vulnerabilities, 18 of which are classifiable as LHF. Completing a LHF/VA process to eliminate these might take a week to a month or more, depending on the website. By eliminating 90% of the total issues, how much longer might it take a bad guy to identify one of the two remaining XSS issues they need to hack the site? An hour? A few? A day? Perhaps a week if you’re really lucky. A recent thread on sla.ckers.org offered a perfect illustration.
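To make that concrete, here's a minimal, hypothetical sketch (Python/Flask, invented for illustration, not from any site discussed here) of the difference between the 18 and the 2: a reflected XSS that echoes attacker input straight back is exactly the kind of LHF a scanner flags in seconds, while a stored XSS whose payload only renders on a different page takes a human, or a much smarter process, to connect the dots.

    # Hypothetical demo app (Python/Flask), not from any site discussed here.
    from flask import Flask, request

    app = Flask(__name__)
    comments = []  # naive in-memory store, just for the demo

    # LHF: a scanner fuzzing ?q=<script>alert(1)</script> sees its payload
    # reflected right back in the response and flags this in seconds.
    @app.route("/search")
    def search():
        q = request.args.get("q", "")
        return f"<h1>Results for: {q}</h1>"  # unescaped reflection -> XSS

    # Not LHF: the payload is accepted here but only rendered on a
    # different page, so a point-and-shoot scan of /comment reports nothing.
    @app.route("/comment", methods=["POST"])
    def comment():
        comments.append(request.form.get("body", ""))
        return "Thanks!"

    @app.route("/guestbook")
    def guestbook():
        # stored XSS: visitors receive earlier posters' markup verbatim
        return "<br>".join(comments)

A tool that only hits /search finds the first bug instantly; connecting /comment to /guestbook is where the remaining two of twenty live, and that's the kind of thing the bad guy has all month for.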
Someone said vulnerabilities in Neopets, a popular social network gaming site for virtual pets, were hard to come by. The first question was who cares about Neopets? The answer was that it has millions of players and currency with monetary value. Through my browser I could almost hear the keyboards as the members raced to be the first. A dozen posts and 24 hours later an XSS disclosure hit. I didn’t bother confirming. sla.ckers.org has already generated over 1,000 similar disclosures, including several in MySpace, so it wouldn’t be out of the ordinary. However, these are also not the guys we need to be worrying about.
The real bad guys are after the money. They have all day, all night, every day, weekends and holidays to target any website they want. In the above example, we were just talking about some silly gaming site. What if the target were something more compelling? Think the real bad guys will be so nice as to publish their results? They’d bang on a system 24x7 until they got what they wanted and happily be on their way. Reportedly, the group that hacked T-Mobile and Paris Hilton’s cell phone spent more than a year targeting the system.
The point I’m trying to make is that if you’re going to spend weeks or months finding and fixing vulnerabilities, make sure the end result protects you for more than a lucky week. Sure, going after LHF is better than nothing, but if you’re a professional responsible for security, that’s the last thing you want to tell your boss your VA strategy is based on. The strategy you want is comprehensiveness. Push the bad guys away for months, years, or hopefully forever.
23 comments:
[January 18, 2007] it's an accurate point, doing security half-assed isn't much better than not doing it at all - particularly when there's money involved, or large userbases.
[January 18, 2007] and in the first bold text, i think it's missing a "doesn't* really do much" .. at least i assume.
Thanks maluc, typo fixed. And yah, it just seemed to me that too much stock is being placed in LHF VA. Webappsec is a whole different ballgame.
Jeremiah,
In your posts, you oftentimes approach security with an "All or Nothing" attitude, while in reality, every additional layer of security adds to the overall protection.
A quick glimpse at the Web Hacking Incident Database will show you that not all malicious actions against web sites used uber-hacking techniques; some of them were simple and low-hanging as well.
One of the oldest paradigms in security states that if you can protect yourself in such a way that a hacker has to waste a lot of time finding vulnerabilities, he/she will probably move along to the next target, which contains vulnerabilities that are easier to spot and exploit.
-Ory
What you say is definitely true, and certainly applies to a directed, focused attacker. However, not all attackers are focused (I have no evidence for this). It all comes down to threat modeling. (God, I sound like Bruce S.) If an attacker just wants to make money, they'll attack the easiest-to-exploit sites they can find. If it takes them more than a week and they haven't found anything they can exploit to get a return on their investment, there's certainly a site a couple subnets down that's woefully undefended. To put it another way, it's okay if the lock on your house door can be bump-picked if every other house on the block doesn't even have a lock. Alternately, you don't have to run faster than the tiger, you just have to run faster than the other tourists.
Of course, none of this applies if there are people out there who want *you* in particular, and your threat modeling indicates that this is a credible risk (big IT layoffs in the past, lots of socio-political enemies, high rewards for compromise, etc.). In these cases, you'll have to spend more on security. But for most folks out there, getting the LHF is fine^W^W presents a sensible investment in risk mitigation.
> In your posts, you oftentimes approach security with an "All or Nothing" attitude, while in reality, every additional layer of security adds to the overall protection.
That's because I do. The customers I work with/for have a lot at stake in their web business. They can’t afford to get hacked, field media inquiries, or receive calls from the FTC. They desire real security. The rest are directed to Scan Alert.
WebApp VA is about “measuring” the security of web apps, WAFs, configs, etc. so improvements can be made. As a VA vendor, the last call I want to receive is from a customer whose website was hacked by a vulnerability we should have found (hasn’t happened). Or even someone else simply finding an issue we didn’t (happened only once in several years).
> A quick glimpse at the Web Hacking Incident Database will show you that not all malicious actions against web sites used uber-hacking techniques; some of them were simple and low-hanging as well.
That's right, they were. Makes one wonder what WebApp VA solution(s) they were using, if any, and whether the issue would have been found prior to the incident. Unfortunately this is unanswerable. What we do know is that the larger, popular targets capable of providing monetary reward are enduring constant attack. Black/White/Gray Hats are beating on these systems 24x7 and really don't care much how hard it is, at least not past a week. Again, look at the T-Mobile incident.
> One of the oldest paradigms in security states that if you can protect yourself in such a way that a hacker has to waste a lot of time finding vulnerabilities, he/she will probably move along to the next target, which contains vulnerabilities that are easier to spot and exploit.
I agree, and I have a methodology for measuring the time differential in Web App VA. Want to go head to head? Your solution vs. mine? :)
Jeremiah,
You took my comment, dissected it to small pieces and commented on each piece by itself, without seeing the big picture.
Let me explain again:
Having an application tested by Jeremiah and the whole WASC lineup still won't mean that it is 100% secure, right? There's always a chance that the smartest assessor missed something.
Quote: "The customers I work with/for have a lot at stake in their web business. They can’t afford to get hacked"
(BTW - propaganda anyone?! :-)
Answer: approaching security as a layered model, and not as an all-or-nothing model, doesn't mean that you don't take security seriously. You should remember that a fully secured app probably won't be accessible (you have to cut the network cable).
Quote: "They desire real security. The rest are directed to Scan Alert"
Answer: So between Whitehat Sec. and Scan Alert there's a big void?
Have a great (and secure) weekend.
> What you say is definitely true, and certainly applies to a directed, focused attacker. However, not all attackers are focused (I have no evidence for this). It all comes down to threat modeling.
I can agree with this. As it applies to webappsec, I'm saying it's the focused attacker that we need to be concerned about. I haven't heard of any shotgun-blast (unfocused) attacks where money was lost. Everything I've read about is focused.
> (God, I sound like Bruce S.)
Worse things could happen. :)
...
> But for most folks out there, getting the LHF is fine^W^W presents a sensible investment in risk mitigation.
For other areas of security I can see it adding value. I'm just not sold on the value as it relates to website security. There's gotta be a better way to measure this. I think Sylvan is onto something though.
> As a VA vendor, the last call I want to receive is from a customer whose website was hacked by a vulnerability we should have found (hasn’t happened). Or even someone else simply finding an issue we didn’t (happened only once in several years).
May I risk a guess? The model I have in mind for this situation is that WhiteHat is one of the few organizations that occupy the top of the webapp security auditing niche. So it's unlikely for your customer to find an issue WH didn't come up with. But it doesn't mean there are no such issues! Suppose your customer re-audited its site with a bunch of experts from WASC - what would be the odds then?
If we're to play fair, then THAT is the experiment we should conduct, and not, per your suggestion, "Your solution vs. mine?"
Hey Ory,
Ok, I won't comment inline, I'll start at the end.
> So between Whitehat Sec. and Scan Alert there's a big void?
Yes, a BIG void. The more vulnerabilities you wipe out, the harder it is for a hacker to get in. What I'm arguing is that removing just the LHF doesn't make a website substantially harder to hack. At least, that's been my experience.
Just to rephrase my last comment:
"Having an application tested by Jeremiah and the whole WASC lineup, still won't mean that it is 100% secure right? there's always a chance that the smartest assessor missed something."
So does this mean that we should sleep in fear from now until eternity? Of course not! We have to do our best, right?
I am not familiar with all the laws (you mentioned the FTC), but I have a feeling that there are no laws that say: "Thou shalt not have a security vulnerability in your web site". They probably say that you have to do your best to ensure that customer confidential information is not at risk (among many other things they require).
Quote: "I'm argueing is that removing just the LHF doesn't make a website substantially harder to hack. At least, that's been my experience"
Answer:
1) It depends on how you define LHF
2) We'll have to agree to disagree.
I wonder what Amit Klein has to say about this. I remember he once wrote an article called "Hacker Repellent: Techniques for small companies to deter hackers on a shoestring budget" (http://www.modsecurity.org/archive/amit/Hack_Repellent.pdf).
-Ory
> May I risk a guess?
Please.
> the model I have in mind for this situation is that WhiteHat is one of the few organizations that occupy the top of the webapp security auditing niche. So it's unlikely for your customer to find an issue WH didn't come up with. But it doesn't mean there are no such issues!
Of course, there is always a chance something is missed. I posted on this very topic.
> Suppose your customer re-audited its site with a bunch of experts from WASC - what would be the odds then?
The WASC crowd is very good. The truth is they could find more, we could find more. On average, the findings wouldn't be THAT different. What's important here is that the difference in time for a hacker to break in after pen-testing with either group would not be substantial; either way you're talking weeks, months, years.
> If we're to play fair, then THAT is the experiment we should conduct, and not, per your suggestion, "Your solution vs. mine?"
That would be fair, testing expert vs. expert. Now what do you suppose the time differential would be if you compared scanner (LHF) vs. expert?
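For what it's worth, here's a toy sketch of how I'd frame the "time differential" (my own illustration with made-up numbers, not a description of any actual methodology): assume an attacker works through issues roughly in order of discoverability, and ask how long until they reach the first one a given VA process missed.

    # Toy illustration with invented numbers; not an actual VA methodology.
    # attacker_timeline: (hours of attacker effort, vuln id) in the order a
    # focused attacker would plausibly find issues, easiest first.
    def time_to_first_survivor(attacker_timeline, fixed_by_va):
        for hours, vuln in attacker_timeline:
            if vuln not in fixed_by_va:
                return hours  # first issue the VA process missed
        return None  # nothing survived remediation

    timeline = [(2, "xss-01"), (5, "xss-02"), (9, "sqli-01"),
                (40, "xss-19"), (700, "logic-flaw-01")]
    print(time_to_first_survivor(timeline, {"xss-01", "xss-02"}))  # LHF scan: 9 hours
    print(time_to_first_survivor(timeline, {"xss-01", "xss-02",
                                            "sqli-01", "xss-19"}))  # expert: 700 hours

Under these invented numbers, the LHF-only pass buys you hours; the expert pass buys you weeks. That gap is the differential I care about.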
> I am not familiar with all the laws (you mentioned the FTC), but I have a feeling that there are no laws that say: "Thou shalt not have a security vulnerability in your web site". They probably say that you have to do your best to ensure that customer confidential information is not at risk (among many other things they require).
The FTC/SEC/PCI probably don't care a whole lot when there is a vuln in a website. They only seem to start caring when a website is hacked and customer data is lost. That's why the time measurements are important, not so much the vulns.
> 1) It depends on how you define LHF
Yes it does. I think I might have to post again on this subject digging in deeper.
> 2) We'll have to agree to disagree.
For now. :)
Jeremiah - there are two issues here:
1. Is WH the ultimate security solution? I claim that you hint at this, by stating that only once did someone find a vulnerability you didn't.
2. Is WH better than automated scanners - that's where you're focused (business-wise), but it's not the issue *I* am talking about...
So back to issue #1: now your argument revolves around the "time differential". But I've got to tell you that unleashing a team of experts against a site audited by WH would yield results within a few days (my estimation). So it's still totally within reason for a hacker to successfully target a WH-audited site, if that hacker is a top webapp expert.
For web application security VA, WhiteHat is certainly doing everything we can to be the best solution out there. Customers get to decide if we are or not.
And what I'm saying is that limiting testing to LHF, using a scanner or something else, is not going to help stave off the common focused webapp hacker. So if you'd need a team of "top webapp experts" to do better than a "WH-audited" website, I'd say that's a flattering way to be seen.
Hey,
Your recent whitepapers and marketing documents state that the best solution for testing a web application is to use both manual testing and an automated scanner (I agree with this btw).
You also claim that your human penetration testers (WH) are amongst the best out there (I also agree with that), but why should someone assume that the automated part of your service is also the best out there?
Commercial automated scanners go through rigorous benchmarking and bakeoffs against other scanners, a process that forces them to keep up with the rest, while your automated solution was never evaluated like the rest of the pack. How should potential buyers trust the automated part of WH's offering?
Thank you for the compliments.
> How should potential buyers trust the automated part of WH's offering?
Good question. The answer is they don't have to. Whether our scanning technology proves to be better, on par, or lacking... customers receive the same level of service. We're not selling software, we're selling results. At the end of the day that's what they really care about.
From our standpoint the better our technology/process, the more efficient we become, allowing us to pass the savings to the customer.
Wow! Lots of comments and as usual I'm late to the party :) One of the drawbacks of traveling a lot I suppose :(
Anyway, as usual, Jeremiah is pushing people's buttons ;) A few things I'd like to chime in on here...
>JG: The FTC/SEC/PCI probably don't care a whole lot when there is a vuln in a website. They only seem to start caring when a website is hacked and customer data is lost
Yes, the FTC/SEC/PCI do only start to care when sensitive data (customer or otherwise) is lost. However, where companies get into serious trouble is when these TLAs look at the "standard of diligence" - basically, did you do your best to safeguard the system. Guess (the most high-profile in the "sued by the FTC" group) got fined because although they *knew* they had SQL injection vulns in their app (because they were told several times), they still didn't fix anything. In several ways, policies help here (although possibly they shouldn't) in limiting the company's culpability - did they make reasonable efforts on security. Blatant plug here, but the IEEE Security and Privacy issue I edited specifically on web security had an article on this topic which I selected because I thought it was an interesting viewpoint [http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/mags/sp/&toc=comp/mags/sp/2006/04/j4toc.xml] (sorry, as usual for IEEE, articles are PPV, but you can get the abstract)
>Ory: In your posts, you oftentimes approach security with an "All or Nothing" attitude
>> JG: That's because I do.
Ok, I disagree here - IMHO, finding and fixing low-hanging fruit is certainly worthwhile for a number of reasons. Is it *enough* to just look for these? Certainly not. Does it increase the security of the site? In some ways - there are fewer people with the ability/skills to find and exploit vulns, which reduces the risk: risk = probability x exposure, so reduce the probability (fewer people able to find vulns) and you reduce the risk. The attackers looking for a "fast buck" will probably move on to where their chances are better. However, with a "high profile" site, these bets are off - attackers will keep hitting the site, for a long time if necessary, in order to find a vuln. Take Microsoft for instance - no matter how secure they make their apps, people will still find vulns in them; programs are written by humans, and humans are fallible. T-Mobile (another example that was mentioned above) was a target where I believe the attackers actually *joined* the company for a short time to learn more about the system - there are very few companies that can withstand that level of scrutiny, so it comes back down to the level of risk they are willing to accept.
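A quick back-of-the-envelope reading of that formula (my numbers, purely illustrative; I'm treating "exposure" as dollars at stake and "probability" as the fraction of attackers able to find a remaining vuln): removing LHF shrinks the risk for every site, but for a high-profile target the residual is still huge, which is exactly why the 24x7 attackers don't move on.

    # Back-of-the-envelope numbers (mine, purely illustrative) for the
    # risk = probability x exposure point above.
    def risk(probability, exposure_dollars):
        return probability * exposure_dollars

    # Assume 90% of would-be attackers can find LHF, 5% can find the rest.
    for name, exposure in [("low-profile site", 10_000),
                           ("high-profile site", 5_000_000)]:
        before = risk(0.90, exposure)   # LHF still present
        after = risk(0.05, exposure)    # LHF removed, subtle vulns remain
        print(f"{name}: {before:,.0f} -> {after:,.0f}")

Both sites get an 18x reduction, but 5% of five million is still a quarter-million-dollar problem - a risk level the low-profile site can live with and the high-profile site can't.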
Anyway, the analogy I like to make is high jump/pole vault. Currently, vulns are at the very low high-jump stage - anyone with half an athletic ability can make it over and find something. Having these vulns disclosed on a site should be just plain embarrassing, and, addressing the "policy/FTC/etc." comment above, clearly such a site isn't showing a standard of "due care". Move the bar up, and you then get to Olympic high jumper standard or amateur pole vaulter (who is really a high jumper, but with "tools") - to start to find vulns at this level you need to either be really good at spotting things, or know how to use the tools and have good "technique". Remove more low-hanging fruit and you get to the Olympic pole vaulter level - there still may be vulnerabilities in there (it's impossible to find/remove all bugs, right?), but it takes significant time/effort/training/knowledge to discover them, and the people able to make it to that height are probably few and far between (and we hope are working for "our side" :))
If we want a corollary here: why do people put firewalls in front of their networks when, if they would just keep their machines patched and configured correctly, they wouldn't need them? (There's actually a blog post out there somewhere evangelizing this, but I can't find it at the moment.)
So, that's my $0.02 for what it's worth.
Great comments Mike. Your 2 cents is worth a full buck in my opinion. I think I've said enough on this topic for now. Gotta think on it a while and perhaps find another way to approach it.
And proof that the bad guys will spend however much time it takes and whatever method they can find to target the money sites:
http://news.zdnet.co.uk/security/0,1000000189,39285547,00.htm