Friday, April 13, 2007

Mainstream media is figuring out the industry's new disclosure dilemma

Bug hunters face online apps dilemma (via Joris Evers from CNET)
"Security holes in online applications may go unfixed because well-intended hackers are afraid to report bugs. Web applications pose a dilemma for bug hunters: how to test the security without going to jail? If hackers probe traditional software such as Windows or Word, they can do so on their own PCs. That isn't true for Web applications, which run on servers operated by others. Testing the security there is likely illegal and could lead to prosecution."

We've all been debating the legal and ethical issues, but that doesn't change the fact that we're going to lose the canary-in-the-coal-mine aspect of information security. Does that mean we're going to have to rely on compliance rather than community peer review? Eeesh!

I also just caught Alan Shimel's follow-up on the article, in which he comments on one of my quotes:

"Jeremiah Grossman of White Hat Security (and a past guest on our podcast) is quoted as saying that: "We're losing the Good Samaritan aspect of security". He uses the gun law analogy that if we make it illegal to find vulnerabilities in web sites, only bad guys will find them. Sort of like if it is illegal to own guns, than only bad guys will own guns. I disagree with the gun analogy and I disagree with Jeremiah on this one. I just think there is too much room for abuse to allow condone people hacking into web sites. Who really knows what their motives are."

Let me clarify, because I still stand by the statement: that is what will inevitably happen should Good Samaritans be routinely prosecuted. But I don't think Alan and I fundamentally disagree on the legal matters. Pen-testing websites without consent is, and should be, illegal (we can debate proper penalties later); there is just too much risk otherwise. What we have is a catch-22.


10 comments:

Anonymous said...

It would be fantastic if DAs and judges exercised a bit more judgement in deciding what to prosecute and how to sentence. That's the whole point of judges and juries: they're supposed to use their better judgement to evaluate each case individually. Of course, minimum sentencing laws (which, IMO, are a probably unconstitutional expansion of legislative power) tend to tie judges' hands. And as far as DAs go, well, nobody ever won an election by declining to prosecute people who may have technically broken the law. So you end up with the wrong people ("Why should I cover my tracks? I'm not doing anything bad.") being arrested, demonized, and convicted.

Ironically, good Samaritan hackers and Duke lacrosse players have something in common.

Drew Hintz said...
This comment has been removed by the author.
Drew Hintz said...

Granted "hacking into web sites" should be illegal. However, how should client-side web app sec issues be treated, such as CSRF?

In a typical CSRF testing situation, there's no penetration of the server, only compromise of a user account owned by the tester.
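
To make that concrete, here is a minimal sketch of the kind of check in question -- Python stdlib only, with a hypothetical endpoint, domain, and cookie value, and only the tester's own throwaway account involved:

    # Does a state-changing request succeed when it carries only the
    # session cookie and no unpredictable anti-CSRF token? All names
    # here are made up for illustration.
    import urllib.request

    # Session cookie for a throwaway account the tester owns (assumption).
    SESSION_COOKIE = "session=abc123"

    # Replay the email-change request with the cookie but without any
    # anti-CSRF token. If the server applies the change, a hostile page
    # could forge the same request from a victim's browser (e.g., via an
    # auto-submitting form), since the browser attaches the cookie itself.
    req = urllib.request.Request(
        "https://shop.example/account/email?new=tester@example.com",
        headers={"Cookie": SESSION_COOKIE},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status)  # success + change applied = likely CSRF hole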

dkza said...

I've taken note of the security industry moving more and more toward "professionalism" and further away from the traditional hacking mindset.

It kind of reminds me of Pirates of the Caribbean: "The world is becoming a small place; there are no places for people like Jack." Something like that :)

Which is right... it's becoming difficult to judge.

I had some thoughts on this a while back, see here.

Jeremiah Grossman said...

@drew, that's a damn good question. My initial reaction was to think that it's probably OK to test, but who knows these days with the current legal interpretations. Better not to be the first guinea pig.

Fortunately, I think CSRF is easier to explain than XSS to a website owner who is unfamiliar with the attack.

Unknown said...

Dear Salesforce.com,

Please create a web site and domain "hackme.salesforce.com." This site should be a duplicate of your QA environment with no Personally Identifiable Information. Provide a hacker sign-up agreement that binds the security researcher to a responsible disclosure policy and pays a reasonable bounty for finding holes.

This agreement would authorize the hacker's efforts and also indemnify the researcher for any damage to the hackme systems. Like a honeypot with some rules, the prudent company will monitor the researcher's activities very closely -- I guess that had better be in the hackme terms of service.

J. Haynes taught me this trick. It worked for BSD 2.9 and 4.3, maybe it would work for the WWW.
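
For the "monitor very closely" part, a minimal sketch in Python -- the filename, log format, and WSGI setup are all assumptions, not anything Salesforce.com actually runs:

    # An audit layer the sandbox could wrap around its web application:
    # every request gets logged before the app ever sees it.
    import logging

    logging.basicConfig(filename="hackme_audit.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class AuditMiddleware:
        """WSGI middleware: record method, path, query, and client IP."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            logging.info("%s %s%s from %s",
                         environ.get("REQUEST_METHOD", "-"),
                         environ.get("PATH_INFO", "-"),
                         "?" + environ["QUERY_STRING"] if environ.get("QUERY_STRING") else "",
                         environ.get("REMOTE_ADDR", "-"))
            return self.app(environ, start_response)

    # Hypothetical usage: application = AuditMiddleware(application)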

Jeremiah Grossman said...

@Michael: Wow. At first I thought your comment was going to be something snarky, but it suddenly turned into something that sounded really compelling.

And yah, "maybe" it'll work for the WWW. :)

Anonymous said...

A few years ago, an Israeli guy (I don't think he was sophisticated enough to be called a hacker) tried to "hack" the Israeli Mossad (the Israeli CIA) using tools he downloaded from the web. He basically port scanned and scanned for exploits (he didn't find anything).
The attorney general prosecuted him, and he was found NOT guilty. The ruling is in itself fascinating (but in Hebrew, unfortunately for the world). The bottom line was that no criminal intent was present (i.e., he had no criminal record, he did not hide anything, and so on) and, the major part, that it is in the public interest to let the public evaluate web security, since a sense of security is a cornerstone of web e-commerce. I think this is the way to go. The word "penetration" is not really relevant on the web: penetration (probing) is OK, whereas exploitation should be illegal.

S3Jensen said...

This raises an interesting question. Let me give you a scenario I had in Dec. 2005.

I was online looking to purchase some sports-related clothing for my father-in-law for Christmas. After navigating through several dozen e-commerce websites, I found a site that had what I wanted at the price I wanted to pay.

The website required creating an account and providing personal details such as name, address, credit card info, etc. I created an account and decided, since I was giving this company my business, that I wanted to ensure my information was safe. So, similar to taking a car for a test drive, I took the site for a "test drive". I realized, after analyzing the cookies the site created when you logged in, that all it did was MD5-hash your username; that was it! It took me about 2 minutes to pass a valid hashed username back to the application to impersonate another user, and I was able to see that user's details. I notified the company about the issue; they took the site down and thanked me for discovering it.
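
To illustrate how thin that protection was (all names below are made up): because the cookie was nothing but md5(username), a valid cookie for any account could be minted offline.

    # A minimal reconstruction of the flaw described above -- the login
    # cookie contained no secret key, no session state, nothing an
    # attacker can't compute himself.
    import hashlib

    def forge_auth_cookie(username: str) -> str:
        # Identical to what the server issued at login: just a hash
        # of the username.
        return hashlib.md5(username.encode("utf-8")).hexdigest()

    # Send this as the login cookie to impersonate "jsmith" (hypothetical).
    print(forge_auth_cookie("jsmith"))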

But this raises the "bury-our-head-in-the-sand" question. Should we blindly trust companies we intend to do business with?

We are the ones with the skills to discover these issues. It becomes painfully obvious every day that companies aren't doing enough to secure their applications and protect their customers' data. So if we intend to do business with these companies, shouldn't we have the right to take the application for a "security test drive" to ensure the confidential information we are entrusting to them is kept confidential and secure?

Just my .02

Anonymous said...

This is a fascinating post! It could not have been expressed better.