Update 06.18.2007: Additional coverage by SecurityFocus.
Update 06.12.2007: The CSI Working Group on Web Security Research Law report is available (registration required). As I said below, it's well worth the read and especially important for the web security crowd.
Last year I began talking about how vulnerability "discovery" is becoming more important than disclosure as we move into the Web 2.0 era. Unlike traditional software, web applications are hosted on someone else's servers. Attempting to find vulnerabilities on computers other than your own, even with honest intentions, is potentially illegal. Eric McCarty and Daniel Cuthbert serve as examples, as covered by SecurityFocus. Whatever your opinion on the issues, few outside the web application security field appreciate the finer points or understand the potential long-term effects. People have been listening, though.
It started with Scott Berinato's The Chilling Effect, and most recently Sarah Peters of the Computer Security Institute assembled a diverse group of Web security researchers (including myself), computer crime law experts, and agents from the U.S. Department of Justice to discuss the situation and create a report. After several collaborative calls and email exchanges among the participants, I learned a great deal, but unfortunately left with more concern than I started with.
I’ve read the report draft and it’s very well written; Dark Reading has coverage (Laws Threaten Security Researchers). I’d like to add that this document should be mandatory reading for everyone in, or about to join, the infosec industry. The final report won’t be posted until next week during CSI, where a panel (I’ll be there) is planned to discuss the contents. I’ll update the post with the link when it becomes available.
3 comments:
unfortunately left with more concern than I originally started with
hahahahaha. that bad, huh?
this document should be mandatory reading for everyone in or about to become part of the infosec industry
get out while you can! we're all going to prison!!!
Yah! I thought I knew what crossing the line was; now I'm not so confident. Sure, ignorance of the law is no excuse, but c'mon, we should still be able to learn what is legal and what is not. Apparently it all comes down to who complains and whether law enforcement listens, which sucks big time.
Notes:
Industry Regulations and Standards
We need better regulations/standards than PCI DSS and NIST. OWASP is working on certification criteria:
http://www.owasp.org/index.php/Category:OWASP_Certification_Criteria_Project
which has been kicked off by this thread:
https://lists.owasp.org/pipermail/owasp-webcert/2007-June/thread.html
I suggest we start there.
Better Channel for Disclosure
What's wrong with RFC 2142? security@, abuse@, or cert@ should go to a responsible party in application security. Ultimately, the person responsible should be an Application Security Manager or Director who reports to a CISO.
Not all organizations can afford to fill these positions, but if your website supports SSL signed by a CA, then you probably have a hostmaster@, postmaster@, or webmaster@ address, and one of those reaches whoever set up the SSL certificate and knows "something" about web security. If they can answer an automated email about their SSL certificate expiring, then they should also be able to be bothered with a vulnerability report.
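For illustration, here's a minimal sketch (Python, with a placeholder domain, finding text, and reporter address I made up) of composing a report addressed to the RFC 2142 role mailboxes. It only builds the message; actually sending it is left to whatever SMTP setup the reporter trusts.

# Sketch: compose a vulnerability report for the RFC 2142 role mailboxes.
# "example.com", the finding, and the reporter address are placeholders.
from email.message import EmailMessage

ROLE_MAILBOXES = ["security", "abuse", "cert"]  # per RFC 2142

def build_disclosure(domain, finding, reporter):
    msg = EmailMessage()
    msg["From"] = reporter
    msg["To"] = ", ".join(f"{role}@{domain}" for role in ROLE_MAILBOXES)
    msg["Subject"] = f"Vulnerability report for {domain}"
    msg.set_content(finding)
    return msg

report = build_disclosure(
    "example.com",
    "Reflected XSS in the 'q' parameter of /search (details below).",
    "researcher@example.org",
)
print(report)  # hand off to smtplib.SMTP(...).send_message(report) to actually send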
My favorite channel of disclosure is to use Gabbly and report findings on the actual URL (as well as the top of the domain). If the administrators want to, they can simply pull their RSS feed from Gabbly and get their zero days every morning.
Dummy Pages
While I would like to see more test sites and testing grounds (see the list of tools I have built for OWASP here - http://owasp.org/index.php/Phoenix/Tools ), I think this will neither solve nor improve the disclosure situation.
However, I do have a suggestion. What if PCI DSS decided on a standard fake credit card number or set of PII? It could be sort of like RFC1918: anyone could plant the number (or set of numbers) in their database, not only as a HoneyToken, but also so that vulnerability hunters could pull that record legally. While other attacks could still cause damage on a live site, it would at least allow simple McCarty-style SQL Injection checks sans "damage". Little things like this could make certain checks permissible, until most testing is legally allowed under defined circumstances, possibly excepting HTTP resource starvation, SYN floods, and similar outright denial-of-service.
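To make the HoneyToken half of that concrete, here's a small sketch assuming a hypothetical reserved test number (I'm using the well-known 4111... Visa test PAN as a stand-in; the real value would be whatever PCI DSS reserved). The application watches its own query results for the planted record, so a McCarty-style SQL Injection probe that pulls it gets noticed without exposing real cardholder data.

# Sketch: flag any query result containing the reserved HoneyToken card number.
# HONEYTOKEN_PAN is a stand-in value, not an actual PCI DSS standard.
import logging

HONEYTOKEN_PAN = "4111111111111111"  # hypothetical reserved test number

def scan_rows_for_honeytoken(rows):
    """Return True and log an alert if the planted record shows up."""
    hit = False
    for row in rows:
        if any(HONEYTOKEN_PAN in str(value) for value in row):
            logging.warning("HoneyToken record retrieved: %r", row)
            hit = True
    return hit

# Usage: wrap this around whatever hands database rows back to the application,
# e.g. scan_rows_for_honeytoken(cursor.fetchall())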
Your other take-aways, such as the matrix of invasiveness, don't sound especially useful or appealing. It sounds to me like you'll be giving law enforcement and prosecutors a list of things to "go looking for" so that they can bust somebody. I would rather see a list of things that demonstrate to law enforcement what a "good citizen" would do when reporting a vulnerability. Positive thinking vs. negative thinking, got it?
I'm also really surprised that Jeff Williams isn't a part of the working group - that's too bad. Also - if Daniel Cuthbert has "chosen to leave the security industry" (page 8, top right) - why is he re-writing the OWASP T10-2007 to make it more business-friendly, and why is he in charge of the OWASP testing guide v3 project?