The vulnerability "disclosure" debate isn't going away. A post at StillSecure, After All These Years has some nice links to experts boiling down their respective arguments, each attempting to balance researcher ethics, user security, and vendor responsibility. My question is: what happens to our security when researchers lose the legal ability to discover vulnerabilities in the software that matters most (custom web applications)?
By their nature, custom web applications are hosted on someone else's servers and available nowhere else. Attempting to find vulnerabilities of any kind on machines other than your own is frowned upon as potentially illegal. Who cares about disclosure when we can't even go about finding security issues without running the risk of going to jail? Those who say, "do not test a system without written consent," offer good but short-sighted advice. The InfoSec community hasn't dealt with the legal issues of "discovering" vulnerabilities, only with "disclosing" them.
Traditionally, researchers have played the role of Good Samaritan by finding vulnerabilities in software readily available to them. We're rapidly moving toward a world where the software that holds our most sensitive information (online banks, stores, the IRS, etc.) is not PC desktop software. The same people who provide that layer of community oversight now run into a very real problem beyond ethics: a threat to their personal freedom. I'd wager few top researchers are willing to risk incarceration in pursuit of a few Cross-Site Scripting and SQL Injection issues. Nor are the organizations providing these web-based services going to be handing out hack-me-if-you-can authorization letters. And with fewer people looking, software security naturally degrades. That's probably why 8 out of 10 websites have vulnerabilities.