My recent post, Secure (Enough) Software — Do we really know how?, sparked a very thoughtful comment by Mitja Kolsek (ACROS Security) that reads more like a well-written blog post than a comment. Mitja goes on to explain one of the more fundamental challenges in software security: the gap between implicit and explicit security requirements. He really hits the nail on the head. Writing this good is not something you see every day, so with Mitja’s permission, I’m republishing it for all readers to enjoy, along with a short code sketch of my own after the quote to make a couple of his examples concrete.
“A great article, Jeremiah; it nicely describes one of the biggest problems with application security: how do you prove that a piece of code is secure? But wait, let’s go back one step: what does “secure” (or “secure enough”) mean? To me, secure software means software that neither provides nor enables opportunities for breaking security requirements. And what are these security requirements?
In contrast to functional requirements, security requirements are usually not even mentioned in any meaningful way, much less explicitly specified by those ordering the software. So the developers have a clear understanding of what the customer (or boss) wants in terms of functionality, while security is left to their own initiative and spare time.
When security experts review software products, we (consciously or not) always have to construct some set of implicit security requirements, based on our experience and our understanding of the product. If the product has user authentication, it is implied that users should not be able to log in without valid credentials. Authorization implies that user A is not supposed to have access to user B’s data except where required. The presence of personal data implies that this data should be properly encrypted at all times and inaccessible to unauthorized users.
These may sound easy, but a complex product could have hundreds of such “atomic” requirements, with many exceptions and conditions. And what about the defects that allow running arbitrary code inside (or even outside) the product, such as unchecked buffers, unparameterized SQL statements, and cross-site scripting bugs?
We all understand that these are bad and implicitly forbidden in a secure product, so we add them to our list of security requirements. Finally, there are unique and/or previously unknown types of vulnerabilities that one is, by definition, unable to include in any security requirements beforehand. My point is that in order to prove something (in this case, security), we need to define it first.
Explicit security requirements seem to be a good way to do so. For many years we’ve been encouraging our customers to write up security requirements (or at least threat models, which can be partly translated into security requirements), and we’ve found that doing so helps them understand their security models better, lets them catch design flaws while they are still inexpensive to fix, and gives their developers useful guidelines for avoiding likely security errors.
For those reviewing such products for security, these requirements provide useful information about the security model, so they know better what exactly they’re supposed to verify. Only when we have defined security for a particular product can we tackle the (undoubtedly harder) process of proving it. But even the “negative proof and fix” approach the industry uses today, i.e., subjecting a product to good vulnerability experts and hoping they don’t find anything (or fixing what they do find), can be much improved with the use of explicit security requirements.”
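To make a couple of Mitja’s examples concrete, here is a minimal sketch of my own (every function, table, and column name below is hypothetical, not taken from any real product) showing two of the implicit requirements he describes written down as explicit, checkable code: “user A must not read user B’s data” and “SQL statements must be parameterized rather than assembled from strings.”

```python
import sqlite3

# Minimal in-memory schema for the sketch (hypothetical table and columns).
SCHEMA = """
CREATE TABLE invoices (
    id       INTEGER PRIMARY KEY,
    owner_id INTEGER NOT NULL,
    body     TEXT    NOT NULL
);
"""


def get_invoice(db: sqlite3.Connection, requesting_user_id: int, invoice_id: int) -> str:
    """Return an invoice body, enforcing the requirement that a user
    may only read invoices they own."""
    # Parameterized query: the invoice id is bound as a parameter, never
    # concatenated into the SQL string, so it cannot alter the statement.
    row = db.execute(
        "SELECT owner_id, body FROM invoices WHERE id = ?",
        (invoice_id,),
    ).fetchone()
    if row is None:
        raise KeyError(f"no invoice {invoice_id}")
    owner_id, body = row
    # Explicit authorization check: user A must not read user B's data.
    if owner_id != requesting_user_id:
        raise PermissionError("requesting user does not own this invoice")
    return body


if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.execute("INSERT INTO invoices VALUES (1, 100, 'invoice for user 100')")

    # The requirement holds for the owner...
    assert get_invoice(db, 100, 1) == "invoice for user 100"

    # ...and is verifiably enforced against everyone else.
    try:
        get_invoice(db, 200, 1)
    except PermissionError:
        print("cross-user access refused, as required")
```

The code itself isn’t the point. The point is that once a requirement is written down this explicitly, a reviewer knows exactly what to verify, and a test can verify it automatically on every build.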