Wednesday, October 28, 2009

Black Box vs White Box. You are doing it wrong.

A longstanding debate in Web application security, heck all of application security, is which software testing methodology is the best -- that is, the best at finding the most vulnerabilities. Is it black box (aka vulnerability assessment, dynamic testing, run-time analysis) or white box (aka source code review, static analysis)? Some advocate that a combination of the two will yield the most comprehensive results. Indeed, they could be right. Closely tied into the discussion is the resource (time, money, skill) investment required, because getting the most security bang for the buck is obviously very important.

In my opinion, choosing between application security testing methodologies based upon a vulnerabilities-per-dollar metric is a mistake. They are not substitutes for each other, especially in website security. The reasons for choosing one particular testing methodology over the other are very different. Black and white box testing measure very different things. Identifying vulnerabilities should be considered a byproduct of the exercise, not the goal. When testing is properly conducted, the lack or reduction of discovered vulnerabilities demonstrates improvement of the organization, not the diminished value of the prescribed testing process.

If you reached zero vulnerabilities (unlikely), would it be a good idea to stop testing? Of course not.

Black box vulnerability assessments measure the hackability of a website given an attacker with a certain amount of resources, skill, and scope. We know that bad guys will attack essentially all publicly facing websites at some point in time, so it makes sense for us to learn about the defects before they do. As such, black box vulnerability assessments are best defined as an outcome-based metric for measuring the security of a system with all security safeguards in place.

White box source code reviews, on the other hand, measure and/or help reduce the number of security defects in an application resulting from the current software development life-cycle. In the immortal words of Michael Howard regarding Microsoft’s SDL mantra, “Reduce the number of vulnerabilities and reduce the severity of the bugs you miss.” Software has bugs, and that will continue to be the case. Therefore it is best to minimize them to the extent we can in an effort to increase software assurance.

Taking a step back, you might reasonably select a particular product/service using vulns-per-dollar as one of the criteria, but again, not the testing methodology itself. Just as you wouldn’t compare the value of network pen-testing against patch management, firewalls against IPS, and so on. Understanding first what you want to measure should be the guide to testing methodology selection.

Wednesday, October 07, 2009

All about Website Password Policies

Passwords are the most common way for people to prove to a website that they are who they say they are, since they should be the only ones who know what the password is. That is, of course, unless they share it with someone else, or somebody steals it or guesses it. This identity verification process is more commonly known as “authentication.” There are three ways to perform authentication:
  1. Something you have (physical keycard, USB stick, etc.) or have access to (email, SMS, fax, etc.)
  2. Something you are (fingerprint, retina scan, voice recognition, etc.)
  3. Something you know (password or pass-phrase).
The third form of authentication is what most of us are familiar with on the Web. Of course it’s possible to combine more than one of these methods to create systems referred to as two-factor or multi-factor authentication. Multi-factor authentication helps prevent a number of attack techniques including those in which users are tricked into handing over their passwords by a phishing scam. So if logging in to a website or performing sensitive actions requires the physical possession of an authentication device, then it doesn’t matter if the scammer has the password.

Website Password Policy
While the process seems straightforward, organizations should never take password selection lightly, as it will significantly affect the user experience, customer support volume, and the level of security/fraud. The fact is users will forget their passwords, pick easy passwords, and in all likelihood share them with others (knowingly and unknowingly). Allowing users to choose their password should be considered an opportunity to set the tone for how a website approaches security, which originates from solid business requirements.

Defining an appropriate website password policy is a delicate balancing act between security and ease-of-use. Password policies enforce the minimum bar for password guessability, how often passwords may be changed, what they can’t be, and so on. For example, if a user could choose a four-character password using only lowercase letters (a through z), and they would if they could, an attacker could theoretically try them all (26^4 or 456,976 combinations) in just over a day at only 5 guesses a second. This degree of password guessing, also known as a brute-force attack, is plausible with today’s network and computing power, which makes such a policy far too weak by most standards.
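For concreteness, here is a minimal Python sketch of that arithmetic. The character set, length, and 5-guesses-per-second rate are simply the assumptions from the example above:

```python
# Keyspace math for the four-character, lowercase-only example above.
charset_size = 26          # a through z
length = 4
guesses_per_second = 5     # the assumed attacker rate from the example

keyspace = charset_size ** length             # 456,976 possible passwords
hours = keyspace / guesses_per_second / 3600  # roughly 25 hours to try them all
print(f"{keyspace:,} combinations, about {hours:.1f} hours to exhaust")
```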

On the opposite end of the spectrum, a website could enforce 12-character passwords that must contain upper and lowercase letters and special characters, and must be changed every 30 days. This password policy definitely makes passwords harder to guess, but would also likely cause a user revolt due to increased password recovery actions and customer support calls, not to mention more passwords written down. Obviously this result defeats the purpose of having passwords to begin with. The goal for a website password policy is finding an acceptable level of security that satisfies both users and business requirements.


Length Restrictions
In general practice on the Web, passwords should be no shorter than six characters, while for systems requiring a higher degree of security, eight or more is advisable. Some websites limit the length of passwords their users may choose. While every piece of user-supplied data should have a maximum length, this behavior is counter-productive. It’s better to let the user choose their password length, then trim the end to a reasonable size when it’s used or stored. This ensures both password strength and a pleasant user experience.


Character-Set Enforcement
Even with proper length restrictions in place, passwords can often still be guessed in a relatively short amount of time with dictionary-style guessing attacks. To reduce the risk of success, it’s advisable to force the user to include at least one uppercase letter, number, or special character in their password. This exponentially increases the number of possible passwords and reduces the risk of a successful guessing attack. For more secure systems, it’s worth considering enforcing more than one of these options. Again, this requirement is a balancing act between security and ease-of-use.
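As a rough illustration, a server-side character-class check along these lines might look like the following Python sketch. The function name and the rule of requiring at least one class beyond lowercase letters are assumptions for the example, not a prescribed implementation:

```python
import re

def satisfies_charset_policy(password: str) -> bool:
    """Require at least one character class beyond lowercase letters."""
    has_upper = re.search(r"[A-Z]", password) is not None
    has_digit = re.search(r"[0-9]", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    # The paragraph above suggests enforcing at least one of these;
    # higher-security sites may require two or more.
    return has_upper or has_digit or has_special
```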


Simple passwords
If you let them, users will choose the simplest and easiest-to-remember password they can. This will often be their name (perhaps backwards), username, date of birth, email address, website name, something out of the dictionary, or even “password.” Analysis of leaked Hotmail and MySpace passwords shows this to be true. In just about every case, websites should prevent users from selecting these types of passwords as they are the first targets for attackers. As reader @masterts has wisely said, "I'd gladly take '~7HvRi' over 'abc12345', but companies need good monitoring (mitigating controls) to catch the brute-force attacks."
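One common way to enforce this is a deny-list check against the obvious candidates. The tiny word list below is purely illustrative; a real deployment would check against a full dictionary and leaked-password lists:

```python
# Illustrative deny-list check; the set below is a tiny stand-in sample,
# not a vetted blocklist.
COMMON_PASSWORDS = {"password", "abc12345", "123456", "qwerty", "letmein"}

def is_too_simple(password: str, username: str, site_name: str) -> bool:
    """Reject passwords built from obvious personal or site-related strings."""
    candidate = password.lower()
    derived = {username.lower(), username.lower()[::-1], site_name.lower()}
    return candidate in COMMON_PASSWORDS or candidate in derived
```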


Strength Meters
During account registration, password recovery, or password changing processes, modern websites assist users by providing a dynamic visual indication of the strength of their passwords. As the user types in their password, a strength meter adjusts in accordance with the business requirements of the website. The Microsoft Live sign-up Web page is an excellent example:
[Screenshot: Microsoft Live sign-up page showing the stated password recommendations and the strength meter]
Notice how the password recommendations are clearly stated to guide the user to selecting a stronger password. As the password is typed in and grows in strength, the meter adjusts:
[Screenshot: the strength meter updating as a stronger password is typed]
There are several freely available JavaScript libraries that developers may use to implement this feature.
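The scoring behind such a meter can be as simple as counting length and character classes. The thresholds and labels in this Python sketch are arbitrary assumptions for illustration, not the rules used by any particular library or by the Microsoft Live page:

```python
import re

def strength_label(password: str) -> str:
    """Very rough strength score based on length and character variety."""
    classes = sum(bool(re.search(pattern, password))
                  for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"))
    score = len(password) + 2 * classes
    if score < 10:
        return "weak"
    if score < 16:
        return "medium"
    return "strong"
```

In practice the same logic would run client-side in JavaScript to drive the visual meter, with the authoritative check repeated on the server.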


Normalization
When passwords are entered, any number of user errors may occur that prevent them from being typed accurately. The most common culprit is “caps lock” being turned on, but stray whitespace is problematic as well. The trouble is users tend not to notice, because their password is hidden behind asterisk characters that defend against malicious shoulder surfers. As a result, users will mistakenly lock their accounts with too many failed guesses, send in customer support emails, or resort to password recovery. In any case, it makes for a poor user experience.

What many websites have done is resort to password normalization functions, meaning they’ll automatically lowercase passwords, remove whitespace, and trim their length before storing them. Some may argue that doing this counteracts the benefits of character-set enforcement. While this is true, it may be worth the tradeoff for the benefit to the user experience. If passwords are still at least six characters long and must contain a number, and are backed by an acceptable degree of anti-brute-force and aging measures, there should be enough strength left in the system to thwart guessing attacks.
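A minimal sketch of such a normalization step, assuming lowercase folding, whitespace removal, and a 64-character cap (the cap value is an assumption; the text only calls for trimming to a reasonable size):

```python
MAX_PASSWORD_LENGTH = 64   # assumed cap; the text only says "a reasonable size"

def normalize_password(raw: str) -> str:
    """Lowercase, remove all whitespace, and trim before hashing or storing."""
    collapsed = "".join(raw.split())   # drops spaces, tabs, and newlines
    return collapsed.lower()[:MAX_PASSWORD_LENGTH]
```

Whatever normalization is chosen, it has to be applied identically at registration and at login, or legitimate users will be locked out.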


Storage (Salting)
Storing passwords is a bit more complicated than it appears, as important business decisions need to be made. The decision is, “Should the password be stored in a recoverable form or not?” and there are tradeoffs worth considering. Today’s best-practice standards say that passwords should always be stored in a hash digest form where even the website owner cannot retrieve the plain text version. That way, in the event that the database is either digitally or physically stolen, the plain text passwords are not. The drawback to this method is that when users forget their passwords they cannot be recovered to be displayed on screen or emailed to them, a feature websites with less strict security requirements enjoy.

If passwords are to be stored in this non-recoverable form, a digest-compare model is preferred. To do this, we take the user’s plain text password, append a random two-character (or longer) salt value, and then hash-digest the pair.

password_hash = digest(password + salt) + salt

The resulting password hash, plus the salt appended to the end in plain text, is what gets stored in the user’s password column of the database. This method has the benefit that even if someone steals the password database, they cannot retrieve the passwords -- which was not the case when PerlMonks was hacked. The other benefit, made possible by salting, is that two identical passwords will not produce the same digest. This means that someone with access to the password database can’t tell whether more than one user has the same password.
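A sketch of this digest-compare model in Python, using SHA-256 as suggested by the minimum policy later in the post. The salt alphabet, the helper names, and the constant-time comparison are assumptions of the example; the post's own formula only requires a random salt of two or more characters appended to the digest:

```python
import hashlib
import secrets
import string

SALT_ALPHABET = string.ascii_letters + string.digits

def hash_password(password: str, salt_length: int = 2) -> str:
    """Store digest(password + salt) with the plain text salt appended."""
    salt = "".join(secrets.choice(SALT_ALPHABET) for _ in range(salt_length))
    digest = hashlib.sha256((password + salt).encode("utf-8")).hexdigest()
    return digest + salt

def verify_password(candidate: str, stored: str, salt_length: int = 2) -> bool:
    """Re-derive the digest from the candidate password and the stored salt."""
    digest, salt = stored[:-salt_length], stored[-salt_length:]
    recomputed = hashlib.sha256((candidate + salt).encode("utf-8")).hexdigest()
    return secrets.compare_digest(recomputed, digest)
```

At registration, the output of hash_password() goes into the user's password column; at login, the submitted password is checked with verify_password() against the stored value.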


Brute-Force Attacks
There are several types of brute-force password-guessing attacks, each designed to break into user accounts. Sometimes they are conducted manually, but these days most are automated.

1. Vertical Brute-Force: One username, guessing with many passwords. The most common technique, aimed at individual accounts with easy-to-guess passwords.


2. Horizontal Brute-Force: One password, guessing with many usernames. This is a more common technique on websites with millions of users where a significant percentage of them will have identical passwords.


3. Diagonal Brute-Force: Many usernames, guessing with many passwords. Again, a more common technique on websites with millions of users resulting in a higher chance of success.


4. Three-Dimensional Brute-Force: Many usernames, guessing with many passwords, while periodically changing IP addresses and other unique identifiers. Similar to a diagonal brute-force attack, but intended to disguise itself from various anti-brute-force measures. Yahoo has been shown to be enduring this form of brute-force attack.


Depending on the type of brute-force attack, there are several mitigation options worth considering, each with its own pros and cons. Selecting the appropriate level of anti-brute-force will definitely affect the overall password policy. For example, a strong anti-brute-force system will slow down guessing attacks to the point where password strength enforcement may not need to be so stringent.

Before anti-brute-force measures are triggered, a threshold needs to be set for failed password attempts. On most Web-enabled systems five, ten, or even 20 failed guesses is acceptable. This gives the user enough chances to get their password right in case they forget what it is or mistype it several times. While 20 attempts may seem like too much room for guessing attacks to succeed, provided password strength enforcement is in place, it really is not.
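A minimal sketch of a per-account failed-attempt threshold is below. The in-memory store, the ten-attempt limit, and the 15-minute window are illustrative assumptions; a production system would persist the counters and tune the values:

```python
import time
from collections import defaultdict

MAX_FAILURES = 10           # within the 5-20 range discussed above
WINDOW_SECONDS = 15 * 60    # assumed sliding window for counting failures

_failures = defaultdict(list)   # username -> timestamps of failed attempts

def record_failed_login(username: str) -> None:
    _failures[username].append(time.time())

def is_locked_out(username: str) -> bool:
    """True if the account has exceeded the failure threshold recently."""
    cutoff = time.time() - WINDOW_SECONDS
    recent = [t for t in _failures[username] if t > cutoff]
    _failures[username] = recent
    return len(recent) >= MAX_FAILURES
```

Note that a purely per-account counter only addresses vertical attacks; horizontal, diagonal, and three-dimensional patterns call for monitoring across accounts and source IP addresses as well.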


Aging
Passwords should be periodically changed because the longer they are around, the more likely they are to be guessed. The recommended time between password changes will vary from website to website; anywhere from 30 to 365 days is typical. For most free websites like WebMail, message boards, or social networks, 365 days or never is reasonably acceptable. For eCommerce websites accepting credit cards, every six months to a year is fine. For higher security needs or an administrator account, 30-90 days is more appropriate. All of these recommendations assume an acceptable password strength policy is being enforced.


For example, let’s say you are enforcing a minimalist six-character password, with one numeric character required and no simple passwords allowed. In addition, there is a modest anti-brute-force system limiting the number of password guesses an attacker can make against a single account to 50,000 per day (1,500,000 per mo. / 18,000,000 per yr.). That means exhausting just 1% of the possible password combinations, 36^6 (a-z and 0-9) or 2,176,782,336, would take an attacker a little over a year.
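The same arithmetic as a short Python sketch, using the limits assumed in the example above:

```python
# Keyspace for six characters drawn from a-z and 0-9, per the example above.
keyspace = 36 ** 6                    # 2,176,782,336 combinations
one_percent = keyspace // 100         # 21,767,823 guesses
guesses_per_day = 50_000              # the assumed anti-brute-force ceiling

days = one_percent / guesses_per_day  # roughly 435 days, a little over a year
print(f"{keyspace:,} total, 1% = {one_percent:,}, ~{days:.0f} days at {guesses_per_day:,}/day")
```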

The thing you have to watch out for with password aging is a little game people play when forced to change passwords. Often when prompted they’ll change their password to something new, then quickly change it back to their old one because it’s easier for them to remember. Obviously this defeats the purpose and benefits of password aging, and it has to be compensated for in the system logic (typically by keeping a history of previous password digests and rejecting reuse).


Minimum Strength Policy
All things considered, the following should be treated as the absolute minimum standard for a website password policy. If your website requires additional security, these properties can be increased. A sketch of a policy validator follows the list.


Minimum length: six

Choose at least one of the following character-set criteria to enforce:

- Must contain both alpha and numeric characters

- Must contain both upper-case and lower-case characters

- Must contain both alpha and special characters


Simple passwords allowed: No

Aging: 365 days

Normalization: yes

Storage: 2-character salted SHA-256 digest
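Pulling these pieces together, a hedged sketch of a validator for this minimum policy might look like the following. Which character-set criterion to enforce is a per-site choice; this sketch enforces the alpha-plus-numeric option, and the deny list is a tiny illustrative sample:

```python
import re

SIMPLE_PASSWORDS = {"password", "abc12345", "123456"}   # illustrative sample only

def meets_minimum_policy(password: str, username: str) -> bool:
    """Check the minimum policy above: length of six, one character-set rule,
    and no simple passwords. Aging and storage are handled elsewhere."""
    if len(password) < 6:
        return False
    # Character-set criterion: both alpha and numeric characters
    # (one of the three options listed above).
    if not (re.search(r"[A-Za-z]", password) and re.search(r"[0-9]", password)):
        return False
    lowered = password.lower()
    if lowered in SIMPLE_PASSWORDS or lowered == username.lower():
        return False
    return True
```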



Friday, October 02, 2009

Cloud/SaaS will do for websites what PCI-DSS has not

Make them measurably more secure. If a would-be Cloud/Software-as-a-Service (SaaS) customer is concerned about security, and they should be since their business is on the line, then security should be the vendor’s concern as well. Unless the Cloud/SaaS vendor is able to meet a customer’s minimum requirements, they risk losing the business to a competitor who can.

This market dynamic encourages the proper alignment of business interests and establishes a truly reasonable minimum security bar. The other significant benefit of the Cloud/SaaS business model is that multi-tenant systems end up at least as secure as their most demanding customer requires. Security investments made to satisfy one customer directly benefit the rest.

Compliance, on the other hand, is designed to compensate for times when normal market forces fail to provide an adequate alignment of interests -- for example, when the organizations in a position to protect data are not the ones responsible for the losses. The payment card industry found itself in one of those situations when it came to cardholder information.

Unfortunately compliance, specifically PCI-DSS, is in practice implemented in a much different way than the aforementioned market forces. A checklist approach is most common, where strategic planning is generally not incentivized. The result is performing a bunch of “best practices” that may or may not effect a better outcome, because “security” is not the primary goal. Satisfying audit requirements is.

The interesting thing about SaaS is the last word, “service.” Customers are buying a service, not a product with a lopsided, zero-liability end-user licensing agreement (EULA). Customers may demand that vendors provide assurances: passing third-party vulnerability assessments, encrypting their data, allowing onsite visits, or taking on contractual liability in the event of a breach or service degradation. All of this before signing on the dotted line. This requires vendors to implement safeguards customers may not be able to provide for themselves without incurring significant expense. These are serious considerations to be made before outsourcing sales force automation, enterprise resource planning, email hosting, and so on.

Sure, there are Cloud/SaaS vendors with equally customer-unfriendly EULAs and no SLAs or security guarantees to speak of, but I am confident this only opens the door for healthy competition. Customers WANT security; the question is whether they are willing to pay a premium for it. If so, Cloud/SaaS vendors who view security as a strategic differentiator could find themselves the new market leaders. I believe this form of competition is doing a lot more to improve website security than how PCI is typically applied. At least, so far this has been my experience.

Best of Application Security (Friday, Oct. 2)

Ten of the Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!