Let's look at Black, White, and Gray Box software testing at a high level from a website security standpoint and highlight their strong points. I realize that not everyone will agree with my conclusions, so as always, feel free to comment and let me know if anything has been overlooked and should be considered. Also, for perspective, I'm of the opinion that all three methodologies require tools (scanners) and experienced personnel as part of the process. No exceptions.
Black Box (Dynamic Analysis)
An attacker starting off with zero knowledge of the application source code, system access, documentation, or anything else a typical user wouldn't have access to. This is normally an attacker with a web browser, a proxy, and perhaps some fault-injection tools at their disposal.
Black Box’ing:
- Measures the amount of effort required for an attacker to compromise the data on a website. This measurement takes into consideration additional layers of defense (web servers, WAFs, permissions, configs, proxies, etc.). This enables website owners to focus on areas that directly improve security.
- Allows faster business logic testing by leveraging actual operational context. Testers are able to uncover flaws (Information Leakage, Insufficient Authorization, Insufficient Authentication, etc.) that may otherwise not be visible (or significantly harder to find) by analyzing vast amounts of source code.
- Is generally considered to be faster, more repeatable, and less expensive than White Box'ing. This is helpful in development environments where websites are updated more than a few times per year and as a result require constant security (re)assurance.
- Provides coverage for common vulnerabilities that are not present in the code, such as Predictable Resource Location and Information Leakage. Files containing sensitive information (payment logs, backups, debug messages, source code, etc.) routinely become unlinked, orphaned, and forgotten (see the sketch after this list).
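To make the Predictable Resource Location point concrete, here's a minimal sketch of what a black-box probe for unlinked files might look like. The target URL and candidate paths are hypothetical placeholders of my own choosing, not output from any particular scanner; a real assessment would use a site-specific wordlist and a proper tool.

```python
# A minimal sketch of a black-box probe for Predictable Resource Location.
# The target URL and candidate paths below are hypothetical placeholders;
# a real assessment would use a site-specific wordlist and a proper scanner.
import requests

TARGET = "https://example.com"  # assumed site under test, with permission
CANDIDATES = [
    "/backup.zip", "/db.sql", "/.git/config",
    "/admin/", "/phpinfo.php", "/logs/payment.log",
]

for path in CANDIDATES:
    resp = requests.get(TARGET + path, allow_redirects=False, timeout=5)
    # Anything other than 404 deserves a manual look: 200 may be an unlinked
    # file, while 403 confirms the resource exists even though it is protected.
    if resp.status_code != 404:
        print(resp.status_code, TARGET + path)
```

The point isn't the tooling; it's that none of these findings require a single line of the application's source code.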
White Box (Static Analysis)
An attacker with access to design documents, source code, and other internal system information.
White Box’ing:
- Generally considered to be a deeper method of software testing, as it can touch more of the code that may be very difficult or impossible to reach from the outside. Vulnerabilities such as SQL Injection, backdoors, buffer overflows, and privacy violations become easier to identify, as does determining whether an exploit will work (see the snippet after this list).
- Can be employed much earlier in the software development lifecycle (SDLC), since Black Box'ing requires that websites and applications be at least somewhat operational. Libraries and APIs can be tested early and independently of the rest of the system.
- Is capable of recommending secure coding best practices and pinpointing the exact file and line number of a vulnerability. While a website might be "vulnerability free" from an external perspective, bad design decisions may cause a precarious security posture. Weak use of encryption, insufficient logging, and insecure data storage are examples of issues that often prove problematic.
- Has an easier time determining whether identified vulnerabilities are one-off developer mistakes or architectural problems spanning the entire application infrastructure. This insight is useful when strategizing security decisions such as training, standardizing on software frameworks and APIs, or both.
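For illustration only, here's the kind of finding a source-level review can pin to an exact line: SQL built by string concatenation versus a parameterized query. The table, function names, and use of sqlite3 are assumptions made to keep the snippet self-contained; they're not from any real codebase.

```python
# Illustrative only: the kind of issue a source-level review pins to an exact
# line. The table, function names, and use of sqlite3 are assumptions made to
# keep the snippet self-contained; they are not from any real codebase.
import sqlite3

def find_user_unsafe(conn, username):
    # What static analysis flags: attacker-controlled input is concatenated
    # straight into the SQL statement (a classic SQL Injection sink).
    query = "SELECT id, name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The remediation a white-box report would recommend: bind parameters so
    # user input is never interpreted as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")
    print(find_user_safe(conn, "alice"))  # [(1, 'alice')]
```

A black-box test might never trigger the unsafe path, but a code review spots it immediately and can say exactly where to fix it.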
Gray Box or Glass Box
The combination of both the Black and White Box methodologies. The spork of software testing, if you will. This approach takes all the capabilities of both and reduces their respective drawbacks. The goal is to make the whole (process) worth more than the sum of its parts, and in many ways Gray Box'ing achieves this, so there's no need to rehash the material above. The largest negative is the increased cost in time, skill, repeatability, and overall expense of the process. Qualified people proficient in either Black or White Box'ing are hard to find and retain. Locating someone who is solid at both is extremely rare, and as a result they can demand high bill rates. And of course they need double the tools, so double the cost.
4 comments:
there are actually three types (i call them "shades") of greybox testing from my perspective:
1) whitebox side of greybox testing. this is my favorite method. the idea is to be able to fuzz and do fault injection on every input at full code coverage. the only tool i know that does this is jCUTE
2) blackbox side of greybox testing, which could be numerous things - but normally i think of binary analysis (e.g. BinAudit, BugScam, etc - both requiring IDA Pro)
3) the middle-ground, which could also be numerous things. i tend to think of this as runtime, memory, or core file debugging along with custom fuzz testing and/or fault injectors that attempt to get full code coverage. the primary tools in this area are PaiMei and BinNavi (which both require IDA Pro), or possibly OllyDbg with the hit-trace plugin. on the WAVA side, combining Firebug with a fault-injector would count
speaking of web application vulnerability assessment tools, normally these are very black or white. even FortifySoftware Tracer and SPI Dynamics DevInspect have a lot to learn from the non-webapp world of vulnerability assessment.
when i think of web application fault-injectors/scanners, i normally think of forced browsing / directory traversal (owasp t10 2007, section a10) - which isn't really "black-box" bug/flaw finding in my mind. sure, they also find real bugs/flaws such as xss or sql injection, but there are usually better tools and methods that do this manually - let alone business logic flaws such as csrf.
in all cases (white/black/grey, the 3 shades of grey, webapp vs. non-webapp), there is no universal answer to bug/flaw finding. each approach should be unique to the person/team (i.e. talent) working on discoveries and should be tailored to the specific application under test.
there are also many other methods (e.g. using virtualization, build tools, system analysis tools, etc) to speed up the process of finding bugs/flaws. finally, process and methodology can provide huge gains in time-to-finding, especially when dealing with a complex application.
The definitions of white box and black box testing that this discussion is based on are incorrect. You link "black box" to Wikipedia's "dynamic analysis" definition, and "white box" to Wikipedia's "static analysis" definition. Nowhere in Wikipedia's definitions are the terms "white box" or "black box" used.
I cover what these terms mean in Chapter 5, "Shades of Analysis: White, Grey, and Black Box Testing" in my book, "The Art of Software Security Testing".
Black box refers to the tester not being informed about the test subject. In black box testing, if the subject is a web site, the only information the tester has is what he can gather by probing the site like an attacker. You can certainly white box test a web site dynamically; this would mean the tester had access to design documentation and source code.
The converse is also true: you can statically black box test software. If the software is COTS, the tester has the same information an attacker has. He must disassemble it with a tool such as IDA Pro.
These terms have been around for ages in the QA community. It is best to not redefine them.
-Chris
@ntp:
As an example of black box testing, you talk about binary analysis, but that's not accurate. Binary analysis can be white or black box:
- white box if you try to get information from the binary file itself (looking for x86 patterns, etc.) by reading that file.
- black box if you use the binary only as an executable, i.e. basically if you try to detect a buffer overflow with a fuzzer...
Thank you all for the comments; some excellent points have been made that lend context to the discussion. I'm going to have to revisit my definitions and amend my post a little to align with the descriptions that I originally had in mind. Fortunately for me, the feedback seems limited to that aspect, and my "highlight" material seems to have made sense. Otherwise I'm sure everyone would have said something instantly.