Whether we like it or not, whether we want them to or not, whether it’s legal or not, there are some unsavory people out there who will try to hack into our website(s). History and headlines prove that no business, school, government, or personal blog is off limits or out of harm’s way. And since the vast majority of websites are riddled with exploitable vulnerabilities, the odds of a bad guy finding a way to break in are in their favor, increasingly so with each passing day. Therefore, it only makes sense to Hack Yourself First: learn what the bad guys already know, or will eventually learn, about the security of your website.
Hack Yourself First requires website vulnerability scans because, if nothing else, the many thousands of attack variants that must be tested for can never be covered manually. At the same time, one must realize that scanning can negatively impact a website and its ability to conduct business. Sometimes the impact is negligible. Other times the impact is severe. Sometimes the damage is the scanner’s fault. Other times the website itself is at fault. Whoever’s fault it is, what we all know is that the bad guys WILL scan your website(s) looking for exploitable vulnerabilities. So if a vulnerability scan is capable of harming your website, not to mention capable of identifying vulnerabilities, it’s far preferable that you are in the driver’s seat, prepared, and in control of the process.
Fortunately, precautions can be taken to drastically reduce the risk of a vulnerability scan harming a website. At WhiteHat Security, we know these techniques better than anyone. We know because after ten years of scanning tens of thousands of real, live websites of all shapes and sizes, we’ve admittedly harmed our fair share. We’ve received the angry calls. We’ve triaged and investigated the root causes. All of this experience has helped us improve our technology and processes. As a result, we’ve put that risk behind us.
Right now, I’m happy to report that only between 0.3% and 0.7% of websites that receive a WhiteHat Sentinel scan experience any kind of negative impact, and most incidents are extremely minor at that. Years of battlefield testing is the only way to accomplish this, and to truly know it. The other [desktop] scanner vendors simply can’t say with confidence how often they harm websites, because they just don’t know. By the nature of their business model, they don’t get access to the data necessary to measure it. What we do know is that their reputations have gotten around, and they are not good.
We know this because when companies switch to WhiteHat, they share with us their past experiences and their fears about continuing to scan production systems. In our experience, the best way to overcome these fears is to understand exactly what causes a vulnerability scanner to negatively impact a website. Here are the things everyone should know:
1) Following “Sensitive” Hyperlinks: The HTTP RFC states that GET requests, such as those initiated by clicking on hyperlinks, should be idempotent. Idempotent means that any number of identical requests to the same URL should have the same effect, and yield the same response, as a single request. Even so, it is not uncommon to encounter websites with hyperlinks (GET requests) that, when “clicked,” execute backend functionality that deletes data, cancels orders, remits payment, removes user accounts, disables functionality, or exhibits other non-idempotent behavior.
And when non-idempotent functions are triggered by hyperlink URLs, the simple act of crawling a website can adversely affect the system, its performance, and the entire business. This RFC-noncompliant behavior is problematic for vulnerability scanners because crawling, whether HTML5 technology is in use or not, is an essential step in identifying the attack surface of a website that needs to be tested.
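To make this concrete, here is a minimal Python sketch of how a crawler might refuse to follow links that look non-idempotent. The keyword list, URLs, and is_safe_to_crawl helper are illustrative assumptions, not how any particular scanner works; human review of what gets included and excluded is still required.

```python
# Minimal sketch: filter crawled hyperlinks against keywords that often signal
# non-idempotent GET handlers. Keywords and URLs are illustrative only.
from urllib.parse import urlparse

SENSITIVE_KEYWORDS = ("delete", "cancel", "remove", "logout", "pay", "disable")

def is_safe_to_crawl(url: str) -> bool:
    """Return False for links whose path or query hints at destructive behavior."""
    parsed = urlparse(url.lower())
    target = parsed.path + "?" + parsed.query
    return not any(keyword in target for keyword in SENSITIVE_KEYWORDS)

links = [
    "https://example.com/products?id=42",
    "https://example.com/account/delete?id=42",    # non-idempotent GET
    "https://example.com/orders/cancel?order=99",  # non-idempotent GET
]

crawl_queue = [url for url in links if is_safe_to_crawl(url)]
print(crawl_queue)  # only the idempotent-looking product link remains
```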
2) Automatically Testing “Sensitive” Web Forms: Non-idempotent requests, such as the POST requests typically generated by Web forms, may legitimately produce a different result each time they are submitted to the same URL. As such, submitting a Web form may generate emails to customer support, execute computationally expensive backend processes, post submitted data where it is visible to other users, incur monetary charges, and so on.
With this in mind, it’s easy to see why vulnerability scanning, which is to say automatically sending thousands of POST requests to these types of Web forms, can harm a website. Even when submitting completely valid data, the results can include spamming inboxes with thousands of emails, taking down the website under resource load, degrading the experience of the entire user base by showing them unexpected data, and costing the company large sums of money. While all Web forms may potentially harbor vulnerabilities, blindly testing them has proven to be highly dangerous in the field.
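As an illustration, a scanner can default to never submitting a form unless it has been explicitly reviewed and approved. The form names and policy table in this Python sketch are hypothetical.

```python
# Minimal sketch: never auto-submit a form unless a human has explicitly
# marked it safe. Default-deny protects forms nobody has reviewed yet.
from typing import Dict

FORM_TEST_POLICY: Dict[str, bool] = {
    "search_form": True,       # read-only, safe to fuzz
    "contact_us_form": False,  # emails customer support on every submit
    "checkout_form": False,    # incurs monetary charges
}

def may_submit(form_name: str) -> bool:
    """Unknown or unreviewed forms are treated as unsafe."""
    return FORM_TEST_POLICY.get(form_name, False)

for form in ("search_form", "contact_us_form", "newsletter_form"):
    action = "testing" if may_submit(form) else "skipping (manual review required)"
    print(f"{form}: {action}")
```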
3) Poorly Designed Vulnerability Tests: Dynamically testing a website for vulnerabilities includes submitting various meta-character strings into input fields, be they in URLs, POST bodies, headers, etc. When a website mistakes meta-characters for executable code and executes them, or tries to, this indicates that a vulnerability may be present. During this execution, whether it takes place server-side or client-side, poorly designed and invasive vulnerability tests (i.e. strings of meta-characters) may inadvertently harm the website or a user’s browser.
For example, an OS Command Injection test may not account for a website that executes the attack payload command endlessly. A SQL Injection test might, for whatever reason, be designed to DROP database tables or cause the system to “wait” for an extended period of time. In the case of Cross-Site Scripting, submitting active JavaScript payloads may generate confusing client-side errors for users all across the system.
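For contrast, here is a rough Python sketch of what safer, noninvasive probes might look like. The specific payload strings and the looks_injectable helper are illustrative assumptions only, not a complete or definitive test suite.

```python
# Minimal sketch of noninvasive payload design, with illustrative strings only.
# The idea: detect reflection or interpretation of meta-characters without
# shipping anything that executes destructive behavior on the target.

# XSS: a unique, inert marker instead of live JavaScript. If the marker comes
# back unencoded inside an HTML context, the input is likely injectable.
xss_probe = "scan-xss-7f3a1<>\"'"

# OS command injection: a bounded command that terminates on its own
# (a fixed ping count), never an open-ended or destructive one.
cmd_probe = "; ping -c 3 127.0.0.1"

# SQL injection: boolean-based probes that read, never DROP or UPDATE.
sqli_true = "' AND '1'='1"
sqli_false = "' AND '1'='2"

def looks_injectable(resp_true: str, resp_false: str) -> bool:
    """If logically different probes produce different pages, flag for review."""
    return resp_true != resp_false

print(looks_injectable("200 rows", "0 rows"))  # True -> candidate SQL injection
```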
4) Connection Denial of Service (DoS): Websites may contain a tiny number of Web pages, an effectively unlimited number, or fall somewhere in between. Many websites also have dozens or even hundreds of complex Web forms and tens of thousands of links, each with a dozen parameters that all require testing. Thoroughly testing the attack surface of such websites dynamically may require upwards of a hundred thousand Web requests, give or take, considering all the attack variations. Processing each request, one after the other, may take days, weeks, or even months of scanning time depending on the website’s response speed. To shorten the process, a vulnerability scanner may send dozens or even hundreds of requests simultaneously.
As we know, not all websites are designed, or have the underlying infrastructure, to support such a load. This is especially true if vulnerability scans are run during peak business hours when the normal load is already high. If the performance capabilities of the system are not accounted for in the scan process, a vulnerability scanner can easily exhaust a website’s available connection pool and render the system unable to serve legitimate visitors. Obviously, then, caution must be exercised when a scanner’s requests are threaded.
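A quick back-of-the-envelope calculation shows the tension. The request count and response time in this Python snippet are assumed figures, purely for illustration.

```python
# Back-of-the-envelope sketch of why threading is tempting and why it is risky.
# All numbers below are illustrative assumptions, not measurements.
total_requests = 100_000        # attack surface x attack variations
avg_response_seconds = 0.5      # assumed average response time

single_threaded_hours = total_requests * avg_response_seconds / 3600
print(f"  1 thread : ~{single_threaded_hours:.0f} hours of scanning")

for threads in (10, 50, 100):
    hours = single_threaded_hours / threads
    print(f"{threads:>3} threads: ~{hours:.1f} hours, "
          f"but {threads} simultaneous connections held open on the target")
```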
5) Session Exhaustion Denial of Service (DoS): When users log in to a website, the backend system may generate a new session credential, spawn process threads, and allocate memory. In many cases, these threads and memory allocations are not shared across the system. And not all websites perform effective session credential garbage collection, which includes killing the threads and freeing the memory, when sessions are logged out or remain inactive for an extended period of time. Such systems are prone to denial of service via session exhaustion.
For example, thoroughly testing a website requires that vulnerability scans be run in an authenticated state. During these scans, it is common and expected for a vulnerability scanner to become logged out for a wide variety of reasons. So, to continue, the scanner must log back in, perhaps dozens or even hundreds of times. With each login, the website issues a new session credential. In the scenario above, if this happens too often, it may consume every session credential the flawed design has available. Once that limit is reached, no additional users can log in until session credential garbage collection is performed.
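One defensive pattern, sketched below in Python, is for the scanner to reuse its existing session credential and only log back in when the site signals that it has been logged out, with a hard cap on re-logins. The login URL, credentials, logged_out heuristic, and cap value are all hypothetical, and the third-party requests library is assumed to be installed.

```python
# Minimal sketch: reuse one session cookie jar across all test requests, re-login
# only when actually logged out, and cap re-logins so a flawed site cannot be
# driven into session exhaustion by the scan itself.
import requests

MAX_RELOGINS = 25
relogin_count = 0
session = requests.Session()  # one cookie jar, reused for every test request

def logged_out(response: requests.Response) -> bool:
    """Hypothetical heuristic: a 401/403 or a bounce to the login page means we lost auth."""
    return response.status_code in (401, 403) or "/login" in response.url

def login() -> None:
    global relogin_count
    if relogin_count >= MAX_RELOGINS:
        raise RuntimeError("Re-login cap reached; pausing scan to protect the target")
    relogin_count += 1
    session.post("https://example.com/login",
                 data={"user": "scan_account", "pass": "scan_password"})

def fetch(url: str) -> requests.Response:
    response = session.get(url)
    if logged_out(response):
        login()                  # only now do we consume a new session slot
        response = session.get(url)
    return response
```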
6) CPU Denial of Service (DoS): Many websites are designed around an expected user flow through the application. This flow assumes that users will click certain links, in a certain order, a certain number of times, within a given amount of time. Under these assumptions, there are poorly designed websites with hyperlinks that, when clicked, execute computationally expensive database queries. This, of course, is fine under normal usage. However, when an attacker targets a website, or the website receives a vulnerability scan, the traffic patterns are anything but normal.
During a vulnerability scan, these computationally expensive hyperlinks may be clicked far more often than expected and consume all of a website’s available CPU resources. When the CPU is exhausted, no additional requests from anyone will be answered. As stated above, just the simple act of crawling a website, or posting forms with valid data, can elicit this condition.
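A scanner can also watch its own timings and sideline pages that appear expensive for the server. In this rough Python sketch, the threshold and the timed_fetch helper are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: time each request and set aside URLs that are slow (and likely
# CPU-expensive) so the scan does not keep hammering them.
import time
import urllib.request

SLOW_THRESHOLD_SECONDS = 3.0
expensive_urls = set()

def timed_fetch(url: str) -> bytes:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=30) as resp:
        body = resp.read()
    elapsed = time.monotonic() - start
    if elapsed > SLOW_THRESHOLD_SECONDS:
        expensive_urls.add(url)  # defer or review manually instead of re-hitting
    return body
```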
7) Verbose Logging and Run-Time Errors: Vulnerability scanning requires that a website endure a large number of abnormal requests. These requests may contain parameter names and values that weren’t expected, which can in turn raise various backend application exceptions and verbose run-time error logging. Since vulnerability scans generate a huge number of requests, the disk space consumed by the logs can be substantial. If the verbose logging fills the available disk space, the website could be significantly harmed, or at the very least logging might cease from that point on.
How to Avoid Harming Websites While Vulnerability Scanning
1) Before commencing a scan, manually identify any and all potentially sensitive areas of website functionality, and then rule them out of the automated process. Especially during authenticated scans, pay careful attention to admin-level functionality executed via GET requests, which may trigger dangerous non-idempotent behavior. Each sensitive area may be tested manually to complement the scan. Secondly, authenticated scans may be restricted to special test accounts, so that any potential negative impact is compartmentalized.
2) Do not perform vulnerability scans on Web forms without first manually ensuring each one is safe for automated testing. Doing otherwise is extremely hazardous. Each Web form discovered during the crawl phase of a scan, including multi-form process flows, should be custom configured (i.e. marked as safe or unsafe for testing). Each Web form can also be optionally configured with valid data to assist with more thorough scans the next time around, as sketched after the note below.
With respect to #1 and #2, vulnerability scans should NOT perform any requests that are non-idempotent, perform write activity, or take potentially destructive actions without explicit authorization.
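To illustrate the per-form configuration described in #2, here is a minimal Python sketch. The form names, fields, and values are hypothetical.

```python
# Minimal sketch: each discovered form is explicitly marked safe or unsafe, and
# safe forms carry known-valid test data for more thorough scanning.
form_config = {
    "search_form": {
        "safe_to_test": True,
        "valid_data": {"q": "blue widgets"},
    },
    "funds_transfer_form": {
        "safe_to_test": False,          # non-idempotent: moves real money
        "reason": "test manually with a dedicated test account",
    },
}

for name, cfg in form_config.items():
    if cfg["safe_to_test"]:
        print(f"{name}: queue for automated testing with {cfg['valid_data']}")
    else:
        print(f"{name}: excluded ({cfg['reason']})")
```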
3) All of a vulnerability scanner’s injection payloads should be made as safe and noninvasive as possible. For example, live JavaScript should NOT be injected when testing for Cross-Site Scripting. When testing for OS Commanding with a command such as ping, payloads should be designed to terminate on their own; otherwise they’ll keep running forever. Another example is SQL Injection payloads, which should avoid DROP and UPDATE statements since these could delete or modify data. Every class of attack tested for must include similar precautions.
4) The first series of scans should be performed single-threaded, carrying no more system load than a single [malicious] user. As such, the scanner will not make the next HTTP request until a response to the previous request has been received. If website performance degrades for any reason, scan speed automatically slows. If the website appears to be failing to respond within a given threshold, or is incapable of creating new authentication sessions, the scanner should stop testing, raise an alert, and wait until adequate performance returns before resuming. Generally speaking, there is also rarely a need for a scanner to download static content (i.e. images), which further reduces bandwidth consumption.
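Here is a rough Python sketch of that single-threaded, self-throttling behavior, including a hard stop when the site appears to be struggling. The URLs, stall threshold, and sleep policy are illustrative assumptions, not a description of how Sentinel actually works.

```python
# Minimal sketch: one request in flight at a time, scan pace tied to the site's
# own response time, and a halt-and-alert when the site stops responding.
import time
import urllib.request

STALL_THRESHOLD_SECONDS = 15.0

def scan(urls):
    for url in urls:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=STALL_THRESHOLD_SECONDS) as resp:
                resp.read()
        except Exception:
            print(f"Target unresponsive at {url}; halting scan and raising an alert")
            return
        elapsed = time.monotonic() - start
        # Never outpace the site: wait at least as long as it took to respond,
        # so a slowing site automatically slows the scan.
        time.sleep(elapsed)

scan(["https://example.com/page1", "https://example.com/page2"])  # illustrative URLs
```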
5) If the website is particularly sensitive, performing vulnerability scans in a staging environment first can increase confidence. It is important to remember, though, that staging environments are NOT the same as production, which is where the bad guys prefer to attack. In our experience, the vulnerabilities identified in production and in their staging mirrors are rarely identical.