Thursday, December 06, 2012

Top Ten Web Hacking Techniques of 2012

 

Every year the security community produces a stunning amount of new Web hacking techniques that are published in various white papers, blog posts, magazine articles, mailing list emails, conference presentations, etc. Within the thousands of pages are the latest ways to attack websites, Web browsers, Web proxies, and their mobile platform equivalents. Beyond individual vulnerabilities with CVE numbers or system compromises, here we are solely focused on new and creative methods of Web-based attack. Now in its seventh year, the Top Ten Web Hacking Techniques list encourages information sharing, provides a centralized knowledge-base, and recognizes researchers who contribute excellent work. Past Top Tens and the number of new attack techniques discovered in each year: 2006 (65), 2007 (83), 2008 (70), 2009 (82), 2010 (69), 2011 (51)  

 

The Top Ten

  1. CRIME by Juliano Rizzo and Thai Duong
  2. Pwning via SSRF (memcached, php-fastcgi, etc.)
  3. Chrome addon hacking
  4. Bruteforce of PHPSESSID
  5. Blended Threats and JavaScript
  6. Cross-Site Port Attacks
  7. Permanent backdooring of HTML5 client-side application
  8. CAPTCHA Re-Riding Attack
  9. XSS: Gaining access to HttpOnly Cookie in 2012
  10. Attacking OData: HTTP Verb Tunneling, Navigation Properties for Additional Data Access, System Query Options ($select)

 

Honorable Mention

11. Using WordPress as an intranet and internet port scanner

12. .Net Cross Site Scripting – Request Validation Bypassing

13. Bruteforcing/Abusing search functions with no-rate checks to collect data

14. Browser Event Hijacking

15. Bypassing Flash’s local-with-filesystem Sandbox

Process oversight: due to the original discovery date, January 4th, 2011, this technique should not have been included in this year’s list.

How the winners are selected…

 

Phase 2: Panel of Security Experts [CLOSED]

Judges: Ryan Barnett, Robert Auger, Robert Hansen (CEO, Falling Rock Networks), Dinis Cruz, Jeff Williams (CEO, Aspect Security), Peleus Uhley, Romain Gaucher (Lead Researcher, Coverity), Giorgio Maone, Chris Wysopal, Troy Hunt, Ivan Ristic (Director of Engineering, Qualys), and Steve Christey (MITRE).

From the results of the open community voting, the final 15 Web Hacking Techniques will be voted upon by a panel of security experts. Using the exact same voting process as Phase 1, the judges will rank the finalists based on novelty, impact, and overall pervasiveness. Once tabulation is completed, we’ll have the Top Ten Web Hacking Techniques of 2012!

Phase 1: Open community voting for the final 15 [CLOSED]

Each attack technique (listed alphabetically) receives a certain number of points depending on how highly the entry is ranked on each ballot. For example, an entry in position #1 will be given 15 points, position #2 will get 14 points, position #3 gets 13 points, and so on down to 1 point. At the end, all points from all ballots will be tabulated to ascertain the top fifteen overall.

Final 15 List (In no particular order):

 

Prizes

1) The winner of this year’s top ten will receive an updated Web security book library! If any really good books have been recently published and are missing, please let me know. I’ll add them! Violent Python, Clickjacking und UI-Redressing, Web Application Defender’s Cookbook, Seven Deadliest Web Application Attacks, A Bug Hunter’s Diary, The Tangled Web, The Web Application Hacker’s Handbook, Web Application Obfuscation, XSS Attacks, Hacking Web Apps.

2) After the open community voting process, two survey respondents will be chosen at random and given a $50 Amazon gift card.

Complete 2012 List

Wednesday, November 07, 2012

7 Ways Vulnerability Scanners May Harm Website(s) and What To Do About It


Whether we like it or not, whether we want them to or not, whether it’s legal or not, there are some unsavory people out there who will try to hack into our website(s). History and headlines prove that no business, school, government, or personal blog is off limits and out of harm’s way. And since the vast majority of websites are riddled with exploitable vulnerabilities, the odds of a bad guy finding a way to break in are in their favor, and increasingly so with each passing day. Therefore, it only makes sense to Hack Yourself First. Learn what the bad guys know about the security of your website, or will eventually.

Hack Yourself First requires website vulnerability scans because, if nothing else, the many thousands of attack variants that must be tested for can never be covered manually. At the same time, one must realize that scanning can negatively impact a website and its ability to conduct business. Sometimes the impact is negligible. Other times the impact is severe. Sometimes the damage is the scanner’s fault. Other times the website itself is at fault. Whoever’s fault it is, what we all know is the bad guys WILL scan your website(s) looking for exploitable vulnerabilities. So if a vulnerability scan is capable of harming your website, not to mention able to identify vulnerabilities, it’s far preferable that you are in the driver’s seat, ready, and in control of the process.

Fortunately, precautions can be taken to drastically reduce the risk of a vulnerability scan harming a website. At WhiteHat Security, we know these techniques better than anyone. We know because after ten years of scanning tens of thousands of real-live websites of all shapes and sizes, we’ve admittedly harmed our fair share. We’ve received the angry calls. We’ve triaged and investigated the root-cause. What all this experience has done is helped us improve our technology and processes. As a result, we’ve gotten all that risk behind us.

Right now, I’m happy to report that only between 0.3% and 0.7% of websites that receive a WhiteHat Sentinel scan experience any kind of negative impact, and most of those impacts are extremely minor. Years of battlefield testing is the only way to accomplish this, and to truly know it. The other [desktop] scanner guys simply can’t confidently say how often they harm websites, because they just don’t know. By nature of their business model, they don’t get access to the data necessary to measure. What we do know is that their reputations have gotten around, and they are not good.

We know this because when companies switch to WhiteHat, they share with us their past experience and fears in continuing production scanning. In our experience the best way to overcome these fears is understanding exactly what causes a vulnerability scanner to negatively impact a website. Here are the things everyone should know:

1) Following “Sensitive” Hyperlinks: The HTTP RFC states that GET requests, such as those initiated by clicking on hyperlinks, should be idempotent. Idempotent means that making the same request any number of times has the same effect on the server as making it once. Even still, it is not uncommon to encounter websites with hyperlinks (GET requests) that, when “clicked,” execute backend functionality that deletes data, cancels orders, remits payment, removes user accounts, disables functionality, and exhibits many other examples of non-idempotent behavior.

And when non-idempotent functions are initiated by hyperlink URLs, the simple act of crawling a website can easily have an adverse effect on the system, its performance, and the entire business. By extension, this RFC-noncompliant behavior proves problematic for vulnerability scanners because crawling, whether HTML5 technology is in use or not, is an essential step in identifying the attack surface of a website that needs to be tested.
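To make the anti-pattern concrete, here is a minimal sketch of a hypothetical route (the framework, URL, and helper names are my own illustration, not taken from any site described above) where a plain hyperlink performs a destructive action:

    # Hypothetical Flask route illustrating the anti-pattern: a destructive,
    # non-idempotent action reachable by a plain hyperlink (HTTP GET).
    # Any crawler that follows /orders/<id>/cancel will silently cancel real orders.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/orders/<int:order_id>/cancel")   # GET by default, so it is crawlable
    def cancel_order(order_id):
        db_cancel_order(order_id)                 # destructive side effect on a GET
        return "Order cancelled"

    def db_cancel_order(order_id):
        """Placeholder for the real database call."""
        pass

The RFC-friendly fix is to require a POST (ideally with a CSRF token) for any state-changing action, which a well-behaved crawler will never submit on its own.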

2) Automatically Testing “Sensitive” Web Forms: Non-idempotent requests, such as the POST requests typically generated by Web forms, may yield different responses to repeated requests with the same URL. As such, submitting a Web form may generate emails to customer support, execute computationally expensive backend processes, post submitted data where it will be visible to other users, incur monetary charges, and so on.

With this in mind, it’s easy to see why vulnerability scanning, which is to say automatically sending thousands of POST requests to these types of Web forms, can harm a website. Even when submitting completely valid data, the results can be spamming inboxes with thousands of emails, taking down the website due to resource load, negatively impacting the user experience of the entire user base by showing them unexpected data, and costing the company large sums of money. While all Web forms may potentially harbor vulnerabilities, blindly testing them has proven to be highly dangerous in the field.
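A hedged sketch of why this goes wrong, assuming a typical contact form whose handler sends a real email on every submission (all routes, names, and addresses here are invented for illustration):

    # Hypothetical contact-form handler: every POST sends one real email to support.
    # A scanner blindly submitting this form thousands of times floods the inbox and
    # burns backend resources, even when the submitted data is perfectly valid.
    import smtplib
    from email.message import EmailMessage
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/contact", methods=["POST"])
    def contact():
        msg = EmailMessage()
        msg["To"] = "support@example.com"
        msg["From"] = "webform@example.com"
        msg["Subject"] = "Website contact form"
        msg.set_content(request.form.get("message", ""))
        with smtplib.SMTP("localhost") as smtp:   # one outbound email per submission
            smtp.send_message(msg)
        return "Thanks, we'll be in touch."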

3) Poorly Designed Vulnerability Tests: Dynamically testing a website for vulnerabilities includes submitting various meta-character strings into input fields, be they in URLs, POST bodies, headers, etc. When a website mistakes meta-characters for executable code and executes them, or tries to, a vulnerability may be present. During this execution process, whether it takes place server-side or client-side, poorly designed and invasive vulnerability tests (i.e. strings of meta-characters) may mistakenly harm the website or a user’s browser.

For example, an OS Command Injection test may not account for a website executing the attack payload command an infinite number of times. A SQL Injection test might, for whatever reason, be designed to DROP database tables or cause the system to “wait” for an extended period of time. In the case of Cross-Site Scripting, submission of live JavaScript payloads may generate confusing client-side errors for users all across the system.
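To illustrate, here are a few payloads of the invasive sort described above; the exact strings are my own examples, not taken from any particular scanner:

    # Examples of invasive vulnerability-test payloads. Shown only to illustrate
    # what a poorly designed test looks like; never aim these at systems you care about.
    DANGEROUS_PAYLOADS = {
        # OS command injection test that never terminates on Windows (-t pings forever)
        "os_command": "; ping -t 127.0.0.1",
        # SQL injection tests that destroy data or stall the database
        "sql_drop": "'; DROP TABLE users; --",
        "sql_wait": "'; WAITFOR DELAY '0:05:00'--",   # five-minute stall per request
        # Live JavaScript that executes in victims' browsers and visibly breaks pages
        "xss": "<script>document.body.innerHTML = '';</script>",
    }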

4) Connection Denial of Service (DoS): Websites may contain a tiny number of Web pages, a seemingly infinite number, or fall somewhere in between. Many websites also have dozens or even hundreds of complex Web forms and tens of thousands of links, each with a dozen parameters that all require testing. Thoroughly and dynamically testing the attack surface of such websites may require upwards of a hundred thousand Web requests, give or take, considering all the attack variations. Processing each request, one after the other, may take days, weeks, or even months of scanning, depending on the website’s response speed. To shorten the process, a vulnerability scanner may send dozens or even hundreds of requests simultaneously.

As we know, not all websites are designed, or have the underlying infrastructure, to support such a system load. This is especially true if vulnerability scans are run during peak business hours when the normal load is already high. If the performance capabilities of the system are not accounted for in the scan process, a vulnerability scanner can easily exhaust a website’s available connection pool and render the system unable to serve legitimate visitors. Obviously, then, caution must be exercised when a scanner’s requests are threaded.
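Some back-of-envelope arithmetic makes the scale clear; every number below is an assumption for illustration, not a measurement:

    # Rough scan-size arithmetic; all inputs are illustrative assumptions.
    links = 10_000            # crawled URLs
    params_per_link = 3       # testable parameters per URL
    payloads_each = 5         # attack variations per parameter
    total_requests = links * params_per_link * payloads_each
    print(total_requests)     # 150,000 requests

    avg_response_s = 0.5      # average server response time
    hours_single_threaded = total_requests * avg_response_s / 3600
    print(f"{hours_single_threaded:.0f} hours single-threaded")           # ~21 hours

    threads = 50
    print(f"{hours_single_threaded / threads:.1f} hours at 50 concurrent requests")
    # The speed-up is exactly why scanners thread requests, and exactly how they can
    # exhaust a site's connection pool if run at full tilt during peak hours.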

5) Session Exhaustion Denial of Service (DoS): When users log in to a website, the backend system may generate a new session credential, spawn process threads, and allocate memory. In many cases, these threads and memory allocations are not shared across the system. And not all websites perform effective session credential garbage collection, which includes killing the threads and freeing up the memory, when session credentials are logged out or are inactive for an extended period of time. Such scenarios are prone to denial of service via session exhaustion.

For example, thoroughly testing a website requires that vulnerability scans are run in an authenticated state. And during these scans, it is common and expected for a vulnerability scanner to become logged out for a wide variety of reasons. So a vulnerability scanner is required to log back in, perhaps dozens or even hundreds of times, to continue. With each login, the website provides a new session credential in return. If this happens too often in the above scenario, it may consume all the session credential resources the website has available due to its flawed design. When the session credential limit is met, no additional users can log in, at least not until session credential garbage collection is conducted.
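A toy simulation of the failure mode, assuming a fixed-size session table with no garbage collection (the limits and per-session costs are invented for illustration):

    # Toy model: a server whose session store is never reaped, and a scanner that
    # keeps getting logged out and re-authenticating.
    import uuid

    MAX_SESSIONS = 500
    sessions = {}                   # session_id -> per-session state, never freed

    def login(user):
        if len(sessions) >= MAX_SESSIONS:
            raise RuntimeError("session pool exhausted; no one else can log in")
        sid = uuid.uuid4().hex
        sessions[sid] = {"user": user, "threads": 2, "memory_mb": 8}
        return sid

    for attempt in range(600):      # the scanner re-authenticates hundreds of times
        try:
            login("scanner")        # each login allocates state that is never released
        except RuntimeError as err:
            print(f"login #{attempt + 1} failed: {err}")
            break
    # After the 500th login, legitimate users are locked out until garbage
    # collection (or a restart) finally frees the stale sessions.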

6) CPU Denial of Service (DoS): Many websites are designed to support an expected user flow through the application. This flow expects that users will click on certain links, in a certain order, a certain number of times, in a given amount of time. Under these assumptions, there are poorly designed websites with hyperlinks that, when clicked, execute computationally expensive database queries. This, of course, is fine under normal usage. However, when an attacker targets a website, or the website receives a vulnerability scan, the traffic patterns are anything but normal.

During a vulnerability scan, these computationally expensive hyperlinks may be clicked a large number of times, contrary to what was expected, and consume all of a website’s available CPU resources. When the CPU is exhausted, no additional requests from anyone will be responded to. As stated above, just the simple act of crawling a website, or posting forms with valid data, can elicit this condition.

7) Verbose Logging and Run-Time Errors: Vulnerability scanning requires that a website endure a large number of abnormal requests. The requests might contain parameter names and values that weren’t expected, which could in turn raise various backend application exceptions and verbose run-time error logging. Since vulnerability scans generate a huge number of requests, the disk space consumed by the logs generated and stored could be substantial. If the verbose logging fills the available disk space, the website could be significantly harmed, or at the very least logging might cease from that point on.

How to Avoid Harming Websites While Vulnerability Scanning

1) Before commencing a scan, manually identify any and all potentially sensitive areas of website functionality, and then rule them out of the automated process. Especially during authenticated scans, pay careful attention to admin-level functionality that is executed via GET requests, which may trigger dangerous non-idempotent behavior. Each sensitive area may be tested manually to complement the scan. Secondly, authenticated scans may be restricted to special test accounts, so any potential negative impact is compartmentalized.

2) Do not perform vulnerability scans on Web forms without first manually ensuring each one is safe for automated testing. Doing otherwise is extremely hazardous. Each Web form discovered during the crawl phase of a scan, including multi-form process flows, should be custom configured (i.e. marked as safe or unsafe for testing). Each Web form can also be optionally configured with valid data to assist with more thorough scans the next time around.

With respect to #1 and #2, vulnerability scans should NOT perform any requests that are non-idempotent, perform write activity, or take potentially destructive actions without explicit authorization.
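Putting #1 and #2 together, a scan-scope configuration might look something like the following sketch; the format and entries are hypothetical, not any particular product’s syntax:

    # Hypothetical scan-scope configuration combining points #1 and #2: rule out
    # non-idempotent GET links up front and mark each discovered form safe/unsafe.
    SCAN_SCOPE = {
        "exclude_url_patterns": [
            r"/admin/.*",              # admin functionality reachable via GET
            r"/orders/\d+/cancel",     # non-idempotent hyperlinks found during manual review
            r"/account/delete",
            r"/logout",                # also avoids constantly killing the scan session
        ],
        "forms": {
            "/search":   {"test": True},                             # read-only, safe to automate
            "/contact":  {"test": False},                            # emails support staff
            "/checkout": {"test": False},                            # incurs real charges
            "/profile":  {"test": True, "account": "scan-test-01"},  # dedicated test account
        },
    }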

3) All of a vulnerability scanner’s injection payloads should be made as safe as possible and noninvasive. For example, live JavaScript should NOT be injected when testing for Cross-Site Scripting. When testing for OS Commanding with a command such as ping, payloads should be designed to terminate; otherwise they’ll keep running forever. Another example is SQL Injection payloads, which should avoid containing DROP and UPDATE statements, as they could potentially delete or modify data. Every class of attack tested for must include similar precautions.
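Continuing the earlier payload example, safer detection-oriented counterparts might look like this; again, the strings illustrate the design principle rather than a real test suite:

    # Detection-oriented payloads designed to reveal a vulnerability without
    # destroying data, stalling servers, or executing script in users' browsers.
    SAFER_PAYLOADS = {
        "os_command": "; ping -c 1 127.0.0.1",        # exactly one echo request, then exits
        "sql_boolean": "' AND 1=1-- -",               # detected via response differences
        "sql_wait": "'; WAITFOR DELAY '0:00:03'--",   # short, bounded delay
        "xss": "xss-probe-7f3a19",                    # inert unique marker, no live script
    }

    def marker_reflected(marker: str, response_body: str) -> bool:
        """XSS check: did the inert marker come back unencoded in the response?"""
        return marker in response_body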

4) The first series of scans should be performed as a single-threaded user, carrying no more system load than a single [malicious] user. As such, the scanner will not make the next HTTP request until a response to the last request has been received. If website performance degrades for any reason, scan speed automatically slows. If a website appears to be failing to respond within a given threshold, or is incapable of creating new authentication sessions, then the scanner should stop testing, raise an alert, and wait until adequate performance returns before resuming. Generally speaking, there is also rarely a need for a scanner to download static content (e.g. images), which by extension reduces bandwidth consumption.
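As a sketch of what single-threaded, load-aware scanning can look like (assumed behavior with invented thresholds, not a description of any vendor’s actual engine):

    # Single-threaded scan loop: one request in flight at a time, automatic slowdown
    # with the site, and a pause-and-alert when the site looks unhealthy.
    import time
    import requests

    MAX_RESPONSE_S = 10.0    # responses slower than this are treated as "site struggling"
    PAUSE_S = 300            # wait five minutes before probing again

    def scan(urls):
        session = requests.Session()
        for url in urls:
            if url.lower().endswith((".png", ".jpg", ".gif", ".css", ".woff")):
                continue                      # skip static content to save bandwidth
            while True:
                start = time.monotonic()
                try:
                    resp = session.get(url, timeout=MAX_RESPONSE_S)
                    elapsed = time.monotonic() - start
                except requests.RequestException:
                    resp, elapsed = None, MAX_RESPONSE_S
                if resp is not None and elapsed < MAX_RESPONSE_S:
                    break                     # healthy response; move on to testing
                print(f"site struggling on {url}; pausing {PAUSE_S}s")  # raise an alert
                time.sleep(PAUSE_S)
            analyze(resp)   # the next request is only built after this response arrives

    def analyze(resp):
        """Placeholder for payload injection and response analysis."""
        pass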

5) If the website is particularly sensitive, performing vulnerability scans in a staging environment first can increase confidence. It is important to remember, though, that staging environments are NOT the same as production, which is where the bad guys prefer to attack. In our experience, the vulnerabilities identified in production and in their staging mirrors are rarely identical.

Tuesday, November 06, 2012

Introducing the “I Know…” series


Fortunately, if you are using one of today’s latest and greatest browsers (Chrome, Firefox, Internet Explorer, Safari, etc.), these tricks, these attack techniques, mostly don’t work anymore. The unfortunate part is that they were by no means the only way to accomplish these feats. In the following sections I’ll be discussing many, many more attack techniques — tricks that reveal a person’s name, work place, physical location, online habits, what websites they log in to, the technology specifics about their computer and browser, and more. The fact is, unless you’ve taken a number of very particular precautions, essentially every website you visit has the ability to quickly acquire all the aforementioned information.

Video: https://youtu.be/0PuoRIIHOQI

I’ll expose why the common assumption that people are relatively anonymous, and that their online activities are private, as they surf the Web is wrong; from a personal security and privacy standpoint, dangerously wrong. Imagine if a young teen is pregnant, and hasn’t yet informed her parents. As she surfs the Web for information about her situation, websites glean this personal information about her condition, and begin mailing maternity content directly to her home. Imagine a divorcee trying to hide from her hostile ex-husband, and her real-world address being revealed with nothing more than a link click. Imagine if somehow your religious, political, and adult entertainment preferences were discovered by your local congregation, employer, and friends.

As you read, what you should find interesting (and concerning) is that a large percentage of the techniques I’ll be leveraging are NOT new — they’ve already been publicly documented. On their own, each technique’s impact may not be terribly severe, which probably explains why they remain unaddressed. However, when these disparate techniques are wired together, they paint a highly problematic and largely misunderstood narrative that is the actual state of Web [browser] security.

From here we’ll progress slowly, building up our exploitation pyramid one blog post section at a time.

 

I Know…

Is It Really True That Application Security has “Best-Practices”?


Application Security professionals commonly advocate for “best-practices” with little regard for the operational environment. The implication of a “best-practice” is that it is essential for everyone, in every organization, and at all times. Commonly cited “best-practices” include, but are not limited to, software security training, security testing during QA, threat modeling, code reviews, architecture analysis, Web Application Firewalls (WAF), penetration testing, and a hundred other activities.

Sure, all these “best-practices” sound like good ideas. And any good sounding “application security” idea must be a “best-practice” right? Not so fast. Watch this, I’ll create a new “best-practice” right now: Website Bug Bounty programs. See how easy that was! Google does it. So does PayPal, Mozilla, Facebook, and Etsy. Now everyone must do it! Seriously though, more thoughtfulness is required.

After more than a decade working in the Application Security industry, I’m fairly confident there are few, if any, “best-practices” — that is to say, activities universally effective and investment worthy no matter the current environment. I’m convinced that different application security activities, including those listed above, are best suited in different scenarios. In fairness, I must admit that I’ve a limited amount of data that backs up my assertion, but at the same time it’s probably more data than those blindly advocating “best-practices” have to offer.

I’d like to share my thought process and support my opinions by laying out a few example scenarios. The following are descriptions of vulnerability statistics from real-world websites, taken over a period of time. These scenarios may be familiar to anyone with more than a few years of application security experience.

Website A
Produces a high rate of new vulnerabilities, about 20 each month, largely dominated by Cross-Site Scripting (XSS). When vulnerabilities are identified in production, the development team fixes nearly every one in under 24 hours. The net result is Website A is an industry laggard in vulnerability volume, but a leader in time-to-fix and remediation rate.

Given this scenario, and if you knew nothing else about the organization responsible for the website, what application security “best-practices” would you recommend they adopt first to improve their security posture?

My Recommendations
• Look into the development framework in use. Perhaps the output encoding APIs are nonexistent, imperfect, not well advertised to developers internally, and/or their use is not strictly mandatory (a minimal encoding sketch follows after this list).
• An internal security QA process should be created, or improved upon, to catch the XSS vulnerabilities prior to production release.
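For the first recommendation, here is a minimal sketch of the kind of output-encoding API developers would need to use consistently; the post does not name the framework in question, so Python’s standard library stands in purely for illustration:

    # Output encoding at render time: untrusted input becomes harmless text,
    # not executable markup.
    from html import escape

    def render_greeting(user_supplied_name: str) -> str:
        return f"<p>Hello, {escape(user_supplied_name)}!</p>"

    print(render_greeting("<script>alert(1)</script>"))
    # <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>

The point is less about the specific function and more about making an API like this mandatory and well known internally, so XSS never reaches production in the first place.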

Addressing either one of these potential gaps could seriously curtail the XSS problem, quickly too! How close were your recommendations to mine? The same, way different, or somewhere in between? Let’s hear it in the comments.

Which “best-practices” would you not recommend, at least initially?

I’d recommend holding off on:
• Threat modeling is great for spotting missing security controls during the application design phase, but given the type of vulnerabilities in this case (XSS), and the organization’s ability to patch them quickly, the problem appears to lie more on the implementation side than the design side.
• The statistics show XSS vulnerabilities are fixed quickly once identified, indicating that the development team knows what the issue is and how to fix it. So software security training for developers might help, but only if the programmers have been provided effective anti-XSS tools and for whatever reason aren’t using them consistently.
• WAFs can act as virtual patches for websites experiencing lengthy time-to-fix metrics and low remediation rates, but that’s not what the metrics are showing us here.

In the next statistics scenario, the situation is slightly varied…

Website B
Experiences roughly 100 serious vulnerabilities a year, a large portion of which are XSS, but a nontrivial number of SQL Injection, Insufficient Authentication, and Insufficient Authorization issues also exist. When vulnerabilities are brought to the attention of the development team, the Insufficient Authentication and Insufficient Authorization issues are fixed consistently and comprehensively within a week or two. On the other hand, XSS and SQL Injection issues remain exposed for several months and do not get fixed often.

Given these statistics, what “best-practices” would you recommend first? The same as Website A or different?

Recommendations
• Insufficient Authentication and Insufficient Authorization issues are low in volume and taken care of quickly, but the more esoteric vulnerabilities such as XSS and SQL Injection are not. This suggests that developer education is lacking in the latter two vulnerability classes. To address this, focus a software security training program on XSS and SQL Injection, placing emphasis on understanding the risks and proper defensive coding techniques.
• Deploy a Web Application Firewall, particularly a product capable of automatically integrating vulnerability results to create virtual-patch rules. This is important so that while the developers come up to speed on XSS and SQL Injection, the IT operations team can be engaged to help mitigate the risks they pose in a timely fashion.
• While Insufficient Authentication and Insufficient Authorization issues are fixed quickly, the mere fact that they exist could lead to the conclusion that a Threat Modeling process could be helpful, and perhaps some security QA to test for various abuse cases.

Comment below if you agree or disagree.

Website C
Very few vulnerabilities, maybe only one XSS per month, make it through the software development life-cycle (SDL) to production. When XSS issues are identified and do get fixed, they are fixed extremely quickly, taking no more than a day. But vulnerabilities don’t get remediated often. In fact, Website C’s annual remediation rate is under 25%.

With these statistics in mind, what “best-practices” should be prioritized? The same as for Website A and Website B, or different?

My initial assumption would be that the organization has a vulnerability prioritization problem and/or a lack of development resources, where investment in revenue-generating features is placed ahead of security fixes. And for them, perhaps that’s the right decision. If so…

Recommendations
• In the near term, as with Website B, deploy a Web Application Firewall with virtual-patch integration capability. If the vulnerabilities are to remain publicly exposed in the code for an extended period of time, as a business decision, the bar to exploitation should be pushed higher.
• For a longer-term plan, engage an experienced consulting firm that specializes in code remediation. That way the organization’s core development team can stay focused on driving features while the current pool of vulnerabilities is simultaneously taken care of.

Hold-Off
• The metrics show that the development team generates very few vulnerabilities. This is probably because they are well-versed in XSS and other classes of attack, have the proper tools, and/or have an effective QA process. If so, it also means that investing in software security training for developers and QA process improvement probably isn’t going to make a huge impact on poor remediation rates.

Agree or disagree with the guidance? Sound off in the comments.

 

We could continue forever discussing all the possible statistical scenarios one might encounter in application security. I maintain that application security defense must be adaptive, situationally aware, and in line with the interests of the business. Therefore, while a “best-practice” might sound like a good idea, it must be applied at the right time and place. This is a big reason why customers of WhiteHat Security measurably improve their security posture over time. As the saying goes, anything measured tends to improve.

The ability to capture vulnerability statistics on an ongoing basis provides invaluable insight into how to prioritize the organizational actions that have the best chance of improving outcomes. Without knowing ahead of time where the gaps are in the application security program, what else can be done except guess which “best-practice” might help, which unfortunately is what most do. When you don’t have to guess, organizations can continue investing in activities that are measurably returning value, divest from those that aren’t, and better apply their scarce application security resources.