Vulnerabilities in websites happen, especially the ever-pervasive Cross-Site Scripting (XSS). Essentially every major website has had to deal with XSS vulnerabilities, published publicly or otherwise. That includes security companies. No one is perfect and no website has proven immune, ours included. As experts in Web application security, and specifically XSS, yesterday even we took our turn. We hope to learn some lessons from the experience and share the details so others may do the same.
For background, the WhiteHat Security corporate website (www.whitehatsec.com) is static brochure-ware. No Web applications, no forms, no log-in, no user-supplied input where XSS can hide. Just the everyday HTML/JavaScript/CSS/Flash explaining what we do, why, and how. The website does not carry any sensitive data, although it does carry the WhiteHat Security brand, and our reputation is extremely important to us. Therefore any publicized vulnerability or defacement would obviously be embarrassing.
Monday afternoon @thetestmanager openly posted on Twitter an XSS vulnerability that reportedly affected www.whitehatsec.com:
"It really does happen to the best of us. XSS on WhiteHatSec http://bit.ly/cIDfEA If you take a feed to your site do you check their code?"
http://twitter.com/thetestmanager/status/17283467463
“By the way, that tweet was meant as a bit of fun and certainly not a poke at @jeremiahg or any of @whitehatsec The hole isn't in their code”
http://twitter.com/thetestmanager/status/17283586987
Upon seeing the tweet a few minutes after posting, I was a bit skeptical, being familiar with the limited attack surface, but I took the matter seriously and immediately sought to confirm or deny its validity. Fortunately/Unfortunately, the particular XSS issue was very straightforward to confirm. The all too familiar alert box was plain as day. What was more worrisome was the existence of a query string, something that's not supposed to be on the website at all! It took a few more moments to isolate the source of the issue: a third-party-supplied JavaScript block (accolo.com) located on our careers page that sourced in additional code remotely.
document.write("<* x-script src='http://members.accolo.com/a02/public/CommunityJobs_include.jsp?isJSInclude=1&" + Accolo.paramsStr + "'>");
Third-party JavaScript includes are obviously extremely common on the Web. First order of business, remove the offending JavaScript code ASAP and then perform root-cause analysis.
Remediation time: ~15min from original time of disclosure.
When embedded in a webpage, Accolo's third-party JavaScript code dynamically creates a DOM web-form where prospective interview candidates may enter their details. It is important to note that the form action points off-domain, but various cosmetic interface settings are pulled from an artificially created query string on the hosting website (our website). Whew, the corp website still has no Web applications. However, when Accolo's form setting values are pulled from the query string, they are document.write'ed to the DOM in an unsanitized fashion -- hence the source of the XSS vulnerability. All @thetestmanager had to do was insert a SCRIPT tag into one of the fake parameters to get a DOM-based XSS to fire. A solid find. After removing the code from our site we reported the issue to Accolo so they could remediate. Likely more of their customers are similarly affected.
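To illustrate the pattern in the abstract, here is a minimal sketch of the flaw, assuming a hypothetical parameter name and markup (Accolo's actual code differed in its details): an attacker-controlled query string value is written straight into the page.

// Hypothetical illustration only -- not Accolo's actual code.
// Read a value from the hosting page's query string.
function getParam(name) {
  var match = new RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
  return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : '';
}

// Vulnerable: the value is document.write'ed into the DOM unsanitized.
// A URL such as careers.html?formTitle=<script>alert(document.cookie)</script>
// executes the injected script in the hosting site's origin.
document.write('<h2>' + getParam('formTitle') + '</h2>');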
Some Obvious Questions & Lessons Learned:
1) Do you consider @thetestmanager to be “irresponsible”?
Of course not. Yes, we would have preferred a more private disclosure, but whaddaya gonna do. Name calling is unproductive. Things like this happen to everyone and you deal with it as best you can. We appreciate @thetestmanager for taking the time to find the vulnerability and for bringing it to our attention at all. Clearly someone less savory could have found it, not said a word, and done something worse.
2) Why was this vulnerability missed?
Among the many safeguards taken to protect the corp website, including daily network & web application vulnerability scans, we perform intensive static and dynamic security peer review by the Sentinel Operations Team on all "substantive" website changes. However, a process oversight occurred: marketing / HR requested a new website feature that did not meet the threshold for substantive change, just a simple JavaScript include, so no review was requested or performed. The Sentinel Operations Team would have caught this issue. We've since updated the process and double-checked all third-party JavaScript code currently in use.
The Sentinel Service itself did not identify the issue due to a standard explicit-allow hostname configuration. Sentinel does not test links or forms bound to off-domain locations, such as third-party JavaScript includes, for legal and liability reasons. Since Sentinel didn't follow / fill out the form, it did not see the attack surface required to detect the vulnerability. In a Web 2.0 world this scenario is becoming far more common; it is something we're working to address with our customers who face the same challenge, and it speaks to a larger industry problem worth exploring.
A) To assess a third-party (by default): If an assessment includes third-party JavaScript systems, which are technically within the domain's zone of trust, there are very real legal and liability considerations when network traffic is sent their way. Express written authorization to thoroughly and properly assess a third party is extremely difficult to obtain, but without it, prosecution is a consequence one faces in many countries.
B) No control, no visibility: Even if third-party JavaScript is assessed, organizations have very limited ability to exercise change control, or even maintain visibility, when that code is updated -- which happens almost always without their knowledge. An organization's website might be fine one minute and literally vulnerable to XSS the next.
The bigger question then becomes, "how does an organization handle security for 3rd party includes & JS files"? For business reasons, the answer can't be "don't do it."
3) What should Accolo have performed to prevent the vulnerability?
Properly escape and perform input validation, in JavaScript space, on all the URL parameter values before writing them to the DOM. Further, use appendChild() rather than document.write to help prevent secondary JavaScript execution. Since the JavaScript code did not need access to the customer's domain, an IFRAME HTML include, rather than embedded JavaScript, would have been a better choice to achieve cross-domain separation.
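As a minimal sketch of that advice, again using the hypothetical getParam() helper and a made-up container element ID: allow-list validate the value, then build DOM nodes with createElement()/createTextNode() and appendChild() so the browser never parses the parameter as HTML.

// Hypothetical illustration of the remediation -- names are made up.
function getParam(name) {
  var match = new RegExp('[?&]' + name + '=([^&]*)').exec(window.location.search);
  return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : '';
}

var title = getParam('formTitle');

// Input validation: allow-list plain text and fall back to a safe default.
if (!/^[\w .,'-]{1,100}$/.test(title)) {
  title = 'Careers';
}

// createTextNode() treats the value strictly as text, never as markup, and
// appendChild() avoids the secondary script execution document.write allows.
var heading = document.createElement('h2');
heading.appendChild(document.createTextNode(title));
document.getElementById('job-form-container').appendChild(heading);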
4) How are you going to improve your operational website security posture?
On average we have roughly one XSS vulnerability in production every 2-3 years, and in every case remediation has occurred within a couple of hours at most. These metrics are consistent with the most security-proactive organizations found anywhere and far exceed industry averages (a consistent handful of XSS per year, a ~50% remediation rate, and fixes taking months). Having said that, as described earlier, we're improving our procedures to make sure things like this don't slip by again.
Tuesday, June 29, 2010
Saturday, June 26, 2010
In a cyber-war, we fight for economic well-being
Earlier this month NPR’s Planet Money podcast had a session entitled, “A War Between States And Corporations,” where they interviewed Ian Bremmer (President, Eurasia Group). Mr. Bremmer is the author of The End of the Free Market: Who Wins the War Between States and Corporations? Near the end of the podcast Ian said something about the economy and internet security that really resonated with me.
“When you have hundreds of western multinational corporations that have seen industrial espionage, that’s been directly targeted at them through cyber attacks, massive unprecedented cyber attacks, that were either directly organized by the Chinese government or were known about and actively tolerated by the Chinese government on behalf of Chinese corporations -- that’s a pretty good description of a war.”
I’m inclined to agree because as he puts it...
“National security is no longer about tanks. National security is increasingly about economic well being, internet security, and issues that allow us to live on a daily basis. We’re not worried today about the soviets blowing us up with nukes, but we are worried that our kids to be able to enjoy a quality of life vaguely related to our own.”
Precisely. We want our children to have a good quality of life, and the lack of internet security places that in jeopardy for all of us. Historically, economic failings, obviously not through cyber-war, played a role in the fall of the Roman Empire, the Soviet Union, and very nearly Greece. Our cyber-war, and it is a war, isn't over, in so much as we haven't yet lost our economy, nor have we solved the problem. What we citizens want, what we desire most (quality of life), is facilitated through economic prosperity. To achieve this the U.S. needs entrepreneurialism and innovation. These are what enable business to grow and our economy to flourish, and they are exactly what our enemies want to steal from us, over the network, because they can.
“And, I see this as absolutely being a fundamentally conflictual relationship that is coming up between these corporations that are increasingly going to have to fight against other entities, economic entities, that are being supported by governments where there isn’t rule of law.”
Yes, how exactly can a western corporation, or any non-nation-state sponsored entity, possibly defend itself against such an adversary?
Legal and diplomatic remedies to enforce various cyber-crime laws are an option. Only this approach has proven all but completely ineffective. DoSing malicious network nodes has been suggested, but it will certainly not deter, let alone stop, an advanced persistent threat; increased attack distribution and subtlety is the result. The current White House administration will not easily opt for conventional shock-and-awe warfare to target digital adversaries, even on occasions when we know names and locations. At least I hope not, although it may eventually come to that if we can't find a way to succeed through technological means.
On the defensive side, the U.S. government is simply not equipped to help businesses defend their networks or the applications above them. The government is understaffed and already overwhelmed trying to defend its own systems from classified data breaches. At best it may provide the private sector some welcome threat intelligence. If corporations desire security, and not all do (survival is optional), they must learn to adequately protect themselves against other corporations that may have the support of nation-states.
Adobe, Juniper, Symantec, Northrop Grumman, and others recently received a warning shot in Operation Aurora, as did other named and unnamed corporations. A sure sign of the times. Bad guys want more than just money. They're very keen on intellectual property, new inventions, source code, customer lists, contract negotiations, acquisition plans, product strategy, sales figures, names of employees and their friends & family, and so on. All of which is located on some computer, likely multiple computers, on the corporate network (or Facebook's), accessible from anywhere on the Internet.
Friday, June 25, 2010
The Low Hanging Fruit scanner strategy can get you into trouble
Vulnerabilities identifiable in an automated fashion, such as with a scanner, can be loosely classified as "low-hanging fruit" (LHF) -- issues that are easy, fast, and likely for bad guys to uncover and exploit. Cross-Site Scripting, SQL Injection, Information Leakage, and so on are some of the most typical forms of website LHF. Some organizations approach website vulnerability assessment (VA) by basing their programs around a scanner focused primarily on LHF. The belief is that by weeding out LHF vulnerabilities, break-ins become less likely as the organization's security posture rises above the lowest common denominator. They realize they're not going to catch everything, but in return they expect this VA testing depth to be "better than nothing," to perhaps meet PCI compliance requirements, and, very importantly, to be low-cost.
Unfortunately, things often don't turn out that way. Due to shortcomings in Web application scanning technology, the LHF scanner strategy is highly unlikely to achieve the desired result. First of all, Web application scanners can help tremendously with locating LHF vulnerabilities, no question. However, scanners DO NOT identify all the LHF on a given website, not by a long shot. No doubt many of you have come across URLs laden with account_id parameters where all one needs to do is rotate the number up or down to access someone else's bank/mail/CMS account. What about admin=0 parameters begging for a true value? Money transfers with a negative value?
We could go on all day.
The point is these are not ninja-level AES crypto padding reverse-engineering attacks, nor are they edge cases. No scanner, no HTTP proxy, not even view-source is required to spot them. Just a Web browser. How much easier does it get!? Oh right, CSRF vulnerabilities. CSRF remains one of the most pervasive website vulnerabilities and also the easiest to find -- that is, to find by hand. Web application scanners have a REALLY hard time identifying CSRF issues, false-positive and false-negative city, without a lot of manual assistance.
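To make the account_id example concrete, here is a minimal sketch with made-up data and function names. The point is that the flaw is pure business logic: to a scanner, the vulnerable and the fixed versions both return a perfectly valid page.

// Hypothetical sketch -- stand-in data store; a real application queries a database.
var accounts = {
  '1001': { id: '1001', ownerId: 'alice', balance: 4200 },
  '1002': { id: '1002', ownerId: 'bob', balance: 17 }
};

// Vulnerable version: whatever account_id arrives in the request is served back.
function getStatementVulnerable(sessionUserId, accountId) {
  return accounts[accountId] || null;
}

// Fixed version: the lookup is tied to the authenticated user.
function getStatementSafe(sessionUserId, accountId) {
  var account = accounts[accountId];
  if (!account || account.ownerId !== sessionUserId) {
    return null; // a real handler would respond 403 Forbidden
  }
  return account;
}

// Bob "rotates" the id down by one and reads Alice's statement.
console.log(getStatementVulnerable('bob', '1001')); // Alice's data leaks
console.log(getStatementSafe('bob', '1001'));       // null -- request refused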
Secondly, the output from commercial and open source Web application scanners is routinely inconsistent from product to product. As can be seen from Larry Suto's "Accuracy and Time Costs of Web Application Security Scanners [PDF]" study, products including Acunetix, AppScan, Hailstorm, etc. report different LHF vulnerabilities in type and degree -- even on the exact same website. A scanner may technically find more vulnerabilities than another, but this does not necessarily mean it found everything the others did. Also, it is not uncommon for the same scanner to produce varied results from successive scans of the same website. This lack of Web application scanner LHF comprehensiveness and consistency presents a hidden dilemma.
Suppose your LHF strategy says run a scanner, find what you can, and fix whatever it finds. Great, but consider what happens if your everyday LHF adversary runs a different scanner across your website than you did, which is entirely likely. Anyone may buy/download/pirate one of the many scanners available on the market. They might also do some manual hunting and pecking. Most important, though, are the favorable odds that they'll uncover something your scanner missed. Scanners possess very different capabilities with respect to crawling, session-state management, types of injection, and vulnerability detection algorithms. Individual scans can be configured to run under different user accounts with varied privilege levels or instructed to fill forms in a slightly different way. All of this presents a slightly different attack surface to test, which results in missed vulnerabilities and general inconsistency.
While the LHF scanner strategy is in place, often none of this is readily apparent. That is, until someone from outside the organization, such as a business partner, customer, or security researcher, reports a missed vulnerability from a simple scan they ran. This makes it appear that you don't test your websites well enough, which is at least slightly embarrassing, but usually nothing truly serious results. What really nails organizations in the end, and we've all likely seen it many times, is when a website is compromised and management/customers/media ask why. Saying "we are PCI compliant" is just asking for the rod. As is "we take security seriously; our top-of-the-line scanner didn't find the vulnerability," all while the website is defaced with profanity, infecting visitors with malware due to some mass SQL Injection payload, or being defrauded by a cyber criminal. Now the organization is sending out disclosure letters, figuring out who to blame or sue, and answering to an FTC investigation.
Maybe doing nothing was a better idea after all, because at least you didn't pay good money to get hacked just the same. Anything worth doing is worth doing right. Web application security included.
Thursday, June 17, 2010
anti-waf-software-security-only-zealotry
Recently on Twitter I asked why some people feel oddly compelled to rely upon the shortcomings of Web Application Firewalls (WAFs) as a means to advocate for a Secure Development Lifecycle (SDL). To me this is odd because the long-term, risk-reducing value provided by secure code is enough on its own to warrant the investment. If you can’t demonstrate that, blame directed at WAFs seems misplaced. Most importantly, we must remember our objective: protecting websites from being hacked.
The ultimate scope of this objective, 200 million websites and counting, cannot be overstated. Even if we just focus on the 1.3 million websites serving up SSL certificates, the scale is still unbelievably massive. Whatever the metric, experienced industry experts and aggregated statistics reports agree: the vast majority of these websites are riddled with vulnerabilities. The exploitation of thousands of websites fueling headlines serves as a further proof point. To quantify the problem, let's assume an average of six serious* vulnerabilities per website, WhiteHat Security's published figure based on our own Sentinel assessments. Across those 1.3 million SSL websites, that totals 7.8 million custom Web application vulnerabilities in circulation. We just don't know exactly where they are.
The next and equally important problem, fixing the code, is a seemingly insurmountable obstacle. Imagine an extremely limited number of application security pros trying to convince 17 million developers (some unknown portion being Web developers) to add to their workload, learn about defensive programming, and remediate all the vulnerable code. And by the way, this will be accomplished in small increments; vendor-supplied patches have no place here. According to Gary McGraw's (Cigital, CTO) BSIMM studies, observations from large-scale software security initiatives, a software security group (SSG) ideally should be 1% of the size of the development team. Applying that 1% to 17 million developers, we'd need 170,000 software security experts, when I doubt more than 5,000 currently exist.
To pile on, process and staffing investments will not take place unless the stakeholders are convinced that it is worth risking potential revenue by refocusing developer efforts to fix security issues that may or may not be exploited (aka risk management). This is a scenario application security practitioners are all too familiar with and frustrated by. Despite all the challenges, software security is making significant progress compared to its nearly nonexistent status of only a few years ago. However, change does not occur overnight. It takes years. Unfortunately, time is not on our side and the bad guys are exploiting websites and their users by the thousands every day. Waiting around for a future software security utopia while the losses mount is ill-advised. Obviously we need more options. Just yelling for more manual code reviews, developer training, secure frameworks, etc. will work for some, but certainly not all. Think about the costs!
Several years ago I was contemplating the sheer size and gravity of the aforementioned numbers, taking into account operational considerations and brainstorming possible solutions worth considering, and then it hit me. I disliked WAFs for all the same reasons as everyone else, but it became clear that even if WAFs didn't work, practically speaking we really needed them to. If somehow they could temporarily protect against the exploitation of a certain amount and type of vulnerability, and do so without requiring an application security person to beg for development time, allowing them to take action alone, they would have tremendous appeal. Don't agree? Remember, WAF technology predates PCI-DSS, and hundreds of millions in annual sales is not entirely driven by compliance.
Deciding to be part of a solution, or at least an option for organizations to consider, I began meeting with various WAF vendors to ask how WhiteHat could help. One idea we considered was that if a WAF knew ahead of time the exact type and URL location of a website vulnerability, then perhaps WAF effectiveness would improve. Thus our exploration into virtual patching began. Let's be clear though: WhiteHat Security does not make WAFs. We do not sell WAFs. We have no financial motivation when or if they are sold. Our main interest is assisting our customers in protecting their websites and integrating what we do into technologies they feel are beneficial to them.
Fast forward to today. Obviously not everyone is impressed with the current state of WAF technology. Staunch critics say WAFs introduce the very vulnerabilities they are supposed to defend against and/or that they can be bypassed. We agree. No doubt WAF technology is imperfect, but that doesn't make it any different from every other security product segment. Anti-malware, firewalls, IDS/IPS, MFA, SSL, and so on all have vulnerabilities. All can be bypassed. Still, I'm pretty sure we'd all agree that these products provide a certain amount of value nonetheless. You don't throw the baby out with the bath water. We regularly identify vulnerabilities and bypasses in various WAFs and share the results with the vendors so they may have an opportunity to improve.
So where does the anti-waf-software-security-only-zealotry really come from? I don’t believe the critics are ignorant of the problem at hand. Maybe it has something to do with the quote from the movie Vanilla Sky, “What’s the answer to 99 out of 100 questions? Money.” Technically speaking, SDL activities and WAFs are NOT mutually exclusive. They are designed to solve different problems. It is entirely reasonable to deploy one, the other, or both at the same time. The tough reality is that IT security budgets allocated towards Web security are wholly inadequate. Budget constraints force organizations to make painful choices about where to spend first. Decisions are sometimes made for compliance reasons, and other times for security. While SDL and WAFs should not be considered competing solutions, they do compete for the limited dollars. PCI-DSS 6.6 made matters that much worse.
No matter the reason, when WAFs are purchased "first," the software security guys lose money, naturally making them resentful. They snap back saying, "the customer doesn't get it" and "SDL is the right thing to do because WAFs suck!" Let's be honest: if that's how you are positioning the value of an SDL, you deserve to get back-burnered, and furthermore you've done your client a disservice. WAF suckage should have zero bearing on whether or not an organization should improve its SDL. Instead, focus on the many cost-saving, risk-reducing, top-line-benefiting qualities an organization may realize by implementing a well-run software security program.
At the end of the day, our common enemy is really the lack of application security visibility and the failure to allocate the necessary resources. If we come together and help address this as an industry, we'll all be better off, and the pressure of this either-or choice will be lessened.
Friday, June 04, 2010
Microsoft security IS “good enough” and that’s the problem
Nothing drives a business like customer demand. When customers say they want X or they'll go with the competition, well, you do it or risk losing their business. Nearly 10 years ago this is where Microsoft found itself. Their product security was in terrible shape: no shortage of vulnerabilities resulting in widespread and devastating compromises, with patches unpredictable and long in coming. Customers were fed up and threatened to dump Windows for Linux if things didn't change. They meant it. Bill Gates, then Microsoft's Chairman and Chief Software Architect, recognized the seriousness of the situation and authored the famous Trustworthy Computing memo.
“Over the last year it has become clear that ensuring .NET is a platform for Trustworthy Computing is more important than any other part of our work.” (emphasis mine)
“Security: The data our software and services store on behalf of our customers should be protected from harm and used or modified only in appropriate ways.”
From this executive-level mandate was born the Microsoft Security Development Lifecycle (SDL), which tightly integrated security into how software is built. In the words of Michael Howard (Principal Security Program Manager, Microsoft), the goal of the SDL was/is to "Reduce the number of vulnerabilities and reduce the severity of the bugs you miss." Practical, straightforward, and most importantly measurable and achievable.
Flash forward to today: practically no one would argue against the enormous progress Microsoft has made in security, as seen in Windows 7, Internet Explorer 8, and other new software packages. Microsoft is now the corporate standard by which all other software security programs are compared. BUT, this success might also be the very thing that halts significant SDL gains in the future.
Look at it this way: in security terms, Linux no longer poses the threat to Windows it once did. When customers choose between Windows and Linux, save for maybe a Google PR play, "security concerns" are not a huge differentiator. Today purchasing decisions are based upon performance and utility, and to a lesser extent "safety," but not so much "security." As such, customer demand for a more secure Windows has evaporated. Windows has become "secure enough" relative to the available alternatives. Maybe the applications on top need to be more secure, but from a market adoption perspective Windows itself doesn't really need to be.
Now, I don't know this for a fact, but it wouldn't surprise me if, internally at Microsoft, justifying resource expenditure on Trustworthy Computing and the SDL is more difficult than in years past. Today, more security is not really going to drive more business for them. However, some security products they offer do directly drive revenue, so those will remain safe and well funded.
Finding external indicators of the overall theory will likely prove challenging. If the theory is correct we’ll probably see security brain drain and rumors of program budget cuts. Time will tell.