Tuesday, December 30, 2008

Silver Tail Systems tackles Business Logic Flaws

I first started blogging about business logic flaws back in 2006, at a time when there was an overemphasis on technical vulnerabilities such as XSS and SQLi, the issues black-box scanners could identify while the rest were conveniently ignored. Many insiders knew serious vulnerabilities remained unchecked even after a clean scan report, though the details stayed confidential. Bad guys could monetize heavily on that lack of visibility -- and they have, so it is no longer a secret. This type of fraud has resulted in losses of 5, 6, and even 7 figure sums in particular instances. Organizations now want and need detection solutions on the back-end, in addition to vulnerability assessments on the front-end, capable of uncovering those taking advantage of business logic flaws.

That is where Silver Tail Systems, a new Silicon Valley start-up I’ve been following, comes into play. Founded by Laura Mather (Ph.D.) and Mike Eynon, Silver Tail is an entire company solely dedicated to addressing what they call “business process abuse,” which is basically the same thing as business logic flaws. If anyone has the pedigree to successfully apply technology to this problem, they do. Their backgrounds are no joke. Don’t make the mistake of thinking this product is a Web Application Firewall; it’s not. It’s something entirely different, more in line with business analytics with a focus on security.

“Silver Tail Forensics exposes the way a website is being used – through user, page, and IP statistics. The tool allows a business owner to explore the use of his or her site by displaying the usage of the site on a per page, per user, or per IP level. A search interface provides deep access into the activity on the website using any dimension. When suspicious activity is identified, Silver Tail Forensics enables the business user or security analyst to obtain a full understanding of the bad actors and their specific behaviors and how those behaviors differ from legitimate users.”
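
To make that description concrete, here is a minimal sketch in Python of the kind of per-page / per-IP usage profiling being described. This is my own illustration, not Silver Tail's technology; the Apache-style log format, field positions, and outlier threshold are all assumptions.

import re
from collections import Counter, defaultdict

# An Apache/NCSA "combined" style access log is assumed; adjust the
# pattern for whatever the real log format looks like.
LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (?P<page>\S+)')

def profile(log_lines):
    """Tally requests per IP, per page, and per (IP, page) -- the raw
    material for spotting behavioral outliers."""
    per_ip, per_page = Counter(), Counter()
    pages_by_ip = defaultdict(Counter)
    for line in log_lines:
        m = LINE.match(line)
        if not m:
            continue
        ip, page = m.group('ip'), m.group('page')
        per_ip[ip] += 1
        per_page[page] += 1
        pages_by_ip[ip][page] += 1
    return per_ip, per_page, pages_by_ip

def outliers(per_ip, factor=10):
    """Flag IPs requesting far more than the median volume -- a crude
    stand-in for real behavioral analytics."""
    volumes = sorted(per_ip.values())
    if not volumes:
        return []
    median = volumes[len(volumes) // 2]
    return [ip for ip, n in per_ip.items() if n > factor * median]

Real behavioral analytics obviously goes much deeper (session velocity, page-flow ordering, and so on), but the raw material is the same: who requested what, and how often.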

This is a company worth tracking and a blog worth following.

Friday, December 26, 2008

It’s unanimous, Web application security has arrived

It’s unanimous. Web application security is the #1 avenue of attack according to basically every industry data security report available (IBM, Websense, Sophos, MessageLabs, Cisco, APWG, MITRE, Symantec, Trend Micro, SecureWorks, ScanSafe, IC3). This is in addition to reports focusing specifically on custom Web application vulnerabilities (WhiteHat Security, WASC, Acunetix). SQL Injection and Cross-Site Scripting are routinely cited as the biggest issues, the ones we can’t apply patches to defend against. Perhaps what we’ve learned in 2008, as pointed out by Gunnar Peterson and Gary McGraw, is that we’re spending on the wrong problem: roughly $150MM in software security products & services versus the lopsided billions annually on network security. 2009 will give us another opportunity to make a difference.

From the mountain of statistics available I've saved several interesting quotes to reference in 2009.

Internet Crime Complaint Center (IC3)
Web Site Attack Preventative Measures

"Over the past year, there has been a considerable spike in cyber attacks against the financial services and the online retail industry."

1. They identify Web sites that are vulnerable to SQL injection. They appear to target MSSQL only.
2. They use "xp_cmdshell", an extended procedure installed by default on MSSQL, to download their hacker tools to the compromised MSSQL server.
3. They obtain valid Windows credentials by using fgdump or a similar tool.
4. They install network "sniffers" to identify card data and systems involved in processing credit card transactions.
5. They install backdoors that "beacon" periodically to their command and control servers, allowing surreptitious access to the compromised networks.
6. They target databases, Hardware Security Modules (HSMs), and processing applications in an effort to obtain credit card data or brute-force ATM PINs.
7. They use WinRAR to compress the information they pilfer from the compromised networks.
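
Step 2 hands defenders a cheap first check: the strings these attacks leave in Web server logs. Below is a minimal detection sketch in Python -- my illustration, not an IC3 recommendation, and the patterns are illustrative rather than a complete signature set.

import re

# Indicators drawn from the IC3 steps above: SQL injection attempts
# invoking MSSQL's xp_cmdshell extended procedure. Illustrative only.
INDICATORS = [
    re.compile(r"xp_cmdshell", re.IGNORECASE),
    re.compile(r"(%27|')\s*(;|%3B)", re.IGNORECASE),           # quote then statement separator
    re.compile(r"exec(\s|\+|%20)+(s|x)p_\w+", re.IGNORECASE),  # extended/stored procedure calls
]

def flag_requests(access_log_lines):
    """Yield (line_number, line) for any request matching an indicator."""
    for n, line in enumerate(access_log_lines, 1):
        if any(p.search(line) for p in INDICATORS):
            yield n, line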



WhiteHat Security
Sixth Quarterly Website Security Statistics Report


"finds 82 percent of websites have had at least one security issue, with 63 percent still having issues of high, critical or urgent severity. "

"Vulnerability time-to-fix metrics are slowly improving, but continue to show significant room for improvement, typically requiring weeks to months to achieve resolution. Only about 50 percent of the most prevalent urgent severity issues were resolved during the assessment time frame."



Web Application Security Consortium (WASC)
Statistics Project 2007


"Data analysis shows that more than 7% of analyzed sites can be compromised automatically. About 7.72% applications had a high severity vulnerability detected during automated scanning (P. 1). Detailed manual and automated assessment using white and black box methods shows that probability to detect high severity vulnerability reaches 96.85%."

"
The most prevalent vulnerabilities are Cross-Site Scripting, Information Leakage, SQL Injection and Predictable Resource Location (P. 2, P. 3). As a rule, Cross-Site Scripting and SQL Injection vulnerabilities appears due to system design errors, Information Leakage and Predictable Resource Location are often connected with improper system administration (for example, weak access control)."


IBM Internet Security Systems
X-Force® 2008 Mid-Year Trend Statistics

"The number of vulnerabilities affecting Web applications has grown at a staggering rate. From 2006 to the first half of 2008, vulnerabilities affecting Web server applications accounted for 51 percent of all vulnerability disclosures."

"The predominate types of vulnerabilities affecting Web applications are cross-site scripting (XSS), SQL injection, and file include vulnerabilities. In the past few years, cross-site scripting has been the predominant type of Web application vulnerability, but the first half of 2008 saw a marked rise in SQL injection disclosures, more than doubling the number of vulnerabilities seen on average over the same time period in 2007."


"The number of client-side vulnerabilities with public exploits has risen dramatically, from less than 5 percent in 2004 to almost 30 percent in the first half of 2008. "

"In the first half of 2008, 94 percent of public exploits affecting Web browser-related vulnerabilities were released on the same day as the disclosure."

"Over the past few years, the focus of endpoint exploitation has dramatically shifted from the operating system to the Web browser and multimedia applications. "



Websense Security Labs™
State of Internet Security
Q1 – Q2, 2008

"75 percent of Web sites with malicious code are legitimate sites that have been compromised. This represents an almost 50 percent increase over the previous six-month period."

"60 percent of the top 100 most popular Web sites have either hosted or been involved in malicious activity in the first half of 2008."

"76.5 percent of all emails in circulation contained links to spam sites and/or malicious Web sites. this represents an 18 percent increase over the previous six-month period."

"85 percent of unwanted (spam or malicious) emails contain a link. "

"29 percent of malicious Web attacks included data-stealing code."

"46 percent of data-stealing attacks are conducted over the Web."



Sophos
Security Threat report: 2009

"The scale of this global criminal operation has reached such proportions that Sophos discovers one new infected webpage every 4.5 seconds – 24 hours a day, 365 days a year."

"Web insecurity, notably weakness against automated remote attacks such as SQL injections, will continue to be the primary way of distributing web-borne malware. Cybercriminals can then send innocent-looking spam which link to legitimate, but hacked, webpages. These hacked sites link invisibly to malicious content."



MessageLabs Intelligence:
2008 Annual Security Report

"Complex web-based malware targeting social networking sites and vulnerabilities in legitimate websites became widespread in 2008, resulting in malware being installed onto computers with no user intervention required. The daily number of new websites containing malware rose from ,068 in January to its peak at 5,2 in November. The average number of new websites blocked daily rose to 2,290 in 2008 from ,253 in 2007, largely due to increased attacks using SQL injection techniques."

"In the first half of 2008, vulnerabilities and weak security in Web applications were being exploited by criminals to deploy Web-based malware more widely. New toolkits were able to seek-out Web sites with weak security and target them. Recent examples of these types of attacks include extensive SQL injection attacks able to pollute data-driven Web sites, causing malicious JavaScript to be presented to the sites’ visitors."

"For 2008, the average number of new malicious Web sites blocked each day rose to 2,290, compared with ,253 for 2007. This represents an increase of 82.8% since 2007."

"By June 2008, the average number of malicious Web sites blocked each day rose by 58% to 2,076; taking the threat to its highest level since April 2007. By the second half of 2008, many more malicious Web sites were linked to SQL injection attacks targeted against legitimate, vulnerable Web servers. In July 2008, 83.% of all Web based malware intercepted was new, owing to increased SQL injection attacks. In October 2008, the number of malicious Web sites blocked each day rose further, to its highest level of 5,424."

"Throughout 2008, levels of spyware and adware interceptions have been overshadowed by a shift toward Web-based malware. Web-based malware has now become more attractive to cyber-criminals as they present an opportunity to capitalize on users’ unfamiliarity with the nature of Web-borne threats."


Cisco 2008
Annual Security Report

"In terms of quantity and pervasiveness, the most significant security threats in 2008 involved an online component."

"Top Security Concerns of 2008: Criminals are exploiting vulnerabilities along the entire Web ecosystem to gain control of computers and networks."

"“Invisible threats” (such as hard-to-detect infections of legitimate websites) are making common sense and many traditional security solutions ineffective."

"Online security threats continued their growth in 2008. Online criminals combined spam, phishing, botnets, malware, and malicious or compromised websites to create highly effective blended threats that use multiple online vectors to defraud and com promise the security of Internet users."



APWG
Phishing Activity Trends Report - Q2/2008

"The number of crimeware-spreading password-stealing crimeware rose to a high of 9529 in June, fully 47% higher than the previous record of 6500 in March 2008 and 258% greater than the end of Q2/2008."

"The number of crimeware-spreading URLs detected rose from 4,080 in April to a record 9,529 in June. This rise represented an increase of nearly 47 percent from the previous record of 6,500 in March. 2008. The number at quarter’s end is 258 percent higher than the end of Q2 2007. Websense Chief Technology Officer and APWG Phishing Activity Trends Report contributing analyst Dan Hubbard said that the large boost is attributed mainly to malicious code being utilized in SQL injection attacks."



MITRE
Vulnerability Type Distributions in CVE

"The total number of publicly reported web application vulnerabilities has risen sharply, to the point where they have overtaken buffer overflows. This is probably due to ease of detection and exploitation of web vulnerabilities, combined with the proliferation of low-grade software applications written by inexperienced developers. In 2005 and 2006, cross-site script¬ing (XSS) was number 1, and SQL injection was number 2."


Acunetix

"70% of the websites scanned were found to contain high or medium vulnerabilities. There is an extremely high probability of these vulnerabilities being discovered and manipulated by hackers to steal the sensitive data these organizations store."



Symantec Internet Security Threat Report
Trends for July–December 07

"As a result of these considerations, Symantec has observed that the majority of effective malicious activity has become Web-based: the Web is now the primary conduit for attack activity."

"Site-specific vulnerabilities are perhaps the most telling indication of this trend. These are vulnerabilities that affect custom or proprietary code for a specific Web site. During the last six months of 2007, 11,253 site-specific cross-site scripting vulnerabilities were documented.1 This is considerably higher than the 2,134 traditional vulnerabilities documented by Symantec during this period."

"Site-specific vulnerabilities are also popular with attackers because so few of them are addressed in a timely manner. Of the 11,253 site-specific cross-site scripting vulnerabilities documented during this period, only 473 had been patched by the administrator of the affected Web site. Of the 6,961 site-specific vulnerabilities in the first six months of 2007, only 330 had been fixed at the time of writing. In the rare cases when administrators do fix these vulnerabilities, they are relatively slow to do so. In the second half of 2007, the average patch development time was 52 days, down from an average of 57 days in the first half of 2007."



Trend Micro
June 2008 | Trend Micro Threat Roundup and Forecast—1H 2008

"Trend Micro researchers also predicted that high profile Web sites would become the most sought after attack vectors for criminals to host links to phishing sites. In January 2008, this prediction became reality when the reputable BusinessWeek Web site was attacked, as well as popular U.S. clothing and restaurant sites. Also in early January 2008, several massive SQL injection attacks were launched on thousands of Web pages belonging to Fortune 500 corporations, state government agencies, and educational institutions (p. 9)."

"Drive-by-download attacks are increasing exponentially. In early January 2008, several massive SQL injection attacks were reported that involved over 100,000 compromised pages. These Web pages belonged to Fortune 500 corporations, state government agencies, and educational institutions. The most recent SQL injections involved travel sites, forums using phpBB, and sites using an ASP frontend and SQL database backend—either open source or proprietary."



SecureWorks

"Major Retailers Experience 161% Increase in Attempted Hacker Attacks"

"Attempted SQL injection attacks, a technique that exploits security vulnerabilities in Web applications by inserting malicious SQL code in Web requests, increased significantly in May for our retailers, going from an average of 20 per client per month to 237 per client per month. It then hit a peak in July with 17,000 attempted SQL Injection attacks per retail client and since November has dropped off to normal levels, averaging 18 per client per month."



ScanSafe
September 2008 / 3Q08 Global Threat Report

"74% of all malware blocks in 3Q08 resulted from visits to compromised websites;"

"SQL injection and other forms of website compromise led to steadily increasing malware block volumes throughout the first three quarters of 2008."

"On average, the rate of Web-based malware encountered by corporations increased 338% in 3Q08 compared to 1Q08 and 553% compared to 4Q07."


Thursday, December 18, 2008

OH, I Like Surprises!

Gary McGraw (CTO, Cigital) provides a must-read article this week, Software Security Top 10 Surprises. Gary, along with Brian Chess (Chief Scientist, Fortify) and Sammy Migues (Director, Knowledge Management, Cigital), interviewed nine executives running top software security programs and provided their analysis. I’m always curious to know what the more progressive organizations are doing regarding software security. Some organizations take up a leadership position and the rest will follow -- eventually. I’ve highlighted several snippets that were particularly interesting to me.

“All nine have an internal group devoted to software security that we choose to call the Software Security Group or SSG.”

“...an average percentage of SSG to development of just over 1%.”

Ladies and gentlemen, we have metrics! Many organizations are searching for benchmarks to offer guidance when budgeting resources for software security and finding limited information. Their work, along with the OWASP Security Spending Benchmarks project, plans to fill the void.


“We were surprised to find that only two of the nine organizations we interviewed used Web application firewalls at all.”

Wait for it!

“In our view, the best use of Web application firewalls is to buy some time to fix your security problems properly in the software itself by thwarting an active attack that you are already experiencing.”

Whoa! I am surprised as well, but ironically in the opposite direction. I would have estimated the number of WAF deployments lower than 22% (2 in 9). However, we are speaking of progressive organizations in this case, so perhaps it does make sense. Then Gary, Brian, and Sammy go on to confirm what I’ve been recommending for some time -- use WAFs to quickly reduce the immediate exposure (time-to-fix), then fix the root cause (the code) as time and budget allow.
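
To illustrate the "buy time" idea, here is a minimal sketch of a virtual patch written as Python WSGI middleware. The vulnerable path and parameter are hypothetical and the blocking logic is deliberately crude; a real WAF does vastly more, but the time-to-fix principle is the same.

import re
from urllib.parse import parse_qs

# Hypothetical scenario: production has a known SQL injection in the "id"
# parameter of /report. Block obvious exploit attempts until the code fix
# ships -- buying time, not solving the root cause.
ATTACK = re.compile(r"('|%27|--|;)")

class VirtualPatch:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        if environ.get('PATH_INFO') == '/report':
            params = parse_qs(environ.get('QUERY_STRING', ''))
            if any(ATTACK.search(v) for v in params.get('id', [])):
                start_response('400 Bad Request', [('Content-Type', 'text/plain')])
                return [b'Request blocked']
        return self.app(environ, start_response)

Wrap the production application (app = VirtualPatch(app)) and the known-bad requests stop cold while the development team fixes the query itself.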


“Involving QA in software security is non-trivial... Even the "simple" black box Web testing tools are too hard to use.”

“One sneaky trick to solving this problem is to encapsulate the attackers' perspective in automated tools that can be used by QA. What we learned is that even today's Web application testing tools (badness-ometers of the first order) remain too difficult to use for testers who spend most of their time verifying functional requirements.”

Exactly right. It’s one thing for a security pro to learn to use some security tools, run scanners, and hunt down all manner of esoteric Web application vulnerabilities. It’s quite another to expect a QA person to do the same. QA people are not security experts; they have a different skill set, quite separate from what webappsec requires.


“Unless you understand how potential attacks really work and who really does them, it's impossible to build a secure system.“

Well said, nothing more to add.

“However, though attack information is critical for SSG members, the notion of teaching work-a-day developers how to "think like a bad guy" is not widespread.”

Precisely. Effort is better spent teaching developers how to play defense, a smaller domain of knowledge, and not offense. Leave the offense to the security guys.

“All nine programs we talked to have in-house training curricula, and training is considered the most important software security practice in the two most mature software security initiatives we interviewed.”

In-house security training support is a must-have. An education process that reinforces the organization's development security standards, the available libraries, and other helpful resources is essential. Much better bang for the buck than the generic external courses on offer.

Wednesday, December 17, 2008

History Repeating Itself

“All of this has happened before, and all of this will happen again,” is a memorable quote from Battlestar Galactica (awesome show), meaning history tends to repeat itself in a prophetic sort of way. Having been involved in the evolution of Web application security for the better part of a decade, I couldn’t help but notice the strikingly similar path the field is taking to that of network security. Incidents (hacks) prompt technology research and over time drive business cost justification. Follow-on attacks, best-practices, and regulation directly impact business models and the style of solution deployment.



To get a better visual comparison I created a timeline mapping key events. What’s interesting is that Web application security closely matches network security if you shift by 8 years. We’ll see what the next couple of years have in store.


Network Security
(1988) The Morris Worm, the first computer worm distributed over the Internet, infects over 6,000 hosts. The incident prompts research into network firewall technology.

(1992) The first commercial packet-filter firewall (DEC SEAL) was shipped. Marketed as a perimeter security device able to thwart remote attacks targeting unpatched vulnerabilities, including the Morris Worm.

(1994 - 1995) Given that firewalls were not widely deployed, their costs not yet justified, savvy network administrators sought tools to identify and patch vulnerabilities. ISS released Internet Scanner, a commercial network security scanner. Security Analysis Tool for Auditing Networks (SATAN) was released for free.

(1996) Network security scanners revealed the need for more mature patch management products as security updates were required frequently. PATCHLINK UPDATE, a commercial patch management product, was released as a solution to the problem.

(1996) The costs for commercial patch management software and the potential downtime held back the technology adoption. Mature free patch management solutions were also not available. In an environment where few systems were diligently patched, hackers successfully exploited large blocks of undefended networks.

(1997) Broadly targeted attacks highlighted the need for additional security controls. The free software firewall, Linux IPChains, became a viable alternative for commercial products. Many enterprises chose perimeter firewalls before deploying patch management solutions because they were often seen as a faster, simpler, and more cost effective approach.

(1998) With the wide availability of network scanners (Nessus), increasing deployment of firewalls, and a proven need to defend against remote attacks - the environment created a need for high-end consultative network penetration testing services.

(1998) To ease the challenge of keeping up-to-date on security patches, Windows Update was first introduced with Windows 98.

(2001) Code Red and Code Red II infected hundreds of thousands of vulnerable Microsoft IIS servers within days of their release. The incident highlighted the need for increased adoption of enterprise patch management, if only on publicly available hosts.

(2002) Bill Gates: Trustworthy Computing Memo.

(2003) SQL Slammer and the Blaster worm demonstrate the porous state of network security by exploiting tens of thousands of unpatched hosts, even those located within the network perimeter. The incident also highlighted the need for host-based firewall deployments and patch management for all hosts, public and private.

(2003) To keep pace with the increased frequency of remote attacks and patching requirements, adoption of in-house network vulnerability scanning programs increases to offset the prohibitive costs of consultant penetration tests.

(2004) Windows XP Service Pack 2 ships with Windows Firewall as a default security feature to protect unpatched hosts, which may or may not be protected by a perimeter firewall. Shortly thereafter firewalls became fairly ubiquitous for any Internet-connected host.

(2005) Network vulnerability scanning moves towards ubiquity, but the costs of software and management are prohibitive. This, coupled with compliance requirements, led to the increased adoption of Managed Security Service and Software-as-a-Service providers (Qualys / ScanAlert) to achieve a lower Total Cost of Ownership.

(2006) The PCI Security Standards Council is formed, uniting the payment brands' disparate initiatives and enforcing the requirements of vulnerability management, patch management, and firewall ubiquity.


Web Application Security
(1996) The PHF exploit, one of the more notorious CGI input validation vulnerabilities, was used to compromise untold numbers of Web servers. The incident, coupled with other possible Web hacking techniques, prompts research into Web Application Firewall technology.

(1999) The first commercial Web Application Firewall (AppShield) was shipped. Marketed as a perimeter application security device able to thwart attacks targeting unmitigated vulnerabilities, including the PHF exploit.

(2000 - 2001) Given that Web Application Firewalls were not widely deployed, their costs not yet justified, security professionals sought tools to identify Web application vulnerabilities. Commercial Web application scanners (AppScan) became commercially available, as well as open source versions such as Whisker.

(2001) Web application scanners, and published research, reveal the need for more secure Web application software. The Open Web Application Security Project (http://www.owasp.org/index.php/Main_Page) was founded as a community effort to raise awareness of Web application security.

(2001) Code Red and Code Red II infected hundreds of thousands of vulnerable Microsoft IIS servers within days of their release. The incident highlighted the need for increased adoption of enterprise patch management, if only on publicly available hosts.

(2002) Broadly targeted Web application attacks highlighted the need for additional security controls. The free Web Application Firewall, ModSecurity, became a viable alternative to commercial products. Enterprises began choosing WAFs before secure software initiatives because they were often seen as a faster, simpler, and more cost-effective approach.

(2004) The number of attack types, and their esoteric naming conventions, became vast. The OWASP Top Ten was released to highlight and describe the most prevalent and critical Web application security vulnerabilities.

(2005) With the wide availability of Web application scanners & other free tools, increasing deployment of Web Application Firewalls, and a proven need to defend against remote Web application attacks - the environment created a need for high-end consultative Web application penetration testing services.

(2005) The Samy Worm, the first major XSS worm, infected over 1 million MySpace profiles in under 24 hours causing an outage on the largest social network. The incident highlighted the need for more secure Web application software.

(2007) Mass SQL Injection attacks begin to appear, infecting website databases with browser-based malware exploits. Visitors to infected websites would have their machines compromised automatically. The incident highlighted the need for more secure Web application software (the root-cause fix sketched at the end of this post), with Web Application Firewalls used as stop gap measures.

(2008) The deadline for the Payment Card Industry’s Data Security Standard section 6.6, requiring credit card merchants to conduct code reviews or install Web Application Firewalls, expired. The requirement fuels the need for both solutions, commercial and open source.

(2008 - 2009) To keep pace with the increased frequency of remote attacks, the rate of Web application change, and the frequency of vulnerability testing required by PCI-DSS 6.6, the adoption of in-house Web application vulnerability scanning programs increases to offset the prohibitive costs of consultant penetration tests.

Next major incident?

Broad SaaS Network Vulnerability Scanning adoption?

Broad WAF Adoption?

Broad SDL Adoption?
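
Speaking of SDL adoption, the root-cause fix that the WAF stop gaps above keep buying time for is, at its core, parameterized queries. A minimal sketch using Python's bundled sqlite3 module -- any parameterized database API behaves the same way, and the schema and payload are invented for illustration.

import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-...')")

def lookup_vulnerable(name):
    # String concatenation: name = "x' OR '1'='1" returns every row.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats input strictly as data,
    # so the same payload matches nothing.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_vulnerable("x' OR '1'='1"))  # leaks all rows
print(lookup_safe("x' OR '1'='1"))        # []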

Thursday, December 11, 2008

Budgeting for Web Application Security

“Budgeting” is a word I’ve been hearing a lot of questions about recently, which is another data point demonstrating that Web application security and software security are increasingly becoming top-of-mind issues. The challenge many security professionals face is justifying the line-item expense to upper management. Upper management often asks, “How much do we need to spend?” well before “What do we need to spend it on?” I was talking with Boaz Gelbord (Executive Director of Information Security of Wireless Generation) and several others about this, and they provided keen insight on the subject. I have identified the following approaches to justifying security spending:

1) Risk Mitigation
"If we spend $X on Y, we'll reduce the risk of a loss of $A by B%."

2) Due Diligence
"We must spend $X on Y because it's an industry best-practice."

3) Incident Response
"We must spend $X on Y so that Z never happens again."

4) Regulatory Compliance
"We must spend $X on Y because PCI-DSS says so."

5) Competitive Advantage
"We must spend $X on Y to make the customer happy."


Risk Mitigation
Risk mitigation is the efficient, forward-thinking, and scientific approach to security. Many extremely bright people are pushing this methodology forward, especially through SecurityMetrics.org. I’ve had the pleasure of rubbing shoulders with many of them at Metricon events, including Andrew Jaquith, author of Security Metrics: Replacing Fear, Uncertainty, and Doubt, who does a stellar job of walking the reader through this world. The fundamental idea is to find an acceptable balance between the amount of investment and the risk reduction that represents "good enough."

While measuring investment against risk makes sense, it is also extremely difficult to quantify in Web application and software security given our industry’s lack of data. We don’t yet have strong data tying loss to root cause, and those compromised closely guard the details. Only recently, through the WhiteHat Security and WASC statistics projects, are we getting a wide look at who is exposed to what. Also remember that just because a vulnerability exists doesn’t mean it’ll get exploited. Possible != Probable. My hope is that this approach continues to mature and becomes a widely accepted reality.
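
For those who want to see the arithmetic, the Risk Mitigation formula above ("spend $X on Y to reduce the risk of a loss of $A by B%") maps directly onto the classic Annualized Loss Expectancy calculation. A minimal sketch, with every number invented purely for illustration:

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualized Loss Expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before, risk_reduction_pct, annual_cost):
    """Return on Security Investment: (risk reduced - cost) / cost."""
    risk_reduced = ale_before * risk_reduction_pct
    return (risk_reduced - annual_cost) / annual_cost

# Hypothetical numbers: a breach costs $500K, expected once every 2 years,
# and a $50K/yr program cuts that risk by 40%.
before = ale(500_000, 0.5)          # $250K/yr exposure
print(rosi(before, 0.40, 50_000))   # 1.0 -> each dollar spent avoids $1 of expected loss

The math is the easy part; the industry-data problem described above is precisely that we can't yet plug in defensible values for the loss magnitude or the probability.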


Due Diligence
Due diligence is another approach to security budgeting that considers the ramifications of a data compromise should one occur. Among other things, upper management will want to assure customers and business partners that resources were being invested into the security program in line with industry standards. At the same time, a budget executive may not approve a larger-than-average security budget in the event no security incident occurs during the fiscal year. This common scenario prompts the question, as Boaz puts it: “What is industry standard for security spending in Web application security or software development?” For example, with regard to IT backend systems, there are studies that suggest 3-10% or higher for security spending out of the total budget. In this context, the "security needs to be embedded into every part of software development" mantra isn’t helpful.

To the best of my knowledge there has been no formal study or survey on how much organizations are spending or should be spending regarding security in software development (or webappsec specifically). What most organizations typically do is outline the best-practices (architectural reviews, security QA, pen-testing, WAF, etc.) they should be implementing relative to peers. Next, they identify the appropriate solutions and obtain rates from the vendor.


Incident Response
Few things free up security dollars faster than a data compromise that is publicly attributed to the organization. That last part is really important. It is possible for an organization to ignore a general Web server compromise, which it may not even know about -- for example, when the bad guys are only interested in using a company's resources to improve the Google Rank of some website (i.e. Al Gore’s An Inconvenient Truth blog). Things are much different, though, when a revenue-generating website is SQL Injected and begins delivering malware to its visitors, affecting millions of users. The result is the website gets blacklisted by the search engines and traffic sinks like a rock. Now we are talking real quantifiable losses that get people's attention.

At this point an organization is in triage mode. How do they best remove the malware, get off the blacklists (which may take days or weeks), and continue business operations? This is also a good time for a security professional, who is given license to pursue projects that ensure a similar event never happens again. They’ll be given the necessary resources to be proactive, and they then have the power to prioritize activities that may fall into the category of Risk Mitigation without having to justify “why.”


Regulatory Compliance
Regulatory compliance is a huge justification lever for security professionals. Compliance pressure enables them to do the things they want/need to do and gives them the ammunition to justify their requests. For many security professionals, this is of course a very welcome development. And again, as Boaz lays it out, “it is much easier for me to justify costs that are legally mandated instead of pointing to threats that in practice are not very likely to materialize. As a citizen/user it is also a positive development, since without legal requirements I think that the already poor state of software/application security would get even worse.” Still, the challenge is that most regulations do not indicate how far organizations must take these measures, so the mileage will certainly vary.


Competitive Advantage
Competitive advantages. Every organization needs them and wants more of them. Security that can be demonstrated or made visible can be used as a competitive advantage. Sure, security adds cost to the goods, but that cost can be capitalized on when the customer is assuming a level of risk. As Bruce Schneier says, “you can feel secure even though you're not, and you can be secure even though you don't feel it.” For example, the McAfee Hacker Safe or secured-via-SSL logos can be seen as one of those things that make consumers “feel” safer when conducting e-commerce. A firm can also engage a reputable security firm for a security assessment that can be shared. Deciding which to go with is directly proportional to the customers' level of understanding.


At the end of the day no one can say if any one way is the right way, and declaring one was not my intention. Instead, I wanted to document the approaches I’ve seen and gain a better understanding of what works best for a given environment. In order for Web application security and software security to improve, more dollars need to be allocated, so we must assist the stakeholders in that process as best we can.

Sixth Quarterly Website Security Statistics Report

Since completing my 2008 speaking tour, I’ve had more time to focus on projects back at the office, which is why posts have been a little slow. One particular project, WhiteHat Security’s Sixth Quarterly Website Security Statistics Report (reg required), is now available for download. There is a mountain of data within its pages providing a real-world perspective on Web application security from the hundreds of vulnerability assessments we perform weekly. In this report we added more emphasis on separating historical trends from the current state of things. For instance, historically about 8 out of 10 websites have had at least one serious vulnerability, while today 6 out of 10 still do. Time-to-fix metrics are shortening. So progress is definitely being made, but we still have a long way to go. Enjoy!

As always, would love to hear comments about additional metrics we could add.

Data Overview
– 877 total websites
– Vast majority of websites assessed for vulnerabilities weekly
– Vulnerabilities classified according to WASC Threat Classification
– Vulnerability severity naming convention aligns with PCI-DSS
– Obtained between January 1, 2006 and December 1, 2008

Key Findings
– Total identified vulnerabilities (open & closed): 14,718
– Current open vulnerabilities: 5,283 (64% resolved)
– Historically, 82% of assessed websites have had at least one issue of HIGH, CRITICAL, or URGENT severity
– 63% of assessed websites currently have issues of HIGH, CRITICAL, or URGENT severity
– Historically, websites average 17 vulnerabilities identified during the lifetime of the assessment cycle
– Websites currently average 6 open vulnerabilities
– Cross-Site Request Forgery gained two spots in the Top Ten moving to #8
– Vulnerability time-to-fix metrics are slowly improving, but typically still require weeks to months to achieve resolution
– Roughly 50% of the most prevalent Urgent severity issues have been resolved