Tuesday, December 30, 2008

Silver Tail Systems tackles Business Logic Flaws

I first started blogging about business logic flaws back in 2006, at a time when there was an overemphasis on technical vulnerabilities such as XSS and SQLi -- issues black-box scanners could identify, while the rest were conveniently ignored. Many insiders knew, though few said so publicly, that serious vulnerabilities remained unchecked even after a clean scan report. Bad guys could monetize that lack of visibility heavily -- and they have, so it is no longer a secret. This type of fraud has resulted in losses of 5, 6, and even 7 figure sums in particular instances. Organizations now want and need detection solutions on the back-end, in addition to vulnerability assessments on the front-end, capable of uncovering those taking advantage of business logic flaws.

That is where Silver Tail Systems, a new Silicon Valley start-up I’ve been following, comes into play. Founded by Laura Mather (Ph.D.) and Mike Eynon, Silver Tail is an entire company dedicated solely to addressing what they call “business process abuse” -- essentially the same thing as business logic flaws. If anyone has the pedigree to successfully apply technology to this problem, they do. Their backgrounds are no joke. Do not make the mistake of thinking this product is a Web Application Firewall; it’s not. It is something different entirely, more in line with business analytics with a focus on security.

Silver Tail Forensics exposes the way a website is being used – through user, page, and IP statistics. The tool allows a business owner to explore the use of his or her site by displaying the usage of the site on a per page, per user, or per IP level. A search interface provides deep access into the activity on the website using any dimension. When suspicious activity is identified, Silver Tail Forensics enables the business user or security analyst to obtain a full understanding of the bad actors and their specific behaviors and how those behaviors differ from legitimate users.

This is a company worth tracking and a blog worth following.

Friday, December 26, 2008

It’s unanimous, Web application security has arrived

It’s unanimous. Web application security is the #1 avenue of attack according to basically every industry data security report available (IBM, Websense, Sophos, MessageLabs, Cisco, APWG, MITRE, Symantec, Trend Micro, SecureWorks, ScanSafe, IC3). This is in addition to reports specifically focusing on custom Web application vulnerabilities (WhiteHat Security, WASC, Acunetix). SQL Injection and Cross-Site Scripting are routinely cited as the biggest issues, the ones we can’t apply patches to defend against. Perhaps what we’ve learned in 2008, as pointed out by Gunnar Peterson and Gary McGraw, is that we’re spending on the wrong problem: roughly $150MM annually in software security products & services versus the lopsided billions on network security. 2009 will give us another opportunity to make a difference.

From the mountain of statistics available I've saved several interesting quotes to reference in 2009.

Internet Crime Complaint Center (IC3)
Web Site Attack Preventative Measures

"Over the past year, there has been a considerable spike in cyber attacks against the financial services and the online retail industry."

1. They identify Web sites that are vulnerable to SQL injection. They appear to target MSSQL only.
2. They use "xp_cmdshell", an extended procedure installed by default on MSSQL, to download their hacker tools to the compromised MSSQL server.
3. They obtain valid Windows credentials by using fgdump or a similar tool.
4. They install network "sniffers" to identify card data and systems involved in processing credit card transactions.
5. They install backdoors that "beacon" periodically to their command and control servers, allowing surreptitious access to the compromised networks.
6. They target databases, Hardware Security Modules (HSMs), and processing applications in an effort to obtain credit card data or brute-force ATM PINs.
7. They use WinRAR to compress the information they pilfer from the compromised networks.
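Step 1 is the linchpin of the entire chain above, and it is the one step preventable at the application layer. As a minimal sketch (using Python's built-in sqlite3 module as a stand-in for MSSQL; table and data are made up for illustration), here is the difference between a query built by string concatenation and a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    return conn.execute(
        "SELECT card FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver binds the value; it is never parsed as SQL.
    return conn.execute(
        "SELECT card FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every card number in the table
print(lookup_safe(payload))    # matches nothing
```

The same tautology trick ('1'='1') is what makes the unsafe version return every row, which is exactly the kind of probe the IC3 advisory describes.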



WhiteHat Security
Sixth Quarterly Website Security Statistics Report


"finds 82 percent of websites have had at least one security issue, with 63 percent still having issues of high, critical or urgent severity. "

"Vulnerability time-to-fix metrics are slowly improving, but continue to show significant room for improvement, typically requiring weeks to months to achieve resolution. Only about 50 percent of the most prevalent urgent severity issues were resolved during the assessment time frame."



Web Application Security Consortium (WASC)
Statistics Project 2007


"Data analysis shows that more than 7% of analyzed sites can be compromised automatically. About 7.72% applications had a high severity vulnerability detected during automated scanning (P. 1). Detailed manual and automated assessment using white and black box methods shows that probability to detect high severity vulnerability reaches 96.85%."

"The most prevalent vulnerabilities are Cross-Site Scripting, Information Leakage, SQL Injection and Predictable Resource Location (P. 2, P. 3). As a rule, Cross-Site Scripting and SQL Injection vulnerabilities appears due to system design errors, Information Leakage and Predictable Resource Location are often connected with improper system administration (for example, weak access control)."


IBM Internet Security Systems
X-Force® 2008 Mid-Year Trend Statistics

"The number of vulnerabilities affecting Web applications has grown at a staggering rate. From 2006 to the first half of 2008, vulnerabilities affecting Web server applications accounted for 51 percent of all vulnerability disclosures."

"The predominate types of vulnerabilities affecting Web applications are cross-site scripting (XSS), SQL injection, and file include vulnerabilities. In the past few years, cross-site scripting has been the predominant type of Web application vulnerability, but the first half of 2008 saw a marked rise in SQL injection disclosures, more than doubling the number of vulnerabilities seen on average over the same time period in 2007."


"The number of client-side vulnerabilities with public exploits has risen dramatically, from less than 5 percent in 2004 to almost 30 percent in the first half of 2008. "

"In the first half of 2008, 94 percent of public exploits affecting Web browser-related vulnerabilities were released on the same day as the disclosure."

"Over the past few years, the focus of endpoint exploitation has dramatically shifted from the operating system to the Web browser and multimedia applications. "



Websense Security Labs™
State of Internet Security
Q1 – Q2, 2008

"75 percent of Web sites with malicious code are legitimate sites that have been compromised. This represents an almost 50 percent increase over the previous six-month period."

"60 percent of the top 100 most popular Web sites have either hosted or been involved in malicious activity in the first half of 2008."

"76.5 percent of all emails in circulation contained links to spam sites and/or malicious Web sites. this represents an 18 percent increase over the previous six-month period."

"85 percent of unwanted (spam or malicious) emails contain a link. "

"29 percent of malicious Web attacks included data-stealing code."

"46 percent of data-stealing attacks are conducted over the Web."



Sophos
Security Threat Report: 2009

"The scale of this global criminal operation has reached such proportions that Sophos discovers one new infected webpage every 4.5 seconds – 24 hours a day, 365 days a year."

"Web insecurity, notably weakness against automated remote attacks such as SQL injections, will continue to be the primary way of distributing web-borne malware. Cybercriminals can then send innocent-looking spam which link to legitimate, but hacked, webpages. These hacked sites link invisibly to malicious content."



MessageLabs Intelligence:
2008 Annual Security Report

"Complex web-based malware targeting social networking sites and vulnerabilities in legitimate websites became widespread in 2008, resulting in malware being installed onto computers with no user intervention required. The daily number of new websites containing malware rose from ,068 in January to its peak at 5,2 in November. The average number of new websites blocked daily rose to 2,290 in 2008 from 1,253 in 2007, largely due to increased attacks using SQL injection techniques."

"In the first half of 2008, vulnerabilities and weak security in Web applications were being exploited by criminals to deploy Web-based malware more widely. New toolkits were able to seek-out Web sites with weak security and target them. Recent examples of these types of attacks include extensive SQL injection attacks able to pollute data-driven Web sites, causing malicious JavaScript to be presented to the sites’ visitors."

"For 2008, the average number of new malicious Web sites blocked each day rose to 2,290, compared with 1,253 for 2007. This represents an increase of 82.8% since 2007."

"By June 2008, the average number of malicious Web sites blocked each day rose by 58% to 2,076; taking the threat to its highest level since April 2007. By the second half of 2008, many more malicious Web sites were linked to SQL injection attacks targeted against legitimate, vulnerable Web servers. In July 2008, 83.% of all Web based malware intercepted was new, owing to increased SQL injection attacks. In October 2008, the number of malicious Web sites blocked each day rose further, to its highest level of 5,424."

"Throughout 2008, levels of spyware and adware interceptions have been overshadowed by a shift toward Web-based malware. Web-based malware has now become more attractive to cyber-criminals as they present an opportunity to capitalize on users’ unfamiliarity with the nature of Web-borne threats."


Cisco 2008
Annual Security Report

"In terms of quantity and pervasiveness, the most significant security threats in 2008 involved an online component."

"Top Security Concerns of 2008: Criminals are exploiting vulnerabilities along the entire Web ecosystem to gain control of computers and networks."

"“Invisible threats” (such as hard-to-detect infections of legitimate websites) are making common sense and many traditional security solutions ineffective."

"Online security threats continued their growth in 2008. Online criminals combined spam, phishing, botnets, malware, and malicious or compromised websites to create highly effective blended threats that use multiple online vectors to defraud and compromise the security of Internet users."



APWG
Phishing Activity Trends Report - Q2/2008

"The number of crimeware-spreading URLs rose to a high of 9,529 in June, fully 47% higher than the previous record of 6,500 in March 2008 and 258% greater than the end of Q2/2007."

"The number of crimeware-spreading URLs detected rose from 4,080 in April to a record 9,529 in June. This rise represented an increase of nearly 47 percent from the previous record of 6,500 in March 2008. The number at quarter’s end is 258 percent higher than the end of Q2 2007. Websense Chief Technology Officer and APWG Phishing Activity Trends Report contributing analyst Dan Hubbard said that the large boost is attributed mainly to malicious code being utilized in SQL injection attacks."



MITRE
Vulnerability Type Distributions in CVE

"The total number of publicly reported web application vulnerabilities has risen sharply, to the point where they have overtaken buffer overflows. This is probably due to ease of detection and exploitation of web vulnerabilities, combined with the proliferation of low-grade software applications written by inexperienced developers. In 2005 and 2006, cross-site scripting (XSS) was number 1, and SQL injection was number 2."


Acunetix

"70% of the websites scanned were found to contain high or medium vulnerabilities. There is an extremely high probability of these vulnerabilities being discovered and manipulated by hackers to steal the sensitive data these organizations store."



Symantec Internet Security Threat Report
Trends for July–December 07

"As a result of these considerations, Symantec has observed that the majority of effective malicious activity has become Web-based: the Web is now the primary conduit for attack activity."

"Site-specific vulnerabilities are perhaps the most telling indication of this trend. These are vulnerabilities that affect custom or proprietary code for a specific Web site. During the last six months of 2007, 11,253 site-specific cross-site scripting vulnerabilities were documented. This is considerably higher than the 2,134 traditional vulnerabilities documented by Symantec during this period."

"Site-specific vulnerabilities are also popular with attackers because so few of them are addressed in a timely manner. Of the 11,253 site-specific cross-site scripting vulnerabilities documented during this period, only 473 had been patched by the administrator of the affected Web site. Of the 6,961 site-specific vulnerabilities in the first six months of 2007, only 330 had been fixed at the time of writing. In the rare cases when administrators do fix these vulnerabilities, they are relatively slow to do so. In the second half of 2007, the average patch development time was 52 days, down from an average of 57 days in the first half of 2007."



Trend Micro
June 2008 | Trend Micro Threat Roundup and Forecast—1H 2008

"Trend Micro researchers also predicted that high profile Web sites would become the most sought after attack vectors for criminals to host links to phishing sites. In January 2008, this prediction became reality when the reputable BusinessWeek Web site was attacked, as well as popular U.S. clothing and restaurant sites. Also in early January 2008, several massive SQL injection attacks were launched on thousands of Web pages belonging to Fortune 500 corporations, state government agencies, and educational institutions (p. 9)."

"Drive-by-download attacks are increasing exponentially. In early January 2008, several massive SQL injection attacks were reported that involved over 100,000 compromised pages. These Web pages belonged to Fortune 500 corporations, state government agencies, and educational institutions. The most recent SQL injections involved travel sites, forums using phpBB, and sites using an ASP frontend and SQL database backend—either open source or proprietary."



SecureWorks

"Major Retailers Experience 161% Increase in Attempted Hacker Attacks"

"Attempted SQL injection attacks, a technique that exploits security vulnerabilities in Web applications by inserting malicious SQL code in Web requests, increased significantly in May for our retailers, going from an average of 20 per client per month to 237 per client per month. It then hit a peak in July with 17,000 attempted SQL Injection attacks per retail client and since November has dropped off to normal levels, averaging 18 per client per month."



ScanSafe
September 2008 / 3Q08 Global Threat Report

"74% of all malware blocks in 3Q08 resulted from visits to compromised websites;"

"SQL injection and other forms of website compromise led to steadily increasing malware block volumes throughout the first three quarters of 2008."

"On average, the rate of Web-based malware encountered by corporations increased 338% in 3Q08 compared to 1Q08 and 553% compared to 4Q07."


Thursday, December 18, 2008

OH, I Like Surprises!

Gary McGraw (CTO, Cigital) provides a must-read article this week, Software Security Top 10 Surprises. Gary, along with Brian Chess (Chief Scientist, Fortify) and Sammy Migues (Director, Knowledge Management, Cigital), interviewed nine executives running top software security programs and provided their analysis. I’m always curious to know what the more progressive organizations are doing regarding software security. Some organizations take up a leadership position and the rest will follow -- eventually. I’ve highlighted several snippets that were particularly interesting to me.

“All nine have an internal group devoted to software security that we choose to call the Software Security Group or SSG.”

“...an average percentage of SSG to development of just over 1%.”

Ladies and gentlemen, we have metrics! Many organizations are searching for benchmarks to guide their budgeting of resources for software security and finding limited information. Their work, along with the OWASP Security Spending Benchmarks project, should help fill the void.


“We were surprised to find that only two of the nine organizations we interviewed used Web application firewalls at all.”

Wait for it!

“In our view, the best use of Web application firewalls is to buy some time to fix your security problems properly in the software itself by thwarting an active attack that you are already experiencing.”

Whoa! I am surprised as well, but ironically in the opposite direction. I would have estimated the number of WAF deployments lower than 22% (2 in 9). However, we are speaking of progressive organizations in this case, so perhaps it does make sense. Then Gary, Brian, and Sammy go on to confirm what I’ve been recommending for some time -- use WAFs to quickly reduce the immediate exposure (time-to-fix), then fix the root cause (the code) as time and budget allow.
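To make the "buy some time" idea concrete, here is a toy sketch of virtual patching as a WSGI middleware. The signature list and function names are hypothetical, and real WAFs (ModSecurity being the free example) use vastly richer rule sets; this only illustrates the shape of the idea -- block the known attack pattern now, fix the code later.

```python
import re
from urllib.parse import unquote_plus

# Toy signatures only -- a production WAF rule set is far richer than this.
SIGNATURES = [
    re.compile(r"(?i)('|%27)\s*or\s*'1'\s*=\s*'1"),  # crude SQL injection probe
    re.compile(r"(?i)<script"),                      # crude XSS probe
]

def is_blocked(query_string):
    # Decode the query string, then check it against every signature.
    decoded = unquote_plus(query_string)
    return any(sig.search(decoded) for sig in SIGNATURES)

def waf_middleware(app):
    # WSGI wrapper: reject matching requests before they ever reach the
    # vulnerable application code, buying time while the root cause is fixed.
    def wrapped(environ, start_response):
        if is_blocked(environ.get("QUERY_STRING", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return app(environ, start_response)
    return wrapped
```

The obvious weakness, and the reason it is a stopgap rather than a fix, is that a signature list only blocks what you thought to write a signature for.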


“Involving QA in software security is non-trivial... Even the "simple" black box Web testing tools are too hard to use.”

“One sneaky trick to solving this problem is to encapsulate the attackers' perspective in automated tools that can be used by QA. What we learned is that even today's Web application testing tools (badness-ometers of the first order) remain too difficult to use for testers who spend most of their time verifying functional requirements.”

Exactly right. It’s one thing for a security pro to learn to use some security tools, run scanners, and hunt down all manner of esoteric Web application vulnerabilities. It’s quite another to expect a QA person to do the same. QA people are not security experts; they have a different skill set, quite separate from what webappsec requires.


“Unless you understand how potential attacks really work and who really does them, it's impossible to build a secure system.“

Well said, nothing more to add.

“However, though attack information is critical for SSG members, the notion of teaching work-a-day developers how to "think like a bad guy" is not widespread.”

Precisely. Effort is better spent teaching developers how to play defense, a smaller domain of knowledge, and not offense. Leave the offense to the security guys.

“All nine programs we talked to have in-house training curricula, and training is considered the most important software security practice in the two most mature software security initiatives we interviewed.”

In-house security training support is a must-have. An education process that reinforces the organization’s development security standards, the libraries available, and other helpful resources is essential. Much better bang for the buck than the generic external courses offered.

Wednesday, December 17, 2008

History Repeating Itself

“All of this has happened before, and all of this will happen again,” is a memorable quote from Battlestar Galactica (awesome show), meaning history tends to repeat itself in a prophetic sort of way. Having been involved in the evolution of Web application security for the better part of a decade, I couldn’t help but notice the strikingly similar path the field is taking to that of network security. Incidents (hacks) prompt technology research and over time drive business cost justification. Follow-on attacks, best-practices, and regulation directly impact business models and the style of solution deployment.



To get a better visual comparison I created a timeline mapping key events. What’s interesting is that Web application security closely matches network security if you shift by 8 years. We’ll see what the next couple of years has in store.


Network Security
(1988) The Morris Worm, the first computer worm distributed over the Internet, infects over 6,000 hosts. The incident prompts research into network firewall technology.

(1992) The first commercial packet-filter firewall (DEC SEAL) was shipped. Marketed as a perimeter security device able to thwart remote attacks targeting unpatched vulnerabilities, including the Morris Worm.

(1994 - 1995) Given that firewalls were not widely deployed, their costs not yet justified, savvy network administrators sought tools to identify and patch vulnerabilities. ISS released Internet Scanner, a commercial network security scanner. Security Analysis Tool for Auditing Networks (SATAN) was released for free.

(1996) Network security scanners revealed the need for more mature patch management products as security updates were required frequently. PATCHLINK UPDATE, a commercial patch management product, was released as a solution to the problem.

(1996) The costs for commercial patch management software and the potential downtime held back the technology adoption. Mature free patch management solutions were also not available. In an environment where few systems were diligently patched, hackers successfully exploited large blocks of undefended networks.

(1997) Broadly targeted attacks highlighted the need for additional security controls. The free software firewall, Linux IPChains, became a viable alternative for commercial products. Many enterprises chose perimeter firewalls before deploying patch management solutions because they were often seen as a faster, simpler, and more cost effective approach.

(1998) With the wide availability of network scanners (Nessus), increasing deployment of firewalls, and a proven need to defend against remote attacks - the environment created a need for high-end consultative network penetration testing services.

(1998) To ease the challenge of keeping up-to-date on security patches, Windows Update was first introduced with Windows 98.

(2001) Code Red and Code Red II infected hundreds of thousands of vulnerable Microsoft IIS servers within days of their release. The incident highlighted the need for increased adoption of enterprise patch management, if only on publicly available hosts.

(2002) Bill Gates: Trustworthy Computing Memo.

(2003) SQL Slammer and the Blaster worm demonstrate the porous state of network security by exploiting tens of thousands of unpatched hosts, even those located within the network perimeter. The incident also highlighted the need for host-based firewall deployments and patch management for all hosts, public and private.

(2003) To keep pace with the increased frequency of remote attacks and patching requirements, adoption of in-house network vulnerability scanning programs increased to offset the prohibitive costs of consultant penetration tests.

(2004) Windows XP Service Pack 2 ships with Windows Firewall as a default security feature to protect unpatched hosts, which may or may not be protected by a perimeter firewall. Shortly thereafter firewalls become fairly ubiquitous for any Internet-connected host.

(2005) Network vulnerability scanning moves towards ubiquity, but the costs of software and management are prohibitive. This, coupled with compliance requirements, led to the increased adoption of Managed Security Service and Software-as-a-Service (Qualys / ScanAlert) providers to achieve a lower Total Cost of Ownership.

(2006) PCI Security Standards Council formed, uniting the payment brands’ disparate initiatives and enforcing the requirements of vulnerability management, patch management, and firewall ubiquity.


Web Application Security
(1996) The PHF exploit, one of the more notorious CGI input validation vulnerabilities, was used to compromise untold numbers of Web servers. The incident, coupled with other possible Web hacking techniques, prompts research into Web Application Firewall technology.

(1999) The first commercial Web Application Firewalls (AppShield) were shipped. Marketed as perimeter application security devices able to thwart attacks targeting unmitigated vulnerabilities, including the PHF exploit.

(2000 - 2001) Given that Web Application Firewalls were not widely deployed, their costs not yet justified, security professionals sought tools to identify Web application vulnerabilities. Commercial Web application scanners (AppScan) became commercially available, as well as open source versions such as Whisker.

(2001) Web application scanners, and published research, reveal the need for more secure Web application software. The Open Web Application Security Project (OWASP) was founded as a community effort to raise awareness of Web application security. - http://www.owasp.org/index.php/Main_Page

(2001) Code Red and Code Red II infected hundreds of thousands of vulnerable Microsoft IIS servers within days of their release. The incident highlighted the need for increased adoption of enterprise patch management, if only on publicly available hosts.

(2002) Broadly targeted Web application attacks highlighted the need for additional security controls. The free Web Application Firewall, ModSecurity, became a viable alternative for commercial products. Enterprises began choosing WAFs before secure software initiatives because they were often seen as a faster, simpler, and more cost effective approach.

(2004) The number of types of attacks and esoteric naming conventions became vast. The OWASP Top Ten was released to highlight and describe the most prevalent and critical Web application security vulnerabilities.

(2005) With the wide availability of Web application scanners & other free tools, increasing deployment of Web Application Firewalls, and a proven need to defend against remote Web application attacks - the environment created a need for high-end consultative Web application penetration testing services.

(2005) The Samy Worm, the first major XSS worm, infected over 1 million MySpace profiles in under 24 hours causing an outage on the largest social network. The incident highlighted the need for more secure Web application software.

(2007) Mass SQL Injection attacks began to appear, infecting website databases with browser-based malware exploits. Visitors to infected websites would have their machines compromised automatically. The incident highlighted the need for more secure Web application software, with Web Application Firewalls used as stopgap measures.

(2008) The deadline for the Payment Card Industry Data Security Standard section 6.6, requiring credit card merchants to conduct code reviews or install Web Application Firewalls, expires. The requirement fuels the need for both solutions, commercial and open source.

(2008 - 2009) To keep pace with the increased frequency of remote attacks, the rate of Web application change, and the frequency of vulnerability testing required by PCI-DSS 6.6, the adoption of in-house Web application vulnerability scanning programs increases to offset the prohibitive costs of consultant penetration tests.

Next major incident?

Broad SaaS Network Vulnerability Scanning adoption?

Broad WAF Adoption?

Broad SDL Adoption?

Thursday, December 11, 2008

Budgeting for Web Application Security

“Budgeting” is a word I’ve been hearing a lot of questions about recently, which is another data point demonstrating that Web application security and software security are increasingly becoming a top-of-mind issue. The challenge that many security professionals face is justifying the line item expense to upper management. Upper management often asks, “How much do we need to spend?” well before “What do we need to spend it on?” I was talking with Boaz Gelbord (Executive Director of Information Security of Wireless Generation) and several others about this, and they provided keen insight on the subject. I have identified the following approaches to justifying security spending:

1) Risk Mitigation
"If we spend $X on Y, we’ll reduce the risk of losing $A by B%."

2) Due Diligence
"We must spend $X on Y because it’s an industry best-practice."

3) Incident Response

"We must spend $X on Y so that Z never happens again."

4) Regulatory Compliance

"We must spend $X on Y because PCI-DSS says so."

5) Competitive Advantage
"We must spend $X on Y to make the customer happy."


Risk Mitigation
Risk mitigation is the efficient, forward thinking, and scientific approach to security. Many extremely bright people are pushing this methodology forward, especially through SecurityMetrics.org. I’ve had the pleasure of rubbing shoulders with many of them at Metricon events including Andrew Jaquith, author of Security Metrics: Replacing Fear, Uncertainty, and Doubt, who does a stellar job of walking the reader through this world. The fundamental idea is to find an acceptable balance between the amount of investment and the risk reduction that represents "good enough."

While measuring investment against risk makes sense, it is also extremely difficult to quantify in Web application and software security given our industry’s lack of data. We don’t yet have strong data tying loss to root cause, and those compromised closely guard the details. Only recently, through the WhiteHat Security and WASC statistics projects, are we getting a wide look at who is exposed to what. Also remember that just because a vulnerability exists doesn’t mean it’ll get exploited. Possible != Probable. My hope is that this approach continues to mature and becomes a widely accepted reality.
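For readers who want the arithmetic behind "If we spend $X on Y, we’ll reduce the risk of loss of $A by B%," the textbook formulation from the security metrics literature is Annualized Loss Expectancy (ALE) and Return on Security Investment (ROSI). A short sketch; the formulas are the standard ones, but every dollar figure and the mitigation ratio below are hypothetical, invented purely for illustration:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    # Annualized Loss Expectancy: expected loss per incident
    # multiplied by expected incidents per year.
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before, mitigation_ratio, cost_of_control):
    # Return on Security Investment: annualized risk reduced,
    # net of the control's cost, relative to that cost.
    risk_reduced = ale_before * mitigation_ratio
    return (risk_reduced - cost_of_control) / cost_of_control

# Hypothetical: a $250k-per-incident SQL Injection breach expected once
# every two years, and a $50k/yr program cutting that risk by 75%.
before = ale(250_000, 0.5)         # $125,000/yr expected loss
print(rosi(before, 0.75, 50_000))  # 0.875 -> an 87.5% return
```

The formula is only as good as its inputs, which is exactly the problem described above: without solid breach data, the single-loss and occurrence-rate estimates are guesses.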


Due Diligence
Due diligence is another approach to security budgeting that considers the ramifications of a data compromise if it occurs. Among other things, upper management will want to assure customers and business partners that they were investing resources into the security program in line with industry standards. At the same time, a budget executive may not approve a larger-than-average security budget if no security incident occurs during the fiscal year. This common scenario prompts the question, as Boaz puts it: “What is the industry standard for security spending in Web application security or software development?” For example, with regard to IT back-end systems, there are studies that suggest 3-10% or higher of the total budget for security spending. In this context, the "Security needs to be embedded into every part of software development" mantra isn’t helpful.

To the best of my knowledge there has been no formal study or survey on how much organizations are spending or should be spending regarding security in software development (or webappsec specifically). What most organizations typically do is outline the best-practices (architectural reviews, security QA, pen-testing, WAF, etc.) they should be implementing relative to peers. Next, they identify the appropriate solutions and obtain rates from the vendor.


Incident Response
Few things free up security dollars faster than a data compromise that is publicly attributed to the organization. That last part is really important. It is possible for an organization to ignore a general Web server compromise, which it may not even know about -- for example, when the bad guys are only interested in using a company's resources to improve the Google rank of some website (i.e. Al Gore’s Convenient Truth blog). Things are much different, though, when a revenue-generating website is SQL Injected and begins delivering malware to its visitors, affecting millions of users. The website gets blacklisted by the search engines and traffic sinks like a rock. Now we are talking real, quantifiable losses that get people's attention.

At this point an organization is in triage mode. How do they best remove the malware, get off the blacklists (which may take days or weeks), and continue business operations? This is also a good time for security professionals, who are given license to pursue projects that ensure a similar event never happens again. They’ll be given the necessary resources to be proactive, and the power to prioritize activities that may fall into the category of Risk Mitigation without having to justify “why.”


Regulatory Compliance
Regulatory compliance is a huge justification lever for security professionals. Compliance pressure enables them to do the things they want/need to do and gives them the ammunition to justify their requests. For many security professionals this is, of course, a very welcome development. And again, as Boaz lays it out: “it is much easier for me to justify costs that are legally mandated instead of pointing to threats that in practice are not very likely to materialize. As a citizen/user it is also a positive development, since without legal requirements I think that the already poor state of software/application security would get even worse.” Still, the challenge is that most regulations do not specify how far organizations must take these measures, so mileage will certainly vary.


Competitive Advantage
Competitive advantages. Every organization needs them and wants more of them. Security that can be demonstrated or made visible can become one. Sure, security adds cost to the goods, but that cost can be capitalized on when it reduces the risk the customer assumes. As Bruce Schneier says, “you can feel secure even though you're not, and you can be secure even though you don't feel it.” For example, the McAfee Hacker Safe or secured-via-SSL logos can be seen as things that make consumers “feel” safer when conducting e-commerce. A firm can also engage a reputable security firm for an assessment whose results can be shared. Deciding which to go with is directly proportional to the customer's level of understanding.


At the end of the day no one can say whether any one way is the right way, and declaring one was not my intention. Instead, my aim was to document the approaches I’ve seen and gain a better understanding of what works best in a given environment. In order for Web application security and software security to improve, more dollars need to be allocated, so we must assist the stakeholders in that process as best we can.

Sixth Quarterly Website Security Statistics Report

Since completing my 2008 speaking tour, I’ve had more time to focus on projects back at the office, which is why posts have been a little slow. One particular project, WhiteHat Security’s Sixth Quarterly Website Security Statistics Report (reg required), is now available for download. There is a mountain of data within its pages providing a real-world perspective on Web application security, drawn from the hundreds of vulnerability assessments we perform weekly. In this report we placed more emphasis on separating historical trends from the current state of things. For instance, historically about 8 out of 10 websites have had at least one serious vulnerability, while today 6 out of 10 still do. So progress is definitely being made, but we still have a long way to go. Enjoy!

As always, would love to hear comments about additional metrics we could add.

Data Overview
– 877 total websites
– Vast majority of websites assessed for vulnerabilities weekly
– Vulnerabilities classified according to WASC Threat Classification
– Vulnerability severity naming convention aligns with PCI-DSS
– Obtained between January 1, 2006 and December 1, 2008

Key Findings
– Total identified vulnerabilities (open & closed): 14,718
– Current open vulnerabilities: 5,283 (64% resolved)
– Historically, 82% of assessed websites have had at least one issue of HIGH, CRITICAL, or URGENT severity
– 63% of assessed websites currently have issues of HIGH, CRITICAL, or URGENT severity
– Historically, websites average 17 vulnerabilities identified during the lifetime of the assessment cycle
– Websites currently average 6 open vulnerabilities
– Cross-Site Request Forgery gained two spots in the Top Ten moving to #8
– Vulnerability time-to-fix metrics are not changing, typically requiring weeks to months to achieve resolution
– Roughly 50% of the most prevalent Urgent severity issues have been resolved

Wednesday, November 26, 2008

Victim of the Silver Bullet Security Podcast

Gary McGraw, CTO of Cigital and oracle of all things software security, and I routinely find ourselves conversing on mailing list threads, serving as expert opinion makers for the media, or presenting at InfoSec conferences around the world. While our time together is short, we always have thought-provoking discussions where I, at least, learn a good deal. You see, Gary has been around the block once or twice with probably every software security strategy/tactic and is willing to share keen insights on exactly why something will work, not work, or land somewhere in between. One particular subject came up that we thought would be fun to turn into a podcast.

Last week I became Gary’s most recent “victim” (Episode 32 of the Silver Bullet Security Podcast) where we discuss the differences and similarities between Software Security and Web Application Security. Is WebAppSec just a subset of Software Security? It certainly could be. Are all the “new” Web attacks we “discover” already documented a decade or more ago? I’m not quite there yet, but it would be unnerving if so. It would also seem that Web Application Security could be considered a subset of Website security, because certainly not all vulnerabilities on a website can be found in the code.

Gary went on to publish an article where he raises the question, “Is Web application security commanding too much attention at the expense of other security issues?” I think we all know where I land on the subject, however this is definitely a worthwhile read.

Happy Thanksgiving!

Saturday, November 01, 2008

Browser Security – bolt it on, then build it in

Originally published in (in)-secure magazine #18.

Whether improving ease-of-use, adding new developer APIs, or enhancing security – Web browser features are driven by market share. That’s all there is to it. Product managers perform a delicate balancing act of attracting new users while trying not to “break the Web” or negatively impact their experience. Some vendors attempt an über secure design - Opus Palladianum as an example, but few use it. Others opt for usability over security, such as Internet Explorer 6, which almost everyone used and was exploited as a result. Then, somewhere in the middle, is fan-favorite Firefox. The bottom line is that any highly necessary and desirable security feature that inhibits market adoption likely won’t go into a release candidate of a major vendor. Better to be insecure and adopted instead of secure and obscure.

Fortunately, the major browser vendors have had security on the brain lately, which is a welcome change. Their new attitude might reflect the realization that a more secure product could in fact increase market share. The online environment is clearly more hostile than ever, as attackers mercilessly target browsers with exploits requiring no user intervention. One need only look at this year’s massive SQL Injection attacks that infected more than one million Web pages, including those belonging to DHS, U.N., Sony, and others. The drive-by-download malware had just one goal - compromise the browser - with no interest in looting the potentially valuable data on the sites. Of course, we still have the garden-variety phishing sites out there, which leads to questions regarding the benefits of end-user education. Users are fed up. So let’s analyze what the Mozilla and Microsoft camps have done in response.

Buffer overflows and other memory corruption issues in the most recent browsers are declining, and the disclosure-to-patch timeline is trending in the right direction. Firefox 3 and Internet Explorer 7 now offer URL blacklists that block phishing sites and other pages known to be delivering malware. These features are reportedly a little shaky, but they're clearly better than the nothing that was in place before. Firefox 3 provides additional visibility into the owners of SSL certificates and makes it more challenging to blindly accept those that are invalid or self-signed. IE 7 offers a nice red/green anti-phishing toolbar that works with EV-SSL to help users steer clear of dangerous websites. Overall, excellent progress has been made from where we were just a couple of years ago, but before the vendors start patting themselves on the back, there’s also some bad news.

If you ask the average Web security expert whether the typical user is able to protect themselves online and avoid getting hacked, the answer will be an unqualified “no”. While browser vendors are addressing a small slice of a long-standing problem, most people are not aware of the risks that remain in a default install of the latest version of Firefox or Internet Explorer. When you visit any Web page, the site owner can easily ascertain what websites you’ve visited (CSS color hacks) or where you’re logged in (JavaScript errors / IMG loading behavior). They can also automatically exploit your online bank, social network, and webmail accounts (XSS). Additionally, the browser could be instructed to hack devices on the intranet, including DSL routers and printers. And if that’s not enough, they could turn you into a felon by forcing requests for illegal content, or use you to hack other sites (CSRF). The list goes on, but DNS-rebinding attacks get a little scary even for me, and it’s not like we haven’t known of these issues for years.
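The CSS history trick mentioned above is simple enough to sketch. Everything here is illustrative (the probe URL and colors are made-up assumptions, not any real attack's values), and the technique depends on 2008-era browser behavior:

```javascript
// 2008-era CSS history sniffing, sketched. A page styles :visited links
// with a distinctive color, then reads the computed style back to learn
// whether a URL is in the victim's browsing history.
// (Browsers have since mitigated this by lying about :visited styles.)

// Pure helper: does the computed color match the :visited rule's color?
function looksVisited(computedColor, visitedColor) {
  return computedColor === visitedColor;
}

// In a browser, an attacker page would do roughly this:
//   <style> a:visited { color: rgb(255, 0, 0); } </style>
//   var probe = document.createElement('a');
//   probe.href = 'https://bank.example.com/login'; // hypothetical target
//   document.body.appendChild(probe);
//   var color = getComputedStyle(probe, null).color;
//   if (looksVisited(color, 'rgb(255, 0, 0)')) {
//     // the victim has been to the bank: serve a tailored phishing page
//   }
```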

The browser security oxymoron hasn’t escaped the watchful eyes of the media’s Dan Goodin (The Register) and Brian Krebs (Washington Post), who figured out that something isn’t quite right. Nor Robert “RSnake” Hansen (CEO, SecTheory), who is a little confused as to why organizations such as OWASP don’t pay closer attention to browser security (recent events have shown signs of change). According to sources, only about half of IE users are running the latest, most secure and stable version of the browser. And again, if you ask the experts how they protect themselves, you’ll receive a laundry list of security add-ons, including NoScript, Flashblock, SafeHistory, Adblock Plus, LocalRodeo and CustomizeGoogle. Even with these installed, which relatively few people do, openings still exist, driving an increasing number of people to virtualize their browsers or run them in pairs. Talk about extreme measures, but this is what it takes to protect yourself online.

Today, my philosophy about browser security and the responsibility of the vendors has changed. In my opinion, the last security-mile won’t and can’t be solved efficiently by the browser vendors, nor should we expect it to be. I fully appreciate that their interest in building market share conflicts with the security features experts request, which, by the way, never ship fast enough. To be fair, there really is no way for browser vendors to dial in the appropriate amount of security for you, me, or everyone in the world while at the same time defending against all of the known cutting-edge attack techniques. Everyone’s tolerance for risk is different. I need a high level of browser security and I’m OK if that means limiting my online experience; for others, that could be a non-starter. So this leaves the door open for open source or commercial developers to fill in the gaps.

I was recently talking with RSnake about this and he said, “I think the browser guys will kill any third party add-ons by implementing their own technology solution, but only when the problem becomes large enough.” I think he’s exactly right! In fact, this has already happened and will only continue. The anti-phishing toolbars were inspired directly by those previously offered by Netcraft and eBay. The much-welcomed XSS Filter built into the upcoming Internet Explorer 8 is strikingly reminiscent of the Firefox NoScript add-on. Mozilla is adopting the model too, building its experimental Content Security Policy add-on, which may one day work itself into a release candidate.

At the end of the day, the bad guys are going to continue winning the Web browser war until things get so bad that adding security add-ons will be the norm rather than the exception. Frankly, Web browsers aren’t safe now, because they don’t need to be. So, until things change, they won’t be… secure.

Friday, October 10, 2008

Maui Vacation 2008

Some people are busy, and then some people have my schedule. Typically my time is spent in thirds: a third public facing (conferences, presentations, writing, media, etc.), a third speaking with customers, and the remaining third performing R&D. Oh, and a whole lot of time spent in the air; check out the wall of fame. ;) Fortunately, now that the whole clickjacking craziness has died down a bit and the quarter is over, it's time for a much needed break. Starting this weekend I’ll be heading back home to Maui for a couple of weeks of R&R. Beach, surfing, BJJ, playing with the kids, sleeping, checking out my dad’s home-built hydrogen-powered jeep, and whatever else I can fit in. From there I’ll be off to Malaysia for a couple of days, attending Hack in the Box Malaysia and delivering a keynote speech. Emails and blog comments are unlikely to be responded to during that time. Unplug and unwind. See you all in November.

Tuesday, October 07, 2008

Clickjacking: Web pages can see and hear you

Web pages know what websites you’ve been to (without JS), where you’re logged in, what you watch on YouTube, and now they can literally “see” and “hear” you (via Clickjacking + Adobe Flash). Separate from the several technical details on how to accomplish this feat, that’s the big secret Robert “RSnake” Hansen and I weren’t able to reveal at the OWASP conference at Adobe’s request. So if you’ve noticed a curious post-it note over a few of the WhiteHat employee machines, that’s why. The rest of the clickjacking details, which include iframing buttons from different websites, we’ve already spoken about, with people taking note.

Predictably, several people did manage to uncover much of what we had withheld on their own, and thankfully they kept it to themselves after verifying it with us privately. We really appreciated that, because it gave Adobe more time. Today, though, much of the remaining undisclosed detail was publicly revealed, and Adobe issued an advisory in response. Let’s be clear: the responsibility for solving clickjacking does not rest solely at the feet of Adobe, as there are a ton of moving parts to consider. Everyone, including browser vendors, Adobe (plus other plug-in vendors), website owners (framebusting code), and web users (NoScript), needs their own solutions to assist in case the others don’t do enough, or anything at all.

The bad news is that with clickjacking, any computer with a microphone and/or a web camera attached can be invisibly coaxed into being a remote surveillance device. That’s a lot of computers, and a single click is all it takes. Couple that with clickjacking the Flash Player Global Security Settings panel, something few people knew even existed, and the attack becomes persistent. Consider what this potentially means for corporate espionage, government spying, celebrity stalking, etc. Email your target a link, and there isn’t really anyone you can’t get to and snap a picture of. Not to mention bypassing the standard CSRF token-based defenses. I recorded a quick and dirty clickjacking video demo, with my version having motion detection built in.



Robert and I are currently scheduled to give more or less simultaneous presentations in Asia about clickjacking. I’ll be delivering a keynote at HiTB 2008 Malaysia (Oct 29), and RSnake will be speaking at OWASP AppSec Asia 2008 (Oct 28). The timing just happened to work out well. The next couple of weeks will give us time to put our thoughts in order, explain the issues in a more cohesive fashion, and bring those who’ve gotten lost in all the press coverage up to speed. For those who have been following very closely, you’ll probably not find any meaningful technical nuggets that are not already published. Our job now is to make the subject easier to understand and help facilitate solutions to the problem. Unless the browser is secure, not much else is.

Prevention?
Put tape over your camera, disable your microphone, install NoScript, and/or disable your plugins. In the age of YouTube and Flash games, who’s really going to do the latter? For website owners, CSRF token-based defenses can be easily bypassed unless they add JavaScript framebusting code to their pages, but the best practices there are not yet fully vetted. Again, browser behavior is not at all consistent.
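For reference, the framebusting idea mentioned above looks roughly like the sketch below. This is a generic illustration, not any specific site's code, and as noted, inconsistent browser behavior can defeat naive versions of it:

```javascript
// Minimal framebusting sketch. A framed page detects that it is not the
// top-level browsing context and navigates the outer frame to itself,
// escaping a clickjacking overlay.

function isFramed(win) {
  // True when this window is nested inside another frame.
  return win.top !== win.self;
}

function bustFrame(win) {
  if (isFramed(win)) {
    // Replace the attacker's top-level page with our own.
    win.top.location = win.self.location;
  }
}

// In a page, this would run as early as possible: bustFrame(window);
```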

What a couple of weeks this has been. Thank you to the Adobe PSIRT for their diligence and hard work.

Monday, September 29, 2008

New CSRF paper with vulnerability disclosure

Ed Felten and Bill Zeller recently released a very well-written paper about Cross-Site Request Forgery (CSRF), including some real-world vulnerability examples from ING Direct, YouTube, MetaFilter, and The New York Times. As you all know so well, CSRF vulnerabilities are easy to find when you just decide to look at basically any website. Don't expect any groundbreaking research per se, but the paper's content is really helpful to those unfamiliar with CSRF (and that's still a lot of people, especially developers). Ed and Bill also did some work on a potential client-side solution, like LocalRodeo I think, which I hope to find time to investigate further. We need as many smart people as we can get trying to solve this problem in creative ways. CSRF certainly isn't going to go away anytime soon.

Insecure Magazine Issue #18

Issue #18 of my favorite online magazine, (IN)SECURE, is now available for download. The magazine is consistently content-rich, high quality, and best of all, free! ;) This issue has several articles on Web security, one of which was written by yours truly: "Browser Security: Bolt it on, then build it in." Obviously, with software security this is the opposite of how you want things to work, but when you consider the business objectives of Web browser vendors, this is the way it tends to work. Here’s an excerpt of the premise...

"Some vendors attempt an über secure design - Opus Palladianum as an example, but few use it. Others opt for usability over security, such as Internet Explorer 6, which almost everyone used and was exploited as a result. Then, somewhere in the middle, is fan-favorite Firefox. The bottom line is that any highly necessary and desirable security feature that inhibits market adoption likely won't go into a release candidate of a major vendor. Better to be insecure and adopted instead of secure and obscure."

Other compelling web security articles:

- Web application security: risky business?
- Secure web application development
- Enterprise application security: how to balance the use of code reviews and web application firewalls for PCI compliance.

Thursday, September 18, 2008

I used to know what you watched, on YouTube

In doing some crossdomain.xml Flash research I noticed that YouTube’s policy file trusted *.google.com. They quickly removed it after I privately disclosed the following security flaw to Google.

My idea was that if an attacker could upload an arbitrary Flash movie (SWF) anywhere on the google.com domain, they could leverage that trust. So if an authenticated YouTube user visited an attacker-controlled page anywhere on the Web, the attacker could SRC in the google.com-hosted SWF and use it to compromise the victim's YouTube username, email address, first/last name, and viewing history, and even comment on or post/delete videos.
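The trusting policy at the center of this was YouTube's crossdomain.xml. I don't have the original file, but per Adobe's documented policy-file format, a wildcard grant like the following is what the post describes (the file contents here are a reconstruction, not the actual file):

```xml
<?xml version="1.0"?>
<!-- Hypothetical reconstruction of youtube.com/crossdomain.xml.
     The wildcard lets ANY SWF served from anywhere on *.google.com
     read YouTube responses with the victim's cookies attached. -->
<cross-domain-policy>
  <allow-access-from domain="*.google.com" />
</cross-domain-policy>
```

This is why the attack reduces to "get one attacker-controlled SWF hosted anywhere under google.com": the policy makes no distinction between Google's own SWFs and a user-uploaded attachment.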

Billy Rios blogged in the past about being able to upload arbitrary files to google.com, but the only place I could locate that allowed SWFs when I checked was Gmail. Maybe others?

Anyway, I emailed a SWF attachment to a Gmail account and located the download URL. Perfect, but the next problem was that even with the correct URL, the victim is not authorized to view the file unless they are authenticated to THAT particular Gmail account. This is where the login-CSRF / identity misbinding trick the Stanford guys wrote up came in quite handy.

Here’s the step by step.

1) Attacker emails a special SWF to a Gmail account they control and locates the attachment download URL on google.com.
2) Logged-in YouTube user visits an attacker-controlled page.
3) Attacker forces the victim to authenticate to the attacker's Gmail account (identity misbinding / login-CSRF).
4) Attacker embeds the SWF from the Gmail account into the web page.
5) Attacker now has read/write access to the victim's YouTube.com account.

Video:






Clever, eh? :) I’m sure Google/YouTube aren’t the only places where this particular scenario is still possible.

Many thanks to Rich Cannings and Chris Evans from the Google Security team who shepherded this along!

What’s important, Palin’s Yahoo Mail account hacked

That’s right, Alaska Governor and Republican vice-presidential candidate Sarah Palin's quasi-personal Yahoo Mail account (gov.palin@yahoo.com) was hacked by the infamous group called “Anonymous”. While there are conflicting news reports on the incident’s authenticity, emails, screenshots, and family photos have been posted to Wikileaks as proof. If we assume the incident is real, there are many ways a free WebMail account could be compromised, some more likely than others:

1) Password guessing / brute force attacks
2) Password recovery system flaw or website vulnerability
3) Network sniffers
4) Phishing scams
5) Insider (rogue customer service representative or software backdoor)
6) Operating System Malware/Spyware
7) Stolen hardware
8) Lost backup tape (hah, as if free WebMail providers have backups!)
9) Use of a public computer

...and maybe more I’m not thinking of.

For myself and the rest of the InfoSec industry the “how” is interesting, but it's unimportant for everyday users like our friends, family, coworkers, politicians, etc. What they need to know is that WebMail compromises can happen to anyone, even those who do everything “right”, because security is largely out of their hands and it's impossible to behave perfectly all the time. Mistakes happen, and the higher profile a person you are, the higher the likelihood you will be targeted.

Bottom line: DO NOT receive or store anything you don’t want read or made public on these “free” WebMail systems. They are NOT private. They are NOT secure. They are NOT safe. The same goes for Google Docs, social network private messages, online backup solutions, whatever. What they are is FREE and CONVENIENT. The businesses that provide them are not accountable for your privacy, security, or lack thereof. Read their EULA or ToS if you don’t want to take my word for it.

Monday, September 15, 2008

Let's talk Software Serviceability

Financial Times graciously invited me to write an opinion piece for their publication entitled "Learn from today’s software flaws to protect corporations tomorrow", where I discuss a bit about software serviceability. In the wake of Dan K's vulnerability announcement (and others like it), I couldn't shake the notion that no matter how hard we try to write perfectly secure code, given a long enough timeline we'll always fall short. We will miss a bug, there will be a new attack technique, and hackers will exploit our systems. To me this says our important systems must have speedy and adaptive security measures to identify threats as they happen, plus the ability to quickly service our deployed software (preferably within days or hours). Some systems have this capability, but they are too few and far between.

"So in the case of the issues found by Dan, Tony, and Alex it is hard to put a top-end market value on them, but consider that other less severe issues have sold for five and six figure sums. Would seven figures be out of the question? Will the next security researcher be influenced by the potential financial reward instead of giving it away for free? We know for sure that there will be a next time, because software is imperfect. Vulnerabilities will be found and long standing encrypting algorithms will be broken or at least weakened. And it’s difficult, if not impossible, to future-proof our code against attack techniques that don’t yet exist."

(Cancelled) / Clickjacking - OWASP AppSec Talk

“Clickjacking,” the presentation Robert “RSnake” Hansen and I had planned for OWASP AppSec NY 2008, has been postponed due to vendor request.

The premise of Clickjacking is that we know a lot about what JavaScript malware is capable of once a user comes in contact with an attacker-controlled webpage (or a page with their code on it), such as history stealing, intranet hacking, phishing with superbait, Web worms, browser exploits, and so on, but comparatively little about what can be done with a captured “click”. Clickjacking gives an attacker the ability to trick a user into clicking on something only barely or momentarily noticeable. What could they possibly do then?
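A captured click is easiest to picture with a sketch. The following is a generic, hypothetical illustration of the overlay technique (the target URL and coordinates are made up), not the withheld PoC:

```javascript
// Generic clickjacking setup: an invisible iframe is positioned over
// visible decoy content ("Click to win!"), so the victim's click lands
// on a button inside the framed target page instead.

// Pure helper producing the CSS for the invisible, click-stealing frame.
function overlayStyle(x, y, width, height) {
  return {
    position: 'absolute',
    left: x + 'px',
    top: y + 'px',
    width: width + 'px',
    height: height + 'px',
    opacity: '0',   // fully transparent, but still receives clicks
    zIndex: '10'    // stacked above the decoy content
  };
}

// In a browser, an attacker page would apply it roughly like this:
//   var frame = document.createElement('iframe');
//   frame.src = 'https://victim.example.com/settings'; // hypothetical target
//   Object.assign(frame.style, overlayStyle(120, 80, 300, 40));
//   document.body.appendChild(frame);
```

The decoy underneath can be anything enticing; all that matters is that the framed button sits exactly under the spot the victim is lured into clicking.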

With Clickjacking, attackers can do quite a lot. Some of it could be pretty spooky, and, with a fair amount of ingenuity, quite easily performed. Over the past couple of weeks/months, RSnake and I have been completing our PoC examples to demonstrate the potential attacks and sharing the results privately with a few industry colleagues to obtain third-party opinions. At the time, we believed our discoveries were more in line with generic Web browser behavior than traditional “exploits,” and that guarding against Clickjacking was largely the browser vendors' responsibility. Clickjacking is a well-known issue, but severely underappreciated and largely undefended, and we hope to begin changing that perception.

One Clickjacking PoC utilized an Adobe product in an attack technique that they considered a critical issue; we just hadn’t realized it, so we narrowly avoided 0-day’ing them! Considering the short notice, Adobe requested additional time in case the browser vendors do nothing to prevent Clickjacking. A second high-severity issue, in Internet Explorer 8, would have potentially given the aforementioned issue persistent qualities. There was/is a third issue affecting websites in general, which would have required every website owner to make an update, something obviously impractical, so again, better fixed by the browser vendors. With much of our technical detail taken off the table while waiting for patches and/or new safeguards, we weren’t left with much to convey the true power of Clickjacking beyond what’s already known.

Postponing our OWASP talk wasn’t an easy decision to make as we put a lot of time and effort into the presentation. We apologize to the attendees and had every intention of releasing mind-blowing stuff. At this time just about everyone out there using the latest versions of Internet Explorer (including version 8) and Firefox 3 is affected. Please be assured that as soon as we’re able to expose the information we will do so. In the meantime, the only fix is to disable browser scripting and plugins. We realize this doesn’t give people much technical detail to go on, but it’s the best we can do right now.

Adobe PSIRT (psirt@adobe.com)

More to come.

Thursday, September 11, 2008

My Picks for OWASP NY AppSec 2008

OWASP NY AppSec 2008 is only a week away and is going to be big, really big, bigger than anyone expected, I think. So big, in fact, that Tom Brennan, conference organizer, had to find a larger venue this week to accommodate all the attendees: The Park Central Hotel, 870 Seventh Avenue at 56th, if you hadn’t already seen the updated page. What Tom and Co. also did was create a jam-packed line-up of sweet-looking presentations. So much so that everyone will probably miss something they wanted to see because of dueling talks. Oh well, that’s what video is for! While the schedule still seems to be in a bit of flux, I thought I’d list the stuff I’m most interested in and get my personal schedule going.

Disclaimer: If I don’t pick your talk it doesn’t mean I don’t like you or the material. :) It might be that I’ve already seen it and/or am familiar with the content.

Day 1

Web Application Security Road Map - Joe White
Because it's initiatives like this one that will eventually serve as a template for other organizations to follow.

Http Bot Research - Andre M. DiMino - ShadowServer Foundation
I have a soft spot for bots, it seemed interesting, and I wanted to see what data they have.

Get Rich or Die Trying - Making Money on The Web, The Black Hat Way - Trey Ford, Tom Brennan, Jeremiah Grossman
Well, you know, I sorta have to be there. :)

New Exploit Techniques - Jeremiah Grossman & Robert "RSnake" Hansen
One of those presentations exposing what Web attacks in the next 12-18 months will look like. We’ve purposely kept really quiet about what we plan to demonstrate, but it’s certainly going to make people a little nervous. :)

Industry Outlook Panel
Curious about what these folks have on their mind.

Multidisciplinary Bank Attacks - Gunter Ollmann
Good speaker, and I enjoy bank-hacking talks. :)

Case Studies: Exploiting application testing tool deficiencies via "out of band" injection
I have no idea, though it appeared to be an interesting topic.

w3af - A Framework to own the web - Andres Riancho

I'd like to see this tool demonstrated and understand what it can really do.

Coding Secure w/PHP - Hans Zaunere
I want to see more about how this is done. It can be done, right?


Day 2


Best Practices Guide: Web Application Firewalls - Alexander Meisel
A big toss-up between this one and Pen Testing vs. Source Code Analysis, but I had to go with the WAFs. I wanted to see their point of view and the guidance they're suggesting.

APPSEC Red/Tiger Team Projects - Chris Nickerson
Sounded cool, that’s about it.

Industry Analyst with Forrester Research - Chenxi Wang
It’s always good to know how certain enterprises will be influenced.

Security in Agile Development - Dave Wichers
As before, is this possible? And if so, how!? TELL ME!

Next Generation Cross Site Scripting Worms - Arshan Dabirsiaghi
cmon Arshan, no holding back. Give me the next NEXT generation XSS worms! :)

NIST SAMATE Static Analysis Tool Exposition (SATE) - Vadim Okun
Tools lined up side by side and tested has always interested me.

Practical Advanced Threat Modeling - John Steven
It's been a while since I attended a threat modeling talk, especially one targeted towards webappsec, which I hope this is.

Off-shoring Application Development? Security is Still Your Problem - Rohyt Belani

Uh, yep, it is, but what to do about it is the question. Hopefully Rohyt will answer that one.

Flash Parameter Injection (FPI) - Ayal Yogev & Adi Sharabani
Flash security is HUGE! HUGE I SAY!

Most of these speakers I've never seen present before, which I find refreshing. New talent, new ideas, and shows an emerging industry. Good luck everyone!

Monday, September 08, 2008

WASC Web Application Security Statistics 2007

For those hungry for more web application security vulnerability data, WASC has released its Web Application Security Statistics report for 2007. Under the leadership of Sergey Gordeychik and the broad participation by Booz Allen Hamilton, BT, Cenzic, dblogic.it, HP, Positive Technologies, Veracode, and WhiteHat Security – we’ve combined custom web application vulnerability data from roughly 32,000 websites totaling 70,000 vulnerabilities. Methodologies include white box and black box, automated and manual, all reported using the Web Security Threat Classification as a baseline. Excellent stuff.

Vulnerability frequency by types


The most prevalent vulnerabilities (BlackBox & WhiteBox)


Sergey did a masterful job coordinating all the vendors (whom we thank), compiling the data, and generating a report in a nicely readable format. I’d like to caution those who may read too deeply into the data and draw unfounded conclusions. It’s best to view reports such as these, where the true number and type of vulnerabilities is an unknown, as the best-case scenario. There are certainly inaccuracies, such as with CSRF, but at the very least this gives us something to go on. Future reports will certainly become more complete and representative of the whole as additional sources of vulnerability data come onboard.

Wednesday, August 27, 2008

Download the 5th Website Security Statistics Report

Whew, what a mountain of work! I’m ecstatic the complete 5th installment of our Website Security Statistics Report (all 13 pages) is finally published and available for everyone to see – and comment on. I’m also extremely proud that we’re able to capture a measurable improvement in overall website security. Good news from inside InfoSec!? I know, weird huh!? We still have a long way to go, but these statistics show we’re on the right path and doing the right things:
  • Find and prioritize all websites
  • Find and fix website vulnerabilities
  • Implement a secure software development process
  • Utilize a defense-in-depth website security strategy

Today’s webinar went extremely well, slides are available for those interested. And some quick numbers:

Total Websites: 687
Identified vulnerabilities: 11,234
Unresolved vulnerabilities: 3,541 (roughly 68% resolved)
Websites HAVING HAD at least one serious issue: 82%
Websites CURRENTLY WITH at least one serious issue: 61%
Average vulnerabilities per website: 5
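For anyone who wants to sanity-check the arithmetic behind these headline figures, here’s a quick back-of-the-envelope calculation. Treat it as illustrative only: the report’s own percentages may use slightly different rounding or denominators, and the per-site average here assumes the report counts open (unresolved) issues.

```python
# Back-of-the-envelope check on the headline numbers above.
# Assumption (not stated in the report): "average per website" counts
# open vulnerabilities, not the historical total.

total_websites = 687
identified = 11234   # total vulnerabilities identified
unresolved = 3541    # vulnerabilities still open

resolved = identified - unresolved
resolved_pct = 100.0 * resolved / identified
open_per_site = unresolved / total_websites

print(f"Resolved: {resolved} ({resolved_pct:.0f}%)")
print(f"Open vulnerabilities per website: {open_per_site:.1f}")
```

Running this gives a resolution rate in the high 60s and roughly five open vulnerabilities per site, which lines up with the summary numbers.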

The shiny new WhiteHat Top Ten

Yes! CSRF finally makes the list!

Also covered are:
- Collection methodology
- Time-to-fix and remediation metrics
- Industry vertical comparisons
- Best practices & lessons learned

Feedback on what other numbers people would like us to report on in the future is very welcome.


Monday, August 25, 2008

5th Website Security Statistics Report

For the last month I’ve been compiling our third quarter 2008 Website Security Statistics Report, which contains a comprehensive vulnerability analysis of over 600 real, live websites. We’re talking 11,000 verified vulnerabilities collected from assessments typically performed weekly. This type of data is not available from reports by Symantec, Mitre (CVE), IBM X-Force, SANS, or anywhere else, and we're excited to be able to share it.

We put a ton of work into this report and there is a massive amount of data. Highlights include a revised Top Ten list of vulnerabilities, updated Time-to-Fix metrics, vulnerability remediation percentages showing progress, vertical market comparisons, and so on. The information is really valuable because it provides visibility into future trends, trouble spots, and what action items should be considered.

On Wednesday, August 27th, 11:00 AM PDT I’ll be hosting an hour-long webinar to go over the results. Attendees will be given the opportunity to be the first to see the results and ask questions. Registration is free, but space is limited. If you are interested in attending, now is the time to register.

Back at my Desk

I’m finally home from a 3-city-in-3-days tour covering Atlanta, New York, and Chicago. Beyond the excellent turnout for the WH/F5 lunch-and-learn presentations, I’m always fond of the extracurricular activity. Since I’m still injured, visiting new BJJ academies was out of the question, so I substituted the time with food. Fortunately I was able to do so with meals that fit within my high-protein, low-carb diet (208 lbs is the goal). 30 lbs to go!

In Atlanta there was Fogo de Chão, a stellar upscale all-you-can-eat Brazilian BBQ restaurant that serves 15 types of fire-roasted meats brought to your table sizzling on skewers. Now that’s hard to beat. In New York we stopped by my all-time favorite steak place, Smith & Wollensky, for a Colorado bone-in ribeye. The waiter said it was the most flavorful meat in the world; I’m thinking he was right.

In Chicago, I left the venue selection to the OWASP Chicago Chapter locals. But before that, Cory Scott and Jason Witty invited me to deliver an encore performance of “Get Rich or Die Trying” at the recommendation of Ed Bellis (thank you). The cool thing about that particular talk is that afterwards people always clue me into other business logic flaws they’ve encountered. It’s really good content to add, especially if it involves money. I was also excited to see the presentations by Mike Zusman and Nate McFeters since I didn’t get to see either talk at Black Hat.

Afterwards we went out for drinks with several people, including Nate McFeters, Thomas Ptacek, Shrikant Raman, and a dozen others. The appetizers were decent, but I think they chose the place more for the beer than anything else. Chicago probably has one of the friendliest and most vibrant chapters around. Anyway, it’s good to be back home. I got to do some modest BJJ training over the weekend, play in the GGAFL finals (we got spanked badly), and hit some rides with the kids at Great America. Now off to do the expense reports. Yay! ;)

Wednesday, August 20, 2008

Can you identify the white ninja?

During BlackHat USA, there were sightings of a mysterious white ninja. Witness reports claim he spoke with an English accent at 300 words per minute, raved on about sandboxing code, and plotted to take over the Web with a worm to wake people up (but just for a day). Anyone know who this phantom figure is?

Bonus points for posting the best photo caption. I’m thinking “practice safe output encoding”.

Tuesday, August 12, 2008

Application Security Vendors Counting their Millions

Software security sage Gary McGraw (CTO, Cigital) published his market research on what he believes are the 2007 revenue numbers for application security vendors. Speaking for myself, I can neither confirm nor deny the accuracy of this data, certainly when it comes to WhiteHat.

Fortify: $29.2 million
Coverity: $27.2 million
Klocwork: $26 million
Watchfire (IBM): $24.1 million
SPI Dynamics (HP): $22.3 million
Cenzic + Codenomicon + WhiteHat: $12.5 million
Ounce Labs: $9.5 million

$150.8 million total for the tools / SaaS market

“The source code analysis space is now larger than the black box testing tools space….”

Sort of, but more on that in a moment.

“Tools don't run themselves”

Ain’t that the truth.

“The hard-to-track software security services space checks in around $100-140 million in 2007, with growth just shy of 20% over 2006. Services can be divided into three tracks: training (around $7 million), risk assessment ($45-60 million) and penetration testing ($50-75 million).”

I’m not sure about the risk assessment number, but I’m thinking the estimates for training and penetration testing are probably orders of magnitude lower than they should be. The rates for larger players including IBM Global Services, Verizon, Symantec, Ernst & Young, PwC, and KPMG aren’t cheap. And to some extent neither are those of the smaller players such as Matasano, SecTheory, iSec, Leviathan, Denim Group, Foundstone, Gotham, NGSS, FishNet, Aspect, SANS, IOActive, Immunity, NTO, NGS, BlueInfy, Net-Square, and dozens of other regional players. No wonder the overall market totals are tough to track, but each takes their piece of the pie.

I believe when it comes to black-box testing of web applications, services are likely 5x larger than the tools industry – especially if you consider that few organizations these days haven’t had a professional vulnerability assessment (and it’s tough to capture international sales as well). The opposite is true for white-box testing, where tool purchases are way more common due to the cost of a line-by-line source code review by a consultant. Then we have WAF sales driven by VA sales, which makes sense because an organization typically must identify a need before it can justify the fix. The same was true of the network firewall, patch management, and A/V markets.

All in all, the trajectory for the entire web application security segment is up, and fast. PCI-DSS 6.6 is certainly one stimulant, but so is all the web hacking going on these days. Great numbers, Gary. Thanks for sharing!

Monday, August 11, 2008

BlackHat encore - Chicago OWASP next week

Chapter leaders Cory Scott and Jason Witty were gracious enough to invite me to present at this month's OWASP Chicago meeting. It's always fun to visit a new chapter; I've been to about a dozen so far, meeting like-minded webappsec people from various parts of the country and world. This is also a good opportunity for those who missed Black Hat to see one of the presentations live rather than relying solely on the information in the slides.

August 21 - OWASP Chicago Chapter (6:00pm – 8:30pm)
6:00 Refreshments and Networking
6:15 Bad Cocktail: Spear Phishing + Application Hacks - Rohyt Belani, Managing Partner, Intrepidus Group
7:15 Get Rich or Die Trying - Making Money on The Web, The Black Hat Way - Jeremiah Grossman, Founder & CTO of Whitehat Security

Bank of America Plaza
540 W. Madison, Downtown Chicago, 23rd floor.

*Please RSVP to jason{AT}wittys.com by 8/19/2008 if you plan to attend. Your name will need to be entered into the building's security system in order to gain access to the meeting.*

Road Show - 3 Cities in 3 Days

For those interested in learning more about the WhiteHat Sentinel / F5 ASM integration, we have a road show scheduled (if not, go ahead and stop reading now). We’re visiting Atlanta, New York, and Chicago next week. In each city, our CEO Stephanie Fohn will share insights on the latest website vulnerability statistics, a guest customer from a financial institution will share their experiences on “The Challenge of Managing Application Security in Today's Environment,” and I’ll be paired with an F5 engineer performing live VA+WAF demos. A really nice lunch will be served on us, and registration is free, though space is limited. Hope to see many of you there!

August 19 (Atlanta @ 11:30am – 1:30pm)
The Four Seasons Hotel
75 Fourteenth Street Atlanta, GA 30309
Guest Speaker: Allen Stone, Senior Security Specialist, E*TRADE Financial


August 20 (New York @ 11:30am – 1:30pm)
The Tribeca Grand Hotel
2 Avenue of the Americas
New York, NY 10013
Guest Speaker: Jim Routh, Chief Information Security Officer, DTCC


August 21 (Chicago @ 11:30am – 1:30pm)
The Peninsula Chicago Hotel
108 East Superior Street
Chicago, IL 60611
(312) 337-2888
Guest Speaker: Anna Sherony, Privacy and Information Protection Officer, Sammons Financial

Get Rich or Die Trying (BlackHat USA 2008)

Update 08.11.2008: Added a video interview of Trey and myself to the bottom of the post.

Our speaking slot was informally dubbed the “power hour” due to the number of stellar presentations all booked at the same time – many of which I would have loved to attend personally. Nate McFeters & Co. unveiled the details of their GIFAR research, Microsoft announced they’ll be revealing vulnerability details to certain vendors prior to public disclosure, Joanna Rutkowska presented on the Xen hypervisor, etc. Making matters just a little more interesting, we were generously given a larger ballroom. This was scary because, with a speaking time near the end of the last day combined with top-notch competition, a sparsely attended room was entirely likely. So when the room filled to capacity – I’m guessing around 1,000 people (standing room only) – Trey Ford and I were absolutely ecstatic! Which reminds me, Trey Ford (Director of Solutions Architecture) pinch-hit for Arian Evans (Director of Operations) so he could focus more time on his presentation, “Encoded, Layered and Transcoded Syntax Attacks.”

The premise of the “Get Rich or Die Trying” presentation was to look forward at the next 3-5 years, considering that we’re probably going to see less fertile ground for XSS/SQLi/CSRF to be taken advantage of – that is, if the good guys do their job well. So the bad guys will likely focus more attention on business logic flaws, which QA overlooks, scanners can’t identify, and IDS/IPS can’t defend against – and, more importantly, issues potentially generating 4, 5, 6, or even 7 figures a month in illicit revenue. In many ways, though, this is sort of like predicting the present, since just about every example we gave was grounded with a real-world public reference and backed by statistics. We also wanted this presentation to be very different from what most are used to at BlackHat, where talks tend to be deeply technical, hard to follow, and often dry. And while everyone in webappsec is transfixed by JavaScript malware issues, we chose another direction.

We designed a presentation meant to be a lot of fun, one that taught things anyone could do, and that perhaps by the end might have people questioning their ethics. Judging from much of the feedback, I think we might have succeeded on the last point. :) RSnake was also a good sport when we ribbed him a little bit. For those interested in the slides, I quickly uploaded them to SlideShare. The quality is decent (the references are hard to see) and you can download the PDF. I’m working on slenderizing it now; when I have it I’ll upload that as well, including the video when we get it.




Lastly thank you very much to everyone who came and supported us, it meant a lot.