Friday, December 25, 2009
Best of Application Security (Friday, Dec. 25)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order.
- Web-Based Worms: How XSS Is Paving the Way for Future Malware
- Best Security Improvements in 2009?
- Securing tomcat
- Microsoft IIS vuln leaves users open to remote attack
- My Gmail Account and Google Apps Got Hacked
- Is code auditing of open source apps necessary before deployment?
- An Unpleasant Anniversary: 11 Years of SQL Injection
- Bypassing the intent of blocking "third-party" cookies
- Serious web vuln found in 8 million Flash files
- BSIMM Data Show an SSG is a Software Security Necessity
Wednesday, December 23, 2009
(Fortify + WhiteHat = Fortify on Demand) or (1 + 1 = 3)

“Fortify on Demand is a set of hosted Software-as-a-Service (SaaS) solutions that allow any organization to test and score the security of all software with greater speed and accuracy. This automated turnkey service offers an efficient way to test third-party application security or to complete baseline assessments of all internal applications. It is the only offering available on the market that correlates best-of-breed static and dynamic analysis into a single dashboard. Fortify on Demand's robust reports prioritize vulnerabilities fixes based on severity and exploitability, with line of code level details.”
The static analysis technology obviously comes from the market leader’s SCA product line, but guess who’s behind the dynamic part. Give up? :) That would be WhiteHat Security of course!
“Fortify selected WhiteHat Sentinel because it is the only software-as-a-service (SaaS) solution to deliver the highly accurate, verified vulnerability data required to ensure effective website security and actionable information for both developers and security operations as a service.”
Needless to say we are very excited! This technology combination and delivery model addresses a number of under-served customer use-cases, such as third-party validation and testing of COTS. As I’ve blogged before, it’s time to move beyond the nonsensical adversarial debates about which testing methodology (black or white) is best and instead focus on the synergies. We’re putting our R&D money where our mouths are and have grand plans to directly benefit our customers.
Today’s integration is already yielding a solid level of vulnerability correlation, right down to a line or code block, which helps prioritize findings into actionable results -- such as which vulnerabilities are confidently exploitable. Looking ahead, consider that static analysis can measure exactly how much code coverage is being achieved during the dynamic analysis, and point out the gaps: unlinked URLs, back doors, extra form parameters, etc. This will lead to far better and measurable comprehensiveness for both static and dynamic analysis. And don’t even get me started on the metrics we’ll be able to gather. We’re just at the beginning of understanding what is possible!
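To make the correlation idea more concrete, here is a rough sketch (in Python) of how a verified dynamic finding might be matched to a static finding by vulnerability class, entry point, and parameter. The data model and field names below are purely illustrative assumptions, not the actual Fortify on Demand or Sentinel internals.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StaticFinding:
    vuln_class: str    # e.g. "SQL Injection"
    entry_point: str   # URL path that reaches the vulnerable code
    parameter: str     # tainted input parameter
    file: str          # source file containing the sink
    line: int          # line number of the sink

@dataclass
class DynamicFinding:
    vuln_class: str
    url_path: str
    parameter: str

def correlate(dyn: DynamicFinding, static_findings: List[StaticFinding]) -> Optional[StaticFinding]:
    # A verified dynamic finding that also matches a static finding gives
    # developers a confirmed-exploitable issue plus the file/line to fix.
    for s in static_findings:
        if (s.vuln_class == dyn.vuln_class
                and s.entry_point == dyn.url_path
                and s.parameter == dyn.parameter):
            return s
    return None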
This will be the basis of the “Fortify on Demand Launch Webinar” Jacob West and I are presenting (Jan. 14) and our RSA presentation “Best of Both Worlds: Correlating Static and Dynamic Analysis Results” (Mar. 4). We’ll be learning a lot in the coming months and will be ready to share our discoveries with the audience.
Friday, December 18, 2009
Best of Application Security (Friday, Dec. 18)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Cross-domain search timing
- HPP -- What is it, and what types of attacks does it augment?
- RockYou Hack: From Bad To Worse
- Attention security researchers! Submit your new 2009 Web Hacking Techniques
- Data collector threatens scribe who reported breach
- Akamai Implements WAF
- Why Microsoft should consider retroactively installing AdBlocking software by default
- XSS Embedded iFrames
- Testing for SSL renegotiation
- DefendTheApp - An OWASP AppSensor Project
- Easily View Hidden Facebook Photo Albums
Thursday, December 17, 2009
Attention security researchers! Submit your new 2009 Web Hacking Techniques
Update: Awesome news, Black Hat is generously sponsoring the effort! The researcher topping the list will be awarded a free pass to attend the BlackHat USA Briefings 2010!
Just 2 weeks left in 2009. Time to start collecting all the latest published research in preparation for the coveted Top Ten Web Hacking Techniques list!
Every year the Web security community produces dozens of new hacking techniques documented in white papers, blog posts, magazine articles, mailing list emails, etc. We are not talking about individual vulnerability instances with CVE numbers, nor intrusions / incidents, but the actual new methods of Web attack. Some target the website, some target the browser, and some land somewhere in between.
Historically many of these works would permanently reside in obscure and overlooked corners of the Web. Now in its fourth year, the list provides a centralized reference point and recognizes researchers who have contributed to the advancement of our industry.
The top ten winners will be selected by a panel of judges (names to be announced soon) on the basis of novelty, potential impact, and overall pervasiveness. Those researchers topping the list can expect to receive praise amongst their peers as have those in past years (2006, 2007, 2008).
Then coming up at IT-Defense (Feb.) and RSA USA 2010 (Mar.) it will be my great honor to introduce each of the top ten during my “2010: A Web Hacking Odyssey” presentations. Each technique will be described in technical detail: how it works, what it can do, who it affects, and how best to defend against it. Audiences get an opportunity to better understand the newest attacks believed most likely to be used against us in the future.
To make all this happen we are going to need a lot of help from the community. At the bottom of this post is the living master list of everything published. If anything is missing, and we know for a fact something is, please leave a comment containing a link to the research. Not every technique is as powerful as another, but please make every effort to include them anyway; nothing should be considered too insignificant. You never know what method might prove useful to another researcher down the road.
Thank you and good luck!
The Complete List
- Persistent Cookies and DNS Rebinding Redux
- iPhone SSL Warning and Safari Phishing
- RFC 1918 Blues
- Slowloris HTTP DoS
- CSRF And Ignoring Basic/Digest Auth
- Hash Information Disclosure Via Collisions - The Hard Way
- Socket Capable Browser Plugins Result In Transparent Proxy Abuse
- XMLHTTPReqest “Ping” Sweeping in Firefox 3.5+
- Session Fixation Via DNS Rebinding
- Quicky Firefox DoS
- DNS Rebinding for Credential Brute Force
- SMBEnum
- DNS Rebinding for Scraping and Spamming
- SMB Decloaking
- De-cloaking in IE7.0 Via Windows Variables
- itms Decloaking
- Flash Origin Policy Issues
- Cross-subdomain Cookie Attacks
- HTTP Parameter Pollution (HPP)
- How to use Google Analytics to DoS a client from some website.
- Our Favorite XSS Filters and how to Attack them
- Location based XSS attacks
- PHPIDS bypass
- I know what your friends did last summer
- Detecting IE in 12 bytes
- Detecting browsers javascript hacks
- Inline UTF-7 E4X javascript hijacking
- HTML5 XSS
- Opera XSS vectors
- New PHPIDS vector
- Bypassing CSP for fun, no profit
- Twitter misidentifying context
- Ping pong obfuscation
- HTML5 new XSS vectors
- About CSS Attacks
- Web pages Detecting Virtualized Browsers and other tricks
- Results, Unicode Left/Right Pointing Double Angel Quotation Mark
- Detecting Private Browsing Mode
- Cross-domain search timing
- Bonus Safari XXE (only affecting Safari 4 Beta)
- Apple's Safari 4 also fixes cross-domain XML theft
- Apple's Safari 4 fixes local file theft attack
- A more plausible E4X attack
- A brief description of how to become a CA
- Creating a rogue CA certificate
- Browser scheme/slash quirks
- Cross-protocol XSS with non-standard service ports
- Forget sidejacking, clickjacking, and carjacking: enter “Formjacking”
- MD5 extension attack
- Attack - PDF Silent HTTP Form Repurposing Attacks
- XSS Relocation Attacks through Word Hyperlinking
- Hacking CSRF Tokens using CSS History Hack
- Hijacking Opera’s Native Page using malicious RSS payloads
- Millions of PDF invisibly embedded with your internal disk paths
- Exploiting IE8 UTF-7 XSS Vulnerability using Local Redirection
- Pwning Opera Unite with Inferno’s Eleven
- Using Blended Browser Threats involving Chrome to steal files on your computer
- Bypassing OWASP ESAPI XSS Protection inside Javascript
- Hijacking Safari 4 Top Sites with Phish Bombs
- Yahoo Babelfish - Possible Frame Injection Attack - Design Stringency
- Gmail - Google Docs Cookie Hijacking through PDF Repurposing & PDF
- IE8 Link Spoofing - Broken Status Bar Integrity
- Blind SQL Injection: Inference thourgh Underflow exception
- Exploiting Unexploitable XSS
- Clickjacking & OAuth
- Google Translate - Google User Content - File Uploading Cross - XSS and Design Stringency - A Talk
- Active Man in the Middle Attacks
- Cross-Site Identification (XSid)
- Microsoft IIS with Metasploit evil.asp;.jpg
- MSWord Scripting Object XSS Payload Execution Bug and Random CLSID Stringency
- Generic cross-browser cross-domain theft
- Popup & Focus URL Hijacking
- Advanced SQL injection to operating system full control (whitepaper)
- Expanding the control over the operating system from the database
- HTML+TIME XSS attacks
- Enumerating logins via Abuse of Functionality vulnerabilities
- Hellfire for redirectors
- DoS attacks via Abuse of Functionality vulnerabilities
- URL Spoofing vulnerability in bots of search engines (#2)
- URL Hiding - new method of URL Spoofing attacks
- Exploiting Facebook Application XSS Holes to Make API Requests
- Unauthorized TinyURL URL Enumeration Vulnerability
Tuesday, December 15, 2009
Why Microsoft should consider retroactively installing AdBlocking software by default
I’ve been following the developments of Google Android and Chrome OS with much interest lately. Less from a security/technology perspective and more as a lesson in business. One way Google is expanding Android’s presence in the mobile market is by sharing ad revenue with mobile carriers (i.e., Verizon). Instead of incurring software licensing costs (for BlackBerry, Windows Mobile, Palm OS, etc.), carriers may receive revenue when their Android users click on ads. Carriers love this because they get paid to install an OS rather than the other way around! This business model has been called “Less Than Free” and Microsoft should take notice of it because their Windows / Office business model could be at huge long-term risk. Let me explain.
Microsoft obviously makes significant revenue OEMing Windows to PC manufacturers (Dell, etc.). At the same time Microsoft feels some level of price pressure from free, good-enough operating systems like Linux installed on ultra-cheap PCs. Now imagine for a moment if Google decided to leverage Less Than Free for Chrome OS. Google could feasibly pay PC manufacturers to install Chrome OS through an advertising revenue sharing program. PC manufacturers, instead of paying a fee to MS for Windows, get access to a new revenue stream when Chrome OS users click on ads. Additionally, my understanding is you can’t install desktop software on Chrome OS, so the huge money maker that is Microsoft Office is gone on that footprint as well. Such movements would not happen overnight, but the writing is on the wall.
Microsoft is of course not without options when it comes to aggressively fending off the Google powerhouse. One option is to leverage its dominant (50%+) Internet Explorer browser market share. Microsoft could use Windows Update to retroactively install ad-blocking software as a “security feature,” like Adblock Plus on Firefox, in all IE versions (6-8). No doubt users the world over would love it! Less annoying ads, less malware distribution (much of which is spread by online ads), and a snappier Web experience! How could Google complain, they are all about speed, right? :) Oh, right, because it would cut Google and their dual revenue stream (AdSense / AdWords) off at the knees.
Many users, even Firefox users, might actually flock to Internet Explorer if they knew this feature was available! Most don’t even know Adblock Plus exists. This new ad-blocking “security improvement” may also pressure Firefox, the other major browser, to do the same, not wanting to be one-upped by MS in the security department. At least one Mozilla exec is encouraging the use of Bing. Giorgio speculates that this might be why Google Chrome doesn’t have NoScript-like support yet: because they can’t figure out how to do it without enabling effective ad blocking. Makes sense.
Sure, Web publishers whose lifeblood is ad revenue would hate Microsoft, at least temporarily -- but fear not! Those billions in advertising dollars flowing to Google would still need to land somewhere, but where!? MS could open a “blessed” safe, secure, and user-targeted advertiser network! So if Google, or anyone else, wants their ads shown to an IE audience they’d have to pay a tax to MS for the privilege. Still, I’ve long wondered why pay-wall Web publishers haven’t heavily advocated the use of ad blockers to put pressure on their free-content competitors.
I’ve also glossed over a number of important factors that come into play should any of this play out, like antitrust, but Microsoft is presently 1-0, so maybe that possibility doesn’t scare them. Meanwhile, during whatever legal proceedings, Google would be sucking wind revenue-wise. As I wrap up this post, please keep in mind that I’m no industry analyst, just a curious observer who hasn’t vetted these ideas nearly enough.
Friday, December 11, 2009
Best of Application Security (Friday, Dec. 11)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Why Chrome has No NoScript
- Cross-domain search timing
- A checklist approach to security code reviews
- Potent malware link infects almost 300,000 webpages
- HTML5 new XSS vectors
- Pentagon Web Site Vulnerabilities Identified and Perspective on Pentagon "Pwnage"
- Cross-Site Request Forgery For POST Requests With An XML Body
- Security in Syndicated and Federated Systems
- IP Spoofing
- How fake sites trick search engines to hit the top
Friday, December 04, 2009
Best of Application Security (Friday, Dec. 4)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Seamless iframes + CSS3 selectors = bad idea
- Error Handling using the OWASP ESAPI
- Real World Security: Ed Bellis on Web-based Business and Software Security
- What's powering Web apps: Google waving goodbye to Gears, hello to HTML5
- DNS Rebinding Video
- Vulnerability remediation done right and done wrong
- HTTP parser for intrusion detection and web application firewalls
- Unu Cracks a Wall Street Journal Conference Site, Not WSJ.com
- CSRF Isn't Just For Access
- Frightened by Links
Friday, November 27, 2009
Best of Application Security (Friday, Nov. 27)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Injection attacks, its not just SQL!
- You’ve been hacked. Now what?
- The meaning of metrics.
- Symantec exposed passwords,serials… SQL Injection, full database access
- Web Application Security Scanner List
- Facebook Worm Uses Clickjacking in the Wild
- Ping pong obfuscation
- Bypassing CSP for fun, no profit
- Client-side JavaScript file processing may come via File API
- Presentations Available: OWASP AppSec DC 2009
Friday, November 20, 2009
Best of Application Security (Friday, Nov. 20)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- OWASP Top Ten 2010 and The Principles of Secure Development
- Major IE8 flaw makes 'safe' sites unsafe & NoScript author's response
- DNS Rebinding for Scraping and Spamming
- Reversing JavaScript Shellcode: A Step By Step How-To
- Brute-Forcing Compatibility
- Preventing Security Development Errors: Lessons Learned at Windows Live by Using ASP.NET MVC
- OWASP Board - Election Results
- Announcing ModSecurity Handbook
- ESAPI Web Application Firewall released!
- OWASP Top Ten and ESAPI & Part 2
Friday, November 13, 2009
Best of Application Security (Friday, Nov. 13)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- OWASP Top 10 (2010 release candidate 1)
- Flash Origin Policy Issues and FAQ
- Microsoft to release security guidelines for Agile
- WhiteHat Security 8th Website Security Statistics Report Edit Presentation
- Securely deploying cross-domain policy files
- Vulnerability assessment integration with web application firewalls
- ModSecurity Core Rule Set (CRS) <-> PHPIDS Smoketest
- Website Vulnerability Assessment Q4 2009 (EMA Radar Report™ Summary)
- Facebook groups hacked through design flaw
- Microsoft Tries To Censor Bing Vulnerability
OWASP Top 10 (2010 release candidate 1)
The newest version of the OWASP Top 10, the Top 10 Most Critical Web Application Security Risks, has been made available as a release candidate! This project is extraordinarily meaningful to the application security industry as it exercises influence over PCI-DSS, global policy, developer awareness, and product direction. Notable changes were made from the 2007 version to assist organizations in visualizing, understanding, and solving these issues. Now is the time for the application security community to send in their feedback to make the list the best we possibly can by the end of the year when it will be ratified.
Download: presentation (ppt) and the complete document (pdf)
"Welcome to the OWASP Top 10 2010! This significant update presents a more concise, risk focused list of the Top 10 Most Critical Web Application Security Risks. The OWASP Top 10 has always been about risk, but this update makes this much more clear than previous editions, and provides additional information on how to assess these risks for your applications.
For each top 10 item, this release discusses the general likelihood and consequence factors that are used to categorize the typical severity of the risk, and then presents guidance on how to verify whether you have this problem, how to avoid this problem, some example flaws in that area, and pointers to links with more information.

The primary aim of the OWASP Top 10 is to educate developers, designers, architects and organizations about the consequences of the most important web application security weaknesses. The Top 10 provides basic methods to protect against these high risk problem areas – a great start to your secure coding security program."
Friday, November 06, 2009
Best of Application Security (Friday, Nov. 6)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Another fine method to exploit SQL Injection and bypass WAF
- Security and Facebook Platform
- When Is More Important Than Where in Web Application Security
- Apple - XSS Attack
- Cross-subdomain Cookie Attacks
- PILOT: Production in lieu of testing (AgoraCart FAIL)
- Facebook and MySpace security: backdoor wide open, millions of accounts exploitable
- SSL and TLS Authentication Gap vulnerability discovered
- Using Blended Browser Threats involving Chrome to steal files on your computer
- LinkedIN With 'Bill Gates'
Friday, October 30, 2009
Best of Application Security (Friday, Oct. 30)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Detecting Malice eBook
- Black Box vs White Box. You are doing it wrong.
- The Barack Obama Donations Site was Hacked…err, no it wasn’t.
- New Q3'09 malware data, and the Dasient Infection Library
- Infrastructure fingerprinting via XSS
- DNS Rebinding in Firefox
- Output Validation using the OWASP ESAPI
- Google Wave as a Tool for Hacking
- Announcing the release of the Enhanced Mitigation Evaluation Toolkit
- Asset Valuation (couldn't settle on just one):
Wednesday, October 28, 2009
Black Box vs White Box. You are doing it wrong.
A longstanding debate in Web application security, heck all of application security, is which software testing methodology is the best -- that is -- the best at finding the most vulnerabilities. Is it black box (aka: vulnerability assessment, dynamic testing, run-time analysis) or white box (aka: source code review, static analysis)? Some advocate that a combination of the two will yield the most comprehensive results. Indeed, they could be right. Closely tied into the discussion is the resource (time, money, skill) investment required, because getting the most security bang for the buck is obviously very important.
In my opinion, choosing between application security testing methodologies based upon a vulnerabilities-per-dollar metric is a mistake. They are not substitutes for each other, especially in website security. The reasons for choosing one particular testing methodology over the other are very different. Black and white box testing measure very different things. Identifying vulnerabilities should be considered a byproduct of the exercise, not the goal. When testing is properly conducted, the lack or reduction of discovered vulnerabilities demonstrates improvement of the organization, not the diminished value of the prescribed testing process.
If you reached zero vulnerabilities (unlikely), would it be a good idea to stop testing? Of course not.
Black box vulnerability assessments measure the hackability of a website given an attacker with a certain amount of resources, skill, and scope. We know that bad guys will attack essentially all publicly facing websites at some point in time, so it makes sense for us to learn about the defects before they do. As such, black box vulnerability assessments are best defined as an outcome based metric for measuring the security of a system with all security safeguards in place.
White box source code reviews, on the other hand, measure and/or help reduce the number of security defects in an application resulting from the current software development life-cycle. In the immortal words of Michael Howard regarding Microsoft’s SDL mantra, “Reduce the number of vulnerabilities and reduce the severity of the bugs you miss.” Software has bugs, and that will continue to be the case. Therefore it is best to minimize them to the extent we can in an effort to increase software assurance.
Taking a step back, you might reasonably select a particular product/service using vulns-per-dollar as one of the criteria, but again, not the testing methodology itself. Just as you wouldn’t compare the value of network pen-testing against patch management, firewalls against IPS, and so on. Understanding first what you want to measure should be the guide to testing methodology selection.
Friday, October 23, 2009
Best of Application Security (Friday, Oct. 23)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- The real cost of software security
- Porn, CSS History Hacking, User Recon and Blackmail
- Information Asset Value: Some Cold-Hearted Calculations
- How to Value Digital Assets (Web Sites, etc.)
- Happy 900 and RSnakes on a Plane!
- Hacking Crazy Taxi
- We've been blind to attacks on our Web sites
- First Impressions on Security in Google Wave
- OWASP Podcast #46 Luca Carettoni and Stefano Di Paola (HTTP Parameter Pollution)
- Web Protection Library – CTP Release Coming Soon
Sunday, October 18, 2009
Best of Application Security (Friday, Oct. 16)
Note: Delayed due to travel requirements.
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- OWASP Podcast #44 Interview with Andy Steingruebl
- Cross-Domain Security
- (WASC) Web Application Security Statistics 2008
- Adoption of X-FRAME-OPTIONS header
- Integrating WAFs And Vulnerability Scanners
- Regular Expressions – the secure developers best friend
- Sneaky Microsoft plug-in puts Firefox users at risk
- The Month of Facebook Bugs Report
- Transport Layer Protection Cheat Sheet
- What Security Means to a Healthcare CIO
Friday, October 09, 2009
Best of Application Security (Friday, Oct. 9)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- null-prefix certificate for paypal
- Statistics from 10,000 leaked Hotmail passwords
- OWASP Interview with Andy Steingruebl
- Web Application Security Scanner Evaluation Criteria Version 1.0
- All about Website Password Policies
- 9 Ways to Improve Application Security After an Incident
- CSS History Hack Used To Ban Torrent Users
- BSIMM Begin
- Identifying Denial of Service Conditions Through Performance Monitoring
- XSS Protection by Default in Rails 3.0
Wednesday, October 07, 2009
All about Website Password Policies
Passwords are the most common way for people to prove to a website that they are who they say they are, as they should be the only ones who know it. That is of course unless they share it with someone else, somebody steals it, or someone guesses it. This identity verification process is more commonly known as “authentication.” There are three ways to perform authentication:
- Something you have (physical keycard, USB stick, etc.) or have access to (email, SMS, fax, etc.)
- Something you are (fingerprint, retina scan, voice recognition, etc.)
- Something you know (password or pass-phrase).
Website Password Policy
While the process seems straightforward, organizations should never take password selection lightly, as it will significantly affect the user experience, customer support volume, and the level of security/fraud. The fact is users will forget their passwords, pick easy passwords, and in all likelihood share them with others (knowingly and unknowingly). Allowing users to choose their password should be considered an opportunity to set the tone for how a website approaches security, which originates from solid business requirements.
Defining an appropriate website password policy is a delicate balancing act between security and ease-of-use. Password policies enforce the minimum bar for password guessability, how often passwords may be changed, what they can’t be, and so on. For example, if a user could choose a four-character password using lower-case letters (a through z), and they would if they could, an attacker could theoretically try them all (26^4 or 456,976) in roughly a day at only 5 guesses a second. This degree of password guessing, also known as a brute-force attack, is plausible with today’s network and computing power, making such a policy far too weak by most standards.
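For anyone who wants to check the arithmetic, a few lines of Python reproduce the numbers above (the 5-guesses-per-second rate is the same one assumed in the example):

# Keyspace of a 4-character, lowercase-only password and the time to exhaust it.
keyspace = 26 ** 4                     # 456,976 possible passwords
guesses_per_second = 5
hours = keyspace / guesses_per_second / 3600
print(f"{keyspace:,} passwords, ~{hours:.1f} hours to try them all")  # ~25.4 hours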
On the opposite end of the spectrum, a website could enforce 12-character passwords that must have upper- and lowercase letters and special characters, and be changed every 30 days. This password policy definitely makes passwords harder to guess, but would also likely suffer a user revolt due to increased password recovery actions and customer support calls, not to mention more passwords written down. Obviously this result defeats the purpose of having passwords to begin with. The goal for a website password policy is finding an acceptable level of security to satisfy users and business requirements.
Length Restrictions
In general practice on the Web, passwords should be no shorter than six characters; for systems requiring a higher degree of security, eight or more is advisable. Some websites limit the length of passwords that their users may choose. While every piece of user-supplied data should possess a maximum length, this behavior is counter-productive. It’s better to let the user choose their password length, then chop the end to a reasonable size when it’s used or stored. This ensures both password strength and a pleasant user experience.
Character-Set Enforcement
Even with proper length restrictions in place, passwords can often still be guessed in a relatively short amount of time with dictionary-style guessing attacks. To reduce the risk of success, it’s advisable to force the user to add at least one uppercase, number, or special character to their password. This exponentially increases the number of passwords available and reduces the risk of a successful guessing attack. For more secure systems, it’s worth considering enforcing more than one of these options. Again, this requirement is a balancing act between security and ease-of-use.
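As an illustration only (the function name and thresholds are made up, not from any standard library), a character-set check might look like this:

import re

# Require a minimum length plus at least one of: an uppercase letter, a digit,
# or a special character. Raise minimum_classes for higher-security systems.
def meets_charset_policy(password, minimum_classes=1):
    classes = [
        re.search(r"[A-Z]", password),         # uppercase
        re.search(r"[0-9]", password),         # digit
        re.search(r"[^A-Za-z0-9]", password),  # special character
    ]
    return len(password) >= 6 and sum(1 for c in classes if c) >= minimum_classes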
Simple passwords
If you let them, users will choose the simplest and easiest-to-remember password they can. This will often be their name (perhaps backwards), username, date of birth, email address, website name, something out of the dictionary, or even “password.” Analysis of leaked Hotmail and MySpace passwords shows this to be true. In just about every case, websites should prevent users from selecting these types of passwords, as they are the first targets for attackers. As reader @masterts has wisely said, "I'd gladly take "~7HvRi" over "abc12345", but companies need good monitoring (mitigating controls) to catch the brute force attacks."
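A simple-password screen can be as basic as comparing the candidate against the user's own account details and a small denylist. The sketch below is only illustrative; the denylist contents and helper name are hypothetical:

COMMON_PASSWORDS = {"password", "123456", "abc12345", "qwerty", "letmein"}

def is_simple(password, username, email, site_name):
    p = password.lower()
    related = {username.lower(), username.lower()[::-1],   # name, name backwards
               email.split("@")[0].lower(), site_name.lower()}
    return p in COMMON_PASSWORDS or p in related or p.isdigit()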
Strength Meters
During account registration, password recovery, or password changing processes, modern websites assist users by providing a dynamic visual indication of the strength of their passwords. As the user types in their password, a strength meter dynamically adjusts coinciding with the business requirements of the website. The Microsoft Live sign-up Web page is an excellent example:

Notice how the password recommendations are clearly stated to guide the user to selecting a stronger password. As the password is typed in and grows in strength, the meter adjusts:

There are several freely available JavaScript libraries that developers may use to implement this feature.
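The meter itself runs client-side in JavaScript, but the scoring logic behind it is simple. Here is a rough sketch of the kind of heuristic such libraries apply; the thresholds and labels are arbitrary examples, not any particular library's algorithm:

def strength_label(password):
    score = 0
    score += int(len(password) >= 8)
    score += int(len(password) >= 12)
    score += int(any(c.isupper() for c in password) and any(c.islower() for c in password))
    score += int(any(c.isdigit() for c in password))
    score += int(any(not c.isalnum() for c in password))
    return ["weak", "weak", "medium", "medium", "strong", "strong"][score]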
Normalization
When passwords are entered, any number of user errors may occur that prevent them from being typed in accurately. The most common reason is “caps lock” being turned on, but whitespace is problematic as well. The trouble is users tend not to notice, because their password is hidden behind asterisk characters that defend against malicious shoulder surfers. As a result, users will mistakenly lock their accounts from too many failed guesses, send in customer support emails, or resort to password recovery. Either way it makes for a poor user experience.
What many websites have done is resort to password normalization functions, meaning they’ll automatically lowercase, remove whitespace from, and snip the length of passwords before storing them. Some may argue that doing this counteracts the benefits of character-set enforcement. While this is true, it may be worth the tradeoff when it comes to benefiting the user experience. If passwords are still at least six characters in length and must contain a number, and are backed by an acceptable degree of anti-brute-force and aging measures, there should certainly be enough strength left in the system to thwart guessing attacks.
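If normalization is adopted, the transformation itself is trivial. A sketch (the length cap is an arbitrary example):

MAX_LENGTH = 64  # arbitrary cap; every user-supplied field should have one

def normalize(password):
    # Lowercase, strip all whitespace, and truncate before hashing/storing.
    return "".join(password.split()).lower()[:MAX_LENGTH]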
Storage (Salting)
Storing passwords is a bit more complicated than it appears, as important business decisions need to be made. The decision is, “Should the password be stored in a recoverable form or not?” and there are tradeoffs worth considering. Today’s best-practice standards say that passwords should always be stored in a hash digest form where even the website owner cannot retrieve the plain text version. That way, in the event that the database is either digitally or physically stolen, the plain text passwords are not. The drawback to this method is that when users forget their passwords they cannot be recovered to be displayed on screen or emailed to them, a feature websites with less strict security requirements enjoy.
If passwords are to be stored, a digest compare model is preferred. To do this we take the user’s plain text password, append it to a random two-character (or greater) salt value, and then hash digest the pair.
password_hash = digest(password + salt) + salt
The resulting password hash, plus the salt appended to the end in plain text, is what will get stored in the user’s password column of the database. This method has the benefit that even if someone stole the password database they cannot retrieve the passwords -- this was not the case when PerlMonks was hacked. The other benefit, made possible by salting, is that no two passwords will have the same digest. This means that someone with access to the password database can’t tell if more than one user has the same password.
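Here is a minimal sketch of that scheme in Python, using the SHA-256 digest called for in the minimum policy below. Note that current practice favors a longer random salt and a deliberately slow algorithm such as bcrypt or PBKDF2, so treat this as an illustration of the formula above rather than a recommendation:

import hashlib
import secrets

def store_password(password):
    salt = secrets.token_hex(1)  # two hex characters, matching the 2-character minimum
    digest = hashlib.sha256((password + salt).encode()).hexdigest()
    return digest + salt         # digest(password + salt) + salt

def verify_password(password, stored):
    digest, salt = stored[:-2], stored[-2:]
    candidate = hashlib.sha256((password + salt).encode()).hexdigest()
    return secrets.compare_digest(candidate, digest)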
Brute-Force Attacks
There are several types of brute-force password-guessing attacks, each designed to break into user accounts. Sometimes they are conducted manually, but these days most are automated.
1. Vertical Brute-Force: One username, guessing with many passwords. The most common technique, used against individual accounts with easy-to-guess passwords.
2. Horizontal Brute-Force: One password, guessing with many usernames. This is a more common technique on websites with millions of users where a significant percentage of them will have identical passwords.
3. Diagonal Brute-Force: Many usernames, guessing with many passwords. Again, a more common technique on websites with millions of users resulting in a higher chance of success.
4. Three Dimensional Brute-Force: Many usernames, guessing with many passwords while periodically changing IP addresses and other unique identifiers. Similar to a diagonal brute-force attack, but intended to disguise itself from various anti-brute-force measures. Yahoo has been shown to have been enduring this form of brute-force attack.
Depending on the type of brute-force attack, there are several mitigation options worth considering, each with its own pros and cons. Selecting the appropriate level of anti-brute-force will definitely affect the overall password policy. For example, a strong anti-brute-force system will slow down guessing attacks to the point where password strength enforcement may not need to be so stringent.
Before anti-brute-force systems are triggered, a threshold needs to be set for failed password attempts. On most Web-enabled systems five, 10, or even 20 guesses is acceptable. It provides the user with enough chances to get their password right in case they forget what it is or typo several times. While 20 attempts may seem like too much room for guessing attacks to succeed, provided password strength enforcement is in place, it really is not.
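As a toy illustration of the threshold idea (in-memory only; the names and numbers are made up), a failed-attempt counter with a lockout window might look like:

import time
from collections import defaultdict

FAILED_THRESHOLD = 10        # attempts allowed inside the window
LOCKOUT_SECONDS = 15 * 60    # sliding window / lockout duration

_failures = defaultdict(list)

def record_failure(username):
    _failures[username].append(time.time())

def is_locked_out(username):
    cutoff = time.time() - LOCKOUT_SECONDS
    _failures[username] = [t for t in _failures[username] if t >= cutoff]
    return len(_failures[username]) >= FAILED_THRESHOLD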
Aging
Passwords should be periodically changed because the longer they are around, the more likely they are to be guessed. The recommended time between password changes will vary from website to website, anywhere from 30-365 days is typical. For most free websites like WebMail, message boards, or social networks, 365 days or never is reasonably acceptable. For eCommerce websites receiving credit cards, every six months to a year is fine. For higher security needs or an administrator account, 30-90 days is more appropriate. All of these recommendations assume an acceptable password strength policy is being enforced.
For example, let’s say you are enforcing a minimalist six-character password, with one numeric character, and no simple passwords allowed. In addition, there is a modest anti-brute-force system feasibly limiting the number of password guessing attempts an attacker can place on a single account to 50,000 per day (1,500,000 per mo. / 18,000,000 per yr.). That means for an attacker to exhaust just 1% of the possible password combinations, 36^6 (a-z and 0-9) or 2,176,782,336, would take a little over a year.
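Again, the math is easy to verify:

keyspace = 36 ** 6             # a-z plus 0-9, six characters: 2,176,782,336
guesses_per_year = 18_000_000  # the 50,000-per-day limit from the example above
years = (keyspace * 0.01) / guesses_per_year
print(f"~{years:.1f} years to cover 1% of the keyspace")  # ~1.2 years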
The thing you have to watch out for with password aging is a little game people play when forced to change passwords. Often when prompted they’ll change their password to something new, then quickly change it back to their old one because it’s easier for them to remember. Obviously this defeats the purpose and benefits of password aging, which has to be compensated for in the system logic.
Minimum Strength Policy
All things considered, the following should be treated as the absolute minimum standard for a website password policy (a sketch of a validator enforcing it follows the list). If your website requires additional security, these properties can be increased.
Minimum length: six
Choose at least one of the following character-set criteria to enforce:
- Must contain alpha-numeric characters
- Must contain both upper-case and lower-case characters
- Must contain both alpha and special characters
Simple passwords allowed: No
Aging: 365 days
Normalization: yes
Storage: 2-character salted SHA-256 digest
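Putting those properties together, a minimal validator for this baseline policy might look like the following. The function name and tiny denylist are illustrative only; real deployments would pair this with the normalization, salting, and anti-brute-force measures described above:

import re

SIMPLE_PASSWORDS = {"password", "123456", "abc12345", "qwerty"}

def meets_minimum_policy(password, username=""):
    if len(password) < 6:
        return False
    if password.lower() in SIMPLE_PASSWORDS or password.lower() == username.lower():
        return False
    # Satisfy at least one of the character-set criteria listed above.
    alpha_numeric = bool(re.search(r"[A-Za-z]", password)) and bool(re.search(r"[0-9]", password))
    mixed_case = any(c.islower() for c in password) and any(c.isupper() for c in password)
    alpha_special = bool(re.search(r"[A-Za-z]", password)) and bool(re.search(r"[^A-Za-z0-9]", password))
    return alpha_numeric or mixed_case or alpha_special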
Friday, October 02, 2009
Cloud/SaaS will do for websites what PCI-DSS has not
Make them measurably more secure. If a would-be Cloud/Software-as-a-Service (SaaS) customer is concerned about security, and they should be since their business is on the line, then security should be the vendor’s concern as well. Unless the Cloud/SaaS vendor is able to meet a customer’s minimum requirements, they risk losing the business to a competitor who can.
This market dynamic encourages the proper alignment of business interests and establishes a truly reasonable minimum security bar. The other significant benefit of the Cloud/SaaS business model is that multi-tenant systems are at least as secure as the most demanding customer requires. Security investments meant to satisfy one customer directly benefit the rest.
Compliance on the other hand is designed to compensate for times when normal market forces fail to provide an adequate alignment of interests. For example, organizations that are in a position to protect data are not responsible for the losses. The payment card industry found itself in one of those situations when it came to cardholder information.
Unfortunately compliance, specifically PCI-DSS, is in practice implemented in a much different way than the aforementioned market forces. A checklist approach is most common, where strategic planning is generally not incentivized. The result is performing a bunch of “best practices” that may or may not effect a better outcome, because “security” is not the primary goal. Satisfying audit requirements is.
The interesting thing about SaaS is the last word, "service." Customers are buying a service, not a product with a lopsided, zero-liability end-user licensing agreement (EULA). Customers may demand vendors provide assurances by passing third-party vulnerability assessments, encrypting their data, allowing onsite visits, or taking on contractual liability in the event of a breach or service degradation -- all before signing on the dotted line. This requires vendors to implement safeguards customers may not be able to put in place for themselves without incurring significant expense. These are serious considerations to be made before outsourcing sales force automation, enterprise resource planning, email hosting, and so on.
Sure, there are Cloud/SaaS vendors with equally customer-unfriendly EULAs and no SLAs or security guarantees to speak of, but I am confident this only opens the door for healthy competition. Customers WANT security; the question is whether they are willing to pay a premium for it. If so, Cloud/SaaS vendors who view security as a strategic differentiator could find themselves as the new market leaders. I believe this form of competition is doing a lot more to improve website security than how PCI is typically applied. At least, that has been my experience so far.
Best of Application Security (Friday, Oct. 2)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- A Glimpse Into the Future of Browser Security
- OWASP Interview with David Rice
- NSA comparison of source code analysis tools
- Web Application Security at the Edge is More Efficient Than In the Application
- We had some bugs, and it hurt us.
- Input Validation using the OWASP ESAPI
- Factoring Malware Into Your Web Application Design
- Gmail finally added CSRF protection to logins
- A Stick Figure Guide to the Advanced Encryption Standard (AES)
- 13 Things a Web Application Attacker Won't Tell You
Friday, September 25, 2009
Best of Application Security (Friday, Sep. 25)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Strict Transport Security
- ForceHTTPS: Protecting High-Security Web Sites from Network Attacks
- Strict Transport Security in NoScript
- Email-stealing worm slithers across LiveJournal
- CSRF attacks and forensic analysis
- Basic Flaw Reveals Source Code to 3,300 Popular Websites
- New Free Web Application Firewall 'Lives' In The App
- Using Microsoft's AntiXSS Library 3.1
- SQL/JavaScript Hybrid Worms As Two-stage Quines
- Study Shows Open-source Code Quality Improving
Friday, September 18, 2009
Best of Application Security (Friday, Sep. 18)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- SANS The Top Cyber Security Risks
- Mozilla catches half of Firefox users running insecure Flash
- Fortify hands-on demo/session at forthcoming OWASP Northern Virginia Chapter
- Bruce Schneier: The Future of the Security Industry: IT is Rapidly Becoming a Commodity
- Two New Security Tools for your SDL tool belt (Bonus: a “7-easy-steps” whitepaper)
- Tool: New Version Of BeEF Released!
- Whitepaper: Analysis of an unknown malicious JavaScript
- 671% increase of malicious Web sites
- PHPIDS 0.6.2 ready to use
- A Nice Big FriendFeed Bug: Impersonate Anyone!
Friday, September 11, 2009
Best of Application Security (Friday, Sep. 11)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Disclosure standards and why they're critical
- ReDoS (Regular Expression Denial of Service) Revisited
- Binging - Footprinting and Discovery Tool
- RBS WorldPay hacked, full database access
- Obfuscating your IP using a Burp/Tor/Privoxy combination
- Identifying Anomalous Behavior
- The Security Implications Of Google Native Client
- SSL Threat Model
- Cross Widget DOM Spying
- New Book "Hacking: The Next Generation"
Friday, September 04, 2009
Best of Application Security (Friday, Sep. 4)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Cross-protocol XSS with non-standard service ports
- Flash Cookie Forensics
- apache.org incident report for 8/28/2009
- Microsoft IIS 5/6 FTP 0Day released
- UK Parliament website hack exposes shoddy passwords
- Outsourcing and Top-Line Security Budget Justification
- Production-Safe Website Scanning Questionnaire
- Revealing Facebook Application XSS Holes
- Flaw In Sears Website Left Database Open To Attack
- Pwning Opera Unite with Inferno’s Eleven
Thursday, September 03, 2009
Outsourcing and Top-Line Security Budget Justification
Very often security budgets are justified through risk management, closely related to loss avoidance or boosting the bottom-line (income after expenses). A security manager might say to the CIO, "If we spend $X on Y, we’ll reduce risk of loss of $A by B%, resulting in an estimated $C financial upside for our organization."
There are indeed a number of things that could negatively impact the bottom-line should an incident occur. Fraud, fines, lawsuits, incident response costs, and downtime are the most common. Heartland for example, the organization at the center of the largest card data breach in U.S. history, said the event has cost the company $32 million so far in 2009.
For the last several years, data compromise has been a key driver for many companies to take Web application security seriously. More hacks translate into bigger security budgets. "We must spend $X on Y so that Z never happens again, which would save us an estimated $C in incident-related loss." I guess we can thank the mass SQL injection worms for demonstrating why being proactive is important, if nothing else.
Recently though, I’m witnessing a shift, perhaps the start of a trend. A shift in which security spending is justified because it directly affects the top-line (income before expenses). "If we spend $X on Y, we’ll make customers happy, which has an estimated financial upside of $C for our organization." Let’s back up and examine this further.
A big part of my job is speaking with WhiteHat Sentinel customers, many of whom are in the business of providing Software-as-a-Service (SaaS) solutions for IT outsourcing -- a fast-growing market as organizations look to cut costs. I’m hearing more stories of their prospective enterprise customers, concerned for the safety of their data, putting these vendors under the security microscope. Enterprises understand it is their butt on the line should anything go wrong, even if the vendor is to blame.
To manage the risks of outsourcing, enterprises are requiring the SaaS vendor to pass a Web application assessment before they sign up. If the vendor already has a reputable third-party firm providing such assessments, such as WhiteHat Security, then more often than not the reports will satisfy the prospective client, provided the findings are clean. If not, then the enterprise will engage an internal team or a third party (again, like WhiteHat) at their own expense, which is when things get really interesting.
If serious issues are identified, which is fairly common, the best-case scenario is that the sales cycle slows down until the vulnerabilities are fixed. This could easily take weeks, if not more. It could also initiate disruptive fire drills in which developers are pulled from projects creating new features and instead instructed to resolve vulnerabilities NOW for the sake of winning near-term business. The consequences are real and potentially devastating to a business. On one hand, the account could be lost entirely because of a loss of the customer's confidence. Worse still, if word gets around that your security is subpar, the ramifications are clear. When sales are lost like this, especially in the current economy, security budgets based on increasing the top-line become really attractive.
For this reason it seems the move to “the cloud” is incentivizing organizations to make a substantive investment in Web application security or risk losing business from savvy customers. Even more amazing is that after vendors put a program in place, the investment can be used as a competitive advantage. They’ll hype the fact to customers by volunteering their security reports and program details upfront. As enterprises shop SaaS payment processors, e-commerce hosting, financial applications, etc., they will expect to receive the same from other companies, who may not be in a position to deliver.
If you are a security manager, take the time to ask the sales department how often “security” is part of the buying criteria for customers. If it is, that could be an excellent opportunity to align yourself with the business.
Anyone else seeing this trend?
Monday, August 31, 2009
Production-Safe Website Scanning Questionnaire

Even in those websites with the most stringent change control processes, experience shows identical production and preproduction deployments are extremely rare. It is incredibly common to find hidden files and directories containing source code and logs, mismatched security configurations, infrastructure differences, and more, each impacting the website’s true security posture. Also, for those websites required to maintain PCI-DSS 6.6 compliance, the standard mandates scanning publicly facing websites. If scanning production websites for vulnerabilities is important to you, and it should be, then production-safety is likely equally important.
The more thorough a website vulnerability scan is, the greater the risk of disruption due to exercising potentially sensitive functionality. For example, an authenticated scanner is capable of testing more areas of an application’s attack surface than one that is unauthenticated. The same is true of a scanner custom-configured to process multi-form work flows (i.e. an online shopping cart). Furthermore, scanners testing for most of the ~26 known website vulnerability classes similarly increase the odds of causing damage. “Damage” may be as minor as flooding customer support with error email, all the way up to a denial-of-service condition.
Clearly production-safe scanning is a legitimate concern. Below is a questionnaire about what organizations ought to know, so they can better understand the risks of production scanning and mitigate them accordingly. Please feel free to use this document to probe vendors about how their offerings ensure production-safe website scanning while achieving required levels of testing coverage and depth. As a guide, I’ve supplied the answers that apply to WhiteHat Sentinels beneath each question. Vendors may choose to do the same in the comments below, on their sites and blogs, or of course privately by customer request.
1) How is the scanner tuned, manually or dynamically, so as not to exhaust website resources, which could lead to a Denial-of-Service?
Scanners must share resources with website visitors. Open connections, bandwidth, memory, and disk space usage by the scanner can seriously impact operations. High-end scanners can easily generate a load equivalent to a single user, or even up to a hundred or more, unintentionally causing serious events like DoS when resources are exhausted. Each website’s infrastructure should be considered unique in load capacity. Scanning processes should be adjusted accordingly.
WhiteHat Sentinel
WhiteHat Sentinel scans consume the load of a single user. They are single threaded, never exceeding a user-defined number of requests per second, and generally do not download static content (e.g. images), thereby reducing bandwidth consumption. WhiteHat Sentinel also monitors the performance of the website itself. If performance degrades for any reason, scan speed slows down gracefully. If a website appears to be failing to respond or incapable of creating new authentication sessions, in most cases Sentinel will stop testing and wait until adequate performance returns before resuming.
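To illustrate the general idea -- this is only a sketch of single-threaded, rate-limited scanning with graceful backoff, not the actual Sentinel implementation; the rate ceiling and latency threshold are assumed values -- the pacing logic might look something like this:

import time
import requests   # assumed third-party HTTP client, used only for illustration

MAX_REQUESTS_PER_SECOND = 5      # user-defined ceiling (assumed value)
SLOW_RESPONSE_THRESHOLD = 3.0    # seconds of latency before backing off (assumed value)

session = requests.Session()     # single-threaded: one connection context, one request at a time

def paced_get(url):
    """Issue one request, sleep long enough to respect the rate ceiling,
    and back off further if the site appears to be struggling."""
    start = time.time()
    response = session.get(url, timeout=30)
    elapsed = time.time() - start
    time.sleep(max(0.0, (1.0 / MAX_REQUESTS_PER_SECOND) - elapsed))
    if elapsed > SLOW_RESPONSE_THRESHOLD or response.status_code >= 500:
        time.sleep(elapsed * 2)   # graceful slowdown while the site recovers
    return response

A real scanner would also track authentication session health and pause entirely when the site stops responding, as described above.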
2) How are multi-form application flows marked as safe-for-testing and/or removed from testing?
Depending on the website, complex application functionality can only be located by filling out multi-form work flows with valid data. Insurance applications, bill payment, password recovery, purchase processes, and more are prime examples. Some application flows are more sensitive than others, including those with a direct monetary cost when activated. Automatically scanning these areas can have consequences.
WhiteHat Sentinel
WhiteHat Sentinel does NOT automatically fill out or test HTML forms. In our experience doing so is extremely dangerous without significant preexisting knowledge about the website. Each HTML form discovered during crawling, including multi-form process flows, is custom configured by our Operations Team with valid data. The Operations Team also marks individual forms as safe-for-testing. Those HTML forms that cannot be tested safely are either tested manually or not at all.
3) Does the scanner send executable attack payloads? If so, how are such tests made safe?
To identify vulnerabilities, scanners may inject executable, even malicious, payloads. For SQL Injection, testing may include executing back-end system commands, eliciting error messages, and/or retrieving or modifying data -- each potentially impacting or halting database performance. Another example is found when scanning for Cross-Site Scripting. Scanners may submit browser-interpretable payloads (i.e. live HTML/JavaScript code) that could be returned in the website's code in an unknown number of locations. Should these payloads be encountered by Web visitors, they could easily interfere with or break the user experience entirely.
WhiteHat Sentinel
Sentinel, by default, performs the majority of its tests using proprietary pseudo-code. This enables the scanner to identify vulnerabilities without the payload being interpreted by parsers within the application, which helps ensure that no errant process execution occurs that could negatively impact production software. Also, Sentinel does not perform any requests that are non-idempotent, involve write activity, or are potentially destructive without explicit authorization from either a security engineer or the asset owner.
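As a rough illustration of the concept (not Sentinel's actual test logic; the function name and marker format are invented), a reflection test can use a unique, inert marker string instead of live script and only report when the marker comes back unencoded:

import uuid
import requests   # assumed HTTP client, for illustration only

def test_reflection(url, param):
    """Inject an inert, unique marker and check whether it is reflected unencoded.
    No executable HTML/JavaScript ever reaches the application."""
    marker = "wh-" + uuid.uuid4().hex        # harmless token, trivially greppable
    probe = marker + "<>\"'"                 # the characters that matter for XSS, not a live payload
    body = requests.get(url, params={param: probe}, timeout=30).text
    if probe in body:
        return "special characters reflected unencoded (potential XSS)"
    elif marker in body:
        return "input reflected, but special characters appear to be encoded"
    return "no reflection observed"

Because the probe contains no executable HTML or JavaScript, a visitor who stumbles across a cached or logged copy sees only a harmless token.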
4) How are links (URLs) that point to sensitive application functionality marked safe-for-testing and/or removed from testing?
According to the RFC specification, HTTP GET requests (i.e. those generated by hyperlinks) should be treated as idempotent by applications, meaning no significant action other than data retrieval should be taken, even upon multiple link clicks. In practice though, many links (URLs) discovered during (authenticated) crawls can indeed delete, modify, and submit data, potentially causing disruption very similar to non-idempotent POST requests.
WhiteHat Sentinel
As part of the WhiteHat Sentinel assessment process, customers may alert us to functionality that may execute dangerous non-idempotent requests. When such areas are identified, they can be ruled out of the scanning process and tested manually. Also, authenticated scans are restricted to special test accounts, so any potential negative impact is confined to those accounts and does not extend to other users.
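One simple way to picture this safeguard (illustrative only; the URL patterns are made up) is an exclusion filter applied to every crawled link before it is queued for testing:

import re

# Hypothetical exclusion rules for links known to trigger state changes.
EXCLUDED_URL_PATTERNS = [
    re.compile(r"/delete\b", re.IGNORECASE),
    re.compile(r"[?&]action=(remove|cancel|purchase)\b", re.IGNORECASE),
    re.compile(r"/admin/"),
]

def safe_to_test(url):
    """Return False for URLs the asset owner has flagged as non-idempotent."""
    return not any(pattern.search(url) for pattern in EXCLUDED_URL_PATTERNS)

urls = ["/account/view?id=7", "/account/delete?id=7", "/cart?action=purchase&item=3"]
print([u for u in urls if safe_to_test(u)])   # only the read-only view URL remains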
Friday, August 28, 2009
Best of Application Security (Friday, Aug. 28)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Apache.org Compromised
- Are Web Application Security Testing Tools a Waste of Time and Money?
- When Mass SQL Injection Worms Evolve...Again
- Homegrown Application Security Program
- Mass SQL injection attacks still scaling up
- Research: 80% of Web users running unpatched versions of Flash/Acrobat
- Altered Sears Web Site Offers Grill to 'Cook Babies'
- Businesses Reluctant to Report Online Banking Fraud
- Massive Twitter Cross-Site Scripting Vulnerability
- Flash attack vectors (and worms)
Friday, August 21, 2009
Best of Application Security (Friday, Aug. 21)
Ten of Application Security industry's coolest, most interesting, important, and entertaining links from the past week -- in no particular order. Regularly released until year end. Then the Best of Application Security 2009 will be selected!
- Security bugs crawl all over financial giant’s website (Ameriprise Website Riddled With Security Vulnerabilities For At Least Five Months)
- TJX Hacker Charged With Heartland, Hannaford Breaches
- Adobe Flex 3.3 SDK DOM-Based XSS
- Web Security is about Scalability
- Bypassing OWASP ESAPI XSS Protection inside Javascript
- Facebook personal info leak vulnerability
- Super-safe Web browsing
- Overcoming Objections to an Application Security Program
- Security No-Brainer #9: Application Vulnerability Scanners Should Communicate with Application Firewalls
- WASC WHID 2009 Bi-Annual Report
Wednesday, August 19, 2009
Website VA Vendor Comparison Chart
Update: 09.03.2009: "Production-Safe Website Scanning Questionnaire" posted to add context to the chart and ensuing discussion. Also, new vendors have been added to the sheet.
Update 08.24.2009: Billy Hoffman (HP) and I have been having some email dialog about the production-safe heading. Clearly this is a contentious issue. Scanning coverage and depth are directly tied to the risk of production-safety, and every vendor has a slightly different approach to how they address the concerns. Basically, I asked that if vendors make a production-safe claim, they have some reasonable verbiage/explanation for how they do so -- no assumption of production safety will be made. Billy publicly posted how HP does so (complete with the highlights of our dialog) and got a check mark. Simple. Still, for the immediate future I'm going to eliminate the heading from the chart until I can draft a decent set of criteria that will make things more clear. This of course will be open to public scrutiny. In the meantime, if any vendors want to post links about how they achieve "production-safe" scanning, they should feel free to do so.
As you can imagine I spend a good portion of my time keeping a close watch on the movements of the website vulnerability assessment market. Part of that requires identifying the different players, who is really offering what (versus what they say they do), how they do it, how well, and for how much. Most of the time it is easier said than done, parsing vague marketing literature, and it is never "done." Every once in a while I post a chart listing the notable SaaS/Cloud/OnDemand/Product vendors and how some of their key features compare, not so much in degree, but at least in kind. If anything is missing or incorrect, which there probably is, please comment and I’ll be happy to update.

