Conclusions
According to the survey of 140 respondents, a record turnout, opinion of the current security state of the Web is dismal. In a nutshell, here is what the results say to me:
- The vast majority of websites have at least one serious vulnerability
- Many websites are being broken into, but no one knows about it, and that will increase exponentially over the next few years
- There is NO WAY the average user can protect themselves from being exploited
- The standard mandated by the credit card industry, PCI-DSS, makes little difference to the security of a website
- Web application vulnerability scanners miss just about as many of the most common issues as they find
- About half of respondents look for vulnerabilities in websites not their own from time to time, but are cautious about it for fear of legal issues.
- And if they were to find vulnerabilities, it's a toss-up whether they'd inform the website owner, for the same reasons.
- Interestingly, more than half said they would not sell the vulnerability data.
Description
It’s finally that time again where we stop hearing about what I think and start listening to what you think. The Web Application Security Professionals Survey helps us learn more about the web application security industry and the community participants. We attempt to expose various aspects of web application security we previously didn't know, understand, or fully appreciate. From time to time I'll repeat some questions to develop trends. And as always, the more people who submit data, the more representative the results will be. The best part is I’ve progressed from my normal email form to using Survey Monkey online.
Guidelines
- Open to anyone working in the web application security field
- If a question doesn’t apply to you or you don’t want to answer, leave it blank
- Comments in relation to any question are welcome. If they are good, they may be published
- Submissions must be received by Oct 31, 2007
Publishing & Privacy Policy
- Results based on aggregate data collected will be published
- Absolutely no names or contact information will be released to anyone, though feel free to self-publish your answers anywhere
Results
- Developer, Web Security interested
- security auditor
- Developer
- hacker of course :)
- Contractor to the Government
- I work in the Network Security group for the Ohio State University
- Quality Analyst
- code reviewer
- "Freelancer"
- Internal sec
- Consulting Systems Engineer (not a consultant, but title) Information Security
- mere web developer
- Security Conscious Developer
- I don't actively look for them, but often they're sitting out there in plain sight.
- look for != exploit. No tor+darknet_ip stuff, only something I'm comfortable doing with my own non-anonymous connection.
- A lot of things pique my interest as I am surfing, and I will "poke" at them more than a normal user would. However, I believe activities like this violate a code of ethics (CISSP), so I'd rather not disclose in the event that someday I decide to get that cert and accept the lame code of ethics.
- Only those that are obvious without deviating from normal usage
- It's rare, non intrusive and I usually have an association with the owner through a real life connection.
- I love this industry, in my free time I look for vulnerabilities with and without permission. Sometimes you have to live on both sides of the tracks to understand better how they work.
- Usually foreign, and usually not intrusive methods, XSS mainly
- Jail bad.
- I admit that I sometimes put a quote or paren into a form field, e.g. O'Reilly or (408)343-8300 (the sketch after this list shows why a stray quote is so revealing). Converting POSTs to GETs has become habit when sending somebody a URL with pre-populated information. The "Forgot my password" feature is often tested several times a day on most sites - but usually because I forgot my password. Sometimes I edit cookies, changing their expiration to a later date - and then protect them with CookieCuller so that the website can't force me to expire them (out of laziness). If I can create two accounts, sometimes I'll see if I can trade session IDs, copy objects, steal objects, put objects, etc. It's not all the time.
- Would never do ../ etc tbh, a la Dan Cuthbert. A search like jon'es or ">xss may sometimes be entered. And I don't believe either of those two vectors is illegal, under UK law anyhow.
- Every day and all the time. Just look at the Month of Search Engines Bugs and the future Month of Bugs in Captchas. Looking for vulnerabilities in web sites must be freely available (without any permissions). Bad guys will be looking for holes in any case, so no need to restrict good guys.
- In fact, I often take a look at compromised computers which scan my services. If those bots belong to an official and identifiable organization, I try to tell them.
- There are no true white or black hats, merely shades of gray.
- I don't necessarily look for them, sometimes they just jump out on their own. In the past, I've only looked for vulnerabilities within sites where I plan to do online purchases, to ensure any information I may provide is at least somewhat secure.
- "Looking for vulnerabilities" is limited to a passive examination of the signs that usually indicate risk. Occasionally, I will test a site for cross-site scripting vulnerabilities via very obvious and limited tests.
- By "look for vulnerabilities", I mean that I'll cause a web site to indicate, through its normal behavior, whether it's probably vulnerable.
- I did once after finding one vulnerability by accident on my son's soccer league's website.
- More or less just following up on an error or interesting message that I found, not intentional pen testing or anything like that.
- Very rarely
- I don't have explicit permission from the site owners, but our group has reserved the right to do any scanning of any site or service at the university unannounced and at any time.
- I just don't have the time to look at other people's sites, and don't really want to know what can be found (and you know we always do)
- Just for learning purposes, not doing any bad.
- ey! i'm human
- it's a tic
- Usually if I'm buying something online I do a quick check just to make sure it's not some lame shopping cart full of vulns.
- Looking for vulnerabilities isn't necessarily the same as testing for vulnerabilities. Something like CSRF doesn't really need direct testing. Also, maybe my name really is ' or 1=1 --
- I enjoy the challenge, and the hunt.
- I may on occasion take a peek
- I poke but never leave a trace that shows malicious intent. Meaning if good forensics were done, the investigator would be like "damn, he just disappeared, I don't get IT"
- We have a process in place to get permission to test applications within the company for vulnerabilities so that there are no issues with production and/or maintenance of the site.
- Pretty rarely. I feel it's crossing the line if you haven't been hired to do it.
- When looking at websites without permission, I am very careful about what I will and will not do. For example, I might throw a single quote into the querystring or analyze a site's session handling mechanism by viewing it with a proxy, but I won't do any serious tampering, especially with big financial websites. It can be pretty tempting though.
- It can be quite tempting to conduct testing on applications accessible from the web that one is not legally allowed to play with. However, once people remind themselves of just how vague current laws are, that should help them see clearly ;). People should know the risks involved and how to mitigate them before moving forward on even the simplest manipulations.
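Several of the comments above note that simply typing a quote into a form field (O'Reilly) or a value like ' or 1=1 -- is enough to tell whether a site is probably vulnerable. Here is a minimal sketch of why, using Python and an in-memory SQLite table; the table and column names are made up purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

name = "O'Reilly"  # the kind of value a curious tester types into a form field

# Vulnerable pattern: the quote ends the string literal early and the query breaks.
# A database error leaking back to the page (or ' OR 1=1 -- returning every row)
# is the telltale sign respondents are looking for.
try:
    conn.execute("SELECT * FROM users WHERE name = '%s'" % name)
except sqlite3.OperationalError as e:
    print("query broke:", e)

# Parameterized version: the same input is treated as data, so nothing breaks or leaks.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print("rows:", rows)
```

In the vulnerable form, an input of ' OR 1=1 -- turns the WHERE clause into a tautology and comments out the trailing quote, which is exactly the "maybe my name really is ' or 1=1 --" joke a few answers up.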
- My mood.
- The issue and who it is I'd be reporting to.
- How it would affect me.
- There isn't a good method of disclosure.
- I disclose them as a private person.
- I never had any legal problems after disclosing web vulnerabilities to the owner or even to the public; I assume this is because most of the time I did not overestimate the impact of security problems. What's more, I got my job because of such disclosures (some of them went public) in my current employer's websites.
- anonymously
- depends on the type of vulnerability
- depends if i am responsible for the site's security. if i find something in an another site, i will report it only if i care whether something bad happens to the site.
- Depends on the expected reaction of the owner. If I can safely assume that the owner is going to be appreciative, then I will disclose. If there is a chance the owner is going to overreact and start accusing me of trying to hack his website, etc., then no. Additionally, if the owner is highly unresponsive and the site is used by millions of people (e.g. Google), I would disclose publicly (on my blog) and send the blog article to the owner.
- Dependent on any past history disclosing bugs to that party and potential harm to third parties (users of that service). Unless they have been responsive and friendly in the past or the bug could end up causing grandpa's insulin delivery to stop: Full Disclosure!
- How serious is it?
- i disclose it to the owner and never disclose it to the public. i don't need the media attention or reputation.
- Criticality of the vulnerability :)
- Not my responsibility; it's the vendor's/site owner's responsibility to test their software before shipping/publishing. The only way they learn is releasing exploits promptly.
- If I'm satisfied I can prove it was not the result of specific testing (i.e. accidental discovery) I will report the issue
- Depends on the severity of the issue, and depends on what I feel the user base is. If it's on a site like Facebook, it will be reported. If it's on a site that has hardly any customer base, then I wouldn't report it.
- I believe it is the responsible way to disclose; diligence ends at notification when it is not my asset, and public disclosure does more harm than good in most cases because professionals are lazier than kids on IRC
- If I perceived they would listen
- On whether I think they will react badly by disclosing it to them or not. Default to "no way" unless I know them or their policies.
- I tried, but most of the time it does not reach the security professional; instead I just waste my time with the help desk or a canned e-mail response
- Small websites are a given. They don't have the resources a lot of the time and I always get nice feedback. Larger ones are tricky. A la the ../ in Adobe the other day: I know someone who did, and got no negative feedback from Adobe.
- If I have permission, sure; otherwise it's a hornet's nest.
- I always disclose security holes to sites' owners. And in most cases I disclose them to the Internet community after informing the owner. It's my style, which I call Advanced responsible disclosure.
- Seriousness and how to explain how I found it 'by accident'
- If I could disclose it anonymously, I would.
- Yes, I have done this in the past. One disclosure resulted in the site being taken down and patched immediately. Another resulted in a response basically stating "We don't care" and yet another resulted in threat of legal action.
- It depends on how I thought they would respond. I'm not going to put my career at risk if someone may not react well.
- Depends on the severity of the vulnerability.
- On the website/company. I don't want to deal with stupid staff.
- Generally not -- the upside is zero, and the downside risk is that I could be unfairly accused of bad behavior. But in instances where the web site owner has a good reputation, or I have a good relationship with them, I have reported issues.
- My son's soccer league required participants to register and pay online. During the registration process, I accidentally found a vulnerability (I then performed a scan and there were plenty more). The league thought I was overly paranoid; I do not think they took my concern seriously.
- Commercial v Non-commercial site. Open Source v Not. If it was a project page for an open source project (e.g. phpBB) absolutely. If it was Bloomberg's trading platform, depending on the severity (low to super-critical) it'd be hell no to consider sending an anonymous message. People have been prosecuted for shit like that.
- Depends if the owner had commissioned me to perform the analysis and produce findings.
- On how severe the issue is and whether the way I found it was legit.
- What the site is, how "risky" the issue is, etc....
- Really just talking about university sites here since it's my job.
- There's too much risk involved for very little gain.
- On my relationship with the company, and perhaps my access to people who cared. It's just too much effort to go through the whole disclosure thing. I may fire off an email to the security@..., but wouldn't expect (or push for) a response. I know that's wrong in a few ways, but companies that make it easy and a pleasure to report issues to are few and far between (yes, I'm looking at you Google!)
- severity.
- If i have time
- On my relationship with the site, how much I care about its security. If I know the owner, I find it easier to report.
- How well I know the company/owner of the website. I know cases where the guy who informed the owner was blamed as a hacker, because they didn't understand what happened. Additionally we have a new government regulation about hacking and using hacking tools, so the risk of getting into trouble is too high.
- It's funny?
- type of organization, experiences in the past with owner, type of vulnerability
- Depends on severity of bug
- anonymously
- Too great a risk of mis-interpretation by the site owner.
- could get either me or my company in trouble
- I've called several vendors and ordered over the phone and explained why i was doing so and what they needed to do to fix the issues.
- Too many issues are found to disclose them all. I only disclose issues for which I am personally affected, or it is a high profile site, AND I think the site won't shoot the messenger.
- Depends on if I like the owner.
- Depends on whether they have a disclosure policy that waives my liability, and/or whether they do things like give credit to researchers which pretty much gives me immunity - even if they didn't intend it that way.
- On the site and how I discovered it
- Depends on what kind of testing revealed it. SQL injection & XSS are probably out, unless it's a site with a good reputation for dealing with researchers (Google, Yahoo, etc). CSRF, session handling, etc could be in.
- My experience with contacting vendors has been very poor. Most of the time the vulnerabilities remain in place for an extended period of time, are ignored, or the companies' security representatives are more concerned about pursuing legal action.
- Who the owner is, what their policy is on reported bugs, and how they have been known to deal with such reports in the past
- Because no one reads their abuse@yada.com mail anymore.
- I usually do this if it requires minimal effort; if they put it in a help queue, then I don't bother pursuing it.
- This is part of my job description
- If it was significant enough.
- On the perceived risk to the user community, the risk I perceive to myself by exposing my discovery to the site owner, and the perceived accidental nature of the bug - or in other words, is it sloppy programming, ignorance, or a bug.
- Most sites are too lame/insignificant to even bother. Some solutions like WorldPay Select payment gateway (and lots of similar for-idiots gateways) are insecure by design and nobody cares.
- Depends if I think it'll end up with me in Federal pound me in the ass prison or not.
- risk/severity
- If my personal data was at risk, then definitely. Otherwise I'd have to evaluate the risk of my disclosure being taken as an act of aggression. It's just not worth it sometimes.
- Depends on existing relationship; if none exists, depends on the response of the owner to previously published vulnerabilities.
- A lot of people seem to be afraid to disclose issues, but I will almost always disclose issue that I've found, and have even been thanked for it a number of times. The key is to make it clear that you didn't hack into their site, you just noticed a security exposure and want to let them know about it so they can fix it.
- May not be worth it for some web sites
- Depends on potential reaction and legal position.
- For fear of being reprimanded, whether by the company that employs me or the company I just attempted to help.
- Fear of legal action.
- Too much hassle
- Depends on your definition of serious. XSS can be serious in the right context. If serious = complete 0wn4ge, maybe about half or less.
- I'd say almost all, but there are many sites that just aren't worth hacking, so their vulnerabilities are not serious even if they have code injection.
- If XSS or any other browser-side problem is counted as serious, then I don't know any safe website. If you mean server-side injections (from SQLI to RCE), then I think about 30% is highly vulnerable.
- "serious" may be misleading. managing vulns is managing risk, such as the window of opportunity for an exploit. if the vuln is open for a longer period of time then it becomes more serious. i would not consider XSS or XSRF to be a serious vuln if it were patched in a reasonable time.
- XSS is everywhere
- If you consider XSS high. Can't remember not finding it for a long time, apart from small sites that don't echo.
- Caveat: All or almost all of web applications not necessarily websites. It's rather difficult to find WebAppSec vulnerabilities in HTML although not impossible.
- Where XSS is included in what is considered "serious".
- Almost all, but in my opinion, most of the 'big' sites have vulnerabilities that are hard to find.
- I would say that 2 out of 3 websites I reviewed had significant security vulnerabilities.
- Again, this is the university environment, where there are a huge number of sites and a wide range of developer abilities.
- If we talk about serious vulnerabilities. But I would say "all or almost all", if we talk about non-serious vulnerabilities as well.
- If you count XSS & CSRF as serious, just about everything except brochure-ware is vulnerable.
- Most websites have some type of issue though the severity of such vulnerabilities is generally anything from mild to critical.
- I just would say none because I am not experienced enough to count out the possibility. "Glass half full kinda guy"
- somewhere, there is one
- I would say that all sites would have some serious vulnerability that would impact the data's confidentiality, availability, and integrity. Some of the issues that seem to be overlooked that are easily resolved in my opinion are application-based DOS attacks
- Based on 6+ years of application pen testing, I'd say it's definitely above the 80% mark. If you consider XSS to be "serious" (I personally don't except in very specific circumstances) then it's more like 100%.
- Security is getting better, and that is raising the bar. People are finally getting over the hump of SQL injection (mostly). However, more sites than I would like tend to be vulnerable to serious business logic flaws, and no one has really tried to get a grip on CSRF.
- I stay away from this sort of thing.
- Depends on the exploit. Last time I checked though, a site can have a blatant XSS hole, and still be PCI-DSS compliant.
- Theoretically, there should be a difference, because compliance means that people are at least interested in and aware of their security.
- "Hacker Safe!"
- PCI-DSS covers attacks that I generally do not use, such as password strength, weak ciphers, etc.
- My guess is that L1-3 Merchants who perform a code review do indeed have websites that are more difficult to exploit. L4 Merchants are only required to do the self-audit and have an ASV scan their external IP prefixes - so this won't change a thing. Because of this factor (and the fact that PCI DSS doesn't target web applications, only their infrastructure), it is difficult to answer this question.
- Hackers will not play by the rules on PCI-DSS.
- Tested one site which had been certified by a 'scanner'. Took maybe 10 mins more than usual to find an XSS.
- It's harder to exploit such sites (it requires more time to find a security hole). But first try to find a site which is fully PCI-DSS compliant :-). Including XSS free and also UXSS free. I had occasions to find PCI-DSS compliant sites which had UXSS holes (and the certified auditor's site had such holes also).
- I think it depends on who tested the site for compliance and how well the companies remediate the findings.
- I do govt contracting so I do not get much exposure to PCI
- After compliance tests, merchants will fix most issues. So we can say, in general, PCI compliance tests have raised the bar. But not always. For example, good vulnerability assessment tools, such as QualysGuard IMHO, cannot test web applications for well known flaws.
- All other things being equal, any standard is better than no standard at all.
- You have to obtain the private encryption key, which may make it a little more difficult. It depends on how PCI-DSS is implemented. PCI-DSS does not readily address web application security or network security. As far as I am concerned it primarily addresses the need to encrypt credit card data if the data needs to be maintained. However, this does not mean that the key used to encrypt the data is protected.
- I didn't have to look it up, but then I recognized it.
- It all depends on whether the site has undergone a review before, and by whom. If it's from some "crapscan" vendor, then they are always surprised at what is discovered
- Very little harder.
- "PCI-DSS compliant" doesn't seem to be very consistent.
- As I have not gotten a chance to enter a corporate environment yet I am not familiar with the details of a PCI compliant website. I have heard of the term, and have read documents on the matters, but because I do not yet hold any formal certification I have not had experience with this.
- PCI-DSS doesn't put a lot of focus on specific web security at this point... A scan from an ASV barely has to scrape the surface.
- The PCI standard does take a step in the right direction, but it truly does not reduce risk.
- It is not really different when looking at XSS and CSRF, but is harder when dealing with poor protection schemes.
- Most companies relied on automated scanner results at most to secure their websites for PCI. A lot of web application security holes can be missed by these scans.
- I would say no noticeable difference; the degree of security is a variable that directly correlates with the competency and thoroughness of the audit process.
- Meet the new boss, same as the old boss. Same flaws are still there.
- Used properly, AJAX can significantly reduce the attack surface and threat profile of a site. XSS can very easily be eliminated with a good framework/implementation. However, it adds new abstractions and arguably another layer of separation between the developer and the underlying communication and technology.
- *Very* few new attacks. Mostly same old same old, only more of it.
- AJAX is just some RPC subsystem. However, it has not taken on the security features of a good RPC mechanism; thus it reintroduces problems. Also, the same origin policy is extremely limited, thus developers are always trying to roll their own solution for the mashup problem, causing even more problems.
- Larger attack surface because they are generally more functional. The problem is the functionality and not Ajax.
- Ajax and Ajax-like libraries are mostly designed to enhance users' "surf experience" and thus extend capabilities allowing new services. Their first goal is not to be secure but to fulfill services.
- People like to play "Buzzword Bingo" and start implementing technologies they don't fully understand. They don't take the time to understand the processes involved or how to fully secure their usage. In the end they just want to be able to say "We use AJAX, .NET, Java, ..."
- I wouldn't say the attack surface is actually larger, but it has gotten more complicated. This makes it harder for developers to intuitively tell whether they've created a vulnerability.
- Well, duh. Increased complexity & increased attackable surface == increased risk.
- There is now a lot more communication going on between the server and the clients, but there is less control, and less checking that the data is valid. Before, this was part of web application development: check everything that comes into your server. Now the server side has more holes, and client-side checking is non-existent, or trivial to bypass.
- All "bleeding" edge technologies have holes, since Ajax is one of the newer kids on the block I expect to find silly things done that create problems. As the tech matures it'll get better.
- It's just more requests you have to watch and modify. No real big change in the way you look at "traditional" apps
- I don't think it makes a significant difference, except that it's another technique that abstracts a developer even further from how the web works and makes it more likely they will do poor threat modeling and have a vuln, especially an architectural one.
- Without the necessary knowledge of sanitization and validation it is very simple to make programming mistakes that can allow for miscellaneous injections.
- Not easier to make mistakes exactly... but it makes mistakes not seem like mistakes, i.e. it's harder to imagine the Ajax behaving as it shouldn't
- Ajax is something that I am quite concerned about due to the movement of part of the business logic from behind the firewall to the client. There is a definite need for architectural guidance that current web applications lack.
- New attacks are only with regard to attacking the framework itself or a roll-your-own Ajax implementation. For the most part it just creates a false sense of security on the part of the developer. It's all HTTP anyway so it doesn't require a significantly different approach.
- It's like 1995 again. Ajax security is old skool exploits and REALLY dumb ideas
- Javascript Hijacking wouldn't exist without JSON. User tailored JSON responses would be rare without AJAX.
- It is not necessarily a larger attack surface, just a more common one.
- At most I would feel that Ajax just adds a layer of obfuscation over the problems currently present on web applications. With or without Ajax the attack surface is very similar in my opinion.
- They're great at finding .bak and ws_ftp.log files!
- Web app scanners are great for finding simple problems. I have never thought web app scanners have been useful in my work besides training QA teams to find some minimal number of issues. If they find some, they call me and I perform a full audit.
- Authorization bypass: poor, this is for me still a big issue!
- Web Services - Limited
- Ajax support is not ubiquitous in scanners. Testing for XSS when source code is unavailable is their strongest asset. If scanners were to integrate "test spy" functionality similar to ImmunitySec's SQL Hooker - they could do more advanced SQL injection attacks. I think there is a future for logic flaws and CSRF integrated into scanners, but it obviously isn't there today. For forced browsing, a similar "spy" FileMon-like technique could be applied. HRS is performed by most scanners today, but unlike XSS - this is almost certainly better found with source code (code review, automated SCA, smart-fuzz tests, etc). The best reason to use a scanner over other testing methods is to increase time between findings. In the hands of an expert, scanners can be a great tool to verify working exploits in web applications - where other methods have a hard time verifying whether a vulnerability or exploit condition is real.
- Useless for phishing attacks, DDOS attacks, among others
- Seen some WAFs break sites before. They do work in some limited situations at the moment.
- Web application vulnerability scanners are lame. If you want real security ask for security audit by human professional, not by scanner.
- You need a rating below Poor for some of those categories.
- poor on authorization testing
- Vuln scanners are nearly worthless, not because they need to do a better job spotting bugs, but because they need to do a better job quantifying their results. One thing I mean by "quantify" is this: A laundry list of potential vulnerabilities of types x, y & z, with associated high/medium/low risk ratings just doesn't cut it. An analyst still needs to review the list and consider the risk of each bug in the context of the application.
- They are also extremely good at checking for policy violations, 508 accessibility, etc.
- Never used them really, just did all my stuff manually.
- This is a slightly loaded set. On one hand, I think that AppScan does a terrible job with detecting true blind sql injections (lots of false positives), but SQLiX does a great job detecting them and "exploiting" the flaw to prove its existence.
- You are joking, right? Most of the web app scanners IMO are pretty crap at finding anything but the low-hanging fruit. What they are useful for is doing a first pass at a large site so I don't have to test out obvious things on all the pages
- We have bought several licenses of a web application scanner, but it is just one of the tools we use. It saves some time and work, but the most serious vulnerabilities we often find by hand.
- There are some serious issues with web application vulnerability scanners, namely the amount of false positives and the number of false negatives. I have run quite a few of the major brands against the web applications of my company and have had to deal with reports going to the hundreds of pages of false positives while noticing that known vulnerabilities were not found.
- Depends on the biases of the scanner devs. Some are really good at forced browsing, others have zero support. There's no one tool which does it all properly yet.
- Nothing works well without customization.
- Get a real list of what actually matters... data layer authorization: poor; encryption: poor; logging: poor; error handling: poor; SSL on backend: poor; concurrency: poor; authentication of backend connections: poor; authorization of backend connections: poor; validation and encoding of backend connections: poor; etc.
- Web application scanners have a long way to go. Even for vulnerabilities that they should be good at finding, such as Non-Persistent XSS and HTTP Response Splitting, I've seen lots of missed vulnerabilities. They're useful tools but shouldn't be relied on as the sole means of testing the security of a web app.
- Flash, who does this? All the RIA content is actually poorly understood by tools...
- Worst Idea ever. As opposed to fixing the code, let's just install a device.
- Most likely never.
- Most are so scaled back in config/deployment they are little more than simple proxies
- Web application firewalls are a stupid idea... and so is PCI-DSS requiring a source code audit or web app firewall. I guess this will be great business for anyone who claims they have a web application firewall, but it's way too easy to circumvent.
- About 0.1%
- Only ecommerce and some financial websites implement a WAF. Many of those are only using an APIDS, or a WAF in view-only mode. I've never seen an Intranet website implement a WAF.
- Can't say I've noticed very often.
- Most of the companies do not invest in web application firewalls.
- In my experience, it's vastly more common for app & database servers to be isolated and hardened than to be protected by an application firewall. And frankly, organizations that invest in hardening their application infrastructure just aren't going to realize much of a benefit from adding an app firewall to the mix. I suppose one could position app firewalls as an alternative to hardening, but that'd just be irresponsible.
- Only 1/4 of web applications I reviewed had a web application firewall
- It depends on the client. It also depends on the contract, some clients have us test both Test/Dev and Prod, in which case Prod is usually more protected for obvious reasons. I suppose this can provide some type of Delta which lets them know if their AppFW or other safeguards are doing their job well or not. Personally I've rarely seen a AppFW in use (maybe 1 in 30 clients).
- I haven't seen one at my university. People have talked about implementing them a few times, but no one has to my knowledge implemented one.
- From my experience the web app firewalls are either not deployed or use rulesets that are too loose. Even when my pen tests have been noticed, it's never been before day three of the testing. I should emphasize that no techniques were ever used to hide the attack; I presume that none of the attacks would have ever been picked up if the testing wasn't so loud and noticeable.
- Difficult to tell if it's the WAF blocking or the app.
- never...
- With the exception of some servers utilizing mod_security a majority of the time automated attacks are performed without issue.
- The pen test process that we employ tests the applications from behind and in front of the firewall. There is a risk of relying only on testing through a firewall.
- They "try"
- When WAFs are deployed they are usually configured so loosely that they might as well not be there.
- I tend not to use "standard" attacks, and the WAFs rarely if ever stop me unless I'm being stupid. I'd say less than 5% of my reviews have even mod_security in place let alone something pricey.
- My perspective may be skewed however, I only test from a black box perspective so it is very difficult for me to tell what the client's architecture looks like on the other side.
- I saw a real-life CSRF attack about a year ago in a small gaming portal. After over a week the developers were still sure it was some coding or database bug on their side removing user accounts. If IT people can't recognize CSRF, how can the average user protect himself against it? NoScript does a great job against XSS, but how many people are using NoScript?
- Does NoScript count?
- Automatic updates of os/browser helps as does phishing filters, but the average user often does not know how to use these or understand their purpose. Most users react with shock horror when they learn that people can spoof emails, an issue that is far older and more widespread, yet users remain uneducated. I don't see the trend changing now as opposed to the past. These issues will have to be solved by the vendor, not end users.
- If you have ever worked tech support you would understand how many people have issues just navigating a PC.
- People are even reluctant to download and deal with the minor inconvenience of NoScript...I know people who "used to use it"
- The average web user doesn't know how to turn Javascript off. A significant majority of IT administrators and information security professionals run an outdated, vulnerable version of IE or Firefox with plugins and add-ons that are also outdated and vulnerable. I've never met any person in real life who claims to browse with security in mind, especially regarding the attacks named above.
- But I'm not convinced attackers are using these vectors a great deal. Not an expert on threat intel though. They do make targeted attacks easier though.
- A web user will be capable of protecting himself if he is taught (and even security-restricted): taught not to use Internet Explorer, and forced not to use it (restricted). As for other aspects of security, "not using IE" is a first step.
- Since most web users do not develop their own tools but use those available on the Internet (blogs, CMSes and so on), their web site/application security level is directly linked to those tools' security.
- Even those who know what they are doing have little hope of defending all browser attacks. There are just too many threat vectors to protect against while keeping the web usable.
- It depends on what you mean. Some attacks require user interaction. In those cases, I'd change my answer. On the whole, the user is relying on a broken security model that inherently trusts the site they're visiting and anyone who is able to inject active content into it.
- Today, least-privilege is the best bet for protecting yourself from client-side exploits, but very few people do that. (And of course there's really nothing users can do to protect themselves from XSS/CSRF.) Tomorrow, malware will have adapted itself to the least-privilege environment, and who knows where that will leave us.
- There are a large number of individuals who believe that if they see a locked lock on a website then everything is secure and there is nothing to fear. At work users are offered some protection by their network administrator; however, I find most users run with limited protection (they might have anti-virus protection) as administrators on their own computer at home. I would say 90% of my friends and family have had their computer compromised. I then teach them to have one administrator account on their machine and to use a limited user account for the majority of their work. I also try to alert them to be more aware of what they are doing.
- I browse with No Flash, Javascript, or Cookies, and most sites are damn unusable with no intelligent error message. Why do I need a cookie to save my damn zip code on Circuit City (contrived example but one I run into a lot)?
- But it's getting a lot worse with Web 2.0.
- The MySpace generation and the elderly make for a large portion of individuals who are not capable of deciphering possible malware and related activities. I tend to get phone calls from my grandfather asking why he received a screen (in reality a simple popup from the browser) stating that his computer has a virus, and that he needs to download the rogue software provided in the link in order to remedy the situation.
- The tools that are out there that claim to help, NoScript for example, take away the ease of use that the average user wants... NoScript, no matter how beneficial, is outside the realm of the average user. Some things are getting better... More users are applying updates from Microsoft and more software auto-updates these days.... so things are getting better but not as quickly as they could be.
- The big issue is that web applications are coded with the assumption that the user will not do anything malicious. From the discussions I have had with development staff, this will continue, since the courses that teach coding and its associated methodology do not really include security.
- NoScript!!
- I've created anti-CSRF UserJS for my browser and it's more trouble than it's worth. And XSS? There's sooo much crappy code out there that it would have even more false positives.
- 15 years on, and the gold standard empirical answer for this question is resoundingly "No!" And more to the point, nor should they have to know how to protect themselves.
- I like to think I know something and even I can't really protect myself while at the same time using services I like.
- Average? No way.
- Not without sacrificing the ability to utilize available functionality on their favorite Web 2.0 applications.
- Depends on the site, if it's a homebrew/smaller operation I usually avoid it.
- Social networking
- Online Banking
- I have an account at one bank with only a couple hundred dollars in it. I use that bank/account for online purchases. All my other banking/credit cards are with other companies.
- Depends more on the site
- Regular old fraud is more of a concern. Too many vulnerable sites for the bad guys to hit all of `em. (That's what I tell myself, anyway.)
- No online banking!
- i just do it in a separate browser instance. don't forget MOZ_NO_REMOTE=1. :)
- Business that requires SSN or other sensitive identity info.
- Online purchases, registering on websites using valid email addresses, date of birth, etc.
- I never browse email links, and look at a lot of stuff through wfetch
- web surveys, financial, personal data
- I won't put my full social security number into a form field.
- No $$$ related transaction over the Web
- Only use 2 sites. Don't think they are great, but it keeps my CC data in one place. Willing to pay more for that too
- Cannot comment because of privacy issues :)
- I have to admit I watch these "important" HTTP/S transactions through a local HTTP proxy though.
- Granny Porn is a major area of business that needs more protection. RSnake, Id and the rest of miscreants over at sla.ckers have cornered the market on it and the rest of us can't get our fix!
- We do not conduct any financial transactions online and we keep all sensitive customer data off of our website. Instead, we send customers CDs with that information.
- I refrain from performing financial transactions with smaller businesses who roll their own shopping cart and credit card authorization.
- I do not provide any personal information online. If I have an e-commerce transaction, I use a special credit card that I only have a $500 credit limit on. The only problem is that the credit card company keeps trying to increase the limit without my approval.
- I tend to avoid small mom-and-pop shops, unless they use a larger system like PayPal to handle credit cards.
- Ask for too much info too fast (credit card for shipping charges?) or make me think your website is shit (unusable pages on OfficeDepot (example again)) with no JavaScript and I get either turned off or too frustrated to patronize.
- Well, maybe it doesn't really stop me as much as I use a low-limit credit card for online transactions etc.
- Accessing Bank accounts online.
- I could get pickpocketed in the metro; what's the difference?
- Because if I lose money from my bank, I'll go back to them about it - it's not my problem!
- Hell, that's why there's a credit card insurance. Let them pay for their own mistakes!
- Social Networks, Auctions, Online Docs and Backup Services (Google Docs, .Mac etc.), Job Sites -- to name a few
- Buying/selling
- As stated, if the website is bunk I pass on using it.
- I refrain from viewing bank account information, PayPal account data, and anything that may be used to identify me personally.
- I refrain, but use an account that rarely has more than a small amount of cash
- Trusting the new and unknown
- Err, you mean personal or professional? Pro: spamming and obviously exploitable features.
- Loan / credit applications. Buying pr0n. Clicking ads. Clicking "Remove me" in e-mails.
- I provide a lot of fake information though.
- Rarely use my real name on the internet. Don't open attachments. Don't click links in emails. Don't do widget sites (pageflakes, goowy, etc.)
- Firefox with NoScript though
- I think in most cases doing business online is at least as secure as doing it offline.
- Had some trouble using PayPal lately... For websites such as auctions, I will never use that!
- all kind of online stuff :)
- I swim in shark-infested waters, I live in germ-infested environments, and I do business using bug-infested software. It's a necessary evil at this point.
- I'd sell it to practically anyone (excluding Russian Business Network)
- Selling vulnerabilities is like washing car windows on the crossroads. The car owner did not ask for it, so why do you expect payment? Would you also sell drugs to the DEA?
- I might think about it, but I probably wouldn't do it. If I was a starving college student; in a heartbeat.
- no
- would not sell
- would sell for top dollar
- No.
- No. if the issue is broad enough i go through CERT. i don't need the media attention, fame, or money
- No
- no
- no
- I need the money.
- no
- NO - Inform the product company
- Jeremiah Grossman
- not in the selling biz
- wouldn't sell
- I'm thinking about this idea.
- no
- Not selling, telling the application owner first. Always.
- No. I wouldn't sell it.
- Wouldn't sell it, just report it to the owners.
- no
- I would offer it freely, selling it is getting closer to the line of extortion. If they don't want to buy it then what? Would you sell it to someone else?
- Um... you need a NO option here.
- No
- I'd sell a vulnerability to any reputable purchaser. That definitely excludes anonymous purchasers.
- I doubt I would find one, but a dollar is a dollar
- you're missing vendor. or not
- Never thought about it.
- I won't sell it. I'll publish it for free on sites like SecurityFocus and milw0rm
- not sure I would sell it, likely follow standard disclosure methods
- Nope
- No
- No
- No one
- Nope - I'd disclose it to the vendor
- I would never sell a vulnerability.
- If I felt that I could potentially earn some money from the information I would sell it to anyone willing to pay a nice sum.
- hell no they should do their own damn work
- Would not sell.
- Depends on the application
- No. Responsible disclosure.
- Selling sploits is unethical
- I'd notify vendor, give reasonable time to resolve, then publish
- no way!
- I wouldn't release it myself, but rather allow some one else to take the credit, and ultimately the liability.
- It should at least be easy to find a contact point for vulnerability disclosure information.
- I wouldn't trust what they posted.
- Regarding sensitive or e-commerce websites
- I'm all for public disclosure (after a fix, the better) but public
- I also want a pony. I'm not holding my breath.
- nah, as long as they fix it in a "reasonable" amount of time (as defined by me), then i'm fine with an implicit policy of fixing stuff fast.
- just fix it, or prevent it.
- I think each website should handle disclosure their own way, just as researchers do. However I wish more websites (companies) would adhere to the http://www.domain.com/secure/ security@domain.com, abuse@domain.com, etc recommended practises ( http://www.rfc-ignorant.org/rfcs/rfc2142.php ) and have these addresses reach a human.
- Not every website has the user base or traffic to necessitate a vuln. policy.
- More companies need to be held legally accountable for their programming and security practices
- Why give script kiddies an easier target ?
- Nobody cares about disclosure. If I had a website, it wouldn't have a vulnerability disclosure policy. Maybe there are some that should.
- Useless. Again, hackers do not care what is disclosed or not.
- Big sites: give a reference and financial reward. Think that's fine. Time = money. Probably cheaper than getting a pen-tester to find it :)
- Current disclosure policies are enough: full disclosure, responsible disclosure and my advanced responsible disclosure (and also not disclosure policy).
- This attitude would change the fear of publicly dealing with security issues and may, in the long run, reassure customers since it means security is a concern for the web site owner.
- The purpose should be to protect the company and the company's customers. If a disclosure policy existed we would know who to contact in case a vulnerability was discovered. Doesn't mean they'll fix it, but at least they would know about it directly.
- If the vulnerability puts its users at risk, then it should be disclosed. The question is then who makes this decision?
- Caveat: Any website that accepts or retains sensitive customer information.
- That would prevent big confusion about when to disclose vulnerability issues.
- No -- the disclosure situation w/r/t web site owners is completely different from the mass-market software world. I elaborate on this point here: http://www.webappsec.org/lists/websecurity/archive/2007-10/msg00025.html
- It'd be nice, but for rinky dink operations or mom and pop businesses or things like that it's kind of eh.
- I think organizations should probably publish their recommended disclosure policy, but it need not be public and need not be on every web site. I'd like to see DNS extended to publish a 'policy URL' for each domain... the information would be there.
- An address to which to report bugs
- A vulnerability disclosure policy will give birth to the possibility of attackers guessing about new flaws based on older ones.
- Maybe organizations as a whole should, but in my case it would be extremely redundant for all of the sites to have their own policy.
- Once a policy is in place I would feel much more comfortable disclosing issues. This way you won't get the "what were you doing looking for that SQL injection issue" from the site admin.
- Might be nice, but what difference does it make really? RFP's seems to be pretty standard, but few companies really take it seriously
- A lot of websites are built by people who have no idea what they are doing (WYSIWYG); a vulnerability disclosure policy is beyond them. I am in favour of one at the hosting level.
- Depends on the site - if it's business related, yes.
- I think there should be a standard disclosure policy that everyone can be happy with.
- Out of all the websites I've looked for vulnerabilities within, only one offers $50 incentives for disclosing them directly to the company, but I've only been offered this once.
- Website owners don't care about vulnerabilities within their code, so they're less likely to create a disclosure policy for their branch of sites. Researchers AND web admins should really follow something that is mutually agreeable, but that's very difficult to create. RFPolicy doesn't apply to an immediate vulnerability that any child with some desire to poke around can figure out. Web vulns have to be resolved QUICKLY and that's just something the regular SDLC doesn't provide room for. How are you going to get the QA done, performance testing, regression testing, etc. in time because of one minor change due to an XSS or SQL injection? Let alone a business logic flaw. We're doomed. :)
- Because they all have it, and I have worked at very large companies that cover up attacks but fix them in time. People in charge are selfish because they only think of how it would reflect on them (their department) and never think about the customer.
- Normally it seems that vulnerabilities are due to the lack of secure coding practices, lack of a defined SDLC, poor or no requirements, and lack of security testing using test cases derived from threat modeling. One instance of a vulnerability means a lot of the time that the vulnerability exists elsewhere within the application.
- That question really needs some elaboration as to exactly what you mean.
- Not each site...but big corporate sites, yes.
- It'll never happen. And if it did, it wouldn't be followed.
- not each, but I can think it's a good idea for larger sites
- This would endorse hacking, and make it possible for anyone caught hacking to say they were just testing.
- There should be one common FDP
- Let your QA and security have some help; it's a nice open-source-esque way of trying to approach the problem.
- Application audit logging is very poor, so I doubt most even know when an event happens.
- As developers become more aware of older attacks (e.g. XSS, SQI, etc) and how to protect against them, and more AppFW are deployed, newer classes of attacks will come out that few (if any) are protected against.
- Increase exponentially? I would've liked an option that was a little less sensationalist. I think they'll increase over the next couple of years... incrementally.
- It depends on the IDS that come out, what new attacks come out, etc.
- web site vulns will prolly increase, but no one will exploit them (even less than now-a-days)
- There is a great gap in the developer education regarding security. I know we like to point our finger at PHP here with the website and books examples often being vulnerable to several attacks. As new frameworks and scripting languages gain momentum in the future we will continue to see repeats of this. The fact that we still see buffer overflow exploits today is a clear indication that we're not getting it.
- All your internets are belong to us.
- zone-h and xssed are clear-cut statistical indicators; they can't even keep up
- The WHID statistics are unbelievable, and CWE/CVE doesn't really cover incidents. Breaches in general are being announced more often (CA SB 1386).
- Lack of hackers; no expert on this though.
- As pentest tools get better, attacks won't need high level skills...
- As new technologies get introduced so will new vulnerabilities.
- I don't need a crystal ball. We are often called in after incidents to help clean up. Many of these cases are not disclosed except to a select few customers. Others are severely underplayed in the public announcement.
- I would say that there are no security experts in 2 out of 3 reviews I perform. During these reviews, the individuals responsible for the web application did not know what to look for to determine if their application was being compromised. Additionally, the web applications were completely insecure. I feel that as the ease of exploiting web vulnerabilities by script kiddies increases, so will the attacks. Also, I feel that organized crime and cyber terrorists around the world will increase their attacks on web applications to fund their operations.
- Not a big decrease, but the low-hanging fruit (SQL injection, XSS) should be reduced or eliminated as libraries and language built-in functions are made the default.
- Money-driven industry
- Open-source tools will get easier to use and therefore more and more people will be able to find these vulns. Script kiddies will become a problem in the web world whereas this stuff was too difficult for them before.
- I don't think we hear about them, but at the same time I think that as assessments occur more often, developers start to adapt and reuse, and enforcement/governing bodies gain strength (i.e. PCI, etc.), things will stay the same or even start to decline. I think we can draw a hard comparison with network VA/pen testing: we're much better at automated patching, system hardening, sysadmin, etc. today vs 10 years ago (not to say that we don't find things, just less, and addressed more quickly). The same thing will happen with web apps; it's just a matter of time.
- As the bar to attack web sites falls, it's only natural that the number of attacks will increase.
- I see a lot of vulns in the things I look at, but I don't hear of that many attacks (which would lead me to ticking #1 I suppose). However, I'm not sure how many attackers there actually are out there, or their skill level. It's a limited resource thing - not like viruses/bots/etc. where you write one and let it go
- Welcome to PHP worms.
- I think the web app hacks will go up but visibility will go down, as vulnerabilities will be exploited for more malicious and stealthy attacks.
- I've given up on evil haxors. They're not very ambitious.
- Unfortunately there's no "They will increase steadily"... I think exponentially might be a little much but they will continue to grow... I had a call last night that went: "Our web server was hacked and hosting a phishing site... what can I do?"
- My ball is hazy, but I don't think things will change much. The saddest thing about life is that the moment is always now, and that things, generally, stay the same....
- They are getting increased attention both on the offensive and defensive sides. So I think they'll balance each other out.
- Lots of websites have XSS or SQL Injection holes, but are just not worth hacking (unless it can be done automatically by a worm).
- We'll see more Storm worm type attacks where the attackers are serious about monetizing their attacks rather than just for the fun and fame of it. These worm fleets will use large social networking sites as their patient zero point.
- It's always the same kind of games...
- I have learned so much in so little time, there has to be people out there doing it for personal gain.
- 1) Decompiled a Flash file to determine the update process for customer details. Exploiting this would result in real world benefits (hijacking private club memberships'n'shit). Not a victimless crime, though.
- I'm not sure if this counts as business logic: some site had a "remember me" option in the login form, which added a special value to the cookie. It was completely random and different for each user (as it should be), but it never changed. Once it was stolen by an XSS attack, it could be used to take over the account even if the victim logged out or changed their password. This was not a vulnerability alone, but it seriously boosted XSS attacks (the site was vulnerable to them). (See the rotating-token sketch after this list.)
- Most are from my current employer. NDAs prevent me from telling.
- Hmmm, the best stuff is the easiest: 1) Test credit card numbers are your friend 2) Persistent XSS->CSRF (POST form submission done with JavaScript) on an unnamed open source e-commerce platform. Admin views the page, *presto*, we have another admin. 3) filetype: sql
- you'll see the advisory by the end of the year. :)
- Accessing test.aspx logged me in as an administrator to the web application with full access. Test.aspx was used during development because the third-party authentication provider was not in production yet
- No big deal, but I got the attention of some people at work when I phished one of their user registration pages, including SSNs and CC#s.
- rsnake elaborated on and linked to my story here - http://ha.ckers.org/blog/20070122/ip-trust-relationships-xss-and-you/
- Well, long day in the city & beer = bad memory! Often tight on time; usually privilege escalation or filter evasion is about as interesting as it gets. Bypassing file upload restrictions is always fun.
- A nice example of a creative and clever web application hack is my MySpace hack filter-bypass technique, which was introduced during the Month of MySpace Bugs and shown at the Month of Search Engines Bugs.
- Not creative or clever, just common. Reload/refresh is the most common destroyer of business logic and it's simple, and everyone does it e.g. using 'back'.
- Complete web server compromise through a file upload process. I was able to upload a .asp file and execute it, which ultimately allowed me to navigate the C: drive, read config files, database connection strings, etc.
- Most of the clever hacks include NDAs.
- Story #1: http://www.rachner.us/blog/?p=8 Story #2: I cryptanalyzed a keystream-reuse issue in a single sign-on system, recovered the full keystream, and was thereby able to issue arbitrary credentials to myself for the entire portal. (A short illustration of why keystream reuse is fatal follows the response list.)
- In order to avoid the JavaScript security restrictions, I developed pages that allow the web service to redirect responses to a local HTML page. It turns out that this redirection is capable of redirecting anywhere. So I took two of those, pointed them at each other, and found that I had created a simple denial of service where the server sent requests to itself. Throw a bunch of these at the server from an anonymous connection and the server simply dies.
- Not very good, but... I went through the entire purchase process on a (very large) online lingerie retailer and got to the order confirmation page. My order ID is in the query string. Wonder... yup. I can view anyone's order.
- I dunno if this is the most creative/clever, but it's the one that a lot of clients can relate to and immediately see the major business issue. A licensing application allows you to pay your annual fees and make a donation to a related charity. Let's say your annual fee is $64: you can make a charitable donation of -$63, reducing your bill to $1. (Can't reduce it to zero or negative.) (A validation sketch follows the response list.)
- Proprietary e-commerce solution, PHP, 1.5 years of development, include($page.php) on the index.php with allow_url_include=On. Erf.
- In financial apps where you can charge people's credit cards, don't use a session ID that is stored in the cookie and validated using an SQL query... and if you do, make sure you can't pass % as the value ;| (See the parameterized-query sketch after the response list.)
- Sorry, NDA's and all that.
- One of the largest banks in my country: I managed to find a form that allows you to submit financial news. The form also allowed you to upload a picture with each story. I noticed that the file is uploaded to a certain directory, and managed to tamper with the filename/directory. I then proceeded to upload my own PHP script, which executed commands on the server. This was in 2002. To this day, my own HTML page, with my company's name/URL in it, is still in their root directory, and they haven't deleted it :-)
- Our management wasn't aware of web application security, so I wasn't supposed to spend much time on this subject. Some months ago I found an XSS vulnerability in one of our most important websites. I was able to add an article claiming that one of our biggest shareholders was selling his shares. Additionally, I removed the iframe that holds our company's share price and added my own iframe showing a much lower price. I was also able to send messages to any email address with any text through this website, because the recipient was stored in a hidden field in the mail form. I used this "function" to send my management an official-looking email which included a link to my fake article (through the XSS). Guess what: I now have a team of people and we do web app security all day. A picture is worth a thousand words :-)
- Sorry fellas, the contract says talking out of bed is a no-no.
- All of my stories are olden-days network security exploits :)
- CICS (http://en.wikipedia.org/wiki/CICS) command injection from a crappy ColdFusion app, through a crappy ColdFusion->TN3270 middleware environment, all the way to the mainframe. In all fairness, it would have been difficult to do without a little inside knowledge.
- The Photobucket 0day I've released utilizes an event handler appended to a query string, which is then embedded into the HREF/link elements within the page. With a bit of user interaction the payload then initiates an AJAX request to the account settings area, and proceeds to log the full name, email address, miscellaneous account data, the cookie, and the mobile upload address as well as the PIN number for the account. With the mobile upload address and PIN number it is possible to upload arbitrary images to other individuals' accounts for dubious reasons such as attempting to get the account suspended, having the owner of the account be physically investigated for questionable material, or simply to frighten others. During the entire process the user never leaves the Photobucket.com domain, but using the DOM the page's content is changed to that of the actual "Photobucket Maintenance" page before a GET request is made to a third-party logging script through an image. Once the request has been sent, the page then relocates to the most recent images uploaded to the site, which should avert any suspicion on the part of victims.
- Nothing interesting I can share, but one of my favourites is a website that performs concatenation every time it sees a space. I'm seeing more and more of this... so, for example, 'foo bar' becomes 'foobar', but what the programmers of these apps fail to note is the non-strict nature of HTML... tab and newline work just as well for splitting tags, and very few people remember to filter the tab character. (See the whitespace sketch after the response list.)
- One's already been published (MacWorld!), but I usually find some variation of a session logic flaw a couple of times a year. These can range from as simple as changing a hidden field to "User2" when logged in as "User1" and viewing records, to the very complex fifteen-step process of using nearly every trick to show how a persistent XSS can really be fatal. Are we counting authentication control failures as logic flaws? I do, because they're fundamentally incorrect thought processes on the developer's part.
- nothing I'm proud of!
- http://www.elhacker.net/gmailbug/english_version.htm http://sirdarckcat.blogspot.com/2007/09/google-mashups-vulnerability.html Yeah, hacking google r0x
- Where to start. I guess my favorite stories usually revolve around breaking cryptographic implementations where the developer has done something stupid -- either tried to design their own scheme or used a cryptographically strong cipher in a weak way. I've decrypted and manipulated data this way without ever actually cracking the keys, simply by recognizing patterns, and deducing how the cipher was being used. (BH 2006 slides are online if you want more detail).
- A client site had an XSS vulnerability. They also had an Exchange server set up with basic authentication protection. I created an XSS exploit that placed a login form on the website and then spammed the entire company with a notice that the IT department had added a login to the Exchange server on the main website for ease of use. The exploit code actually pointed to my own box, which was set up to prompt with a basic authentication popup; the credentials were then fed into a decode script and output to a file. It took less than 15 minutes after I sent the spam message before one of the IT project managers 'logged' into the site and passed me his credentials. A combination of XSS, social engineering, and information gathering (email address harvesting).
- Asked by a newspaper for a technical opinion on the 2005 Polish presidential candidates' websites, I found SQL injection on one of them that, by use of CHAR() (which bypasses PHP magic quotes), allowed injection of arbitrary content into pages. (I haven't disclosed details of it, but imagine how much fun it could have been :)
- But then I'd have to kill you. Seriously, let's just say I got to see a fine film and had wonderful food after finishing my review.
- I mailed you one... the Indian airline PNR bug.
- Injecting SQL from ASP.NET ViewState.
- dunking bar-boxing
- masturbation
- Drumming?
- Hapkido. That's a Korean system, can be compared to the Russian Combat Sambo.
- but only a little... and bits and pieces of others (notably Israeli self-defense and JKD)
- Actually kung fu and sanda, but close enough.
- Drunken West Texan Barroom Brawling :)
- cliche
- Mind Martial Arts: Tao.
- Is Unreal Tournament a sport?
- I took Krav Maga for a while, my primary method of self-defense is in not being an offensive person, being large and strong, and being very aware of my surroundings
- Pencak Silat Pertempuran
- No, even Anurag Agarwal could kick my scrawny ass.
- What? Do you want to use kung fu to fight against hackers? "The Art of War" may be better to use instead.
- Wing Chun, and it beats blue belts hands down :p
- Many years ago I was engaged in cybersport. Now I'm mainly practicing mouse-moving exercises ;-).
- no
- huh ?
- Taijitsu, Aikido, Aikijujitsu
- Aikido
- Eating Doritos is an art form isn't it?
- Judo, mostly, these days; I haven't been able to compete since April when I tore up my shoulder.
- I hold a three star dragon belt in comp-fu.
- None...what does this have to do with Web Application security?
- Why is this applicable? No, but I should probably learn some form of martial art or combative sport. After all, there are several individuals whose lives were made miserable after I discovered web application vulnerabilities on their employers' websites. I am sure there may be a few individuals gunning for me.
- nope
- Just Script-Fu
- I've trained up to my black belt but I don't plan to teach (not on my own anyway), so I haven't bothered to test since it's so freakin expensive. It's basically a whole mortgage payment :( $600US+ for the World TKD Federation black belt exam....OUCH! [That is, unless something drastic has changed in the last 3 years and I missed the news....unlikely.] I've also dabbled in Muay Thai, Kung Fu, and kickboxing, because my instructor takes courses in all kinds of things and gets on a kick for one or the other along with TKD over the years, but I've never done any of them separately or tested in them.
- I practiced tai chi chuan for over 2 years in my teens.
- no, but I wakeskate, which is WAY cooler!
- No
- Char Siu Bao (the art of making chinese baked pork buns)
- Where is Perl ninjitsu?
- I'm an ace with a pea-shooter, even if I say so myself
- take it easy dude.
- pwn!
- Google-fu and whatever this guy does (the old one kicking ass at the beginning, but not at the end): http://www.youtube.com/watch?v=gEDaCIDvj6I
- I've yet to get into one, but I'd like to say, "Bar-fighting".
- "5 ward bees" fist fighting in the hood. :-)
- No, but I do watch the Kung-Fu network :)
- Skateboarding... You ever try to get time on a busy ditch ramp?
- Neko jitsu - my cat tummy rub fu is awesome! I hardly notice the claws digging into the scar tissue any more
- No
- Trying to convince my boss...
- what the hell?
- LOL, I think I might be the only one who isn't, from what I can tell. I would love to pick up boxing, but there just aren't many gyms around where I live.
- I am a 30th level Ninja in my homemade paper RPG.
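A few of the recurring techniques in the responses above lend themselves to short sketches. First, the never-rotated "remember me" cookie: the usual fix is to treat that value as a revocable credential rather than a permanent secret. This is a minimal sketch, not the site in question; the `user` record, its `remember_me_hash` field, and the `save()` call are hypothetical.

```python
import hashlib
import secrets

def issue_remember_me_token(user) -> str:
    """Mint a fresh random token, store only its hash server-side, return the cookie value."""
    token = secrets.token_urlsafe(32)
    user.remember_me_hash = hashlib.sha256(token.encode()).hexdigest()
    user.save()  # hypothetical persistence call
    return token

def revoke_remember_me_tokens(user) -> None:
    """Call on logout and on password change so a token stolen via XSS stops working."""
    user.remember_me_hash = None
    user.save()
```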
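Second, the keystream-reuse and crypto-misuse answers above: when the same keystream protects two messages, the two ciphertexts XOR to the XOR of their plaintexts, so one known plaintext recovers the keystream for everything else. A self-contained toy illustration (made-up values, not the actual portal):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(32))                        # stand-in for a reused keystream
c1 = xor_bytes(b"role=user;name=alice", keystream)
c2 = xor_bytes(b"role=admin;name=bob!", keystream)

# Knowing (or guessing) plaintext 1 recovers the keystream bytes it covers...
recovered = xor_bytes(c1, b"role=user;name=alice")
# ...which decrypts (or lets you forge) any other token protected the same way.
print(xor_bytes(c2, recovered))                     # b'role=admin;name=bob!'
```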
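Third, the negative-donation story: the missing control is a server-side range check on a client-supplied amount. A minimal sketch with illustrative names, not the actual licensing application:

```python
from decimal import Decimal, InvalidOperation

def total_due(annual_fee: Decimal, donation_raw: str) -> Decimal:
    """Recompute the bill server-side; never trust the sign or magnitude sent by the client."""
    try:
        donation = Decimal(donation_raw)
    except InvalidOperation:
        raise ValueError("donation is not a number")
    if donation < 0:
        raise ValueError("donation cannot be negative")
    return annual_fee + donation

# total_due(Decimal("64"), "-63") now raises instead of producing a $1 bill.
```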
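Fourth, the "%" session-ID story: if the cookie value is concatenated into SQL or compared with LIKE, a lone wildcard can match somebody else's session. The usual fix is a bound parameter and an exact-match comparison; the sqlite3 table and column names below are illustrative only.

```python
import sqlite3

def load_session(conn: sqlite3.Connection, session_id: str):
    """Look up a session by its exact token value using a parameterized query."""
    cur = conn.execute(
        "SELECT user_id FROM sessions WHERE session_id = ?",  # '=' not LIKE, value is bound
        (session_id,),
    )
    return cur.fetchone()  # None means the cookie value matched nothing
```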
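Finally, the space-concatenation filter: HTML happily accepts tab and newline as attribute separators, so a filter keyed only on the space character misses the obvious variants. The toy comparison below just demonstrates that parsing point; the proper defense is context-aware output encoding, not character stripping.

```python
import re

payload = "<img\tsrc=x\nonerror=alert(1)>"

naive = payload.replace(" ", "")        # what the space-only filter effectively does: nothing here
robust = re.sub(r"\s+", "", payload)    # \s also matches \t, \n, \r -- the variants the filter missed

print(naive)   # tab and newline survive, so the attributes still parse in a browser
print(robust)  # collapsed; but again, encoding at output time is the proper control
```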
19 comments:
Cool!
It's what I was waiting for ;-).
Will take a look at this survey shortly.
Good to see the surveys back!
For the last question, do you plan to extract some correlation between working in the security field and fighting sports? :P
romain, maybe. :) Though in my experience there is a high percentage of infosec people doing combative type sports... I just wanted to know what they were :)
Your application firewall doesn't like XSS jokes in the survey :)
<script>alert('lame!')</script>
It's not a joke or a contest that you need malware (aka JavaScript) enabled to complete the form, is it?
Hahah, I didn't even consider that. It's not my code. :) But if you feel so inclined (paranoid), go right ahead and view source. Heh. :)
Site requires JS?! I'm so glad JS is finally getting around... all these years we've never been able to take online surveys!!
;-P
Training in fighting sports will not help in fighting against hackers; instead, learning "The Art of War" will.
See "Applying the Concept from The Art of War" at http://intnetjournal.com/eSecurity/eSecurity_Philosophy/esecurity_philosophy.html
"Training on fighting spots will not help on fighting against hackers..."
I don't think I agree with that 100%. Training in fighting sports (or with a punching/kicking bag) is great stress relief for ANY job.
At the same time I do think "The Art of War" is a good read.
Combat sports are decent for InfoSec pros but don't always help with client relations! For the perfect balance of testosterone and InfoSec, I recommend a rigorous weight lifting program. I'm up to 280 lbs on my dead lift. Bring it ON, gentlemen! :)
@Jim, 405 baby! ;P
Great work on the survey, Jeremiah. I didn't take the survey initially, but for the next survey, I think the following questions need more answer choices:
The AJAX question is pretty much "is it bad, or ugly??" - no "good" answer there.
Regarding application 0day selling, you need the "no" choice next to "other".
The martial arts question needs an "I don't do any physical activity besides breathing and eating."
That is all.. Looking forward to the next survey :)
Thanks for the feedback Marcin. Glad you liked it. Yah, every survey gets a little bit better as I'm still learning how to put together the right type of questions and answers.
Hey Jeremiah. I enjoyed this edition of the survey. Nice job.
Of particular interest was the "your most clever attack" question, as that used to be one of my favorite questions to ask when I was interviewing people for pen testing positions. A couple great answers from the survey (love the CICS injection attack), but I was a little disappointed by the ones that were just variations on XSS and SQL Injection, or simple authorization bypasses.
To those people citing NDAs as the reason for staying silent: describing the attack technique doesn't violate your NDA. Just don't disclose the customer name or the application or any other specific details of the target. Man up and let's hear those stories. :>
Hey Jeremiah, can I make a small suggestion for the future?
Post the results as a new BLOG article. I would have missed them if I hadn't scrolled down the page to read some comments on some other articles that had on-going discussions.
@chris, thanks much. Yah, I was hoping for more and better stories there, but it was not to be. I think I might be able to improve the answers in future surveys with some more creative questioning. I'll have to think about it.
@kingthorin, thanks for bringing that up. I'll do that next time.
Awesome survey, the answers seemed spot-on with my experience. I've noticed a correlation of martial arts in the InfoSec field as well. After all it doesn't matter how strong your encryption is if someone can beat your password out of you (the ol' rubber hose attack ;)
-Tom
Unlike most of the people responding to this survey, I'm just a simple business owner, and what I have been reading within these posts indicates that in reality there is no security.
I am waiting for a friend of mine to finally complete his encryption development on some products that have proven to be unique and, to date, after 10 years of attempts, unbeatable. The NSA has tried for years and cannot beat the encryption.
It's an interesting product and is described as keyless encryption. Most encryption, if lucky, gains 3% in capability and is looked at as a gold standard when it improves.
This product is well beyond what is available now.
I believe one day this type of discussion will not be taking place. We will all be saying "remember when we used to".
The best to all who read this. Security is coming.
Craig
Wow, really a very nice survey about web application security. Thanks for sharing; awesome post.
It would be interesting to repeat the survey and see what has changed over time.
Glenn Davis
DataStar
http://www.surveystar.com