Thursday, June 28, 2007

Take5 Interview

Today I was Chris Hoff's third Take5 interview victim. The last two were Chris Wysopal and Marcus Ranum. Personally, I like blogger-style interviews, as long as they ask good hard questions. Bloggers, especially the infosec types, tend to ask highly relevant questions, and I enjoy the opportunity to be more thoughtful about my answers.

Monday, June 25, 2007

C-NET podcast with SPI Dynamics, Watchfire, and WhiteHat

Several weeks ago Joris Evers (CNET) hosted a web security podcast with Danny Allan (Watchfire), Jeremiah Grossman (WhiteHat Security), and Michael Sutton (SPI Dynamics). The questions revolved around comparing old vs. new: Microsoft vs. Google/Yahoo, desktop security vs. web security, and traditional attacks vs. Web 2.0 attacks. Basically, are the problems the same as always, or are we dealing with brand new threats?

For my part, I think the underlying problems are the same ol' input validation issues we've always had, though attacks like XSS, CSRF, and SQL Injection are certainly unique. What's also different is the scale of the problem and its potential impact. At no time in history has software (web applications) been available to 1 billion people.

Chris Hoff gives us an attitude adjustment

Chris Hoff's post, How to Kick Ass in Information Security, really spoke to me. He reminds us about all the things we should already know but for one reason or another tend to forget. Inspiring actually. A snippet from the end...

Think about this stuff. It's not rocket science. Never has been. Most of the greatest business people, strategists, military leaders, and politicians are nothing more than good listeners who can sell, aren't afraid of making mistakes, learn from the ones they make and speak in a language all can relate to and understand. They demonstrate value and think outside of the box; solving classes of problems rather than taking the parochial and pedestrian approach that we mostly see.

You can be great, too. If you feel you can't, then you're in the wrong line of work.

6 reasons why reviewing development code is not the same as assessing production websites

Finding and fixing vulnerabilities during development, prior to production release, helps lower software development costs and decreases website security risk. Obviously, vulnerabilities in development will make their way to production, but what many people miss is that not all vulnerabilities in production once existed in development. At first this might seem counterintuitive, but it happens all the time.

Frequently WhiteHat customers deploy our Sentinel Service earlier in the SDLC (via Satellite Appliance), right around QA time. I mention this because we presently run continuous assessments on a significant number of staging systems, side-by-side with their production "mirrors". What's interesting is that the vulnerabilities reported are rarely identical! This comes as quite a shock to many of our customers. It turns out there are a variety of reasons for differences between development/staging and production systems:

1) Code is enabled/disabled prior to release
Before staging code is released to production, the push process says to disable debug == 1, admin == true, test credit card numbers, verbose error messages, test/QA accounts, etc., and then enable extra security add-ons, transaction logging, performance enhancements, etc. This means production will never be a perfect mirror of staging, and should any one step be overlooked, vulnerabilities could be introduced.
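To make the idea concrete, here's a minimal sketch of a pre-push sanity check. The flag names and JSON config format are hypothetical assumptions, not from any particular framework; the point is simply that the release process can verify the settings production is supposed to have before the push goes out.

# Hypothetical pre-push sanity check (Python). Flag names and config format
# are illustrative assumptions, not from any specific framework.
import json
import sys

REQUIRED_PRODUCTION_VALUES = {
    "debug": False,               # verbose error messages must be off
    "admin": False,               # admin/test backdoors must be off
    "use_test_credit_cards": False,
    "transaction_logging": True,  # extra logging must be on
}

def check_config(path):
    with open(path) as f:
        config = json.load(f)
    problems = []
    for key, expected in REQUIRED_PRODUCTION_VALUES.items():
        actual = config.get(key)
        if actual != expected:
            problems.append(f"{key} is {actual!r}, expected {expected!r}")
    return problems

if __name__ == "__main__":
    issues = check_config(sys.argv[1])
    if issues:
        print("Push blocked:")
        for issue in issues:
            print(" -", issue)
        sys.exit(1)
    print("Config looks production-ready.")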

2) Lack of system configuration and patch level consistency
Even with centralized configuration and patch management, it's extremely difficult to keep staging and production machines in perfect sync on rapidly changing websites. Non-standard-build web servers are quickly put into rotation, ASP.NET request validation is disabled on the fly because users need to upload active content, and then there are random system failures.
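As a rough illustration of how drift can be spotted, here's a small sketch (standard library only) that compares a few response headers between a staging host and its production mirror. The hostnames and header names are placeholder assumptions; a real check would also cover patch levels, modules, and configuration directives.

# Illustrative drift check: compare a few response headers between staging
# and production. Hostnames are placeholders.
import urllib.request

def headers_of(url):
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return {k.lower(): v for k, v in resp.getheaders()}

staging = headers_of("https://staging.example.com/")
production = headers_of("https://www.example.com/")

for name in ("server", "x-powered-by", "x-aspnet-version"):
    if staging.get(name) != production.get(name):
        print(f"{name}: staging={staging.get(name)!r} production={production.get(name)!r}")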

3) Last-minute feature demands
Management says a new feature needs to be pushed NOW! "Security be damned, the company will lose money if it isn't released. Make the change on production and deal with any security issues later." Developers rush out new, buggy, and insecure code.

4) Emergency vulnerability hot fixes
Security says a high-severity, easy-to-exploit vulnerability has been discovered on the production website. "Developers need to implement a hot fix ASAP because it's too risky to wait for the normal dev/QA process." The developer pushes the fix, but forgets to back-port the code. The next scheduled release overwrites the earlier security hot fix.

5) Backup and log files left in a publicly available location
System administrators are trained to make backups of files (*.bak) before they're overwritten in case something goes wrong. If backup files remain next to the originals indefinitely, it may never become a problem. However, if the files are located in a Web directory, it can be: "login.cgi" becomes "login.cgi.bak", and the source code is made publicly available by simply requesting "/cgi-bin/login.cgi.bak".
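A quick way to check for this class of leftover is to request common backup suffixes for pages you already know about. Below is a minimal sketch using only the Python standard library; the page list and suffixes are assumptions to adapt, and it should only be run against sites you're authorized to test.

# Minimal check for exposed backup copies of known pages (standard library only).
# The page list and suffix list are illustrative assumptions.
import urllib.error
import urllib.request

KNOWN_PAGES = ["https://www.example.com/cgi-bin/login.cgi"]
BACKUP_SUFFIXES = [".bak", ".old", ".orig", "~"]

def status_of(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code
    except urllib.error.URLError:
        return None

for page in KNOWN_PAGES:
    for suffix in BACKUP_SUFFIXES:
        candidate = page + suffix
        if status_of(candidate) == 200:
            print("Possible exposed backup file:", candidate)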

6) Infrastructure devices
When it comes to infrastructure devices such as proxies, load balancers, firewalls, network storage, databases, and web application firewalls, staging systems are rarely a perfect mirror of production. Often the reason for the difference is cost. These devices can impact a website's security posture either positively or negatively.

Try as we might, maintaining identical staging and production systems is difficult (often impossible), to say the least. Sometimes the issues described above are under developer control, but certainly not always. Push managers, system administrators, and security personnel, at the direction of the powers that be, all play a role in maintaining a website, and everyone makes mistakes. This illustrates why measuring the security of the actual production website is so important.

Friday, June 22, 2007

Rolling Reviews: Cenzic's Hailstorm

Jordan Wiens of Network Computing (he survived the layoffs) has released round two of the Rolling Reviews of Web Application Scanners. This time up is Cenzic's Hailstorm Enterprise Application Risk Controller, and Jordan is again not pulling any punches. He covers the product's Ajax support, interface aesthetics, vulnerability identification, false positives, and reporting prioritization.

Thursday, June 21, 2007

Hackers insert footage of nuclear explosion into webcam

Normally, I don't just link somewhere and say "check this out," but for this I had to make an exception.

"Czech hackers managed to get access to a webcam and insert realistic footage of nuclear explosion. The incident was broadcasted on the Czech TV programme Panorama on Sunday."


Talk about a "hack"

The web application security market is hot, hot, hot.

Since the acquisitions of SPI Dynamics and Watchfire, several people have asked me the same two questions: When is WhiteHat going to get acquired and what does this mean for the web application security market?

1) Obviously the acquisition question isn’t something I’d be able to comment freely about even if I wanted to. I’ll politely hide behind no comment.

I did want to congratulate both companies, especially the SPI Dynamics founders (and long-time employees) whom I've known for a long time. Caleb, Brian, Brian, and Erik put a lot in, dedicating many years of their lives to building a great company. By all accounts the acquisition will be successful for them, which I believe is also a first for the web application security market. Well done, guys! Don't spend it all in Vegas during Black Hat :)

2) IBM and HP purchased Watchfire and SPI Dynamics, respectively, to extend their enterprise software development offerings. The deals are not really seen as a "security play" by either acquirer; as Mike Rothman says, IBM and HP don't have security strategies anyway. As company/product integration takes place, the scanning technology will probably become a built-in feature of larger enterprise software development packages, with standalone vulnerability scanner innovation taking a backseat (or disappearing) in favor of the pure SDLC aspects that'll be in the hands of developers and QA types.

Solutions for detecting vulnerabilities in web applications are separated into two distinct and complementary markets: vulnerability assessment (VA) and developer/QA tools for security within the SDLC. VA is served by WhiteHat (SaaS), a myriad of small to large consulting shops, and to some extent Qualys and ScanAlert (SaaS), who've recently started their webappsec initiatives. The VA side will focus on website security oversight, emphasizing scale, ease of deployment, and lowering TCO. The tools, as always, will assist in the production of quality code.

What's clear, though, is that web application security is finally considered a "real" market segment.

Tuesday, June 19, 2007

PCI Certification doesn’t make a website harder to hack

Update 07.14.2007: As a result of this post, a good discussion has emerged over at PCI Compliance Demystified. I clarified my points in a blog comment over there and duplicated the content here as well (see below).

I've posted often about PCI, with a particular interest in "what web application vulnerabilities are ASVs required to identify?" For merchants and ASVs alike this is a very important question. For a website to pass PCI, what depth of testing is sufficient for an ASV to pass their entrance exam? Is the bar low enough for a network scanner, or will something more comprehensive be necessary? The answer benchmarks PCI's level of security assurance for merchants and the rest of the industry.

More than a year ago MasterCard informed the ASVs that they'd drop 8 of the OWASP Top 10 from the scanning requirements, leaving only Cross-Site Scripting (XSS) and SQL Injection. Then, in a seemingly contradictory statement, they said, "...there are no plans to make any of the PCI Data Security Standard requirements less robust. Any future enhancements to the standard are intended to foster broad compliance without compromising the underlying security requirements of the current standard." Left in confusion, we didn't know what to believe, so we waited for the answer.

Recently I looked up the newest PCI 1.1 documents and in the Technical and Operational Requirements for ASVs, it looks like we have the answer. On page 10 it says the following:

Custom Web Application Check
The ASV scanning solution must be able to detect the following application vulnerabilities and configuration issues:
• Unvalidated parameters which lead to SQL injection attacks
• Cross-site scripting (XSS) flaws

How about that! PCI only requires that 2 out of the OWASP Top 10 remain, 2 out of the 24 attack classes according to the WASC Web Security Threat Classification, and there is absolutely no mention that the scanner has to be logged in during the scan. Great. So "technically" the PCI standard itself has NOT been watered down; that much remains the same. What has been lowered is web application security PCI compliance enforcement, which is down to virtually nothing.

My concern is that merchants will be getting a clean bill of security health and never be informed that their websites are very likely riddled with unreported vulnerabilities that weren't even tested for. XSS and SQL Injection too! I understand and appreciate the business challenges of cost/performance for the PCI Council to consider, but come on, this sets a very dangerous precedent. Scans conducted like this will do NOTHING to make a website more secure or thwart anyone from finding that one vulnerability they need for exploitation.

Oh well, I guess the bright side is we have our answer and things could be improved upon later. How much later?

Wednesday, June 13, 2007

Moving Forward: CSI Working Group on Web Security Research Law

The CSI Working Group on Web Security Research Law (web security researchers, computer crime law experts, and law enforcement officials) was formed in an effort to advance our collective understanding of website security vulnerability discovery and disclosure. The inaugural report explores all aspects of the debate, complete with case studies, and provides a solid resource for bringing the industry up to speed. As part of the working group, and having had time to contemplate what I've learned, the big question on my mind is "where do we go from here?"

If our goal is to…

1) Protect the security researcher
If a software vendor or website owner is knowingly or unknowingly putting consumers (or their data) at risk, security researchers make it known. A security researcher's act of vulnerability discovery/disclosure, which may cross ethical and legal lines, in a sense serves as an industry watchdog. While many argue over the specifics, few say security researchers do not, on the whole, help the greater good. If we wish to continue having security researchers play this role as more software becomes web-based, we'll need:

a) Clear guidance as to what actions are legal or illegal when looking for and disclosing website vulnerabilities.

Today's climate of legal liability and criminal prosecution has already caused many experienced researchers to curtail website vulnerability discovery and disclosure (at least in the U.S.). Without guidance, those who will suffer the most, unfortunately, will be the newcomers to the information security field who don't know any better. Careers, or in some cases lives, will be seriously impacted before they've even begun. For the rest of the people looking for vulnerabilities on websites, the bad guys, it's a free-for-all one way or the other.

b) Whistleblower protection
Even the most well-intended laws sometimes prevent people from serving the greater good. We've seen this happen in other areas, and it's reaching the point where whistleblower protection may be required in the information security field as well, especially with more and more of our most sensitive information under the protection of others. People in a position to know should be able to come forward with at least some expectation of legal protection. Right now there is none.


2) Motivate organizations to better secure their websites

By some estimates over 1 billion people are online with access to over 122 million websites (growing by nearly 4 million per month). The vast majority of the websites that are assessed for security have serious vulnerabilities. So it's no surprise that the most commonly attacked spot is the Web layer, because it represents the path of least resistance. With so much commerce being conducted on the Web, it should be in the best interest of website owners to protect the security and privacy of the consumer. The question we all ask is how to help that happen.

a) Industry’s self-regulation - the carrot
Industries may self-regulate and reward website owners with perks for maintaining a high level of security for consumers. With PCI we're beginning to see this trend. While the results will not be immediate, over time they will be measurable. Industries that fail to self-regulate on-line security will continue to suffer massive incidents. If the problem gets bad enough, the risk of government-imposed regulation becomes a reality, as has already happened in many other industries.

b) Legal liability - the stick
The government may also decide consumers deserve to be compensated for breaches of their personal information. Personally, I find this route preferable to legislated compliance standards for security. Let the organizations involved properly balance their need for security with the potential for legal liability.

However, maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies.

XSS Attacks book interview with SearchSoftwareQuality

Recently I spent some phone time with Colleen Frye of SearchSoftwareQuality. We started off by talking about our new XSS Attacks book, but one good subject led to another. During the interview we also chatted about the changing vulnerability discovery/disclosure landscape, the SDLC, WAFs, and various other timely industry topics. Below is a quick snippet; the rest turned up some good content as well.

You've been beating the drum about Web application security for some time now. Where has the industry made progress, and where is it still lacking?
Jeremiah Grossman: While the Web is still an insecure place, and most Web sites are still insecure, Web site owners now have the knowledge at hand to secure their Web sites should they choose to. Not totally secure, of course, but to improve the "hackability." It would be nice if the bad guys had to work really hard to find that one fatal flaw. Right now it's just shooting fish in a barrel. We have the tools, the knowledge, and the methodologies and best practices are there. Now it's the job of the other side to implement. On the security vendor side, our job is to make implementing those practices or developing solutions around those practices easier and cheaper.

Monday, June 11, 2007

$1,000,000 CNBC stock trading contest hacked

Described as the American Idol of stock picking, CNBC's Million Dollar Portfolio Challenge (now closed) is a chance for amateur traders to match their skills against the portfolios of the internet's best. 375,000 contestants compete in ten one-week challenges for a $10,000 prize and a spot in the final round to go for the cool million. To win, all you have to do is make the most money. However, reports are surfacing that several of the finalists (with unbelievably good returns) may have gamed the system. According to BusinessWeek and SecurityFocus, picking a sure winner required exploiting a surprisingly simple web application business logic flaw:

“A trader could go to the CNBC Web site and select a number of stocks to buy, but hold off on executing those trades. If you made the selection before the close of regular trading at 4 p.m. EST and left your Web browser open, you could execute those trades after hours and still receive the 4 p.m. closing price. For example, if a company whose stock closed at $20 a share rose to $25 in after-hours trading, you could buy the stock at $20, even though it was already worth 25% more.”

You can almost hear an army of web hackers smacking themselves for missing the opportunity to play. Or maybe they didn't? :) Obviously this is an issue you can't scan for; heck, most humans miss it. And talk about difficult to detect. It's certainly not the common IDS noise from XSS or SQL Injection. In the end it was seemingly just a bunch of everyday stock people who spotted the abnormality. CNBC, with a reputation to protect, announced that it had opened an investigation. With that amount of money on the line, they'd better.

This situation raises several interesting and timely questions with respect to the CSI website vulnerability discovery report.

1) Does CNBC or parent company GE face potential lawsuits from disgruntled participants?

2) Do the people who exploited the flaw in an attempt to win the contest face any civil liability or potentially criminal charges? What about the people who felt cheated and decided to find the glitch of their own accord?

Civil, maybe; criminal is hard to say, because it seems they still technically "used" the system in the way it was intended. It could easily go the other way, and both sides might have that scary discussion with a prosecutor.

CNBC Notice

We have an update on the CNBC Million Dollar Portfolio Challenge. As CNBC first reported on May 30, we were contacted by several contestants alleging unusual trading in violation of rules of the contest, which ended on May 25.


As CNBC said at the time, we immediately launched a thorough investigation of the contest and we are now focusing on three specific areas of concern.


We are investigating whether one or more finalists wrote and executed computer program scripts to bypass the contest's security measures.


Additionally, one or more contestants were able to change their trades after the markets closed at 4 PM ET, but before the trades were processed by CNBC. That way, a contestant could have executed trades after hours, and have the trades priced as of that day's market close.

CNBC has retained two leading consultants in the information security industry to investigate these two computer programming related issues.

In addition, there have been allegations that one or more contestants may have engaged in illegal market manipulation to affect actual prices of stocks represented in their contest portfolios.


We have engaged an independent securities expert to determine whether such activity took place.


As we said previously, the rules state that CNBC has until July 8, 2007 to declare a winner. Although CNBC hopes to announce a winner before that date, it is more important to ensure the individual awarded the Grand Prize is in compliance with the rules.


Integrity is paramount to CNBC. We are taking all allegations of improprieties very seriously. CNBC will provide updates on the air and on CNBC.com as they become available.

Rolling Reviews: SPI Dynamics WebInspect

Last month I blogged that Jordan Wiens of Network Computing would be conducting Rolling Reviews of Web Application Scanners. First up is the review of SPI Dynamics's WebInspect product. As expected, Jordan isn't making this a cakewalk for vendors. He knows his webappsec stuff and digs deep into the results, especially around the Ajax claims. Ajax is a tough problem for scanners to solve and is likely unsolvable. Ajax is also unlikely to make web applications less secure, but it definitely makes them harder to assess. Next up: Cenzic ARC (Application Risk Controller).

Friday, June 08, 2007

How to rate the value of your websites (Road to Website Security part 2)

Part 1 (How to find your websites) of the series describes a process for website discovery. This piece (part 2) describes a methodology, which many of our customers have found helpful, for rating the value of a website to the business. Website asset valuation is a necessary step towards overall website security because not all websites are created equal. Some websites host highly sensitive information; others only contain marketing brochure-ware. Some websites transact millions of dollars each day; others make no money, or maybe a little with Google AdSense. The point is we all have limited security resources (time, money, people), so we need to prioritize and focus on the areas that offer the best risk-reducing ROI.

For example, it might be more beneficial to first resolve a medium-severity vulnerability on a mission-critical website rather than a high-severity vulnerability on a website of marginal value to the business. Without the intelligence of website valuation and vulnerability severity, effective decision-making is impaired. Another piece of intelligence we'll discuss later on is what we at WhiteHat call a vulnerability threat rating, or how difficult a particular vulnerability is to exploit. Not all vulnerabilities are created equal either, but we're getting a little ahead of ourselves.

Continuing from the end of part 1, visit each website in the asset inventory spreadsheet and answer a series of questions about it. These answers assist in a subjective value rating process. I say subjective, rather than objective, because I've yet to see a generic value rating system for websites that was quantifiable. If you happen to have one specifically for your company, fantastic, use it. Heck, if you can, post it so others can learn from it! If you don't have one, don't worry; over time you should be able to tailor this methodology specifically for your company.

1) What does this website do and who is responsible for it?
Click around the website, exercise the functionality, fill out a few forms, register an account if you need to. Is it a shopping cart? Web bank? Brochure? Who manages the content and/or security?

2) What would the business impact be if the website were to be compromised or suffer more than 24 hours of downtime?

Sometimes unplugging the network cord (not recommended) on a website is the only way to tell if someone cares about it or if it's important. Before you do that though, consider what would happen if the website were unreachable or suffered a publicly known data/system compromise. Sometimes organizations can express downtime in quantifiable terms. If so, great, use them. If not, terms such as the following will suffice:
  • Major/moderate/low loss of revenue
  • Major/moderate/low reputation impact and brand damage
3) What is the number of registered users and/or the level of visitor traffic?
  • Hundreds/thousands/millions of registered users
  • Thousands/millions/billions of page views / unique visitors per month
  • Unknown
4) What's the most valuable type of data collected and/or distributed?
  • Personal and private information (names, addresses, phone numbers, social security numbers, etc.)
  • Regulated information (credit card numbers, bank account numbers, patient records, attorney privileged data, etc.)
  • Intellectual property (source code, customer lists, business plans and objectives, etc.)
5) How often do the web applications change (not the content)?

a) 1 – 4 times per month
b) Quarterly
c) Annually
d) Never or as needed

Answering these questions takes time and a lot of research, but the spreadsheet you're building will be of huge benefit to the company, especially one with a substantial Web footprint. Taking these answers into account with the appropriate weighting, assign a rating from 1 to 5 to each website, with 5 being the most valuable. Share the document around internally for feedback. After you have completed the process, what you'll have is something most companies do not: a well-defined and prioritized website asset inventory.
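For those who like to formalize the weighting, here's a rough sketch of how the answers could be rolled up into that 1-5 rating. The factors, weights, and sample scores are assumptions meant to be tailored to your own company.

# Rough sketch of a weighted 1-5 website value rating. Factors, weights, and
# sample scores are illustrative assumptions.
WEIGHTS = {
    "business_impact": 0.40,   # question 2
    "data_sensitivity": 0.30,  # question 4
    "traffic": 0.20,           # question 3
    "change_rate": 0.10,       # question 5
}

def value_rating(scores):
    """scores: dict mapping each factor to a 1-5 answer."""
    weighted = sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)
    return max(1, min(5, round(weighted)))

sites = {
    "store.example.com": {"business_impact": 5, "data_sensitivity": 5,
                          "traffic": 4, "change_rate": 3},
    "brochure.example.com": {"business_impact": 2, "data_sensitivity": 1,
                             "traffic": 2, "change_rate": 1},
}

for host, scores in sites.items():
    print(host, value_rating(scores))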

The spreadsheet will look something like this:

Let's talk vulnerability discovery

Update 06.18.2007: Additional coverage by Robert Lemos of SecurityFocus

Update 06.12.2007: CSI Working Group on Web Security Research Law Report is available. (Reg. Req.) As I said below, this is well worth the read and especially important for the web security crowd.

Last year I began talking about how vulnerability "discovery" is becoming more important than disclosure as we move into the Web 2.0 era. Unlike traditional software, web applications are hosted on someone else's servers. Attempting to find vulnerabilities, even with honest intentions, on computers other than your own is potentially illegal. Eric McCarty and Daniel Cuthbert serve as examples, as covered by Robert Lemos from SecurityFocus. Whatever your opinion on the issues, few outside the web application security field appreciate the finer points or understand the potential long-term effects. People have been listening, though.

It started with Scott Berinato's The Chilling Effect, and most recently Sarah Peters from the Computer Security Institute assembled a diverse group of Web security researchers (including myself), computer crime law experts, and agents from the U.S. Department of Justice to discuss the situation and create a report. After several collaborative calls and email exchanges amongst the participants, I learned a great deal, but unfortunately left with more concern than I started with.

I've read the report draft and it's very well written; Dark Reading has coverage (Laws Threaten Security Researchers). I'd like to add that this document should be mandatory reading for everyone in, or about to become part of, the infosec industry. The final report won't be posted until next week during CSI, where a panel (I'll be there) is planned to discuss the contents. I'll update the post when the link becomes available.

Made InfoWorld CTO 25 for 2007

RSnake spilled the beans, but no one was more surprised than me when the call came from InfoWorld. Honestly, I didn't know what to make of it. Being listed on the InfoWorld CTO 25 next to names from top companies like VeriSign, 3Com, Motorola, and Credit Suisse is surreal. I'd also like to congratulate the other recipients.

As RSnake said, Web application security is FINALLY seen as something to take seriously. Though I'm getting a lot of the credit, which I sincerely appreciate, it's taken years of tireless effort from many amazing people in the community all over the world. The people involved with OWASP, WASC, blogs, sla.ckers.org, SANS, and others have made a huge impact in raising awareness. Thanks to them generously donating their time and expertise, we have the knowledge and technology required to secure websites.

There is still a lot of work to do, but we've taken the first step.

Wednesday, June 06, 2007

OWASP & WASC Cocktail Party at Black Hat

This year OWASP and WASC are combining their annual meet-ups at Black Hat USA 2007 into a cocktail party! Breach Security is generously sponsoring the event, so cocktails and appetizers will be served to all attendees. With support from both webappsec organizations, their respective members, and the industry's top people expected to be in attendance, the meet-up is shaping up to be a conference highlight. I can't wait; this is going to be a lot of fun! Remember, only so many people can fit in the Shadow Bar, so send in your RSVP ASAP.

Monday, June 04, 2007

How to find your websites (Road to Website Security part 1)

I spend a lot of time with companies, mostly large and medium sized, who are interested in finding the vulnerabilities in their websites. Obviously the first step in the VA process is to FIND the websites. Now this may come as a surprise to many, but companies with more than 5 or 6 websites tend not to know what they are, what they do, or who's responsible for them. And if they don't know what websites they own, there is no hope of securing them.

Finding all of a company's websites isn't exactly a trivial process and doesn't end with scanning IP ranges for ports 80 and 443. Virtual hosts, redirects, vanity hostnames/domains, partnerships, and legacy systems are hurdles that must be overcome. Here is a process that should help:

1) Network Discovery
Find a starting-point IP address. Most of the time the main website (e.g. www.company.com) works fine. Look up the IP address using dig or some other utility:

> dig www.company.com

Next, plug the IP address into the ARIN whois database to search for the registered netblock(s). Then have a chat with one of the network systems administrators, asking them if this is indeed your netblock and if they know of any more that might have been missed.

Last, nmap scan the netblock ranges on ports 80 and 443 looking for web servers. Sure, other web servers could be listening on non-standard ports, but those are likely out of "web application security" scope and can be addressed later.

> nmap -sT -p 80,443 x.x.x.0-255

Save all your results in a spreadsheet.
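For those who prefer to script this step, here's a rough Python equivalent: resolve the starting hostname, check a netblock for listeners on ports 80 and 443, and write the findings to a CSV you can paste into the spreadsheet. The hostname and netblock are placeholders; use the ranges your ARIN lookup and network administrators actually confirm.

# Sketch of the discovery step: resolve a starting hostname, probe a netblock
# for web listeners, and save results to CSV. Hostname/netblock are placeholders.
# Sequential and slow, but fine for a one-off inventory pass.
import csv
import ipaddress
import socket

start_ip = socket.gethostbyname("www.company.com")
print("Starting IP:", start_ip)

netblock = ipaddress.ip_network("192.0.2.0/24")  # placeholder netblock

def is_open(ip, port, timeout=0.5):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((str(ip), port)) == 0

with open("web_servers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ip", "port"])
    for ip in netblock.hosts():
        for port in (80, 443):
            if is_open(ip, port):
                writer.writerow([str(ip), port])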

2) DNS and Zone Transfer
Search for web servers based upon domain names by interrogating the name servers. whois works great on the command line, but if that's not available, any website that provides the service (GoDaddy, register.com, etc.) will do.

> whois company.com

Name Server: nameserver1.com
Name Server: nameserver2.com

Next we'll attempt a DNS zone transfer on the off chance that a name server is misconfigured to allow it. Digital Point Solutions provides a great online utility that does this for you, looping through each name server and attempting the zone transfer. dig on the command line works fine as well, but I still prefer the Web in this instance.

> dig @nameserver1.com company.com axfr
> dig @nameserver2.com company.com axfr


Additionally, it doesn't hurt to have a chat with the person in charge of (or who has access to) the domain registrar's account to see what other domain names are owned by the company. If you are lucky, they might even save you a lot of work by providing the hostname list from the DNS name servers directly. If you have access to the web server's configuration, or to the person who does, you could also dump the virtual hostnames and get lists that way as well.

Match up the hostnames to the IP addresses in your spreadsheet and log the domain names.

3) Google and Netcraft
Google is a great resource for locating websites, especially if you know the right search operators to use. First, restrict search results by domain name:

site:company.com

This should provide a list of results, but also a lot of pages on the same hostnames that need to be whittled down. Once you find a hostname, log it, then exclude it from the search results and try again.

site:company.com -www.company.com

Rinse repeat.

site:company.com -www.company.com -store.company.com

and so on until no more results come up. Log all hostnames found.
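The refinement loop above is mechanical, so a tiny helper can build each successive query from the hostnames you've logged so far. This only constructs the query string; run it in the search engine by hand, since automated scraping is against most engines' terms of service.

# Build the next "site:" query, excluding hostnames already logged.
def next_query(domain, found_hostnames):
    exclusions = " ".join(f"-{host}" for host in sorted(found_hostnames))
    return f"site:{domain} {exclusions}".strip()

found = {"www.company.com", "store.company.com"}
print(next_query("company.com", found))
# -> site:company.com -store.company.com -www.company.com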

Netcraft SearchDNS is also an excellent resource for locating hostnames. Perform a wildcard domain name search for each domain name you have logged:

*.company.com

Log each hostname listed. You'll probably get a lot of overlap between Google and Netcraft, but that's OK; better not to miss anything. You also might want to give Fierce (by RSnake) a try… it locates targets both internally and externally, though not just websites.

4) The grunt work
Visit each website on the list with a web browser and start taking notes. Note whether the website is up, active, and functional, what its purpose is, where it redirects to, and anything else informative. Click around the website, having a look at the links and the sitemap to see if any other hostnames or domain names are not yet on your list. Doing this with a logging HTTP proxy helps as well.

Depending on how many websites there are, this can be a painstaking process, but it's also a vital one.

Continued...

How to rate the value of your websites (Road to Website Security part 2)