Venture capitalist (Grossman Ventures https://grossman.vc), Internet protector and industry creator. Founded WhiteHat Security & Bit Discovery. BJJ Black Belt.
Thursday, December 29, 2011
Terrified
I’ve been known to physically shake, have shortness of breath and a strained voice, speak far too quickly, stand statuesque on stage almost like I’m hiding, and feel completely stressed out overall. Early on I decided that no matter how terrified I was, my message needed to get out there; that was more important than letting fear stop me. I think my #1 skill as a public speaker is hiding my fear, my terror. My theory was that the more experience I gained, the faster I’d overcome it. In the meantime, in order to cope, I developed a pre-presentation ritual.
I’d prepare heavily for each event, pore over the content in every slide, and seek candid feedback from those I trusted. I’d also commonly ask event organizers for details on audience demographics so I could specifically tailor my comments. I’d then practice ahead of time for small private groups in order to get the timing and flow down. If something, or all of it, sucked, I’d throw it out. With the assistance of my wife, I’d even get a plan down for precisely what I was going to wear on show day. Nothing was left to chance. Finally, I’d block out an hour before each presentation to check out the stage, be alone with time to center, prepare, and calm myself down, and of course continue tweaking slides. Being prepared helped take the edge off my anxiety a lot.
The problem was, or is, that no matter how many times I presented, the anxiety, the fear, and the terror never really lessened. That is, until this last year. Something changed, but what!? Had I finally overcome it? I’m not an introspective person, so it wasn’t until very recently that I think I figured it out. In 2011 my public presentations weren’t pushing the envelope as much as in years past. The content was good, to be sure, but it also focused on “safe” business-level subjects and incrementally advancing work from previous years. In short, I really wasn’t putting myself out there as far as I’m used to. In my case, the feeling of fear and terror arises when pushing forth an idea or a concept while unsure if people will think it’s uncompelling or totally idiotic. A chance you take.
That’s about when I got a call from TED offering a speaking slot at TEDxMaui. We got to talking about my work and discussing an idea worth spreading. It didn’t take long. Then all of a sudden I’m thrust right back into fear and terror mode, but now that I understand it, the feeling is almost comforting. It signals that I have an opportunity to take things in my industry, in our industry, to a new level --- or of course drive right off a cliff. Either way it’ll be a good show! :)
Tuesday, October 11, 2011
Web Browser — The Single Most Important [Online] Security Decision You Make
If you are reading this post, chances are good that you are doing so with a Web browser. And if you are like most people, you use that very same Web browser to bank, shop, book airline tickets, find directions, read news, keep up with friends and family, and so on. These online activities are extremely important to everyday life, and that is why the Web browser you choose may be the single most important [online] security decision you make. If you are using anything other than one of the latest browsers, you are exposing your computer, and by extension the most intimate details of your life, to viruses and the criminals who author them.
Microsoft understands this better than most and is launching a program encouraging people to upgrade their Web browsers and protect themselves. The next important thing to understand is that not all Web browsers are created equal, and how safe they keep you online is difficult to compare, even for experts. For consumers, making a good Web browser choice can be even more daunting, even after becoming aware of just how exposed they may be on an outdated platform. To address this predicament, Microsoft is releasing a scoring methodology to assist people in selecting a Web browser that’s right for them.
Microsoft’s approach to this problem is interesting and novel. The score hinges on the presence of browser security features, comparing everything from URL filters to additional security functionality that web application developers can enable. Such a methodology is useful because it allows people to distinguish between Web browsers by which security features are available and most important to them. Packaging up the enhancements into an easy-to-understand score also helps demonstrate why upgrading makes sense — if nothing else it becomes obvious that newer browsers have better security features.
This effort by Microsoft is a huge step in the right direction and will help make the Web that much safer for everyone. For those curious, head over to YourBrowserMatters.org and see how the Web browser you are currently using scores.
Thursday, September 01, 2011
Sometimes Input MUST be Validated Client-Side: o_O
Garden variety Web applications, the things written in Ruby/PHP/Java/.Net/Perl, are deployed and executed on Web servers — that is to say, server-side. In Chrome and ChromeOS, users download special applications and extensions written in HTML/JavaScript from the “Chrome web store,” where they are installed and executed client-side — in their Web browser. This is an important difference to understand, as where an application gets its input, and where it’s executed, matters a great deal to security. An excellent example of this came to light this summer with the “Hacking Google Chrome OS” research conducted by Matt Johansen and Kyle Osborn.
Let’s assume a Chrome/ChromeOS user installs an application from the web store, like an RSS reader, which reads in data from different “untrusted” URLs all across the Web. The RSS application parses, organizes, sometimes even executes, the XML feed data, and finally displays the list of stories to the user. All this data processing takes place client-side, within the browser walls, NOT on any server. In fact, there is no “server” in this Web application deployment model.
Next let’s assume one of the aforementioned RSS data sources suddenly contains malicious content, an XSS payload for example, which executes in Web browser context when displayed. Depending on the security permissions the application sets for itself, many of which are wide open, XSS payloads can do quite a bit of damage — even from within the browser. As demonstrated by Matt and Kyle, all of a user’s email, contacts, and saved documents are potentially at risk. Their machine can be used to conduct high-speed scans of their intranet. Messages from their Google Voice account can also be spoofed. The list goes on, and the security sandbox doesn’t provide any protection. Similar attacks can be carried out through applications such as email readers, instant messengers, and so on. In fact, this type of XSS vulnerability is fairly common among Google web store applications and has also been shown to affect Apple iOS applications, like Skype mobile, in a very similar way.
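The dangerous pattern is easy to sketch. The snippet below is purely illustrative — the function and field names are mine, not taken from Matt and Kyle’s research — but it captures the bug: an RSS reader that concatenates untrusted feed fields straight into markup will run any payload a feed author plants.

```javascript
// Hypothetical RSS-reader rendering code (illustrative, not the code
// from the actual research). The bug: untrusted feed fields are
// concatenated directly into an HTML string.
function storyToHtml(story) {
  return "<h3>" + story.title + "</h3><p>" + story.summary + "</p>";
}

// A feed entry under an attacker's control:
const evil = {
  title: '<img src=x onerror="stealContactsAndMail()">',
  summary: "Totally legitimate story",
};

// The payload survives into the markup verbatim. The moment this string
// is assigned to element.innerHTML, the onerror handler fires with every
// permission the extension requested -- no server involved anywhere.
const markup = storyToHtml(evil);
```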
The critical point is that, since Chrome/ChromeOS applications (and others like them) execute client-side and receive untrusted data from servers, as opposed to the other way around, the only safe place to perform input validation is actually client-side! Strange as that may be.
Unfortunately, if you are a user of either platform, there is not much you can do to protect yourself against vulnerable applications. If you are a security-conscious developer, the guidance for input validation hasn’t changed. Ensure incoming data, wherever it originates, is what it’s expected to be before using it. Look for correct character sets, data that is neither too short nor too long, proper formatting, etc. Maybe we should update the rule to “All input must be validated.” Done.
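As a rough sketch of what that client-side validation can look like — the names and limits below are my own illustration, not an official WhiteHat or Google recommendation — the application can check type, length, and format, and escape anything destined for the DOM:

```javascript
// Neutralize HTML metacharacters so feed text renders as inert text.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    "&": "&amp;",
    "<": "&lt;",
    ">": "&gt;",
    '"': "&quot;",
    "'": "&#39;",
  }[c]));
}

// Client-side input validation for one feed entry: correct types,
// sane lengths, expected URL format. Reject anything that doesn't fit.
function validateStory(story) {
  if (typeof story.title !== "string" || story.title.length === 0 || story.title.length > 200) {
    return null;
  }
  if (typeof story.url !== "string" || !/^https?:\/\//.test(story.url)) {
    return null; // blocks javascript: and other non-http(s) schemes
  }
  return { title: escapeHtml(story.title), url: story.url };
}
```

A `javascript:alert(1)` link or an oversized title is rejected outright, and a title containing markup comes back escaped, so assigning it to the page can no longer execute anything.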
Tuesday, June 28, 2011
Follow-up: Secure (Enough) Software, Return-on-Investment Data
“Jeremiah, I saw your blog post today and it reminded me that I hadn’t sent you a couple of recently published papers that speak specifically to the question you raise. The MidAmerican case study provides an example of the ROI companies are realizing as a result of SDL implementation. In addition, an independent Aberdeen study focuses on the ROI and gives a larger picture derived from a study of multiple companies’ security practices. I also included the Forrester report I sent you previously.
A MidAmerican case study that demonstrates, with data, the ROI and security improvements of a company that took the Microsoft SDL and built its own process: http://go.microsoft.com/?linkid=9768047
[excerpt]
The effort even took on its own brand inside MidAmerican called the Secure Development Initiative — an initiative, by all accounts, that worked. Total threats as defined by SDL and the Fortify tool did, indeed, fall below 100 by Sept. 30th, 2009. And by 2010, MidAmerican Energy was the only business unit inside the larger holding company that external auditors found to have no security vulnerabilities whatsoever.
But everyone at MidAmerican agreed there were major benefits. SDL-based planning required groups to think more efficiently not only about how they code securely, but how they code in general. “It just becomes like any scarce resource a company has to manage,” Kerber said. “But the message was, we are not throwing up barriers here in IT, we are protecting you.” On balance, the company saw a real gain in the bottom line: Increased efficiencies and fewer fixes resulting from using the SDL-inspired approach netted a productivity gain that could be as high as 20 percent.
[/excerpt]
An Aberdeen Group report (which Microsoft didn’t commission or participate in) provides data behind the SDL practices. What it calls “Secure at the Source” is derived from the Simplified SDL process we defined specifically to demonstrate the transferability of the process to other companies: http://go.microsoft.com/?linkid=9768149
The Forrester Report I sent you earlier has some interesting ROI information in it also: http://go.microsoft.com/?linkid=9762340
We’re finally getting an inflow of data that speaks to the business value of software security programs, data that better justifies the resource investment to management. One of the things I probably didn’t do a good job articulating in my original post is that I’m trying to evaluate and value the various pieces of an SDL, not the SDL’s effectiveness overall. To my mind there is no reason to believe that each activity in BSIMM offers the same benefit as another. For example, when the average organization deploys static analysis software testing during QA, it generally costs $X and reduces the number of high-risk vulnerabilities of Y type(s) in production by Z%. Something just that simple would suffice. If you could only do one thing right now, which would it be? That answer would, and probably SHOULD, be custom to each organization.
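To make the shape of that concrete, here is the kind of back-of-the-envelope, per-activity calculation I have in mind. Every number below is made up purely for illustration; real figures would come from an organization’s own production data.

```javascript
// Per-activity ROI sketch for one SDL practice. All inputs are
// hypothetical placeholders, standing in for the $X / Y / Z% above.
function activityRoi(activity) {
  // Vulnerabilities the activity keeps out of production per year.
  const avoided = activity.highRiskVulnsPerYear * activity.reductionRate;
  // Remediation dollars not spent because those vulns never shipped.
  const savings = avoided * activity.costToFixInProduction;
  return {
    avoided,
    savings,
    roi: (savings - activity.annualCost) / activity.annualCost,
  };
}

// Example: static analysis during QA, with invented numbers.
const staticAnalysis = {
  annualCost: 50000,           // hypothetical tool + staff time ($X)
  highRiskVulnsPerYear: 40,    // high-risk vulns of type Y without the activity
  reductionRate: 0.5,          // Z% = 50% fewer reach production
  costToFixInProduction: 5000, // per-vulnerability remediation cost
};
// avoided = 20, savings = $100,000, roi = 1.0 (a 100% return)
const result = activityRoi(staticAnalysis);
```

Run the same arithmetic across every BSIMM activity an organization is considering, and the “if you could only do one, which would it be?” question becomes a ranking problem instead of a guess.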
Wednesday, June 22, 2011
Follow-up: Secure (Enough) Software — Where Are the Requirements?
My recent post entitled Secure (Enough) Software — Do we really know how? sparked a very thoughtful comment by Mitja Kolsek (ACROS Security), which read more like a well-written blog post than anything else. Mitja goes on to explain one of the more fundamental challenges between the implicit and explicit forms of security requirements in software security. He really hits the nail on the head. Such a good blog post is not something seen every day, so with Mitja’s permission, I’m republishing it for all readers to enjoy.
“A great article, Jeremiah, it nicely describes one of the biggest problems with application security: How do you prove that a piece of code is secure? But wait, let’s go back one step: what does “secure” (or “secure enough”) mean? To me, secure software means software that neither provides nor enables opportunities for breaking security requirements. And what are these security requirements?
In contrast to functional requirements, security requirements are usually not even mentioned in any meaningful way, much less explicitly specified by those ordering the software. So the developers have a clear understanding of what the customer (or boss) wants in terms of functionality, while security is left to their own initiative and spare time.
When security experts review software products, we (consciously or less so) always have to build some set of implicit security requirements, based on our experience and our understanding of the product. So we assume that since there is user authentication in the product, it is implied that users should not be able to log in without their credentials. Authorization implies that user A is not supposed to have access to user B’s data except where required. Presence of personal data implies that this data should be properly encrypted at all times and inaccessible to unauthorized users.
These may sound easy, but a complex product could have hundreds of such “atomic” requirements, with many exceptions and conditions. Now how about the defects that allow running arbitrary code inside (or even outside) the product, such as unchecked buffers, unparameterized SQL statements, and cross-site scripting bugs?
We all understand that these are bad and implicitly forbidden in a secure product, so we add them to our list of security requirements. Finally there are unique and/or previously unknown types of vulnerabilities that one is, by definition, unable to include in any security requirements beforehand. My point is that in order to prove something (in this case security), we need to define it first.
Explicit security requirements seem to be a good way to do so. For many years we’ve been encouraging our customers to write up security requirements (or at least threat models, which can be partly translated into security requirements) and found that they help them understand their security models better, allowed them to see some design flaws in time to inexpensively fix them, and gave their developers useful guidelines for avoiding some likely security errors.
For those reviewing such products for security, these requirements provide useful information about the security model so that they know better what exactly they’re supposed to verify. Only when we define the security for any particular product can we tackle the (undoubtedly harder) process of proving. But even the “negative proof and fix” approach the industry is using today, i.e., subjecting a product to good vulnerability experts and hoping they don’t find anything or fix what they find, can be much improved with the use of explicit security requirements.”
Tuesday, June 21, 2011
How I got my start -- in Brazilian Jiu-Jitsu
The UFC provided a forum, the octagon, to settle the long-standing fight-world debate. Everyone had a theory, but no one really knew for sure. What became crystal clear, and remains true even today, is that every fighter must have a background in Brazilian Jiu-Jitsu or they WILL lose. It’s just that simple. My background was mostly striking, so I wanted to try out this ground fighting stuff.
A co-worker, also interested in the UFC, and I found a local BJJ academy in San Jose taught by black belt instructor Tom Cissero. Tom has a passion for the martial arts and, more importantly, for his students, as he deeply feels that they are a direct reflection of his life and value as a person. Yes, he takes his craft that seriously, and serious he is. Tom is abrasive, aggressive, and combative, attributes covering up a heart of gold. In the academy Tom will push you hard, harder than anywhere else, to make you good, whether you like it or not, and he cares enough to do so. That’s why I stayed with him for the better part of a decade.
Anyway, my 6’2”, 300 lbs, and, let’s face it, seriously fat and way out of shape frame walks in -- admittedly with a little bit of big-man ego. I see Tom instantly trying to size me up. Of course he had me figured out in all of 5 seconds, as you’ll read in a moment. After signing the waiver, doing some drills, and learning a couple of submissions, I began to familiarize myself with the basic rules and gym etiquette. Then came sparring time. Tom loves the sparring sessions more than anything else, probably because they measure your progress in stamina and skill.
Tom pairs me up with, and I kid you not, a 150 lbs or less woman in her mid-40s and says, let’s see what you can do. She’s a purple belt with several years of BJJ experience, but I’m thinking to myself, WTF!? She’s half my size! I’m going to squash her! Then of course the whole situation runs counter to my internal man moral code: never fight girls. Not being given a choice, but also not wanting to be disrespectful, I decided to go really easy, as I didn’t want to hurt her or anything.
The bell sounds, I come slowly forward toward her, she quickly closes the distance, spider monkeys onto my back, chokes me, and forces me to tap out inside of 10 seconds flat. I was shocked and a little upset. Here I am going light and she takes advantage of me. Clearly she’s not playing around. To hell with this, no way I’m going to let that happen again! No more Mr. Nice Guy.
We touch hands, signaling the start of another round, but I go harder this time, trying to put her back on the mat. She again somehow sneaks around under my arm, like an octopus, and chokes me with the same damn move! To my credit, I lasted a few more seconds that time. This scenario repeated for about 4 to 5 minutes of the session, and for the life of me, as a big strong guy, I could not keep this tiny older woman off my back or stop her from robbing the oxygen from my brain. Oh, and all the while she is speaking to me in a calm, instructive voice. Humiliation is the best word to describe it.
At the end of class I’m thinking to myself, there is something to this Brazilian Jiu-Jitsu stuff. However, that wasn’t the most important thing to me at that particular moment. There was no way I could go on about my life happily knowing that such a woman could kick my butt so easily. Call it machismo if you like, I don’t care. It was clear to me that I had to keep training BJJ at least long enough to beat her. It only took three years. Fortunately for me, by that time the motivation to simply get better and enjoy myself had become my primary driver.
By the way, that woman is still training there. So if you are a big guy, and plan to drop by for a visit, don’t say I didn’t warn you. You could quickly find yourself on a journey to becoming a BJJ black belt.
Tuesday, May 31, 2011
Our Process — How We Do What We Do and Why
A while back I published what became an extremely popular post, looking behind the scenes at WhiteHat Sentinel’s backend infrastructure. On display were massive database storage clusters, high-end virtualization chassis, super fast ethernet backplanes, fat pipes to the internet, near complete system redundancy, round-the-clock physical security, and so on. Seriously cool stuff that, at the time, was to support the 2,000 websites under WhiteHat Sentinel subscription where we performed weekly vulnerability assessments.
Today, only seven months later, that number has nearly doubled to 4,000 — a level of success we’re very proud of. I guess we’re doing something right, because no one else, consultancy or SaaS provider, comes anywhere close. This is not said to brag or show off, but to underscore that scalability is a critical part of solving one of the Web security challenges many companies face, and an area we focus on daily at WhiteHat.
To meet the demand we scaled up basically everything. Sentinel now peaks at over 800 concurrent scans, sends roughly 300 million HTTP requests per month, a subset of which are 3.85 million security checks sent each week, resulting in around 185 thousand potential vulnerabilities that our Threat Research Center (TRC) processes each day (Verified, False-Positives, and Duplicates), and collectively generate 6TBs of data per week. This system of epic proportions has taken millions in R&D and years of effort by many of the top minds in Web security to build.
Clearly Sentinel is not some off-the-shelf, toy, commercial desktop scanner. Nor is it a consultant body shop hiding behind a curtain. Sentinel is a true enterprise class vulnerability assessment platform, leveraging a vast knowledge-base of Web security intelligence.
This is important because a large number of corporations have hundreds, even thousands, of websites that all need to be protected. Achieving the aforementioned figures without sacrificing assessment quality requires not only seriously advanced automation technology, but the development of a completely new process for performing website vulnerability assessments. As a security pro and vendor who values transparency, I believe this process, our secret sauce, something radically different from anything else out there, deserves to be better explained.
As a basis for comparison, the typical one-off consultant assessment/pen-test is conducted by a single person using an ad hoc methodology, with one vulnerability scan, one website at a time. Generally, high-end consultants are capable of thoroughly assessing roughly twenty websites in a year, each a single time — an annual ratio of 20:1 (assessments to people).
To start off, our highly acclaimed and fast-growing Threat Research Center is the department responsible for service delivery. At over 40 people strong, the entire team is located at WhiteHat headquarters in Santa Clara, California. All daily TRC workload is coordinated via a special software-based workflow management system, named “Console,” which we purpose-built to shuttle the millions of discrete tasks, across hundreds or thousands of websites, that need to be completed.
Work units include initial scan set-ups, configuring the ideal assessment schedule, URL rule creation, form training, security check customization, business logic flaw testing, vulnerability verification, findings review meetings, customer support, etc. Each of these work units can be handled by any available TRC expert, or team of experts, who specialize in a specific area of Web security, and each might take place during a different stage of the assessment process. Once everything is finished, every follow-on assessment becomes automated.
That is the real paradigm buster, a technology-driven website vulnerability assessment process capable of overcoming the arcane one-person-one-assessment-at-a-time model that stifles scalability. It’s as if the efficiency of Henry Ford’s assembly line met the speed of a NASCAR pit crew — this model dramatically decreases man hours necessary per assessment, leverages the available skills of the TRC, and delivers consistently over time. No other technology can do this.
As a long-time Web security pro, seeing such a symphony of innovation come together is really a sight to behold. And if there is any question about quality, we expect Sentinel PE testing coverage to meet or exceed that of any consultancy anywhere in the world. That is, no vulnerability that exposes the website or its users to a real risk of compromise should be missed.
Let’s get down to brass tacks. If all tasks were combined, a single member of the TRC could effectively perform ongoing vulnerability assessments on 100 websites a year. At 100:1, Sentinel PE is 5x more efficient than the traditional consulting model. Certainly impressive, but this is an apples-to-oranges comparison. The “100” in the 100:1 ratio counts websites, NOT assessments like the earlier cited 20:1 consultant ratio. The vast majority of Sentinel customer websites receive weekly assessments, not annual one-offs. So the more accurate calculation works out to 5200:1 (52 weeks). Sentinel also comes in varied flavors of coverage: SE and BE measure in at 220:1 and 400:1 websites to TRC members, respectively.
The customer experience perspective
Whenever a new customer website is added to WhiteHat Sentinel, a series of assessment tasks are generated by the system and automatically delegated via a proprietary backend workflow management system — “Console.” Each task is picked up and completed by either a scanner technology component or a member of our Threat Research Center (TRC) — our team of Web security experts responsible for all service delivery.
Scanner tasks include logging in to acquire session cookies, site crawling, locating forms that need valid data, customizing attack injections, vulnerability identification, etc. Tasks requiring some amount of hands-on work include scan tuning, vulnerability verification, custom test creation, filling out forms with valid data, business logic testing, etc. After every task has been completed and instrumented into Sentinel, a comprehensive assessment can be performed each week in a fully automated fashion, or at whatever frequency the customer prefers. No additional manual labor is necessary unless a particular website change flags someone in the TRC.
This entire collection of tasks, all of which must be completed when a new website is added to Sentinel, is a process we call “on-boarding.” From start to finish, the full upfront on-boarding process normally takes between 1 – 3 weeks and 2 – 3 scans.
From there, some people in the TRC are purely dedicated to monitoring hundreds of running scans and troubleshooting anything that looks out of place on an ongoing basis. Another team is tasked simply with verifying the hundreds of thousands of potential vulnerabilities the scanner flags each week, such as Cross-Site Scripting, SQL Injection, Information Leakage, and dozens of others. Verified results, also known as false-positive removal, is one of the things our customers say they like best about Sentinel, because it means many thousands of findings they didn’t have to waste their time on.
Yet another team’s job is to configure forms with valid data and mark which are safe for testing. All this division of labor frees up time for those who are proficient in business logic flaw testing, allowing them to focus on issues such as Insufficient Authentication, Insufficient Authorization, Abuse of Functionality, and so on. Contrast everything you’ve read so far with a consultant engagement that amounts to a Word or PDF report.
At this point you may be wondering if website size and client-side technology complexity cause us any headaches. The answer is not so much anymore. Over the last seven years we’ve seen and had to adapt to just about every crazy, confusing, and just plain silly website technology implementation the Web has to offer — of which there are painfully many. Then of course we’ve had to add support for Flash, Ajax, Silverlight, JavaScript, Applets, Active X, (broken) HTML(5), CAPTCHAs, etc.
The three most important points here are:
1) Sentinel has been successfully deployed on about 99% of websites we’ve seen. 2) Multi-million page sites are handled regularly without much fanfare. 3) Most boutique consultancies assess maybe a few dozen websites each year. We call this Monday through Friday.
Any questions?
Thursday, May 26, 2011
Being a CTO is a Pretty Cool Job
As Founder and Chief Technology Officer (CTO) of WhiteHat Security, I’ve had the privilege of creating my job description along the way and finding the highest and best use of my time. Over the years my responsibilities have varied widely to include pen-tester, manager, developer, visionary, evangelist, salesman, customer support rep, trainer, slide monkey, blogger, author, janitor, and whatever else that needed to get done. What has been tremendously fun and challenging is witnessing how my role steadily evolved. Even more so now, having “officially” relocated back to Maui. Pause there, I’ll come back to that.
Most recently, before “the move,” my time had been segmented into thirds. The first was spent interacting directly with enterprise security executives and software developers, where I learned the most about what in Web security seems to work, what doesn’t, and where they’d like to improve. This is something I really enjoyed because it kept my skills fresh and my guidance grounded in reality.
The second part of the job was taking my meeting notes back to WhiteHat, exchanging ideas with others, developing new attack techniques and defense strategies, honing company vision, and helping create solutions to solve real-world problems. If I’ve learned anything at WhiteHat, it’s that it takes a team, a brilliant team, to build something great.
To round things out, my job also had me compiling our Web security experience into relatable narratives, traveling the globe, and presenting at conferences to raise awareness. This is obviously the most visible part of the job and what I’m best known for. My badge wall collection serves as a nice record of my mileage.
In the early years I focused my presentations on explaining the basic attacks, like Cross-Site Scripting, Cross-Site Request Forgery, SQL Injection, Business Logic Flaws, etc., and demonstrating what damage could be done. This was priority one. Later, as people came up to speed, my “deep technical” presentations evolved into the Top Ten Web Hacking Techniques series.
Up until the last several years there was a strong need for me, THE CTO, to travel heavily and evangelize. As the number of top Web security minds employed at WhiteHat grew and the industry matured, my individual need to be a road warrior became less of a strategic company imperative. As a result, I cut back my conference schedule significantly. This was a welcome change because it allowed me to spend more time with my family and focus on another personal passion — security metrics.
My security metrics research grew into WhiteHat’s now widely popular Website Security Statistics Report and represents one of the accomplishments I’m most proud of. For me, being able to generate and freely share large-scale, first-hand, hard factual data about website vulnerabilities is highly rewarding. I’ve seen the findings cited everywhere. They help push the industry forward and have served to position WhiteHat as the benchmark for measuring the security posture of production websites. How many people get an opportunity to do that!?
To this day, the enterprise need for real-world and trended metrics is growing and is an area of research I’ll continue championing for the foreseeable future. It is that future, WhiteHat’s future, and my future that I want to discuss further.
Being able to view website security from such a uniquely strategic vantage point has enabled WhiteHat to expand into root cause analysis and remediation across the entire software security domain — from SDLC to operational controls, from source code to client-side security, and at scale. That’s right, WhiteHat is moving beyond simply “measuring the problem” into providing effective ways to further help solve the Web security problem, quantify cost savings, and reduce the risk of compromise.
As CTO of a fast growing company that’s headed into exciting new territory, I must focus an even greater amount of my time and attention on new and emerging technologies — something I couldn’t execute while traveling non-stop, explaining the finer points of XSS and SQLi.
What many may not know is that my wife and I were both raised in Hawaii. The island of Maui to be exact. Like many of our peers we had to leave that island paradise following high school to pursue better economic opportunities. Through a series of rather remarkable events, WhiteHat was born [near the end of 2001]. We spent practically every vacation back on Maui, our real home away from our bay area home, and of course the kids loved every second.
With a growing family, a more relaxed travel schedule, and being CTO of a company in a position of flourishing success (Profitable!), it was a good time to consider a better work/life balance that could accommodate everything going on in my life. Things lined up perfectly, so we’re back on Maui, thankful to be having our coconut cake and eating it too.
As for my job duties as CTO, I’ll be attending ten or so conferences a year and using the time freed up to visit more companies and build next-generation Web security technologies.
WhiteHat Security has dozens of the best Web security engineers in the world, many of whom are already carrying the torch in presenting on the latest developments in: XSS; CSRF; filter evasion; canonicalization attacks; new RFI and LFI attacks; DAST and SAST and the correlation of both; and much more you can learn from our presentations, read on the blog, or maybe see on display at Black Hat.
In case anyone is concerned: WhiteHat has NOT been acquired, nor are we struggling financially. The truth is quite the opposite. Every one of the last several quarters has broken sales records, and we’ve been hiring nonstop all year. I’ve also not retired or checked out to live a life of leisure — though I fear my wife Llana may have. I’m not done with the Web security industry just yet. When I am, I’ll be the first to tell you.
I wish I could share some of the projects we’ve been quietly working on over the last several months, but soon enough we’ll make some announcements and shake things up! This is a very exciting time. The most exciting part for me is seeing a company I started reach a point where some of the biggest companies in the world, our customers, are demanding we lead the charge on new innovations and help them actually solve their Web security pains. It’s a big responsibility, and we’re ready for it.
Wednesday, May 25, 2011
4 Tips to Get a Conference “Call for Papers” Submission Accepted
If you’ve done unique research in information security, work that others would be interested in learning about, the conference circuit provides an amazing opportunity to travel the world (for free!), advance your career, and share your findings with others. All you have to do is respond to one of the literally hundreds of Calls for Papers (CFPs) that conference organizers publish each year and get selected to present. It sounds simple, but the process can be a bit intimidating, especially for those who are new to the industry.
Since 2001, I’ve given precisely 266 public presentations across 5 continents. During that time I’ve responded to a great number of CFPs, and while it doesn’t happen so often anymore, I have had many presentations rejected just like everyone else, each of which I took as a learning experience. I’ve also served on the speaker review board for HiTB and now Black Hat. In an effort to help others, I’d like to offer a few tips for would-be presenters on how they can increase their chances of getting selected. But this ONLY works if you are willing to invest the energy to research something important, something cool that people should know. That part is all on you.
Because I was willing, I’ve gotten to hang out with Jeff Moss at an 80s nightclub in Singapore and been ferried around in a limousine bus with 30 other hackers, gone on all-night drinking marathons with David Litchfield in Amsterdam and been guided through the nightlife scene by street beggars, gone on safari and had a lion incident with the Sensepost crew in South Africa, gone 4×4 dune bashing in Dubai, chilled with Mozilla at a late-night Cookies & Milk Pajama Party in a private Las Vegas suite, done BJJ battle with Chris Hoff in San Francisco, and the list goes on. If you are so inclined, there is absolutely no reason why you can’t as well.
1) The BIO — differentiate and build credibility
Every CFP requires a personal biography (a bio), a text snippet that describes who you are. A bio also presents a key opportunity to differentiate yourself from the other highly qualified presenters competing for speaking slots. In no more than 5-6 concise sentences, a bio must convincingly explain why you are an authority on your subject matter and why people should listen to what you have to say. A bio can be authored in different styles. For mine, I choose to go with a simple, professional style that describes relevant experience and has an original feel.
2) Title & Abstract — Get to the point!
A conference is a business just like any other and has a product to sell. That product is YOU! You and your presentation are the reason why people take the time to attend — well, that and the parties and free schwag. Your presentation’s title and abstract must clearly describe to the organizers what it is that you are sharing, why attendees will care, why the content is fresh and different, and why they should see it presented live versus simply waiting for the slides to appear later on some archive.
These are the things speaker review boards want to know to make a decision, and the easier you make it on them, the better your odds. The idea of mistakenly passing on a good talk sucks, but reviewers may need to read through dozens, or maybe hundreds, of submissions. So if your abstract gets these points across in 7 – 10 seconds of reading, that’s ideal. Check out the Black Hat conference archives for really good examples. Locate the ones that are most compelling to you and use them as inspiration.
Bonus Points: if your presentation draws media attention to the conference, makes a new tool available, or offers scary onstage hacking demonstrations.
Negative Points: if your presentation is a thinly veiled or overt sales pitch. Some presenters will bait and switch, but this will only negatively impact your reputation.
3) Submit early, submit often
Conference organizers like to get their speaker lineup formalized and posted as early as possible so they can begin marketing promotions. Also, infosec speakers are notorious for playing chicken with CFP deadlines, which is horribly frustrating for speaker selection committees. Be well prepared with your material ahead of time, as you should be anyway; prospective speakers who submit earlier stand a higher chance of getting selected because there is less competition. Another benefit is that if you get rejected, it’ll be earlier in the CFP process, allowing you to take a second bite at the apple.
4) Follow up and be responsive
Conference organizers like to tap speakers they are familiar with and have had a good prior experience with. Known speakers, of course, have an easier time getting a speaking slot than a new and unknown person. If you are rather “new and unknown,” this doesn’t mean conference organizers won’t take a chance on you, but it’s in your best interest to personally follow up with a phone call or email to build a relationship. The conference organizer might then give your presentation a closer look. Above all though, be responsive to communication. Few things are more frustrating than having to chase down speakers.
If you are a conference organizer, what other tips do you have?
… now if we could just get the average infosec speaker to get some media training.
Monday, May 16, 2011
Web security content moving to new WhiteHat Security corp blog
Here are some of my most recent posts that you may have missed:
- Mythbusting: Static Analysis Software Testing – 100% Code Coverage
- PROTIP: Security as a Differentiator
- PROTIP: Publish Security Scoreboards Internally
- Recent SQL Injection Hacks – Things You Should Know
- If You Want to Improve Something, Measure It
- An Incident Is a Terrible Thing to Waste (even those of others)
- (CYA) Cover Your Applications – All of Them
- The Necessity of Compliance Alone Is Insufficient for Justifying Security Investment
- Theory: Google will open source their Web server — or should
- Are 20% of developers responsible for 80% of the vulnerabilities?
I'll continue posting here, only at a much lower volume, and exclusively about personal things like my adventures in Brazilian Jiu-Jitsu.
Thursday, April 28, 2011
Are 20% of Developers Responsible for 80% of the Vulnerabilities?
Personal experience would have most of us agreeing that not all developers are equally productive. We’ve all seen cases where a few developers generate way more useful code in a given amount of time than others. If that’s the case, then it may stand to reason that the opposite could be true — that a few developers are responsible for the bulk of the shoddy, vulnerable code. Think about it: when vulnerabilities are introduced, are they fairly evenly attributable across the developer population or clumped together within a smaller group?
The answer, backed up by data, would have a profound effect on general software security guidance. It would allow us to more efficiently allocate security resources in developer training, standardized framework controls, software testing, personnel retention, etc. Unfortunately very few people in the industry might have the data to answer this question authoritatively, and then only from within their own organization. Off the top of my head I can only think that Adobe, Google, or Microsoft might have such data handy, but they’ve never published or discussed it.
In the meantime I think we’d all benefit from hearing some personal anecdotes. From your experience, are 20% of developers responsible for 80% of the vulnerabilities?
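If you do have attribution data in a bug tracker or assessment history, the 80/20 question can be checked with a few lines of code. Here is a minimal, hypothetical sketch; the function name, data shape, and 20% threshold are mine for illustration, not any existing tool:

```python
def vulnerability_concentration(vulns_by_developer, dev_fraction=0.2):
    """Return the share of all vulnerabilities attributable to the
    top `dev_fraction` of developers, ranked by vulnerability count.

    `vulns_by_developer` maps developer -> number of vulnerabilities
    they introduced. A result near 0.8 with dev_fraction=0.2 would
    support the Pareto-style 80/20 hypothesis."""
    counts = sorted(vulns_by_developer.values(), reverse=True)
    total = sum(counts)
    if not total:
        return 0.0
    # Top N developers, rounded, but always at least one.
    k = max(1, round(len(counts) * dev_fraction))
    return sum(counts[:k]) / total

# Example: six developers; the worst offender wrote 40 of 100 bugs.
share = vulnerability_concentration(
    {"ann": 40, "bob": 30, "cat": 10, "dan": 10, "eve": 5, "fay": 5})
```

A result well above the developer fraction would suggest that focused training or code review for a small group could pay off disproportionately.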
Wednesday, April 13, 2011
Theory: Google will Open Source Their Web Server — or Should
The first salvo in the “Let’s make the web faster” battle plan was unveiling the speed demon Google Chrome and escalating the second browser war. At that very moment Microsoft was faced with a difficult choice. Internet Explorer must either match pace with Google Chrome or lag behind and risk losing its dominant market-share position to its sworn enemy. Whatever Microsoft decided would be a win for Google, who couldn’t care less about browser market-share because Internet Explorer had been slowing things down for years. If Microsoft sped up Internet Explorer, Google Apps would look that much better as opposed to Office. If it did not, browser market-share would move to Chrome or Firefox and Google Apps would still look better. What else could Microsoft do? They jumped into the browser speed game.
My theory is the second stage of the “Let’s make the web faster” plan is about to begin and will take place on the other end of the connection, the Web server. Google has been developing an experimental protocol called SPDY (pronounced “SPeeDY”) meant as a modern day open source alternative to HTTP, TCP, and SSL. SPDY offers unlimited concurrent streams over a single TCP connection, request prioritization, header compression, server-initiated requests, and doesn’t require any changes to be made to the existing networking infrastructure. Reports state SPDY deployments are resulting in a performance boost between 15% and 50%. That’s huge for any website operator. Oh, and if you are a Chrome user, you are already using SPDY on Google.com properties.
To get SPDY widely adopted, Google will likely open source a web server (GWS) that supports the protocol. Work has already begun on an Apache module. Microsoft, with a respectable 18.83% IIS market-share that drives Windows server sales and .Net development, must either support SPDY or risk losing market share to GWS or Apache. And to make the decision just that much harder, Microsoft must also slap SPDY into Internet Explorer. As before, Google would be fine either way: both options make Google Apps look better as compared to Office and get more ads delivered.
What does this mean for the Web security professional? Time to learn how to deploy, defend, interrogate, and hack SPDY.
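Interrogating SPDY starts with its framing, which differs from HTTP’s text protocol. Per the public SPDY draft specification, every frame begins with an 8-byte header whose first bit distinguishes control frames from data frames. Here is a minimal parsing sketch, illustrative only, which ignores the zlib-compressed header blocks that follow:

```python
import struct

def parse_spdy_frame_header(data):
    """Parse the 8-byte header that begins every SPDY frame.

    Control frames:  [C=1|version:15][type:16][flags:8][length:24]
    Data frames:     [C=0|stream_id:31]      [flags:8][length:24]
    (Layout per the public SPDY draft specification.)"""
    if len(data) < 8:
        raise ValueError("need at least 8 bytes")
    word1, flags, len_hi, len_lo = struct.unpack(">IBBH", data[:8])
    length = (len_hi << 16) | len_lo
    if word1 >> 31:  # control bit set
        return {"control": True, "version": (word1 >> 16) & 0x7FFF,
                "type": word1 & 0xFFFF, "flags": flags, "length": length}
    return {"control": False, "stream_id": word1 & 0x7FFFFFFF,
            "flags": flags, "length": length}

# A SPDY v2 SYN_STREAM (control type 1, flags=1, 10-byte payload):
hdr = parse_spdy_frame_header(bytes.fromhex("800200010100000a"))
```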
Tuesday, April 05, 2011
The Necessity of Compliance Alone Is Insufficient for Justifying Security Investment
It is often much easier to justify investing security resources that are legally or contractually mandated than basing such investments on the overall value to the company of an adequately funded information risk management group. Given this disparity of funding “must do” tasks while overlooking “should do” tasks, security teams can take the initiative and do the “non-essential” actions that strengthen their organizations’ security as they meet compliance requirements. This approach also provides a security group the information it needs to justify requests for risk management resources.
The compliance standards currently applicable to application security include PCI-DSS, HIPAA, FFIEC, GLBA, ISO 27001/27002, and Sarbanes-Oxley. An organization’s failure to comply with these standards can lead to fines, legal action, and sometimes even business shutdown. When executive management is faced with these possibilities, a typical conversation with the company’s security staff usually results in the conclusion that, “The company must spend $A on X compliance mandate because non-compliance with regulation X carries an estimated cost of $B.”
As obvious as it may seem that compliance requirements should convince management to allocate funds for security, the necessity for compliance is often insufficient to get management to actually make the needed investments.
Government and/or industry-mandated regulations can frequently differ in how they impact an organization, and typically can be applied differently to each organization. Some organizations may be able to change specific aspects of how they do business, and thus meet mandated requirements by changing the parameters of compliance. Other organizations, after estimating that the punishment for non-compliance will cost less than the costs necessary to comply, may decide to simply ignore the notification to comply.
At WhiteHat we think a more realistic way to look at compliance issues is that in some instances you MIGHT get hacked, but for ignoring certain regulations you WILL get audited. In either case it is essential to understand the “historical record” of how a particular compliance standard has been applied within an industry; and to then be able to estimate the capital and operational expenditures required for your organization to comply. This way, when management asks you to justify your request for funding based on how NOT DOING SO would impact business, you’ll have the information immediately at hand, which will make the decision-making process far easier.
Sunday, April 03, 2011
(CYA) Cover Your Applications – All of Them
Based on this scenario, what are the industry standards for spending and for best-practice safeguards in application security? Several resources are available to use as minimum standards of due care.
The most well known is “The Payment Card Industry’s Data Security Standard,” and specifically section 6.6. This section refers to the OWASP Top Ten, which is the level of application security that credit card merchants must maintain. Substitute any digital assets that need to be protected for the term “cardholder data,” and Section 6.6 standards can be applied to just about any organization.
For estimating a reasonable security budget to meet industry best-practices, the OWASP Security Spending Benchmarks project and the “State of Web Application Security” report by the Ponemon Institute provide recent data on organization spending habits. The Building Security In Maturity Model (BSIMM) study of thirty large-scale software security initiatives also details the activities that organizations typically implement to meet security standards.
Overall, however, it is important to remember that although adhering to arbitrary best practices can serve as a starting point for establishing adequate security, best-practices alone are insufficient for building a comprehensive and effective information security program.
Thursday, March 31, 2011
An Incident Is a Terrible Thing to Waste (even those of others)
Hacks happen. The data captured by Verizon’s Data Breach Investigations Report, DataLossDB, and WASC’s Web Hacking Incident Database make this reality painfully obvious. The summary: most incidents, and the bulk of the data lost, are a direct result of vulnerable Web applications being exploited. As further evidence, Forrester’s 2009 research reported, “62% of organizations surveyed experienced breaches in critical applications in a 12 month period.” Dasient, a firm specializing in web-based malware, said “[In 2011] The probability that an average Internet user will hit an infected page after three months of Web browsing is 95 percent.”
These resources and the compromises of Apache.org, Comodo, Gawker, HBGary Federal, MySQL.com, NYSE, Sun.com, Zynga, and countless others are a good excuse to have a conversation with management about your organization’s potential risks.
Despite the facts, the idea of getting hacked is not often a conscious thought in the minds of executives, so of course it’s only a matter of time before the business becomes another statistic. When this happens and the business is suddenly awakened from a culture of security complacency, all eyes will become focused on understanding exactly what happened, why it happened, and how much worse it could have been. In the aftermath of a breach, employee dismissals and business collapses are rare; more often than not, security budgets are expanded. Few things free up security dollars faster than a compromise, except for maybe an auditor. The security department will have the full attention of management, the board, and customers, who all want to know what steps are being taken to ensure this never happens again. Post breach is an excellent time to put a truly effective security program in place, not just built around point products, but designed around outcomes and to have a lasting impact.
Wednesday, March 30, 2011
If You Want to Improve Something, Measure It
During the past five years WhiteHat Sentinel has performed comprehensive vulnerability assessments on thousands of websites. These assessments are ongoing, as performing continuous assessments is integral to our core values at WhiteHat. With each assessment we analyze the results, discover new facts about Web application security, share that knowledge through reports, and receive feedback from companies about which information has done the best job helping them secure their websites.
Now that we’ve gathered this large amount of data on website security, we believe that any security professional can use the seven metrics listed below, all measurable and mostly automatable, to track the performance of their application security program and the software development lifecycle that drives it.
The Seven Key Application Security Metrics
1. Discoverability – The Attacker Profile
What are the risks of a vulnerability being discovered, and what type of attack (Opportunistic, Directed Opportunistic, or Fully Targeted) will the attacker use? Does the attacker need to be authenticated in order to breach security? Is the damage that the attacker can cause within the risk tolerance of the targeted organization?
2. Exploitability – The Difficulty of Attacking
Are the site’s vulnerabilities possible to exploit? Theoretically? Possibly? Probably? Easily? Are proof-of-concept exploits available for demonstration purposes? Is it becoming more difficult for attackers to exploit issues previously identified? That is, from day to day are there fewer trivial vulnerabilities per application, and are the remaining vulnerabilities less likely to be exploited?
3. Technical and Business Impact – The Severity of the Attack
What negative impacts can the vulnerabilities have on the business – both technically and financially? If a vulnerability is exploited, is the breach communicated within the organization and is an estimate made of the possible damage – quickly? Once an attack is discovered will business stakeholders and development groups within the company prioritize the risk and take action to remediate the problem accordingly?
4. Vulnerabilities-per-Input – The Attack Surface
How common are exploitable vulnerabilities, and how deeply do they extend across the codebase? Does the codebase become more secure, or less secure, as new code and acquired code are added; as legacy code requirements increase; and as new technologies are introduced?
5. Window of Exposure – The Exposure to Attacks
How often is the organization exposed to exploitable vulnerabilities: Always, Frequently, Regularly, Occasionally, or Rarely? How effective are the quarterly or annual remediation efforts? What is the best method for shortening the window of exposure time, i.e., the risk of compromise? Is the risk diminished by reducing the number of vulnerabilities being introduced? By increasing remediation speeds? By fixing more of the system security issues?
6. Remediation Costs – The Opportunity for Greater Savings
What are the costs of fixing the vulnerabilities that are discovered? Over time, is the cost-per-defect to remediate the vulnerabilities decreasing? Is it possible to decrease “lost opportunity costs,” i.e., to reduce the time that developers spend “fixing” security issues rather than working on new business projects? And how can the efficiency of developers’ remediation efforts be improved?
7. Defeating Vulnerabilities – The Focus on Identifying Their Source
What type of code or which business unit is introducing the vulnerabilities? Are the vulnerabilities internally developed, third-party developed, from acquired software, etc.? When multiple vulnerabilities are discovered, where should attention be focused first? Can the level of software security be improved through developer training, contractual obligations, software acceptance policies, etc.?
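Several of these metrics reduce to straightforward calculations once vulnerability open and close dates are recorded. As one illustration, the Window of Exposure (metric 5) can be computed as the number of days in a period on which at least one serious vulnerability was open. The sketch below is my own simplified take, not a WhiteHat Sentinel API:

```python
from datetime import date, timedelta

def window_of_exposure(intervals, period_start, period_end):
    """Count the days in [period_start, period_end) on which at least
    one serious vulnerability was open.

    `intervals` is a list of (opened, closed) date pairs; closed=None
    means the vulnerability is still open. Overlapping intervals are
    only counted once."""
    exposed = set()
    for opened, closed in intervals:
        start = max(opened, period_start)
        end = min(closed or period_end, period_end)
        day = start
        while day < end:
            exposed.add(day)
            day += timedelta(days=1)
    return len(exposed)

# Two overlapping findings in January 2011: exposed Jan 1 - Jan 19.
days = window_of_exposure(
    [(date(2011, 1, 1), date(2011, 1, 11)),   # fixed after 10 days
     (date(2011, 1, 5), date(2011, 1, 20))],  # fixed after 15 days
    date(2011, 1, 1), date(2011, 2, 1))
```

Tracked month over month, this single number makes the "Always / Frequently / Rarely" exposure categories concrete and comparable.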
“To Improve the Process, Measure the Process”
In summary, corporate security teams often tell us that their businesses need to become much more concerned about their Web application security. I’m sure many of you readers can relate to such comments. At WhiteHat we’ve learned that the three primary obstacles to establishing a successful website security program are: (1) compliance issues, (2) company policies, and (3) a lack of data that will support the need for greater security.
While compliance issues and company policies may sometimes work to get management’s attention, I believe that data – hard numbers – work much more effectively.
When management understands how secure, or insecure, its websites are, how secure they have been in the past, and how the company’s current Web security compares to the security of its competitors, then almost certainly the resistance to devoting more resources to Web security will diminish. Furthermore, there will be new respect within management for both the Web security services provided and the teams providing them.
As the old saying goes, “If you want to improve something, measure it.” Because almost anything that can be measured will improve. Which of the 7 metrics discussed here are you measuring today? Which of these metrics are you currently unable to measure at all? And are any of the important metrics presented here missing from your list? Or maybe you have new ones to add!