Tuesday, July 17, 2018

The evolutionary waves of the penetration-testing / vulnerability assessment market

Over the last two decades, the penetration-testing / vulnerability assessment market has gone through a series of evolutionary waves that went something like this…

1st Wave: “You think we have vulnerabilities and want to hire an employee to find them? You’re out of your mind!"

The business got over it and InfoSec people were hired for the job.

2nd Wave: "You want us to contract with someone outside the company, a consultant, to come onsite and test our security? You’re out of your mind!"

The business got over it and consultant pen-testing took over.

3rd Wave: "You want us to hire a third-party company, a scanning service, to test our security and store the vulnerabilities off-site? You’re out of your mind!"

The business got over it and SaaS-based vulnerability assessments took over.

4th Wave: "You want us to allow anyone in the world to test our security, tell us about our vulnerabilities, and then reward them with money? You’re out of your mind!"

Businesses are getting over it and the crowd-sourcing model is taking over.

The evolution reminds us of how the market for ‘driving’ and ‘drivers’ changed over the last century. People first drove their own cars around, then many hired personal drivers, then came cars-for-hire services (cabs / limos) with ‘professional’ drivers you didn’t personally know, and now Uber/Lyft, where you basically jump into a complete stranger’s car. Soon, we’ll jump into self-driving cars without a second thought.

As we see, each new wave doesn't necessarily replace the last -- it's additive. Provided there is an economically superior ROI and value proposition, people typically get over their fear of the unknown and adopt something new and better. It just takes time.


Monday, May 07, 2018

All these vulnerabilities rarely matter.

There is a serious misalignment of interests between Application Security vulnerability assessment vendors and their customers. Vendors are incentivized to report everything they possibly can, even issues that rarely matter. On the other hand, customers just want to know about the vulnerabilities that are likely to get them hacked. Every finding beyond that is a waste of time, money, and energy, which is precisely what’s happening every day. Let’s begin exploring this with some context:

Any Application Security vulnerability statistics report published over the last 10 years will state that the vast majority of websites contain one or more serious issues — typically dozens. To be clear, we’re NOT talking about websites infected with malvertisements or network-based vulnerabilities that can be trivially found via Shodan and the like. Those are separate problems. I’m talking exclusively about Web application vulnerabilities such as SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and several dozen more classes. The data shows only half of those reported vulnerabilities ever get fixed, and doing so takes many months. Pair this with Netcraft’s data, which counts over 1.7 billion sites on the Web. Simple multiplication tells us that’s A LOT of vulnerabilities in the ecosystem lying exposed.

The most interesting and unexplored question to me these days is NOT the sheer size of the vulnerability problem, or why so many issues remain unresolved, but instead figuring out why all those ‘serious’ website vulnerabilities are NOT exploited. Don’t get me wrong, a lot of websites certainly do get exploited, perhaps on the order of millions per year, but it’s certainly not in the realm of tens or even hundreds of millions as the data suggests it could be. And the fact is, for some reason, the vast majority of plainly vulnerable websites with these exact issues remain unexploited for years upon years.

Some possible theories as to why are:
  1. These ‘vulnerabilities’ are not really vulnerabilities in the directly exploitable sense.
  2. The vulnerabilities are too difficult for the majority of attackers to find and exploit.
  3. The vulnerabilities are only exploitable by insiders.
  4. There aren’t enough attackers to exploit all or even most of the vulnerabilities.
  5. There are more attractive targets or exploit vectors for attackers to focus on.
Other plausible theories?

As someone who worked on the Application Security vulnerability assessment vendor side for 15+ years, here is something to consider that speaks to theories #1 and #2 above.

During the typical sales process, ‘free’ competitive bakeoffs with multiple vendors are standard practice. 9 out of 10 times, the vendor who produces the best results in terms of high-severity vulnerabilities with low false-positives will win the deal. As such, every vendor is heavily incentivized to identify as many vulnerabilities as they can to demonstrate their skill and overall value. Predictably, then, every little issue gets reported, from the most basic information disclosure issues to the extremely esoteric and difficult to exploit. No vendor wants to be the one who missed or didn’t report something that another vendor did and risk losing a deal. More is always better. As further evidence, ask any customer about the size and fluff of their assessment reports.

Understanding this, the top vulnerability assessment vendors invest millions upon millions of dollars each year in R&D to improve their scanning technology and assessment methodology to uncover every possible issue. And it makes sense because this is primarily how vendors win deals and grow their business.

Before going further, let’s briefly discuss the reason why we do vulnerability assessments in the first place. When it comes to Dynamic Application Security Testing (DAST), specifically testing in production, the whole point is to find and fix vulnerabilities BEFORE an attacker finds and exploits them. It’s just that simple. And technically, it takes the exploitation of just one vulnerability for the attacker to succeed.

Here’s the thing: if attackers really aren’t finding, exploiting, or even caring about these vulnerabilities, as we can infer from the data above, then the value in discovering them in the first place becomes questionable. The application security industry is heavily incentivized to find vulnerabilities that, for one reason or another, have little chance of actual exploitation. If that’s the case, then all those vulnerabilities that DAST is finding rarely matter much, and we’re collectively wasting precious time and resources focusing on them.

Let’s tackle Static Application Security Testing (SAST) next. 

The primary purpose of SAST is to find vulnerabilities during the software development process BEFORE they land in production, where they’ll eventually be found by DAST and/or exploited by attackers. With this in mind, we must then ask what the overlap is between vulnerabilities found by SAST and DAST. If you ask someone who is an expert in both SAST and DAST, specifically those with experience in vulnerability correlation, they’ll tell you the overlap is around 5-15%. Let’s state that more clearly: somewhere between 5% and 15% of the vulnerabilities reported by SAST are also found by DAST. And let’s remember, from an I-don’t-want-to-be-hacked perspective, DAST- or attacker-found vulnerabilities are really the only vulnerabilities that matter. Conceptually, SAST helps find those issues earlier. But does it really? I challenge anyone, particularly the vendors, to show actual broad field evidence.
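As a toy illustration of that correlation exercise, here’s a back-of-the-envelope sketch that computes the overlap between two made-up sets of findings. The identifiers and the assumption that findings can be matched one-to-one are mine; real SAST/DAST correlation is far messier than this:

```python
# Toy sketch only: real SAST/DAST correlation is much harder because the two
# tools describe findings differently (source file and line vs. URL/parameter).
# Here we pretend each finding reduces to a comparable identifier.

sast_findings = {"login:sqli", "search:xss", "profile:xss", "admin:csrf",
                 "util:weak-hash", "config:hardcoded-secret"}
dast_findings = {"search:xss", "admin:csrf", "cart:sqli"}

overlap = sast_findings & dast_findings
overlap_rate = len(overlap) / len(sast_findings)

print(f"{overlap_rate:.0%} of SAST findings were also confirmed by DAST")
# With these made-up numbers the overlap is 33%; the field anecdotes above
# put the real figure closer to 5-15%.
```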

Anyway, what then are all those OTHER vulnerabilities that SAST is finding, which DAST / attackers are not? Obviously, it’ll be some combination of theories #1 - #3 above. They’re not really vulnerabilities, they’re too difficult to remotely find/exploit, or attackers don’t care about them. In any case, what’s the real value of the other 85-95% of vulnerabilities reported by SAST? A: Not much. If you want to know why so many reported 'vulnerabilities' aren’t fixed, this is your long-winded answer.

This is also why cyber-insurance firms feel comfortable writing policies all day long, even though they know full well their clients are technically riddled with vulnerabilities: statistically, they know those issues are unlikely to be exploited or lead to claims. That last part is key — claims. Exploitation of a vulnerability does not automatically result in a ‘breach,’ which does not necessarily equate to a ‘material business loss,’ and loss is the only thing the business or their insurance carrier truly cares about. Many breaches do not result in losses. This is a crucial distinction that many InfoSec pros fail to make: breach and loss are NOT the same thing.

So far we’ve discussed the misalignment of interests between Application Security vulnerability assessment vendors and their customers, the net result of which is that we’re wasting huge amounts of time, money, and energy finding and fixing vulnerabilities that rarely matter. If so, the first thing we need to do is come up with a better way to prioritize and justify remediation, or not, of the vulnerabilities we already know exist and should care about. Second, we must invest our resources in the application security testing process more efficiently.

We’ll begin with the simplest risk formula: probability (of breach) x loss (expected) = risk.

Let’s make up some completely bogus numbers to fill in the variables. In a given website we know there’s a vanilla SQL Injection vulnerability in a non-authenticated portion of the application, which has a 50% likelihood of being exploited over a one-year period. If exploitation results in a material breach, the expected loss is $1,000,000 for incident handling and cleanup. Applying our formula:

$1,000,000 (expected loss) x 0.5 (probability of breach) = $500,000 (risk)

In which case, it can be argued that if the SQL Injection vulnerability in question costs less than $500,000 to fix, then fixing it is the reasonable choice. And, the sooner the better. If remediation costs more than $500,000, and I can’t imagine why, then leave it as is. The lesson is that the less a vulnerability costs to fix, the more sense it makes to do so. Next, let’s change the variables to the other extreme. We’ll cut the expected loss figure in half and reduce the likelihood of breach to 1% over a year.

$500,000 (expected loss) x 0.01 (probability of breach) = $5,000 (risk)

Now, if remediating the SQL Injection vulnerability costs less than $5,000, it makes sense to fix it. If it costs more, or far more, then one could argue it makes business sense not to. This is the kind of decision that makes the vast majority of information security professionals extremely uncomfortable, which is why they instead like to ask the business to “accept the risk.” This way their hands are clean, they don’t have to expose their inability to do risk management, and they can safely pull an “I told you so” should an incident occur. Stated plainly: if your position is that the business should fix each and every vulnerability immediately regardless of cost, then you’re really not on the side of the business, and you will continue being ignored.
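To make the arithmetic concrete, here’s a minimal sketch of the fix-or-accept decision rule described above, using the same made-up loss and probability figures plus a hypothetical $25,000 remediation cost that I’m inventing purely for illustration:

```python
# Hypothetical numbers only -- a toy sketch of the expected-loss math above,
# not a real risk model.

def annualized_risk(expected_loss, probability_of_breach):
    """Simplest possible formula: probability (of breach) x loss (expected) = risk."""
    return expected_loss * probability_of_breach

def fix_or_not(remediation_cost, expected_loss, probability_of_breach):
    """Fix the vulnerability when remediation costs less than the risk it removes."""
    risk = annualized_risk(expected_loss, probability_of_breach)
    return "fix" if remediation_cost < risk else "accept (for now)"

# Scenario 1: unauthenticated SQL Injection, 50% chance of exploitation this year.
print(annualized_risk(1_000_000, 0.5))      # 500000.0
print(fix_or_not(25_000, 1_000_000, 0.5))   # fix

# Scenario 2: same class of issue, 1% chance of exploitation, smaller loss.
print(annualized_risk(500_000, 0.01))       # 5000.0
print(fix_or_not(25_000, 500_000, 0.01))    # accept (for now)
```

Swap in your own loss, probability, and remediation estimates; the point is the comparison, not the precision of any single number.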

What’s needed to enable better decision-making, specifically deciding which known vulnerabilities to fix or not fix, is a purpose-built risk matrix for application security: a matrix that takes each vulnerability class, assigns a likelihood of actual exploitation using whatever data is available, and contains an expected loss range. Where things will get far more complicated is that the matrix should also take into account the authentication status of the vulnerability, any mitigating controls, the industry, resident data volume and type, insider vs. external threat actor, and a few other things to improve accuracy.
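To make the idea a bit more tangible, here’s a minimal sketch of what a few rows of such a matrix might look like in code. Every class name, likelihood, and loss range below is an illustrative placeholder, not real data, and the extra factors just mentioned (authentication status, mitigating controls, industry, and so on) are deliberately left out to keep it simple:

```python
from dataclasses import dataclass

# Illustrative placeholder values only -- a real matrix would be driven by
# actual exploitation and loss data, plus the additional accuracy factors
# described above.

@dataclass
class MatrixEntry:
    vuln_class: str
    annual_exploit_likelihood: float  # probability of exploitation over a year
    expected_loss_range: tuple        # (low, high) in dollars

    def risk_range(self):
        low, high = self.expected_loss_range
        return (low * self.annual_exploit_likelihood,
                high * self.annual_exploit_likelihood)

matrix = [
    MatrixEntry("SQL Injection (unauthenticated)", 0.50, (250_000, 1_000_000)),
    MatrixEntry("Cross-Site Scripting (authenticated)", 0.10, (50_000, 250_000)),
    MatrixEntry("Information disclosure (low value)", 0.01, (1_000, 10_000)),
]

# Rank known vulnerabilities by the top end of their risk range, worst first.
for entry in sorted(matrix, key=lambda e: e.risk_range()[1], reverse=True):
    print(entry.vuln_class, entry.risk_range())
```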

While never perfect, as risk modeling never is, I’m certain we could begin with something incredibly simple that would far outperform the way we currently do things — HIGH, MEDIUM, LOW (BLEH!). When it comes to vulnerability remediation, how exactly is a business supposed to make good, informed decisions using traffic-light signals? As we’ve seen, and as all previous data indicates, they don’t. Everyone just guesses, and 50% of issues go unfixed.

InfoSec's version of the traffic light: this light is green because in most places where we put this light it makes sense for it to be green, but we're not taking into account anything about the current street’s situation, location, or traffic patterns. Should you trust that the light has your best interest at heart? No. Should you obey it anyway? Yes. Because once you install something like that, you end up having to follow it, no matter how stupid it is.

Assuming for a moment that the aforementioned matrix is created, it suddenly fuels a solution to the inefficiency of the application security testing process. Since we’ll know exactly what types of vulnerabilities we care about in terms of actual business risk and financial loss, investment can be prioritized to look only for those and ignore all the other worthless junk. Those bulky vulnerability assessment reports would likely dramatically decrease in size and increase in value.

If we really want to push forward our collective understanding of application security and increase the value of our work, we need to completely change the way we think. We need to connect pools of data. Yes, we need to know what vulnerabilities websites currently have — the ones that matter. We need to know what vulnerabilities various application security testing methodologies actually test for. Then we need to overlap this data set with the vulnerabilities attackers predominantly find and exploit. And finally, within that data set, which exploited vulnerabilities lead to the largest dollar losses.

If we can successfully do that, we’ll increase the remediation rates of the truly important vulnerabilities, decrease breaches AND losses, and more efficiently invest our vulnerability assessment dollars. Or, we can leave the status quo for the next 10 years and have the same conversations in 2028. We have work to do and a choice to make. 


Tuesday, March 27, 2018

My next start-up, Bit Discovery



The biggest and most important unsolved problem in Information Security, arguably in all of IT, is asset inventory. Rather, the lack of an up-to-date asset inventory that includes all websites, servers, databases, desktops, laptops, data, and so on. Strange as it sounds, the vast majority of organizations with more than even a handful of websites simply do not know what they are, where they are, what they do, or who is responsible for them. This is especially strange because an asset inventory is the first step of every security standard and is recommended by every expert.

After many years of research, it turns out the reason why is rather simple: there are currently no enterprise-grade products, or at least none widely adopted, that solve this problem. This matters because it’s obviously impossible to secure what you don’t know you own. And without an up-to-date asset inventory, the most basic and reasonable security questions simply can’t be answered:
  • What percentage of our websites have been tested for vulnerabilities?
  • Which of our websites have GDPR, PCI-DSS, or other compliance concerns?
  • Which of our websites are up-to-date on their patches, or not?
  • When an organization is acquired, what IT assets does it have?
As of today, with Bit Discovery, all of this is about to change. Bit Discovery is a website asset inventory solution designed to be lightning fast, super simple, and incredibly comprehensive.

While identifying the websites owned by a particular organization may sound simple at first blush, let me tell you, it’s not. In fact, asset inventory is probably the most challenging technical problem I’ve ever worked on in my entire career. As Robert ‘RSnake’ Hansen, a member of Bit Discovery’s founding team, describes in glorious detail, the variety of challenges is absolutely astounding. Just in terms of CPU, memory, disk, bandwidth, software, and scalability in general, we’re talking about a legitimate big data problem.

Then there are the challenges that websites may live on different IP ranges, domains, and hosting providers, fall under a variety of marketing brands, be managed by various subsidiaries and partners, be confused with domain typo-squatters and phishing scams, and may come and go without warning. Historically, finding all of an organization’s websites has typically been done through on-demand scanning seeded by a domain name or IP address range. Anyone who has ever tried this model knows it’s tedious, time-consuming (hours, days, etc.), and prone to both false positives and false negatives. It became clear that solving the asset inventory problem required a completely different approach.

Bit Discovery, thanks to the acquisition and integration of OutsideIntel, is unique because we take routine snapshots of the entire Internet, organize massive amounts of information (WHOIS, passive DNS, netblock info, port scans, web crawling, etc.), extract metadata, and distill it down to simple and elegant asset inventory tracking. Delivered as a completely web-based application, this is what gives Bit Discovery its incredible speed and comprehensiveness. Instead of waiting days or weeks for an asset discovery scan to complete, searches take seconds or less.
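To illustrate why pre-indexing makes those lookups so fast, here’s a toy sketch of querying a pre-built snapshot instead of kicking off an on-demand scan. The data structures, field names, and sample records are all hypothetical and are not Bit Discovery’s actual implementation; the sketch only shows the general “index first, search later” idea.

```python
# Hypothetical sketch: searching a snapshot of data collected ahead of time,
# rather than scanning the Internet on demand. Not Bit Discovery's data model.

snapshot = [
    {"host": "www.example.com",     "registrant": "Example Corp", "netblock": "93.184.216.0/24"},
    {"host": "shop.example.com",    "registrant": "Example Corp", "netblock": "93.184.216.0/24"},
    {"host": "example-partner.net", "registrant": "Partner LLC",  "netblock": "198.51.100.0/24"},
]

def inventory_for(org_name):
    """Seconds-fast lookup against the pre-built snapshot."""
    return [record for record in snapshot if record["registrant"] == org_name]

print(inventory_for("Example Corp"))
```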

After years of hard work and months of private beta testing with dozens of Fortune 500 companies, we’re finally ready to officially announce Bit Discovery, and we’re just weeks away from our first full production release. I’m particularly proud and personally honored to be joined by an absolutely world-class founding team. As an entrepreneur, you couldn’t ask for a better, more experienced, or more inspiring group of people. All of us have worked together for many years on a variety of projects, and we’re ready for our next adventure! Our vision is that every organization in the world needs an asset inventory, one that includes, as we like to say, “Every. Little. Bit.”

Founding Team (5):

Investment ($2,700,000, led by Aligned Partners):
As you can see, our goals at Bit Discovery are extremely ambitious, and we need strong financial backing to fully realize them. As part of the company launch, we’re also thrilled to announce a $2,700,000 early-stage round led by Susan Mason (Managing Partner, Aligned Partners).

During our fundraising process, we interviewed well over a dozen exceptional venture capital firms, and we were very picky. Aligned’s experience, style, and investment approach matched us perfectly. Their team specializes in experienced founding teams who have been-there-and-done-that, who operate companies in a capital-efficient manner, who know their market and customers well, and where the founders’ and investors’ interests are in alignment. That’s us, and we couldn’t be happier with the partnership.

And, as Steve Jobs would say, “one more thing.” Every company can benefit from the assistance and personal backing of other highly experienced industry professionals. The funding round includes individual investments by Alex Stamos (Chief of Information Security, Facebook), Jeff Moss (Founder, Black Hat and Defcon), Jim Manico (Founder, Manicode Security), and Brian Mulvey (Managing Partner, PeakSpan Capital).

Collectively, between Bit Discovery’s founding team and investor group, I’ve never seen or heard of a more experienced and accomplished team that brings everything together for a company launch. We have everything we need for a runaway success story. We have the right team, the right product, the right financial partners, and we’re at the right time in the market. All we have to do is put in the work, serve our customers well, and the rest will take care of itself.

Finally, the Bit Discovery team wants to personally thank all the many people who helped us along the way and behind the scenes. We sincerely appreciate everyone’s help. We couldn’t have gotten this far without you. Look out world, we’re ready to do this!

Friday, March 09, 2018

SentinelOne and My New Role

Two years ago, I joined SentinelOne as Chief of Security Strategy to help in the fight against malware and ransomware. I’d been following the evolution of ransomware for several years prior, and like a few others, saw that all the ingredients were in place for this area of cyber-crime to explode.

We knew it was likely that a lot of people were going to get hurt, that significant damage could be inflicted, and something needed to be done. The current anti-malware solutions, even the most popular, were ill-equipped to handle the onslaught. Unfortunately, we weren’t wrong, and that was about the time I was first introduced to SentinelOne.

When I met SentinelOne, it was just a tiny Silicon Valley start-up. It was quickly apparent to me that they had the right team, the right technology, and most importantly – the right vision necessary to make a meaningful difference in the world. SentinelOne is something special, a place poised for greatness, and an opportunity where I knew I could make a personal impact. The time was right for me, so I made the leap! Today, only a short while later, SentinelOne is a major player in endpoint protection with super high aspirations.

Since joining I have had a front row seat to several global ransomware outbreaks including WannaCry, nPetya, and other lesser-known malware events as the SentinelOne team "laid the hardcore smackdown" on all of them. One particularly memorable event was WannaCry launching at the exact moment I was on stage giving a keynote presentation to raise awareness about ransomware. Quite an experience, but also a proud moment as all of our customers remained completely protected. One can't hope for better than that!

On SentinelOne's behalf, I have had the unique opportunity to participate in the global malware dialog, learn a ton more about the information security industry, continue helping protect hundreds of companies, and something I’m personally proud of: launch the first ever product warranty against ransomware ($1,000,000). I contributed to some cutting-edge research alongside some truly brilliant and passionate people. It’s been a tremendous experience, one which I’m truly thankful for.

I wish I had all the time in the world to pursue all of my many interests, which as an entrepreneur, is one of my greatest challenges. For me, it will soon be time to announce and launch my next adventure -- a new startup! I’ll share more details in a few weeks, but it’s something my co-founders and I have been quietly working on for years.

The best part is that I don’t have to say goodbye to SentinelOne. I’ll be moving into a company advisory role. This way I still get to remain connected, in-the-know and continue helping SentinelOne achieve its full potential.

For now, a very special thank you to everyone at SentinelOne, especially Tomer Weingarten (Co-Founder, CEO) for leading the charge and allowing me to be a part of the journey.