Wednesday, August 29, 2018

Evolution of The Press

Below is a working theory on the evolution of The Press in the United States as it relates to their relationship with the government and the people. I expect to continue refining the theory as new perspectives and competing ideas are discussed.

Phase 1) TL;DR: The press’s primary value in the system is transmitting a message from the government to the people. The press’s customers are their subscribers, who purchase news.

Consider the early days of the United States of America throughout the late 1700s and 1800s. As elected officials governed and managed the business of a young country, it was operationally crucial that they had a way to communicate broadly with their citizens. They needed to let everyone know that a strong hand was on the tiller, that the people were safe, and that they could sleep well at night.

Imagine the government’s options for communicating across the country. Think about the technology that was available, how ideas and thoughts were recorded, and how they were transmitted. There was no radio. There was no television. There certainly wasn’t an Internet. Ink and paper were the state of the art. While the government could physically write down its message, outside of standing at podiums before small local gatherings or leafletting, it had no scalable means of transmitting that message to the masses. So, the government and the country needed assistance. This need is where an entity called “The Press” established its value in the larger system — transmission of the government’s messages.

The press had journalists with the necessary tools to record the government’s message on paper, perform some amount of fact checking, and then package the information as a cohesive and largely transcribed story. The press also had access to the printing press, enabling them to productize the message as, say, a newspaper. And most importantly, the press created channels of distribution, such as horses, automobiles, and the telephone, to deliver the message to a variety of locations where it could be easily purchased. Put simply, the press would be invited in by the government to document its message, print a large number of copies of newspapers, and then make the materials widely available to the people, who had the opportunity to buy them.

This, predominantly, was the value the press provided to the system — transmission. Of course it was important for the press to be mindful of what they printed, particularly the accuracy and relevancy of the message; otherwise people might stop paying for it in favor of another newspaper. The people depended upon the credibility of the press to tell the story right. Let’s not forget this. This dynamic between the government, the press, and the people carried through until about the 1940s and 1950s, when radio and television began changing the paradigm.

Phase 2) TL;DR: The press’s value proposition is split between transmitting the government’s message to the people and telling them how to think about it. The press’s customers are their subscribers and advertisers.

Over time, communications technology advanced and became far more affordable. Radio became commonplace in society, and television sets started appearing in the average U.S. household in the early 1950s. With these modern tools the government could transmit its message directly to the people across the country and cut out the middleman — the press. The government no longer exclusively needed the press to get its message out to the masses.

And since the government could bring its message directly to the people, and the country was in a more stable position, it didn’t necessarily have to always help people sleep at night. In fact, often the opposite was true. Causing some amount of fear actually helped the government further consolidate its power. As a result, the press needed to find a new way to provide value to the system, beyond just message transmission, in order to survive.

During this period the press began shifting their value proposition from solely message transmission to telling people how to think about the government’s message. The press would take the government’s message, create a compelling narrative to help people interpret the story, and transmit their product to the masses over the television and radio airwaves. As a product, this method of news packaging and delivery was attractive to people. There was now far more information to parse from a variety of sources, too much for any one individual to decide what was important to consume. The Ted Koppels and Tom Brokaws of the television news world became the credible sources of the press and filled a void left by the government to help the country sleep well at night.

There were a couple of problems the press needed to overcome, though. For example, it was not possible for the press to make money with electronically broadcast news in the same way they did with print media, because it was not mechanically possible to charge viewers or listeners for news transmitted electronically. The press’s solution was sponsored advertising: news content accompanied by commercials. As such, the more people who watched and listened, and the longer they did so, the more valuable the advertising slots became. Another challenge the press needed to overcome with television and radio was that the time available to watch or listen to content was more limited. There is far more room for content in the pages of a daily newspaper than in a couple of hours of daily broadcast news spots.

Collectively, the new advertising-based business model and a limited amount of space for content changed how the press covered the government’s message in two profound ways. First, it shifted the priority away from the transmission and accuracy of the message as their main value proposition in favor of whatever kept people watching and listening. Second, the press had to be choosier about which messages and narratives filled the available time and which didn’t. Furthermore, the press had to cater more narrowly to a particular demographic with their content than was ever necessary with print. In television and radio, the more the news captures emotions and attention, the better the press does financially.

Fast forward several decades under these conditions and the people began to clearly see a lot of bias and an agenda in the press. And while bias and agenda are certainly present (how could they NOT be?), in this context it’s best not to think the press is taking a principled stand. They’re not. Instead, think of their bias and agenda as simply the press’s way of focusing their product on a particular customer, like any business would. The press is drawing a circle around a suitable demographic for their product and value proposition, which again is to both transmit the government’s message and tell people how to think about it in a way that helps maximize ears and eyeballs. For example, there effectively isn’t a left-wing or right-wing press in any truly principled sense. The exact opposite is true. There are left-wing and right-wing people, for whom the press tailor-makes a narrative, based on the government’s message, that is compelling to them.

Phase 3) TL;DR: The press’s value proposition is telling people how to think about the government’s message. The press’s customers are advertisers.

Enter the Internet in the early 1990s, when transmission of information became easy and inexpensive for everyone, and not just within the United States, but across the entire modern world. The government no longer needed the press to transmit its message to the people at all. The government could transmit directly to the people, or the people could go directly to the government. No middleman required. With no one needing the press for message transmission, print media as a business fell off a cliff in under two decades. For survival’s sake, the press had to complete the transition away from transmission of the government’s message as a value proposition to nearly exclusively telling people how to think about it. That’s all the value they offer, and in doing so message accuracy can be sacrificed whenever necessary. And of course the press’s content is heavily layered with advertisements.

As it turns out, the best way to attract more viewers for longer is to connect on a deep emotional level. Do whatever you can to rile up your viewers and they’ll keep coming back for more, and even share the content forward to others in their social group, where even more ads can be lucratively served. Press outlets such as Fox News, CNN, MSNBC, and more, all across the political spectrum, have strongly adopted this approach. The press outlets that didn’t adapt, died.

As a product, these sources offer people a compelling and packaged way to validate their worldview — and THAT’s what ultimately keeps the press credible and trustworthy in their minds. As evidence, notice how the Ted Koppels and Tom Brokaws of the press have been replaced by the Alex Joneses, Bill O’Reillys, Keith Olbermanns, and Don Lemons. Is this change in the starting lineup designed to give viewers access to more accurate news, or instead to get people emotionally invested? Even when the press is demonstrably biased or factually incorrect (call it ‘Fake News’ if you like), it’s extremely difficult for people to suddenly distrust the press they decided to loyally watch for so long and find another compelling source. Perception becomes reality and exists long after the occasional and quietly posted retraction.

Phase 4) TL;DR: If, via the Internet, people once again adopt a direct paid-for news model, the press’s primary value becomes providing people with an individually relevant, timely, and accurate news source for the government’s message.

Going forward, many feel there is demand for relevant, timely, and accurate news sources. News that’s devoid of the influence of advertisements and paid for directly by the people. Several press outlets have set up paywalls, and the business model is showing signs of success. All people have to do is register an account on a website or mobile application and supply a credit card online to become a subscriber. Another business model is micro-payments, where viewers pay for their content a la carte — by the article. A relatively new web browser named Brave, which includes ad blocking, offers native push-button micro-payment functionality that supports participating content publishers.

Here’s the thing: if any transition back to directly paid-for news truly starts gaining enough traction to threaten the ad-based model, fierce resistance from the advertising industry is sure to follow. Google and Facebook, which dominate the online advertising industry, alongside many others who make their billions annually off ‘free’ content, will do everything they can to prevent the transition. Their livelihoods depend on it. Regardless, if it so happens that the paid-for model once again takes hold, many positive externalities may come with it. Fake news goes away. Click-bait headlines go away. Online spam goes away. Privacy-invading ads go away. All of these shady practices found on the Internet depend wholly on advertisements to function. The adoption of ad blockers, which now stand at over 20% market share, indicates that people are making a choice, even if they aren’t yet paying for their content. Broad access to new technology is once again causing a shift in the press and how the government communicates its message.

Tuesday, July 17, 2018

The evolutionary waves of the penetration-testing / vulnerability assessment market

Over the last two decades, the penetration-testing / vulnerability assessment market has gone through a series of evolutionary waves, which went like this…

1st Wave: “You think we have vulnerabilities and want to hire an employee to find them? You’re out of your mind!"

The business got over it and InfoSec people were hired for the job.

2nd Wave: "You want us to contract with someone outside the company, a consultant, to come onsite and test our security? You’re out of your mind!"

The business got over it and consultant pen-testing took over.

3rd Wave: "You want us to hire a third-party company, a scanning service, to test our security and store the vulnerabilities off-site? You’re out of your mind!"

The business got over it and SaaS-based vulnerability assessments took over.

4th Wave: "You want us to allow anyone in the world to test our security, tell us about our vulnerabilities, and then reward them with money? You’re out of your mind!"

Businesses are getting over it and the crowd-sourcing model is taking over.

The evolution reminds us of how the market for ‘driving’ and ‘drivers’ changed over the last century. People first drove their own cars around, then many hired personal drivers, then came cars-for-hire services (cabs / limos) with ‘professional’ drivers you didn’t personally know, and now Uber/Lyft, where you basically jump into a complete stranger’s car. Soon, we’ll jump into self-driving cars without a second thought.

As we see, each new wave doesn't necessarily replace the last -- it's additive. Provided there is an economically superior ROI and value proposition, people typically get over their fears of the unknown and adopt something new and better. It just takes time.

Monday, May 07, 2018

All these vulnerabilities rarely matter.

There is a serious misalignment of interests between Application Security vulnerability assessment vendors and their customers. Vendors are incentivized to report everything they possibly can, even issues that rarely matter. On the other hand, customers just want the vulnerability reports that are likely to get them hacked. Every finding beyond that is a waste of time, money, and energy, which is precisely what’s happening every day. Let’s begin exploring this with some context:

Any Application Security vulnerability statistics report published over the last 10 years will state that the vast majority of websites contain one or more serious issues — typically dozens. To be clear, we’re NOT talking about websites infected with malvertisements or network-based vulnerabilities that can trivially be found via Shodan and the like. Those are separate problems. I’m talking exclusively about Web application vulnerabilities such as SQL Injection, Cross-Site Scripting, Cross-Site Request Forgery, and several dozen more classes. The data shows only half of those reported vulnerabilities ever get fixed, and doing so takes many months. Pair this with Netcraft’s data stating there are over 1.7B sites on the Web. Simple multiplication tells us that’s A LOT of vulnerabilities in the ecosystem lying exposed.
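To put rough numbers behind that simple multiplication, here’s a back-of-the-envelope sketch. Every fraction in it is an invented, deliberately conservative assumption for illustration only, not a figure from any published report:

```python
# Back-of-the-envelope estimate using invented, conservative fractions.
total_sites = 1_700_000_000    # Netcraft: sites on the Web
vulnerable_fraction = 0.10     # assume only 10% have a serious issue (reports claim far more)
unfixed_fraction = 0.50        # roughly half of reported issues never get fixed

exposed = int(total_sites * vulnerable_fraction * unfixed_fraction)
print(f"{exposed:,} serious vulnerabilities lying exposed")  # 85,000,000
```

Even with assumptions far milder than the “vast majority” the reports claim, the result lands in the tens of millions.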

The most interesting and unexplored question to me these days is NOT the sheer size of the vulnerability problem, or why so many issues remain unresolved, but instead figuring out why all those ‘serious’ website vulnerabilities are NOT exploited. Don’t get me wrong, a lot of websites certainly do get exploited, perhaps on the order of millions per year, but it’s certainly not in the realm of tens or even hundreds of millions like the data suggests it could be. And the fact is, for some reason, the vast majority of plainly vulnerable websites with these exact issues remain unexploited for years upon years.

Some possible theories as to why are:
  1. These ‘vulnerabilities’ are not really vulnerabilities in the directly exploitable sense.
  2. The vulnerabilities are too difficult for the majority of attackers to find and exploit.
  3. The vulnerabilities are only exploitable by insiders.
  4. There aren’t enough attackers to exploit all or even most of the vulnerabilities.
  5. There are more attractive targets or exploit vectors for attackers to focus on.
Other plausible theories?

As someone who worked for Application Security vulnerability assessment vendors for 15+ years, here is something to consider that speaks to theories #1 and #2 above.

During the typical sales process, ‘free’ competitive bakeoffs with multiple vendors are standard practice. 9 out of 10 times, the vendor who produces the best results in terms of high-severity vulnerabilities with low false positives will win the deal. As such, every vendor is heavily incentivized to identify as many vulnerabilities as they can to demonstrate their skill and overall value. Predictably, then, every little issue will be reported, from the most basic information disclosure issues to the extremely esoteric and difficult to exploit. No vendor wants to be the one who missed or didn’t report something that another vendor did and risk losing a deal. More is always better. As further evidence, ask any customer about the size and fluff of their assessment reports.

Understanding this, the top vulnerability assessment vendors invest millions upon millions of dollars each year in R&D to improve their scanning technology and assessment methodology to uncover every possible issue. And it makes sense because this is primarily how vendors win deals and grow their business.

Before going further, let’s briefly discuss why we do vulnerability assessments in the first place. When it comes to Dynamic Application Security Testing (DAST), specifically testing in production, the whole point is to find and fix vulnerabilities BEFORE an attacker finds and exploits them. It’s just that simple. And technically, it takes the exploitation of just one vulnerability for the attacker to succeed.

Here’s the thing: if attackers really aren’t finding, exploiting, or even caring about these vulnerabilities, as we can infer from the supplied data — the value in discovering them in the first place becomes questionable. The application security industry is heavily incentivized to find vulnerabilities that, for one reason or another, have little chance of actual exploitation. If that’s the case, then all those vulnerabilities that DAST is finding rarely matter much, and we’re collectively wasting precious time and resources focusing on them.

Let’s tackle Static Application Security Testing (SAST) next. 

The primary purpose of SAST is to find vulnerabilities during the software development process BEFORE they land in production, where they’ll eventually be found by DAST and/or exploited by attackers. With this in mind, we must then ask what the overlap is between vulnerabilities found by SAST and DAST. If you ask someone who is an expert in both SAST and DAST, specifically those with experience in vulnerability correlation, they’ll tell you the overlap is around 5-15%. Let’s state that more clearly: somewhere between 5-15% of the vulnerabilities reported by SAST are found by DAST. And let’s remember, from an I-don’t-want-to-be-hacked perspective, DAST- or attacker-found vulnerabilities are really the only vulnerabilities that matter. Conceptually, SAST helps find those issues earlier. But does it really? I challenge anyone, particularly the vendors, to show actual broad field evidence.

Anyway, what then are all those OTHER vulnerabilities that SAST is finding, which DAST / attackers are not? Obviously, it’ll be some combination of theories #1 - #3 above. They’re not really vulnerabilities, they’re too difficult to remotely find/exploit, or attackers don’t care about them. In any case, what’s the real value of the other 85-95% of vulnerabilities reported by SAST? A: Not much. If you want to know why so many reported 'vulnerabilities' aren’t fixed, this is your long-winded answer.

This is also why cyber-insurance firms feel comfortable writing policies all day long, even knowing full well their clients are technically riddled with vulnerabilities, because statistically they know those issues are unlikely to be exploited or lead to claims. That last part is key — claims. Exploitation of a vulnerability does not automatically result in a ‘breach,’ which does not necessarily equate to a ‘material business loss,’ and loss is the only thing the business or their insurance carrier truly cares about. Many breaches do not result in losses. This is a crucial distinction that many InfoSec pros fail to make: breach and loss are NOT the same thing.

So far we’ve discussed the misalignment of interests between Application Security vulnerability assessment vendors and their customers, the net result of which is that we’re wasting huge amounts of time, money, and energy finding and fixing vulnerabilities that rarely matter. If so, the first thing we need to do is come up with a better way to prioritize and justify remediation, or not, of the vulnerabilities we already know exist and should care about. Second, we must more efficiently invest our resources in the application security testing process.

We’ll begin with the simplest risk formula: probability (of breach) x loss (expected) = risk.

Let’s make up some completely bogus numbers to fill in the variables. In a given website we know there’s a vanilla SQL Injection vulnerability in a non-authenticated portion of the application, which has a 50% likelihood of being exploited over a year period. If exploitation results in a material breach, the expected loss is $1,000,000 for incident handling and clean up. Applying our formula:

$1,000,000 (expected loss) x 0.5 (probability of breach) = $500,000 (risk)

In which case, it can be argued that if the SQL Injection vulnerability in question costs less than $500,000 to fix, then fixing it is the reasonable choice. And the sooner the better. If remediation costs more than $500,000, and I can’t imagine why it would, then leave it as is. The lesson is that the less a vulnerability costs to fix, the more sense it makes to do so. Next, let’s change the variables to the other extreme. We’ll cut the expected loss figure in half and reduce the likelihood of breach to 1% over a year.

$500,000 (expected loss) x 0.01 (probability of breach) = $5,000 (risk)

Now, if remediation of the SQL Injection vulnerability costs less than $5,000, it makes sense to fix it. If it costs more, or far more, then one could argue it makes business sense not to. This is the kind of decision that makes the vast majority of information security professionals extremely uncomfortable, and it’s why they instead like to ask the business to “accept the risk.” This way their hands stay clean, they don’t have to expose their inability to do risk management, and they can safely pull an “I told you so” should an incident occur. Stated plainly, if your position is recommending that the business fix each and every vulnerability immediately regardless of cost, then you’re really not on the side of the business and you will continue being ignored.
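The decision rule behind the two worked examples above can be captured in a few lines of Python. This is only a sketch using the same admittedly bogus numbers from the scenarios, not a real risk model:

```python
def risk(expected_loss: float, breach_probability: float) -> float:
    """Simplest risk formula: probability (of breach) x loss (expected)."""
    return expected_loss * breach_probability

def worth_fixing(remediation_cost: float, expected_loss: float,
                 breach_probability: float) -> bool:
    """Fix only when remediation costs less than the annualized risk."""
    return remediation_cost < risk(expected_loss, breach_probability)

# First scenario: $1M expected loss, 50% annual likelihood -> $500,000 risk
assert risk(1_000_000, 0.50) == 500_000
# Second scenario: $500K expected loss, 1% annual likelihood -> $5,000 risk
assert risk(500_000, 0.01) == 5_000
# A hypothetical $20,000 fix is justified in the first case, not the second
assert worth_fixing(20_000, 1_000_000, 0.50)
assert not worth_fixing(20_000, 500_000, 0.01)
```

The point of framing it this way is that remediation becomes a comparison of two dollar figures rather than a gut call.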

What’s needed to enable better decision-making, specifically how to decide which known vulnerabilities to fix or not fix, is a purpose-built risk matrix specifically for application security. A matrix that takes each vulnerability class, assigns a likelihood of actual exploitation using whatever data is available, and contains an expected loss range. Where things get far more complicated is that the matrix should also take into account the authentication status of the vulnerability, any mitigating controls, the industry, resident data volume and type, insider vs. external threat actor, and a few other factors to improve accuracy.

While never perfect, as risk modeling never is, I’m certain we could begin with something incredibly simple that would far outperform the way we currently do things — HIGH, MEDIUM, LOW (BLEH!). When it comes to vulnerability remediation, how exactly is a business supposed to make good, informed decisions using traffic light signals? As we’ve seen, and as all previous data indicates, they don’t. Everyone just guesses, and 50% of issues go unfixed.

InfoSec's version of the traffic light: this light is green because, in most places where we put this light, it makes sense to be green, but we're not taking into account anything about the current street’s situation, location, or traffic patterns. Should you trust that the light has your best interest at heart? No. Should you obey it anyway? Yes. Because once you install something like that, you end up having to follow it, no matter how stupid it is.

Assuming for a moment the aforementioned matrix is created, it suddenly fuels the solution to the lack of efficiency in the application security testing process. Since we’ll know exactly what types of vulnerabilities we care about in terms of actual business risk and financial loss, investment can be prioritized to look only for those and ignore all the other worthless junk. Those bulky vulnerability assessment reports would likely dramatically decrease in size and increase in value.
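As a toy sketch of what such a matrix could look like mechanically, here’s a minimal version in Python. Every probability and loss figure below is invented purely to show the mechanics; a real matrix would be driven by field exploitation and loss data, plus the extra factors (mitigating controls, industry, data type, threat actor) discussed above:

```python
# Toy application-security risk matrix. All numbers are invented.
# Key: (vulnerability class, reachable without authentication?)
# Value: (annual exploitation probability, (low loss, high loss) in dollars)
RISK_MATRIX = {
    ("SQL Injection", True):                (0.50, (250_000, 1_000_000)),
    ("SQL Injection", False):               (0.10, (250_000, 1_000_000)),
    ("Cross-Site Scripting", True):         (0.05, (10_000, 100_000)),
    ("Cross-Site Request Forgery", False):  (0.01, (5_000, 50_000)),
}

def annualized_risk(vuln_class: str, pre_auth: bool):
    """Return the (low, high) annualized risk in dollars for a vulnerability."""
    prob, (low, high) = RISK_MATRIX[(vuln_class, pre_auth)]
    return prob * low, prob * high

# Example: unauthenticated SQL Injection, per the made-up table above
low, high = annualized_risk("SQL Injection", True)
print(f"Fix it if remediation costs less than ${low:,.0f}-${high:,.0f}")
```

Even a lookup table this crude gives a dollar range to compare against remediation cost, which is already more than HIGH/MEDIUM/LOW offers.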

If we really want to push forward our collective understanding of application security and increase the value of our work, we need to completely change the way we think. We need to connect pools of data. Yes, we need to know what vulnerabilities websites currently have — that matter. We need to know what vulnerabilities various application security testing methodologies actually test for. Then we need to overlap this data set with the vulnerabilities attackers predominantly find and exploit. And finally, within that data set, we need to know which exploited vulnerabilities lead to the largest dollar losses.

If we can successfully do that, we’ll increase the remediation rates of the truly important vulnerabilities, decrease breaches AND losses, and more efficiently invest our vulnerability assessment dollars. Or, we can leave the status quo for the next 10 years and have the same conversations in 2028. We have work to do and a choice to make. 

Tuesday, March 27, 2018

My next start-up, Bit Discovery

The biggest and most important unsolved problem in Information Security, arguably in all of IT, is asset inventory. Rather, the lack of an up-to-date asset inventory that includes all websites, servers, databases, desktops, laptops, data, and so on. Strange as it sounds, the vast majority of organizations with more than even a handful of websites simply do not know what they are, where they are, what they do, or who is responsible for them. This is especially strange because an asset inventory is the first step of every security standard and is recommended by every expert.

After many years of research, it turns out the reason why is rather simple: there are currently no enterprise-grade products, or at least none widely adopted, that solve this problem. This is important because it’s obviously impossible to secure what you don’t know you own. And, without an up-to-date asset inventory, the most basic and reasonable security questions simply can’t be answered:
  • What percentage of our websites have been tested for vulnerabilities?
  • Which of our websites have GDPR, PCI-DSS, or other compliance concerns?
  • Which of our websites are up-to-date on their patches, or not?
  • When an organization has been acquired, what IT assets do they have?
As of today, with Bit Discovery, all of this is about to change. Bit Discovery is a website asset inventory solution designed to be lightning fast, super simple, and incredibly comprehensive.

While identifying the websites owned by a particular organization may sound simple at first blush, let me tell you, it’s not. In fact, asset inventory is probably the most challenging technical problem I’ve ever worked on in my entire career. As Robert ‘RSnake’ Hansen, a member of Bit Discovery’s founding team, describes in glorious detail, the variety of challenges is absolutely astounding. Just in terms of CPU, memory, disk, bandwidth, software, and scalability in general, we’re talking about a legitimate big data problem.

Then there are the challenges that websites may exist on different IP ranges, domains, and hosting providers, fall under a variety of marketing brands, be managed by various subsidiaries and partners, be confused with domain typo-squatters and phishing scams, and may come and go without warning. Historically, finding all of an organization’s websites has typically been conducted through on-demand scanning seeded by a domain name or IP address range. Anyone who has ever tried this model knows it’s tedious, time consuming (hours, days, etc.), and prone to false positives and false negatives. It became clear that solving the asset inventory problem required a completely different approach.

Bit Discovery, thanks to the acquisition and integration of OutsideIntel, is unique because we take routine snapshots of the entire Internet, organize massive amounts of information (WHOIS, passive DNS, netblock info, port scans, web crawling, etc.), extract metadata, and distill it down to simple and elegant asset inventory tracking. As a completely web-based application, this is what gives Bit Discovery its incredible speed and comprehensiveness. Instead of waiting days or weeks for an asset discovery scan to complete, searches take just seconds or less.

After years of hard work and months of private beta product testing with dozens of Fortune 500 companies, we’re finally ready to officially announce Bit Discovery, and we’re just weeks away from our first full production release. I’m particularly proud and personally honored to be joined by an absolutely world-class founding team. As an entrepreneur you couldn’t ask for a better, more experienced, or more inspiring group of people. All of us have worked together for many years on a variety of projects, and we’re ready for our next adventure! Our vision is that every organization in the world needs an asset inventory, one that includes, as we like to say, “Every. Little. Bit.”

Founding Team (5):

Investment ($2,700,000, led by Aligned Partners):
As you can see, our goals at Bit Discovery are extremely ambitious, and we need strong financial backing to fully realize them. As part of the company launch, we’re also thrilled to announce a $2,700,000 early stage round led by Susan Mason (Managing Partner, Aligned Partners).

During our fundraising process, we interviewed well over a dozen exceptional venture capital firms, and we were very picky in the process. Aligned’s experience, style, and investment approach matched with us perfectly. Their team specializes in experienced founding teams who have been-there-and-done-that, who operate companies in a capital-efficient manner, who know their market and customers well, and where the founders’ and investors’ interests are in alignment. That’s us, and we couldn’t be happier with the partnership.

And, as Steve Jobs would say, “one more thing.” Every company can benefit from the assistance and personal backing of other highly experienced industry professionals. The funding round includes individual investments by Alex Stamos (Chief of Information Security, Facebook), Jeff Moss (Founder, Black Hat and Defcon), Jim Manico (Founder, Manicode Security), and Brian Mulvey (Managing Partner, PeakSpan Capital).

Collectively, between Bit Discovery’s founding team and investor group, I’ve never seen or heard of a more experienced and accomplished team that brings everything together for a company launch. We have everything we need for a runaway success story. We have the right team, the right product, the right financial partners, and we’re at the right time in the market. All we have to do is put in the work, serve our customers well, and the rest will take care of itself.

Finally, the Bit Discovery team wants to personally thank all the many people who helped us along the way and behind the scenes. We sincerely appreciate everyone’s help. We couldn’t have gotten this far without you. Look out world, we’re ready to do this!

Friday, March 09, 2018

SentinelOne and My New Role

Two years ago, I joined SentinelOne as Chief of Security Strategy to help in the fight against malware and ransomware. I’d been following the evolution of ransomware for several years prior, and like a few others, saw that all the ingredients were in place for this area of cyber-crime to explode.

We knew it was likely that a lot of people were going to get hurt, that significant damage could be inflicted, and something needed to be done. The current anti-malware solutions, even the most popular, were ill-equipped to handle the onslaught. Unfortunately, we weren’t wrong, and that was about the time I was first introduced to SentinelOne.

When I met SentinelOne, it was just a tiny Silicon Valley start-up. It quickly became apparent to me that they had the right team, the right technology, and most importantly, the right vision necessary to make a meaningful difference in the world. SentinelOne is something special, a place poised for greatness, and an opportunity where I knew I could make a personal impact. The time was right for me, so I made the leap! Today, only a short while later, SentinelOne is a major player in endpoint protection with super high aspirations.

Since joining I have had a front row seat to several global ransomware outbreaks including WannaCry, nPetya, and other lesser-known malware events as the SentinelOne team "laid the hardcore smackdown" on all of them. One particularly memorable event was WannaCry launching at the exact moment I was on stage giving a keynote presentation to raise awareness about ransomware. Quite an experience, but also a proud moment as all of our customers remained completely protected. One can't hope for better than that!

On SentinelOne's behalf, I have had the unique opportunity to participate in the global malware dialogue, learn a ton more about the information security industry, continue helping protect hundreds of companies, and, something I’m personally proud of, launch the first-ever product warranty against ransomware ($1,000,000). I contributed to cutting-edge research alongside some truly brilliant and passionate people. It’s been a tremendous experience, one which I’m truly thankful for.

I wish I had all the time in the world to pursue all of my many interests, which as an entrepreneur, is one of my greatest challenges. For me, it will soon be time to announce and launch my next adventure -- a new startup! I’ll share more details in a few weeks, but it’s something my co-founders and I have been quietly working on for years.

The best part is that I don’t have to say goodbye to SentinelOne. I’ll be moving into a company advisory role. This way I still get to remain connected, in-the-know and continue helping SentinelOne achieve its full potential.

For now, a very special thank you to everyone at SentinelOne, especially Tomer Weingarten (Co-Founder, CEO) for leading the charge and allowing me to be a part of the journey.