Friday, May 30, 2008

Comcast.net hacker vows not to break the third rule, again

According to press reports, two hackers hijacked the comcast.net domain by breaking into the owner's online account at Network Solutions. OK, for all those who are still not clear about the rules of the game:

1. DO NOT hack anything without written consent
2. DO NOT brag to anyone about your "illegal" hack, including the press.
3. DO NOT get raided by the feds unless you are fully clothed.

Apparently, Defiant has learned his lesson about #3; unfortunately he's still not clear on #1 and #2.

"I slept in my clothes, because the last time they came, I was in my underwear with my
dong hanging out and sh*t," Defiant said of a past raid.


Thursday, May 29, 2008

HP goes SaaS with WebAppSec VA

Web application security vulnerability assessment delivered via Software-as-a-Service has really caught on. HP just announced a SaaS play with their Assessment Management Platform (AMP), acquired through SPI Dynamics, apparently web-hosted and fitted with a shiny new web-based GUI. They’ve probably had to do a lot of backend work ensuring the platform can handle the scan load of assessing thousands of extremely large sites at once like we have. This solution goes head to head against IBM AppScan Enterprise Edition OnDemand, acquired through Watchfire, and our WhiteHat Sentinel Service offering. Qualys will likely jump into the sandbox soon enough.

So all of this is very exciting stuff. Big players are jumping in, which generates interest, shows a prosperous, emerging industry, and validates the market and business model. Competition is a good thing, especially for the customer. It can't all be roses, right? Well, I found a TechTarget editorial by Neil Roiter, which pretty much just sticks to the facts, but includes some very odd "expert" quotes, mostly by Chenxi Wang of Forrester.

WARNING: VENDOR BIAS ALERT!

“The other significant application scanning SaaS player, WhiteHat Security, offers a very different model. “

Sure, if by "different" you mean easier to deploy, more comprehensive, more accurate, and more scalable. Scanner-on-a-stick we are not.

“HP and IBM's offerings are designed primarily to leave application security primarily in the hands of the customer, complementing their internal software development lifecycle processes with their consulting and professional services expertise to help customers deploy and get the most out of their investment.”

Oh, I get it. They host the scanning servers for you! Customers still have to do all the VA work and then have to pay more for consultants to teach them how to use the vulnerability data. OK, so they are a very different model. Excuse me.

“WhiteHat is a pureplay scanning service, conducting daily automated scans supported by human review.”

Oversimplified, but OK.

"HP and IBM will be working with companies that already have solid internal expertise on solving application security issues and outsource some scanning tasks," said Chenxi Wang, principal analyst at Cambridge, Mass.-based Forrester Research Inc.

So only customers that know what they are doing, have headcount at the ready, with everything under control will be able to take advantage of the solution. Ooooh Kaaaay.

“Wang said the HP and IBM models could scale better than WhiteHat's, whose human review element improves accuracy and reduces false positives, but, she said it is not as well-suited to deal with thousands of applications daily. IBM and, to a lesser extent, HP, have the huge consulting resources to meet that kind of demand.“

Did anyone else notice that this doesn't make any sense? I think Chenxi said they are able to scale better than WhiteHat because we have humans validate the results, while they have A LOT of consultants that can come onsite. Er!? Perhaps the hundreds of assessments that we're already performing every week without sending anyone onsite don't count.

Gotta love infosec marketing. :)

Wednesday, May 28, 2008

You could be a felon if you've done any of the following

  • Signed up for an online account and provided fictitious information
  • Uttered profane statements to someone over email/IM/blog
  • Sent someone a link pointing to racy content
  • Used someone else’s WiFi without permission
  • Visited a website for personal reasons while at work

If so, you could be charged with a felony for having broken the U.S. Federal Computer Fraud and Abuse Act (18 U.S.C. 1030) – the very same law under which famous computer hacker Kevin Mitnick, formerly on the FBI's Most Wanted list, was convicted. Crazy, right? Ridiculous even? I agree, but not according to the U.S. Attorney for the Central District of California. Let me explain, as this goes back to a rather tragic case that occurred about a year and a half ago on MySpace.

“On October 16, 2006, 13-year-old Megan Meier fled from her family's computer, distraught over the cutting comments of her supposed "friends" on MySpace. Twenty minutes later, the troubled teen was dead; she had hung herself in her closet.” … “The twist that Lori Drew, a 47-year-old neighbor and mother of a former friend of Megan's, had allegedly created the fake persona of a 16-year-old boy to befriend and later torment the girl brought outrage. Yet, state investigators could not find a law under which Drew could be charged.”

But now they've found a way, and if it stands it could seriously affect the rest of us. All of us. Everyone online is turned into a potential felon HACKER. They're trying to stretch the definition of "unauthorized access" to include violations of Terms of Service. I skimmed the ToSs posted by Google, Yahoo, Microsoft, MySpace, Facebook, AT&T, etc., and the above items are just a small sampling of terms few of us have ever read for the services we use online. In fact, no one could probably even get online without agreeing to these ToSs and the many others like them.

I don't know what to say here. This better not stand up, and one would hope it's dismissed quickly. Otherwise we all could be in big trouble if legal precedent is set.

Friday, May 23, 2008

Haroon from Sensepost proves his leetness yet again

Check out this ActiveX attack on a Juniper SSL-VPN. Extremely clever and yet so simple when you really step back and take a look at how things work. A little bit of everything is involved. Some web app, predictable resource location, command execution, etc. Sheesh, what more do you want!? :)

PCI 6.6 Countdown Clock

In the right side column I built a PCI-DSS 6.6 countdown clock. If you want to use it, and dare to trust my JavaScript, here's the line:

<script src="http://www.webappsec.org/inc/js/pci6.6_countdown.js"></script>

Incidentally, you have 38 days left to sort things out.
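
And if you'd rather not trust my JavaScript at all, here's a minimal sketch of what a countdown like this boils down to. To be clear, this is not the actual webappsec.org script, just an illustration that counts the days remaining until the June 30, 2008 PCI DSS 6.6 deadline:

// Hypothetical stand-in for the hosted countdown script -- not the real thing.
// PCI DSS 6.6 enforcement begins June 30, 2008.
var deadline = new Date("June 30, 2008 00:00:00");
var msPerDay = 1000 * 60 * 60 * 24;
var daysLeft = Math.max(0, Math.ceil((deadline.getTime() - new Date().getTime()) / msPerDay));
document.write("PCI 6.6 deadline: " + daysLeft + " day(s) left");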

Tuesday, May 20, 2008

WebAppSec meets the NFL

Ryan Barnett of Breach Security has a great post up on how to think about outcome-based metrics in a web application security world, instead of always being input-centric.

“We are focusing too much on whether a web application's code was either manually or automatically reviewed or if it was scanned with vendor X's scanner, rather than focusing on what is really important - did these activities actually prevent someone from breaking into the web application? If the answer is No, then who really cares what process you followed. More specifically, the fact that your site was PCI compliant at the time of the hack is going to be of little consequence.”

Spoken like a man who’s actually had to defend a website before, the U.S. federal ATF website incidentally. I bet he has some great stories he can never tell either. :) Ryan’s NFL analogies are borrowed from Richard Bejtlich, but I loved how he expounded upon them with his own.

“…vulnerability scanning in dev environments is akin to running an Intra-squad scrimmage.”


“Running actual zero-knowledge penetration tests is like Pre-season games in the NFL.”


“Web application firewalls, that are running in Detection Only modes, are like trying to have a real football game but only doing two-hand touch.”


LOL. Brilliant!

the nature of things

I agree with RSnake. It seems an inescapable curse that the more successful one becomes in the infosec industry, the more interesting the information they may access and the less they can share about it. It's terribly frustrating and unfortunate because so much is lost as a result. When my blog started it was simply a place where I could speak openly about personal webappsec interests, meet others in the community, and converse on a wide variety of technologically cutting-edge and conceptual topics. I had no idea if anyone would even care to read it. Still, I've always tried to be completely open about what I believe and whom I work for, because bias will surely creep in.

Almost ten years in the industry, some two years of blogging, and 500 posts later, never did I dream that I'd get to meet so many great people whom I learn a lot from, or receive such a tremendous readership. For that I'm grateful and have always been committed to giving back by sharing what I know and assisting others where I can. During the same time, professionally I get access to far more highly sensitive and sought-after information than ever. Knowledge that'd make you laugh, cringe, worry, think, get excited, and get upset. Much of it is locked up in NDAs, intellectual property, and business relationships, but it also helps me see what's coming 2-3 years out.

This brings me to the second thing I agree with RSnake on. Things are bad, much worse than they appear, and worse now than when I started, probably because we've learned a great deal about the existing problem, as have the bad guys. Top down are endless mountains of critical vulnerabilities we're incapable of fixing the conventional way (through code), built on platforms of technology that are suboptimal security-wise, and we can't simply start over from scratch. Bottom up are incidents taking place daily, and some waiting to take place that are rarely spoken of, especially by me, and almost never in detail by anyone. I only get to hint at the specifics and what lessons we may learn. Heck, I can't even share a lot of it with RSnake for the same reasons he can't share back.

This means the bad guys have an edge. They aren't bound by the same rules as we are and as a result are more nimble than us, which is the third thing I agree with RSnake about. Readers here and on his blog have the clearest path to reveal the things we can't directly. That's why we support them the best we can, and perhaps this is a healthy progression that keeps the industry fresh with new people and ideas. This is not to say I won't be doing everything in my power to provide the information people need to protect themselves online, should they want it. That's essentially what I do for a living, and I have no desire to make a living writing books. :) So with that, I disagree that my blog at least will be watered down. I've still got lots of cool stuff to talk about, look forward to hearing what others think, and thank everyone who takes the time to read.

Monday, May 19, 2008

Academia vs. professional researchers

Dave Aitel recently posted “Thinking Beyond the Ivory Towers”, an article I found really interesting. Dave has earned a reputation for being wicked smart, ninja level at zero-day vulnerability identification/exploitation, and unapologetic in his views on various controversial infosec subjects. I’ve had the pleasure of getting to hang out with him on occasion over the years and have always found his opinions to be extremely thought provoking. Most of all Dave’s a person that when he speaks, whether you tend to agree or disagree, you listen. So when Dave starts discussing the true practicality of automatic exploit generation from patches, I’m all ears.

The lead-in and the ending kinda give you the tone of the middle. :)

“In the information-security industry, there are clear and vast gaps in the way academia interacts with professional researchers. While these gaps will be filled in due time, their existence means that security professionals outside the hallowed halls of colleges and universities need to be aware of the differences in how researchers and professionals think.”


...

"That's why people who write papers in LaTeX two-column format end up saying the sky has a high negative trajectory, while the rest of us wish they'd stop living in the clouds."

Adapt and overcome

Most of us understand and accept that Web application vulnerability scanning tools (black and white box analysis) don't find everything, but that's OK since they add value to SDLC processes regardless. Consistency and efficiency are good wherever we can get them. The problem is that heated (aggressive/defensive) ideological debates often transpire anytime people who don't get that come in contact with those discussing scanner capabilities. Sometimes, though, we manage to get past all that and have open and collaborative conversations isolating various technical limitations, theorizing ways to overcome obstacles or improve processes to compensate, and generally moving the state of the art forward. This, after all, is what security is all about: a process, not a product. That's where Rafal Los' two-part posts come in.

Static Code Analysis Failures
Hybrid Analysis - The Answer to Static Code Analysis Shortcomings

Don't let the titles fool you into thinking these posts are anti-static-analysis. Rafal points out certain scanner shortcomings as a premise to put forth ideas on how to improve the technology by combining capabilities. Of course we're all free to agree or disagree; that's kind of the point. Hopefully he'll add a third installment that digs deeper into how Hybrid Analysis might function. Seems like an interesting line of research.

Thursday, May 15, 2008

Botnets with SQL Injection tools

Dan Goodin of The Register has a gem of a story about the life of a teenage botmaster and how he got busted by the feds. While this smells of a low-hanging-fruit conviction, it provides compelling insight into just how little skill a person needs to illegally turn a tidy profit by compromising users' machines and committing fraudulent acts. It also begs the question of how much the people with some decent skills, who also TRY NOT to get caught, are making.

Who knows, some of them could be the same people clever enough to install SQL injection tools on bots as a copycat of the massive attacks going around. "The bots then Google for .asp pages with specific terms -- and then hit the sites found in the search return with SQL injection attacks," says Joe Stewart, director of malware research for SecureWorks. Bill Pennington lays out the future of botnet attacks leveraging custom web application vulnerabilities like XSS and CSRF. Bigger potential than SQLi. Get ready everyone! This is going to be an interesting year.

Wednesday, May 14, 2008

Does secure software really matter?

If you ask the average expert what organizations should do about Web security, you'd almost universally hear what's become like a religious commandment: "Thou shalt add security as part of the application from the beginning. Blessed are those who develop secure code." Amen. I am a loyal follower of the security-in-the-SDLC church. I'll humbly try to ensure my code does what I preach others should also do. The problem is that code security by itself will NOT deliver us unto the pearly gates of Web security that many people wish for. There are other issues at play.

As an information security professional, my responsibility is helping organizations mitigate the risk of their website being compromised. If the process requires rewriting some insecure code, great, let's do it. The responsibility also means being open to solutions such as Web application firewalls, configuration hardening, patching, system decommissioning, obscurity, a lucky rabbit's foot, etc. Anything and everything should be used to our advantage because the odds are stacked in the bad guys' favor. Lest we forget, the bad guys need only exploit a single weakness.

At WhiteHat we assist the effort by rapidly identifying Web application vulnerabilities and helping to get them fixed before attackers exploit them. We also invest significant R&D in analyzing website vulnerability data, matching it up to publicized incidents, measuring the benefits of various security strategies, and ascertaining which best practices provide the most bang for the buck in a given situation. Software security proves to be one of those things that's difficult to measure; however, there are a few things we do know for sure about it.

Important as it is, the SDLC process can't always take into consideration unknown attack techniques, current techniques we don't fully appreciate and therefore ignore, or the massive amounts of old insecure code already in circulation that we depend upon. Think 165 million websites and mountains of new code being piled on top all the time. How do we defend our code against attacks that don't yet exist? And once these techniques are disclosed, it's obvious we can't instantaneously update all the world's Web-based code (far, far from it). As an industry we fail to realize these SDLC limitations, as a result don't prepare for them, and inevitably pay a heavy price. Sin of omission.

Only a short time ago we didn't know that integer and heap overflows were exploitable and were something to worry about. Code inspected and declared clean was all of a sudden vulnerable even though not a single line had changed. The same happened in webappsec with Cross-Site Scripting (XSS), ignored for years until the bad guys loudly demonstrated its potential. The same is happening with Cross-Site Request Forgery (CSRF), HTTP Response Splitting, and hundreds of other attack variants. Now the vultures are circling null pointer attacks. Secure code is only secure, if there is such a thing, for a period of time impossible to predict. We can't future-proof our code, and I'll guarantee new attack techniques are on the way, with the existing ones often becoming ever more powerful.

On the horizon are clever and evilly lucrative uses for timing attacks, passive intelligence gathering, application DoS, CSRF, and several other rarely explored examples I plan to present at Black Hat USA (if accepted). And that's not to mention vulnerabilities that have nothing at all to do with the code: Crossdomain.xml, Predictable Resource Location, Abuse of Functionality, and a dozen other issues. Lately I've also been noticing in our data a link between a website's security posture and when it was actually launched/built, equally or more so than the technology in use. Newer websites developed after an attack class became mainstream appear to stand a higher chance of being immune. If true, this would make a lot of sense to me, more than developers suddenly having learned the virtue of input validation.

Secure coding best practices, even if implemented perfectly, mostly only account for the attack techniques we're currently aware of; once something new comes up, we've got a big problem because of the scale of the Web. That's why XSS, SQL Injection, and CSRF are biting us in the ass so hard. For years we really didn't fully understand what they could do, or effectively get the message out to where anyone would care. Now significant portions of the Web are vulnerable, we just don't know where exactly, and even if we did, are we really going to go back line-by-line? Now we're in a spot where hundreds of thousands of pages are being infected with JavaScript malware. I don't expect this to end anytime soon; if anything it will get worse because the bad guys have a lot of green field to work with.

My point is we need to look at Web security in a new way and accept that code (or developers) will never be perfect or even close to it. To compensate we need solutions, including Web application firewalls (virtual patches, sketched after the list below), wrapped around our code to protect it. Some might call this approach a band-aid or a short-term solution. Whatever, I call it realistic. Just ask those who are actually responsible for securing a website and they'll tell you the same thing. We need nimble solutions/products/strategies that help us identify emerging threats, react faster to them, and adapt better to a constantly changing landscape. Now when a vulnerability or new attack class shows up, IT Security should have a fourth option for the business to consider, giving the developers time to fix the code:

1. Take the website off-line
2. Revert to older code (known to be secure)
3. Leave the known vulnerable code online
4. Vulnerability Mitigation (“virtual patch”)
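
Purely as an illustration of option #4, here's a rough sketch of what a virtual patch might look like as a ModSecurity rule (Breach Security's open source WAF). The page and parameter names below are hypothetical, not taken from any real deployment; the point is simply that the exploit condition gets blocked at the WAF layer while developers queue up the real code fix.

# Hypothetical virtual patch (ModSecurity 2.x syntax): until the code fix ships,
# reject requests to the vulnerable page where the "id" parameter isn't numeric.
SecRule REQUEST_FILENAME "@streq /account/update.asp" \
    "chain,phase:2,t:none,deny,status:403,log,msg:'Virtual patch: SQL injection via id parameter'"
SecRule ARGS:id "!@rx ^[0-9]+$"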

Crossdomain.xml Invites Cross-site Mayhem

Update 05.14.2008: Figured I'd make available the list of active crossdomain.xml websites I've found. Enjoy! *hat tip to RSnake for the bandwidth*

This week I took a renewed interest in crossdomain.xml. For those unfamiliar, this is Flash's opt-in policy file that extends the same-origin policy to include more sites in the circle of trust. Normally client-side code (JavaScript, Flash, Java, etc.) is limited to reading data only from the website (hostname) from which it was loaded. Attempting to read data from other domains is met with security exceptions.

With crossdomain.xml, a site owner may configure a policy stating which off-domain sites are allowed to read its data (or parts thereof), and the client, Flash in this case, is responsible for enforcement. This feature paves the way for richer client-side applications. Crossdomain.xml policies are also extremely flexible, allowing websites to be defined by IP, domain, subdomain, or everyone (*) under the sun. And this is one area where we potentially run into trouble.
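
To make that concrete, here are two illustrative policies (example.com is just a placeholder). The first trusts the entire Internet; the second trusts every subdomain of a single site, which is still a wide circle if any one of those hosts can be compromised:

<?xml version="1.0"?>
<!-- Unrestricted: any SWF served from anywhere may read this site's data -->
<cross-domain-policy>
  <allow-access-from domain="*" />
</cross-domain-policy>

<?xml version="1.0"?>
<!-- Subdomain-restricted: still trusts every host under example.com -->
<cross-domain-policy>
  <allow-access-from domain="*.example.com" />
</cross-domain-policy>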

When a hostname is included in the circle of trust, you allow it to read all data on the site that the user has access to; this includes any (authenticated) content and (session) cookies. So should a malicious attacker or website owner gain control of a website in the circle of trust (via a server hack or XSS), they can feasibly compromise user data off that domain. This could easily lead to privacy violations, account takeovers, theft of sensitive data, and bypassing of CSRF protections (grabbing the key ahead of time).

With this understood, I was curious just how many prominent websites are actively using crossdomain.xml and generally how they are configured. For sampling I combined the "www" hostnames of the Fortune 500 with the Global Alexa 500. Of the 961 unique websites in all (and keeping the results to myself for now)…
  • 28% have a crossdomain.xml policy file of some type.
  • 7% have unrestricted crossdomain.xml policy files.
  • 11% have *.domain.com restricted crossdomain.xml policy files.
I was quite surprised by the penetration, but not as much as by how many possessed unrestricted policies, meaning any website can pull whatever data it wants from them. It's not so much that they allow this, many are just brochure-ware so who cares, but for others we're talking very sensitive data here. Then of course the domain-restricted percentages were higher still. That would mean if a user should get XSS'ed ANYWHERE on the domain (or another *'d domain), easy enough to do, an attacker could load up a Flash payload and pilfer the data that way. Ouch. Another thing I noticed was a fair number of intranet (development) hostnames being leaked publicly. Weird.

Now if I may take things just one step further, because these types of attacks can scale far more easily and become more damaging than it might first appear. We've already seen several cases where Flash-based advertising was poisoned through an upstream CDN provider, eventually leading to the exploitation of users' browsers. These attacks are spotted because they take advantage of a well-known vulnerability, load malware detectable by A/V signatures, and detectably compromise a machine. But let's say they didn't do that and instead attempted something subtle.

What an attacker could do is purchase some Flash-based advertising delivered anywhere on a domain inside a circle of trust (*.domain.com). Instead of using traditional malware exploits, they'd force an innocent-looking and invisible cross-domain request on behalf of the user. This request could easily steal session cookies, read your Web email, send spam for that matter, access your social network, and the list goes on and on. Not only would this be inexpensive, it would also be extremely difficult to detect because everything would appear legit. As I say this, I can't help but wonder if it hasn't happened already and we just haven't realized it. We're all so used to blaming online account compromises on trojan horses that we haven't stopped to consider or investigate other possibilities.


thanks to Russ McRee for blog title and content assistance.

Monday, May 12, 2008

Trifecta of WebAppSec Posts

I remember a time not so long ago when good web application security content was extremely rare and difficult to come by. These days it seems every week something new is posted that's worth taking the time to read. It's hard to keep up with all of it and analyze the details, so I'll post what I can.

1) Dancho Danchev is masterful at noticing and analyzing what the nefarious bad guys are up to, especially in the web security environment. In his most recent post, Stealing Sensitive Databases Online - the SQL Style, he talks about economies of scale in the recent massive SQL injection hacks. Essentially he asks, rather openly, whether these massive attacks are attempts to pull smaller data sources together or just to leverage them generally as a mass platform for attack. Good question; it could go either way in my opinion.

2) C. Warren Axelrod posted something rather interesting, Metrics Revisited – Application Security Metrics, where he comes right out and says:

“I have recently been giving some thought to, and doing some research into, application security metrics, and I have determined, quite simply, that there aren’t any good ones.”

Then check out his next question...

“One application has 100 inherent vulnerabilities, of which 10 are discovered and patched. Another application has 1000 inherent vulnerabilities, of which 900 are known and fixed. The former has 90 residual vulnerabilities, and there are 100 remaining in the latter application. Which application is more secure?”

A damn fine question and an answer he digs into.

3) Ready to rip into PCI-DSS 6.6? If you haven't done so already, or have and still don't know what to do -- WhiteHat's own Trey Ford posts Deconstructing PCI 6.6 inside SC Magazine. Trey takes the "Find, fix, prove(n)" model, which really makes things simple.

“With a clear understanding of PCI Requirement 6.6, compliance is not only achievable, but can provide great value to web application owners and users. This requirement creates a need for visibility into the lifecycle for vulnerability detection and correction, and will serve to mature web application security. Applying metrics to the efficiency of detection, the cost of producing vulnerable code, and the associated costs of correction will only serve to advance the goal of total web application security.”

Friday, May 09, 2008

A pair of podcast interviews

1) In the Security Bites podcast with Rob Vamosi (transcript) of C-Net, I describe what's new and interesting about the recent malicious mass-scale SQL injection attack. This is where website DBs are loaded up with malicious JavaScript exploiting browser-based vulnerabilities, the so-called drive-by downloads. Reports are saying 600,000 or so pages are infected, with several high profile targets (UN, DHS, USAToday, etc.) on the hit list.

2) During RSA I spent some time with the Help Net Security guys answering questions about my favorite infosec conferences and what they have to offer. Of course each has a different focus for content and audience, so it just depends on what you are into.

Cisco announces a Web Application Firewall

Cisco has jumped into the WAF game with their recently announced Cisco ACE Web Application Firewall: a full proxy device with HTTP(S) and XML policy enforcement, web-based/shell management interfaces, solid performance metrics, and support for both blacklist and whitelist rules. Judging by their overview literature (video), Cisco apparently sees a sizable market for WAFs, with PCI 6.6 as a driver. So now most big players have a stake in webappsec. This should make things interesting. With Cisco's brand reputation and reach, people might be willing to get over their initial trust issues with WAFs, and Cisco could do quite well. Should customers demand it, perhaps it's another device we can integrate Sentinel with for virtual patching purposes. The interest has been quite impressive.

Monday, May 05, 2008

Blue Hat 2008

I don't recall drinking any Kool-Aid while in Redmond, but I can't deny something about my first trip to Blue Hat (Microsoft's bi-annual internal security conference) affected me. The only thing I can think of is that those crafty people over at MS security must have piped something into the air ducts or put something in the eggplant parmesan, because, well, I was impressed -- influenced even. Andrew Cushman, MSRC Director (among other things), managed to convince me to attend, even though I thought I knew what the event was all about.

Well, my precognitive abilities failed me. There were no underground chambers, secret member handshakes, career-limiting NDAs, or endless interrogations by brainwashed hordes hunting for 0-day. Apparently I also wasn't even there to be recruited away from WhiteHat, or at least convinced to give up my MacBook. To test the theory I brandished it in hopes it might start some kind of scene, but to no avail; no one really cared. What Blue Hat did have was a technically kickass speaker/topic line-up, better than most infosec conferences I've attended. I also got the opportunity to hang out with Billy Rios, Nitesh Dhanjani, Nate "stolen laptop" McFeters, Kuza55, Fukami, Adam Shostack, and several others.

The attendees were mostly MS software engineers looking to learn about the latest security goings-on. What struck me in conversation with them was their openness. Not "open" in the sense that they were willing to share all their secrets, but more that they were genuinely eager to listen to the thoughts and ideas of others. No arrogance detected; they truly wanted to make their products better. By contrast, there is now much general animosity towards Apple amongst the security researchers within the community. While many of the bad guys are searching for their precious Windows 0-day, the good guys are focusing attention on OS X, now mostly out of spite (or at least to win a MacBook).

My role at Blue Hat was to participate on the Vulnerability Economics Panel; the name describes it all. The other panelists definitely had some interesting things to share, including that Windows XP SP2 and IE 6 vulnerabilities come at a premium over Vista due to market share factors, and well above OS X. Also interesting is the rose-colored view of the world that the security community still tends to have in believing that reverse engineers won't be influenced by money. Yah, like we all work for free or something. Their thinking is that 0-day work product will continue to flow, as it has, to software vendors or intermediaries (TippingPoint / iDefense) even if the potential payout on the black market (or other venues) is orders of magnitude higher. I hold onto no such illusions.

Some mental notes I made to myself, which not all the panelists agree with, are:
  • As MS reduces the number of externally found 0-days, their black market street value goes up. Maybe into the high 6 or even 7 figure range over the next 2-3 years.
  • iDefense and TippingPoint 0-day payouts are getting larger, now often in the 5 figure range having already purchased 300 or so issues.
  • As the black market 0-day payouts rise, “freely” disclosing issues to MS will seem less attractive to freelance security researchers.
  • Microsoft vulnerability metrics will continue to decline as they clean up their software and hire most good reverse engineers as employees or contractors (taking those issues off the market), while those who remain consider their options for profit potential.
  • Third-party applications will come under heavily increased scrutiny.
  • Increased likelihood of vulnerabilities being purposely introduced into MS code by insider threats looking for a big payout.
  • 3-5 year prediction: the US Government regulates the sale of 0-days, much like encryption, likely stimulated by a major incident resulting from a sale.
Overall, I had a really good time and hope to be back for the next one.