Monday, August 31, 2009

Production-Safe Website Scanning Questionnaire

Hackers target websites. Why? Because that’s where the money is. Organizations should at least know as much as the bad guys do, and that means hacking themselves before someone less desirable does it first. It also means scanning “production” websites, which carries a certain risk. Scanners can and will disrupt normal website operations unless proper precautions are taken. In theory, production systems are identical to preproduction, which would seem to alleviate the need to test them. In theory, there is no difference between theory and practice. But, in practice, there is.

Even for websites with the most stringent change control processes, experience shows that identical production and preproduction deployments are extremely rare. It is incredibly common to find hidden files and directories containing source code and logs, mismatched security configurations, infrastructure differences, and more, each impacting the website's true security posture. Also, for those websites required to maintain PCI-DSS 6.6 compliance, the standard mandates scanning publicly facing websites. If scanning production websites for vulnerabilities is important to you, and it should be, then production-safety is likely equally important.

The more thorough a website vulnerability scan is, the greater the risk of disruption due to exercising potentially sensitive functionality. For example, an authenticated scanner is capable of testing more areas of an application's attack surface than one that is unauthenticated. The same is true of a scanner custom-configured to process multi-form work flows (e.g. an online shopping cart). Furthermore, scanners testing for most of the ~26 known website vulnerability classes similarly increase the odds of causing damage. "Damage" may be as minor as flooding customer support with error emails, all the way up to a denial-of-service condition.

Clearly production-safe scanning is a legitimate concern. Below is a questionnaire about what organizations ought to know, so they can better understand the risks of production scanning and mitigate them accordingly. Please feel free to use this document to probe vendors about how their offerings ensure production-safe website scanning while achieving required levels of testing coverage and depth. As a guide, I’ve supplied the answers that apply to WhiteHat Sentinels beneath each question. Vendors may choose to do the same in the comments below, on their sites and blogs, or of course privately by customer request.

1) How is the scanner tuned, manually or dynamically, so as not to exhaust website resources, which could lead to a Denial-of-Service?
Scanners must share resources with website visitors. Open connections, bandwidth, memory, and disk space consumed by the scanner can seriously impact operations. High-end scanners can easily generate a load equivalent to that of a single user, or even a hundred or more, unintentionally causing serious events like DoS when resources are exhausted. Each website's infrastructure should be considered unique in its load capacity, and scanning processes should be adjusted accordingly.

WhiteHat Sentinel
WhiteHat Sentinel scans consume the load of a single user. They are single threaded, issuing no more than a user-defined number of requests per second, and generally do not download static content (e.g. images), thereby reducing bandwidth consumption. WhiteHat Sentinel also monitors the performance of the website itself. If performance degrades for any reason, scan speed slows down gracefully. If a website appears to be failing to respond or incapable of creating new authentication sessions, in most cases Sentinel will stop testing and wait until adequate performance returns before resuming.
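
As a rough illustration only (not a description of Sentinel's actual implementation), a single-threaded scan loop with a user-defined request ceiling and graceful backoff might look like the following sketch. The thresholds, function names, and the Node.js-style global fetch API are assumptions made for the example:

// Hypothetical sketch: single-threaded, rate-limited scanning with graceful backoff.
// Assumes a JavaScript runtime with a global fetch (e.g. Node.js 18+).
const maxRequestsPerSecond = 5;            // user-defined ceiling
const baselineDelayMs = 1000 / maxRequestsPerSecond;
const slowResponseThresholdMs = 2000;      // treat slower responses as degraded performance

async function scanUrls(urls) {
  let delayMs = baselineDelayMs;
  for (const url of urls) {                // one request at a time, never in parallel
    const started = Date.now();
    try {
      await fetch(url);
    } catch (err) {
      // The site failed to respond: pause for a while instead of piling on.
      await new Promise(resolve => setTimeout(resolve, 30000));
      continue;
    }
    const elapsed = Date.now() - started;
    // Slow down gracefully when the site degrades; recover toward the baseline otherwise.
    delayMs = elapsed > slowResponseThresholdMs
      ? Math.min(delayMs * 2, 60000)
      : Math.max(baselineDelayMs, delayMs / 2);
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
}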


2) How are multi-form application flows marked as safe-for-testing and/or removed from testing?
Depending on the website, complex application functionality can only be located by filling out multi-form work flows with valid data. Insurance applications, bill payment, password recovery, purchase processes, and more are prime examples. Some application flows are more sensitive than others, including those with a direct monetary cost when activated. Automatically scanning these areas can have consequences.

WhiteHat Sentinel
WhiteHat Sentinel does NOT automatically fill out or test HTML forms. In our experience, doing so is extremely dangerous without significant preexisting knowledge about the website. Each HTML form discovered during crawling, including multi-form process flows, is custom-configured by our Operations Team with valid data. The Operations Team also marks individual forms as being safe-for-testing. Those HTML forms that cannot be tested safely are either tested manually or not at all.


3) Does the scanner send executable attack payloads? If so, how are such tests made safe?
To identify vulnerabilities, scanners may inject executable, even malicious, payloads. For SQL Injection, testing may include executing back-end system commands, eliciting error messages, and/or retrieving or modifying data -- each potentially impacting or halting database performance. Another example is found when scanning for Cross-Site Scripting. Scanners may submit browser-interpretable payloads (i.e. live HTML/JavaScript code) that could be returned in the website's code in an unknown number of locations. Should these payloads be encountered by Web visitors, they could easily interfere with or break the user experience entirely.

WhiteHat Sentinel
Sentinel, by default, performs the majority of its tests using proprietary pseudo-code. This enables the scanner to identify vulnerabilities without the payload being interpreted by parsers within the application, helping ensure that no errant process execution occurs that could negatively impact production software. Also, Sentinel does not perform requests that are non-idempotent, write data, or take potentially destructive actions without explicit authorization from either a security engineer or the asset owner.
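
The pseudo-code itself is proprietary, but the general idea of testing without executable payloads can be sketched with an inert, unique marker: if the marker comes back reflected unencoded, an injection point likely exists, yet nothing interpretable ever reaches the application's parsers. The function and parameter names below are purely illustrative, not Sentinel's:

// Hypothetical sketch: probe for reflection with a harmless unique token
// instead of live HTML/JavaScript. Assumes a runtime with a global fetch.
async function probeReflection(baseUrl, paramName) {
  const marker = 'wh' + Math.random().toString(36).slice(2, 10);  // e.g. "whx3k9q2a1"
  const url = new URL(baseUrl);
  url.searchParams.set(paramName, '<' + marker + '>');            // inert, not valid markup or script
  const body = await (await fetch(url)).text();
  // Angle brackets echoed back intact suggest output is not being encoded,
  // i.e. a likely Cross-Site Scripting injection point -- found without executing anything.
  return body.includes('<' + marker + '>');
}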


4) How are links (URLs) that point to sensitive application functionality marked safe-for-testing and/or removed from testing?
According to the RFC specification, HTTP GET requests (e.g. from hyperlinks) should be treated as safe and idempotent by applications, meaning no significant action other than data retrieval should be taken, even across multiple link clicks. In practice though, many links (URLs) discovered during (authenticated) crawls can indeed delete, modify, and submit data, potentially causing disruption much like non-idempotent POST requests.

WhiteHat Sentinel
As part of the WhiteHat Sentinel assessment process, customers may alert us to functionality that may execute dangerous non-idempotent requests. When such areas are identified, they can be ruled out of the scanning process and tested manually. Also, authenticated scans are restricted to special test accounts, so any potential negative impact is confined to those areas and does not extend to other users.
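
As a purely illustrative sketch (not how Sentinel is actually configured), ruling customer-identified dangerous functionality out of a crawl can be as simple as filtering discovered links against an exclusion list of URL patterns:

// Hypothetical sketch: exclude links that match patterns flagged as non-idempotent.
const excludePatterns = [/\/delete\b/i, /\/cancel\b/i, /action=remove/i];  // example patterns only

function safeToRequest(url) {
  return !excludePatterns.some(pattern => pattern.test(url));
}

const discoveredUrls = [
  '/account/profile',
  '/account/delete?id=42',   // excluded: likely destructive
  '/reports/view?id=7'
];
const crawlQueue = discoveredUrls.filter(safeToRequest);  // only the safe links get scanned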

Wednesday, August 19, 2009

Website VA Vendor Comparison Chart

Update: 09.03.2009: "Production-Safe Website Scanning Questionnaire" posted to add context to the chart and ensuing discussion. Also, new vendors have been added to the sheet.

Update 08.24.2009: Billy Hoffman (HP) and I have been having some email dialog about the production-safe heading. Clearly this is a contentious issue. Scanning coverage and depth are directly tied to production-safety risk, and every vendor has a slightly different approach to how they address the concerns. Basically I asked that if vendors make a production-safe claim, they have some reasonable verbiage/explanation for how they do so -- no assumption of production safety will be made. Billy publicly posted how HP does so (complete with the highlights of our dialog) and got a check mark. Simple. Still, for the immediate future I'm going to remove the heading from the chart until I can draft a decent set of criteria that will make things more clear. This of course will be open to public scrutiny. In the meantime, if any vendors want to post links about how they achieve "production-safe" scanning, they should feel free to do so.

As you can imagine I spend a good portion of my time keeping a close watch on the movements of the website vulnerability assessment market. Part of that requires identifying the different players, who is really offering what (versus what they say they do), how they do it, how well, and for how much. Most of the time it is easier said than done, parsing vague marketing literature, and it is never "done." Every once in a while I post a chart listing the notable SaaS/Cloud/OnDemand/Product vendors and how some of their key features compare, not so much in degree, but at least in kind. If anything is missing or incorrect, which there probably is, please comment and I'll be happy to update.





Web Security is about Scalability

If today's Web security challenges are to be overcome, then scalability is what we need. Scalability of people, scalability of process, and scalability of technology. The holy trinity of all IT solutions. Without the ability to scale globally, and Web security is a global issue, our problems will remain too costly to solve. Consider that there are 240+ million websites, with millions more added every month, an unknown number of Intranet Web applications, 17+ million developers, and over one billion people on the Web. Any solution capable of making a real difference must be valued by its potential worldwide impact. Of course smaller niche solutions are still of value; we just can't automatically expect them to work well for anyone or everyone else. Whether we are talking about source code review, developer education, compliance, etc., it is all about scale.

In the past I’ve “guesstimated” the billions of dollars, tens of thousands of experts, and time requirements for the aforementioned initiatives. Most of all though I’ve spent nearly a decade specifically focused on the scalability of website vulnerability assessment since founding WhiteHat Security. In the beginning, assessments were conducted by consultants performing largely manual one-time engagements. Productivity of a single expert was severely limited, completing no more than 20 - 50 sites per year. Nothing about this model scaled. Not the people, technology, or process. Rates of $20,000 to $50,000 per website assessment were typical. Obviously IT budgets could not justify covering a large website footprint, so selecting only a few of the most important (if that) was typical. The need for added scalability encouraged development of new technology, particularly dynamic scanners and other assistive tools like crawlers and proxies.

Organizations with dynamic scanners could assess larger volumes of websites, even if only with minimal comprehensiveness, and more often than not with less experienced and less expensive personnel. Value was received by some, but the best case for many was haphazard scans, incomprehensible reports, and no risk management strategy. This technology alone was not enough because it did not scale the people, where the real costs were hidden, nor the assessment process. Those with experience in attempting to manage the scans/assessments of as few as 10-20 websites (never mind hundreds or thousands) using these products know what I'm talking about. Additionally, software licensing and hardware costs are significant. As before, the need for scalability opened up opportunities, namely for Software-as-a-Service (SaaS). As has been demonstrated in other markets, SaaS is better suited to scale technology than licensed software, which in turn enables the necessary scalability of people and process.

First introduced by WhiteHat Security (via Sentinel) and later followed by others, website vulnerability assessment delivered as a service provides a scalable, cost-effective, efficient alternative to point-in-time consulting engagements or legacy enterprise software. SaaS achieves better infrastructure scalability at a lower cost through multi-tenancy (i.e. customer applications run on the same unit of hardware and/or software). Also thanks to multi-tenancy, IT costs are reduced because the purchase and maintenance of servers, physical security, and the installation and maintenance of software are eliminated. Plus, subscription pricing is easier on the budget than large upfront outlays. And SaaS is less risky because you can change your subscription without losing the initial investment. Remember the shelfware problem? Last but not least, SaaS deployment is much faster, as is access to innovation in the identification and remediation of vulnerabilities that is unavailable in traditional software release cycles.

Clearly vulnerability assessment is not the only area within the application security space witnessing scalable technology innovation. ThreadStrong (via Denim Group), a high-end eLearning platform for secure coding, promises to scale to meet the education needs of the masses. OWASP ESAPI (via Jeff Williams @ Aspect Security) makes it easier for developers the world over to guard against design and implementation flaws. APIs like this are absolutely essential because, let's face it, without them everyone is going to roll their own, probably get it wrong, if they try at all. Source code reviews are now being offered SaaS style (via Fortify OnDemand), a model with all the aforementioned benefits. And WAF-in-the-Cloud, as described by Alex Meisel (Art of Defence), aims to ease WAF deployment or make it possible in the first place.

As I've said before, this is an exciting time to be in application security. Those who bring new ideas to the table that work will be rewarded. The rest will become part of what is already a storied history.

Tuesday, August 18, 2009

I'm going to Miami

I love Florida. Great place. Although I'm hearing it's possible that I might run into a hurricane while there. But I'm from Hawaii and used to the weather, so it's all good -- better for surfing. :) Or maybe just BJJ. If you are in town and want to attend the event below, just register or get in touch with me directly.

August 27: The Web Attack Defense Playbook Luncheon (Hollywood, FL)
* Hosted by WhiteHat, Imperva, and Terremark

Learn about a robust website risk management security strategy that will enable an organization to successfully defend against dangerous website attacks. Presenters will provide insight into the unique benefits that an integrated Web Application Firewall (WAF) and website vulnerability management solution provides while highlighting the ability to execute policies that are unmatched in their level of accuracy and granularity.

11:30am - 1:30pm
The Westin Diplomat Resort & Spa
3555 South Ocean Drive
Hollywood, FL 33019

Web pages Detecting Virtualized Browsers and other tricks

The ability for a Web page to detect if a browser is running within a virtualized environment has a number of interesting applications. Malware distributors could serve their payload only to likely victims and avoid analysis by detection engines. One super simple way to do so is by checking the screen dimensions (1024×768, 1440×900, etc.) using JavaScript. For example, while in windowed (not full screen) VMWare, the nonstandard pixel width and height of the viewer's screen are a dead giveaway of virtualization. To see for yourself, view this page in VMWare, resize the outer window, and click the button below. You might get something weird like 1070x676. See screenshot.



<input type="button" value="Show screen resolution" onclick="alert('Your resolution is ' + screen.width + 'x' + screen.height);">


The limitation is that malware detection engines, like those run by the anti-malware firms, Google, and Microsoft, probably operate with standard resolution settings or in headless full-screen mode. Anyone know if a virtualized browser with no display still has a DOM screen property? I'm sure it probably does, but is the default full screen? Even still, this trick might be just enough for nefarious search engine optimizers (SEOs) to tell if sentient insiders of major search engines or affiliate networks are snooping around. They'd be able to dynamically remove telltale signs of cheating, like cookie-stuffing and cloaking, that get them banned.
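
For illustration, a page could compare the reported dimensions against a short list of common physical display resolutions and silently branch on the result. This is only a sketch in modern JavaScript, and the resolution list is illustrative rather than exhaustive:

// Hypothetical sketch: flag a screen size that matches no common physical display,
// a possible hint of a windowed virtual machine.
const commonResolutions = ['800x600', '1024x768', '1280x800', '1280x1024',
                           '1366x768', '1440x900', '1680x1050', '1920x1080', '1920x1200'];
const reported = screen.width + 'x' + screen.height;
if (commonResolutions.indexOf(reported) === -1) {
  // A real page would branch silently here (e.g. serve clean content);
  // the alert is only for demonstration.
  alert('Nonstandard screen size detected: ' + reported);
}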

MAC Addresses are another way for a Web page to determine if a browser is being virtualized because they are unique identifiers assigned to network adapters. The first three of six octets identify the hardware manufacturer, which for VMWare includes 00-0C-29, 00-1C-14, 00-50-56, etc. While there is no known way for JavaScript to access MAC addresses, grandpa's Java Applets can. The "MAC Address Java Applet" by Tim Desjardins works great on Internet Explorer 6/7/8, Chrome, and Firefox on Windows XP. See screenshots.
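
Assuming the MAC address string has already been obtained out-of-band (for example, returned by such an applet), matching its OUI against the VMWare prefixes above is trivial. A hypothetical sketch:

// Hypothetical sketch: check the first three octets (the OUI) of an
// already-obtained MAC address string against known VMWare prefixes.
const vmwareOuis = ['00-0C-29', '00-1C-14', '00-50-56'];

function looksLikeVmware(mac) {
  const oui = mac.toUpperCase().replace(/:/g, '-').slice(0, 8);
  return vmwareOuis.indexOf(oui) !== -1;
}

// Example: looksLikeVmware('00:0c:29:ab:cd:ef') returns true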


OS X does not seem to be supported, but that could probably be remedied. All the browsers auto-loaded the applet except IE8, which requires user permission. I believe in most cases the automated malware detection engines running IE8 would explicitly grant permission to increase the odds of getting infected. It is also possible these guys spoof their MAC Address, but I'm sure not everyone does so religiously. Another question is whether Flash, ActiveX, or Silverlight have ways to obtain MAC Addresses without requiring user permission.

Beyond virtualization there are yet more ways for the bad guys to differentiate between casual users and everyone else. Earlier this year Collin Jackson and I demonstrated Private Browsing Mode detection by leveraging the well-known CSS color history hack: if the URL of the current page does not register as "visited," chances are a non-default security measure is blocking it. The CSS color history hack can also be combined with leaked Intranet hostnames, particularly those of Google, Yahoo, and Microsoft -- hosts only insiders could have visited. And finally, if the client is using Firefox and JavaScript is disabled, detectable in a number of ways (CSS, noscript tags, the JS enabled property, etc.), chances are the NoScript plug-in is the culprit. All of these are solid indications that the client is not the average user.
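
For reference, the :visited color check behind that Private Browsing detection boiled down to something like the following sketch. It worked in 2009-era browsers; modern browsers deliberately lie to getComputedStyle about :visited styles. The class name and probe color are arbitrary choices for the example:

// Hypothetical sketch: does the URL of the current page render as "visited"?
function currentPageLooksVisited() {
  const style = document.createElement('style');
  style.textContent = 'a.histprobe:visited { color: rgb(1, 2, 3); }';
  document.head.appendChild(style);

  const link = document.createElement('a');
  link.className = 'histprobe';
  link.href = location.href;                 // a link to the page we are already on
  document.body.appendChild(link);

  const visited = getComputedStyle(link).color === 'rgb(1, 2, 3)';
  document.body.removeChild(link);
  document.head.removeChild(style);
  // false suggests history is being blocked, e.g. by Private Browsing Mode.
  return visited;
}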

Happy Surfing!

Monday, August 17, 2009

Overcoming Objections to an Application Security Program

Today a large percentage of security professionals truly "get" application security. They understand the importance, the best-practices, the value, etc. What inhibits their success the most in building an effective application security program is a lack of buy-in from the business and support from development groups. Justifying the investment remains extremely challenging, and many security professionals tend to encounter the same objections. I brought this up to Jeff Williams, who agreed that helping people overcome the most common objections with packaged language would be useful. Jeff then helped me develop the content below. Please, if you find any of this useful, by all means steal away! Nothing would be a bigger compliment to the authors.


The business says an Application Security Program is unnecessary because:

"There have been no security problems in the past, nor is there any evidence we’ll be attacked in the future."

It is very fortunate that nothing bad is known to have happened. But relying on luck is not a reliable security strategy for the future, nor is luck a legally defensible position in the event of an intrusion. Assurance is required, as is visibility into the type and frequency of Web attacks, to understand the current security posture. It is possible the website was previously compromised, but this can't be known for certain without being proactive. The fact is virtually every industry report says Web security is the #1 digital threat organizations face and the vast majority of websites have serious vulnerabilities. The Web Hacking Incident Database (WHID) has catalogued organizations that have experienced unfortunate Web application related incidents. Many have lost customer data or intellectual property, been infected with malware, suffered fines, etc. A true Application Security Program helps organizations manage their risk.


"Security is an IT problem. They have firewalls, patch & configuration management systems, and SSL currently in place protecting us."

These security measures are designed to defend at the network/host layers of the infrastructure, and they’ve done a really good job. So much so that today’s attacks have moved up the software stack to the application layer where traditional security controls offer little to no protection:
  • Firewalls specifically allow port 80 & 443 Web traffic, including malicious attacks, to pass unencumbered through to the website.
  • Patch management keeps commercial off-the-shelf and open source software up-to-date, but anything custom, developed in-house or outsourced, is simply not covered. Secure code must be internalized.
  • SSL assures the confidentiality and integrity of data while in transit, but does not safeguard applications that are under attack directly.
To get real application security, the focus must be on the application itself.


"We need new features first and there is no discretionary budget left to allocate towards security."

Fortunately, the resources necessary to implement an effective application security program do not have to be large or operationally disruptive. If done properly, a program can be cost-effectively structured so that security assurance is realized, resulting in higher quality code with negligible additional cost and development time. By implementing a program incrementally, and being sensitive to IT operations and capital expense budgets, coordinated investments can be offset by consistent savings. What is also well understood is that the costs of an intrusion, which include downtime, legal liability, regulatory fines, customer revolt, etc., far exceed the cost of adding security up front.


"Hackers can't break in because our Web application can't be accessed externally."

It is true an external attacker cannot directly attack an internally-facing Web application remotely, but this is not the only threat model that must be considered. Incredibly common in today's environments are malware-infected machines located within the corporate network, which can be a launching pad for intranet attacks. These machines can be remotely controlled, individually or collectively, when they connect out to an external central control system. Malicious JavaScript located on blogs, social networks, or infected websites can be leveraged in the same way as conventional desktop malware. JavaScript can instruct a browser to scan and attack other machines on the internal network, particularly weakly defended Web applications. There is also the ever-present insider threat. With economic conditions as they are, there are those with motive and opportunity within the network who can cause significant financial damage.


"We outsource our software development and the vendor is responsible for making sure the code is secure."

Third-party software vendors should not be implicitly trusted to deliver secure code without any requirements to do so. Experience shows software vendors and their clients often possess very different views on what has been agreed to. To prevent conflict and disappointment, software development agreements should require vendors to specify how their code is tested for security. Ask them to describe their internal process, or perhaps have the code tested by a third-party firm before customer acceptance. Also, to prevent long-standing risk, vendors should be required to fix identified vulnerabilities within a certain amount of time, both before and after acceptance, for as long as the software is relied upon.


The development manager says the existing Application Security Program is enough because:

"We use penetration-testing services. We fix or accept the risk of any issues found, which keeps us safe."

Penetration-testing simulates an actual attack, measures true exploitability, and determines an organization's defense readiness with all protective measures in place. The results are predominantly a list of vulnerabilities, poorly defended systems, and open avenues of attack. This is invaluable intelligence, but it does not specifically measure what has been done to assure the security of an application. Penetration-tests show when something is wrong, but not what is right. Properly documenting the stages of architectural design, access controls, code implementation, quality assurance, and deployment must be done separately.


"We passed our most recent compliance audit and not required to do anything more."

Compliance-based security, even under the best of circumstances, only establishes a minimum baseline of risk reduction. The Payment Card Industry Data Security Standard (PCI-DSS) is the most direct regulation affecting Web security. Compliance, or validation of compliance, with PCI-DSS may prevent an organization from receiving fines, but certainly not from being compromised. There are numerous well-publicized examples of organizations that passed compliance audits, such as TJX, Heartland, and Network Solutions, and subsequently suffered major breaches. While compliance may help improve security where none previously existed, the fact is these standards are slow moving and have difficulty adjusting to a shifting threat landscape. True, if the organization trusts a generic compliance standard with the ongoing safety and viability of their business, then no additional security is required. However, if doubts remain, additional security measures are warranted to properly mitigate risk.


"We trust our developers and they already know how to develop secure code after completing the training course."

Security cannot be assured through trust; it must be achieved through verification. While having developers well versed in security is of tremendous value, the lack of formal processes and technology prevents an efficient and effective application security program. The proper application of process and technology ensures code is securely built, implemented, measured, and consistently improved. These are fundamental tenets of software security maturity models such as OpenSAMM and BSIMM, and are also part of basically every Secure Development Lifecycle (SDL) program recommended in the industry.


"We already have scanning tools. Doing more will slow down the development process, inhibit innovation, and add large unnecessary costs."

True. Just adding scanning tools, including free and open source products, to the development process will undoubtedly cause disruption. Adding "more security" this way will not work. However, by implementing an application security program incrementally with a thoughtful combination of people, process, and technology -- development speed, quality, and security can be improved significantly without disruption.

Sunday, August 09, 2009

Security Religions and Risk Windows

Information security threats are way up, fraud losses continue to rise, regulatory fines are increasingly common, and budget dollars to solve the myriad of problems are in short supply. Hampered by a sluggish economy, organizations simply cannot afford to hire all the talent they need, implement every best-practice, or buy every blinking light widget out there. Sacrifices are unavoidable; risk must be managed.

Each organization must decide for themselves the level of risk they are willing to accept. Security managers are asked to provide budgetary guidance by articulating that spending $X on Y will reduce the risk of losing $A by B%. These decisions have given rise to two prevailing, but opposing, security religions -- Depth and Breadth. They are graciously referred to as religions because the way their value is typically justified resembles a belief system more than anything rooted in science or metrics. Examining how these beliefs apply to website security and vulnerability management was the underlying message of the Mo' Money Mo' Problems presentation Trey and I delivered at BlackHat USA 2009.

To begin, we organized the threats -- the attackers we want to defend our websites against -- using the Verizon DBIR naming conventions: Random Opportunistic, Directed Opportunistic, and Fully Targeted. The descriptions are our own and made to apply to the overall Web security threat landscape. Trey and I focused exclusively on real-world examples of Fully Targeted business logic flaw attacks -- attacks that are exceptionally easy for anyone to perpetrate, invisible to IDSs, inaccessible to scanners, legally gray, and, most importantly, capable of making A LOT of money.

Random Opportunistic: Targets selected widely and indiscriminately. Attacks are fully automated, unauthenticated, and exploit unpatched issues and some custom Web application vulnerabilities. Mass SQL Injection worms that infect websites with browser-based malware and/or load Web pages with hidden SEO links are prime examples.

Directed Opportunistic: Targets selected from a narrow segment that possess valuable data or a tarnishable brand. Attacks are both automated and sentient, use commercial or open source scanners, may register accounts, and exploit vulnerabilities in custom Web application code that are found easily with little to no configuration. Typical examples are Cross-Site Scripting and open URL Redirector flaws that aid in phishing scams, SQL Injection issues used to compromise sensitive data, remote command execution to perform website defacement, or embarrassing full-disclosure.

Fully Targeted: Targets selected specifically. Attacks may be both automated and sentient, utilize customized tools, exercise multi-stage business processes, and exploit business logic flaws in custom Web applications. Examples are discovering unlinked press releases, resetting account passwords, accessing other users' data or accounts, abusing product replacement programs and refund processes, altering expected purchase prices, etc.

Depth Religion
A belief system that recommends identifying the most valuable assets, especially those containing sensitive data, and investing the bulk (or all) of the security dollars in defending them. See the investment strategy diagram. Secondary and tertiary assets are essentially left as sacrificial lambs. Borrowing from the age-old militaristic strategy of the castle and moat, establish a perimeter around your most valued assets and defend it to the last, with defense-in-depth being a fundamental tenet.
The open risk window is a determined and Fully Targeted adversary, sometimes described as the "super hacker," capable of penetrating the system. The belief is this individual exists and, given enough time, simply can't be stopped. Anything short of that skill level can be successfully defended against. Of course the secondary and tertiary assets lay wide open to anyone, including Random and Directed Opportunistic attackers. The faithful often forget about the data stores neglected sites may share with those "safe" high-value assets.

Breadth Religion
A belief system that recommends identifying all assets and establishing a security baseline applied across the range. Primary, secondary, and tertiary assets are treated with basically the same level of care. See the investment strategy diagram. The thinking is that most breach losses are due to assets not abiding by security minimums set by compliance requirements, not the exploits of a "super hacker." By elevating the intrusion bar to a compliance standard, and believing that to be "good enough," the most common attack types can be eliminated or significantly diminished.

The risk window is open to any attacker slightly more sophisticated than a dumb robot -- far shy of "super hacker" skills, and not especially difficult considering that mass scale attacks need not even log in. Stated differently, the barrier of entry for an attacker stops at the Random Opportunist, a payday for those paying attention, because compliance requirements are routinely watered down by those clinging to the bare minimum.

So the billion dollar question everyone is asking: "Which belief system is more effective?"

Our intent was not to directly answer this question, but instead expose common misconceptions applied to website security. When organizations want to raise the security bar, as measured by vulnerability assessment, they utilize different levels of testing comprehensiveness.
  • To reach the green rung, an option is completely automated and unauthenticated scans with a few basic Web security checks (network scanners and PCI-DSS compliance report mills).
  • For the blue rung, a person with at least a basic knowledge of Web security runs a commercial-grade Web application scanner, logged in and configured. Mid-tier firms offering a junior-level consultant, AKA a "scanner jockey," are common.
  • The purple rung requires a person to walk through, understand, and test all the business process flows -- perhaps even create custom tools. To find a single issue, a person only needs to be clever, not necessarily highly experienced, let alone a super hacker. On the other hand, to have a chance of finding all the possible issues all the time, they do need to be experienced!
So the fallacy is assuming the "threat-o-meter" represents an attacker's skill level. The fact is Fully Targeted attackers are not necessarily more skilled than Directed Opportunists. And Directed Opportunists, in turn, are not automatically more sophisticated than Random Opportunists. In reality, attacker types have more to do with target focus and technique of choice. Our presentation focused almost exclusively on Fully Targeted attacks that anyone can pull off, driving revenue anywhere from 5 to 9 figures, and then scale up. For example, how advertising campaigns have been gamed and how the use of discount coupon codes led to significant losses. How WebMail accounts were broken into to claim a hacking contest prize or leveraged to compromise an entire enterprise. And how nefarious SEO and affiliate revenue-generating schemes work.

Two years ago when I said PCI Certification doesn't make a website harder to hack, these were my primary concerns. Raising the minimum bar to just Random Opportunistic is simply not enough. Website security is clearly a different environment where conventional wisdom is constantly tested. Vulnerability assessment solutions, and by extension compliance standards, must truly be risk-based. Flexibility is essential in security testing. From cursory to deep-dive, vulnerability assessments capable of meeting or exceeding an attacker's capability are an absolute necessity. Also crucial is the capacity to scale massively across the enterprise to compare the current security posture against the tolerance for risk.

For those who missed Trey and me at BlackHat or couldn't attend the show, we're hosting a special encore webinar of Mo' Money Mo' Problems: Making A LOT more money on the Web the Black Hat Way. Improved material!

Free to attend, but you must register. Space is limited.
Tuesday, August 18th, 11:00 AM PT (2:00 PM ET).