Apparently the mass SQL Injection attacks have really woken people up, and they’re probably flooding the MS blogs and inboxes with pleas for assistance. No doubt a lot of them use Twitter. :) Site owners are desperate to protect their legacy classic ASP code. To help the situation, Microsoft has just announced three free new toys specifically targeted at SQLi.
1) The Microsoft Source Code Analyzer for SQL Injection (MSCASI) is a static code analysis tool that identifies SQL Injection vulnerabilities in ASP code. In order to run MSCASI you will need source code access, and MSCASI will output the areas vulnerable to SQL injection (i.e., both the root cause and the vulnerable path are identified).
Cool. If anyone wants to provide feedback on effectiveness, I'd really like to know!
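For context, the kind of pattern MSCASI hunts for looks something like the sketch below. The table and field names are made up, and the parameterized rewrite is just one possible fix (raw numeric ADO constants are used since classic ASP pages often don't include adovbs.inc):

<%
' Vulnerable: user input concatenated straight into the SQL string
Dim id, sql
id = Request.QueryString("id")
sql = "SELECT name FROM users WHERE id = " & id  ' tainted data reaches the query

' Safer: bind the value as a parameter instead of concatenating
Dim conn, cmd, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open "DSN=example"  ' hypothetical connection string
Set cmd = Server.CreateObject("ADODB.Command")
Set cmd.ActiveConnection = conn
cmd.CommandText = "SELECT name FROM users WHERE id = ?"
cmd.Parameters.Append cmd.CreateParameter("id", 3, 1, , CLng(id))  ' 3 = adInteger, 1 = adParamInput
Set rs = cmd.Execute
%>

As a bonus, the CLng() call blows up on non-numeric input, which is exactly the kind of validation the concatenated version never does.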
2) Microsoft worked with the HP Web Security Research group to release the Scrawlr tool. The tool will crawl a website, simultaneously analyzing the parameters of each individual web page for SQL Injection vulnerabilities.
This is nice of HP to offer, but the product limitations seem somewhat onerous to me...
* Will only crawl up to 1,500 pages
* Does not support sites requiring authentication
* Does not perform Blind SQL injection
* Cannot retrieve database contents
* Does not support JavaScript or flash parsing
* Will not test forms for SQL Injection (POST parameters)
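In practice those limitations mean Scrawlr only fuzzes GET parameters, with probes along these lines (the URL and error text below are illustrative, not Scrawlr's actual traffic):

Original: http://example.com/product.asp?id=42
Probe:    http://example.com/product.asp?id=42'

A response containing something like "Microsoft OLE DB Provider for SQL Server error '80040e14' - Unclosed quotation mark" is the telltale a scanner keys on.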
Hmm, if MSCASI and Scrawlr are used at the same time, can we call this Hybrid Analysis? :)
3) In order to block and mitigate SQL injection attacks (while the root cause is being fixed), you can also deploy SQL filters using a new release, URLScan 3.0. This tool restricts the types of HTTP requests that Internet Information Services (IIS) will process. By blocking specific HTTP requests, UrlScan helps prevent potentially harmful requests from being executed on the server. It uses a set of keywords to block certain requests; if a bad request is detected, the filter drops it and it never reaches the application or SQL Server.
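To sketch what such a filter looks like, UrlScan is driven by a UrlScan.ini file, and version 3.0's new query-string filtering section can deny requests containing given character sequences. The section name below comes from the UrlScan 3.0 documentation; the keyword list itself is purely illustrative (and, like any blacklist, bypassable):

[DenyQueryStringSequences]
; illustrative SQL injection sequences only; tune for your application
--
/*
xp_
CAST(
DECLARE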
IIS's equivalent to ModSecurity on Apache. Cool stuff; I first used it a LOONG time ago, and no doubt solid improvements have been made since. From the description it appears to still be using a blacklist, negative security model approach to protection. How about that!? :) Looks like the only thing they left out is some kind of DB or system clean-up for those who have already suffered an incident. I’m hearing that the hacked count is up to 2 million sites now. Ouch.
10 comments:
Ouch, I hadn't read the limitations before. What is that tool good at, then?
Throwing ' OR 1=1-- at all GET parameters? Open-source web app scanners have been doing this for ages (and they handle POST parameters ;))
I've already tested Scrawlr. I can deal with the auth issue, since authenticated pages are realistically a much lower threat with respect to automated SQLi; same with blind injection. The lack of database retrieval is not an issue for a proof of concept, because all you need is an error message. But the crawl limit is ridiculous: my scan died 20% of the way through due to this limitation. It might be nice for home use against Betty's cookie site on MySQL, but I see this tool less as an act of social compassion and more as a marketing ploy (which is apparently working), unless they yank this arbitrary page limit.
--Eponymous
http://w3af.sourceforge.net/
I ran a test of the tool.
I'm going to test the static source code analysis tool. If it really works, that in my opinion makes Scrawlr useless.
I agree it would be pretty lame if we were just throwing a ' OR at parameters and looking for an ODBC error. This from our FAQ might help:
Q: How do I know these vulnerabilities are real?
A: When Scrawlr detects what it thinks is a SQL Injection vulnerability, it will try to extract the database name and type, as well as the names of all the user defined tables in the database. This proves that data extraction is possible and that the SQL Injection vulnerability is real.
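For the curious, that kind of confirmation can be done on SQL Server with classic error-based probes, something along these lines (illustrative only, and not necessarily what Scrawlr does under the hood):

http://example.com/page.asp?id=1 AND 1=CONVERT(int, DB_NAME())--
http://example.com/page.asp?id=1 AND 1=CONVERT(int, (SELECT TOP 1 name FROM sysobjects WHERE xtype='U'))--

The failed integer conversion leaks the database name (or a table name) right in the error message, proving extraction is possible without dumping any actual row data.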
@billy:
I agree that the DB schema extraction is actually good for reducing false positives (we'd still need to figure out how it performs on false negatives...), but the crawler's limitations seem very bothersome for a fully automated website audit, don't they?
That said, the tool doesn't seem that useful... but I haven't tested it personally.
You are doing ModSecurity great injustice when you say that URLScan is equivalent to it. I don't mean any disrespect to URLScan, but we try harder. I know you didn't mean it like you said it, but for the sake of your readers not familiar with ModSecurity I feel compelled to clarify.
URLScan is useful, but limited. For example, as far as I am aware, it can only act on the request line and the request headers; it doesn't do anything about the payload (e.g. POST). Conceptually it is more similar to mod_rewrite, with some web security functionality added.
ModSecurity, on the other hand, is focused on pre-processing transaction data, avoiding making any choices for the user, and giving her a bunch of tools (e.g. the rule language, transformation functions, persistent storage, logging, XML parsing... I could go on) to enable her to do whatever she wants. It's not only different from URLScan in terms of what you can do with it; there is a significant difference in the approach.
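To make the contrast concrete, a single ModSecurity rule can inspect POST parameters as easily as the query string. A 2.x-style sketch (the pattern is deliberately naive):

SecRule ARGS "@rx (?i:\bunion\b.+\bselect\b)" \
    "phase:2,t:urlDecodeUni,deny,status:403,log,msg:'SQL injection attempt'"

ARGS covers both GET and POST parameters once request body access is enabled, which is exactly the territory URLScan doesn't reach.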
@Ivan, I meant no disrespect. We all know ModSecurity rulez. :)
Good tip.
However, MSCASI isn't all it's cracked up to be.
Take the following code from a real site I am having the pleasure of auditing... the problem is pretty evident. I ran the tool against a handful of pages, including login scripts, update pages, and more... and it reported nothing. It did work fine with the sample that came in the package.
Due to the very high failure rate I have experienced, I can't trust it... so, thanks for nothing, MS.
' User-controlled values pulled straight from the query string
ID=Request.QueryString("ParentID")
Password=Request.QueryString("password")
dim SQL
dim Obj
dim Rs
set Obj=server.CreateObject("DataLayer.Database")
set Rs=server.CreateObject("ADODB.Recordset")
' Note: trim(ID) is concatenated directly into the WHERE clause below -- the injection point
SQL="select child.child_id, child.child_First_name, child.child_nick, child_email_address, child.child_password, child.child_dob, child.child_gender, child.child_grade, child.WebFilter, child.URLExclude, child.URLInclude, child.URLIncludeOnly, child.EmailBuddyCheck, child.AllowNonBuddy, child.ParentExclude, child.ParentInclude, child.ParentIncludeOnly, child.Lockdown, child.PasswordRequired, child.AllowSubDomain, child.IsParent, child.Child_MSAgent_Access, child.UsePopupBlocker, child.SendBuddyOnly, child.AgentReadChat, child.challengeId, child.response, child.LaunchPopup, Community_Sponsor.Community_Sponsor_HomePage, 'client/Images/Icons/' + icons.icon_file_name iconfile from child,icons,Community_Sponsor,Parents where Parents.parent_id = " & trim(ID) & " and Parents.CommunityId=Community_Sponsor.Community_Sponsor_ID and child.icon_id=icons.icon_id and child.child_state=1 and child.parent_id= " & trim(ID) & " order by isparent desc, child_id asc "
set rs=obj.ExecSQL(SQL) ' tainted SQL executed via the custom DataLayer wrapper
set Obj=nothing
set Rs=nothing
Response.Write "Required Parameter is missing."
sethf - Please remember that MSCASI is a static code analysis tool; it has no way of knowing that DataLayer.Database.ExecSQL executes dynamic SQL.
For wrappers written in the same page (or in included files), it will generate warning 80420. In this case, I assume DataLayer.Database is a COM component installed on the box, so the tool cannot see that it executes dynamic SQL.
You can, however, use the following annotation in the code. It will then detect vulnerable paths that lead to this API.
' @@embed attach __VBS_EXECSQL(obj,x) { __sql_pre_validated(x) }
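Applied to a condensed version of the code above, it would look roughly like this:

' @@embed attach __VBS_EXECSQL(obj,x) { __sql_pre_validated(x) }
ID = Request.QueryString("ParentID")
set Obj = server.CreateObject("DataLayer.Database")
SQL = "select child_id from child where parent_id = " & trim(ID)
set Rs = Obj.ExecSQL(SQL) ' the tainted ID reaching ExecSQL should now be flagged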
This is a very rare case that we missed in the documentation; we will cover it in the next revision.
Thanks,
Bala Neerumalla