Application Layer Anti-virus/Firewall

Wed, 11 Apr 2007 09:05:44 GMT

So, we have servers, and these servers host applications. Each application is composed of server-side and client-side components. Traditionally, the server-side components are where most of the business logic resides. Traditionally again, in order to secure your application you have to spend a considerable amount of time making sure that every server-side component works as intended. If you happen to miss something, you can quickly patch it with mod_security for Apache. Traditionally, no one cares about the client.

If you ask Ivan Ristic, the creator of mod_security, he will tell you all about how good application firewalls are and how everyone should use them. He is a sort of application firewall evangelist, although I cannot quite understand why people use this term and why they want to call themselves evangelists, but it is OK.

mod_security is advertised as an Application Layer Firewall, but this is quite incorrect. Which side of the application layer are we talking about: server or client? If you play around with it, you will see that it is primarily intended to secure the server, although in some situations it may save some trouble for the client. I have been thinking about these types of concepts for quite some time now, so I figured that we need to make some clear distinctions.

In this article I will quickly cover some ideas that I've been accumulating about client-side and server-side protection. I put all of these concepts under the common term Application Layer Anti-virus so you don't mistake them for Application Layer Firewall, as the term is used by mod_security. In general, I am going to talk about protecting the client and the server together. If you think that Application Layer Anti-virus is a bit ambiguous as a name, I apologise. I couldn't think of anything else.

The biggest question is how to protect the client when the server is vulnerable, and how to protect the server when the client is vulnerable. This question does not have a simple, straightforward answer. Let's examine the following case study:

Joe visits site.com. site.com has a cross-site scripting flaw which is used by the attacker to inject malicious JavaScript inside Joe's browser. Because of site.com, Joe's personal information is leaked and his system is exposed to all sorts of other attacks.

Let's have the same case study but this time change the roles of the client and the server. It looks like the following:

Joe visits site.com. Joe's browser is vulnerable to the MHTML or Adobe PDF UXSS bug. Because of Joe's insecure browser, site.com is at risk of being compromised.

Can you spot the difference? No matter which side is vulnerable, the other is indirectly exposed to an attack too. Therefore we need to secure both sides. However, it is really hard to secure the client from the server, since only the user has access there. So how can we make some sort of generic solution, implemented on the server, that resolves both client and server issues? Well, it ain't going to be easy.

In my mind I picture an integrated solution which, to a great extent, resembles mod_security, but with client-side features. The server side of the application is protected by the typical input validation rules, while the client is protected by JavaScript injected at each stage, or at least at the login stage, that checks for client-side vulnerabilities, such as the version of the Adobe Reader plugin, etc.

So again, the server is protected by input validation, while the client is protected by JavaScript injected at each stage. For example, coming back to our case study:
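To make the client-side half more concrete, here is a minimal sketch of the kind of version check such injected JavaScript might perform. In a real page the plugin list would come from `navigator.plugins`; here it is passed in as plain data so the comparison logic stands on its own, and every plugin name and version number below is invented for illustration.

```javascript
// Hypothetical sketch: flag plugins that are older than a known-safe
// version. In the browser, `plugins` would be built from navigator.plugins;
// here it is plain data so the logic is self-contained.
function findVulnerablePlugins(plugins, minSafeVersions) {
  var vulnerable = [];
  for (var i = 0; i < plugins.length; i++) {
    var p = plugins[i];
    var required = minSafeVersions[p.name];
    if (required === undefined) continue; // plugin not on the watch list
    // Compare dotted version strings component by component.
    var a = String(p.version).split('.').map(Number);
    var b = String(required).split('.').map(Number);
    var older = false;
    for (var j = 0; j < Math.max(a.length, b.length); j++) {
      var x = a[j] || 0, y = b[j] || 0;
      if (x < y) { older = true; break; }
      if (x > y) break;
    }
    if (older) vulnerable.push(p.name);
  }
  return vulnerable;
}

// Example: an out-of-date PDF plugin is flagged, an up-to-date one is not.
var report = findVulnerablePlugins(
  [{ name: 'Adobe Acrobat', version: '7.0.8' },
   { name: 'QuickTime',     version: '7.1.5' }],
  { 'Adobe Acrobat': '7.0.9', 'QuickTime': '7.1.5' }
);
// report is ['Adobe Acrobat']
```

The site could then refuse (or warn at) login when the returned list is non-empty. Of course, as a pure JavaScript check this only covers what the browser willingly reports.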

Joe visits site.com. Before login, site.com verifies the integrity level of Joe's browser (i.e. scans for vulnerabilities). Because Joe's browser has some issues, site.com informs Joe that unless he patches his system, he won't be allowed to enter.

Most of you probably think that this is not really that nice a solution because it relies on JavaScript. Perhaps you are thinking that you can trick the system by disabling JavaScript, or maybe even change your DOM in such a way that it seems that you are not vulnerable. However, none of that makes much sense. No one will go through all the hassle of bypassing the system when they can simply patch and use the web safely.

I won't be surprised if anti-virus vendors such as McAfee, Kaspersky and Symantec get into this type of venture.

Still, there will be some problems in situations where the client side of the application is vulnerable to issues like DOM-based XSS. To a great extent, this can be handled by our solution by encapsulating the page DOM inside a jailed environment. The solution will be messy, but far from impossible.
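A full "DOM jail" is well beyond a blog post, but one small piece of it can be sketched: before client-side code writes untrusted input (say, `location.hash`) into the page, a protective wrapper could HTML-encode it so injected markup arrives inert. This is only an illustration of the encoding step, not the jailed environment itself, and the function names are made up.

```javascript
// HTML-encode the characters that let untrusted text break out into markup.
// The ampersand must be replaced first so earlier entities are not re-encoded.
function encodeForHtml(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// A jail could wrap document.write-style sinks so encoding is automatic.
// `doc` stands in for the real document object.
function safeWrite(doc, untrusted) {
  doc.write(encodeForHtml(untrusted)); // <script> tags come out as text
}
```

Wrapping every dangerous sink (`innerHTML`, `eval`, `document.write`, ...) this way is exactly the messy part, since injected script that runs first can simply restore the originals.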

To summarize, this is how the proposed system should work:

The server-side solution validates the input and makes sure that the requests and responses are RFC compliant. The client-side solution embeds itself at each stage and verifies the integrity level of the client. For further protection, the solution wraps the page DOM to protect against common DOM-based XSS issues.

This is what I call Application Layer Anti-virus or Firewall. It is feasible to construct something like this and I believe it is very likely to happen in the near future.

Aodhhan
I think the idea is a good one, and it is already being used for VPN connections. A computer is put into "limbo" until it is verified to have certain applications, and that they are up to date. I don't think it would be successful if the goal was to report on all applications a client computer has before it connected to a host web site. However, I do believe it is possible for operating systems and anti-virus/anti-spyware applications to report their presence, status and whether or not they are up to date to the browser (actually to a file which the browser could parse). This way, the host web site could interrogate the browser to find out whether the operating system is up to date on hot-fixes and whether an anti-virus is in use and also up to date, before allowing it full access. In today's movement towards Web 2.0 architecture, I think your idea is right on track... and I too can see this happening in the near future.
pdp
So, I guess an anti-virus agent could supply browser plugins the same way other 3rd-party components (Flash, QuickTime, PDF) do. The client-side script instantiates an anti-virus agent and performs several queries to verify the integrity of the system. If the system is OK, then the client is allowed to continue. However, although in general it sounds like a good idea, I can definitely see situations where the anti-virus agent provides too much information or is itself vulnerable to some type of attack. Good stuff all together. However, with this post I was trying to show a server-side solution with a little bit of JavaScript that performs simple checks, such as what the current version of PDF is and what the version of QuickTime is, etc. Of course this is quite limited. I wonder whether browsers will support anti-virus or firewall objects in the future. Something like: window.antivirus or window.firewall
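As a purely hypothetical sketch of what such an object might look like (no browser exposes window.antivirus; every name here is invented, and the object is mocked so the calling code can be shown):

```javascript
// Mock of an imagined window.antivirus object -- nothing like this
// actually exists in any browser.
var fakeWindow = {
  antivirus: {
    engineVersion: '1.0',
    upToDate:   function () { return true; },
    lastScanOk: function () { return true; }
  }
};

// What a login page's injected check might do with such an object:
function clientLooksHealthy(win) {
  if (!win.antivirus) return false;  // no agent available, assume the worst
  return win.antivirus.upToDate() && win.antivirus.lastScanOk();
}
```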
Acidus
Repeat after me: You cannot trust the client! At least in a VPN environment you can use certs and digital signatures to ensure the integrity of the binary client. This can't happen with a browser without significant re-architecture. I understand your solution is aimed at protecting legitimate users. So you assume no one would bypass the restrictions/enforcement applied to the client just to gain access to the site. Ignoring the social engineering opportunities here ("Hello, please upgrade to Acrobat 31337, here is the URL..."), I am very wary of this idea mainly because of the mistakes devs will make when they are using something that "verifies the integrity level." It's trivial for devs to make the small (and faulty) logical leap and think that the client is now somehow trusted. I can easily imagine a dev using this system saying something like "well, if they got this far, I know they are safe, so let's do [insert bad security practice here]." I think this idea is interesting, but I worry that it will create a false sense of security in the client that causes devs to make silly mistakes. Of course, they are already making silly mistakes... :-)
Sp0oKeR
Maybe if anti-virus solutions use a project like http://www.security-database.com/ssa.php it'll be very nice! =) Nice article and idea =) Regards, Sp0oKeR
Jordan
Instead of Anti-Virus, the current buzzwords would be either "Content Filtering" or "Endpoint Compliance", depending on which particular aspect you're talking about. Either way, Anti-Virus isn't ambiguous -- it's just wrong. ;-) It's a really interesting idea in general, but here's one problem I see -- if the user's browser is compromised, /ANY DATA/ from the browser is suspect. That means your window.antivirus or window.firewall object is too. The threat isn't that the /user/ will try to bypass the protection to access the site, so much as it is that malware on the machine (or malscripts, or whatever) will try to bypass the protection for the site. Of course, even in that case, if you assume that the client has a period of time where he's not infected but is vulnerable and visiting the site, this idea might help get him to upgrade faster, so it's still got potential. You'll run into the exact same situation that Cisco finds themselves in now with their NAC being subverted: http://www.darkreading.com/document.asp?doc_id=120852
Arthur
This idea has been worked on by companies like WholeSecurity and Sygate, both of whom were purchased by Symantec, particularly on the securing-the-VPN front.
Ivan Ristic
Actually, I think the term Application Layer Firewall should be used to protect all sides involved in communication, servers (applications) and clients equally. Although ModSecurity is typically seen as a solution to protect the server side, this is just a matter of perception. For example, you can configure Apache to work as a forward proxy, add ModSecurity to it, and configure the network to force all clients to go through Apache for their HTTP needs. In this situation ModSecurity will protect the clients, as it allows you to inspect the content before it is delivered. While it is true that ModSecurity is better equipped to protect the server side, this is only the current situation. Improving support for forward-proxy deployments is on my TODO list. FYI, I have already experimented with content injection in ModSecurity (i.e. a server-side Greasemonkey). This feature will be published later this month as part of ModSecurity v2.2-dev1 and discussed in my presentation at OWASP Europe in Italy in May (http://www.owasp.org/index.php/6th_OWASP_AppSec_Conference_-_Italy_2007/Agenda).
PoYoX
I'm not sure that a site could be compromised by a vulnerable client. I have seen 2 approaches to building a solution for the client side: 1. Some guys from Hauri Antivirus told us at a conference about the solution in the Asian banks. They developed an ActiveX component to verify the integrity of the machine (not only the browser) before the client could get into the bank page. 2. There is SpyBye, a tool for checking URLs while browsing. From the site: "It functions as an HTTP proxy server and intercepts all browser requests. SpyBye uses a few simple rules to determine if embedded links on your web page are harmless, unknown or maybe even dangerous". I think this topic of securing web clients from malicious servers is very hot now. Regards
Tom
I'm sorry, but this solution just won't work. If you have an application that is vulnerable to XSS, you can't implement JavaScript functionality that would reliably protect the client. There is no way to prevent my injected JavaScript from simply overwriting all of your functions, for example. The only way to implement real client-side protection is to incorporate that functionality into the browser. There are a number of Firefox plugins that attempt to address this problem, like Firekeeper. I'm not sure how well they work, but at least they have the potential to help address the problem. JavaScript just isn't able to do that.
pdp
wow, so many comments in such a short time...

Acidus, we don't really need to re-architect the whole thing. All we need to do is somehow make sure that the client integrity is not compromised. In many cases that won't work. However, even if it improves the situation by just 30%, I believe it is worth the effort. There is no secure system for as long as it needs to be used. All we need to do is find the balance between security and accessibility.

Jordan, well yes, if the client system is compromised by some sort of malware, then everything that comes in or goes out is also subject to the attacker's wishes. However, this is not the point I am trying to make. This system protects when the client machine is not fully patched and could be compromised along the way.

Ivan, I am not sure what server-side Greasemonkey scripts do, but one particular instance where ModSecurity fails to protect is when the client-side logic is also vulnerable to something like DOM-based XSS. We are going to see a lot more of these in the future, mainly because everyone is going AJAX. How does ModSecurity protect against the PDF UXSS issue?

Tom, you are right, we cannot prevent XSS-injected JavaScript from overwriting certain functions. However, it is the server-side component's job to prevent this from happening in the first place. Moreover, I've seen some AJAX solutions (GMail) where the structure of the application is so weird that if you try to replace or wrap particular functionality, you almost end up breaking everything.
Aodhhan
I don't think anyone made the claim this would be the security invention of the century. To be honest, I don't trust one single appliance, application, user, administrator or client which touches a network. Every one of them can be defeated. Hence security professionals implement a policy of "defense in depth". Saying that something won't work because... displays ignorance. "What if it becomes compromised..." pretty much applies to everything. Firewalls won't work because they can be compromised. Proxies won't work because they can be bypassed. Certificates won't work because I can make a copy of one. VPNs won't work because of all of the above. Wireless won't do because the signal can be intercepted and decrypted. Can't use Windows, UNIX, Linux, Oracle or thousands of applications because there are vulnerabilities in them. It is easy to say something will not work or cannot be done. What makes you the big dollars and gives you credibility is finding ways to make it work. The idea is still a good one. Another possible future choice for defense in depth.
celf
pdp - you responded to Tom by talking about the server side. Yup, that's the place to do it alright (but isn't this a discussion of client-side protection?). Heck, we can fix all this jive with proper server-side output encoding. You stated above "...site.com informs Joe that unless he patches his system, he won't be allowed to enter." I don't really think that a business that runs an app with thousands of users is going to start turning them away because they run some antiquated or unpatched browser. Not for any customer I've ever assessed (lots). That could mean lost revenue, and that's just as unacceptable as a security issue.
GSE
I personally believe that you cannot dictate security controls to a customer. In your example, if Joe is unable to access site.com due to patching issues, I would think that Joe would simply go to competitors.com and perform his transactions elsewhere. VPN is not really a good analogy for this, since most VPN users have no other choice. For websites (especially ecommerce), individuals do have a choice. In my opinion, the only way to ensure that a site is secure is a multi-pronged approach:

- Ensure that security is a fundamental part of the defined SDLC that is used to create the web application.
- Have a defined response plan to be able to evaluate, prioritize, test, and remediate vulnerabilities found within the application's architecture and the application itself.
- Realize that there is always going to be a risk in doing business on the web, and take server-side controls to mitigate that risk.

One item that I am surprised I do not see used more is something that banks/credit card issuers do to help protect themselves: profile the normal usage of an application and alert on behaviour that falls outside the norm. Application firewalls, at least in my experience, seem to cause more issues than they solve. I know of multiple businesses that deployed app firewalls where it became harder to ensure that the applications were written with security in mind. Input validation was not focused on because "the app firewall does that for us". Product management started to spend money that was previously allocated to security functionality on other items since "the application firewall does a better job". Deploying application firewalls requires a lot of level setting on the amount of security that they truly give an organization.
pdp
GSE, interesting points altogether.