The Shadow

Fri, 02 Feb 2007 14:26:48 GMT

Let's start this conversation with a quick overview of the browser security model.

We all know that every modern browser has a security sandbox, also known as the same origin policy. This sandbox prevents scripts from one site from accessing information belonging to a different site. If this restriction were not in place, anyone would be able to hijack your Gmail, Yahoo or Microsoft Live account (if authenticated) by simply reading your session information. The same origin policy also prevents scripts from retrieving the content of remote resources that are not part of their origin/domain. This restriction exists to prevent the remote server from leaking sensitive information. The same origin policy is far more complicated than what I have just covered, but this is enough material for the purposes of this article.
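To make the policy concrete: the "origin" of a page is effectively the combination of its scheme, host and port. The following sketch is my own illustration of the comparison a browser performs before allowing script access, not actual browser code:

```javascript
// Illustration only: two URLs belong to the same origin when their
// scheme, host and port all match. Real browsers apply additional rules
// (document.domain, frames, etc.) not modelled here.
function sameOrigin(a, b) {
    var ua = new URL(a);
    var ub = new URL(b);
    return ua.protocol === ub.protocol &&
           ua.hostname === ub.hostname &&
           ua.port === ub.port;
}
```

Under this check a script served from http://mail.google.com can read http://mail.google.com/inbox, but not https://mail.google.com (different scheme) and certainly not another domain.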

Obviously attackers are looking for information that is valuable. We said that the browser does a good job of securing this information, but the question is whether this is enough. Because of the same origin restrictions, attackers try other means of achieving what they want. As always, the easiest way to do that is to play by the rules. If the browser disallows malicious scripts from accessing information from a given domain, then change the origin and the restriction is bypassed. This is where attacks such as Cross-site scripting come into play.

Once a Cross-site scripting attack is in motion, the attacker can cause quite a lot of trouble by simply hijacking your account. However, if the target happens to be on a site that does not offer much of value, then I guess most of you will conclude that the XSS vector is completely wasted. Yes? Not if the attacker sends a shadow after you.

What you must understand is that attackers have achieved some kind of control over the target and they will try everything in their power to preserve it. This is not easy in terms of WEB technologies, because we all know that the WEB is stateless and highly dynamic. If the target moves away from the Cross-site scripted resource, the control is lost.

We, as computer security professionals, went a little bit ahead of the attackers and developed ways to hijack the user experience across an entire domain. This is done by employing various XMLHttpRequest and IFRAME techniques. For a demonstration of such an attack vector, I have enclosed the following snippet, extracted from the Atom Database.

function framejack(url) {
    // create an IFRAME that loads the supplied URL
    var ifr = document.createElement('iframe');
    ifr.src = url;

    // stretch the frame over the entire window so it covers the page
    ifr.style.position = 'absolute';
    ifr.style.width = ifr.style.height = '100%';
    ifr.style.top = ifr.style.left = ifr.style.border = '0';

    // hide the scrollbars (IE-specific) and attach the frame
    document.body.scroll = 'no';
    document.body.appendChild(ifr);
}

If you look at the code, you will see that when the framejack function is called, an absolutely positioned IFRAME is placed on top of the current window. As the target interacts with the page, the session is preserved. This is great, although obviously suspicious.

What might be better is to continue exploiting various Cross-site scripting flaws as the target moves. As such, if the target is on siteA.com and they click on a link to siteB.com, the malicious code picks a vector for siteB.com, and although the target really does land on the specified domain, the control is preserved, i.e. a shadow is spawned.
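The hop from one domain to the next can be sketched as a lookup in a pre-computed table of known vectors. Everything below is a hypothetical illustration: the domains, URL patterns and the vectorDB table are made up, and PAYLOAD simply marks where the shadow would re-inject itself:

```javascript
// Hypothetical pre-computed database of XSS vectors, keyed by host
// (keys kept lowercase, since URL parsing lowercases hostnames).
var vectorDB = {
    'siteb.com': 'http://siteb.com/search?q=PAYLOAD',
    'sitec.com': 'http://sitec.com/index.php?page=PAYLOAD'
};

// Given the link the user clicked, return a poisoned URL that both takes
// the user to the destination and carries the shadow along, or null when
// no vector is known and control is lost.
function nextHop(link, payload) {
    var host = new URL(link).hostname;
    var vector = vectorDB[host];
    return vector ? vector.replace('PAYLOAD', encodeURIComponent(payload)) : null;
}
```

When the target clicks a link to siteB.com, the shadow rewrites the navigation to go through the known vector instead; for a domain with no entry, it simply lets the user go and disappears.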

The real challenge is to find as many Cross-site scripting vectors as possible. It sounds insane to think that this can be achieved dynamically, although I am far from thinking that it is impossible. However, for practical reasons, attackers may want to know about the various Cross-site scripting attack vectors in advance.

A simple scan for the most obvious Cross-site scripting issues could prove to be quite useful. Google is also a valuable resource for discovering various input injection flaws. So it is a matter of constructing a big enough database.
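The scanning step can be sketched roughly as follows: for every query parameter of a candidate URL, generate a probe request that injects a unique marker; if the raw marker is later found unencoded in the response body, the URL pattern goes into the database. The function below only builds the probe URLs (fetching and response inspection are left out), and the marker name is my own invention:

```javascript
// For each query parameter of a URL, produce a probe URL with a unique
// marker injected in place of that parameter's value. Reflected XSS is
// suspected when the marker comes back unencoded in the response.
function buildProbes(url, marker) {
    var probes = [];
    new URL(url).searchParams.forEach(function (value, name) {
        var p = new URL(url);       // fresh copy per parameter
        p.searchParams.set(name, marker);
        probes.push(p.toString());
    });
    return probes;
}
```

A spider that applies this to every link it crawls, starting from a given page, would produce exactly the kind of vector database the shadow needs.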

One important thing to remember is that the control can be lost as soon as the user accesses a page from the address bar, and you are right that this will most definitely happen. But think about web applications that you don't usually navigate away from, like Gmail or Google Reader, or even your critical corporate app. Think about Kiosks and other web technologies that prohibit the user from changing the current location via the browser address bar.

Be gone with my shadow now!

kuza55
Ok, excuse me if I'm wrong, but this seems like just another scare article; it tells us nothing about what you've come up with, only that you've come up with some attacks. But anyway: for maintaining control of a user there is one type of XSS vulnerability which can greatly help an attacker. XSS vulns which print data stored in cookies, but which you can set through request parameters, e.g. when you are allowed to choose a stylesheet for a site, or when a reflected XSS vector is printing a $_REQUEST value rather than a $_GET value. Using the example of being able to choose a stylesheet: if you create a reflected XSS vector where you set the cookie to a value that is valid, but append some data so that your script executes as well, and have the cookie set to last for years, you will effectively have control of the user's browser for the whole time they are on the site, which on sites like forums is an exceptionally long time. Furthermore, you could even go so far as to have your code rewrite the DOM so that when the user selects a new skin, the cookie is adulterated so that the XSS payload stays intact.
Anurag Agarwal
If I am reading you correctly, then you want to try an XSS attack on all the external links on a web page to see if they are vulnerable to XSS. If they are, then you spawn an iframe, but won't it be easier to replace the URL with the XSS attack vector rather than spawning a frame? The other point I would like to mention is the limitation of this approach: you can only try out XSS in the URL or header variables.
pdp
kuza55, you have probably misunderstood the article, but thanks for the info. Anurag, not exactly! What I am saying is that an AJAX worm can move with the user as long as the next page of destination is also vulnerable to XSS. It is that simple. When the user surfs further, the worm just follows like a shadow. In order to achieve that, you have to find all possible XSS vectors, in advance, for all external links given a starting point. So you need a spider that is able to detect XSS while visiting pages. Once the shadow is in motion, it will perform queries against some sort of remote database to retrieve information about the next XSS vector. When the user clicks on a link, the shadow will take that user to the specified place, but it will also be able to move itself as well. I hope the concept is now clear enough. Unfortunately, I cannot present my POC because I would have to share my research XSS database, which I believe would be very unethical. However, with a little bit of Python and a few simple XSS checks you can definitely write an XSS spider on your own. I might actually publish mine. Let me think about it first.
Anurag Agarwal
I had been thinking about the same thing. Here is my approach:

1. victimA is hijacked by exploiting its XSS vulnerability.
2. I have control over all of its links (internal and external).
3. Internal links I am not worried about, as I can pass them through my AJAX worm (you can see the PoC on my blog).
4. When it comes to external links, AJAX won't work, but I can still send all those links to the attacker.com server and, using a server-side program, check those sites for different types of XSS vulnerabilities. When one is found, I replace the link on victimA.com with the XSS attack vector, so when a user clicks, the worm is passed to the new site.

All this can be done without an iframe. If your approach is different, then I would be interested to know whenever you decide to post it.
pdp
Anurag, it is the same. However, I have all XSS vectors in advance for all external links, so the user doesn't have to wait. I have tested this technique on several Kiosk platforms and it kind of works. :)