The Generic XSS Worm

Wed, 20 Jun 2007 22:52:15 GMT
by david-kierznowski

When we think of computer worms, we generally think about operating-system based worms such as the famous Code Red, which replicated itself 250,000 times in approximately nine hours on July 19, 2001. Its replication was made possible by a buffer-overflow vulnerability in Microsoft's IIS web server. Firewalls and defense in depth help mitigate the spread of such worms by providing layers of protection between public and private networks; however, a new breed of worm is upon us: the XSS worm, also known as the Web 2.0 worm.

The Samy XSS worm, the first of its kind to make headlines, carried a payload that would display the string "but most of all, Samy is my hero" on a victim's profile. When a user viewed that profile, the worm would infect the visitor's profile via a Cross-Site Scripting payload. Within just 20 hours of its October 4, 2005 release, over one million user profiles were infected, making Samy one of the fastest-spreading viruses of all time. In this post, I am going to summarize several publicly discussed JavaScript malware techniques and sketch what future worms may look like, based on what we have today.

XSS is a powerful engine on which to build a platform-independent virus, and whether we are ready or not, attackers are definitely going to be utilising these techniques in the future, so the groundwork and education must be put in place now. Think about this: an XSS engine has the potential to propagate much faster than an operating-system based worm, requires less effort in many cases (not all), requires little or no authentication, and is client-side, which means it can propagate across network boundaries by utilising the user's circle of trust. This is almost the antithesis of traditional worms, which are server-side, struggle to propagate across network boundaries, and usually require high operating-system level access.

XSS engines will require these fundamentals:

  • ability to identify targets (the XSS vulnerability);
  • an XSS payload (the purpose or exploit); and
  • continuity (the propagation technique)

This article will introduce Scrape, Specific and Generic XSS engines.

Scrape

Scrape XSS worms utilise external resources to identify targets (e.g. Google, xssed.com). pdp recently released an article titled The Next Super Worm, which basically uses xssed.com to identify vulnerable targets. Note that this example "scrapes" the targets from a third-party resource.
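
To make the scraping step concrete, here is a stub-level PHP sketch of how such an engine's target-identification stage might gather candidate URLs. The feed URL and the markup pattern are hypothetical placeholders (they do not reflect xssed.com's real format), and the sketch stops at simply listing the URLs:

    <?php
    // Hypothetical "scrape" target-identification step. The feed URL and
    // the regular expression below are made-up placeholders, not any real
    // site's format.
    $feed = file_get_contents('http://feeds.example.com/disclosed-xss.html');

    $targets = array();
    if ($feed !== false &&
        preg_match_all('/<a href="(http:\/\/[^"]+)"/', $feed, $matches)) {
        $targets = array_unique($matches[1]);
    }

    // A scrape-style engine would hand $targets to its payload and
    // propagation stages; here we only print them.
    foreach ($targets as $url) {
        echo $url . "\n";
    }
    ?>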

Specific

Specific XSS worms usually have an individual target - the Samy XSS worm is a perfect example here. It exploited a specific vulnerability in a specific target. Its focus is not to spread across networks but rather to remain in one place, infecting a particular aspect of a website or service.

Generic

Generic XSS worms make assumptions about their targets. An example here would be a worm which exploits environment variables in application frameworks, such as PHP's $_SERVER['PHP_SELF']. This method is ideal for blind XSS worms, where you do not know what the web server is running (e.g. my wp-scanner tool uses generic XSS tests to find vulnerabilities in WordPress themes; it doesn't care which theme the user is running). Another really good example is Solarius's recently found XSS vulnerability in ASP.NET's PATH_INFO variable, which affects the latest version of SharePoint and possibly many other ASP.NET applications. The Generic XSS engine category could also extend to include web server or application flaws (e.g. XST, the Universal PDF XSS, etc).
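
To make the PHP_SELF case concrete, here is a minimal sketch of the vulnerable pattern and its usual remediation (the file name is hypothetical):

    <?php
    // form.php (hypothetical) - vulnerable pattern: PHP_SELF can include
    // attacker-controlled path info, so a request such as
    //   /form.php/"><script>alert(1)</script>
    // is reflected straight into the page markup.
    echo '<form action="' . $_SERVER['PHP_SELF'] . '" method="post">';

    // Safer: HTML-encode the value before output (or hard-code the action).
    echo '<form action="'
       . htmlspecialchars($_SERVER['PHP_SELF'], ENT_QUOTES)
       . '" method="post">';
    ?>

Because the injection lives in a framework variable rather than in any one application's logic, the same test can be thrown "blind" at many different PHP applications, which is exactly what makes this category generic.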

I hope I have given you some food for thought and encouraged the Internet community to move forward in coming up with new techniques, methods and strategies to combat the rise of client-side flaws. Web 2.0 security will require us to lengthen our strides if we are to come up with effective solutions; a number of excellent contributions and ideas have already emerged, and we encourage these individuals and organizations to continue.

Archived Comments

ntp
Browser developers should build anti-XSS, LocalRodeo, content-restrictions, and SafeHistory/SafeCache concepts into the browser. Someone (preferably browser developers, but it could be administrators) should separate browser profiles (different process IDs) into at least two separate browsers/browser profiles: 1) Firefox for the Internet, which will not connect to RFC 1918 addresses; and 2) Firefox for your intranet, which will only connect to RFC 1918 addresses. Jeremiah Grossman suggested a way to make this more transparent, but I suggest this stricter method.

Vulnerability hunters should disclose responsibly to security, cert, noc, hostmaster, postmaster, and webmaster email addresses at top-level domains, preferably to someone who has a GPG public key available on a key server (or use something similar like Hushmail or IBE). They should also check for any incident response policies and vulnerability reporting processes the organization or website owner may already have in place. There is no reason to go to another source or through an intermediary. If all email bounces or there is no response in two weeks, vulnerability hunters should try calling a person, acquiring the phone number of an appropriately titled employee from jigsaw.com, spoke.com, or linkedin.com, or from whois as a last resort. If you can find an XSS, you should know how to social engineer or footprint an email address and phone number.

Administrators should respond to and fix problems within two weeks (XSS moves faster than buffer overflows), or full disclosure is fully appropriate. Like I've said before: if administrators can respond to an SSL certificate expiring within two weeks, then they can also respond to an XSS finding in the same amount of time. The vulnerability researcher can then co-ordinate a fix/disclosure schedule with the vendor and should provide remediation support if necessary. If you can find an XSS with a scanning tool, you should learn how to fix them for others.

IT administrators for users should set up the ability to whitelist URLs (e.g. http://whitetrash.sf.net ) if they think their environment can support it. They should also ensure OS, browser, and browser-plugin automatic updates. Administrators should scan and patch, just like regular vulnerability management. Good administrators will enforce use of a safe browser similar to the one described in my first paragraph.

Most importantly, developers should respond to issue tracking about XSS findings (reported by the administrators/operators) immediately, as the highest priority... higher than application availability issues (24x7x365). They should always use the validators that are built into their framework properly. Before coding anything, they should have an enforceable coding standard built into their IDE. They should use source and static-file code checkers in their IDE. They should use source checkers (again), static-file checkers (again), and path/code coverage tools (e.g. concolic unit testing) that run fuzz testing across all inputs at build time, including on dependencies. They should combine model checkers and continuous integration tools (especially build schedulers), ensuring a clean release.

Applications should be monitored for security events using application monitoring tools (not network security tools or log management tools) by both operators/administrators and developers. Web Application Security Scanners and Web Application Firewalls are OPTIONAL but NOT REQUIRED.

I refer to this strategy as NTPolicy. You may use or change it however you like.
pdp
ntp, that was a great comment. IMHO, things are a bit more complicated. You say:
Before coding anything, they should have an enforceable coding standard built into their IDE. They should use source and static-file code checkers in their IDE. They should use source checkers (again), static-file checkers (again), and path/code coverage tools (e.g. concolic unit testing) that run fuzz testing across all inputs at build time, including on dependencies. They should combine model checkers and continuous integration tools (especially build schedulers), ensuring a clean release.
Of course, this is the way forward. However, scripted languages are very hard to examine automatically, mainly because they are nothing like compiled languages. Most of the IDEs that I've seen with these types of facilities work only for .NET or Java; nothing else is supported. Although it may seem that most corporate applications use something like that, the truth is that there is a plethora of things that cannot be easily ignored. But yes, you are right. We should do it.
david.kierznowski
I don't think there is a clean cut solution right now, we are in for the long-haul.
ntp
Of course, this is the way forward. However, scripted languages are very hard to examine automatically, mainly because they are nothing like compiled languages. Most of the IDEs that I've seen with these types of facilities work only for .NET or Java; nothing else is supported. Although it may seem that most corporate applications use something like that, the truth is that there is a plethora of things that cannot be easily ignored.
Scripting languages have one great aspect: they are focused on Test-Driven Development (TDD) to "get all the bugs out". This is fast, works extremely well, and "drives" development to the end goal. Unfortunately, tested code is not the end goal. Also - even more unfortunately - DFT (Design for Test) gets skipped. How could Agile developers skip the most important part? Well, I guess they don't call them script kiddies for nothing! Also see http://www.jwz.org/doc/cadt.html (although I would argue that "testing is fun", especially when you find security-related bugs or flaws).

Microsoft uses the SDL, which is largely a waterfall model. The end goal of the SDL is security. Microsoft actually has a better model, where the end goal is everything everybody really wants. It's called Design for Operations (DFO): http://www.codeplex.com/dfo/. Unfortunately, tools like VSMMD for DFO only leak out from Microsoft once every two to ten years, and usually in crippled form. Their internal tools, such as PreSharp (a source checker for C#), FxCop (a static-file checker for the CLR), Magellan (code coverage), and FuzzGuru (path/code coverage with fuzz testing) - as well as their internal coding standards (esp/espx) - are not well known. Sure, they do have some public tools (that still require VS 2k5/2k8) for coding standards, SAL annotations, and model checking (AppVerif).

But let's take a look at what Java offers, all as open source. Java, with Eclipse plugins, supports CheckStyle, FormatOnSave, and many other options for enforcing a particular coding standard. Source checkers (PMD, Jlint, Hammurapi) and static-file checkers (FindBugs, SofCheck) are available both in the IDE and in the build tools (e.g. Ant, Maven2, etc). Model checkers, such as Java PathFinder, have some amazing capabilities - and even the Java-specialized tools for CI and build automation (e.g. Luntbuild) are quite complete for this type of work. Scripters can do the same (Groovy especially), but Ruby, Python, and probably even PHP can have a more planned/focused development model and use the right tools. It's not just Java and .NET.
ntp
Oh I forgot to add Java code coverage tools such as EMMA (and EclEmma for Eclipse) and the combined path/code coverage tool with fuzz testing (jCUTE). Note that Ruby has rcov for code coverage and supports many build options with rake.
David
Guys, thanks a lot for sharing this info. Could anybody tell me whether XSIO (Cross Site Image Overlaying) still works on social sites like MySpace, Digg, etc.? I tried to find this vuln, but I couldn't... :(
pdp
David, they are aware of the technique, so I guess it will be harder.