XSS Worms and Mitigation Controls

Sat, 23 Jun 2007 13:52:13 GMT
by david-kierznowski

NTPolicy is a set of ntp's ideas for mitigating XSS worm potential. He offered these ideas in response to our post, "The Generic XSS Worm", where we reached out to the community to brainstorm ideas to solve the XSS crisis. I have summarized his thoughts below, with my comments beneath each point.

Implement logical separation for Internet zones via a browser policy, or use different versions of the browser for different levels of network access.

For clarity, we obviously mean implementing this a layer above the current same-origin policy, or else XSS or future attacks could be used to circumvent these controls. The idea here is to create a logical separation between RFC1918 address space (private and reserved IP ranges) and the Internet; a rough sketch of what such a zone check might look like follows the list below. There are a few challenges, namely:

  • A number of big names such as Google and Adobe are pushing applications that bridge the desktop and browser, so you'd certainly be swimming against the current on this one.
  • How will this affect other browser controls, such as file uploads and other DOM-based controls? Imagine the costs involved.
  • Is it really feasible to expect users to maintain a different profile for viewing, say, an intranet site versus browsing the Internet?
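The zone check itself is simple enough to sketch. The following is only an illustration (Python, using the standard ipaddress and socket modules); the function names are hypothetical and not part of any browser API.

    import ipaddress
    import socket

    def is_private_zone(hostname):
        """Return True if the host resolves to an RFC1918/reserved/loopback address."""
        try:
            address = ipaddress.ip_address(socket.gethostbyname(hostname))
        except (socket.gaierror, ValueError):
            # Unresolvable hosts are treated as Internet zone in this sketch.
            return False
        return address.is_private or address.is_reserved or address.is_loopback

    def request_allowed(origin_host, target_host):
        """Deny requests that cross from the Internet zone into the private zone."""
        if not is_private_zone(origin_host) and is_private_zone(target_host):
            return False
        return True

A browser enforcing this a layer above the same-origin policy would apply an equivalent check before fetching any sub-resource requested by a page in the Internet zone.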

Vulnerability researchers and developers need to treat XSS as a high priority to ensure quick remediation.

This is the human factor, so it will require a reward and penalty system to encourage it; or has anyone got any mind-control syrup handy? The XSS risk rating has definitely risen since the Samy worm, and research conducted by ourselves and others has helped awareness, but it is an optimistic view indeed to think corporate bodies will drop everything to resolve an XSS issue. In fact, I was in a meeting of exactly this sort recently.

Use whitelists and update services effectively to prevent exposure and to decrease the attack surface.

This is an excellent suggestion and one that most companies should already have in place via proxy servers. However, this doesn't really help our home users, and looking at xssed.com, I don't think whitelists are going to help a great deal right now, though they certainly will in the future.
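In proxy terms the idea is straightforward; a minimal, hypothetical sketch of an outbound whitelist check (the host list and function name are made up purely for illustration) might look like this:

    from urllib.parse import urlparse

    # Illustrative whitelist; in practice this would come from the proxy's policy.
    APPROVED_HOSTS = {"www.example.com", "intranet.example.local"}

    def should_fetch(url):
        """Allow outbound requests only to approved hosts."""
        host = (urlparse(url).hostname or "").lower()
        return host in APPROVED_HOSTS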

Developers should use a vul-IDE tool to alert them to poor coding decisions.

Yes, this would help to a degree, but again the cost and complexity of these tools are not attractive to the general market.
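To make the idea concrete, here is a deliberately naive sketch of the kind of check such a tool might run, flagging obvious XSS sinks fed by request data. The patterns are illustrative only and would miss far more than they catch.

    import re

    RISKY_PATTERNS = [
        re.compile(r"document\.write\s*\(.*location"),            # DOM-based sink
        re.compile(r"innerHTML\s*=.*(location|document\.URL)"),   # DOM-based sink
        re.compile(r"echo\s+\$_(GET|POST|REQUEST)"),               # PHP output without escaping
    ]

    def scan_source(path):
        """Print a warning for every line that matches a risky pattern."""
        with open(path, encoding="utf-8", errors="replace") as handle:
            for number, line in enumerate(handle, start=1):
                for pattern in RISKY_PATTERNS:
                    if pattern.search(line):
                        print(f"{path}:{number}: possible XSS sink: {line.strip()}")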

Logging and alerting on web applications to detect attacks.

Every company should already be doing this. It would, however, require human intervention, which is costly, as XSS attacks can be represented in a variety of ways, making them very difficult for alerting systems to detect. Also, some XSS vectors may be DOM- or browser-based, so the attack materialises on the client side rather than via the network.
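The crude server-side signal looks something like the sketch below (Python; the patterns are purely illustrative). As noted above, encoding tricks and DOM-based vectors will slip straight past this kind of check, so it is one signal among many rather than a detector.

    import re
    from urllib.parse import unquote

    SUSPICIOUS = re.compile(r"<script|javascript:|onerror\s*=|onload\s*=", re.IGNORECASE)

    def looks_like_xss(query_string):
        """Flag request strings containing common, unobfuscated XSS markers."""
        return bool(SUSPICIOUS.search(unquote(query_string)))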

Security testing is optional

Car manufacturers would NEVER release a car onto the market without sufficient safety and quality assurance tests, so why would software be any different? Looking over the above options, security testing is probably one of the most cost-effective and security-effective solutions available.
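As a small example of what security testing can mean at the developer level, a test like the following costs almost nothing to write and run on every build. It is illustrative only; render_comment here stands in for whatever output layer the application really uses.

    import html
    import unittest

    def render_comment(text):
        # Stand-in for the application's real template/output layer.
        return "<p>" + html.escape(text) + "</p>"

    class EscapingTest(unittest.TestCase):
        def test_script_tag_is_escaped(self):
            rendered = render_comment("<script>alert(1)</script>")
            self.assertNotIn("<script>", rendered)

    if __name__ == "__main__":
        unittest.main()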

Some interesting and well-thought-out ideas there by ntp. I think these mitigation controls, or at least most of them, belong in policies later on, after we have solved the initial problems. The initial challenge can be summed up in these wise words: "How do you eat an elephant? One piece at a time." I think we are in for the long haul; a quick solution just won't cut it. Carefully thought-out procedures, standards and metrics need to be in place first.

Archived Comments

ntp
Thanks for posting about my thoughts. Jeremiah said, "Restrict websites with public IP’s from including content from websites with non-routable IP address", as seen here: http://jeremiahgrossman.blogspot.com/2007/02/if-we-could-start-all-over.html and he also said it slightly differently here: http://jeremiahgrossman.blogspot.com/2007/01/3-wishes-for-web-browser-security.html

As much as I dislike making changes (which usually means adding features instead of fixing bugs) to protocols and languages, there are two things going on in the short-term future which could affect XSS and similar attacks: 1) ECMAScript 4 progress, and 2) the httpbis BOF at IETF-69. What would we want to design into these to make them safer and provide security with a higher level of assurance?

Otherwise, I enjoyed how you summarized (edited?) my suggestions. Here's one point I wanted to elaborate further on: "Security testing is optional". I said that Web Application Security Scanners and WAFs are optional. Testing should be done by the developers, as I described. If you are a developer and feel that you are missing tools that will help you eliminate XSS in your code completely, let me know and I'll find something to help you or make the process almost completely automatic. Or we'll take it up as a project and make it happen. Even for ColdFusion.

If we rely on pen-testers and vulnerability hunters to find XSS, and do nothing else, then we'll continue to be in the same situation we are in right now. Elimination of CSRF is step two, but near-pointless if we don't have a global strategy to defend against XSS worms.
Roland Dobbins
Why have you turned off full-text syndication feeds? Full-text feeds are very important to folks who have lots of feeds to read each day. Please re-enable full-text feeds. Many thanks!
Giorgio Maone
FYI, the current stable version of the NoScript Firefox extension - http://noscript.net/getit#direct - aside from the notorious JS whitelist, already implements:

1. Unconditional XSS filters on every request (GET/POST/...) originated by untrusted origins (i.e. non-whitelisted sites or external applications, e.g. an email client) landing on whitelisted pages - http://noscript.net/features#xss
2. "Light" script injection detection on every GET request (even from trusted sites), triggering the aforementioned XSS filters on suspicious URL patterns
3. Java, Flash and other plugin content blocking - http://noscript.net/features#contentblocking

Other relevant Anti-XSS and Anti-CSRF features in the works are:

1. A better implementation of the LocalRodeo concepts (made easy by the request interception framework already used by the Anti-XSS filters)
2. A facility to flag some IFrames as "scriptless" and "XSS checked", usable either by the web developer or by the user (sort of "" preview leveraging currently available browser technology)
3. A "Mashup manager" to declare web application trust boundaries (i.e. whitelists of sites that are allowed to perform cross-site requests in a certain "mashup"), definable either by the web developer dropping a "mashup.txt" file in the web-app root directory or on the client side via the UI. This will overcome the inherent unreliability of REFERER header checks.
4. Finer-grained per-site permissions/restrictions
pdp
Giorgio, thanks for the info. We follow NoScript development very closely. Roland, yes, full-text feeds are disabled for now. We are experimenting with a few things at the moment. We are going to keep the situation as such till the end of the month. Then we are going to decide which way to go.
Tim Brown
"Given the issues that Javascript injection poses, it is questionable whether it should be enabled by default on web browsers as they are supplied to members of the public. It is also questionable that Javascript should be considered an all or nothing option. Web browser developers need to up their game and start to provide sandbox functionality similar to that found in JVM, with options to limit access to dangerous interfaces on an individual site by site basis in a granular manner. It should also be considered whether it would be possible for Javascript code to be signed in a manner which makes use of existing PKI to lessen the opportunities available to malicious code." -- I wrote this over a year ago in my paper http://www.nth-dimension.org.uk/pub/MUJSI.pdf, but it still holds true today. NoScript and the mechanisms employed by Konqueror for example are part of the the solution, but when are web browsers going to support/push coding signing for Javascript (http://www.mozilla.org/projects/security/components/signed-scripts.html) down the throats of developers. This could make a real difference without breaking the functionality of Javascript reliant sites.
ntp
NoScript is what I meant by Anti-XSS features, although I do know that Microsoft is working on a separate one based on their .NET Anti-XSS library. RSnake has blogged about these in the past.

@Tim: on Javascript sandbox functionality - 1) HTTPOnly, which has a broken implementation and design; 2) Content-restrictions, which I already mentioned and which are improvements on HTTPOnly. Browser developers need to put this stuff in the browser so that web application developers can start using it.

Signing JavaScript code is by far one of the best options we have. I mention this very often. I think it does have a place in NTPolicy, but in all seriousness, I think it's the least likely for any organization to implement. I would say in 100% of cases it will be the last thing to implement, and thus a waste of time in comparison to everything else. I guess maybe the same can be said of content-restrictions, but I try to be optimistic since it is so badly needed.