Let's Fix The Web

Sun, 31 Aug 2008 08:12:25 GMT

I am deeply frustrated with the way the Web works today. Everything seems to be broken beyond reason. I really want to fix the damn thing, but I realize that it is not up to me alone. It is up to all of us to make sure that code is written in the most secure way possible. Can we do that? Perhaps not! What can we do then?

Before I get to the point, I need to tell you how I fixed my insecure WordPress blog. WordPress has many security shortcomings, and I was so frustrated that I decided to fix whatever I can once and for all. I believe that we can fix the Web in a similar way, but first here are all the patches I implemented:

  • mark all cookies as secure to prevent leakage over unencrypted channels
  • mark all cookies as httpOnly to prevent session hijacks due to Cross-site Scripting vulnerabilities
  • if the user tries to log in, force SSL to prevent leakage of credentials and other sensitive data
  • when logged in, make sure that all URLs are HTTPS enabled to prevent leakage of sensitive information
  • when over HTTPS make sure that all URLs that point to your domain start with https:// to prevent leakage of any data
  • restrict 443 (HTTPS) to blog users and admins only
  • disable error messages everywhere to prevent leakage of sensitive information
  • allow upload of only known file types such as jpg, gif and png (I will add a check for the gifar problem soon)
  • embed an IDS type of solution (PHPIDS in my case) to block known attacks
  • integrate with blogsecurify to enable continuous security checks and warn the admin if a problem is found

I believe that this makes the blog a lot more secure. There still might be ways to attack it, but this is all I can do reasonably without completely breaking WordPress. All of these fixes are implemented as a plugin which I will make available for free download soon.
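To make the first few rules concrete, here is a rough sketch of the cookie-hardening and forced-SSL logic expressed as code. This is a hypothetical illustration, not the actual plugin (which is PHP inside WordPress); the function names and the Node-style JavaScript are my own:

```javascript
// Hypothetical sketch: append the Secure and HttpOnly attributes to a
// Set-Cookie header value if they are missing, as the list above calls for.
function hardenSetCookie(cookie) {
  let hardened = cookie;
  if (!/;\s*secure\b/i.test(hardened)) hardened += '; Secure';
  if (!/;\s*httponly\b/i.test(hardened)) hardened += '; HttpOnly';
  return hardened;
}

// Sketch of the "force SSL on login / when logged in" rules: given the
// requested URL and whether the session is authenticated, return the
// https:// redirect target, or null if no redirect is needed.
// The /wp-login path check is an assumption for illustration.
function forceHttps(url, loggedIn) {
  const u = new URL(url);
  const sensitive = loggedIn || u.pathname.startsWith('/wp-login');
  if (sensitive && u.protocol === 'http:') {
    u.protocol = 'https:';
    return u.toString();
  }
  return null;
}
```

The same idea works in any server-side filter: rewrite outgoing headers and redirect sensitive requests before the application ever sees them.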

So how can we fix the Web? I have a few ideas in mind and all of them can be implemented. Here they are:

  • allow the user to sandbox and unsandbox applications and web resources with a single click
  • sandbox by default known applications such as GMail, Yahoo Mail, etc.
  • in the sandbox, mark all cookies as secure to prevent session leaks
  • in the sandbox, mark all session cookies as httpOnly to prevent session hijacks due to XSS
  • make sure that while on HTTPS, all embedded resources are delivered over HTTPS as well.
  • provide the option to turn off JavaScript, Java, Flash, Silverlight, etc. on a per-sandbox basis
  • block any external requests to sandboxed applications
  • implement the PHPIDS signature matching mechanism in JavaScript
  • if the HTML structure is heavily broken, block the page to prevent some types of persistent XSS
  • record SSL signatures while on a trusted network and warn if a signature changes while on an untrusted network

I think that this type of solution will make the Web a lot more secure. It definitely won't fix it, but it will make the majority of attacks significantly harder. It will block the majority of CSRF and XSS attacks. It will provide certain mitigations against persistent XSS attacks, and some mitigations against browser exploits which employ Flash or Java technology. It is not perfect, but it looks good enough to me.

Next stop: actually fixing the browser!

just me
Your ideas are very good. I wonder if a similar thing can be done for other commonly used web applications, such as phpNuke and Invision Power Board.
mindcorrosive
Your ideas, of course, assume that users are educated about secure browsing and impervious to stupidity - which is hardly the case, considering the amount of non-technical privacy and security breaches we see these days. The power users already know how to fix things and do not need much more security - it's all the other irresponsible individuals that need to be educated in the first place, instead of trying to invent an imaginary foolproof technology.
Adrian 'pagvac' Pastor
It's worth mentioning that if someone is on the same Wi-Fi network as you, SSL alone won't protect you against session hijacking: https://www.defcon.org/html/defcon-16/dc-16-speakers.html#Perry http://enablesecurity.com/2008/08/11/surf-jack-https-will-not-save-you/
pdp
just me, yes! mindcorrosive, you are right, but this is exactly why some mainstream applications like GMail and Yahoo Mail will be protected by default. Also, I am thinking that the browser should ask you how much you trust the current network every time the network settings change. adrian, why not? We force SSL, we compare the self-signed signatures against a list of SHA1 fingerprints collected while on a trusted network, and we force secure and httpOnly cookies. This setup should make you feel safe even on very unsafe networks.
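The fingerprint comparison pdp describes here is simple to state in code: learn each host's certificate fingerprint while on a trusted network, then flag any mismatch seen later on an untrusted one. A hypothetical sketch (the fingerprint strings below are made-up stand-ins for real SHA1 digests):

```javascript
// Pinned certificate fingerprints learned while on a trusted network.
const pinned = new Map();

// Record or check a host's certificate fingerprint depending on how
// much the current network is trusted. Returns one of:
// 'pinned' (learned), 'match', 'warn' (possible MITM), 'unknown'.
function observe(host, fingerprint, networkTrusted) {
  if (networkTrusted) {
    pinned.set(host, fingerprint);          // learn/refresh while trusted
    return 'pinned';
  }
  if (!pinned.has(host)) return 'unknown';  // nothing to compare against
  return pinned.get(host) === fingerprint
    ? 'match'                               // same certificate as before
    : 'warn';                               // certificate changed - warn the user
}
```

This is the same trust-on-first-use idea SSH uses for host keys, applied to SSL certificates.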
pepe
yeah... let's make millions of websites, millions of lines of code secure by telling everyone what rules they should obey... it worked so well for open SMTP relays... it works so well for phishing... Let's just list all the things that could break and implement a measure against each. It works so well with AV software suites... It's good to be in the security field: everyone constantly makes sure that there's new work to do next week.
mindcorrosive
pdp: still, it boils down to "trust". How much do you trust your ISP? Your government? Yahoo and Google, for that matter? Without a trust model built from the bottom up, it is impossible to guarantee security - it is simply "security by obscurity", i.e. malicious crackers lack the sheer labor force to harvest the web and its average inhabitants at large scale, giving the illusion of "security". Please note that I don't underestimate your efforts in that respect, I just point out what is probably obvious to everyone. I agree that secured connections are the way to go, but what good is a 4096-bit SHA-1 encrypted SSH connection to a non-trusted host? I would suggest that both parties need to identify each other - both the server and the client. What is done today is server-side only, in most occasions. Client-side is still dependent on the human factor - and that probably constitutes the largest share of security breaches these days. Why not issue a government-signed certificate to everyone - the way we get ID papers? Of course, that creates more problems along the way, but it might be a solution in the near future.
Jeff Williams
It's not easy to do all of the things on your list in some environments. The OWASP ESAPI project is defining a security API that encourages developers to do these things. The Java implementation has been out for almost a year, and the .NET and PHP implementations are in progress.
pdp
mindcorrosive, well, you cannot be 100% sure, but you still trust some networks more than others. The plugin will detect when you change network settings and will ask you how much you trust the network you are on. If you do not trust it as much as your home network, the plugin will match any data it gets from the web apps you visit against a trusted model built previously. I think that makes sense. Jeff, I think that the project is very interesting, but I doubt that you will be able to force it on developers. WebApp firewalls are such a hot topic at the moment simply because you don't have to deal with developers. They are not perfect, but they provide the transparency that satisfies most people. My proposal for this Firefox extension aims to do for the client-side what webapp firewalls do for the server-side. No more than that - a simple, elegant, yet effective solution.