Cross-platform JavaScript testing

Last week I gave a talk at the London Ajax User Group on testing JavaScript software across different platforms. I wanted to share the talk with a wider audience so what follows is an essay version of the talk; it’s what I planned on saying before I got nervous, fluffed my lines and spoke too fast.

I want to start with a quick history lesson. Cast your mind back to 2006. We had a few browsers out in the wild, not as many as are in mainstream use today but enough to give us a headache, especially with IE6 still dominating the stats. The problem we had back then was that all the browsers behaved in slightly (sometimes vastly) different ways and had different scripting APIs. Standards were being slowly rolled out but the overhead of dealing with browser quirks was still very high.

So, around 2005 and 2006 we see two projects that try to fix the situation: Prototype and jQuery. They both aimed to normalize and improve the scripting API across all browsers, so you could be more productive and be more confident that your code would work across the board. They’ve both been really successful and since then we’ve seen a lot more projects that have their own take on how we should tame the browsers; projects like YUI, MooTools and Dojo.

Skip forward five years, and the landscape looks quite different. We’ve got a few more browsers in mainstream use, especially with the mobile web becoming a mainstream platform, but we’ve also got JavaScript being used all over the place. People are finally taking it seriously for server-side work, it’s being embedded in databases, it’s everywhere. Node has kick-started this, but there’s a ton of frameworks based on Rhino.

So what’s the problem this time? Well, it’s not so obvious. These platforms all have different APIs but that’s largely because they’re used for different things; you probably don’t care that your web front-end won’t run on CouchDB. But one of the promises we’ve heard for years and years about server-side JavaScript is that it’ll amplify code reuse. You’ll be able to share business logic between the client and the server. Remember ‘write once, run anywhere’? We were actually going to make that work.

But what we’re actually seeing is needless specialization. Application frameworks for Node, testing libraries for Node, template languages for Node, API clients for Node. I don’t mean to pick on Node; it’s been the same with jQuery, although fortunately jQuery became so pervasive that depending on it wasn’t so much of a problem. But we’re seeing the same pattern all over again with Node and this seems like a wasted opportunity to me. Node’s popularity is bringing client-side developers onto the server, and it would be great if this helped unify our efforts.

Now I didn’t want to single any particular project out for criticism, because really I’m just as guilty as anybody else. I maintain a project called Faye which bills itself as a pub/sub messaging server for Node. But the truth is, most of the internal logic about managing clients and subscriptions is just pure JavaScript, Node is just used to make this logic accessible over the Internet. You could probably tear the Node part out and run the server in a browser, if you felt like it.

So we have a problem with portability: we’re wasting time reinventing the same wheels on every platform we migrate to. And to solve this, to help us share more code, I don’t think we need to normalize APIs like we did in the browser. Variety and innovation are good things, and people should have some choice about which platforms they target. But for those of us who want to write cross-platform code, I think we’re going to need testing tools that work everywhere.

Now, some back story. Since 2007 I’ve been working on this project called JS.Class. It’s an object system for JavaScript that’s based on Ruby, so it gives you classes and mixins and a few of Ruby’s object methods. It comes with a class library of common object-oriented idioms and data structures, and it tries to establish some conventions for things like how to compare objects for equality, how to sort them, store them in hash tables, and what-have-you.

OrderedHash = new JS.Class('OrderedHash', {
  include: JS.Enumerable,

  initialize: function() {
    // ...
  },

  forEach: function(callback, context) {
    // ...
  }
})
It lets you make a class like this. Say I want a new data structure, which we’ll call an ordered hash. I can create it, add the Enumerable methods to it, give an initializer, tell the Enumerable module how to iterate over it, and add other methods. Pretty standard stuff.

Then some time around 2009, I added JS.Packages. It’s a package manager, and the aim is to separate out the logic of how to load code and dependencies from the code itself. So you say: I’ve got this file /lib/ordered_hash.js, it defines OrderedHash, and it depends on JS.Class and JS.Enumerable. You can use local files or files off other domains. Then when you need to use an object you just require() it and JS.Packages will make sure it’s loaded, then run your code.

JS.Packages(function() { with(this) {
  file('/lib/ordered_hash.js')
      .provides('OrderedHash')
      .requires('JS.Class', 'JS.Enumerable')

  file('...')  // e.g. a copy of jQuery, local or on another domain
      .provides('jQuery', '$')
}})

JS.require('jQuery', 'OrderedHash', function() {
  var links = $('a'),
      hash  = new OrderedHash()
  // ...
})

The reason it focuses on object names rather than file paths is because I also wanted to use it to load code from libraries with their own loaders. For example, Google has this thing where you include a seed file, and then use the google.load() function to load more components. Using JS.Packages, you can have custom loader functions that bridge to other platforms’ loading systems, so I can use the same system to load all the objects I want to use. This abstraction also means I can use the same configuration to load code on non-browser platforms; JS.Packages can figure out how to load external files based on the environment.

JS.Packages(function() { with(this) {
  loader(function(onload) {
    google.load('maps', '2', {callback: onload})
  })
      .provides('google.maps')
}})

JS.require('google.maps', function() {
  var node = document.getElementById("map"),
      map  = new google.maps.Map2(node)
  // ...
})

Finally there’s this autoload() function, which lets you say if I require an object whose name matches this regex, try to load it from this directory. It’ll turn the object name into a path and then try to load that file for you. You can also use the matches from the regex to generate a dependency, for example if I’m testing and I load the TwitterSpec, I probably want to load the Twitter class as well, and I can use the match from the regex to specify that.

JS.Packages(function() { with(this) {
  autoload(/^(.*)Spec$/, {
           from: 'test/specs',
           require: '$1' })

  // e.g. TwitterSpec
  //      defined in test/specs/twitter_spec.js
  //      requires Twitter
}})
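As a rough sketch of the mapping this implies — a hypothetical helper, not JS.Packages’ actual implementation — CamelCase object names become underscored filenames, and the regex capture supplies the dependency name:

```javascript
// Given an object name, a matching pattern and a directory, work out which
// file to load and which object the match says we also need.
function autoloadPath(name, pattern, dir) {
  var match = pattern.exec(name);
  if (!match) return null;
  // "TwitterSpec" -> "twitter_spec"
  var file = name.replace(/([a-z])([A-Z])/g, '$1_$2').toLowerCase();
  return {path: dir + '/' + file + '.js', requires: match[1]};
}
```

So `autoloadPath('TwitterSpec', /^(.*)Spec$/, 'test/specs')` yields the path `test/specs/twitter_spec.js` and the dependency `Twitter`, just as the comment above describes.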

So I’d been using this for months and all was good with the world until this landed in my inbox:

Does it work on Node?

And it turned out that it didn’t. There are some environment differences in Node that meant that JS.Class wouldn’t run. I patched them up and had a play around with it and everything looked okay. But still, all I could legitimately say at the time was, “I don’t know.” All my tests at the time were written with Scriptaculous, which I loved because it’s really simple to get going with, but it meant my tests were confined to the browser. I needed something new.

So I had a look around to see what was out there, and there are a few really nice frameworks around that work on a few platforms. But none of them seemed to Just Work out of the box on all the platforms people were asking me to support. A few of them you can hack to override their platform bindings but it’s not a great first-time experience. So I did what any self-respecting nerd with a GitHub account does, and wrote a whole new testing library.

The next version of JS.Class will ship with a package called JS.Test. It’s a testing library, and it looks pretty much like most other testing libraries you’ve seen, but with a few explicit goals. First, it should run everywhere, without modification or configuration. I shouldn’t have to tell it how to load files or print results, it should figure that out based on the environment. It should remove as much boilerplate as possible, because setting up a new test suite is always a drag and I constantly forget how to do it. And, it should hide any platform differences – you should just be able to use a single API to write, load and execute your tests across all supported platforms.

Now rather than pore over the API details, which aren’t that interesting since they’re similar to stuff you’ve already seen, I thought I’d take you through an example. We’re going to build a little Twitter API client that works in Node and in web browsers.

Twitter has a search API, which you don’t need to sign up or do any OAuth leg-work to use, so it’s a great place to get started. It looks like this: you make a request with your query and you get back a JSON document with some tweets matching the search.

$ curl ''

{
    "results": [{
        "id": 23843942428577792,
        "created_at": "Sat, 08 Jan 2011 20:50:13 +0000",
        "from_user_id": 4393058,
        "from_user": "extralogical",
        "to_user_id": 86308,
        "to_user": "jcoglan",
        "text": "@jcoglan I need to write some JS testing code..."
    }, ...]
}

Now, let’s say I want to access this with a JavaScript API. I make a new client, tell it to search for something, then process the results with a callback function. This is what we’re going to build.

var client = new Twitter()'@jcoglan', function(tweets) {
  // tweets == [{to_user: 'jcoglan', ...}, ...]
})

So first we need to set up a project structure for this. JS.Test doesn’t require any particular file layout, this is just how I’ve settled on doing things. You see we have a source directory that contains the project’s source code, we have a vendor directory where I’ve installed JS.Class, and we have a test directory. This test directory needs a few items in it.

    source/
        twitter.js            : source code
    test/
        browser.html          : runs in browsers
        console.js            : runs on command line
        run.js                : loads and runs all tests
        specs/
            twitter_spec.js   : test definitions
    vendor/
        jsclass/
            core.js           |
            loader.js         | -> Framework code
            test.js           |
            (etc)             |

browser.html is what we’ll load up in a web browser to run the tests, and console.js is a script we’ll run in the terminal. They both do exactly the same thing, which is to load the JS.Class seed file, and load the test runner. The test runner, that’s run.js, is a cross-platform script that loads the project’s code, loads all the tests, and runs them. The tests themselves live in the specs directory, one spec file for each source file. Again, this is just convention, you can change this easily once you’re familiar with the setup.

I’m going to start with the console setup first, because it’s slightly simpler. As I said the job of console.js is to load the JS.Class seed file and then load the test runner. Here we’re using Node’s require() function to load the files, some platforms use load() but it’s easy to detect what’s available and pick the right one.

// test/console.js

JSCLASS_PATH = 'vendor/jsclass'
require('../' + JSCLASS_PATH + '/loader')
require('./run')

So with that done we move onto the runner file; this is the script that’s used in all environments to load the project and execute its tests. Notice we can do away with require() vs. load() here: since we’ve got JS.Packages loaded, we can use it to load everything. We start with an autoload() statement to tell it where to find Spec objects, then we tell it where our source code is: the file source/twitter.js provides Twitter and requires JS.Class. Finally we load JS.Test, load all our specs and tell JS.Test to run the test suite.

// test/run.js

JS.Packages(function() { with(this) {
  autoload(/^(.*)Spec$/, {
           from: 'test/specs',
           require: '$1' })

  file('source/twitter.js')
      .provides('Twitter')
      .requires('JS.Class')
}})

JS.require('JS.Test', function() {
  JS.require('TwitterSpec',
             function() { JS.Test.autorun() })
})

Now onto the spec itself. In our spec file, we create a spec. This is almost the same API as Jasmine, or JSpec, or any number of things, so it should be familiar. We have a before block that creates a new Twitter client, then a test for it: when I call search() with "@jcoglan", the client should yield tweets mentioning me. That resume() business is there because we’re running an asynchronous test; JS.Test passes this function to the test block, and we call it when the test is ready to continue, passing any assertions we want to make. If you leave the resume argument out, JS.Test assumes it’s a synchronous test and won’t suspend running when the outer test block completes.

// test/specs/twitter_spec.js

TwitterSpec = JS.Test.describe("Twitter", function() {
  before(function() {
    this.client = new Twitter()
  })

  it("yields matching tweets", function(resume) {"@jcoglan", function(tweets) {
      resume(function() {
        assertEqual( "jcoglan", tweets[0].to_user )
      })
    })
  })
})

Let’s go and run our test suite. We immediately get a helpful error message from Node: it couldn’t find our source code.

$ node test/console.js

Error: Cannot find module './source/twitter'

Great, let’s go and create a blank file to get rid of this error. Now we’ve created the file, Node finds it but JS.Packages starts complaining.

$ mkdir source
$ touch source/twitter.js
$ node test/console.js

Error: Expected package at ./source/twitter.js
       to define Twitter

You said twitter.js would define the Twitter class, but it doesn’t! Better go and add that.

// source/twitter.js

Twitter = new JS.Class('Twitter')

Now we’ve made the package loader happy and we start to get some meaningful output.

$ node test/console.js

Loaded suite Twitter
Finished in 0.072 seconds.

1) Error:
test: Twitter yields matching tweets:
TypeError: Object #<Twitter> has no method 'search'

1 tests, 0 assertions, 0 failures, 1 errors

We get an error because our Twitter class doesn’t have the method we need. So, let’s go and implement it. I won’t go through the whole TDD cycle here, let’s just assume I prepared some Node code earlier that does what we want.

// source/twitter.js

Twitter = new JS.Class('Twitter', {
  search: function(query, callback) {
    var http   = require('http'),
        host   = '',
        client = http.createClient(80, host)

    var request = client.request('GET',
                  '/search.json?q=' + query,
                  {host: host})

    request.addListener('response', function(response) {
      var data = ''
      response.addListener('data', function(c) { data += c })
      response.addListener('end', function() {
        var tweets = JSON.parse(data).results
        callback(tweets)
      })
    })
    request.end()
  }
})

And run the test again:

$ node test/console.js 

Loaded suite Twitter
Finished in 2.684 seconds.

1 tests, 1 assertions, 0 failures, 0 errors

We’re all good! Except… this won’t run in a browser. All the network code we wrote only works on Node, and we want this to work client-side too. We’re going to need tests for this. Thankfully, JS.Test makes this easy: all we need is a web page that, just like our terminal script, loads the JS.Class seed file, and loads the test runner. All the test code we wrote earlier will work just fine in the browser.

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8">
    <title>Twitter test suite</title>
    <script src="../vendor/jsclass/loader.js"></script>
    <script src="../test/run.js"></script>
</head>
<body></body>
</html>

If we load this up in a browser, we see something like this. “ReferenceError: require is not defined”. Okay, it’s hitting our Node implementation where we load the Node HTTP library, we want to avoid that. So what do we do? Easy, just detect whether we’re in a DOM environment and switch to using JSONP to talk to Twitter instead of Node’s HTTP libraries. Again, here’s one I made earlier, this is just the usual JSONP hackery, nothing fancy, and we’ve moved the Node version into the nodeSearch() method that will be called if we’re not in a DOM environment.

Twitter = new JS.Class('Twitter', {
  search: function(query, callback) {
    if (typeof document === 'object')
      this.jsonpSearch(query, callback)
      this.nodeSearch(query, callback)
  },

  jsonpSearch: function(query, callback) {
    var script  = document.createElement('script')
    script.type = 'text/javascript'
    script.src  = '' +
                  'callback=__twitterCB__&' +
                  'q=' + query

    window.__twitterCB__ = function(tweets) {
      window.__twitterCB__ = undefined
      callback(tweets.results)
    }
    var head = document.getElementsByTagName('head')[0]
    head.appendChild(script)
  },

  nodeSearch: function(query, callback) {
    // ...
  }
})

Reload the page, and fantastic – we’ve got a green build. Quick side-note, in the browser UI, JS.Test will print out a tree of all your nested context blocks that you can browse, which can be more useful than the terminal UI for finding errors. It’ll also notify TestSwarm if that’s where you’re running your tests, so you can use it for continuous integration.

The final piece of the process is to refactor. We’ve got a bunch of networking code gunking up our API client. Maybe we should move the networking code into its own module that’s a generic interface for making HTTP calls in any environment we support. Then we could call it like this:

// source/twitter.js

Twitter = new JS.Class('Twitter', {
  search: function(query, callback) {
    var resource = '' +
                   '/search.json?q=' + query
    Twitter.Net.getJSON(resource, callback)
  }
})

Because the logic for how to do networking is now isolated in one module, it’s easier for a user to replace if they want to make the Twitter client run in another environment: they just have to replace the implementation of Twitter.Net.getJSON() with the HTTP code for their platform. It also means that the network is easier to stub out, since we don’t want to rely on the real Internet during testing:

// test/specs/twitter_spec.js

TwitterSpec = JS.Test.describe("Twitter", function() {
  before(function() {
    this.client = new Twitter()
    stub(Twitter.Net, "getJSON")
        .yields([{to_user: "jcoglan"}])
  })

  it("yields matching tweets", function() {
    // ...
  })
})

This approach also means we can run the test in any platform because we don’t need to actually talk to the network. We’d then write unit tests for the Twitter.Net module to make sure it worked on the right platforms.
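To make the porting story concrete, here’s a hypothetical sketch — plain objects, not the real client — of what swapping the binding looks like. Twitter.Net and getJSON are the names assumed in the refactoring above:

```javascript
// The client's only network dependency is this one-method object.
var Twitter = { Net: {} };

// Default binding: the platform-specific HTTP code would live here.
Twitter.Net.getJSON = function(url, callback) {
  throw new Error('no network binding for this platform');
};

// A port to a new environment (or a test suite) replaces just this function;
// nothing else in the client has to change.
Twitter.Net.getJSON = function(url, callback) {
  callback([{to_user: 'jcoglan'}]);  // canned response, no real network
};
```

Everything above the network seam stays identical across platforms; only the last assignment differs per environment.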

So what I wanted to get across here isn’t that you should all go and use my code, but try to follow some of the same patterns. If we want highly reusable software we’re going to need to test it everywhere. If you’re building libraries to support your work, and it looks like the abstraction would be useful in other contexts, consider making it available to users of other platforms. If you do have platform-specific code, try to isolate it in one place and hide it behind abstractions. Remember how in Faye I’ve isolated the Node bindings to make it easy to run and test the internal components in other environments. Make it easy to replace the platform bindings, so if someone wants to run it somewhere you didn’t expect it’s easy to swap in new bindings.

And finally, write usage documentation. If someone’s trying to get your code running somewhere new, step-by-step tutorials are great for showing people the ropes and getting them comfortable with how your stuff works, so they feel more confident hacking it to their needs. You’ll be amazed what people do with your code when you make it easy to use and write nice docs for it.

Talk: Writing a language in 15 minutes

I gave a talk at London Ruby User Group yesterday, based on the work I’ve been doing on Heist, my Scheme interpreter project. I wrote the core of a basic Scheme interpreter in about 15 minutes as a live-coded demo (well, kind of – the coding was pre-recorded so I could focus on talking), which seemed to go down pretty well. If you missed it (or if you were there and want to watch it again in slow motion), here’s the slides and the video (just code, no narrative (sorry)). (Side note: I think Lisp may be affecting my writing style.)

The slides first: They are S5-format HTML, introducing the Scheme language features I implement during the talk. The video shown below is available at higher resolution from Vimeo.

Scheme interpreter in 15 minutes from James Coglan on Vimeo.

Video is also available from Skills Matter if you want the narrative. The code’s not really visible in this version so combine the audio from this with the above video and you should just about piece things together.

Some relevant links:

  • Heist is my main Scheme interpreter project. It has macros, tail recursion, continuations and a reasonable chunk of the R5RS spec, and its REPL auto-indents your code. It’s about 1000 lines of well-commented Ruby with a few hundred lines of Scheme, including macros for most of the syntax.
  • Stickup is a tiny interpreter for a small subset of Scheme, about 150 lines long. Closer to what I present in the talk, and easier to get your teeth into.
  • Treetop is what I use to generate parsers, it’s super-simple to use and lets you write a parser in no time at all.

Thanks to everyone who came along and had nice things to say about the talks, especially to whoever was telling me about the trie data structure; Heist’s tab-completion code is now much prettier.

PackR won’t touch your $supers

Another quick update: PackR received an update today that means that when you use its :shrink_vars mode, it won’t minify any variables called $super. In Prototype, $super is used to implement inheritance and your class definitions will break if you change its name.

I didn’t really want to make PackR inconsistent with Dean’s original, but without this feature Prototype code is essentially unpackable beyond basic whitespace-stripping, and given that Ruby/Rails users are likely to be using Prototype, I figured it was worth the addition. It also lets you specify your own protected variable names — I’ll be updating the docs soon.

To update, just gem install packr or

ruby script/plugin install

Self-currying JavaScript functions

I’m telling you, this language keeps surprising me. You’ll need Prototype for this one.

Function.prototype.toSelfCurrying = function(n) {
  n = n || this.length;
  var method = this;
  return function() {
    if (arguments.length >= n) return method.apply(this, arguments);
    return method.curry.apply(arguments.callee, arguments);
  };
};

Make a simple function:

var adder = function(a,b,c) {
  return a + b + c;
};

And curry away:

var add = adder.toSelfCurrying();

add(1)(2)(3)  // --> 6
add(7,8)(23)  // --> 38

Every call to add returns a curried version of add, until the required number of arguments have been supplied. When all arguments are present, you get a return value.

Using ChainCollector to respond to Ajax calls

Saq made a couple of comments on my ChainCollector article about how to queue up functions to respond to Ajax calls, and whether I could write something up to shed a bit of light on how this might be done. Today, I’m going to implement some methods that allow us to GET from/POST to a URL, then do some basic things with the response using very sentence-like code. Specifically, I’m going to end up with:

Ajax.GET('/some/url').insertInto('#content').evalScripts();

(I hope you’re starting to see why I wrote ChainCollector: code clarity is something that’s very important to me, especially when you’re working in a team that covers the whole web software stack and many of them don’t know JavaScript.) I’m not sure if this exactly answers Saq’s problem, but I hope it will illustrate how you might begin using ChainCollector to solve issues like this.

Okay, the first thing we’re going to need is a new class that provides us with methods to use in the chain above. Let’s call it ChainableRequest. I’m using Prototype today, but hopefully non-Prototype users will be able to follow along.

Ajax.ChainableRequest = Class.create();

Object.extend(Ajax.ChainableRequest.prototype, {
  initialize: function(verb, url) {
    this.chain = new ChainCollector(this);
    this.request = new Ajax.Request(url, {
      method: verb,
      onComplete: function(transport) {
        this.response = transport;;
      }.bind(this)
    });
  }
});

This class takes two arguments to initialize its instances: an HTTP verb (GET or POST) and a URL. It sets up a new ChainCollector (which will inherit any methods we add to ChainableRequest) and sets off an Ajax request to the URL. It registers a callback that tells the request to fire the chain when the request completes.

So, onto the methods that we want to add to the chain. We need a method that inserts the response into some elements on the page. I want this method to accept a CSS selector, an element reference, or an array of element references. For each element found, it strips any scripts out of the response and inserts the remainder into the element. Note how each method returns the ChainableRequest object for chaining purposes.

Object.extend(Ajax.ChainableRequest.prototype, {
  insertInto: function(elements) {
    if (!this.response) return this;
    if (typeof elements == 'string') elements = $$(elements);
    if (!(elements instanceof Array)) elements = [elements];
    elements.each(function(element) {
      element.innerHTML = this.response.responseText.stripScripts();
    }.bind(this));
    return this;
  }
});

And, we need a method to evaluate the scripts in the response. You’ll see that Prototype uses a setTimeout in some cases to do this, in case the document hasn’t finished updating the DOM in response to an innerHTML change.

Object.extend(Ajax.ChainableRequest.prototype, {
  evalScripts: function() {
    if (!this.response) return this;
    setTimeout(function() {
      this.response.responseText.evalScripts();
    }.bind(this), 10);
    return this;
  }
});

The final piece of the puzzle is to create methods for any HTTP verbs you want to use. Note that I’m using capitals because that’s the convention with HTTP verbs, and also because you might want to go on and implement PUT and DELETE (supported by YUI), and delete is a reserved word in JavaScript. Each verb method should create a new ChainableRequest, then return its chain property so you can chain methods into the onComplete callback.

$w('GET POST').each(function(verb) {
  Ajax[verb] = function(url) {
    var req = new Ajax.ChainableRequest(verb, url);
    return req.chain;
  };
});

// Remember to add the required methods to ChainCollector

And that just about wraps it up in terms of getting our initial code sentence working. You’d probably want something a lot more flexible than this in real life, but this covers some common uses for Ajax calls that can easily be turned into sentence structures. You can download the ChainableRequest JavaScript class to save yourself copy-pasting all the code from this article.

Methodize and functionize

Though the API docs seem to make no mention of it, there is this little gem sitting in Prototype 1.6.0:

Function.prototype.methodize = function() {
  if (this._methodized) return this._methodized;
  var __method = this;
  return this._methodized = function() {
    return __method.apply(null, [this].concat($A(arguments)));
  };
};

What that does is it returns a new function that calls the original function with its first argument set to whatever the current meaning of this is. This is how they now bind element methods to DOM objects:

// This takes an element as an argument
Element.visible = function(element) {
  return $(element).style.display != 'none';
};

// Convert it so it can be called as a method on elements
HTMLElement.prototype.visible = Element.visible.methodize();

This bit of trickery means you can call myDiv.visible() rather than Element.visible(myDiv). We can make this work in reverse as well…

Function.prototype.functionize = function() {
  if (this._functionized) return this._functionized;
  var __method = this;
  return this._functionized = function() {
    var args = $A(arguments);
    return __method.apply(args.shift(), args);
  };
};

You could, for example use this if you have a particular iterator you keep reusing…

// If I keep doing this...
var safe = { return s.stripScripts(); });

// I could do this instead...
var stripScripts = "".stripScripts.functionize();
var safe =;

Et voilà! Much more legible code. You could equally do the above in a similar way with Reiterate, but I digress. The point is that you can do some pretty tricksy stuff with functions in JavaScript, and I’d guess that many web devs (myself included) are mostly unaware of the possibilities. Hopefully the new Prototype release will help folks expand their JS knowledge ever so slightly.

Reiterate 1.3

Tiny update: after adding 16 characters (“this._object || ”), Reiterate is now compatible with Prototype 1.6.0’s revised Hash API. Also, the gzipped copy is now even smaller, thanks to a different compression strategy. Essentially, when using Packer (or PackR for that matter), using ‘shrink variables’ plus gzip compression will result in the smallest and fastest-to-execute files. Base-62 is there in case you can’t use gzip for some reason: it creates smaller files than variable-shrinking on its own.

Where’s my inheritance?

Update: from what I can gather from going through the source code, $super in Prototype actually refers to the method in the parent class, rather than the old method in the current class. My point about the other libraries mentioned below stands, though. Also, my apologies to Dan, whom I cornered at @media Ajax and quizzed about Prototype’s stance on this issue.

There is one design decision in JS.Class that sets it aside from all the other inheritance libraries I looked at (Prototype, Base and Inheritance). With all those libraries, super means “the previous version of this method in this class”, rather than “the current version of this method in the parent class”. Now, if you’ve just built a class by inheriting from a parent class and then overwriting some of its methods, those two definitions amount to the same thing.

But JavaScript is a dynamic language, and I can add new methods to any class whenever I want. What if I want to replace a non-super-calling method with a method that does call super? More often than not, I don’t care how the method used to work (if I do, I can easily store a reference to it myself) but I do care about how the parent class’ method works. The aforementioned libraries leave you in the lurch here.

Now, consider the following inheritance situation:

var Car = JS.Class({});
var Ford = JS.Class(Car);
var ModelT = JS.Class(Ford);

Car.method('drive', function() {
    return 'Driving my Car';
});

ModelT.method('drive', function() {
    return this._super() + ', a Model T';
});

var a = new ModelT().drive();

Ford.method('drive', function() {
    return 'Driving my Ford';
});

var b = new ModelT().drive();

Car defines a drive method, which is inherited by Ford and ModelT through the prototype chain. Then ModelT defines its own drive function, which uses super. Clearly, this should refer to the drive method inherited from Car. But then, Ford defines a drive method. This will not be inherited by ModelT — it now has its own drive method — but the question arises about what super within ModelT#drive should refer to.

The prevailing wisdom with the libraries mentioned above is that it ought to refer to the current class’ previous implementation of the method, rather than the parent class’ current implementation. That is, super refers to the overridden method at the time the new method is defined, rather than the parent’s method at the time the new method is called.

Personally, I think this is madness. If I’m trying to debug some JavaScript and see the word this._super, the very first thing I’m going to do is inspect this.klass.superclass at that point in the code. I’m sure as hell not going to start wading through a large codebase (and if you need an inheritance model, I’m assuming you have a large codebase) trying to find out the order in which a particular method was overridden. Both semantically and practically, I think the “current method in superclass” method is superior to the “previous method in current class” one. In the above example, a contains "Driving my Car, a Model T" while b contains "Driving my Ford, a Model T".

It seems that a fair number of people disagree with this policy though, including the Ruby language (which JS.Class is modelled on), which actually inspects included Modules for super methods to use before working its way up through superclasses. I’d be really interested to know why this is, and whether there are compelling reasons not to do things my way. I might add support for the other libraries’ way of working if anyone can persuade me…

JS.Class updates

Yes, it only came out a couple of days ago, but it’s a 0.9.x release, so it’s still being developed. If you downloaded JS.Class over the last couple of days, I strongly recommend you upgrade to the latest version.

First off, it improves performance substantially over the initial release by inspecting method definitions to find out if they use this._super. If they don’t, they can be inserted straight into the class’ prototype without being wrapped in a _super-generating function. I believe Prototype and Base do something similar, though Inheritance seems not to.
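The detection step can be sketched like this; this is an assumed technique based on the description above, not necessarily JS.Class’s exact code. It relies on Function#toString returning the function’s source, which held in the browsers of the day but is not guaranteed by every engine:

```javascript
// Serialise the function and look for a reference to _super.
// Methods that never mention it can skip the wrapping step entirely.
function usesSuper(fn) {
  return /\b_super\b/.test(fn.toString());
}

var plain = function() { return 42; };
var fancy = function() { return this._super() + 1; };

usesSuper(plain);  // false: can go straight onto the prototype
usesSuper(fancy);  // true: needs the _super-generating wrapper
```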

Second, it fixes some subtle bugs to do with _super being reassigned when one _super-using function calls another _super-using function. This was a pretty basic oversight and was easily fixed.

Finally, it’s been much more thoroughly tested, the design has been tightened up, and instance method inheritance works better by using prototype chaining rather than brute-force method addition. That means you don’t have to use MyClass.method('name', func) if you don’t want to; you can just say MyClass.prototype.name = func and JavaScript’s own inheritance model will take care of the rest. (Although, if func uses this._super, you still need to add it using method.) Class method inheritance is still a bit of a pain, and you need to use MyClass.classMethod('name', func) if you want the method to be inherited.
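The prototype-chaining point is just ordinary JavaScript at work; here is a sketch with hypothetical class names, using plain constructors rather than JS.Class’s API:

```javascript
// A parent and child linked through the prototype chain
function MyClass() {}
function MySubclass() {}
MySubclass.prototype = Object.create(MyClass.prototype);

// A plain (non-_super-using) method assigned directly to the
// prototype is visible to every subclass and instance automatically.
MyClass.prototype.greet = function() { return 'hello'; };

var result = new MySubclass().greet();  // 'hello', inherited for free
```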

Also, the docs have been augmented — I especially like the bit about module design and how closely you can mimic Ruby’s inheritance model using this library. As far as I know, this is the only JavaScript inheritance model in which you can use super without passing arguments back in, just like in Ruby.

Announcement: JS.Class

After mentioning Prototype’s inheritance model the other day, one rather important thing struck me about it. I was going to borrow their model for some of my own work when I realised that, if you use Prototype’s $super feature, your code will break if you compress it using a variable-shrinking algorithm (all the decent compressors do this). Prototype inspects the argument names of your functions and determines whether to use Function#wrap to pass in a reference to the overridden function.
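To see why variable shrinking is fatal here, consider this sketch of argument-name inspection (my own illustration, not Prototype’s actual source): the library reads the first argument name out of the function’s serialised source to decide whether the method wants the overridden function passed in.

```javascript
// Extract the name of a function's first argument from its source
function firstArgName(fn) {
  var match = fn.toString().match(/^[^(]*\(([^),\s]*)/);
  return match ? match[1] : '';
}

// As you wrote it: the convention is detectable
var original = function($super) { return $super() + '!'; };
firstArgName(original);  // '$super', so the wrapping kicks in

// After a compressor renames local variables:
var shrunk = function(a) { return a() + '!'; };
firstArgName(shrunk);    // 'a', and the $super convention is lost
```

Since a variable-shrinking compressor is free to rename $super like any other local, the inspection silently stops matching and the overridden method is never passed in.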

So, what to do? We use YUI at work, but their inheritance model is so cumbersome as to be almost totally useless. I needed something better. Taking a few leaves out of Prototype’s model, and out of Alex Arnell’s Inheritance.js, I’ve come up with something that does just what I want. Basic features:

  • Simple, elegant single-inheritance model, including inheritance of class (static) methods
  • Clean, intuitive access to the class hierarchy from within instance methods, as well as through class interfaces
  • Automated inheritance: adding class/instance methods to a class after its initial definition instantly updates all its subclasses and their instances
  • Mixins, like in Ruby (this is essentially Ruby-ish syntax masking a trivial JavaScript feature)
  • super, with arguments optional
  • is_a support (JavaScript trivially supports has-a relationships itself)
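The “trivial JavaScript feature” behind mixins is just copying a module’s properties onto a prototype. A minimal sketch, with hypothetical names rather than JS.Class’s API:

```javascript
// A module: a plain object holding shared behaviour
var Drivable = {
  honk: function() { return 'beep'; }
};

// Mix the module's methods into a class' prototype
function mixin(klass, module) {
  for (var name in module) klass.prototype[name] = module[name];
}

function Bike() {}
mixin(Bike, Drivable);

var honked = new Bike().honk();  // 'beep'
```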

Although this takes much of its syntax from Ruby (that being the classical inheritance language I know best), it should be intelligible to users of other classical OO languages such as Java (we are a Java shop where I work).

So, without further ado, go check out JS.Class.