On the role of JS.Packages

There’s been much talk lately of JavaScript loading libraries, and of how everybody and their dog is reinventing the same wheel. Around three years ago I began writing one such library, JS.Packages. It is the module loader from the JS.Class project, but it can easily be used standalone by including the package.js file in your project. I wrote it to solve a particular set of problems I had at the time and I still like using it, but since then there’s been an explosion of new loader libraries. Here are a few that spring to mind:

  • LABjs
  • RequireJS
  • HeadJS
  • Yepnope
  • $script.js

Some of these predate JS.Packages but a lot are newer. I wanted to understand how these tools differ from each other and whether any of them do a better job at solving my set of problems, and so I started putting a set of examples together showing how to load a JavaScript component with reasonably complex dependencies using each of the above libraries. Obviously this comparison will be biased, but I’ve tried to be as objective as I can be. My experience of these tools is based on reading their documentation and getting a working example going, and if I’ve misrepresented anybody’s work I will happily publish corrections.

The component I chose to write examples against is JS.Test. For the sake of comparison I duplicated the module definitions for all the components JS.Test depends on. JS.Class is a modular library and the testing framework uses quite a few bits of it, as we’ll see. The dependencies are configured like so:

JS.Packages(function() { with(this) {
    file('./lib/core.js')
            .provides('JS.Module', 'JS.Class', 'JS.Singleton');
    
    file('./lib/test.js')
            .provides('JS.Test')
            .requires('JS.Module', 'JS.Class', 'JS.Console', 'JS.DOM',
                      'JS.Enumerable', 'JS.SortedSet', 'JS.Comparable',
                      'JS.StackTrace')
            .styling('./lib/assets/testui.css');
    
    file('./lib/dom.js')
            .provides('JS.DOM')
            .requires('JS.Class');
    
    file('./lib/console.js')
            .provides('JS.Console')
            .requires('JS.Module', 'JS.Enumerable');

    file('./lib/comparable.js')
            .provides('JS.Comparable')
            .requires('JS.Module');
    
    file('./lib/enumerable.js')
            .provides('JS.Enumerable')
            .requires('JS.Module', 'JS.Class');
    
    file('./lib/hash.js')
            .provides('JS.Hash', 'JS.OrderedHash')
            .requires('JS.Class', 'JS.Enumerable', 'JS.Comparable');
    
    file('./lib/set.js')
            .provides('JS.Set', 'JS.HashSet', 'JS.OrderedSet', 'JS.SortedSet')
            .requires('JS.Class', 'JS.Enumerable')
            .uses(    'JS.Hash');
    
    file('./lib/observable.js')
            .provides('JS.Observable')
            .requires('JS.Module');
    
    file('./lib/stack_trace.js')
            .provides('JS.StackTrace')
            .requires('JS.Module', 'JS.Singleton', 'JS.Observable',
                      'JS.Enumerable', 'JS.Console');
}});

There are ten files here, and we have lists of the JavaScript objects that each file provides and depends on. Note that this does not specify a load order explicitly: when the user wants to use JS.Test, they simply call JS.require('JS.Test') and JS.Packages figures out the most efficient path to download and execute the missing components in the correct order. This is all the code that is needed at the point of use:

JS.require('JS.Test', function() {
  // use the JS.Test library
});
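
The order resolution described above amounts to a depth-first walk of the dependency graph: a file’s dependencies are emitted before the file itself. As a rough illustration of the idea, not JS.Packages’ actual implementation, and with shortened hypothetical file names:

```javascript
// Hypothetical miniature of the resolution step: given a map from each
// file to the files it requires, produce a safe execution order for one
// requested entry point.
function loadOrder(deps, entry) {
  var order = [], seen = {};
  (function visit(file) {
    if (seen[file]) return;
    seen[file] = true;
    (deps[file] || []).forEach(visit);  // dependencies execute first
    order.push(file);
  })(entry);
  return order;
}

var deps = {
  'test.js':       ['core.js', 'console.js', 'set.js'],
  'console.js':    ['core.js', 'enumerable.js'],
  'enumerable.js': ['core.js'],
  'set.js':        ['core.js', 'enumerable.js'],
  'core.js':       []
};

loadOrder(deps, 'test.js');
// core.js comes first and test.js last; every file appears after
// all of its dependencies
```

A real loader additionally skips files whose objects already exist and downloads independent branches of the graph in parallel, but the ordering constraint is the same.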

Before I go any further I should explain the design goals behind JS.Packages so we can see how other tools measure up. I need a loader library to have the following properties:

  • Should be able to load absolutely any JavaScript from any domain, in the browser and on a range of server-side platforms
  • Should be trivial to just state which objects you want to directly use, and the library should deal with dependencies
  • Should not waste resources on downloading packages that are already loaded
  • Should be lazy, and should not download any components until they are actually needed

The configuration given above looks verbose, but the information it gives the package loader makes all the above properties possible. It’s also no more information than you’d supply to another dependency manager like Dojo; the only difference is that it lives outside the source files you want to manage.

The first requirement, that we be able to load absolutely any JavaScript object, rules out the Dojo package system, the YUI module system and anything that requires CommonJS. Not that these aren’t good tools – and a lot of good tooling has indeed sprung up around them – but they require the source you’re loading to explicitly use them. For example using the Dojo system requires source files to use the dojo.provide and dojo.require functions, and CommonJS requires use of the require and exports interfaces. When doing web development I frequently want to use third-party code from domains I do not control, so requiring certain source code conventions won’t work. For this reason, I keep dependency information separate from my source code. This lets me load any code I like and keeps my own code more portable, and JS.Packages gives me a great way to do this.

The second requirement, that it should be simple to just state which objects you want to use directly without worrying about dependencies, is important. It gives me freedom to refactor my codebase without needing to update lots of different call sites; I update a single config file and all my apps carry on working. This highlights a major difference between JS.Packages and the other libraries out there: basing requirements on object names rather than script URLs gives a level of abstraction that means the package loader can make better decisions about what needs to be loaded, and how it goes about loading things. It also means I can use the common strategy of including a build number in the path to a script to help with CDN caching, and I only need to update one reference to the file. I even have tools that generate my config files from version control.
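
For example, a cache-busting build number only needs to appear in the config file, never at any call site. The path and build number here are hypothetical:

```javascript
JS.Packages(function() { with(this) {
    // Only this one path mentions the build number; every
    // JS.require('JS.Test') call site is unaffected when a
    // new build is deployed.
    file('http://cdn.example.com/build-4021/test.js')
        .provides('JS.Test');
}});
```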

The usage pattern with a lot of loader tools seems to be that you specify which scripts to inject before running a callback. HeadJS just runs everything you tell it to in order, and has the simplest API for doing so:

head.js('./lib/core.js',
        './lib/enumerable.js',
        './lib/dom.js',
        './lib/comparable.js',
        './lib/observable.js',
        './lib/console.js',
        './lib/hash.js',
        './lib/set.js',
        './lib/stack_trace.js',
        './lib/test.js',
        callback );

Yepnope is also quite simple but its API is geared more toward feature detection and is a little lower-level: the callback given fires after every script file loads, so if you’re loading a set of files like this you need to check whether the last thing you want is loaded before continuing:

yepnope({
  load: [ './lib/core.js',
          './lib/enumerable.js',
          './lib/dom.js',
          './lib/comparable.js',
          './lib/observable.js',
          './lib/console.js',
          './lib/hash.js',
          './lib/set.js',
          './lib/stack_trace.js',
          './lib/test.js' ],
  callback: function() {
    if (window.JS && JS.Test) callback();
  }
});

HeadJS and Yepnope download everything in parallel where possible and execute scripts in the order you list them. LABjs and RequireJS allow you to specify where order of execution matters and let everything else run in parallel:

$LAB.script('./lib/core.js').wait()
    .script('./lib/enumerable.js')
    .script('./lib/dom.js')
    .script('./lib/comparable.js')
    .script('./lib/observable.js').wait()
    .script('./lib/console.js')
    .script('./lib/hash.js').wait()
    .script('./lib/set.js')
    .script('./lib/stack_trace.js').wait()
    .script('./lib/test.js')
    .wait(callback);

require(['./lib/core.js'], function() {
  require(['./lib/enumerable.js', './lib/dom.js',
           './lib/comparable.js', './lib/observable.js'], function() {
    require(['./lib/console.js', './lib/hash.js'], function() {
      require(['./lib/set.js', './lib/stack_trace.js'], function() {
        require(['./lib/test.js'], callback);
      });
    });
  });
});

Using these APIs, LABjs will download everything in parallel just like HeadJS and Yepnope, but will block execution where you tell it to so that scripts execute in the right order. If order is unimportant the scripts execute in the order they arrive from the server. RequireJS’s API for managing dependencies means that downstream dependencies are not downloaded until the upstream files have executed. In theory this lengthens download times but I’ve not seen it make a huge difference in my limited set of tests. If performance is critical you should benchmark these libraries against your own codebase.

The third requirement is that scripts should not be downloaded multiple times. If several parts of an application require the same libraries, those dependencies should be downloaded and executed once and the components waiting for them should all be notified. I found that HeadJS, Yepnope, LABjs and RequireJS all fail at this, in that if I had multiple pieces of code requiring the same scripts then the scripts would show up multiple times in Firebug/Chrome’s network tab. Only LABjs and Yepnope seem to execute the loaded scripts multiple times though; HeadJS and RequireJS only executed the files once.
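
A loader can avoid this with a simple registry keyed by URL: the first request triggers the download, and later requests for the same URL just queue their callbacks. A rough sketch of the technique, with illustrative names that are not any of these libraries’ APIs:

```javascript
// Hypothetical download de-duplication: one download per URL, with
// every waiting callback notified once it completes.
var registry = {};  // url -> {done: bool, callbacks: [...]}

function loadOnce(url, download, callback) {
  var entry = registry[url];
  if (entry) {
    // Already requested: run now if finished, otherwise wait
    entry.done ? callback() : entry.callbacks.push(callback);
    return;
  }
  entry = registry[url] = {done: false, callbacks: [callback]};
  download(url, function() {
    entry.done = true;
    entry.callbacks.forEach(function(cb) { cb() });
  });
}
```

With this in place, two components requiring the same script produce one network request and two callback invocations.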

$script fares better than the others at this by giving each script a short name by which you can refer to it. This lets it track which modules have loaded and gives you an API for waiting for modules to be ready so you can manage dependencies:

$script('./lib/core.js', 'core');
$script.ready('core', function() {
  $script('./lib/enumerable.js', 'enumerable');
  $script('./lib/dom.js', 'dom');
  $script('./lib/comparable.js', 'comparable');
  $script('./lib/observable.js', 'observable');
});
$script.ready(['core', 'enumerable'], function() {
  $script('./lib/console.js', 'console');
  $script('./lib/hash.js', 'hash');
});
$script.ready(['core', 'observable', 'enumerable', 'console'], function() {
  $script('./lib/stack_trace.js', 'stack_trace');
});
$script.ready(['core', 'enumerable', 'hash'], function() {
  $script('./lib/set.js', 'set');
});
$script.ready(['core', 'console', 'dom', 'enumerable', 'set', 'comparable', 'stack_trace'], function() {
  $script('./lib/test.js', callback);
});

What I like about this is that it does a better job of managing complex sets of dependencies. Whereas something like LABjs lets you say ‘load X, Y, and Z, then block, then load A and B, then block’, $script lets you compose a tree of dependencies so each download is triggered as soon as its set of dependencies is ready. This is much closer to what JS.Packages does internally, though because you’re still triggering script downloads yourself it’s a little closer to the ‘metal’. It does suffer the same drawback as RequireJS in that you can’t download the whole list in parallel, but again you should benchmark this to find out if it really affects your use case.

My final requirement is that scripts not be downloaded if the APIs they provide already exist. This is different from saying that the same script should not be downloaded twice, and is closer in spirit to what Yepnope does. JS.Packages uses object detection to figure out if it actually needs to download anything to provide the objects you want. This is why working at the object level rather than the script level is such a big win – I can write a shim that provides an implementation of document.evaluate(), and configure it like this:

JS.Packages(function() { with(this) {
    file('./lib/my-doc-evaluate.js').provides('document.evaluate');
}});

Then, when I want to use this interface I just call JS.require('document.evaluate'), and if the browser or some library has already exposed this API JS.Packages will not download my shim, improving the load time. Though $script can stop you downloading the same script twice through its own API, it cannot stop you re-downloading something that was loaded by another mechanism. Using object detection means JS.Packages will never re-download something, regardless of how it was initially loaded.
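
The object detection itself just means resolving a dotted name against the global object and checking whether anything is there. A minimal sketch of the idea, not JS.Packages’ actual code:

```javascript
// Hypothetical object detection: resolve a dotted path like
// 'document.evaluate' against a root object. A loader only needs to
// download the providing file when this comes back false.
function objectExists(path, root) {
  return path.split('.').reduce(function(obj, key) {
    return obj == null ? undefined : obj[key];
  }, root) !== undefined;
}

// In the browser the root would be window; here a stand-in object
// shows the two outcomes.
var env = {document: {evaluate: function() {}}};
objectExists('document.evaluate', env);     // true: skip the download
objectExists('document.createRange', env);  // false: fetch the shim
```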

This laziness aspect highlights an important difference between JS.Packages and the other loaders. With the others, you deal with dependencies as part of requiring the scripts you need. This both complicates your code, since you have to specify more than just the objects you want to work with, and stops the loader from being lazy and downloading only what is actually needed. You have to say, ‘load these dependencies, and then load the stuff I want to work with’, rather than ‘load this and backfill any missing dependencies for me’. You can implement these features on top of one of these loaders but it’s tricky, and I’d rather keep my application code as uncluttered as I can.

For me, the final big win with JS.Packages is being able to use the same loader for all the objects I want, even objects from libraries with their own loader systems. For example I can configure Google Maps by using custom loader functions:

JS.Packages(function() { with(this) {
    file('http://www.google.com/jsapi?key=MY_API_KEY')
        .provides('google.load');

    loader(function(cb) { google.load('maps', '2.x', {callback: cb}) })
        .provides('GMap2', 'GClientGeocoder', 'GEvent', 'GLatLng', 'GMarker')
        .requires('google.load');
}});

I can then call JS.require('GMap2') to use Google Maps. Again, this highlights the benefit of working with objects and interfaces instead of script URLs. It also means you can make a loader that works across multiple JS platforms, not just on the web.

So that sums up why I’m still using JS.Packages to load my code: it’s portable, and it automates a lot of the decision-making and coding I’d have to do using other loaders. But I’m not here to say it’s the “best”. The design decisions made by other libraries all have their applications. Some emphasize small file size, some emphasize download times, some include dependency managers, and some have server-side build tools to package your code for production. Almost all of them are smaller than JS.Packages, but not by much. Personally I think the slight extra file size is worth it considering how simple the code that uses it becomes.

But even if you decide you like the look of JS.Packages you should benchmark a few of these, try out their APIs and see what works for you. My experience is every codebase is different, and things that are good ideas in theory sometimes don’t perform like you’d expect. As Rebecca Murphey says, we as a community should work toward a file loader that works for everyone, but for now I’m going to use what works for me, for JavaScript as it exists today.