Refactoring towards testable JavaScript, part 3

This article is the third and final part of a 3-part series.

We finished up the previous article having separated the business logic from the DOM interactions in our JavaScript, and adjusted our unit tests to take advantage of this. In this final part of the series, we’ll look at how to run the tests we have across a range of browsers automatically, to give us maximum confidence that our code works.

To automate cross-browser testing, I use a much-overlooked tool called TestSwarm. Developed by John Resig for testing jQuery, it takes care of tracking your test status in multiple browsers as you make commits to your project.

To set it up, we need to clone it from GitHub and create a directory within to host revisions of our project.

$ git clone git://github.com/jquery/testswarm.git
$ cd testswarm
$ cp config/config-sample.ini config.ini
$ mkdir -p changeset/jsapp

You’ll need to create a MySQL database for it:

CREATE USER 'testswarm'@'localhost' IDENTIFIED BY 'choose-a-password';
CREATE DATABASE testswarm;
GRANT ALL ON testswarm.* TO 'testswarm'@'localhost';

Then import the TestSwarm schema:

$ mysql -u testswarm -p testswarm < config/testswarm.sql
$ mysql -u testswarm -p testswarm < config/useragents.sql

Once you’ve added the database details to config.ini and set up an Apache VHost, you can visit your TestSwarm server and click ‘Signup’. Once you’ve filled in that form you’ll be able to grab your auth key from the database:


$ mysql testswarm -u testswarm -p
mysql> select auth from users;
+------------------------------------------+
| auth                                     |
+------------------------------------------+
| a962c548c22a591e8f150b9d9f6b673b6f212d08 |
+------------------------------------------+

Keep that auth code somewhere as you’ll need it later on. Now before we go any further, to show how TestSwarm works I want to deliberately break our application so that it doesn’t work in Internet Explorer. To do this, I’m going to replace jQuery#bind with HTMLElement#addEventListener, and when we push our code to TestSwarm we should see it break.
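
The change itself is small. Here’s a rough sketch of the kind of edit I mean; the element and handler names are illustrative rather than the exact code from the earlier parts:

// Before: jQuery normalises event binding across browsers
// $(form).bind('submit', handleSubmit);

// After: deliberately non-portable -- IE8 has no addEventListener()
form.addEventListener('submit', handleSubmit, false);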

To get our tests running on TestSwarm, we need a config file. I just grabbed one of the standard Perl scripts from the TestSwarm project and added my own configuration. This tells the script where your TestSwarm server is, where your code should be checked out, any build scripts you need to run, which files to load, etc. JS.Test includes TestSwarm support baked in so we don’t need to modify our tests at all to make them send reports to the TestSwarm server; we just need to load the same old spec/browser.html file we’ve been using all along. You should configure the Perl script to clone your project into the changeset/jsapp directory we created earlier: this is in TestSwarm’s public directory so web browsers will be able to load it from there. You’ll need to include the auth key we created earlier to submit jobs to the server.

Having created this file, we clone the project on our server somewhere and create a cron job to periodically update our copy and run the TestSwarm script: this means that new test jobs will be submitted whenever we commit to the project.

# crontab
* * * * * cd $HOME/projects/jsapp && git pull && perl spec/testswarm.pl

If you now open a couple of browsers and connect to the swarm, you’ll see tests begin to run. If you inspect the test results for our project, you should see results reported for each browser:

The green box is Chrome reporting 5 passed tests, and the black box is IE8 reporting 7 errors. If we click through we see what happened:

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)

  • Error:
    FormValidator with valid data displays no errors:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid email displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid argument displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: ‘submit’ is undefined
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: ‘error’ is undefined

5 tests, 4 assertions, 0 failures, 7 errors

“Object doesn’t support this property or method” is IE’s way of saying you’re calling a method that doesn’t exist, in our case addEventListener(). If we make some changes so that we use attachEvent() instead in IE, when TestSwarm picks up the change and runs our tests, they go green in IE.
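
A minimal way to do that, assuming form is the raw DOM element and handleSubmit is our handler (both names illustrative):

// Use the W3C model where it exists, fall back to IE's attachEvent()
if (form.addEventListener) {
  form.addEventListener('submit', handleSubmit, false);
} else {
  form.attachEvent('onsubmit', handleSubmit);
}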

You can leave any number of browsers connected to the swarm and they will run tests automatically as you make commits to the project. This is the great advantage of sticking to portable unit tests for your JavaScript: it makes this kind of automation much easier, because you run tests in real browsers and don’t need a lot of additional tooling to set up fake environments. I run the JS.Class tests this way and it’s great for making sure my code works across all platforms before shipping a release.

The final big win, having set all this up, is that we can now delete some of our full-stack tests since they just duplicate our unit tests. When we’re testing a full-stack integration, we really just want to test at the level of abstraction the integration works at, i.e. test the glue that binds the pieces together, rather than testing all the different cases of every piece of business logic. In our case, this means testing two broad cases: either the form is valid or it is not valid. We have unit tests that cover in detail what ‘valid’ and ‘invalid’ mean, and we don’t need to duplicate these. We just need to test their effect on the application as a whole: either the form submits or it doesn’t. We can then instantly discard all the integration tests that cover the validation details, leaving the broad cases covered. Doing this rigorously will keep your integration tests to a minimum and keep your build running quickly.

To wrap up this series, I thought I’d mention a couple of other things we can do with our tests to get extra coverage. The first is the JS.Test coverage tool. If we write our code using JS.Class, we can make the test framework report which methods were called during the test just by adding cover(FormValidator) to the top of the spec (there’s a sketch of where this call sits below). When we run the tests we get a report:

$ node spec/console.js 
Loaded suite FormValidator

Started
....

  +-----------------------------------+-------+
  | Method                            | Calls |
  +-----------------------------------+-------+
  | Method:FormValidator.validate     | 4     |
  | Method:FormValidator#handleSubmit | 0     |
  | Method:FormValidator#initialize   | 0     |
  +-----------------------------------+-------+

Finished in 0.005 seconds
4 tests, 4 assertions, 0 failures, 0 errors

If any of the methods are not called, the process exits with a non-zero exit status so you can treat your build as failing until all the methods are called during the test.
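
For reference, here’s roughly where that cover() call sits. The describe structure follows the JS.Test style used for the specs in the earlier parts; the nested block names are just illustrative:

JS.Test.describe("FormValidator", function() { with(this) {
  cover(FormValidator)   // make JS.Test track calls to FormValidator's methods

  describe("with valid data", function() { with(this) {
    it("displays no errors", function() { with(this) {
      // ... assertions from part 2 ...
    }})
  }})
}})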

Finally, I have an ongoing experiment: a Capybara driver called Terminus that lets you run your Capybara-based tests on remote machines like phones, iPads and so on. If we change our Capybara driver as required, we can open a browser on a remote machine, connect to the Terminus server and run the tests on that machine, or on many machines at once if our tests involve communication between many clients.

Here’s the full list of software we’ve used in this series:

  • Sinatra – Ruby web application framework used to create our application stack
  • jQuery – client side DOM, Ajax and effects library used to handle form submissions
  • JS.Class, JS.Test – portable object system and testing framework for JavaScript
  • Cucumber – Ruby acceptance testing framework used for writing plain-text test scenarios
  • Capybara – web scripting API that can drive many different backends
  • Rack::Test – Capybara backend that talks to Rack applications directly with no wire traffic; suited to fast, in-process testing, but does not support JavaScript
  • Selenium – Capybara backend that runs using a real browser, slower but supports JavaScript
  • Terminus – Capybara backend that can drive any remote browser using JavaScript
  • PhantomJS – headless distribution of WebKit, scriptable using JavaScript
  • TestSwarm – automated cross-browser CI server for tracking JavaScript unit tests across the project history

I’ll leave you with a few points to bear in mind to keep your JavaScript unit-testable:

  • Minimize DOM interaction – write your business logic in pure JavaScript, test it server-side, and use a ‘controller’ layer to bind this logic to your UI (see the sketch after this list).
  • Keep controllers DOM focused – in JavaScript, ‘controllers’ in MVC parlance are basically your event handlers. They should handle user input, trigger actions in your business logic, and update the page as appropriate.
  • If you need a browser, use a real one – in my experience, given how easy it is to test on real browsers and minimize integration tests, fake DOM environments are often more pain than they’re worth. The important thing is to keep your code as portable as possible so you can adapt if you spot more suitable tools.
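
To make the first two points concrete, here’s a rough sketch of the shape this separation takes. The validate/handleSubmit split mirrors the FormValidator from this series, but the method bodies, and helpers like getFormData() and displayErrors(), are hypothetical:

// Pure business logic: no DOM access, so it runs under Node as well as in a browser
FormValidator.validate = function(params) {
  var errors = [];
  if (!/\S/.test(params.name))     errors.push('Name is required');
  if (!/.+@.+/.test(params.email)) errors.push('Email does not look valid');
  return errors;
};

// Controller layer: the only code that touches the DOM
FormValidator.prototype.handleSubmit = function(event) {
  var errors = FormValidator.validate(this.getFormData());  // getFormData() is hypothetical
  if (errors.length > 0) {
    event.preventDefault();         // stop the form submitting
    this.displayErrors(errors);     // displayErrors() is hypothetical
  }
};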