This article is one in a 3-part series. The full series is:
- Part 1: testing JavaScript without your full stack
- Part 2: separating business logic from the DOM
- Part 3: automating cross-browser testing
At the end of the previous article, we’d just finished reproducing our full-stack Cucumber tests as pure JavaScript unit tests against the FormValidator class, ending up with this spec:
FORM_HTML = '\
  <form method="post" action="/accounts/create">\
    <label for="username">Username</label>\
    <input type="text" id="username" name="username">\
    \
    <label for="email">Email</label>\
    <input type="text" id="email" name="email">\
    \
    <div class="error"></div>\
    <input type="submit" value="Sign up">\
  </form>'
JS.require('JS.Test', function() {
  JS.Test.describe("FormValidator", function() { with(this) {
    before(function() {
      $("#fixture").html(FORM_HTML)
      new FormValidator($("form"))
      this.submit = $("form input[type=submit]")
      this.error = $("form .error")
    })

    describe("with an invalid name", function() { with(this) {
      before(function() { with(this) {
        $("#username").val("Hagrid")
        submit.click()
      }})

      it("displays an error message", function() { with(this) {
        assertEqual( "Your name is invalid", error.html() )
      }})
    }})

    // ...
  }})

  JS.Test.autorun()
})
These run much faster than their full-stack counterparts, and they let us run the tests in any browser we like. But they’re still not ideal: we’ve made the mistake of tangling up the model with the view, testing validation logic by going through the UI layer. If we separate the business logic from the view logic, we’ll end up with validation functions written in pure JavaScript that don’t touch the DOM and can be tested from the command line.
Before we do that, though, let’s move the spec out of the HTML test page and into its own JavaScript file. This will make it easier to load on the command line when we get to that stage. That leaves our HTML page containing just the logic needed to load the code and the tests:
JS.Packages(function() { with(this) {
  file('../public/jquery.js')
    .provides('jQuery', '$')

  file('../public/form_validator.js')
    .provides('FormValidator')
    .requires('jQuery')

  autoload(/^(.*)Spec$/, {from: '../spec/javascript', require: '$1'})
}})

JS.require('JS.Test', function() {
  JS.require('FormValidatorSpec', JS.Test.method('autorun'))
})
We will eventually move this into its own file as well, but for now getting the spec into a separate file is the important step.
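One detail worth noting: for the autoload rule above to find the spec, the moved file needs to register its suite on JS.ENV under the name we require. A minimal sketch, assuming JS.Packages’ usual naming convention maps FormValidatorSpec to spec/javascript/form_validator_spec.js:

// spec/javascript/form_validator_spec.js (assumed path)
JS.ENV.FormValidatorSpec = JS.Test.describe("FormValidator", function() { with(this) {
  // ... the spec from above, unchanged ...
}})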
Recall that our FormValidator class currently looks like this:
function FormValidator(form) {
  var username = form.find('#username'),
      email = form.find('#email'),
      error = form.find('.error');

  form.bind('submit', function() {
    if (username.val() === 'Wizard') {
      error.html('Your argument is invalid');
      return false;
    }
    else if (username.val() !== 'Harry') {
      error.html('Your name is invalid');
      return false;
    }
    else if (!/@/.test(email.val())) {
      error.html('Your email is invalid');
      return false;
    }
  });
}
We can refactor so that we get a validation function that doesn’t touch the DOM:
FormValidator = function(form) {
  form.bind('submit', function() {
    // Marshal the form's fields into a plain data object
    var params = form.serializeArray(),
        data = {};

    for (var i = 0, n = params.length; i < n; i++)
      data[params[i].name] = params[i].value;

    // Delegate to the pure validation function
    var errors = FormValidator.validate(data);
    if (errors.length === 0) return true;

    // Display the first error and cancel the submission
    form.find('.error').html(errors[0]);
    return false;
  });
};
FormValidator.validate = function(params) {
  var errors = [];

  if (params.username === 'Wizard')
    errors.push('Your argument is invalid');
  else if (params.username !== 'Harry')
    errors.push('Your name is invalid');
  else if (!/@/.test(params.email))
    errors.push('Your email is invalid');

  return errors;
};
Notice how FormValidator.validate() does not talk to the DOM at all: it doesn’t listen to events and it doesn’t modify the page. It just accepts a data object and returns a (hopefully empty) list of errors. The FormValidator initialization does the work of listening to form events, marshalling the form’s data, running the validation and printing any errors. The DOM interaction has been separated from the business logic.
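Because FormValidator.validate() is a pure function, we can exercise it anywhere, even from a browser console. For example:

// Calling the validator directly, no DOM required
FormValidator.validate({username: 'Hagrid'})
// -> ['Your name is invalid']

FormValidator.validate({username: 'Harry', email: 'harry@hogwarts.com'})
// -> []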
This step lets us refactor our tests so that they don’t use the DOM at all; they just test the business logic:
JS.ENV.FormValidatorSpec = JS.Test.describe("FormValidator", function() { with(this) {
  describe("with valid data", function() { with(this) {
    before(function() { with(this) {
      this.errors = FormValidator.validate({username: "Harry", email: "wizard@hogwarts.com"})
    }})

    it("displays no errors", function() { with(this) {
      assertEqual( [], errors )
    }})
  }})

  describe("with an invalid name", function() { with(this) {
    before(function() { with(this) {
      this.errors = FormValidator.validate({username: "Hagrid"})
    }})

    it("displays an error message", function() { with(this) {
      assertEqual( ["Your name is invalid"], errors )
    }})
  }})

  // ...
}})
Testing the business logic without going through the DOM also lets us add another test, for the valid case: if the form data is valid, the form submission proceeds unhindered and the page running the tests is unloaded, so we cannot test this case through the DOM. By testing the business logic directly, we can cover it without worrying about the page changing.
We load up our tests in the browser and once again they are all good.
Since our tests no longer talk to the DOM, we can run them on the command line. We move the package configuration and loading logic out of the HTML page and into its own file, spec/runner.js:
var CWD = (typeof CWD === 'undefined') ? '.' : CWD

JS.Packages(function() { with(this) {
  file(CWD + '/public/form_validator.js')
    .provides('FormValidator')

  autoload(/^(.*)Spec$/, {from: CWD + '/spec/javascript', require: '$1'})
}})

JS.require('JS.Test', function() {
  JS.require('FormValidatorSpec', JS.Test.method('autorun'))
})
This just leaves the test page spec/browser.html needing to load the JS.Class seed file and runner.js:
<!doctype html>
<html>
  <head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8">
    <title>FormValidator tests</title>
    <script type="text/javascript" src="../vendor/js.class/build/min/loader.js"></script>
  </head>
  <body>
    <script type="text/javascript">CWD = '..'</script>
    <script type="text/javascript" src="./runner.js"></script>
  </body>
</html>
We’ve now moved all our JavaScript out of our HTML and we can run these new JavaScript files on the server side. All we need to do is create a file that performs the same job as spec/browser.html but for server-side platforms. We’ll call this file spec/console.js:
JSCLASS_PATH = 'vendor/js.class/build/src'

if (typeof require === 'function') {
  require('../' + JSCLASS_PATH + '/loader.js')
  require('./runner.js')
} else {
  load(JSCLASS_PATH + '/loader.js')
  load('spec/runner.js')
}
This file performs some feature detection to figure out how to load files: Node provides require(), while V8, Rhino and SpiderMonkey provide load(). This is the only place we need such a check, since JS.Packages works out how to load files for us from here on. Let’s try running this script with Node:
$ node spec/console.js
Loaded suite FormValidator
Started
....
Finished in 0.004 seconds
4 tests, 4 assertions, 0 failures, 0 errors
We’ve now got some lightning-fast unit tests of our JavaScript business logic that we can run from the command line. The portability of JS.Test means you can run these tests with Node, V8, Rhino and SpiderMonkey, and with a little adjustment to console.js (see the JS.Test documentation) you can even run them on Windows Script Host.
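For example, assuming the respective shells are on your PATH (command names vary between installs), the same script should run unchanged:

$ js spec/console.js      # SpiderMonkey
$ d8 spec/console.js      # V8
$ rhino spec/console.js   # Rhino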
However, our test coverage has slipped a bit: we are no longer testing the interaction with the DOM at all. We ought to have at least a sanity test that our app is wired together correctly, and we can do this easily by adding a section at the end of our FormValidatorSpec beginning with this line:
if (typeof document === 'undefined') return
Any tests defined after this guard will only be executed when the suite runs in a DOM environment, so we can use them to check the interaction with the DOM without breaking the command-line run.
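As a minimal sketch of such a test, assuming the browser page supplies jQuery, a #fixture element and the FORM_HTML fixture from the spec at the top of this article:

// Only reached in a browser: sanity-check the DOM wiring
describe("in the browser", function() { with(this) {
  before(function() { with(this) {
    $("#fixture").html(FORM_HTML)
    new FormValidator($("form"))
  }})

  it("displays an error for an invalid name", function() { with(this) {
    $("#username").val("Hagrid")
    $("form input[type=submit]").click()
    assertEqual( "Your name is invalid", $("form .error").html() )
  }})
}})

One extra test like this is consistent with the PhantomJS run below reporting five tests where the console run reported four.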
To round off this section, let’s get this DOM test running on the command line as well using PhantomJS, a headless browser based on WebKit that you can control using JavaScript. It also lets you catch console output emitted by the pages you load, which means you can monitor your tests: I recently made JS.Test emit JSON on the console for just this purpose.
We can create a script to load our test page and capture this output:
var page = new WebPage()

page.onConsoleMessage = function(message) {
  try {
    // JS.Test emits JSON messages with a 'jstest' property; ignore anything else
    var event = JSON.parse(message).jstest
    if (!event) return

    if (event.status === 'failed')
      return console.log('FAILED: ' + event.test)

    // A 'total' field signals the end of the run: report and exit
    if (event.total) {
      console.log(JSON.stringify(event))
      var status = (!event.fail && !event.error) ? 0 : 1
      phantom.exit(status)
    }
  } catch (e) {}
}

page.open('spec/browser.html')
As you can see, it’s just a case of parsing every console message we get and checking the data contained therein. If a message signals the end of the tests, we exit with the appropriate status. Let’s give this script a spin:
$ phantomjs spec/phantom.js
{"fail":0,"error":0,"total":5}
So we’ve now got full DOM integration tests we can run on the command line, letting us roll this into our continuous integration cycle. You can run PhantomJS on server machines, although if you’re not running X on these machines you’ll need Xvfb to give PhantomJS a virtual framebuffer to work with.
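For example, a CI step might be as simple as the following, assuming the xvfb-run wrapper that ships with Xvfb is available:

$ xvfb-run phantomjs spec/phantom.js

The exit status set by phantom.exit() is then all the CI server needs to decide whether the build passed.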
It’s worth mentioning at this point that I’ve never been a fan of browser simulators, that is, fake DOM environments used during testing. They never behave quite like real browsers, and often involve a lot of elaborate environment set-up in other languages that makes your tests non-portable. I’ve found it far too easy to find bugs in them; for example, HtmlUnit (which the Ruby tools Celerity and Akephalos are based on) will throw a NullPointerException when running our tests because of the cancelled form submission. Given how easy it is to use PhantomJS for unit testing and Selenium through Capybara for full-stack testing, and that these tools use real browsers, I don’t see a huge benefit to using simulators. I like to keep as much of my code as I can in simple, portable JavaScript that can easily be run in different environments to maintain flexibility.
In the final part of this series, we’ll cover how to strengthen your test coverage by automating your cross-browser testing process.