Terminus 0.3: control multiple browsers with Ruby

As you’ll have noticed if you made it to the end of my last post, there is a new release of Terminus. Terminus is a Capybara driver that is designed to let you control your app in any browser on any device, by sending all driver instructions to be executed on the client side in JavaScript.

This release is the first since Capybara 1.0, and supports the entire Capybara API. This includes:

  • Reading response headers and status codes
  • Handling cookies
  • Running JavaScript and receiving the results
  • Resynchronizing XHR requests (jQuery only)
  • Switching between frames and windows
  • Detecting infinite redirects

This is a superset of the supported features of the Rack::Test and Selenium drivers, and has the added bonus of letting you switch between browsers. When you have multiple browsers connected to your Terminus server, you can select which one you want to control by matching on the browser’s name, OS, version and current URL, for example:

Terminus.browser = {:name => /Safari/, :current_url => /pitchfork.com/}

You can select any browser that is ‘docked’, i.e. idling on the Terminus holding page:

Terminus.browser = :docked

Or you can simply pick one browser from the list:

Terminus.browser = Terminus.browsers.first

All this lets you control multiple browsers at once; for example, I’ve been using it to automate some of the Faye integration tests:

#================================================================
# Acquire some browsers and log into each with a username

NAMES = %w[alice bob carol]
BROWSERS = {}
Terminus.ensure_browsers 3

Terminus.browsers.each_with_index do |browser, i|
  name = NAMES[i]
  puts "#{name} is using #{browser}"
  BROWSERS[name] = browser
  Terminus.browser = browser
  visit '/'
  fill_in 'username', :with => name
  click_button 'Go'
end

#================================================================
# Send a message from each browser to every other browser,
# and check that it arrived. If it doesn't arrive, send all
# the browsers back to the dock and raise an exception

BROWSERS.each do |name, sender|
  BROWSERS.each do |at, target|
    next if at == name
    
    Terminus.browser = sender
    fill_in 'message', :with => "@#{at} Hello, world!"
    click_button 'Send'
    
    Terminus.browser = target
    unless page.has_content?("#{name}: @#{at} Hello, world!")
      Terminus.return_to_dock
      raise "Message did not make it from #{sender} to #{target}"
    end
  end
end

#================================================================
# Re-dock all the browsers when we're finished

Terminus.return_to_dock

So what’s not supported? Internet Explorer is still not supported, because I cannot find a decent way to run XPath queries on it. I was working on Pathology to solve this, but I can’t get it to perform well enough for the workload Capybara throws at it. It might be possible to work around this by monkey-patching Capybara to pass through CSS selectors instead of compiling them to XPath, though. File attachments are not supported for security reasons, and there are still some bugs that show up if you do things you’re not supposed to, like using duplicate element IDs; these are particularly apparent on Opera. And finally, visiting remote hosts outside your application is supported, but is not particularly robust as yet.

You can find out more and see a video of it in action on its new website.

Refactoring towards testable JavaScript, part 3

This article is one in a 3-part series. The full series is:

  • Refactoring towards testable JavaScript, part 1
  • Refactoring towards testable JavaScript, part 2
  • Refactoring towards testable JavaScript, part 3

We finished up the previous article having separated the business logic from the DOM interactions in our JavaScript, and adjusted our unit tests to take advantage of this. In the final part of this series, we’ll take a look at how to take the tests we have and run them across a range of browsers automatically to give us maximum confidence that our code works.

To automate cross-browser testing, I use a much-overlooked tool called TestSwarm. Developed by John Resig for testing jQuery, it takes care of tracking your test status in multiple browsers as you make commits to your project.

To set it up, we need to clone it from GitHub and create a directory within it to host revisions of our project.

$ git clone git://github.com/jquery/testswarm.git
$ cd testswarm
$ cp config/config-sample.ini config.ini
$ mkdir -p changeset/jsapp

You’ll need to create a MySQL database for it:

CREATE USER 'testswarm'@'localhost' IDENTIFIED BY 'choose-a-password';
CREATE DATABASE testswarm;
GRANT ALL ON testswarm.* TO 'testswarm'@'localhost';

Then import the TestSwarm schema:

$ mysql -u testswarm -p testswarm < config/testswarm.sql
$ mysql -u testswarm -p testswarm < config/useragents.sql

Once you’ve added the database details to config.ini and set up an Apache VHost, you can visit your TestSwarm server and click ‘Signup’. After filling in that form, you’ll be able to grab your auth key from the database:


$ mysql testswarm -u testswarm -p
mysql> select auth from users;
+------------------------------------------+
| auth                                     |
+------------------------------------------+
| a962c548c22a591e8f150b9d9f6b673b6f212d08 |
+------------------------------------------+

Keep that auth code somewhere as you’ll need it later on. Now, before we go any further, to show how TestSwarm works I want to deliberately break our application so that it doesn’t work in Internet Explorer. To do this, I’m going to replace jQuery#bind with HTMLElement#addEventListener, and when we push our code to TestSwarm we should see it break.

To get our tests running on TestSwarm, we need a config file. I just grabbed one of the standard Perl scripts from the TestSwarm project and added my own configuration. This tells the script where your TestSwarm server is, where your code should be checked out, any build scripts you need to run, which files to load, and so on. JS.Test has TestSwarm support baked in, so we don’t need to modify our tests at all to make them send reports to the TestSwarm server; we just need to load the same old spec/browser.html file we’ve been using all along. You should configure the Perl script to clone your project into the changeset/jsapp directory we created earlier: this is in TestSwarm’s public directory, so web browsers will be able to load it from there. You’ll also need to include the auth key we created earlier to submit jobs to the server.

Having created this file, we clone the project on our server somewhere and create a cron job to periodically update our copy and run the TestSwarm script: this means that new test jobs will be submitted whenever we commit to the project.

# crontab
* * * * * cd $HOME/projects/jsapp && git pull && perl spec/testswarm.pl

If you now open a couple of browsers and connect to the swarm, you’ll see tests begin to run. If you inspect the test results for our project you should see this:

The green box is Chrome reporting 5 passed tests, and the black box is IE8 reporting 7 errors. If we click through we see what happened:

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)

  • Error:
    FormValidator with valid data displays no errors:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid email displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid argument displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: Object doesn’t support this property or method
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: ‘submit’ is undefined
  • Error:
    FormValidator with an invalid name displays an error message:
    TypeError: ‘error’ is undefined

5 tests, 4 assertions, 0 failures, 7 errors

“Object doesn’t support this property or method” is IE’s way of saying you’re calling a method that doesn’t exist, in our case addEventListener(). If we make some changes so that we use attachEvent() instead in IE, then when TestSwarm picks up the change and runs our tests, they go green in IE.
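The fix can be sketched as a small feature-detecting helper (`bindEvent` is a name invented for this sketch; jQuery’s bind() papers over the same difference internally):

```javascript
// A minimal cross-browser event-binding helper. Old IE lacks
// addEventListener, so we fall back to its proprietary attachEvent.
function bindEvent(element, type, listener) {
  if (element.addEventListener) {
    element.addEventListener(type, listener, false); // W3C DOM (and IE9+)
  } else if (element.attachEvent) {
    element.attachEvent('on' + type, listener);      // IE 6-8
  }
}
```

Detecting the capability on the element itself, rather than sniffing the user agent, is what keeps this robust across browser versions.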

You can leave any number of browsers connected to the swarm and they will run tests automatically as you make commits to the project. This is the great advantage of sticking to portable unit tests for your JavaScript: it makes this kind of automation much easier, since you run your tests in real browsers and don’t need a lot of additional tooling to set up fake environments. I run the JS.Class tests this way and it’s great for making sure my code works across all platforms before shipping a release.

The final big win, having set all this up, is that we can now delete some of our full-stack tests since they just duplicate our unit tests. When we’re testing a full-stack integration, we really just want to test at the level of abstraction the integration works at, i.e. test the glue that binds the pieces together, rather than testing all the different cases of every piece of business logic. In our case, this means testing two broad cases: either the form is valid or it is not valid. We have unit tests that cover in detail what ‘valid’ and ‘invalid’ mean, and we don’t need to duplicate these. We just need to test their effect on the application as a whole: either the form submits or it doesn’t. We can then instantly discard all the integration tests that cover the validation details, leaving the broad cases covered. Doing this rigorously will keep your integration tests to a minimum and keep your build running quickly.

To wrap up this series, I thought I’d mention a couple of other things we can do with our tests to get extra coverage. The first is the JS.Test coverage tool. If we write our code using JS.Class, we can make the test framework report which methods were called during the tests just by adding cover(FormValidator) to the top of the spec. When we run the tests we get a report:

<code>$ node spec/console.js 
Loaded suite FormValidator

Started
....

  +-----------------------------------+-------+
  | Method                            | Calls |
  +-----------------------------------+-------+
  | Method:FormValidator.validate     | 4     |
  | Method:FormValidator#handleSubmit | 0     |
  | Method:FormValidator#initialize   | 0     |
  +-----------------------------------+-------+

Finished in 0.005 seconds
4 tests, 4 assertions, 0 failures, 0 errors</code>

If any of the methods are not called, the process exits with a non-zero exit status so you can treat your build as failing until all the methods are called during the test.
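The idea behind this kind of method-call coverage can be sketched in a few lines of plain JavaScript; this is an illustration of the principle only, not JS.Test’s actual implementation:

```javascript
// Illustration only: wrap every function-valued property of an object so
// that calls are counted. Inspecting the counts after a test run tells you
// which methods were never exercised.
function cover(object) {
  var counts = {};
  Object.keys(object).forEach(function(name) {
    var fn = object[name];
    if (typeof fn !== 'function') return;
    counts[name] = 0;
    object[name] = function() {
      counts[name] += 1;
      return fn.apply(this, arguments); // delegate to the original method
    };
  });
  return counts;
}
```

Wrapping the object before the tests run and checking the counts afterwards is enough to fail the build when a method was never called.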

Finally, I have an ongoing experimental Capybara driver called Terminus that lets you run your Capybara-based tests on remote machines like phones, iPads and so on. If we change our Capybara driver as required, we can open a browser on a remote machine, connect to the Terminus server and run the tests on that machine, or on many machines at once if your tests involve communication between many clients.

Here’s the full list of software we’ve used in this series:

  • Sinatra – Ruby web application framework used to create our application stack
  • jQuery – client side DOM, Ajax and effects library used to handle form submissions
  • JS.Class, JS.Test – portable object system and testing framework for JavaScript
  • Cucumber – Ruby acceptance testing framework used for writing plain-text test scenarios
  • Capybara – web scripting API that can drive many different backends
  • Rack::Test – Capybara backend that talks to Rack applications directly with no wire traffic, suited to doing fast, in-process testing, does not support JavaScript
  • Selenium – Capybara backend that runs using a real browser, slower but supports JavaScript
  • Terminus – Capybara backend that can drive any remote browser using JavaScript
  • PhantomJS – headless distribution of WebKit, scriptable using JavaScript
  • TestSwarm – automated cross-browser CI server for tracking JavaScript unit tests across the project history

I’ll leave you with a few points to bear in mind to keep your JavaScript unit-testable:

  • Minimize DOM interaction – write your business logic in pure JavaScript, test it server-side, and use a ‘controller’ layer to bind this logic to your UI.
  • Keep controllers DOM focused – in JavaScript, ‘controllers’ in MVC parlance are basically your event handlers. They should handle user input, trigger actions in your business logic, and update the page as appropriate.
  • If you need a browser, use a real one – in my experience, given how easy it is to test on real browsers and minimize integration tests, fake DOM environments are often more pain than they’re worth. The important thing is to keep your code as portable as possible so you can adapt if you spot more suitable tools.

Refactoring towards testable JavaScript, part 2

This article is one in a 3-part series. The full series is:

  • Refactoring towards testable JavaScript, part 1
  • Refactoring towards testable JavaScript, part 2
  • Refactoring towards testable JavaScript, part 3

At the end of the previous article, we’d just finished reproducing our full-stack Cucumber tests as pure JavaScript unit tests against the FormValidator class, ending up with this spec:

FORM_HTML = '\
    <form method="post" action="/accounts/create">\
      <label for="username">Username</label>\
      <input type="text" id="username" name="username">\
      \
      <label for="email">Email</label>\
      <input type="text" id="email" name="email">\
      \
      <div class="error"></div>\
      <input type="submit" value="Sign up">\
    </form>'

JS.require('JS.Test', function() {

  JS.Test.describe("FormValidator", function() { with(this) {
    before(function() {
      $("#fixture").html(FORM_HTML)
      new FormValidator($("form"))

      this.submit = $("form input[type=submit]")
      this.error  = $("form .error")
    })

    describe("with an invalid name", function() { with(this) {
      before(function() { with(this) {
        $("#username").val("Hagrid")
        submit.click()
      }})

      it("displays an error message", function() { with(this) {
        assertEqual( "Your name is invalid", error.html() )
      }})
    }})

    // ...
  }})

  JS.Test.autorun()
})

These run much faster than their full-stack counterparts, and they let us run the tests in any browser we like. But they’re still not ideal: we’ve made the mistake of tangling up the model with the view, testing validation logic by going through the UI layer. If we separate the business logic from the view logic, we’ll end up with validation functions written in pure JavaScript that don’t touch the DOM and can be tested from the command line.

Before we do that though, let’s move the spec out of the HTML test page and into its own JavaScript file. This will make it easier to load on the command line when we get to that stage. This leaves our HTML page containing just the logic needed to load the code and the tests:

JS.Packages(function() { with(this) {
  
  file('../public/jquery.js')
      .provides('jQuery', '$')
  
  file('../public/form_validator.js')
      .provides('FormValidator')
      .requires('jQuery')
  
  autoload(/^(.*)Spec$/, {from: '../spec/javascript', require: '$1'})
}})

JS.require('JS.Test', function() {
  JS.require('FormValidatorSpec', JS.Test.method('autorun'))
})

We will eventually move this into its own file as well, but for now getting the spec into a separate file is the important step.

Recall our FormValidator class currently looks like this:

function FormValidator(form) {
  var username = form.find('#username'),
      email    = form.find('#email'),
      error    = form.find('.error');

  form.bind('submit', function() {
    if (username.val() === 'Wizard') {
      error.html('Your argument is invalid');
      return false;
    }
    else if (username.val() !== 'Harry') {
      error.html('Your name is invalid');
      return false;
    }
    else if (!/@/.test(email.val())) {
      error.html('Your email is invalid');
      return false;
    }
  });
};

We can refactor so that we get a validation function that doesn’t touch the DOM:

FormValidator = function(form) {
  form.bind('submit', function() {
    var params = form.serializeArray(),
        data   = {};
    
    for (var i = 0, n = params.length; i < n; i++)
      data[params[i].name] = params[i].value;
    
    var errors = FormValidator.validate(data);
    if (errors.length === 0) return true;
    
    form.find('.error').html(errors[0]);
    return false;
  });
};

FormValidator.validate = function(params) {
  var errors = [];
  
  if (params.username === 'Wizard')
    errors.push('Your argument is invalid');
  
  else if (params.username !== 'Harry')
    errors.push('Your name is invalid');
  
  else if (!/@/.test(params.email))
    errors.push('Your email is invalid');
  
  return errors;
};

Notice how FormValidator.validate() does not talk to the DOM at all: it doesn’t listen to events and it doesn’t modify the page. It just accepts a data object and returns a (hopefully empty) list of errors. The FormValidator initialization does the work of listening to form events, marshalling the form’s data, running the validation and printing any errors. The DOM interaction has been separated from the business logic.
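Because FormValidator.validate() is pure, we can exercise it anywhere JavaScript runs. Here’s a quick sanity check in Node (the function body is repeated verbatim so the snippet is self-contained), including the same serializeArray-style marshalling the submit handler performs:

```javascript
// FormValidator.validate, repeated from above so this runs standalone.
var FormValidator = {};

FormValidator.validate = function(params) {
  var errors = [];

  if (params.username === 'Wizard')
    errors.push('Your argument is invalid');

  else if (params.username !== 'Harry')
    errors.push('Your name is invalid');

  else if (!/@/.test(params.email))
    errors.push('Your email is invalid');

  return errors;
};

// The submit handler marshals jQuery's serializeArray() output into a plain
// object before validating; the same two steps work with hand-built data:
var params = [{name: 'username', value: 'Hagrid'}, {name: 'email', value: 'x'}],
    data   = {};

for (var i = 0, n = params.length; i < n; i++)
  data[params[i].name] = params[i].value;

console.log(FormValidator.validate(data));
console.log(FormValidator.validate({username: 'Harry', email: 'wizard@hogwarts.com'}));
```

The first call reports the invalid name, the second returns an empty error list, with no browser in sight.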

This step lets us refactor our tests so that they don’t use the DOM, they just test the business logic:

JS.ENV.FormValidatorSpec = JS.Test.describe("FormValidator", function() { with(this) {
  describe("with valid data", function() { with(this) {
    before(function() { with(this) {
      this.errors = FormValidator.validate({username: "Harry", email: "wizard@hogwarts.com"})
    }})
    
    it("displays no errors", function() { with(this) {
      assertEqual( [], errors )
    }})
  }})
  
  describe("with an invalid name", function() { with(this) {
    before(function() { with(this) {
      this.errors = FormValidator.validate({username: "Hagrid"})
    }})
    
    it("displays an error message", function() { with(this) {
      assertEqual( ["Your name is invalid"], errors )
    }})
  }})
  
  // ...
}})

Testing the business logic without going through the DOM has let us add another test: if the form data is valid, the form submission proceeds unhindered and the page running the tests is unloaded, so we cannot test the valid case through the DOM. By testing the business logic directly, we can test the valid case without worrying about the page changing.

We load up our tests in the browser and once again they are all good.

Since our tests no longer talk to the DOM, we can run them on the command line. We move the code that loads our source files and tests out of the HTML page and into its own file, spec/runner.js:

var CWD = (typeof CWD === 'undefined') ? '.' : CWD

JS.Packages(function() { with(this) {
  file(CWD + '/public/form_validator.js')
      .provides('FormValidator')
  
  autoload(/^(.*)Spec$/, {from: CWD + '/spec/javascript', require: '$1'})
}})

JS.require('JS.Test', function() {
  JS.require('FormValidatorSpec', JS.Test.method('autorun'))
})

This just leaves the test page spec/browser.html needing to load the JS.Class seed file and runner.js:

<!doctype html>
<html>
  <head>
    <meta http-equiv="Content-type" content="text/html; charset=utf-8">
    <title>FormValidator tests</title>
    <script type="text/javascript" src="../vendor/js.class/build/min/loader.js"></script>
  </head>
  <body>
    
    <script type="text/javascript">CWD = '..'</script>
    <script type="text/javascript" src="./runner.js"></script>
    
  </body>
</html>
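The CWD line at the top of runner.js is worth a note: because var declarations are hoisted and redeclaring a var never clears an existing value, the browser page can pre-set CWD before loading the file, while on the command line it falls back to '.'. A minimal demonstration:

```javascript
// Redeclaring a var never wipes out a value assigned earlier in the same
// scope, so a pre-set CWD survives the guard, while an unset one defaults.
CWD = '..';                                          // as spec/browser.html does
var CWD = (typeof CWD === 'undefined') ? '.' : CWD;  // the guard from runner.js
console.log(CWD);                                    // → '..'
```

Delete the first line and the same script prints '.', which is what happens when console.js loads runner.js directly.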

We’ve now moved all our JavaScript out of our HTML and we can run these new JavaScript files on the server side. All we need to do is create a file that performs the same job as spec/browser.html but for server-side platforms. We’ll call this file spec/console.js:

JSCLASS_PATH = 'vendor/js.class/build/src'

if (typeof require === 'function') {
  require('../' + JSCLASS_PATH + '/loader.js')
  require('./runner.js')
} else {
  load(JSCLASS_PATH + '/loader.js')
  load('spec/runner.js')
}

This file performs some feature detection to figure out how to load files. This is the only place we need such detection, since JS.Packages will figure out how to load files for us from here on. Let’s try running this script with Node:

<code>$ node spec/console.js 
Loaded suite FormValidator

Started
....
Finished in 0.004 seconds
4 tests, 4 assertions, 0 failures, 0 errors</code>

We’ve now got some lightning-fast unit tests of our JavaScript business logic that we can run from the command line. The portability of JS.Test means you can run these tests with Node, V8, Rhino and SpiderMonkey, and with a little adjustment to console.js (see the JS.Test documentation) you can even run them on Windows Script Host.

However, our test coverage has slipped a bit: we are no longer testing the interaction with the DOM at all. We ought to have at least a sanity test that our app is wired together correctly, and we can do this easily by adding a section at the end of our FormValidatorSpec beginning with this:

if (typeof document === 'undefined') return

We can then define a test after this to check the interaction with the DOM that will only be executed if the tests are running in a DOM environment.
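The effect of that early return is worth spelling out: everything above it is always defined, while anything below it only exists in a browser. A stripped-down model of the behaviour (`defineTests` is a hypothetical helper invented for this sketch):

```javascript
// Model of the guard's effect: pure-logic tests are always registered, while
// DOM checks are only registered when a document object exists.
function defineTests(env) {
  var registered = ['validation logic'];       // pure tests, defined first

  if (typeof env.document === 'undefined') return registered;

  registered.push('DOM wiring sanity check');  // browser-only tests
  return registered;
}

console.log(defineTests({}));                  // command line: logic tests only
console.log(defineTests({document: {}}));      // browser: both sets
```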

To round off this section, let’s get this DOM test running on the command line as well, using PhantomJS. This is a headless browser based on WebKit that you can control using JavaScript. It also lets you capture console output emitted by the pages you load, which makes it possible to monitor your tests. I recently made JS.Test emit JSON on the console for just this purpose.

We can create a script to load our test page and capture this output:

var page = new WebPage()

page.onConsoleMessage = function(message) {
  try {
    var event = JSON.parse(message).jstest
    if (!event) return
    
    if (event.status === 'failed')
      return console.log('FAILED: ' + event.test)
    
    if (event.total) {
      console.log(JSON.stringify(event))
      var status = (!event.fail && !event.error) ? 0 : 1
      phantom.exit(status)
    }
    
  } catch (e) {}
}

page.open('spec/browser.html')

As you can see, it’s just a case of parsing every console message we get and checking the data contained therein. If the message signals the end of the tests, we can exit with the appropriate exit status. Let’s give this script a spin:

<code>$ phantomjs spec/phantom.js 
{"fail":0,"error":0,"total":5}</code>
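The final-summary decision in that handler is easy to extract and check on its own. Here is the same parse-and-decide step as a standalone function, assuming (as the script above does) that JS.Test’s summary message looks like `{"jstest": {"fail": 0, "error": 0, "total": 5}}`:

```javascript
// Standalone version of the handler's decision: null means "keep listening",
// 0 means all tests passed, 1 means something failed or errored.
function exitStatusFor(message) {
  try {
    var event = JSON.parse(message).jstest;
    if (!event || !event.total) return null;    // not the final summary event
    return (!event.fail && !event.error) ? 0 : 1;
  } catch (e) {
    return null;                                // ignore non-JSON console noise
  }
}

console.log(exitStatusFor('{"jstest":{"fail":0,"error":0,"total":5}}')); // → 0
console.log(exitStatusFor('{"jstest":{"fail":1,"error":0,"total":5}}')); // → 1
console.log(exitStatusFor('some unrelated console.log output'));         // → null
```

Swallowing parse errors matters here because the page under test is free to log anything it likes; only the JSON events JS.Test emits should influence the exit status.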

So we’ve now got full DOM integration tests we can run on the command line, letting us roll this into our continuous integration cycle. You can run PhantomJS on server machines, although if you’re not running X on those machines you’ll need to run Xvfb to give PhantomJS a virtual framebuffer to work with.

It’s worth mentioning at this point that I’ve never been a fan of browser simulators, that is, fake DOM environments used during testing. They never behave quite like real browsers, and often involve a lot of elaborate environment set-up in other languages that makes your tests non-portable. I’ve found it far too easy to find bugs in them; for example, HtmlUnit (which the Ruby tools Celerity and Akephalos are based on) will throw a NullPointerException when running our tests because of the cancelled form submission. Given how easy it is to use PhantomJS for unit testing and Selenium through Capybara for full-stack testing, and that these tools use real browsers, I don’t see a huge benefit to using simulators. I like to keep as much of my code as I can in simple, portable JavaScript that can easily be run in different environments to maintain flexibility.

In the final part of this series, we’ll cover how to strengthen your test coverage by automating your cross-browser testing process.

Refactoring towards testable JavaScript, part 1

This article is one in a 3-part series. The full series is:

  • Refactoring towards testable JavaScript, part 1
  • Refactoring towards testable JavaScript, part 2
  • Refactoring towards testable JavaScript, part 3

As someone who does a lot of pure-JavaScript projects, I’ve settled into a pattern for organizing my code and its tests in a way I’m comfortable with. At Songkick, despite being obsessed with testing our Ruby code, we’ve traditionally done a patchy job of testing our JavaScript. Some recent refactoring is giving us a chance to review our practices, and I wanted to use this chance to see how easily we can test JavaScript within applications. I’m pleased to report that the tools available today make this an absolute breeze, and it’s quite easy to do a thorough job of putting together a sustainable testing strategy.

This is the first in a series of articles walking through a demo I presented internally at Songkick, showing various ways we can test our JavaScript and how we can refactor to keep these tests running quickly. I’ll be going through changes to a project and linking to Git commits as appropriate. We’ll cover a range of testing styles using tools written by me and others, all of which make JavaScript testing easy.

Let’s start off with version 0: we decide we want a new software product, and promptly write a spec for it. We decide there should be a sign-up form, and rules about what data is acceptable.

<code>Feature: Signing up
  In order to show everyone what a badass I am
  As a developer
  I want to make my users sit through some JavaScript validation
  
  Background:
    Given I visit the sign-up form
  
  Scenario: Entering the wrong name
    When I enter an invalid name
    Then I should see "Your name is invalid"
  
  Scenario: Entering the wrong email address
    When I enter an invalid email address
    Then I should see "Your email is invalid"
  
  Scenario: Having an invalid argument
    When I use an invalid argument
    Then I should see "Your argument is invalid"
  
  Scenario: Entering valid data
    When I enter valid sign-up data
    Then I should see "You are a wizard, Harry!"</code>

Great! Some detailed full-stack tests are a good starting point for making sure we build the right thing. Full of enthusiasm, we crack on and write some step definitions and an application that passes the tests. Here’s our little Sinatra application:

require 'sinatra'

get '/signup' do
  erb :signup
end

post '/accounts/create' do
  if params[:username] == 'Wizard'
    'Your argument is invalid'
  elsif params[:username] != 'Harry'
    'Your name is invalid'
  elsif params[:email] !~ /@/
    'Your email is invalid'
  else
    'You are a wizard, Harry!'
  end
end

And the view containing the sign-up form:

<form method="post" action="/accounts/create">
  <label for="username">Username</label>
  <input type="text" id="username" name="username">
  
  <label for="email">Email</label>
  <input type="text" id="email" name="email">
  
  <input type="submit" value="Sign up">
</form>

We run cucumber features/ and all is good:

<code>$ cucumber features/
Feature: Signing up
  In order to show everyone what a badass I am
  As a developer
  I want to make my users sit through some JavaScript validation

  Background:                      # features/signup.feature:6
    Given I visit the sign-up form # features/step_definitions/app_steps.rb:1

  Scenario: Entering the wrong name          # features/signup.feature:9
    When I enter an invalid name             # features/step_definitions/app_steps.rb:5
    Then I should see "Your name is invalid" # features/step_definitions/app_steps.rb:27

  Scenario: Entering the wrong email address  # features/signup.feature:13
    When I enter an invalid email address     # features/step_definitions/app_steps.rb:10
    Then I should see "Your email is invalid" # features/step_definitions/app_steps.rb:27

  Scenario: Having an invalid argument           # features/signup.feature:17
    When I use an invalid argument               # features/step_definitions/app_steps.rb:16
    Then I should see "Your argument is invalid" # features/step_definitions/app_steps.rb:27

  Scenario: Entering valid data                  # features/signup.feature:21
    When I enter valid sign-up data              # features/step_definitions/app_steps.rb:21
    Then I should see "You are a wizard, Harry!" # features/step_definitions/app_steps.rb:27

4 scenarios (4 passed)
12 steps (12 passed)
0m0.582s</code>

These tests are fast because we’re currently using Rack::Test, which talks directly to our Rack application in Ruby without needing to boot a server or go over the wire. That is specified by this in our features/support/env.rb:

Capybara.current_driver = :rack_test
Capybara.app = Sinatra::Application

So next we decide that we want to put the validation on the client side, rather than the server (not a good idea in general, but I needed an example everyone would be familiar with). We hollow out our application and move the validation into a script tag after the form:

post '/accounts/create' do
  'You are a wizard, Harry!'
end

<form method="post" action="/accounts/create">
  <label for="username">Username</label>
  <input type="text" id="username" name="username">
  
  <label for="email">Email</label>
  <input type="text" id="email" name="email">
  
  <div class="error"></div>
  <input type="submit" value="Sign up">
</form>

<script type="text/javascript">
  $('form').bind('submit', function() {
    if ($('#username').val() === 'Wizard') {
      $('.error').html('Your argument is invalid');
      return false;
    }
    else if ($('#username').val() !== 'Harry') {
      $('.error').html('Your name is invalid');
      return false;
    }
    else if (!/@/.test($('#email').val())) {
      $('.error').html('Your email is invalid');
      return false;
    }
  });
</script>

Now Rack::Test won’t run JavaScript, but not to worry – Capybara just lets us set Capybara.current_driver = :selenium and suddenly our tests are all executed in Firefox. But there’s one problem:

<code>$ cucumber features/
# ...
4 scenarios (4 passed)
12 steps (12 passed)
0m9.891s</code>

Our tests have gone from taking 0.5 seconds to nearly 10 seconds: that’s roughly 20 times slower. Multiplied over a whole application test suite, you’ll soon be wanting to throw all your tests away. We need to move more of this logic into unit tests if we want a sustainable testing strategy.

The first step is to get that JavaScript out of the view and into an external file that we can share between pages, and then just instantiate a copy of our new class where we need it.

function FormValidator(form) {
  var username = form.find('#username'),
      email    = form.find('#email'),
      error    = form.find('.error');
  
  form.bind('submit', function() {
    if (username.val() === 'Wizard') {
      error.html('Your argument is invalid');
      return false;
    }
    else if (username.val() !== 'Harry') {
      error.html('Your name is invalid');
      return false;
    }
    else if (!/@/.test(email.val())) {
      error.html('Your email is invalid');
      return false;
    }
  });
};

<form method="post" action="/accounts/create">
  <!-- ... -->
</form>

<script type="text/javascript">
  new FormValidator($('form'));
</script>

We can then test this class in isolation by creating a test page using the JS.Test framework (full spec page source on GitHub). This spec replicates what our Cucumber tests do, except that instead of loading the whole sign-up page every time, it just adds a form to the page, attaches the validator to it, then runs one of the validation examples.

FORM_HTML = '\
    <form method="post" action="/accounts/create">\
      <label for="username">Username</label>\
      <input type="text" id="username" name="username">\
      \
      <label for="email">Email</label>\
      <input type="text" id="email" name="email">\
      \
      <div class="error"></div>\
      <input type="submit" value="Sign up">\
    </form>'

JS.require('JS.Test', function() {
  
  JS.Test.describe("FormValidator", function() { with(this) {
    before(function() {
      $("#fixture").html(FORM_HTML)
      new FormValidator($("form"))
      
      this.submit = $("form input[type=submit]")
      this.error  = $("form .error")
    })
    
    describe("with an invalid name", function() { with(this) {
      before(function() { with(this) {
        $("#username").val("Hagrid")
        submit.click()
      }})
      
      it("displays an error message", function() { with(this) {
        assertEqual( "Your name is invalid", error.html() )
      }})
    }})
    
    // ...
  }})
  
  JS.Test.autorun()
})

We open our test page spec/browser.html up in a web browser and JS.Test confirms that all our JavaScript logic works.

This is a great place to stop for now: we’ve turned some full-stack tests that required booting our entire application into unit tests that we can run quickly. In the next article we’ll get into how we can refactor this further to decouple our business logic from the DOM, giving us tests we can run from the command line.