I’ve been taking a lot of measurements recently, trying to find ways to optimise Faye and JS.Class. One of the big things I’ve been looking at is how much various iteration constructs cost in JavaScript. Loops are what made initial versions of Sylvester slow, and there’s a good chance they’re holding back the performance of other projects of mine. If you’re thinking of moving to a different data structure to speed up access times, it can have unexpected consequences for iteration.
As a preliminary exercise, I took a look at these three common constructs for looping over an array. Note that my benchmarks include the cost of accessing each member of the collection, not just doing the iteration.
// Array: for loop
for (var i = 0, n = array.length; i < n; i++) { array[i] }
// Array: reverse while
var i = array.length;
while (i--) { array[i] }
// Array: forEach()
array.forEach(function(x) {});
The reverse while loop is not so common, but I use it a lot when I don’t care about the order since it’s slightly faster and more terse without losing clarity.
Some representative numbers from these tests are below. I used an array containing 100,000 integers and measured how long it took to iterate over the array 100 times. Times are given in milliseconds.
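Each measurement boils down to something like the following; this is a minimal sketch of the timing approach rather than the exact harness I used:

// Timing sketch: build a 100,000-element array, run the construct
// under test 100 times, and record the elapsed time in milliseconds
var array = [], i;
for (i = 0; i < 100000; i++) array.push(i);

var start = new Date().getTime();
for (var run = 0; run < 100; run++) {
  var j = array.length;
  while (j--) { array[j] }   // the construct under test
}
var elapsed = new Date().getTime() - start;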
V8 (Chrome, Node)
Array: for loop :: 64 +/- 1%
Array: reverse while :: 60 +/- 0%
Array: forEach() :: 235 +/- 0%
Firefox 3.6
Array: for loop :: 965 +/- 0%
Array: reverse while :: 964 +/- 1%
Array: forEach() :: 1465 +/- 1%
Opera 10
Array: for loop :: 35 +/- 20%
Array: reverse while :: 40 +/- 28%
Array: forEach() :: 371 +/- 4%
Rhino (Narwhal)
Array: for loop :: 644 +/- 4%
Array: reverse while :: 585 +/- 1%
Array: forEach() :: 569 +/- 3%
So we see a reverse while loop is ever-so-slightly faster than a for loop, and forEach() is usually much more expensive since it involves a function call on every iteration. (Internet Explorer 8 doesn’t have forEach(), but its while and for perform similarly to other platforms.)
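If you do want forEach() on such platforms, a minimal shim along these lines will do; this is a sketch of the standard ES5 behaviour rather than a complete polyfill:

// Minimal forEach() shim for environments that lack it, e.g. IE8
if (!Array.prototype.forEach) {
  Array.prototype.forEach = function(block, context) {
    for (var i = 0, n = this.length; i < n; i++) {
      // skip holes in sparse arrays, as the native method does
      if (i in this) block.call(context, this[i], i, this);
    }
  };
}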
Now, a common optimisation if you have a collection of items that needs to support fast removal (for example, unsubscribing a client from a channel in Faye) is to replace an array with a hashtable as the collection’s storage mechanism. Arrays have O(n) time complexity for finding and removing an item, whereas a table is O(1) modulo some edge cases in the implementation.
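As a sketch of the trade-off (client IDs invented for illustration): removing an item from an array means scanning for it first, whereas removing a key from an object is a single delete.

// Array storage: removal requires an O(n) scan
var clients = ['alice', 'bob', 'carol'];
var i = clients.length;
while (i--) {
  if (clients[i] === 'bob') { clients.splice(i, 1); break; }
}

// Object storage: removal is O(1)
var clientTable = {alice: true, bob: true, carol: true};
delete clientTable.bob;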
But there’s a problem: with this technique, iterating over the object in JavaScript is extremely slow:
// Object: for/in
for (var key in object) {
  if (!object.hasOwnProperty(key)) continue;
  // do something with object[key]
}
Don’t believe me? Here are the numbers for an object with 100,000 fields, compared to a 100,000-item array:
V8 (Chrome, Node)
Array: reverse while :: 60 +/- 0%
Object: for/in :: 2480 +/- 0%
Firefox 3.6
Array: reverse while :: 964 +/- 1%
Object: for/in :: 8846 +/- 4%
Internet Explorer 8
Array: reverse while :: 21 +/- 35%
Object: for/in :: 111 +/- 14%
Opera 10
Array: reverse while :: 40 +/- 28%
Object: for/in :: 1010 +/- 8%
Rhino (Narwhal)
Array: reverse while :: 585 +/- 1%
Object: for/in :: 3085 +/- 3%
This doesn’t look good: making unsubscription faster in Faye could end up making message distribution orders of magnitude slower! However, what if we combine an array and an object to get the best of both worlds? If we store the object’s keys in an array, we can iterate over the array instead of the object:
// Object: while
var i = array.length, key;
while (i--) {
  key = array[i];
  // do something with object[key]
}
We could even make a new class to encapsulate this and manage the array for us:
Table = function() {
  this._keys = [];
  this._data = {};
};

Table.prototype.put = function(key, value) {
  if (!this._data.hasOwnProperty(key)) this._keys.push(key);
  this._data[key] = value;
};

Table.prototype.forEach = function(block, context) {
  var keys = this._keys,
      data = this._data,
      i    = keys.length,
      key;

  while (i--) {
    key = keys[i];
    block.call(context, key, data[key]);
  }
};
Table#put() stores a key/value pair in the table, and Table#forEach() iterates over the stored keys for us, yielding each key and value to the block. Checking whether a key already exists is just a hasOwnProperty() call on the backing object, so adding pairs to the collection is O(1).
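For example, with hypothetical client IDs and channel names, it could be used like this:

// Example usage: storing one channel subscription per client
var subscriptions = new Table();
subscriptions.put('client-1', '/foo/bar');
subscriptions.put('client-2', '/foo/*');

subscriptions.forEach(function(clientId, channel) {
  // do something with each (clientId, channel) pair
});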
Let’s see how a while loop and Table#forEach() compare to a for/in loop:
V8 (Chrome, Node)
Object: for/in :: 2480 +/- 0%
Object: while :: 98 +/- 0%
Table: forEach() :: 191 +/- 1%
Firefox 3.6
Object: for/in :: 8846 +/- 4%
Object: while :: 2728 +/- 1%
Table: forEach() :: 2732 +/- 0%
Internet Explorer 8
Object: for/in :: 111 +/- 14%
Object: while :: 42 +/- 14%
Table: forEach() :: 63 +/- 18%
Opera 10
Object: for/in :: 1010 +/- 8%
Object: while :: 79 +/- 9%
Table: forEach() :: 874 +/- 6%
Rhino (Narwhal)
Object: for/in :: 3085 +/- 3%
Object: while :: 1729 +/- 0%
Table: forEach() :: 2626 +/- 1%
From the above numbers, we see we can get from 2x to 10x faster iteration depending on the platform. Some platforms even run Table#forEach() at a good enough pace to beat a for/in loop! Firefox 4.0 actually reported Table#forEach() faster than a while loop, which I have to say leaves me a little puzzled.
There are further optimisations we can make, though. Recall that my whole reason for wanting a new data structure is that I want fast removal (i.e. better than O(n)), but this Table class won’t support that: to remove a key I have to search the array for it, which is no improvement on what I was doing before for storing client-channel bindings.
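To make the problem concrete, a remove() for the unsorted Table would have to look something like this (a hypothetical method, not part of the class above):

// Hypothetical remove() for the unsorted Table: finding the key in
// _keys is a linear scan, so removal is O(n)
Table.prototype.remove = function(key) {
  if (!this._data.hasOwnProperty(key)) return;
  delete this._data[key];
  var keys = this._keys,
      i    = keys.length;
  while (i--) {
    if (keys[i] === key) {
      keys.splice(i, 1);
      break;
    }
  }
};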
In JavaScript, object keys are strings and so we can sort them. If we can keep the keys array sorted we’ll get O(log n) removal; we’ll also get O(log n) insertion rather than O(1), but for large n where every client must eventually be removed, 2 log(n) < n by quite some margin (for n = 100,000, that’s roughly 33 comparisons versus up to 100,000 for a linear scan). Let’s make a subclass and override the storage methods to keep the keys array in order:
SortedTable = function() {
  Table.call(this);
};
SortedTable.prototype = new Table();

SortedTable.prototype.put = function(key, value) {
  if (!this._data.hasOwnProperty(key)) {
    // find the insertion point that keeps _keys sorted
    var index = this._indexOf(key);
    this._keys.splice(index, 0, key);
  }
  this._data[key] = value;
};

SortedTable.prototype.remove = function(key) {
  if (!this._data.hasOwnProperty(key)) return;
  delete this._data[key];
  var index = this._indexOf(key);
  this._keys.splice(index, 1);
};

// Binary search: returns the position of key in _keys, or the index
// where it should be inserted to keep _keys sorted
SortedTable.prototype._indexOf = function(key) {
  var keys = this._keys,
      n    = keys.length,
      i    = 0,
      d    = n;

  if (n === 0) return 0;
  if (key < keys[0]) return 0;
  if (key > keys[n-1]) return n;

  while (key !== keys[i] && d > 0.5) {
    d = d / 2;
    i += (key > keys[i] ? 1 : -1) * Math.round(d);
    if (key > keys[i-1] && key < keys[i]) d = 0;
  }
  return i;
};
The _indexOf() method is a basic binary search that assumes values are comparable using JavaScript operators and that the values are unique. It’s a simplified version of the code from JS.SortedSet, which is orders of magnitude faster than JS.Set for insertion and removal on large collections. You could get fancy and use a tree structure, but the ones I’ve seen don’t perform much better than my sorted array approach.
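Putting it together, with client IDs and channels invented for illustration, SortedTable behaves like Table but also supports fast removal:

// Example usage: keys stay sorted, and remove() uses the binary search
var table = new SortedTable();
table.put('carol', '/baz');
table.put('alice', '/foo');
table.put('bob',   '/bar');    // _keys is now ['alice', 'bob', 'carol']

table.remove('bob');           // binary search finds the index, then splice

table.forEach(function(key, value) {
  // visits 'carol' then 'alice', since forEach() walks the keys backwards
});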
In a later article I’ll go through a practical application of this, showing how we could use it to speed up Faye’s client disconnection routines.