Faye 0.6: it’s all about clustering

After a longer-than-I-would-have-liked gestation period that began around six months ago, Faye 0.6 is finally out. The major feature in this release is the introduction of ‘engines’, a mechanism that gives us a great deal of freedom over how the back-end is implemented, and lets us do useful things like clustering the HTTP front-end.

In previous releases, the Bayeux protocol and the pub/sub logic backing it up were part of the same class, Faye.Server. All the application state – clients and their subscriptions – was stored in memory by this class. In the new release, the protocol layer and the core pub/sub logic have been separated – the protocol still lives in Faye.Server but the messaging logic has been factored out into several classes called ‘engines’. This means we can change how the message distribution layer is implemented without affecting the protocol.

The first big advantage of this is that we can write engines that store state in an external service, meaning we can distribute a single Faye service across many web servers if connection volume becomes a problem. The new Redis engine does just that:

var bayeux = new Faye.NodeAdapter({
  mount:   '/bayeux',
  timeout: 60,
  engine:  {
    type: 'redis',
    host: 'localhost',
    port: 6379
  }
});

You can start up a cluster of Faye web servers that all connect to the same Redis database, and clients can connect to any one of these web servers to interact with the service. The default engine is an in-memory one, so your existing code will continue to store state in memory. The in-memory engine has lower latency because it doesn’t perform external I/O, but obviously it cannot be clustered.

This engine system also opens up the possibility of turning Faye into a web front-end for whatever messaging technology you use internally. Because all the protocol validation is handled higher up the stack, the engines themselves are very small classes, as are the unit tests that run against both the in-memory and Redis implementations.

So you can see it’s not a lot of work to implement a new engine and register it. Once you’ve called

Faye.Engine.register('my-engine', MyEngine)

you can use it by setting engine.type when creating your NodeAdapter.
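To give a flavour of what an engine involves, here’s a minimal in-memory sketch. The method names and shape here are hypothetical – the real engine interface in Faye 0.6 has more responsibilities (client tracking, timeouts and so on) – but the core idea is just a map from channels to subscribers:

```javascript
// Hypothetical engine sketch -- the method names are illustrative,
// not the actual Faye 0.6 engine interface
var MyEngine = function() {
  this._subscriptions = {}; // channel name -> list of client IDs
};

// Record that a client wants to receive messages on a channel
MyEngine.prototype.subscribe = function(clientId, channel) {
  var subs = this._subscriptions[channel] = (this._subscriptions[channel] || []);
  subs.push(clientId);
};

// Return the IDs of the clients a message on this channel should reach
MyEngine.prototype.publish = function(channel) {
  return this._subscriptions[channel] || [];
};
```

Registering a class like this with Faye.Engine.register('my-engine', MyEngine) is what lets you select it later via engine.type.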

The only API change resulting from this is that it’s no longer possible to publish on wildcard channels like /foo/*. Other Bayeux servers prohibit this already but previous versions of Faye allowed it. In order to keep the engines simple and fast I’ve decided to remove this feature. Subscribing to wildcard channels is still supported.

The other major change in this release comes from Matthijs Langenberg, who’s added a CORS transport to the client. This will now be used instead of the JSON-P transport for cross-domain communication in browsers that do not support WebSockets. It will particularly help in Firefox, which shipped without WebSocket in version 4.0 and guarantees execution order of injected scripts. This meant the long-polling JSON-P request from Faye would block calls made to other APIs like Google Maps, making UIs unresponsive. This should now be a thing of the past.

Finally I just want to thank Martyn Loughran, whose em-hiredis powers the Ruby Redis engine in Faye. Martyn works on Pusher, a really polished hosted push service that uses Redis internally, and it’s really nice of them to open-source their client for others to use. I tried three other Redis clients in Ruby and none of them do pub/sub as nicely as em-hiredis.

As usual, install through gem or npm and hop on the mailing list if you have any problems. I’m on holiday at the moment but I’ll get onto your feedback when I’m home next week.

36 laptops = 1 musical instrument

Video by Scott Schiller

Not a lot of people know this, but the first thing I ever built with Faye was at the first Music Hack Day in London back in 2009. It was also the first time I’d demo’d at a hack day, and needless to say, some combination of Faye’s immaturity, a hasty deployment and flaky wi-fi meant the demo failed. The demo was called Outcast, and it let you push what you were listening to in iTunes to other machines over HTTP. It was a bit like iTunes library sharing, but with a push instead of a pull model, and over the Internet rather than just the LAN. You can still get the code, though I doubt it works.

Since then I’ve had an impressive track record of failed demos, so for the recent San Francisco MHD (thank you to my lovely employers at Songkick for sending me – we’re hiring, don’t you know) I thought I’d embrace failure and try to use as many risky technologies as possible. Against all the odds, the result is what you see in the video above: 36 laptops playing a buzzy ambient version of ‘Happy Birthday’ for MHD organiser Dave Haynes.

The cocktail of technologies included the venue’s wi-fi, Faye, the Mozilla audio API, and 36 audience members. (I wanted to throw Fargo in there but I think the performance overhead would have killed the sound generation completely.) I’m not open-sourcing the code for the time being, but there’s not an awful lot of it and I thought it’d be fun to explain what I learned over the weekend. I think mass remote control is an interesting application of WebSocket-based messaging and I’d like to see more experiments like this.

So, where to start? Like the hack itself this is going to be a bit of a brain-dump, but hopefully it more-or-less hangs together. The basic functionality of the app is that it’s a free-form synth. It’s not a sequencer and doesn’t do any recording or looping; it just lets you make tones and play them. To drive the app we need a clock that runs all the tones we’ve created. It doesn’t need to be very fancy: since we’re not building a sequencer we don’t need explicit beats and bars. A simple timer loop that iterates over all the registered tones and tells them to emit a segment of audio will suffice.

Clock = {
  SAMPLE_RATE:   44100,
  TICK_INTERVAL: 40, // ms between ticks (illustrative value, not in the original)

  // Storage for all the tones currently in use
  _tones: [],

  // Register tones with the application clock
  bind: function(tone) {
    this._tones.push(tone);
  },

  // Main timer loop, tells all the tones to emit a slice
  // of audio using the current time and timer interval
  start: function() {
    var self     = this,
        interval = this.TICK_INTERVAL,
        rate     = this.SAMPLE_RATE,
        time     = new Date().getTime();

    setInterval(function() {
      var tones = self._tones,
          i     = tones.length;
      time += interval;
      while (i--)
        tones[i].outputTimeSlice(time / 1000, interval / 1000, rate);
    }, interval);
  }
};

The SAMPLE_RATE describes how we generate audio data. Running a synth basically means calculating a lot of data points from sound waves, and to do this we need to know the unit of time between the data points that the audio output device expects. The Mozilla audio API uses the common sample rate of 44.1kHz, meaning we need to generate 44,100 data points per second for every sound wave.
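To put a number on that: each timer tick has to generate interval × rate samples. Assuming a 40ms tick interval (an illustrative value), that works out as:

```javascript
// Samples a single timer tick must produce, assuming a 40 ms tick
// interval (illustrative -- any small interval works the same way)
var SAMPLE_RATE = 44100;      // data points per second
var interval    = 40 / 1000;  // tick length in seconds

var samplesPerTick = Math.ceil(interval * SAMPLE_RATE);
// 0.04 * 44100 = 1764 samples per tick -- and we end up emitting
// each value twice, once for the left and once for the right channel
```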

In the main timer loop we tell each tone in the system to emit some audio using outputTimeSlice(). This method will take the current time and an interval (both in seconds) and produce sound wave data over the given time slice.

The tone objects are responsible for modelling the tones we’re playing and generating their audio. A tone is produced by a set of parameters that describe the sound’s properties – its frequency, amplitude, waveform (sine, sawtooth, or square wave) as well as modulations like amplitude and frequency modulation. To emit sound I used Ben Firshman’s DynamicAudio, which is a thin wrapper around the Mozilla audio API that provides a Flash-based shim in other browsers.

Tone = function(options) {
  this._audio = new DynamicAudio({swf: '/path/to/dynamicaudio.swf'});
  this.generateWave(options);
};

Tone.prototype.outputTimeSlice = function(time, interval, rate) {
  var samples = Math.ceil(interval * rate),
      dt      = 1 / rate,
      data    = [];

  for (var i = 0; i < samples; i++) {
    var value = this.valueAt(time + i * dt);
    data.push(value); // left channel
    data.push(value); // right channel
  }
  this._audio.write(data);
};
Here we see the outputTimeSlice() method. Using the SAMPLE_RATE, it figures out how many data points we need to generate in the given time interval, and then builds an array of values by stepping through the time interval and calculating the value of the sound wave at each step. At each time step we need to provide values for both the left and right channel, so in this case I just push each value twice to produce evenly balanced output. Once we’ve built the array, we call DynamicAudio#write() which takes all those numbers and turns them into sound for us.

The next step down the stack is the Tone#valueAt(time) method, which calculates the wave value for the tone at any point in time. A tone is formed by combining a set of waves: there’s the basic wave that produces the note you’re playing, and then there may be modulations of amplitude or frequency applied, which are themselves described by waves. We need a model for waves:

Wave = function(options) {
  options = options || {};
  this.frequency = options.frequency || 1;
  this.amplitude = options.amplitude || 0;
  this.waveform  = options.waveform  || 'sine';
};

A wave has three properties: its frequency in Hz, its amplitude between 0 and 1, and a waveform, which describes the shape of the wave: sine, square or sawtooth. Let’s make some functions to represent all the waveforms; each takes a value in the range 0 to 1 that represents how far through a cycle we are, and returns a value between -1 and 1.

Wave.sine = function(x) {
  return Math.sin(2 * Math.PI * x);
};

Wave.square = function(x) {
  return x < 0.5 ? 1 : -1;
};

Wave.sawtooth = function(x) {
  return -1 + x * 2;
};
With these defined we can add a valueAt(time) method to Wave so we can calculate how it changes as our clock ticks.

Wave.prototype.valueAt = function(time) {
  var form = Wave[this.waveform],     // a function
      x    = this.frequency * time,   // a number
      y    = x - Math.floor(x),       // a number in [0,1)
      A    = this.amplitude;
  return A * form(y);
};
Now that we can model waves we can go back and fill in the generateWave() and valueAt() methods on Tone. We’ll construct a tone like this:

new Tone({
  note:       'C',
  octave:     3,
  amplitude:  0.8,
  waveform:   'square',
  am: {
    amplitude: 0.3,
    frequency: 2,
    waveform:  'sine'
  }
});

This data describes the basic note we want to play, its amplitude and waveform, and a wave to use as amplitude modulation (the am field). We turn this data into a couple of Waves, one to represent the base note and one to represent the AM wave:

Tone.prototype.generateWave = function(options) {
  this._note = new Wave({
    amplitude: options.amplitude,
    waveform:  options.waveform,
    frequency: Wave.noteFrequency(options.note, options.octave)
  });
  this._am = new Wave(options.am);
};

We convert the note and octave into a frequency using a little bit of maths. The details aren’t that important; what matters is that we can manipulate sound in terms of musical notes rather than raw frequencies.

Wave.noteFrequency = function(note, octave) {
  if (octave === undefined) octave = 4;
  var semitones = Wave.NOTES[note],
      frequency = Wave.MIDDLE_C * Math.pow(2, semitones/12),
      shift     = octave - 4;
  return frequency * Math.pow(2, shift);
};

Wave.MIDDLE_C = 261.626; // Hz

Wave.NOTES = {
  'C' : 0,
  'C#': 1,  'Db': 1,
  'D' : 2,
  'D#': 3,  'Eb': 3,
  'E' : 4,
  'F' : 5,
  'F#': 6,  'Gb': 6,
  'G' : 7,
  'G#': 8,  'Ab': 8,
  'A' : 9,
  'A#': 10, 'Bb': 10,
  'B' : 11
};

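The formula is easy to check against known concert pitches: A4 sits nine semitones above middle C and should come out at 440Hz, and G3 (seven semitones up, one octave down) at 196Hz. Restated as a standalone calculation:

```javascript
// Standalone check of the note-to-frequency maths against known pitches
var MIDDLE_C = 261.626; // Hz

function frequencyFor(semitones, octave) {
  var frequency = MIDDLE_C * Math.pow(2, semitones / 12),
      shift     = octave - 4;
  return frequency * Math.pow(2, shift);
}

var a4 = frequencyFor(9, 4); // 'A' is 9 semitones above C
var g3 = frequencyFor(7, 3); // 'G' is 7 semitones above C, octave 3
// a4 is approximately 440 Hz and g3 approximately 196 Hz
```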
The Tone#valueAt(time) method simply composes the note and the amplitude modulation to produce the final sound:

Tone.prototype.valueAt = function(time) {
  var base = this._note.valueAt(time),
      am   = this._am.valueAt(time);
  return base * (1 + am);
};

So that’s all the sound generation taken care of, and we now need a way to interact with the tones and change them. For example you could implement a way to change the note of a tone:

Tone.prototype.setNote = function(note, octave) {
  this._note.frequency = Wave.noteFrequency(note, octave);
};

Similarly you can add interfaces for changing any of the parameters of the tone: changing its amplitude or waveform, adjusting the frequency of the amplitude modulation, adding further modulators, etc. After I’d implemented the methods I needed to manipulate the tones, I hacked together a GUI using jQuery UI:

Photo by Martyn Davies

To play to the audience I set up a bunch of initial tones that produced a buzzing drone-like sound – these are the tones you see in the photo above. The UI had controls for changing each note’s waveform and amplitude, and for changing the amplitude and frequency of the AM and FM modulations. The final control, the one that let me play the tune, was a set of keybindings that mapped letters on the keyboard to musical notes. I mapped the bottom letter row (Z, X, C, V, B, N, M) to the notes G, A, B, C, D, E and F. But this control has a twist: I want to change the note on everyone else’s laptop, so I need to send note-change instructions to them. To achieve this I gave each tone a name, stored the tones in an object called TONES, and used Faye to send the tone name and the note to switch to over the wire:

var KEY_BINDINGS = {
  90: ['G', 3],  // Z
  88: ['A', 3],  // X
  67: ['B', 3],  // C
  86: ['C', 4],  // V
  66: ['D', 4],  // B
  78: ['E', 4],  // N
  77: ['F', 4]   // M
};

$(document).keydown(function(event) {
  var note = KEY_BINDINGS[event.keyCode];
  if (!note) return; // ignore keys with no binding
  faye.publish('/notes', {
    name:   'melody',
    note:   note[0],
    octave: note[1]
  });
});

faye.subscribe('/notes', function(message) {
  var tone   = TONES[message.name],
      note   = message.note,
      octave = message.octave;
  tone.setNote(note, octave);
});

And that pretty much covers it. If you want to dig into audio generation, I highly recommend the DynamicAudio library, as well as this LRUG talk by Chris Lowis on audio generation and analysis in JavaScript.

Fargo: a Lisp for asynchronous programming

A couple weeks ago, I tooted:

CoffeeScript ought to take this opportunity to add new features that help with async e.g. tail calls, continuations.

Now there’s a problem with this: it’s a core part of CoffeeScript’s usability and resultant popularity that it compiles to readable JavaScript. If you want to introduce new execution semantics, a straightforward one-to-one syntax transformation is not such a viable option.

But I maintain that, given we’re reinventing the language we work with on the web, now’s as good a time as any to reexamine the tools we use to write programs. We’re all annoyed by working with callbacks, and with good reason. Programming with explicit callbacks introduces race conditions and uncertainty; it’s not the syntactic burden of typing function that causes the real pain, it’s the creation of regions of your program whose order of execution is unpredictable.

In the Ruby world, we’ve been using fibers to great effect to hide the asynchronous nature of code. A fiber is a special kind of single-use function whose execution can be suspended and resumed by the user. Fibers are perfect for pausing a block of code while some I/O is done, without blocking the interpreter: when you pause a fiber, the interpreter can carry on doing other work until the fiber is resumed. Ilya Grigorik’s first example is perfect:

def http_get(url)
  f = Fiber.current
  http = EventMachine::HttpRequest.new(url).get
  # resume fiber once http call is done
  http.callback { f.resume(http) }
  http.errback  { f.resume(http) }
  return Fiber.yield
end

Calling http_get() initiates an async HTTP request, and just before returning it ‘yields’, i.e. it pauses the current fiber. When the request completes, we resume the fiber with the response. A call to http_get() looks like synchronous code, but behind the scenes the I/O is being done in a way that doesn’t block the interpreter.
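For comparison, the same trick can be sketched in JavaScript using generator functions as a stand-in for fibers. This is purely illustrative – it’s not how Fargo works internally – and for the sake of a runnable example the callback fires synchronously where real code would use setTimeout or I/O:

```javascript
// A fiber-like pattern built on generators: run() drives the generator,
// each yield suspends the "fiber", and the async callback resumes it
function run(genFn) {
  var gen = genFn();
  function resume(value) {
    var step = gen.next(value);
    // the generator yields a "request" function; call it with
    // resume as the callback so the result re-enters the generator
    if (!step.done) step.value(resume);
  }
  resume();
}

// an async-style "API" that delivers its result via callback
// (synchronous here for demonstration; imagine setTimeout or I/O)
function getMessage(callback) {
  callback('Hello, world!');
}

var received = null;
run(function*() {
  // looks synchronous, but suspends at the yield until the callback fires
  var msg = yield function(cb) { getMessage(cb); };
  received = msg;
});
```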

I thought it would be fun to try this out in JavaScript, so I hacked up a very basic Scheme-like interpreter to support fibers, and called it Fargo. Scheme usually includes a function called call-with-current-continuation, or call/cc, of which fibers are a simplified version. Call/cc allows the current state of a computation (the “continuation”) to be saved and resumed any number of times. This means, at least in a naive implementation, that the saved state of the stack must be copied every time you invoke a continuation; otherwise state would leak between uses. Fibers can only be resumed once from their last yield-point, making running and resuming them cheaper.

Fibers as implemented in Ruby also require the user to explicitly initiate them. Supporting call/cc means constantly keeping track of the current continuation, since it may be captured at any time. Because fibers must be explicitly initiated, when we’re not inside a fiber we can use a faster stackless execution engine that doesn’t need to track as much state.

Here’s an example in Fargo of getting a value that’s produced by an async API (set-timeout) without using callbacks in the calling code:

; get-message function: captures the current fiber, yields,
; then resumes after a timeout when we have a message to return
(define (get-message)
  (define f (current-fiber))
  (set-timeout (lambda ()
    (f "Hello, world!")
  ) 5000)
  (yield))

; main program fiber
(define main (fiber ()
  ; get the message using a sync-looking API
  (define msg (get-message))
  ; alert the message
  (alert msg)))

; run the main fiber
(main)

Because Fargo is properly tail-recursive, you can use recursion to implement infinite streams:

> (define stream (fiber ()
    (let loop ((i 0))
      (yield i)
      (loop (+ i 1)))))

> (stream)
; => 0
> (stream)
; => 1
> (stream)
; => 2

So where is Fargo going? Right now, with my current ban on committing myself to any new large side-projects, it remains a quick (12 days old) experiment. Maybe someone will like the idea enough to turn it into a production language, but for now I’m happy to let people try it out and let me know what they think. I’ve been making small improvements to it over the last few days to get it to run as much of Heist’s core library as possible, and it now runs all the macros and the list and numeric libraries.

I’m going to keep thinking about how we can make async programming easier and I’d love your ideas – the nice thing about working on a Lisp is that changing the interface it provides is really simple.