36 laptops = 1 musical instrument


Video by Scott Schiller

Not a lot of people know this, but the first thing I ever built with Faye was at the first Music Hack Day in London back in 2009. It was also the first time I’d demo’d at a hack day, and needless to say some combination of Faye’s immaturity, a hasty deployment and flaky wi-fi meant the demo failed. The demo was called Outcast, and it let you push what you were listening to in iTunes to other machines over HTTP. It was a bit like iTunes library sharing, but with a push instead of a pull model, and over the Internet rather than just the local network. You can still get the code, though I doubt it works.

Since then I’ve had an impressive track record of failed demos, so for the recent San Francisco MHD (thank you to my lovely employers at Songkick for sending me – we’re hiring, don’t you know) I thought I’d embrace failure and try to use as many risky technologies as possible. Against all the odds, the result is what you see in the video above: 36 laptops playing a buzzy ambient version of ‘Happy Birthday’ for MHD organiser Dave Haynes.

The cocktail of technologies included the venue’s wi-fi, Faye, the Mozilla audio API, and 36 audience members. (I wanted to throw Fargo in there but I think the performance overhead would have killed the sound generation completely.) I’m not open-sourcing the code for the time being, but there’s not an awful lot of it and I thought it’d be fun to explain what I learned over the weekend. I think mass remote control is an interesting application of WebSocket-based messaging and I’d like to see more experiments like this.

So, where to start? Like the hack itself this is going to be a bit of a brain-dump, but hopefully it more or less hangs together. The basic functionality of the app is that it’s a free-form synth: it’s not a sequencer, it doesn’t do any recording or looping, it just lets you make tones and play them. To drive the app we need a clock that runs all the tones we’ve created. It doesn’t need to be very fancy: since we’re not building a sequencer we don’t need explicit beats and bars. A simple timer loop that iterates over all the registered tones and tells them to emit a segment of audio will suffice.

Clock = {
  SAMPLE_RATE:    44100,
  TICK_INTERVAL:  10,
  
  // Storage for all the tones currently in use
  _tones: [],
  
  // Register tones with the application clock
  bind: function(tone) {
    this._tones.push(tone);
  },
  
  // Main timer loop, tells all the tones to emit a slice
  // of audio using the current time and timer interval
  start: function() {
    var self     = this,
        interval = this.TICK_INTERVAL,
        rate     = this.SAMPLE_RATE,
        time     = new Date().getTime();
    
    setInterval(function() {
      var tones = self._tones,
          i     = tones.length;
      
      time += interval;
      while (i--)
        tones[i].outputTimeSlice(time / 1000, interval / 1000, rate);
        
    }, interval);
  }
};

Clock.start();

The SAMPLE_RATE describes how we generate audio data. Running a synth basically means calculating a lot of data points from sound waves, and to do this we need to know the unit of time between data points that the audio output device expects. The Mozilla audio API uses the common sample rate of 44.1kHz, meaning we need to generate 44,100 data points per second for every sound wave.
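
To put a number on that, each tick of the clock above has to produce:

// Samples generated per tick, per channel, using the Clock settings above
var samplesPerTick = Clock.SAMPLE_RATE * (Clock.TICK_INTERVAL / 1000); // 441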

In the main timer loop we tell each tone in the system to emit some audio using outputTimeSlice(). This method will take the current time and an interval (both in seconds) and produce sound wave data over the given time slice.

The tone objects are responsible for modelling the tones we’re playing and generating their audio. A tone is described by a set of parameters that determine the sound’s properties – its frequency, amplitude and waveform (sine, sawtooth or square wave) – as well as modulations like amplitude and frequency modulation. To emit sound I used Ben Firshman’s DynamicAudio, which is a thin wrapper around the Mozilla audio API that provides a Flash-based shim in other browsers.

Tone = function(options) {
  this.generateWave(options);
  this._audio = new DynamicAudio({swf: '/path/to/dynamicaudio.swf'});
};

Tone.prototype.outputTimeSlice = function(time, interval, rate) {
  var samples = Math.ceil(interval * rate),
      dt      = 1 / rate,
      data    = [];
  
  for (var i = 0; i < samples; i++) {
    var value = this.valueAt(time + i * dt);
    data.push(value); // left channel
    data.push(value); // right channel
  }
  this._audio.write(data);
};

Here we see the outputTimeSlice() method. Using the SAMPLE_RATE, it figures out how many data points we need to generate in the given time interval, and then builds an array of values by stepping through the time interval and calculating the value of the sound wave at each step. At each time step we need to provide values for both the left and right channel, so in this case I just push each value twice to produce evenly balanced output. Once we’ve built the array, we call DynamicAudio#write() which takes all those numbers and turns them into sound for us.

The next step down the stack is the Tone#valueAt(time) method, which calculates the wave value for the tone at any point in time. A tone is formed by combining a set of waves: there’s the basic wave that produces the note you’re playing, and then there may be modulations of amplitude or frequency applied, which are themselves described by waves. We need a model for waves:

Wave = function(options) {
  options = options || {};
  this.frequency = options.frequency || 1;
  this.amplitude = options.amplitude || 0;
  this.waveform  = options.waveform  || 'sine';
};

A wave has three properties: its frequency in Hz, its amplitude between 0 and 1, and a waveform, which describes the shape of the wave: sine, square or sawtooth. Let’s make some functions to represent all the waveforms; each takes a value in the range 0 to 1 that represents how far through a cycle we are, and returns a value between -1 and 1.

Wave.sine = function(x) {
  return Math.sin(2*Math.PI * x);
};

Wave.square = function(x) {
  return x < 0.5 ? 1 : -1;
};

Wave.sawtooth = function(x) {
  return -1 + x*2;
};

With these defined we can add a valueAt(time) method to Wave so we can calculate how it changes as our clock ticks.

Wave.prototype.valueAt = function(time) {
  var form = Wave[this.waveform],     // a function
      x    = this.frequency * time,   // a number
      y    = x - Math.floor(x),       // a number in [0,1]
      A    = this.amplitude;
  
  return A * form(y);
};
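
A quick sanity check on how this behaves (the variable name lfo is just for illustration):

var lfo = new Wave({frequency: 2, amplitude: 0.3, waveform: 'sine'});

lfo.valueAt(0);     // 0
lfo.valueAt(0.125); // 0.3, a quarter of the way through a 2Hz cycle, where the sine peaks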

Now that we can model waves we can go back and fill in the generateWave() and valueAt() methods on Tone. We’ll construct a tone like this:

new Tone({
  note:       'C',
  octave:     3,
  amplitude:  0.8,
  waveform:   'square',
  am: {
    amplitude: 0.3,
    frequency: 2,
    waveform:  'sine'
  }
})

This data describes the basic note we want to play, its amplitude and waveform, and a wave to use as amplitude modulation (the am field). We turn this data into a couple of Waves, one to represent the base note and one to represent the AM wave:

Tone.prototype.generateWave = function(options) {
  this._note = new Wave({
    amplitude: options.amplitude,
    waveform:  options.waveform,
    frequency: Wave.noteFrequency(options.note, options.octave)
  });
  this._am = new Wave(options.am);
};

We convert the note and octave into a frequency using a little bit of maths: each semitone above middle C multiplies the frequency by 2^(1/12), and each octave away from the fourth doubles or halves it. The details aren’t that important; what matters is that we can manipulate sound in terms of musical notes rather than raw frequencies.

Wave.noteFrequency = function(note, octave) {
  if (octave === undefined) octave = 4;
  
  var semitones = Wave.NOTES[note],
      frequency = Wave.MIDDLE_C * Math.pow(2, semitones/12),
      shift     = octave - 4;
  
  return frequency * Math.pow(2, shift);
};

Wave.MIDDLE_C = 261.626; // Hz

Wave.NOTES = {
  'C' : 0,
  'C#': 1,  'Db': 1,
  'D' : 2,
  'D#': 3,  'Eb': 3,
  'E' : 4,
  'F' : 5,
  'F#': 6,  'Gb': 6,
  'G' : 7,
  'G#': 8,  'Ab': 8,
  'A' : 9,
  'A#': 10, 'Bb': 10,
  'B' : 11
};
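
As a check that the formula does the right thing, it puts concert A where you’d expect it:

Wave.noteFrequency('A', 4); // 261.626 * Math.pow(2, 9/12), roughly 440Hz
Wave.noteFrequency('C', 3); // an octave below middle C, roughly 130.8Hz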

The Tone#valueAt(time) method simply composes the note and the amplitude modulation to produce the final sound:

Tone.prototype.valueAt = function(time) {
  var base = this._note.valueAt(time),
      am   = this._am.valueAt(time);
  
  return base * (1 + am);
};
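
Putting the pieces together (this wiring isn’t part of the code shown above, it’s just a sketch): a tone gets created and registered with the clock, and the timer loop starts driving it.

// Sketch: create a tone and hand it to the clock's timer loop
var drone = new Tone({
  note:      'C',
  octave:    3,
  amplitude: 0.8,
  waveform:  'square',
  am: {amplitude: 0.3, frequency: 2, waveform: 'sine'}
});

Clock.bind(drone); // the clock will now call drone.outputTimeSlice() every tick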

So that’s all the sound generation taken care of; now we need a way to interact with the tones and change them. For example, you could implement a way to change the note of a tone:

Tone.prototype.setNote = function(note, octave) {
  this._note.frequency = Wave.noteFrequency(note, octave);
};
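
setNote() isn’t the only control you’d want; the other parameters can be exposed with the same kind of setter. These weren’t shown in the hack’s code, they’re just a sketch of the pattern:

// Sketch only: setters for other tone parameters, following the shape of setNote()
Tone.prototype.setAmplitude = function(amplitude) {
  this._note.amplitude = amplitude;
};

Tone.prototype.setWaveform = function(waveform) {
  this._note.waveform = waveform;
};

Tone.prototype.setAMFrequency = function(frequency) {
  this._am.frequency = frequency;
};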

That pattern extends to anything else you want to tweak: the tone’s amplitude or waveform, the frequency of the amplitude modulation, adding further modulators, and so on. After I’d implemented the methods I needed to manipulate the tones, I hacked together a GUI using jQuery UI:

Photo by Martyn Davies

To play to the audience I set up a bunch of initial tones that produced a buzzing, drone-like sound – these are the tones you see in the photo above. The UI had controls for changing each note’s waveform and amplitude, and for changing the amplitude and frequency of the AM and FM modulations. The final control, the one that let me play the tune, was a set of keybindings that mapped letters on the keyboard to musical notes. I mapped the bottom row of letters (Z, X, C, V, B, N, M) to the notes G, A, B, C, D, E and F. But this control has a twist: I want to change the note on everyone else’s laptop, so I need to send note-change instructions to them. To achieve this I gave each tone a name and stored them all in an object called TONES, then used Faye to send the tone name and the new note over the wire:

KEY_BINDINGS = {
  90: ['G', 3],
  88: ['A', 3],
  67: ['B', 3],
  86: ['C', 4],
  66: ['D', 4],
  78: ['E', 4],
  77: ['F', 4]
};

$(document).keydown(function(event) {
  var note = KEY_BINDINGS[event.keyCode];
  if (!note) return; // ignore keys that aren't mapped to a note
  
  faye.publish('/notes', {
    name:   'melody',
    note:   note[0],
    octave: note[1]
  });
});

faye.subscribe('/notes', function(message) {
  var tone   = TONES[message.name],
      note   = message.note,
      octave = message.octave;
  
  tone.setNote(note, octave);
});
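
Two things are assumed by that snippet and not shown above: a Faye client connected to a server, and the TONES registry itself. The endpoint URL and the choice of tones here are made up, but the setup would look something like this:

// Sketch only: the endpoint URL and the tone set are illustrative
var faye = new Faye.Client('http://example.com/faye');

TONES = {
  melody: new Tone({note: 'G', octave: 3, amplitude: 0.8, waveform: 'square',
                    am: {amplitude: 0.3, frequency: 2, waveform: 'sine'}}),
  drone:  new Tone({note: 'C', octave: 2, amplitude: 0.6, waveform: 'sawtooth'})
};

for (var name in TONES) Clock.bind(TONES[name]);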

And that pretty much covers it. If you want to dig into audio generation, I highly recommend the DynamicAudio library, as well as this LRUG talk by Chris Lowis on audio generation and analysis in JavaScript.