Callbacks are imperative, promises are functional: Node’s biggest missed opportunity

The nature of promises is that they remain immune to changing circumstances.

Frank Underwood, ‘House of Cards’

You will often hear it said that JavaScript is a ‘functional’ programming language. It is described as such simply because functions are first-class values: many other features that define functional programming – immutable data, preference for recursion over looping, algebraic type systems, avoidance of side effects – are entirely absent. And while first-class functions are certainly useful, and enable users to program in functional style should they decide to, the notion that JS is functional often overlooks a core aspect of functional programming: programming with values.

‘Functional programming’ is something of a misnomer, in that it leads a lot of people to think of it as meaning ‘programming with functions’, as opposed to programming with objects. But if object-oriented programming treats everything as an object, functional programming treats everything as a value – not just functions, but everything. This of course includes obvious things like numbers, strings, lists, and other data, but also other things we OOP fans don’t typically think of as values: IO operations and other side effects, GUI event streams, null checks, even the notion of sequencing function calls. If you’ve ever heard the phrase ‘programmable semicolons’ you’ll know what I’m getting at.

At its best, functional programming is declarative. In imperative programming, we write sequences of instructions that tell the machine how to do what we want. In functional programming, we describe relationships between values that tell the machine what we want to compute, and the machine figures out the instruction sequences to make it happen.

If you’ve ever used Excel, you have done functional programming: you model a problem by describing how the values in a graph are derived from one another. When new data is inserted, Excel figures out what effect it has on the graph and updates everything for you, without you having to write a sequence of instructions for doing so.

With this definition in place, I want to address what I consider to be the biggest design mistake committed by Node.js: the decision, made quite early in its life, to prefer callback-based APIs to promise-based ones.

Everybody uses [callbacks]. If you publish a module that returns promises, nobody’s going to care. Nobody’s ever going to use that module.

If I write my little library, and it goes and talks to Redis, and that’s the last thing it ever does, I can just pass the callback that was handed to me off to Redis. And when we do hit these problems like callback hell, I’ll tell you a secret: there’s also a coroutine hell and a monad hell and a hell for any abstraction you create if you use it enough.

For the 90% case we have this super super simple interface, so when you need to do one thing, you just get one little indent and then you’re done. And when you have a complicated use case you go and install async like the other 827 modules that depend on it in npm.

Mikeal Rogers, LXJS 2012

This quotation is from a recent(ish) talk by Mikeal Rogers that covers various facets of Node’s design philosophy.

In light of Node’s stated design goal of making it easy for non-expert programmers to build fast concurrent network programs, I believe this attitude to be counterproductive. Promises make it easier to construct correct, maximally concurrent programs by making control-flow something for the runtime to figure out, rather than something the user has to explicitly implement.

Writing correct concurrent programs basically comes down to achieving as much concurrent work as you can while making sure operations still happen in the correct order. Although JavaScript is single-threaded, we still get race conditions due to asynchrony: any action that involves I/O can yield CPU time to other actions while it waits for callbacks. Multiple concurrent actions can access the same in-memory data, or carry out overlapping sequences of commands against a database or the DOM. As I hope to show in this article, promises provide a way to describe problems using interdependencies between values, like in Excel, so that your tools can correctly optimize the solution for you, instead of you having to figure out control flow for yourself.
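To make the race-condition point concrete, here’s a minimal sketch (readBalance() and writeBalance() are hypothetical async database operations, not real APIs):

// Suppose the stored balance starts at 100
function withdraw(amount) {
  readBalance(function(error, balance) {   // I/O yields CPU time here
    if (balance >= amount) {
      writeBalance(balance - amount);      // write based on a stale read
    }
  });
}

withdraw(80);
withdraw(80); // both reads can see 100, so both withdrawals succeed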

I hope to dispel the misunderstanding that promises are about having cleaner syntax for callback-based async work. They are about modeling your problem in a fundamentally different way; they go deeper than syntax and actually change the way you solve problems at a semantic level.

To begin with, I’d like to revisit an article I wrote a couple of years ago, on how promises are the monad of asynchronous programming. The core lesson there was that monads are a tool for helping you compose functions, i.e. building pipelines where the output of one function becomes the input to the next. This is achieved using structural relationships between values, and it’s values and their relationships that will again play an important role here.

I’m going to make use of Haskell type notation again to help illustrate things. In Haskell, the notation foo :: bar means “the value foo is of type bar”. The notation foo :: Bar -> Qux means “foo is a function that takes a value of type Bar and returns a value of type Qux”. If the exact types of the input/output are not important, we use single lowercase letters, foo :: a -> b. If foo takes many arguments we add more arrows, i.e. foo :: a -> b -> c means that foo takes two arguments of types a and b and returns something of type c.
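For example, here are a couple of familiar JavaScript functions in this notation (glossing over optional arguments):

parseInt :: String -> Number
JSON.parse :: String -> a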

Let’s look at a Node function, say, fs.readFile(). This takes a pathname as a String, and a callback, and does not return anything. The callback takes an Error (which might be null) and a Buffer containing the file contents, and also returns nothing. We can say the type of readFile is:

readFile :: String -> Callback -> ()

() is Haskell notation for the unit type – the return type of functions that don’t return anything useful, much like null or undefined in JavaScript. The callback is itself another function, whose type signature is:

Callback :: Error -> Buffer -> ()

Putting the whole thing together, we can say that readFile takes a String and a function which is called with a Buffer:

readFile :: String -> (Error -> Buffer -> ()) -> ()

Now, let’s imagine Node used promises instead. In this situation, readFile would simply take a String and return a promise of a Buffer:

readFile :: String -> Promise Buffer

More generally, we can say that callback-based functions take some input and a callback that’s invoked with some output, and promise-based functions take some input and return a promise of some output:

callback :: a -> (Error -> b -> ()) -> ()
promise :: a -> Promise b

Those null values returned by callback-based functions are the root of why programming with callbacks is hard: callback-based functions do not return anything, and so are hard to compose. A function with no return value is executed only for its side effects – a function with no return value or side effects is simply a black hole. So programming with callbacks is inherently imperative: it is about sequencing the execution of side-effect-heavy procedures rather than mapping input to output by function application. It is about manual orchestration of control flow rather than solving problems through value relationships. It is this that makes writing correct concurrent programs difficult.
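You can see this by trying to assign the result of a callback-based call:

var result = fs.stat('file1.txt', function(error, stat) {
  // stat exists only in here
});

// result is undefined: there is no value to pass to another function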

By contrast, promise-based functions always let you treat the result of the function as a value in a time-independent way. When you invoke a callback-based function, there is some time between you invoking the function and its callback being invoked during which there is no representation of the result anywhere in the program.

fs.readFile('file1.txt',
  // some time passes...
  function(error, buffer) {
    // the result now pops into existence
  }
);

Getting the result out of a callback- or event-based function basically means “being in the right place at the right time”. If you bind your event listener after the result event has been fired, or you don’t have code in the right place in a callback, then tough luck, you missed the result. This sort of thing plagues people writing HTTP servers in Node. If you don’t get your control flow right, your program breaks.
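Node’s own EventEmitter demonstrates this in a few lines: if the event fires before you subscribe, you never hear about it.

var EventEmitter = require('events').EventEmitter,
    emitter      = new EventEmitter();

emitter.emit('result', 42);        // fires while nobody is listening
emitter.on('result', console.log); // attached too late: never called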

Promises, on the other hand, don’t care about time or ordering. You can attach listeners to a promise before or after it is resolved, and you will get the value out of it. Therefore, functions that return promises immediately give you a value to represent the result that you can use as first-class data, and pass to other functions. There is no waiting around for a callback or any possibility of missing an event. As long as you hold a reference to a promise, you can get its value out.

var p1 = new Promise();
p1.then(console.log);
p1.resolve(42);

var p2 = new Promise();
p2.resolve(2013);
p2.then(console.log);

// prints:
// 42
// 2013
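(These examples use the deferred-style API of early promise libraries such as rsvp.js, where the promise object itself carries resolve() and reject(). With standard ES6 promises, the second example would read like this, with the same output:)

var p2 = new Promise(function(resolve) { resolve(2013) });
p2.then(console.log); // listeners attached after resolution still fire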

So while the method name then() implies something about sequencing operations – and indeed that is a side-effect of its job – you can really think of it as being called unwrap. A promise is a container for an as-yet-unknown value, and then’s job is to extract the value out of the promise and give it to another function: it is the bind function from monads. The above code doesn’t say anything about when the value is available, or what order things happen in, it simply expresses some dependencies: in order to log a value, you must first know what it is. The ordering of the program emerges from this dependency information. This is a rather subtle distinction but we’ll see it more clearly when we discuss lazy promises toward the end of this article.
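And because then() wraps whatever its callback returns in a new promise, these dependencies compose into pipelines. Here’s a quick sketch, where readFile() and parseJson() are hypothetical promise-returning helpers:

readFile('config.json')
    .then(parseJson)
    .then(function(config) { console.log(config.port) });

// to log the port you need the config; to get the config you must
// parse the JSON; to parse the JSON you must read the file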

Thus far, this has all been rather trivial; little functions that barely interact with one another. To see why promises are more powerful, let’s tackle something a bit trickier. Say we have some code that gets the mtimes of a bunch of files using fs.stat(). If this were synchronous, we’d just call paths.map(fs.stat), but since mapping with an async function is hard, we dig out the async module.

var async = require('async'),
    fs    = require('fs');

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];

async.map(paths, fs.stat, function(error, results) {
  // use the results
});

(Yes, I know there are sync versions of the fs functions, but most types of I/O don’t have this option. Play along with me, here.)

That’s all well and good, until we decide we also want the size of file1 for an unrelated task. We could just stat it again:

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];

async.map(paths, fs.stat, function(error, results) {
  // use the results
});

fs.stat(paths[0], function(error, stat) {
  // use stat.size
});

That works, but now we’re statting the file twice. That might be fine for local file operations, but if we were fetching some large files over HTTPS it would be more of a problem. We decide we need to hit the file only once, so we revert to the previous version but handle the first file specially:

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];

async.map(paths, fs.stat, function(error, results) {
  var size = results[0].size;
  // use size
  // use the results
});

This works, but now our size-related task is blocked waiting for the whole list to complete. And if there’s an error with any item in the list, we won’t get a result for the first file at all. That’s no good, so we try another approach: we split the first file off from the rest of the list and handle it on its own.

var paths = ['file1.txt', 'file2.txt', 'file3.txt'],
    file1 = paths.shift();

fs.stat(file1, function(error, stat) {
  // use stat.size
  async.map(paths, fs.stat, function(error, results) {
    results.unshift(stat);
    // use the results
  });
});

This also works, but now we’ve un-parallelized the program: it will take longer to run because we don’t start on the list of requests until the first one is complete. Previously, they all ran concurrently. We’ve also had to do some array manipulation to account for the fact we’re treating one file differently from the others.

Okay, one last stab at success. We know we want to get the stats for all the files, hitting each file only once, do some work on the first result if it succeeds, and if the whole list succeeds we want to do some work on that list. We take this knowledge of the dependencies in the problem and express it using async.

var paths = ['file1.txt', 'file2.txt', 'file3.txt'],
    file1 = paths.shift();

async.parallel([
  function(callback) {
    fs.stat(file1, function(error, stat) {
      // use stat.size
      callback(error, stat);
    });
  },
  function(callback) {
    async.map(paths, fs.stat, callback);
  }
], function(error, results) {
  var stats = [results[0]].concat(results[1]);
  // use the stats
});

This is now correct: each file is hit once, the work is all done in parallel, we can access the result for the first file independently of the others, and the dependent tasks execute as early as possible. Mission accomplished!

Well, not really. I think this is pretty ugly, and it certainly doesn’t scale nicely as the problem becomes more complicated. It took a lot of thought to make it correct, the design intention is not apparent so later maintenance is likely to break it, the follow-up tasks are mixed in with the strategy of how to do the required work, and we had to do some crufty array-munging to paper over the special case we introduced. Yuck!

All these problems stem from the fact that we’re using control flow as our primary means of solving the problem, instead of data dependencies. Instead of saying “in order for this task to run, I need this data”, and letting the runtime figure out how to optimize things, we’re explicitly telling the runtime what should be parallelized and what should be sequential, and this leads to very brittle solutions.

So how would promises improve things? Well, first off we need some filesystem functions that return promises instead of taking callbacks. But rather than write those by hand, let’s meta-program something that can convert any function for us. For example, it should take a function of type

String -> (Error -> Stat -> ()) -> ()

and return one of type

String -> Promise Stat

Here’s one such function:

// promisify :: (a -> (Error -> b -> ()) -> ()) -> (a -> Promise b)
var promisify = function(fn, receiver) {
  return function() {
    var slice   = Array.prototype.slice,
        // take the declared arguments, minus the trailing callback slot
        args    = slice.call(arguments, 0, fn.length - 1),
        promise = new Promise();
    
    // append our own callback, which settles the promise
    args.push(function() {
      var results = slice.call(arguments),
          error   = results.shift();
      
      if (error) promise.reject(error);
      else promise.resolve.apply(promise, results);
    });
    
    fn.apply(receiver, args);
    return promise;
  };
};

(This is not completely general – it trusts fn.length to count the declared arguments, so it will misbehave on functions with optional parameters – but it will work for our purposes.)

We can now remodel our problem. All we’re basically doing is mapping a list of paths to a list of promises for stats:

var fs_stat = promisify(fs.stat);

var paths = ['file1.txt', 'file2.txt', 'file3.txt'];

// [String] -> [Promise Stat]
var statsPromises = paths.map(fs_stat);

This is already paying dividends: whereas with async.map() you have no data to work with until the whole list is done, with this list of promises you can just pick out the first one and do stuff with it:

statsPromises[0].then(function(stat) { /* use stat.size */ });

So by using promise values we’ve already solved most of the problem: we stat all the files concurrently and get independent access to not just the first file, but any file we choose, simply by picking bits out of the array. With our previous approaches we had to explicitly code for handling the first file in ways that don’t map trivially to changing your mind about which file you need, but with lists of promises it’s easy.

The missing piece is how to react when all the stat results are known. In our previous efforts we ended up with a list of Stat objects, but here we have a list of Promise Stat objects. We want to wait for all the promises to resolve, and then yield a list of all the stats. In other words, we want to turn a list of promises into a promise of a list.

Let’s do this by simply augmenting the list with promise methods, so that a list containing promises is itself a promise that resolves when all its elements are resolved.

// list :: [Promise a] -> Promise [a]
var list = function(promises) {
  var listPromise = new Promise();
  // graft the promise methods onto the array, so the array is
  // usable both as a list and as a promise
  for (var k in listPromise) promises[k] = listPromise[k];
  
  var results = [], done = 0;
  
  promises.forEach(function(promise, i) {
    promise.then(function(result) {
      results[i] = result;  // keep results in input order
      done += 1;
      if (done === promises.length) promises.resolve(results);
    }, function(error) {
      promises.reject(error);  // any failure rejects the whole list
    });
  });
  
  if (promises.length === 0) promises.resolve(results);
  return promises;
};

(This function is similar to the jQuery.when() function, which takes a list of promises and returns a new promise that resolves when all the inputs resolve.)
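(If you’re using standard ES6 promises, this helper is also essentially Promise.all(), minus the trick of grafting the promise methods onto the array itself:)

Promise.all(statsPromises).then(function(stats) { /* use the stats */ });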

We can now wait for all the results to come in just by wrapping our array in a promise:

list(statsPromises).then(function(stats) { /* use the stats */ });

So now our whole solution has been reduced to this:

var fs_stat = promisify(fs.stat);

var paths = ['file1.txt', 'file2.txt', 'file3.txt'],
    statsPromises = list(paths.map(fs_stat));

statsPromises[0].then(function(stat) {
  // use stat.size
});

statsPromises.then(function(stats) {
  // use the stats
});

This expression of the solution is considerably cleaner. By using some generic bits of glue (our promise helper functions) and pre-existing array methods, we’ve solved the problem in a way that’s correct, efficient, and very easy to change. We don’t need the async module’s specialized collection methods for this; we just take the orthogonal ideas of promises and arrays and combine them in a very powerful way.

Note in particular how this program does not say anything about things being parallel or sequential. It just says what we want to do, and what the task dependencies are, and the promise library does all the optimizing for us.

In fact, many things in the async collection module can be easily replaced with operations on lists of promises. We’ve already seen how map works; this code:

async.map(inputs, fn, function(error, results) {});

is equivalent to:

list(inputs.map(promisify(fn))).then(
    function(results) {},
    function(error) {}
);

async.each() is just async.map() where you’re executing the functions for their side effects and throwing the return values away; you can just use map() instead.
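In sketch form, this:

async.each(inputs, fn, function(error) {});

is equivalent to:

list(inputs.map(promisify(fn))).then(
    function() { /* results ignored; the side effects are done */ },
    function(error) {}
);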

async.mapSeries() (and by the previous argument, async.eachSeries()) is equivalent to calling reduce() on a list of promises. That is, you take your list of inputs, and use reduce to produce a promise where each action depends on the one before it succeeding. Let’s take an example: implementing an equivalent of rm -rf based on fs.rmdir(). This code:

var dirs = ['a/b/c', 'a/b', 'a'];
async.mapSeries(dirs, fs.rmdir, function(error) {});

is equivalent to:

var dirs     = ['a/b/c', 'a/b', 'a'],
    fs_rmdir = promisify(fs.rmdir);

var rm_rf = dirs.reduce(function(promise, path) {
  return promise.then(function() { return fs_rmdir(path) });
}, unit());

rm_rf.then(
    function() {},
    function(error) {}
);

Where unit() is simply a function that produces an already-resolved promise to start the chain (if you know monads, this is the return function for promises):

// unit :: a -> Promise a
var unit = function(a) {
  var promise = new Promise();
  promise.resolve(a);
  return promise;
};
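(Again, with standard ES6 promises unit() needs no hand-rolling: it’s just Promise.resolve().)

// ES6 equivalent of unit()
var unit = function(a) { return Promise.resolve(a) };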

This reduce() approach simply takes each directory path in turn, and uses promise.then() to make the action that deletes the path depend on the success of the previous step. This handles errors such as non-empty directories for you: if any promise in the chain is rejected, the chain simply halts. Using value dependencies to force a certain order of execution is a core idea in how functional languages use monads to deal with side effects.
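Unrolling the reduce makes the dependency chain explicit; for our three paths it builds exactly this:

var rm_rf = unit()
    .then(function() { return fs_rmdir('a/b/c') })
    .then(function() { return fs_rmdir('a/b') })
    .then(function() { return fs_rmdir('a') });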

This final example is more verbose than the equivalent async code, but don’t let that deceive you. The key idea here is that we’re combining the separate ideas of promise values and list operations to compose programs, rather than relying on custom control flow libraries. As we saw earlier, the former approach results in programs that are easier to think about.

And they are easier to think about precisely because we’ve delegated part of our thought process to the machine. When using the async module, our thought process is:

  • A. The tasks in this program depend on each other like so,
  • B. Therefore the operations must be ordered like so,
  • C. Therefore let’s write code to express B.

Using graphs of dependent promises lets you skip step B altogether. You write code that expresses the task dependencies and let the computer deal with control flow. To put it another way, callbacks use explicit control flow to glue many small values together, whereas promises use explicit value relationships to glue many small bits of control flow together. Callbacks are imperative, promises are functional.

A discussion of this topic would not be complete without one final application of promises, and a core idea in functional programming: laziness. Haskell is a lazy language, which means that instead of treating your program as a script that it executes top-to-bottom, it starts at the expressions that define the program’s output – what it writes to stdio, databases, and so on – and works backwards. It looks at what expressions those final expressions depend on for their input, and walks this graph in reverse until it’s computed everything the program needs to produce its output. Things are only computed if they are needed for the program to do its work.

Many times, the best solution to a computer science problem comes from finding the right data structure to model it. And JavaScript has one problem very similar to what I just described: module loading. You only want to load modules your program actually needs, and you want to do this as efficiently as possible.

Before we had module systems like CommonJS and AMD, which actually have a notion of dependencies, we had a handful of script-loader libraries. They mostly worked much like our examples above: you explicitly told the script loader which files could be downloaded in parallel and which had to be ordered a certain way. You basically had to spell out the download strategy, which is considerably harder to do correctly and efficiently than simply describing the dependencies between scripts and letting the loader optimize things for you.

Let’s introduce the notion of a LazyPromise. This is a promise object that contains a function that does some possibly async work. The function is only invoked once someone calls then() on the promise: we only begin evaluating it once someone needs the result. It does this by overriding then() to kick off the work if it’s not already been started.

var Promise = require('rsvp').Promise,
    util    = require('util');

var LazyPromise = function(factory) {
  this._factory = factory;
  this._started = false;
};
util.inherits(LazyPromise, Promise);

LazyPromise.prototype.then = function() {
  // kick off the underlying work on first demand, and only once
  if (!this._started) {
    this._started = true;
    var self = this;
    
    this._factory(function(error, result) {
      if (error) self.reject(error);
      else self.resolve(result);
    });
  }
  return Promise.prototype.then.apply(this, arguments);
};

For example, the following program does nothing: since we never ask for the result of the promise, no work is done:

var delayed = new LazyPromise(function(callback) {
  console.log('Started');
  setTimeout(function() {
    console.log('Done');
    callback(null, 42);
  }, 1000);
});

But if we add this line, then the program prints Started, waits for a second, then prints Done followed by 42:

delayed.then(console.log);

And since the work is only done once, calling then() repeatedly yields the result each time without redoing the work:

delayed.then(console.log);
delayed.then(console.log);
delayed.then(console.log);

// prints:
// Started
// -- 1 second delay --
// Done
// 42
// 42
// 42

Using this very simple generic abstraction, we can build an optimizing module system in no time at all. Imagine we want to make a bunch of modules like this: each module is created with a name, a list of modules it depends on, and a factory that, when executed with its dependencies passed in, returns the module’s API. This is very similar to how AMD works.

var A = new Module('A', [], function() {
  return {
    logBase: function(x, y) {
      return Math.log(x) / Math.log(y);
    }
  };
});

var B = new Module('B', [A], function(a) {
  return {
    doMath: function(x, y) {
      return 'B result is: ' + a.logBase(x, y);
    }
  };
});

var C = new Module('C', [A], function(a) {
  return {
    doMath: function(x, y) {
      return 'C result is: ' + a.logBase(y, x);
    }
  };
});

var D = new Module('D', [B, C], function(b, c) {
  return {
    run: function(x, y) {
      console.log(b.doMath(x, y));
      console.log(c.doMath(x, y));
    }
  };
});

We have a diamond shape here: D depends on B and C, each of which depends on A. This means we can load A, then B and C in parallel, and when both of those are done we can load D. But we want our tools to figure this out for us rather than writing that strategy out ourselves.

We can do this very easily by modeling a module as a LazyPromise subtype. Its factory simply asks for the values of its dependencies using our list promise helper from before, then creates the module with those dependencies after a timeout that simulates the latency of loading things asynchronously.

var DELAY = 1000;

var Module = function(name, deps, factory) {
  // note: we never call the LazyPromise constructor, so _started is
  // simply undefined (falsy) until the first call to then()
  this._factory = function(callback) {
    // wait for all dependencies, then 'load' this module after a delay
    list(deps).then(function(apis) {
      console.log('-- module LOAD: ' + name);
      setTimeout(function() {
        console.log('-- module done: ' + name);
        var api = factory.apply(this, apis);
        callback(null, api);
      }, DELAY);
    });
  };
};
util.inherits(Module, LazyPromise);

Because Module is a LazyPromise, simply defining the modules as above does not load any of them. We only start loading things when we try to use the modules:

D.then(function(d) { d.run(1000, 2) });

// prints:
// 
// -- module LOAD: A
// -- module done: A
// -- module LOAD: B
// -- module LOAD: C
// -- module done: B
// -- module done: C
// -- module LOAD: D
// -- module done: D
// B result is: 9.965784284662087
// C result is: 0.10034333188799373

As you can see, A is loaded first, then when it completes B and C begin downloading at the same time, and when both of them complete then D loads, just as we wanted. If you try just calling C.then(function() {}) you’ll see that only A and C load; modules that are not in the graph of the ones we need are not loaded.
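For example, in a fresh run (same DELAY, with arguments chosen to match the output above):

C.then(function(c) { console.log(c.doMath(1000, 2)) });

// prints:
// -- module LOAD: A
// -- module done: A
// -- module LOAD: C
// -- module done: C
// C result is: 0.10034333188799373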

So we’ve created a correct optimizing module loader with barely any code, simply by using a graph of lazy promises. We’ve taken the functional programming approach of using value relationships rather than explicit control flow to solve the problem, and it was much easier than if we’d written the control flow ourselves as the main element in the solution. You can give any acyclic dependency graph to this library and it will optimize the control flow for you.

This is the real power of promises. They are not just a way to avoid a pyramid of indentation at the syntax level. They give you an abstraction that lets you model problems at a higher level, and leave more work to your tools. And that’s what we should all be demanding from our software. If Node is serious about making concurrent programming easy, it should give promises a second look.