How I accidentally learned Prolog

This year at the London Computation Club, we’ve been reading Types and Programming Languages (TAPL). We covered about half the book, completing three of the book’s six sections: Untyped Systems, Simple Types, and Subtyping. It’s probably the most challenging text we’ve tackled in the club, and certainly one of the most challenging books I’ve read, especially as I have no formal computer science training.

Despite its difficulty, as we got into it I began to find it surprisingly accessible. It is entirely possible to read the book informally, skipping over the mathematical proofs, and still get a lot out of it. However, you will need to become familiar with one core concept: formal definitions of logical rules for the evaluation and typing of programs. Exploring these rules with code meant that as well as learning the material in the book, I accidentally learned the basics of logic programming, a radically different way of thinking about code if you’re used to imperative systems.

The first formal system we encounter in the book is a very simple language consisting only of the boolean values true and false, and the conditional expression if x then y else z. We learn how to formally describe a process for evaluating expressions in this language, via the following rules.

First, an if-expression whose condition is the literal value true reduces to the then-expression. We say that (if true then $t2 else $t3) evaluates to $t2 in one step, and we use the symbol -> to mean “evaluates in one step to”. TAPL calls this rule E-IfTrue, and we write it as:

rule E-IfTrue {
  (if true then $t2 else $t3) -> $t2
}

Second, an if-expression whose condition is false reduces to the else-expression:

rule E-IfFalse {
  (if false then $t2 else $t3) -> $t3
}

Finally, if the condition does not yet have a specific value, but can be reduced, then we reduce the whole if-expression by reducing the condition. This is written as the following rule, which says that if $t1 evaluates to $t1', then (if $t1 then $t2 else $t3) evaluates to (if $t1' then $t2 else $t3), i.e. if the condition is not yet one of the exact values true or false, then reduce it one step.

rule E-If {
                         $t1 -> $t1'
  ---------------------------------------------------------
  (if $t1 then $t2 else $t3) -> (if $t1' then $t2 else $t3)
}

The text above the line is the premise of the rule, and the text below the line is its conclusion. Rules without this line, like E-IfTrue, should be read as just a conclusion, with no premises: they state things that are true without depending on any other conditions.

We can use these rules to evaluate nested conditional expressions:

(if (if true then false else true) then false else true) -> ?

Neither E-IfTrue nor E-IfFalse match this statement; the condition of the outer if-expression does not have the literal value true or false, it is itself another if-expression. So we need to use E-If to reduce the inner expression, (if true then false else true).

This inner expression matches E-IfTrue and so we can reduce it directly to false. We write a proof of this result as:

——————————————————————————————————————— E-IfTrue
(if true then false else true) -> false

The line above the result tells us which rule was used to prove it. We can use this result to evaluate our original if-expression, since we know its condition can be reduced.

                          ——————————————————————————————————————— E-IfTrue
                          (if true then false else true) -> false
——————————————————————————————————————————————————————————————————————————————————————————— E-If
(if (if true then false else true) then false else true) -> (if false then false else true)

This proof says: because we know from E-IfTrue that (if true then false else true) evaluates to false, we therefore know by E-If that the outer expression evaluates to (if false then false else true). We could evaluate this one more time by applying E-IfFalse:

——————————————————————————————————————— E-IfFalse
(if false then false else true) -> true

From this chain of reasoning, we know that our original expression ultimately evaluates to true. There are no rules whose conclusion has the form true -> $t, so this cannot be evaluated further.

This language might not seem especially useful, but this process of defining semantics as a set of formal rules is what lets us prove things about programs, and all the type systems in the book are defined as sets of rules like this.

For example, we can write rules that say both true and false have type Bool; the symbol : means “has the type”:

rule T-True {
  true : Bool
}

rule T-False {
  false : Bool
}

Then we can write a more complicated rule for the type of if-expressions. An if-expression evaluates to either its then-clause or its else-clause, and the type of an if-expression should match the type of what it evaluates to – in type systems, this property of the type remaining the same after evaluation is called preservation. This means that the then- and else-clause should have the same type. We write this rule to say: if the condition has type Bool, and the two clauses both have some type $T, then the whole expression has type $T. (This rule has three premises and they are divided by a / symbol.)

rule T-If {
  $t1 : Bool / $t2 : $T / $t3 : $T
  --------------------------------
  (if $t1 then $t2 else $t3) : $T
}

We can glue these rules together to prove the type of compound expressions, for example:

———————————— T-False   ——————————— T-True   ———————————— T-False
false : Bool           true : Bool          false : Bool
———————————————————————————————————————————————————————— T-If
         (if false then true else false) : Bool

In this language where booleans are the only type this is not particularly interesting, but as more forms are added to the language, we write typing rules for them. For example, we say the value 0 has type Nat, short for “natural number”.

rule T-Zero {
  0 : Nat
}

We also add a function called iszero: iszero 0 evaluates to true, and it evaluates to false for any other input. Its typing rule says that if its input has type Nat, then its result has type Bool.

rule T-IsZero {
        $t : Nat
  ------------------
  (iszero $t) : Bool
}

Now we can show that if-expressions can have other types, if their clauses are things other than booleans:

——————————— T-True   ——————— T-Zero   ——————— T-Zero
true : Bool          0 : Nat          0 : Nat
————————————————————————————————————————————— T-If
        (if true then 0 else 0) : Nat

And we can also decide some programs won’t compile, because the types don’t match. For example, if 0 then true else false cannot be typed, because the first premise in T-If becomes 0 : Bool, which cannot be matched to any rule. if true then 0 else false cannot be typed, because 0 and false have different types and the repeated variable $T in T-If means the types must match.
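
To make this concrete, here is a sketch of these five typing rules as a recursive Ruby function. This is my own encoding, not how Infer evaluates rules; it returns nil for untypeable terms.

```ruby
# Terms as before: :true, :false, 0, [:iszero, t], [:if, t1, t2, t3].
def type_of(term)
  case term
  when :true, :false then :Bool                 # T-True, T-False
  when 0             then :Nat                  # T-Zero
  when Array
    if term.first == :iszero                    # T-IsZero
      :Bool if type_of(term[1]) == :Nat
    elsif term.first == :if                     # T-If
      _, t1, t2, t3 = term
      ty = type_of(t2)
      ty if ty && type_of(t1) == :Bool && type_of(t3) == ty
    end
  end
end

type_of([:if, :true, 0, 0])             # => :Nat
type_of([:iszero, [:if, :true, 0, 0]])  # => :Bool
type_of([:if, 0, :true, :false])        # => nil, condition is not a Bool
type_of([:if, :true, 0, :false])        # => nil, branch types differ
```

The nil results correspond exactly to the two untypeable programs discussed above: no rule matches 0 : Bool, and the repeated $T in T-If forces the branches to agree.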

We can build arbitrarily nested trees of these proofs as our expressions become more complex:

     ——————— T-Zero
     0 : Nat
————————————————— T-IsZero   ——————————— T-True   ———————————— T-False
(iszero 0) : Bool            true : Bool          false : Bool
—————————————————————————————————————————————————————————————— T-If
         (if (iszero 0) then true else false) : Bool

TAPL progresses by inventing increasingly sophisticated typing rules that let us put useful constraints on programs and reject those that would crash at runtime. All these typing rules are essentially of the same kind we’ve seen above: a (possibly empty) set of premises followed by a conclusion, containing variables that can be filled in with arbitrary expressions.

The rules as I’ve written them above are very close to how they’re presented in the book, with minor modifications for writing them as ASCII text. In fact, the rules written out like this can be directly interpreted by a library I wrote to implement the ideas in the book: Infer. It’s based on an initial version of the idea, Tom Stuart’s inference-rules, and all it does is attempt to match input statements to the rules, and print a proof if a match can be found. For example, here’s the code for the last example:

rules = Infer.lang('./rules.txt')
expr  = Infer.expr('(if (iszero 0) then true else false) : $?')

state = rules.derive(expr).first

Infer figures out what the value of that variable $? should be, and generates a proof for the result it found. The process it goes through to do this is fairly simple. We start with the statement we’re trying to prove:

(if (iszero 0) then true else false) : $?

Infer has almost no built-in syntax; the only rules are that things beginning with $ are variables, and parentheses group things. Everything else is an arbitrary meaningless symbol from Infer’s point of view, and all it’s trying to do is find rules whose conclusions can be made to match this statement.

What do we mean by matching in this context? It means that two sequences of symbols can be made equal to each other by replacing variables. For example, the sequences 1 $X 3 and 1 2 $Y can be made equal by setting $X = 2 and $Y = 3. Occurrences of the same variable must be equal, so for example $X $X can be made to match 2 2 but not 3 4.
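
This matching procedure is simple enough to sketch in a few lines of Ruby. This is my own toy version for flat sequences only; representing variables as strings starting with "$" is an assumption of the sketch, and real matching must also handle nested groups.

```ruby
# Variables are strings starting with "$"; everything else matches only
# itself. Bindings accumulate in a hash as we walk the two sequences.
def var?(symbol)
  symbol.start_with?('$')
end

def match(left, right, bindings = {})
  return bindings if left.empty? && right.empty?
  return nil if left.empty? || right.empty?

  a, b = left.first, right.first
  a = bindings.fetch(a, a) if var?(a)   # use an existing binding if present
  b = bindings.fetch(b, b) if var?(b)

  if var?(a)
    match(left.drop(1), right.drop(1), bindings.merge(a => b))
  elsif var?(b)
    match(left.drop(1), right.drop(1), bindings.merge(b => a))
  elsif a == b
    match(left.drop(1), right.drop(1), bindings)
  end
end

match(%w[1 $X 3], %w[1 2 $Y])  # => {"$X"=>"2", "$Y"=>"3"}
match(%w[$X $X], %w[2 2])      # => {"$X"=>"2"}
match(%w[$X $X], %w[3 4])      # => nil
```

Looking up existing bindings before comparing is what makes the repeated-variable case work: the second $X has already been bound, so it must equal whatever the first occurrence matched.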

If we look through the rules above, the only one whose conclusion has the right form to match our target statement is T-If, whose conclusion is

(if $t1 then $t2 else $t3) : $T

We match this against our target statement:

(if (iszero 0) then true else false) : $?

This match gives us:

  • $t1 = (iszero 0)
  • $t2 = true
  • $t3 = false
  • $T = $?

Then we look at the premises of T-If. First, we require $t1 : Bool, that is (iszero 0) : Bool. There is a rule whose conclusion matches this: T-IsZero with (iszero $t) : Bool. This gives us $t = 0, and this rule has premise $t : Nat, which is satisfied by using T-Zero to show 0 : Nat. That takes care of the first premise in T-If.

The next premise says that $t2 : $T, and at this point we know $t2 is true. Is there a rule to match true : $T? Yes: T-True says true : Bool, so that means $T must be Bool.

Now the final premise: $t3 : $T. We now know both $t3 and $T, so we want to show that false : Bool, and indeed we can do that using T-False. If we had previously decided $T was Nat, then we’d be trying to match false : Nat, and there is no rule that will generate this, so the expression cannot be typed.

Having proved all the premises, we can apply T-If to conclude the type of the entire expression, and since we have matched the $? variable to $T, we can conclude that $? is Bool.

This example demonstrates the basic mechanics of answering questions like “what type does this expression have”. We phrase the question as a statement like “the expression has type ?”, and then we match the statement against the rules to try to find out what ? is. When a rule has premises, we recursively try to match each of those in the same way, storing the values of variables as we go. The trace of rules invoked as we match parts of the statement forms a proof of the statement.
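
To show there is no magic here, the whole procedure fits in a short program. What follows is a toy depth-first version in Ruby, entirely my own sketch rather than Infer’s implementation: rules are (conclusion, premises) pairs over nested arrays, and solve repeatedly unifies the first outstanding goal against freshly-renamed rule conclusions.

```ruby
# Variables are symbols starting with "$"; a substitution s maps
# variables to terms.
def var?(t)
  t.is_a?(Symbol) && t.to_s.start_with?('$')
end

# Follow a chain of variable bindings to its end.
def walk(t, s)
  t = s[t] while var?(t) && s.key?(t)
  t
end

def unify(a, b, s)
  a, b = walk(a, s), walk(b, s)
  if a == b then s
  elsif var?(a) then s.merge(a => b)
  elsif var?(b) then s.merge(b => a)
  elsif a.is_a?(Array) && b.is_a?(Array) && a.size == b.size
    a.zip(b).reduce(s) { |s2, (x, y)| s2 && unify(x, y, s2) }
  end
end

# Give a rule fresh variable names each time it's used.
def rename(t, n)
  t.is_a?(Array) ? t.map { |x| rename(x, n) } : (var?(t) ? :"#{t}_#{n}" : t)
end

$fresh = 0

# Depth-first search: yield every substitution that satisfies all goals.
def solve(goals, rules, s, &found)
  return found.call(s) if goals.empty?
  goal, *rest = goals
  rules.each do |conclusion, premises|
    n = ($fresh += 1)
    s2 = unify(goal, rename(conclusion, n), s)
    solve(premises.map { |p| rename(p, n) } + rest, rules, s2, &found) if s2
  end
end

# Deep-resolve a term under a substitution, for printing answers.
def resolve(t, s)
  t = walk(t, s)
  t.is_a?(Array) ? t.map { |x| resolve(x, s) } : t
end

RULES = [
  [[:type, :true, :Bool], []],                                   # T-True
  [[:type, :false, :Bool], []],                                  # T-False
  [[:type, 0, :Nat], []],                                        # T-Zero
  [[:type, [:iszero, :$t], :Bool], [[:type, :$t, :Nat]]],        # T-IsZero
  [[:type, [:if, :$t1, :$t2, :$t3], :$T],                        # T-If
   [[:type, :$t1, :Bool], [:type, :$t2, :$T], [:type, :$t3, :$T]]]
]

results = []
solve([[:type, [:if, [:iszero, 0], :true, :false], :$answer]], RULES, {}) do |s|
  results << resolve(:$answer, s)
end
results  # => [:Bool]
```

One caveat: a naive depth-first search like this can run forever on open-ended queries, so it is only suitable for questions about a specific expression; enumerating all answers fairly needs a cleverer search strategy.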

This method might not look very powerful, but it can actually be used to perform arbitrary computation, and it is the basis of logic programming. Rather than executing a sequence of statements, as imperative programs do, logic programs are declarative sets of rules that input queries are matched against. A query is just any statement we’d like to prove, and it may have variables in it: (if true then false else true) : $? is a query where we’d like to know the value of $? if one exists.

Logic programming is radically different to imperative programming, and has some powerful properties. We can view the above rules as defining a function that computes the type of an expression, if it can be typed. Each expression should have a single well-defined type, and so this function should return a single value, as we are used to in imperative languages. However, logic functions don’t have to behave that way: they can generate any number of results, and they can be run “backwards”.

For example, rather than ask for the type of an expression, we could ask for all the expressions that have type Bool. We’d express this as the query $? : Bool, and we can evaluate it just as we did before:

expr = Infer.expr('$? : Bool')

rules.derive(expr).each do |state|
  # ... print each state and its proof ...
end

This will print a few obvious things that we can reach in one step:

——————————— T-True
true : Bool

It will also print things that require a little more work:

     ——————— T-Zero
     0 : Nat
————————————————— T-IsZero
(iszero 0) : Bool

It will generate complex nested expressions:

     ——————— T-Zero
     0 : Nat
————————————————— T-IsZero   ——————— T-Zero   ——————— T-Zero
(iszero 0) : Bool            0 : Nat          0 : Nat
————————————————————————————————————————————————————— T-If
         (if (iszero 0) then 0 else 0) : Nat
    ————————————————————————————————————————————— T-IsZero
    (iszero (if (iszero 0) then 0 else 0)) : Bool

And it just keeps going, generating an infinite series of all the expressions that have the desired type:

                                           ——————— T-Zero
                                           0 : Nat
                                      ————————————————— T-IsZero   ——————— T-Zero   ——————— T-Zero
                                      (iszero 0) : Bool            0 : Nat          0 : Nat
——————————— T-True   ——————— T-Zero   ————————————————————————————————————————————————————— T-If
true : Bool          0 : Nat                   (if (iszero 0) then 0 else 0) : Nat
—————————————————————————————————————————————————————————————————————————————————— T-If
            (if true then 0 else (if (iszero 0) then 0 else 0)) : Nat
       ——————————————————————————————————————————————————————————————————— T-IsZero
       (iszero (if true then 0 else (if (iszero 0) then 0 else 0))) : Bool

This ability to explore all the possible true statements from a declarative set of rules is core to logic programming, and it makes it particularly good at search and constraint-solving problems.

Generating all possible results like this certainly felt like a neat trick when I first encountered it. (Incidentally, my first encounter with this technique was via a language called microKanren in Tom’s talk Hello, declarative world.) But how does one do real programming? How do we handle state and use data structures? The answer involves a little bit of retraining your brain, so let’s take another example from TAPL.

When we introduce user-defined functions, they look like this: λx: T. t. That is: the symbol λ (lambda), followed by a parameter x, then the parameter’s type T, and finally the function’s body, which is a single term t. Here’s an example function:

λn: Nat. (if (iszero n) then false else true)

When we type-check a program, we recursively go into every sub-expression in the program and work out its type. To compute the type of (iszero n) above, we need to know the type of n – we need to remember that the function definition says that n has type Nat, which means (iszero n) is a well-typed expression. We do this with something called a typing context, which is just a data structure that remembers the types of any parameters that are currently in scope.

In Ruby, we’d just use a hash for that: { 'x' => Nat }. Each time we enter a lambda-expression, we add its parameter and type to this hash, and then carry the hash down into the function’s body as we check its type. But the language we’ve seen above doesn’t have any apparent data structures, or ways of updating them, so we need to invent one.

In TAPL, typing contexts are referred to by the letter Γ (gamma). A context is a list of pairs of parameters and their types. The structure of these lists is defined recursively; a context is either:

  • the empty context, written ∅, or
  • a context with a parameter-type pair x : T appended.

So, these are all valid contexts:

(∅ , x : Bool)
((∅ , x : Bool) , y : Nat)

We can then write a pair of rules for getting the type of a variable in a given context; the statement Γ ⊢ x : T means “in context Γ, the term x has type T”. First, we use pattern-matching on the structure of the context: if the context ends with the pair $x : $T, then the type of $x in the context is $T.

rule T-Var-1 {
  ($Γ, $x : $T) ⊢ $x : $T
}

Then we write a case for when the parameter doesn’t match the last pair in the context. By writing ($Γ, $y : $S), we destructure the context into its final pair $y : $S and everything before it, $Γ. The premise says that if $x has type $T in this smaller context $Γ, then it has the same type once the pair $y : $S is appended.

rule T-Var-2 {
        $Γ ⊢ $x : $T
  -----------------------
  ($Γ, $y : $S) ⊢ $x : $T
}

These rules define a recursive method for looking up the type of a parameter in a list of key-value pairs. Here are a couple of proofs that show the recursion working, first for the base case:

————————————————————————— T-Var-1
(∅ , x : Bool) ⊢ x : Bool

And then for the recursive case:

      ——————————————————————— T-Var-1
      (∅ , y : Nat) ⊢ y : Nat
———————————————————————————————————— T-Var-2
((∅ , y : Nat) , x : Bool) ⊢ y : Nat

(These were derived by entering a query like (∅ , x : Bool) ⊢ x : $?.)
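
Stripped of the typing notation, T-Var-1 and T-Var-2 together are just a recursive lookup over a linked list of pairs. In ordinary Ruby (with :empty standing in for ∅, a representation I’ve chosen for this sketch), the same recursion looks like this:

```ruby
# :empty stands in for the empty context ∅; a non-empty context is a
# two-element array of [rest-of-context, [name, type]].
def lookup(ctx, name)
  return nil if ctx == :empty              # ran out of pairs: not found
  rest, (x, type) = ctx
  x == name ? type : lookup(rest, name)    # T-Var-1 : T-Var-2
end

ctx = [[:empty, [:y, :Nat]], [:x, :Bool]]  # ((∅ , y : Nat) , x : Bool)
lookup(ctx, :y)  # => :Nat
lookup(ctx, :x)  # => :Bool
lookup(ctx, :z)  # => nil
```

The base case of the recursion plays the role of T-Var-1, and the recursive call on the rest of the list plays the role of T-Var-2.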

We can now use this typing-context abstraction to write a rule for computing the type of a function. This rule says that in context Γ a function λx: T. t has type T → S if, in the extended context (Γ, x: T), the body t has type S.

rule T-Abs {
       ($Γ, $x : $T) ⊢ $t : $S
  ---------------------------------
  $Γ ⊢ (λ $x : $T . $t) : ($T → $S)
}

At this point all the other rules must be extended to pass the context around; all we need to do is add $Γ ⊢ to the start of each premise and conclusion, for example:

rule T-Zero {
  $Γ ⊢ 0 : Nat
}

rule T-IsZero {
       $Γ ⊢ $t : Nat
  -----------------------
  $Γ ⊢ (iszero $t) : Bool
}

Now we can run a query to see if we can figure out the type of an expression inside a function:

query: ∅ ⊢ (λn: Nat. (iszero n)) : $?

          ——————————————————————— T-Var-1
          (∅ , n : Nat) ⊢ n : Nat
     ————————————————————————————————— T-IsZero
     (∅ , n : Nat) ⊢ (iszero n) : Bool
——————————————————————————————————————————— T-Abs
∅ ⊢ (λ n : Nat . (iszero n)) : (Nat → Bool)

Indeed we can. Let’s try something more complex to make sure the context is passed down into sub-expressions:

query: ∅ ⊢ (λn: Nat. (if (iszero n) then false else true)) : $?

     ——————————————————————— T-Var-1
     (∅ , n : Nat) ⊢ n : Nat
————————————————————————————————— T-IsZero   ———————————————————————————— T-False   ——————————————————————————— T-True
(∅ , n : Nat) ⊢ (iszero n) : Bool            (∅ , n : Nat) ⊢ false : Bool           (∅ , n : Nat) ⊢ true : Bool
——————————————————————————————————————————————————————————————————————————————————————————————————————————————— T-If
                          (∅ , n : Nat) ⊢ (if (iszero n) then false else true) : Bool
                     ————————————————————————————————————————————————————————————————————— T-Abs
                     ∅ ⊢ (λ n : Nat . (if (iszero n) then false else true)) : (Nat → Bool)
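
For comparison with the rule-based version, here is the same context-passing scheme written as an ordinary recursive Ruby function, using a hash for Γ. This is a sketch of the rules themselves, not of Infer, and the :lambda/:fn encoding is my own.

```ruby
# type_of takes the context as an extra argument, mirroring Γ ⊢ t : T.
def type_of(ctx, term)
  case term
  when :true, :false then :Bool                 # T-True, T-False
  when 0             then :Nat                  # T-Zero
  when Symbol        then ctx[term]             # T-Var: look the name up in Γ
  when Array
    case term.first
    when :iszero                                # T-IsZero
      :Bool if type_of(ctx, term[1]) == :Nat
    when :lambda                                # T-Abs: extend Γ for the body
      _, x, t, body = term
      s = type_of(ctx.merge(x => t), body)
      [:fn, t, s] if s
    when :if                                    # T-If
      _, t1, t2, t3 = term
      ty = type_of(ctx, t2)
      ty if ty && type_of(ctx, t1) == :Bool && type_of(ctx, t3) == ty
    end
  end
end

expr = [:lambda, :n, :Nat, [:if, [:iszero, :n], :false, :true]]
type_of({}, expr)  # => [:fn, :Nat, :Bool]
```

The merge in the :lambda branch is exactly the (Γ, x : T) extension in T-Abs: the body is checked in a context with one extra pair.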

Implementing the various language extensions and subtyping rules in TAPL is excellent practice in expressing rules as executable logic programs, and you’ll soon discover you can get a very long way by using pairs of rules to recursively pattern-match over nested structures. A good example of this is the rules for records, a key-value pair structure that’s used repeatedly in the text. Some more advanced patterns can be found in my implementation of joins and meets, the rules for finding common supertypes and subtypes which are used in the subtyping relation.

So finally, what does this all have to do with Prolog? Well, Prolog is probably the best-known logic programming language. One of our group, Simon Coffey, was using it in his implementation, and he gave me a lot of help in learning it and figuring out my own logic rules. Through his explanations, I realised that this language that had always looked bizarre and impenetrable to me was actually the thing I’d just spent months learning and implementing! The language I’d cooked up for doing TAPL was just a small subset of Prolog with different syntax.

Here are the same rules written in Prolog. Instead of premises followed by a line then a conclusion, Prolog rules put the conclusion first, then the symbol :-, then the premises. Expressions in Prolog are not quite as free-form as in Infer; they must take the form a(b, c, ...). Variables begin with uppercase letters, and everything else is an arbitrary symbol. Prolog also has lists, written [a, b, c, ...], or [a | b] where a is the first element and b is the rest of the list. See if you can match these rules to how they’re written above.

% T-True
has_type(_, true, bool).

% T-False
has_type(_, false, bool).

% T-If
has_type(Ctx, if(Cond, Then, Else), T) :-
    has_type(Ctx, Cond, bool),
    has_type(Ctx, Then, T),
    has_type(Ctx, Else, T).

% T-Zero
has_type(_, 0, nat).

% T-IsZero
has_type(Ctx, iszero(T), bool) :- has_type(Ctx, T, nat).

% T-Var-1
has_type([[X, T] | _], X, T).

% T-Var-2
has_type([_ | Ctx], X, T) :- has_type(Ctx, X, T).

% T-Abs
has_type(Ctx, λ(X, T, Body), fn(T, S)) :-
    has_type([[X, T] | Ctx], Body, S).

You can download a Prolog implementation like SWI-Prolog, load this file into it, and run queries:

$ swipl

?- has_type([], λ(n, nat, if(iszero(n), false, true)), T).
T = fn(nat, bool).

Because I had most of the machinery necessary to run logic programs, I decided to stick a different parser on the front and bootstrap a Prolog implementation of my own. Since Infer has a module for printing the proof tree of any result, I could then use this to show traces of my Prolog code, which was really useful while learning. Here’s a Ruby snippet for proving the above statement:

types = Infer::Prolog.program('')

expr = 'has_type([], λ(n, nat, if(iszero(n), false, true)), T).'
Infer::Prolog.execute_and_print(types, expr)

And the result:

    ————————————————————————————
    has_type([[n, nat]], n, nat)
—————————————————————————————————————      —————————————————————————————————      ————————————————————————————————
has_type([[n, nat]], iszero(n), bool)      has_type([[n, nat]], false, bool)      has_type([[n, nat]], true, bool)
——————————————————————————————————————————————————————————————————————————————————————————————————————————————————
                              has_type([[n, nat]], if(iszero(n), false, true), bool)
                  ——————————————————————————————————————————————————————————————————————————
                        has_type([], λ(n, nat, if(iszero(n), false, true)), fn(nat, bool))

T = fn(nat, bool)

I’ve barely scratched the surface of Prolog and logic programming here, but hopefully this gives you a flavour of how different it is, and how to solve simple problems with it. It’s a lot like pure functional programming as found in languages like Haskell, but without any distinction between function inputs and outputs. It’s a really good discipline to learn as it forces you to formalise your understanding of a problem and prove things about it. And, it goes to show how trying to study one topic can lead you to learn all sorts of other things by accident, just by seeing them in an unexpected context.

If you’ve enjoyed this article, you might enjoy my recently published book JavaScript Testing Recipes. It’s full of simple techniques for writing modular, maintainable JavaScript apps in the browser and on the server.