Sometimes when you type in a web search, Google will suggest alternatives. For instance, if you type in "Lingusitics", it will ask you "Did you mean Linguistics?". But the engineers at Google have added some playfulness to the system. For instance, if you search for "anagram", Google asks you "Did you mean: nag a ram?" And if you search for "recursion", Google asks: "Did you mean: recursion?"

What is the "rec" part of "letrec" doing?

How could we compute the length of a list? Without worrying yet about what Lambda Calculus encoding we're using for the list, the basic idea is to define this recursively:

the empty list has length 0

any non-empty list has length 1 + (the length of its tail)

In OCaml, you'd define that like this:

let rec length = fun xs ->
                   if xs = [] then 0
                              else 1 + length (List.tl xs)
in ... (* here you go on to use the function "length" *)

In Scheme you'd define it like this:

(letrec [(length (lambda (xs)
                   (if (null? xs) 0
                                  (+ 1 (length (cdr xs))) )))]
        ... ; here you go on to use the function "length"
)

Some comments on this:

  1. null? is Scheme's way of saying empty?. That is, (null? xs) returns true (which Scheme writes as #t) iff xs is the empty list (which Scheme writes as '() or (list)).

  2. cdr is the function that gets the tail of a Scheme list. (By definition, it's the function for getting the second member of a dotted pair. As we discussed in notes for last week, it just turns out to return the tail of a list because of the particular way Scheme implements lists.) List.tl is the function that gets the tail of an OCaml list.

  3. We alternate between [ ]s and ( )s in the Scheme code just to make it more readable. These have no syntactic difference.

The main question for us to dwell on here is: What are the let rec in the OCaml code and the letrec in the Scheme code?

Answer: These work a lot like let expressions, except that they let you use the variable length inside the body of the function being bound to it --- with the understanding that it will there be bound to the same function that you're then in the process of binding length to. So our recursively-defined function works the way we'd expect it to. Here is OCaml:

let rec length = fun xs ->
                   if xs = [] then 0
                              else 1 + length (List.tl xs)
in length [20; 30]
(* this evaluates to 2 *)

Here is Scheme:

(letrec [(length (lambda (xs)
                   (if (null? xs) 0
                                  (+ 1 (length (cdr xs))) )))]
  (length (list 20 30)))
; this evaluates to 2

If you instead use an ordinary let (or let*), here's what would happen, in OCaml:

let length = fun xs ->
               if xs = [] then 0
                          else 1 + length (List.tl xs)
in length [20; 30]
(* fails with error "Unbound value length" *)

Here's Scheme:

(let* [(length (lambda (xs)
                 (if (null? xs) 0
                                (+ 1 (length (cdr xs))) )))]
  (length (list 20 30)))
; fails with error "reference to undefined identifier: length"

Why? Because we said that constructions of this form:

let
  length match/= A
in B

really were just another way of saying:

(\length. B) A

and so the occurrences of length in A aren't bound by the \length that wraps B. Those occurrences are free.

We can verify this by wrapping the whole expression in a more outer binding of length to some other function, say the constant function from any list to the integer 99:

let length = fun xs -> 99
in let length = fun xs ->
                  if xs = [] then 0
                             else 1 + length (List.tl xs)
in length [20; 30]
(* evaluates to 1 + 99 *)

Here the use of length in 1 + length (List.tl xs) can clearly be seen to be bound by the outermost let.

And indeed, if you tried to define length in the Lambda Calculus, how would you do it?

\xs. (empty? xs) 0 (succ (length (tail xs)))

We've defined all of empty?, 0, succ, and tail in earlier discussion. But what about length? That's not yet defined! In fact, that's the very formula we're trying here to specify.

What we really want to do is something like this:

\xs. (empty? xs) 0 (succ ((...) (tail xs)))

where this very same formula occupies the ... position:

\xs. (empty? xs) 0 (succ (
\xs. (empty? xs) 0 (succ ((...) (tail xs)))
                              ) (tail xs)))

but as you can see, we'd still have to plug the formula back into itself again, and again, and again... No dice.

So how could we do it? And how do OCaml and Scheme manage to do it, with their let rec and letrec?

  1. OCaml and Scheme do it using a trick. Well, not a trick. Actually an impressive, conceptually deep technique, which we haven't yet developed. Since we want to build up all the techniques we're using by hand, then, we shouldn't permit ourselves to rely on let rec or letrec until we thoroughly understand what's going on under the hood.

  2. If you tried this in Scheme:

    (define length (lambda (xs)
                     (if (null? xs) 0
                                    (+ 1 (length (cdr xs))) )))
    
    (length (list 20 30))
    

    You'd find that it works! This is because define in Scheme is really shorthand for letrec, not for plain let or let*. So we should regard this as cheating, too.

  3. In fact, it is possible to define the length function in the Lambda Calculus despite these obstacles, without yet knowing how to implement letrec in general. We've already seen how to do it, using our right-fold (or left-fold) encoding for lists, and exploiting their internal structure. Those encodings take a function and a seed value and return the result of folding that function over the list, with that seed value. So we could use this as a definition of length:

    \xs. xs (\x sofar. succ sofar) 0
    

    What's happening here? We start with the seed value 0. Then we apply the function \x sofar. succ sofar to the two arguments x_n and 0, where x_n is the last element of the list. This gives us succ 0, or 1. That's the value we've accumulated "so far." Then we apply the function \x sofar. succ sofar to the two arguments x_(n-1), the next-to-last element, and the value 1 that we've accumulated "so far." This gives us 2. We continue until we reach the start of the list. The value we've built up "so far" by then will be the length of the list.

    We can use similar techniques to define many recursive operations on lists and numbers. The reason we can do this is that our fold-based encoding of lists, and Church's encodings of numbers, have an internal structure that mirrors the common recursive operations we'd use lists and numbers for. In a sense, the recursive structure of the length operation is built into the data structure we are using to represent the list. The non-recursive definition of length, above, exploits this embedding of the recursion into the data type. (There's an OCaml rendering of the same idea just after this list.)

    This illustrates what will be one of the recurring themes of the course: using data structures to encode the state of some recursive operation. See our discussions later this semester of the zipper technique, and of defunctionalization.
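Here is the same idea rendered in OCaml (just a sketch: OCaml lists don't carry their fold around with them the way our right-fold encoding does, so we call the library function List.fold_right to play that role):

(* length as a fold: the seed is 0, and the folded function just
   increments the value accumulated "so far", ignoring the list element *)
let length xs = List.fold_right (fun _x sofar -> 1 + sofar) xs 0

let () = assert (length [20; 30] = 2)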

As we've seen, it does take some ingenuity to define functions like tail or pred for our right-fold encoding of lists. However it can be done. (And it's not that difficult.) Given those functions, we can go on to define other functions like numeric equality, subtraction, and so on, just by exploiting the structure already present in our implementation of lists and numbers.

With sufficient ingenuity, a great many functions can be defined in the same way. For example, the factorial function is straightforward. The function which returns the nth term in the Fibonacci series is a bit more difficult, but also achievable.
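To make that claim concrete, here is a hedged OCaml sketch for factorial: if we let the list [1; 2; ...; n] supply the iteration, our own code contains no general recursion (List.init and List.fold_left are standing in for the structure that our encodings of numbers and lists carry internally):

(* factorial without let rec: the structure of [1; ...; n] drives the loop *)
let fact n = List.fold_left ( * ) 1 (List.init n (fun i -> i + 1))

let () = assert (fact 5 = 120)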

Some functions require full-fledged recursive definitions

However, some computable functions are just not definable in this way. We can't, for example, define a function that tells us, for whatever function f we supply it, what is the smallest natural number x where f x is true (even if f itself is a function we do already know how to define).

Neither do the resources we've so far developed suffice to define the Ackermann function. In OCaml:

let rec ackermann = fun (m, n) ->
  if      m = 0 then n + 1
  else if n = 0 then ackermann (m - 1, 1)
  else               ackermann (m - 1, ackermann (m, n - 1));;

A(0,y) = y+1
A(1,y) = 2+(y+3) - 3
A(2,y) = 2(y+3) - 3
A(3,y) = 2^(y+3) - 3
A(4,y) = 2^(2^(2^...2)) (* where there are y+3 2s *) - 3
...
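If you run the OCaml definition above (which we've spelled ackermann, since OCaml wants value names to begin with a lowercase letter), you can spot-check the first couple of those equations:

let () =
  assert (ackermann (2, 3) = 2 * (3 + 3) - 3);  (* that is, 9 *)
  assert (ackermann (3, 3) = 61)                (* 2^(3+3) - 3 *)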

Many simpler functions could always be defined using the resources we've developed so far, although those definitions won't always be very efficient or easily intelligible.

But functions like the Ackermann function require us to develop a more general technique for doing recursion --- and having developed it, it will often be easier to use it even in the cases where, in principle, we didn't have to.

The example used to illustrate this in Chapter 9 of The Little Schemer is a function called looking, where:

(looking '(6 2 4 caviar 5 7 3))

returns #t, because if we follow the path from the head of the list argument, 6, to the sixth element of the list, 7 (the authors of that book count positions starting from 1, though generally Scheme follows the convention of counting positions starting from 0), and then proceed to the seventh element of the list, 3, and then proceed to the third element of the list, 4, and then proceed to the fourth element of the list, we find the 'caviar we are looking for. On the other hand, if we say:

(looking '(6 2 grits caviar 5 7 3))

our path will take us from 6 to 7 to 3 to grits, which is neither a number nor the 'caviar we were looking for. So this returns #f. It's not clear how to define such functions without recourse to something like letrec or define, or the techniques developed below (and also in that chapter of The Little Schemer).
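Here is a rough OCaml rendering of looking, just to fix ideas. It's our own sketch rather than the book's code, and notice that it leans on let rec, exactly the kind of general recursion this section says we don't yet know how to reproduce in the Lambda Calculus. Positions are counted from 1, following the book:

type entry = Num of int | Word of string

(* follow the trail of positions until we land on something that isn't a number *)
let looking entries =
  let nth i = List.nth entries (i - 1) in
  let rec keep_looking entry =
    match entry with
    | Num i  -> keep_looking (nth i)
    | Word w -> w = "caviar"
  in
  keep_looking (nth 1)

let () =
  assert (looking [Num 6; Num 2; Num 4; Word "caviar"; Num 5; Num 7; Num 3]);
  assert (not (looking [Num 6; Num 2; Word "grits"; Word "caviar"; Num 5; Num 7; Num 3]))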

The Little Schemer also mentions the Ackermann function, as well as the interesting Collatz conjecture. They also point out that functions like their looking never return any value --- neither #t nor #f --- for some arguments, as in the example:

(looking '(7 1 2 caviar 5 6 3))

Here our path takes us from 7 to 3 to 2 to 1 back to 7, and the cycle repeats. So in this case, the looking function never returns any value.

We've already tacitly been dealing with functions that we assumed to be defined only for expressions representing booleans, or only for expressions representing numbers. But in all such cases we could specify in advance what the intended domain of the function was. With examples like the above, it's not clear how to specify the domain in advance, in such a way that our function will still give a definite result for every argument in the domain. Instead, the capacity for fully general recursion brings with it also the downside that some functions will be only partially defined, even over restricted domains we're able to define in advance. We will see more extreme examples of this below.

(Being only definable with the power of fully general recursion doesn't by itself render you only partially defined: the Ackermann function is total. The downside is rather that there's no way to let fully general recursion in, while limiting its use to just the cases where a definite value will be returned for every argument.)

Using fixed-point combinators to define recursive functions

Fixed points

In mathematics, a fixed point of a function f is any value ξ such that f ξ is equivalent to ξ. For example, consider the squaring function square that maps natural numbers to their squares. square 2 = 4, so 2 is not a fixed point. But square 1 = 1, so 1 is a fixed point of the squaring function. (Can you think of another?)

There are many beautiful theorems guaranteeing the existence of a fixed point for various classes of interesting functions. For instance, imagine that you are looking at a map of Manhattan, and you are standing somewhere in Manhattan. Then the Brouwer fixed-point theorem guarantees that there is a spot on the map that is directly above the corresponding spot in Manhattan. It's the spot on the map where the blue you-are-here dot should go.

Whether a function has a fixed point depends on the domain of arguments it is defined for. For instance, consider the successor function succ that maps each natural number to its successor. If we limit our attention to the natural numbers, then this function has no fixed point. (See the discussion below concerning a way of understanding the successor function on which it does have a fixed point.)
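In OCaml the mathematical definition is a one-liner, and we can spot-check the claims about square and succ (a sketch; the helper name is_fixed_point is our own):

(* a fixed point of f is an x with f x = x *)
let is_fixed_point f x = (f x = x)

let square x = x * x
let succ x = x + 1   (* the same as Stdlib.succ *)

let () =
  assert (not (is_fixed_point square 2));
  assert (is_fixed_point square 1);
  (* n + 1 is never n, so succ has no fixed point among the naturals *)
  assert (List.for_all (fun n -> not (is_fixed_point succ n)) [0; 1; 2; 3; 4; 5])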

In the Lambda Calculus, we say a fixed point of a term f is any term ξ such that:

ξ <~~> f ξ

This is a bit different than the general mathematical definition, in that here we're saying it is terms that are fixed points, not values. We like to think that some lambda terms represent values, such as our term \f z. z representing the numerical value zero (and also the truth-value false, and also the empty list... on the other hand, we never did explicitly agree that those three values are all the same thing, did we?). But some terms in the Lambda Calculus don't even have a normal form. We don't want to count them as values. Yet the way we're proposing to use the notion of a fixed point here, they too are allowed to be fixed points, and to have fixed points of their own.

Note that M <~~> N doesn't entail that M and N have a normal form (though if they do, they will have the same normal form). It just requires that there be some term that they both reduce to. It may be that that term itself never stops being reducible.

You should be able to immediately provide a fixed point of the identity combinator I. In fact, you should be able to provide a whole bunch of distinct fixed points.

With a little thought, you should be able to provide a fixed point of the false combinator, KI. Here's how to find it: recall that KI throws away its first argument, and always returns I. Therefore, if we give it I as an argument, it will throw away the argument, and return I. So KII ~~> I, which is all it takes for I to qualify as a fixed point of KI.

What about K? Does it have a fixed point? You might not think so, after trying on paper for a while.

However, it's a theorem of the Lambda Calculus that every lambda term has a fixed point. Even bare variables like x! In fact, they will have infinitely many, non-equivalent fixed points. And we don't just know that they exist: for any given formula, we can explicitly define many of them.

(As we mentioned, even the formula that you're using to define the successor function will have a fixed point. Isn't that weird? There's some ξ such that it is equivalent to succ ξ? Think about how it might be true. We'll return to this point below.)

How fixed points help define recursive functions

Recall our initial, abortive attempt above to define the length function in the Lambda Calculus. We said:

What we really want to do is something like this:

  \xs. (empty? xs) 0 (succ ((...) (tail xs)))

where this very same formula occupies the ... position...

Imagine replacing the ... with some expression LENGTH that computes the length function. Then we have

\xs. (empty? xs) 0 (succ (LENGTH (tail xs)))

(More generally, we might have some lambda term Φ[...SELF...] where we want the contained SELF to refer to that very lambda term Φ[...SELF...].)

At this point, we have a definition of the length function, though it's not complete, since we don't know what value to use for the symbol LENGTH. Technically, it has the status of an unbound variable.

Imagine now binding the mysterious variable, and calling the resulting term h:

h ≡ \length \xs. (empty? xs) 0 (succ (length (tail xs)))

(More generally, convert Φ[...SELF...] to \body. Φ[...body...], where the variable body wants to be bound to the very lambda term that is that abstract's body.)

Now we have no unbound variables, and we have complete non-recursive definitions of each of the other symbols (empty?, 0, succ, and tail).

So h takes a length argument, and returns a function that accurately computes the length of a list --- as long as the argument we supply is already the length function we are trying to define. (Dehydrated water: to reconstitute, just add water!)
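The same move can be mimicked in OCaml with an ordinary, non-recursive let (a sketch to fix ideas; real_length is only there, defined with the let rec we're officially abstaining from, to check the promise that h keeps):

(* no recursion here: h only computes lengths if you hand it
   a function that already computes lengths *)
let h length = fun xs ->
  if xs = [] then 0
             else 1 + length (List.tl xs)

(* cheating with let rec for a moment, just to test h's promise *)
let rec real_length xs = if xs = [] then 0 else 1 + real_length (List.tl xs)

let () = assert (h real_length [20; 30] = 2)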

Here is where the discussion of fixed points becomes relevant. Saying that h is looking for an argument (call it LENGTH) that has the same behavior as the result of applying h to LENGTH is just another way of saying that we are looking for a fixed point for h:

h LENGTH <~~> LENGTH

Replacing h with its definition, we have:

(\xs. (empty? xs) 0 (succ (LENGTH (tail xs)))) <~~> LENGTH

If we can find a value for LENGTH that satisfies this constraint, we'll have a function we can use to compute the length of an arbitrary list. All we have to do is find a fixed point for h.

Let's reinforce this. The left-hand side has the form:

(\body. Φ[...body...]) LENGTH

which beta-reduces to:

Φ[...LENGTH...]

where that whole formula is convertible with the term LENGTH itself. In other words, the term Φ[...LENGTH...] contains (a term that is convertible with) itself --- despite being only finitely long. (If it had to contain a term syntactically identical to itself, this could not be achieved.)

The key to achieving all this is finding a fixed point for h. The strategy we will present will turn out to be a general way of finding a fixed point for any lambda term.

Deriving Y, a fixed point combinator

How shall we begin? Well, we need to find an argument to supply to h. The argument has to be a function that computes the length of a list. The function h is almost a function that computes the length of a list. Let's try applying h to itself. It won't quite work, but examining the way in which it fails will lead to a solution.

h h <~~> \xs. (empty? xs) 0 (succ (h (tail xs)))

The problem is that in the subexpression h (tail xs), we've applied h to a list, but h expects as its first argument the length function.

So let's adjust h, calling the adjusted function H. (We'll use u as the variable that expects to be bound to the as-yet-unknown argument, rather than length. This will make it easier to discuss generalizations of this strategy.)

h ≡ \length \xs. (empty? xs) 0 (succ (length (tail xs)))
H ≡ \u      \xs. (empty? xs) 0 (succ ((u u)  (tail xs)))

(We'll discuss the general case, when you're starting from \body. Φ[...body...] rather than this specific h, below.)

Shifting to H is the key creative step. Instead of applying u to a list, as happened when we self-applied h, H applies its argument u first to itself: u u. After u gets an argument, the result is ready to apply to a list, so we've solved the problem noted above with h (tail xs). We're not done yet, of course; we don't yet know what argument u to give to H that will behave in the desired way.

So let's reason about H. What exactly is H expecting as its first argument? Based on the excerpt (u u) (tail xs), it appears that H's argument, u, should be a function that is ready to take itself as an argument, and that returns a function that takes a list as an argument. H itself fits the bill:

H H <~~> (\u \xs. (empty? xs) 0 (succ ((u u) (tail xs)))) H

    <~~>     \xs. (empty? xs) 0 (succ ((H H) (tail xs)))

    <~~>     \xs. (empty? xs) 0 (succ ((
             \xs. (empty? xs) 0 (succ ((H H) (tail xs)))
                                           ) (tail xs)))
    <~~>     \xs. (empty? xs) 0 (succ (
                  (empty? (tail xs)) 0 (succ ((H H) (tail (tail xs))))
                                                      ))
    <~~>     \xs. (empty? xs) 0 (succ (
                  (empty? (tail xs)) 0 (succ (
             \xs. (empty? xs) 0 (succ ((H H) (tail xs)))
                                                  ) (tail (tail xs))))
                                                      ))
    <~~>     \xs. (empty? xs) 0 (succ (
                  (empty? (tail xs)) 0 (succ (
                  (empty? (tail (tail xs))) 0 (succ ((H H) (tail (tail (tail xs)))))
                                                                    ))
                                                      ))
    <~~>     ...

We're in business!

How does the recursion work?

We've defined H in such a way that H H turns out to be the length function. That is, H H is the LENGTH we were looking for. In order to evaluate H H, we substitute H into the body of the lambda term H. Inside that lambda term, once the substitution has occurred, we are once again faced with evaluating H H. And so on.

We've got the (potentially) infinite regress we desired, defined in terms of a finite lambda term with no undefined symbols.

Since H H turns out to be the length function, we can think of H by itself as half of the length function (which is why we called it H, of course). (Thought exercise: Can you think up a recursion strategy that involves "dividing" the recursive function into equal thirds T, such that the length function <~~> T T T?)
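Here is the same construction transposed into OCaml, as a sketch. OCaml's type system won't accept the self-application u u directly, so we smuggle it past the type-checker with a small wrapper type; that wrapper, and the eta-expansion fun xs -> ..., are artifacts of the host language (its types and its call-by-value evaluation), not part of the Lambda Calculus story:

(* a wrapper so that a function can be applied to (a wrapped copy of) itself *)
type 'a mu = Roll of ('a mu -> 'a)
let unroll (Roll f) = f

(* h: hand it the length function and it hands back a length function *)
let h length xs = if xs = [] then 0 else 1 + length (List.tl xs)

(* H: like h, but with the self-application u u built in; the eta-expansion
   (fun xs -> ...) keeps call-by-value evaluation from looping *)
let big_h = Roll (fun u -> h (fun xs -> unroll u u xs))

(* H H is the length function *)
let length xs = unroll big_h big_h xs

let () = assert (length [20; 30] = 2)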

We started with a particular recursive definition, and arrived at a fixed point for that definition. What's the general recipe?

  1. Start with a formula h that takes the recursive function you're seeking as an argument: h ≡ \length. ...length... (This is what we also called \body. Φ[...body...].)
  2. Next, define H ≡ \u. h (u u)
  3. Then compute H H ≡ ((\u. h (u u)) (\u. h (u u)))
  4. That's the fixed point of h, the recursive function you're seeking.

Expressed in terms of a single formula, here is this method for taking an arbitrary h-style term and returning that term's fixed point, which will be the recursive function that term expects as an argument:

 Y ≡ \h. (\u. h (u u)) (\u. h (u u))

Let's test that Y h will indeed be h's fixed point:

Y h  ≡ (\h. (\u. h (u u)) (\u. h (u u))) h
   ~~>      (\u. h (u u)) (\u. h (u u))
   ~~>           h ((\u. h (u u)) (\u. h (u u)))

But the argument of h in the last line is just the same as the second line, which <~~> Y h. So the last line <~~> h (Y h). In other words, Y h <~~> h (Y h). So by definition, Y h is a fixed point for h.

Works!

A fixed point for K?

Let's do one more example to illustrate. We'll do K (boolean true), since we wondered above whether it had a fixed point.

Before we begin, we can reason a bit about what the fixed point must be like. We're looking for a fixed point for K, i.e., \x y. x. The term K ignores its second argument. That means that no matter what we give K as its first argument, the result will ignore the next argument (that is, K ξ ignores its first argument, no matter what ξ is). So if K ξ <~~> ξ, ξ had also better ignore its first argument. But we also have K ξ ≡ (\x y. x) ξ ~~> \y. ξ. This means that if ξ ignores its first argument, then K ξ <~~> \y. ξ will ignore its first two arguments. So once again, if K ξ <~~> ξ, ξ also had better ignore (at least) its first two arguments. Repeating this reasoning, we realize that ξ here must be a function that ignores as many arguments as you give it.

Our expectation, then, is that our recipe for finding fixed points will build us a term that somehow manages to ignore arbitrarily many arguments.

h     ≡ \x y. x
H     ≡ \u. h (u u)
      ≡ \u. (\x y. x) (u u)
    ~~> \u.    \y. u u
H H ~~> (\u y. u u) (\u y. u u)
    ~~>    \y. (\u y. u u) (\u y. u u)

Let's check that it is in fact a fixed point for K:

K (H H) ~~> (\x y. x) ((\u y. u u) (\u y. u u))
        ~~>    \y. (\u y. u u) (\u y. u u)

Yep, H H and K (H H) both reduce to the same term.

To see what this fixed point does, let's reduce it a bit more:

H H ~~> (\u y. u u) (\u y. u u)
    ~~>    \y. (\u y. u u) (\u y. u u)
    ~~>    \y.    \y. (\u y. u u) (\u y. u u)
    ~~>    \y.    \y.    \y. (\u y. u u) (\u y. u u)

Sure enough, this fixed point ignores an endless, arbitrarily-long series of arguments. It's a write-only memory, a black hole.

Now that we have one fixed point for K, we can find others. For instance, applying a different fixed-point combinator to K, such as the combinator Θ we'll meet below, gives a further fixed point:

Θ K ~~> K (Θ K) ~~> \y. Θ K ~~> \y. \y. Θ K ~~> ...

and that fixed point needn't be interconvertible with H H. Continuing in this way, you can find an infinite number of fixed points for K, all of which have the crucial property of ignoring an endless series of arguments: since any fixed point ξ of K is convertible with K ξ, that is with \y. ξ, it discards whatever argument you give it and hands you back something convertible with ξ again.

A fixed point for succ?

As we've seen, the recipe just given for finding a fixed point worked great for our h, which we wrote as a definition for the length function. But the recipe doesn't make any assumptions about the internal structure of the term it works with. That means it can find a fixed point for literally any lambda term whatsoever.

In particular, what could the fixed point for (our encoding of) the successor function possibly be like?

Well, you might think, only some of the formulas that we might give to succ as arguments would really represent numbers. If we said something like:

succ pair

who knows what we'd get back? Perhaps there's some non-number-representing formula such that when we feed it to succ as an argument, we get the same formula back.

Yes! That's exactly right. And which formula this is will depend on the particular way you've encoded the successor function.

One (by now obvious) upshot is that the recipes that enable us to name fixed points for any given formula h aren't guaranteed to give us terminating, normalizing fixed points. They might give us formulas ξ such that neither ξ nor h ξ have normal forms. (Indeed, what they give us for the square function isn't any of the Church numerals, but is rather an expression with no normal form.) However, if we take care we can ensure that we do get terminating fixed points. And this gives us a principled, fully general strategy for doing recursion. It lets us define even functions like the Ackermann function, which were until now out of our reach. It would also let us define list functions on the encodings we discussed last week, where it wasn't always clear how to force the computation to "keep going."

Varieties of fixed-point combinators

Many fixed-point combinators have been discovered. (And as we've seen, some fixed-point combinators give us models for building infinitely many more, non-equivalent fixed-point combinators.)

Two of the simplest:

Θ′ ≡ (\u h. h (\n. u u h n)) (\u h. h (\n. u u h n))
Y′ ≡ \h. (\u. h (\n. u u n)) (\u. h (\n. u u n))

Applying either of these to a term h gives a fixed point ξ for h, meaning that h ξ <~~> ξ. The combinator Θ′ has the advantage that Θ′ h really reduces to h applied to (a term that behaves just like) Θ′ h. Whereas Y′ h is only convertible with h (Y′ h); that is, there's a common formula they both reduce to. For most purposes, though, either will do.

You may notice that both of these formulas have eta-redexes inside them: why can't we simplify the two \n. u u h n inside Θ′ to just u u h? And similarly for Y′?

Indeed you can, getting the simpler:

Θ ≡ (\u h. h (u u h)) (\u h. h (u u h))
Y ≡ \h. (\u. h (u u)) (\u. h (u u))

We stated the more complex formulas for the following reason: in a language whose evaluation order is call-by-value, the evaluation of Θ (\body. BODY) and Y (\body. BODY) won't terminate. But evaluation of the eta-unreduced primed versions may.
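To see the call-by-value point concretely, here is an OCaml sketch of a Y′-style fixed-point combinator (reusing the Roll wrapper from the earlier length sketch to get the self-application u u past the type-checker). The important part is the eta-expansion fun a -> ... a guarding u u; write f (unroll u u) instead and every application of fix will loop forever before f ever gets called, just as the un-primed Y and Θ would under call-by-value evaluation:

type 'a mu = Roll of ('a mu -> 'a)
let unroll (Roll f) = f

(* Y'-style: note the eta-expansion (fun a -> ... a) around the self-application *)
let fix f =
  let w = Roll (fun u -> f (fun a -> unroll u u a)) in
  unroll w w

(* the Ackermann function, with no let rec in sight *)
let ack =
  fix (fun ack (m, n) ->
    if m = 0 then n + 1
    else if n = 0 then ack (m - 1, 1)
    else ack (m - 1, ack (m, n - 1)))

let () = assert (ack (2, 3) = 9)   (* 2 * (3 + 3) - 3 *)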

Of course, if you define your \body. BODY stupidly, your formula won't terminate, no matter what fixed point combinator you use. For example, let Ψ be any fixed point combinator in:

Ψ (\body. \n. body n)

When you try to evaluate the application of that to some argument M, it's going to try to give you back:

(\n. BODY n) M

where BODY is equivalent to the very formula \n. BODY n that contains it. So the evaluation will proceed:

(\n. BODY n) M ~~>
BODY M <~~>
(\n. BODY n) M ~~>
BODY M <~~>
...

You've written an infinite loop! (This is like the function eternity in Chapter 9 of The Little Schemer.)
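OCaml will happily let you write the same trap with let rec; this is, in effect, the book's eternity:

(* type-checks fine, but never returns for any argument *)
let rec eternity n = eternity n

(* eternity 0   -- would spin forever, so we don't actually run it *)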

However, when we evaluate the application of our:

Ψ (\body. (\xs. (empty? xs) 0 (succ (body (tail xs))) ))

to some list, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of:

\xs. (empty? xs) 0 (succ (body (tail xs)))

to the tail of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the encodings we've been using so far don't permit otherwise), at some point we will reach a list whose tail is empty, and then applying that formula to that tail will return 0. So the recursion eventually bottoms out in a base value.

Fixed-point Combinators Are a Bit Intoxicating

[image: a fixed-point combinator tattoo]

There's a tendency for people to say "Y-combinator" to refer to fixed-point combinators generally. We'll probably fall into that usage ourselves. Speaking correctly, though, the Y-combinator is only one of many fixed-point combinators.

We used Ψ above to stand in for an arbitrary fixed-point combinator. We don't know of any broad conventions for this. But this seems a useful one.

As we said, there are many other fixed-point combinators as well. For example, Jan Willem Klop pointed out that if we define L to be:

\a b c d e f g h i j k l m n o p q s t u v w x y z r. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))

then this is a fixed-point combinator:

L L L L L L L L L L L L L L L L L L L L L L L L L L

Sink: watching Y in action

For those of you who like to watch ultra slow-mo movies of bullets piercing apples, here's a stepwise computation of the application of a recursive function. We'll use a function sink, which takes one argument. If the argument is boolean true (i.e., \y n. y), it returns itself (a copy of sink); if the argument is boolean false (\y n. n), it returns I. That is, we want the following behavior:

sink false <~~> I
sink true false <~~> I
sink true true false <~~> I
sink true true true false <~~> I

To get this behavior, we want sink to be the fixed point of \sink. \b. b sink I. That is, sink ≡ Y (\sb.bsI):

1. sink false
2. Y (\sb.bsI) false
3. (\h. (\u. h [u u]) (\u. h (u u))) (\sb.bsI) false
4. (\u. (\sb.bsI) [u u]) (\u. (\sb.bsI) (u u)) false
5. (\sb.bsI) [(\u. (\sb.bsI) (u u)) (\u. (\sb.bsI) (u u))] false
6. (\b. b [(\u. (\sb.bsI) (u u))(\u. (\sb.bsI) (u u))] I) false
7. false [(\u. (\sb.bsI) (u u))(\u. (\sb.bsI) (u u))] I
         --------------------------------------------
8. I

So far so good. The crucial thing to note is that as long as we always reduce the outermost redex first, we never have to get around to computing the underlined redex: because false ignores its first argument, we can throw it away unreduced.

Now we try the next most complex example:

1. sink true false
2. Y (\sb.bsI) true false
3. (\h. (\u. h [u u]) (\u. h (u u))) (\sb.bsI) true false
4. (\u. (\sb.bsI) [u u]) (\u. (\sb.bsI) (u u)) true false
5. (\sb.bsI) [(\u. (\sb.bsI) (u u)) (\u. (\sb.bsI) (u u))] true false
6. (\b. b [(\u. (\sb.bsI) (u u)) (\u. (\sb.bsI) (u u))] I) true false
7. true [(\u. (\sb.bsI) (u u)) (\u. (\sb.bsI) (u u))] I false
8. [(\u. (\sb.bsI) (u u)) (\u. (\sb.bsI) (u u))] false

We've now arrived at line (4) of the first computation, so the result is again I.

You should be able to see that sink will consume as many trues as we throw at it, then turn into the identity function when it encounters the first false.

The key to the recursion is that, thanks to Y, the definition of sink contains within it the ability to fully regenerate itself as many times as is necessary. The key to ending the recursion is that the behavior of sink is sensitive to the nature of the input: if the input is the magic function false, the self-regeneration machinery will be discarded, and the recursion will stop.

That's about as simple as recursion gets.

Base cases, and their lack

As any functional programmer quickly learns, writing a recursive function divides into two tasks: figuring out how to handle the recursive case, and remembering to insert a base case. The interesting and enjoyable part is figuring out the recursive pattern, but the base case cannot be ignored, since leaving out the base case creates a program that runs forever. For instance, consider computing a factorial: n! is n * (n-1) * (n-2) * ... * 1. The recursive case says that the factorial of a number n is n times the factorial of n-1. But if we leave out the base case, we get

3! = 3 * 2! = 3 * 2 * 1! = 3 * 2 * 1 * 0! = 3 * 2 * 1 * 0 * -1! ...

That's why it's crucial to declare that 0! = 1, in which case the recursive rule does not apply. In our terms,

fact ≡ Y (\fact n. (zero? n) 1 (mult n (fact (pred n))))

If n is 0, fact n reduces to 1, without computing the recursive case.
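The same moral in OCaml (a sketch): with the base case the recursion bottoms out, and without it the function never returns:

let rec fact n =
  if n = 0 then 1                   (* the base case: 0! = 1 *)
           else n * fact (n - 1)    (* the recursive case *)

let () = assert (fact 3 = 6)

(* let rec bad_fact n = n * bad_fact (n - 1)
   bad_fact 3 would keep multiplying 3 * 2 * 1 * 0 * -1 * ... forever *)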