[[!toc]]
#Recursion: fixed points in the lambda calculus#
Sometimes when you type in a web search, Google will suggest
###Fixed points###
In general, a **fixed point** of a function `f` is any value `x`
such that `f x` is equivalent to `x`. For example,
consider the squaring function `square` that maps natural numbers to their squares.
`square 2 = 4`, so `2` is not a fixed point. But `square 1 = 1`, so `1` is a
fixed point of the squaring function.
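A quick check of the arithmetic, as a sketch in Python (the helper name `is_fixed_point` is just for illustration):

```python
def square(n):
    return n * n

# x is a fixed point of f just in case f(x) equals x.
def is_fixed_point(f, x):
    return f(x) == x

print(is_fixed_point(square, 2))  # False: square 2 = 4
print(is_fixed_point(square, 1))  # True:  square 1 = 1
```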
fixed point for various classes of interesting functions. For
instance, imagine that you are looking at a map of Manhattan, and you
are standing somewhere in Manhattan. Then the [[!wikipedia Brouwer
fixed-point theorem]] guarantees that there is a spot on the map that is
directly above the corresponding spot in Manhattan. It's the spot
where the blue you-are-here dot should be.
point. (See the discussion below concerning a way of understanding
the successor function on which it does have a fixed point.)
In the lambda calculus, we say a fixed point of a term `f` is any term `X` such that:
    X <~~> f X
You should be able to immediately provide a fixed point of the
identity combinator I. In fact, you should be able to provide a
whole bunch of distinct fixed points.
With a little thought, you should be able to provide a fixed point of
the false combinator, KI. Here's how to find it: recall that KI
fixed points. And we don't just know that they exist: for any given
formula, we can explicitly define many of them.
Yes, as we've mentioned, even the formula that you're using to define
the successor function will have a fixed point. Isn't that weird?
Think about how it might be true. We'll return to this point below.
###How fixed points help define recursive functions###
Recall our initial, abortive attempt above to define the `length` function in the lambda calculus. We said "What we really want to do is something like this:
symbol `length`. Technically, it has the status of an unbound
variable.
Imagine now binding the mysterious variable, and calling the resulting
function `h`:
    h := \length \list . if empty list then zero else add one (length (tail list))
Now we have no unbound variables, and we have complete non-recursive
definitions of each of the other symbols.
So `h` takes an argument, and returns a function that accurately
computes the length of a list---as long as the argument we supply is
already the length function we are trying to define. (Dehydrated
water: to reconstitute, just add water!)
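Here is the same point as a Python sketch, with Python lists standing in for our encoded lists:

```python
# h takes a candidate length function and returns a list-measuring function.
h = lambda length: lambda lst: 0 if lst == [] else 1 + length(lst[1:])

# If we hand h the *real* length function, the result computes lengths
# correctly -- the "just add water" step:
print(h(len)([10, 20, 30]))  # 3
print(h(len)([]))            # 0
```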
Here is where the discussion of fixed points becomes relevant. Saying
that `h` is looking for an argument (call it `LEN`) that has the same
behavior as the result of applying `h` to `LEN` is just another way of
saying that we are looking for a fixed point for `h`.
    h LEN <~~> LEN
Replacing `h` with its definition, we have
    (\list . if empty list then zero else add one (LEN (tail list))) <~~> LEN
If we can find a value for `LEN` that satisfies this constraint, we'll
have a function we can use to compute the length of an arbitrary list.
All we have to do is find a fixed point for `h`.

The strategy we will present will turn out to be a general way of
finding a fixed point for any lambda term.
    h h <~~> \list . if empty list then zero else 1 + h (tail list)
The problem is that in the subexpression `h (tail list)`, we've
applied `h` to a list, but `h` expects as its first argument the
length function.
So let's adjust `h`, calling the adjusted function `H`:
    H = \h \list . if empty list then zero else one plus ((h h) (tail list))
This is the key creative step. Instead of applying `h` to a list, we
apply it first to itself. After applying `h` to an argument, it's
ready to apply to a list, so we've solved the problem just noted.
We're not done yet, of course; we don't yet know what argument to give
to `H` that will behave in the desired way.
So let's reason about `H`. What exactly is `H` expecting as its first
argument? Based on the excerpt `(h h) (tail list)`, it appears that
`H`'s argument, `h`, should be a function that is ready to take itself
as an argument, and that returns a function that takes a list as an
argument. `H` itself fits the bill:
    H H <~~> (\h \list . if empty list then zero else 1 + ((h h) (tail list))) H
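The same self-application trick can be played in Python (a sketch; Python lists stand in for our encoded lists):

```python
# H takes a half-length function h, applies h to itself, and applies
# the result to the tail of the list.
H = lambda h: lambda lst: 0 if lst == [] else 1 + h(h)(lst[1:])

# H applied to itself behaves as the length function:
length = H(H)
print(length([]))               # 0
print(length(['a', 'b', 'c']))  # 3
```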
Since `H H` turns out to be the length function, we can think of `H`
by itself as half of the length function (which is why we called it
`H`, of course). Can you think up a recursion strategy that involves
"dividing" the recursive function into equal thirds `T`, such that the
length function <~~> T T T?
We started with a particular recursive definition, and arrived at
a fixed point for that definition.
Works!
Let's do one more example to illustrate. We'll do `K`, since we
wondered above whether it had a fixed point.

Before we begin, we can reason a bit about what the fixed point must
be like. We're looking for a fixed point for `K`, i.e., `\xy.x`. `K`
ignores its second argument. That means that no matter what we give
`K` as its first argument, the result will ignore the next argument
(that is, `KX` ignores its first argument, no matter what `X` is). So
if `KX <~~> X`, `X` had also better ignore its first argument. But we
also have `KX == (\xy.x)X ~~> \y.X`. This means that if `X` ignores
its first argument, then `\y.X` will ignore its first two arguments.
So once again, if `KX <~~> X`, `X` also had better ignore at least its
first two arguments. Repeating this reasoning, we realize that `X`
must be a function that ignores an infinite series of arguments.
Our expectation, then, is that our recipe for finding fixed points
will build us a function that somehow manages to ignore an infinite
series of arguments.

    h := \xy.x
    H := \f.h(ff) == \f.(\xy.x)(ff) ~~> \fy.ff
    H H := (\fy.ff)(\fy.ff) ~~> \y.(\fy.ff)(\fy.ff)

Let's check that it is in fact a fixed point:

    K(H H) == (\xy.x)((\fy.ff)(\fy.ff))
           ~~> \y.(\fy.ff)(\fy.ff)

Yep, `H H` and `K(H H)` both reduce to the same term.

To see what this fixed point does, let's reduce it a bit more:

    H H == (\fy.ff)(\fy.ff)
        ~~> \y.(\fy.ff)(\fy.ff)
        ~~> \yy.(\fy.ff)(\fy.ff)
        ~~> \yyy.(\fy.ff)(\fy.ff)

Sure enough, this fixed point ignores an endless, infinite series of
arguments. It's a write-only memory, a black hole.

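We can mimic such a black hole directly in Python (a sketch; the name `black_hole` is my own):

```python
# A function that swallows any argument and returns itself,
# thereby ignoring an endless series of arguments.
def black_hole(arg):
    return black_hole

# No matter how many arguments we feed it, we get the black hole back:
assert black_hole(1) is black_hole
assert black_hole(1)(2)(3) is black_hole

# It behaves as a fixed point of K: K(black_hole) also ignores
# whatever argument comes next and hands back the black hole.
K = lambda x: lambda y: x
assert K(black_hole)("ignored")(99) is black_hole(99)
print("black hole checks out")
```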
Now that we have one fixed point, we can write down others. Since
`H H <~~> \y.(H H)`, terms such as

    \y.(H H)
    \yy.(H H)
    K(H H)

are all convertible to `H H`, and so are themselves fixed points of
`K`. Continuing in this way, you can write down an infinite number of
fixed points, all of which have the crucial property of ignoring an
infinite series of arguments.
##What is a fixed point for the successor function?##
As we've seen, the recipe just given for finding a fixed point worked
great for our `h`, which we wrote as a definition for the length
function. But the recipe doesn't make any assumptions about the
internal structure of the function it works with. That means it can
find a fixed point for literally any function whatsoever.

In particular, what could the fixed point for the
successor function possibly be like?

Well, you might think, only some of the formulas that we might give to the `successor` as arguments would really represent numbers. If we said something like:
    successor make-pair
Yes! That's exactly right. And which formula this is will depend on the particular way you've implemented the successor function.
One (by now obvious) upshot is that the recipes that enable us to name
fixed points for any given formula aren't *guaranteed* to give us
*terminating* fixed points. They might give us formulas `X` such that
neither `X` nor `f X` have normal forms. (Indeed, what they give us
for the square function isn't any of the Church numerals, but is
rather an expression with no normal form.) However, if we take care we
can ensure that we *do* get terminating fixed points. And this gives
us a principled, fully general strategy for doing recursion. It lets
us define even functions like the Ackermann function, which were until
now out of our reach. It would also let us define arithmetic and list
functions on the "version 1" and "version 2" implementations, where it
wasn't always clear how to force the computation to "keep going."
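Here is a sketch of that general strategy in Python. Because Python evaluates arguments eagerly, we use the eta-expanded "Z" variant of the fixed-point combinator rather than Y itself (the names `Z` and `h` are my own):

```python
# Z is a fixed-point combinator that works under eager evaluation:
# Z(f) behaves like f(Z(f)), i.e., Z(f) is a fixed point of f.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Our h from above: takes a candidate length function and returns
# a list-measuring function.
h = lambda length: lambda lst: 0 if lst == [] else 1 + length(lst[1:])

# The fixed point of h is the length function itself:
length = Z(h)
print(length([]))         # 0
print(length([1, 2, 3]))  # 3
```

The eta-expansion `lambda v: x(x)(v)` delays the self-application until an argument actually arrives, which is what keeps the recursion from looping forever under Python's evaluation order.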

###Varieties of fixed-point combinators###
OK, so how do we make use of this?