From 02807170e61a338b91cc8f094837372a3c55a359 Mon Sep 17 00:00:00 2001
From: Chris
Date: Tue, 17 Feb 2015 16:37:21 -0500
Subject: [PATCH] edits for fixed-point
---
topics/_week4_fixed_point_combinator.mdwn | 114 ++++++++++++++++++++----------
1 file changed, 78 insertions(+), 36 deletions(-)
diff --git a/topics/_week4_fixed_point_combinator.mdwn b/topics/_week4_fixed_point_combinator.mdwn
index d006a5fc..30b3745e 100644
--- a/topics/_week4_fixed_point_combinator.mdwn
+++ b/topics/_week4_fixed_point_combinator.mdwn
@@ -183,9 +183,9 @@ But functions like the Ackermann function require us to develop a more general t
###Fixed points###
-In general, we call a **fixed point** of a function f any value *x*
-such that f *x* is equivalent to *x*. For example,
-consider the squaring function `sqare` that maps natural numbers to their squares.
+In general, a **fixed point** of a function `f` is any value `x`
+such that `f x` is equivalent to `x`. For example,
+consider the squaring function `square` that maps natural numbers to their squares.
`square 2 = 4`, so `2` is not a fixed point. But `square 1 = 1`, so `1` is a
fixed point of the squaring function.
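The definition is easy to check mechanically. Here is a minimal sketch in Python; the helper `is_fixed_point` and the test values are ours, purely for illustration:

```python
# A fixed point of f is any x such that f(x) equals x.
def is_fixed_point(f, x):
    return f(x) == x

square = lambda n: n * n

print(is_fixed_point(square, 2))  # False: square 2 = 4
print(is_fixed_point(square, 1))  # True:  square 1 = 1
print(is_fixed_point(square, 0))  # True:  0 is the other fixed point
```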
@@ -193,7 +193,7 @@ There are many beautiful theorems guaranteeing the existence of a
fixed point for various classes of interesting functions. For
instance, imagine that you are looking at a map of Manhattan, and you
are standing somewhere in Manhattan. The [[!wikipedia Brouwer
-fixed point]] guarantees that there is a spot on the map that is
+fixed-point theorem]] guarantees that there is a spot on the map that is
directly above the corresponding spot in Manhattan. It's the spot
where the blue you-are-here dot should be.
@@ -204,13 +204,13 @@ attention to the natural numbers, then this function has no fixed
point. (See the discussion below concerning a way of understanding
the successor function on which it does have a fixed point.)
-In the lambda calculus, we say a fixed point of an expression `f` is any formula `X` such that:
+In the lambda calculus, we say a fixed point of a term `f` is any term `X` such that:
X <~~> f X
You should be able to immediately provide a fixed point of the
-identity combinator I. In fact, you should be able to provide a whole
-bunch of distinct fixed points.
+identity combinator I. In fact, you should be able to provide a
+whole bunch of distinct fixed points.
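To see why, note that `I x` reduces to `x` for any `x` at all. A quick Python analogue, using Python functions as a loose stand-in for lambda terms:

```python
# The identity combinator: I x ~~> x.
I = lambda x: x

# Every value we try is a fixed point of I, since I returns its
# argument unchanged:
for x in [0, 42, "abc", [1, 2], lambda y: y]:
    assert I(x) is x
print("every tested value is a fixed point of I")
```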
With a little thought, you should be able to provide a fixed point of
the false combinator, KI. Here's how to find it: recall that KI
@@ -227,11 +227,11 @@ a fixed point. In fact, it will have infinitely many, non-equivalent
fixed points. And we don't just know that they exist: for any given
formula, we can explicitly define many of them.
-Yes, even the formula that you're using the define the successor
-function will have a fixed point. Isn't that weird? Think about how it
-might be true. We'll return to this point below.
+Yes, as we've mentioned, even the formula that you're using to define
+the successor function will have a fixed point. Isn't that weird?
+Think about how it might be true. We'll return to this point below.
-###How fixed points help definie recursive functions###
+###How fixed points help define recursive functions###
Recall our initial, abortive attempt above to define the `length` function in the lambda calculus. We said "What we really want to do is something like this:
@@ -249,29 +249,34 @@ it's not complete, since we don't know what value to use for the
symbol `length`. Technically, it has the status of an unbound
variable.
-Imagine now binding the mysterious variable:
+Imagine now binding the mysterious variable, and calling the resulting
+function `h`:
h := \length \list . if empty list then zero else add one (length (tail list))
Now we have no unbound variables, and we have complete non-recursive
definitions of each of the other symbols.
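To make the point concrete, here is how `h` might look in Python, using Python lists and Python's built-in `len` in place of the lambda-calculus encodings (an assumption of this sketch, not something in the text):

```python
# A non-recursive h: given a candidate `length` function, it returns
# a function that computes list length -- correctly, provided the
# candidate we supply already behaves like the real length function.
h = lambda length: lambda lst: 0 if not lst else 1 + length(lst[1:])

# Feed h the genuine length function and it hands length back:
almost_length = h(len)
print(almost_length([10, 20, 30]))  # 3
print(almost_length([]))            # 0
```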
-Let's call this function `h`. Then `h` takes an argument, and returns
-a function that accurately computes the length of a list---as long as
-the argument we supply is already the length function we are trying to
-define. (Dehydrated water: to reconstitute, just add water!)
+So `h` takes an argument, and returns a function that accurately
+computes the length of a list---as long as the argument we supply is
+already the length function we are trying to define. (Dehydrated
+water: to reconstitute, just add water!)
-But this is just another way of saying that we are looking for a fixed point.
-Assume that `h` has a fixed point, call it `LEN`. To say that `LEN`
-is a fixed point means that
+Here is where the discussion of fixed points becomes relevant. Saying
+that `h` is looking for an argument (call it `LEN`) that has the same
+behavior as the result of applying `h` to `LEN` is just another way of
+saying that we are looking for a fixed point for `h`.
h LEN <~~> LEN
-But this means that
+Replacing `h` with its definition, we have
(\list . if empty list then zero else add one (LEN (tail list))) <~~> LEN
-So at this point, we are going to search for fixed point.
+If we can find a value for `LEN` that satisfies this constraint, we'll
+have a function we can use to compute the length of an arbitrary list.
+All we have to do is find a fixed point for `h`.
+
The strategy we will present will turn out to be a general way of
finding a fixed point for any lambda term.
@@ -285,22 +290,24 @@ work, but examining the way in which it fails will lead to a solution.
h h <~~> \list . if empty list then zero else 1 + h (tail list)
-There's a problem. The diagnosis is that in the subexpression `h
-(tail list)`, we've applied `h` to a list, but `h` expects as its
-first argument the length function.
+The problem is that in the subexpression `h (tail list)`, we've
+applied `h` to a list, but `h` expects as its first argument the
+length function.
So let's adjust h, calling the adjusted function H:
H = \h \list . if empty list then zero else one plus ((h h) (tail list))
-This is the key creative step. Since `h` is expecting a
-length-computing function as its first argument, the adjustment
-tries supplying the closest candidate avaiable, namely, `h` itself.
+This is the key creative step. Instead of applying `h` to a list, we
+apply it first to itself. After applying `h` to an argument, it's
+ready to apply to a list, so we've solved the problem just noted.
+We're not done yet, of course; we don't yet know what argument to give
+to `H` that will behave in the desired way.
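A Python sketch of the adjusted `H` and its self-application (again using Python lists in place of the lambda-calculus list encoding, purely for illustration):

```python
# The adjusted H applies its first argument to itself before using it
# as the length function:
H = lambda h: lambda lst: 0 if not lst else 1 + h(h)(lst[1:])

# Applying H to itself yields a genuine length function:
length = H(H)
print(length([]))               # 0
print(length(['a', 'b', 'c']))  # 3
```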
-We now reason about `H`. What exactly is H expecting as its first
-argument? Based on the excerpt `(h h) (tail l)`, it appears that `H`'s
-argument, `h`, should be a function that is ready to take itself as an
-argument, and that returns a function that takes a list as an
+So let's reason about `H`. What exactly is `H` expecting as its first
+argument? Based on the excerpt `(h h) (tail l)`, it appears that
+`H`'s argument, `h`, should be a function that is ready to take itself
+as an argument, and that returns a function that takes a list as an
argument. `H` itself fits the bill:
H H <~~> (\h \list . if empty list then zero else 1 + ((h h) (tail list))) H
@@ -322,11 +329,9 @@ finite lambda term with no undefined symbols.
Since `H H` turns out to be the length function, we can think of `H`
by itself as half of the length function (which is why we called it
-`H`, of course). Given the implementation of addition as function
-application for Church numerals, this (H H) is quite literally H + H.
-Can you think up a recursion strategy that involves "dividing" the
-recursive function into equal thirds `T`, such that the length
-function <~~> T T T?
+`H`, of course). Can you think up a recursion strategy that involves
+"dividing" the recursive function into equal thirds `T`, such that the
+length function <~~> T T T?
We started with a particular recursive definition, and arrived at
a fixed point for that definition.
@@ -352,9 +357,44 @@ That is, Yh is a fixed point for h.
Works!
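To see the recipe in action in a programming language: under call-by-value evaluation the plain `Y` loops forever, so the sketch below uses the eta-expanded variant, usually called the `Z` combinator (the Python encoding and the choice of `Z` are ours, not the text's):

```python
# Z = \f.(\x.f(\v.x x v))(\x.f(\v.x x v)) -- the inner lambda delays
# the self-application until an argument actually arrives:
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# The non-recursive body of the length function, as in the text:
h = lambda length: lambda lst: 0 if not lst else 1 + length(lst[1:])

length = Z(h)             # Z h behaves as a fixed point of h
print(length([]))         # 0
print(length([1, 2, 3]))  # 3
```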
+Let's do one more example to illustrate. We'll do `K`, since we
+wondered above whether it had a fixed point.
+
+ h := \xy.x
+ H := \f.h(ff) == \f.(\xy.x)(ff) ~~> \fy.ff
+    H H == (\fy.ff)(\fy.ff) ~~> \y.(\fy.ff)(\fy.ff)
+
+Ok, it doesn't have a normal form. But let's check that it is in fact
+a fixed point:
+
+    K(H H) == (\xy.x)((\fy.ff)(\fy.ff))
+ ~~> \y.(\fy.ff)(\fy.ff)
+
+Yep, `H H` and `K(H H)` both reduce to the same term.
+
+This fixed point is a bit weird. Let's reduce it a bit more:
+
+ H H == (\fy.ff)(\fy.ff)
+ ~~> \y.(\fy.ff)(\fy.ff)
+ ~~> \yy.(\fy.ff)(\fy.ff)
+ ~~> \yyy.(\fy.ff)(\fy.ff)
+
+It appears that where `K` is a function that ignores (only) the first
+argument you feed to it, the fixed point of `K` ignores an endless,
+infinite series of arguments. It's a write-only memory, a black hole.
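This behavior can be mimicked in Python with a function that returns itself (a loose analogue, since Python functions are not lambda terms and we test only extensional behavior):

```python
# A Python stand-in for K's fixed point: a function that swallows any
# argument and returns itself, ready to swallow the next one.
def blackhole(_ignored):
    return blackhole

K = lambda x: lambda y: x

# Extensionally, blackhole acts as a fixed point of K: both sides
# ignore their argument and return blackhole.
assert K(blackhole)("anything") is blackhole
assert blackhole("anything") is blackhole
assert blackhole(1)(2)(3) is blackhole  # an endless series of arguments
print("blackhole behaves as a fixed point of K")
```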
+
##What is a fixed point for the successor function?##
+As we've seen, the recipe just given for finding a fixed point worked
+great for our `h`, which we wrote as a definition for the length
+function. But the recipe doesn't make any assumptions about the
+internal structure of the function it works with. That means it can
+find a fixed point for literally any function whatsoever.
+
+In particular, what could the fixed point for the
+successor function possibly be like?
+
Well, you might think, only some of the formulas that we might give to the `successor` as arguments would really represent numbers. If we said something like:
successor make-pair
@@ -376,6 +416,8 @@ would also let us define arithmetic and list functions on the "version
1" and "version 2" implementations, where it wasn't always clear how
to force the computation to "keep going."
+###Varieties of fixed-point combinators###
+
OK, so how do we make use of this?
Many fixed-point combinators have been discovered. (And some
--
2.11.0