What's happening here? We start with the value `0`, then we apply the
function `\x sofar. successor sofar` to the two arguments `x`<sub>n</sub>
and `0`, where `x`<sub>n</sub> is the last element of the list. This gives
us `successor 0`, or `1`. That's the value we've accumulated "so far."
Then we apply the function `\x sofar. successor sofar` to the two
arguments `x`<sub>n-1</sub> and the value `1` that we've accumulated "so
far." This gives us `2`. We continue until we reach the start of the
list. The value we've then built up "so far" will be the length of the
list.
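Here is the same computation sketched in Python, under the assumption that a "version 3" list simply is its own right-fold (the names `nil`, `cons`, and the use of native numbers for the running count are illustrative choices, not part of the original encoding):

```python
# A "version 3" list takes a combining function f and a seed z and
# computes the right-fold f x1 (f x2 (... (f xn z))).
nil = lambda f: lambda z: z
cons = lambda x: lambda xs: lambda f: lambda z: f(x)(xs(f)(z))

# length folds \x sofar. successor sofar over the list, seeded with 0.
length = lambda xs: xs(lambda x: lambda sofar: sofar + 1)(0)

abc = cons('a')(cons('b')(cons('c')(nil)))
print(length(abc))  # → 3
```

Note that `length` itself contains no recursion; the iteration is supplied entirely by the list.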
We can use similar techniques to define many recursive operations on
lists and numbers. The reason we can do this is that our "version 3,"
fold-based encoding of lists, and Church's encodings of
numbers, have an internal structure that *mirrors* the common recursive
operations we'd use lists and numbers for. In a sense, the recursive
structure of the `length` operation is built into the data structure
itself.

This is one of the themes of the course: using data structures to
encode the state of some recursive operation. See discussions of the
[[zipper]] technique, and [[defunctionalization]].
As we said before, it does take some ingenuity to define functions like `tail` or `predecessor` for these encodings. However it can be done. (And it's not *that* difficult.) Given those functions, we can go on to define other functions like numeric equality, subtraction, and so on, just by exploiting the structure already present in our encodings of lists and numbers.
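To give the flavor, here is one standard way to get `predecessor`: walk a pair from `(0, 0)` up to `(n-1, n)` in `n` steps, then take the first component. The Python sketch below uses assumed helper names (`pair`, `fst`, `snd`, `church`, `step`), with native numbers standing in for Church numerals inside the pair components:

```python
# Church-style pairs: a pair is a function awaiting a selector.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst = lambda p: p(lambda a: lambda b: a)
snd = lambda p: p(lambda a: lambda b: b)

def church(n):
    # Build the Church numeral for n: \f z. f (f ... (f z)), n applications.
    return lambda f: lambda z: z if n == 0 else f(church(n - 1)(f)(z))

# Each step turns (a, b) into (b, b+1); n steps from (0, 0) reach (n-1, n).
step = lambda p: pair(snd(p))(snd(p) + 1)
pred = lambda n: fst(n(step)(pair(0)(0)))

print(pred(church(5)))  # → 4
print(pred(church(0)))  # → 0
```

Note that `pred` of `0` comes out as `0`, the usual convention for these encodings.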
With sufficient ingenuity, a great many functions can be defined in the same way. For example, the factorial function is straightforward. The function which returns the nth term in the Fibonacci series is a bit more difficult, but also achievable.
Neither do the resources we've so far developed suffice to define the
[[!wikipedia Ackermann function]]:
    A(m,n) =
      | when m == 0 -> n + 1
      | else when n == 0 -> A(m-1,1)
      | else -> A(m-1, A(m,n-1))

    A(0,y) = y+1
    A(1,y) = 2+(y+3) - 3
    A(2,y) = 2(y+3) - 3
    A(3,y) = 2^(y+3) - 3
    A(4,y) = 2^(2^(2^...2)) [where there are y+3 2s] - 3
    ...
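For concreteness, here is the Ackermann function transcribed into Python, which gets to use its host language's native recursion; that is exactly the resource our lambda-calculus toolkit so far lacks. The checks spot-check the closed forms listed above:

```python
def A(m, n):
    # Direct transcription of the recursive definition above.
    if m == 0:
        return n + 1
    elif n == 0:
        return A(m - 1, 1)
    else:
        return A(m - 1, A(m, n - 1))

print(A(1, 4))  # 2+(4+3) - 3 = 6
print(A(2, 4))  # 2(4+3) - 3 = 11
print(A(3, 4))  # 2^(4+3) - 3 = 125
```

Keep the arguments small: already `A(4, 2)` has tens of thousands of digits, and the recursion depth explodes accordingly.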
Many simpler functions always *could* be defined using the resources we've so far developed, although those definitions won't always be very efficient or easily intelligible.
If we restrict our attention to the natural numbers, then the successor
function has no fixed point. (See the discussion below concerning a way
of understanding the successor function on which it does have a fixed
point.)
In the Lambda Calculus, we say a fixed point of a term `f` is any term `X` such that:

    X <~~> f X
You should be able to immediately provide a fixed point of the
identity combinator I. In fact, you should be able to provide a
fixed point of KI.
What about K? Does it have a fixed point? You might not think so,
after trying on paper for a while.
However, it's a theorem of the Lambda Calculus that every formula has
a fixed point. In fact, it will have infinitely many, non-equivalent
fixed points. And we don't just know that they exist: for any given
formula, we can explicitly define many of them.
Think about how it might be true. We'll return to this point below.
###How fixed points help define recursive functions###
Recall our initial, abortive attempt above to define the `length` function in the Lambda Calculus. We said "What we really want to do is something like this:
    \xs. if empty? xs then 0 else succ (... (tail xs))
where this very same formula occupies the `...` position."
Imagine replacing the `...` with some function that computes the
length function. Call that function `length`. Then we have
    \xs. if empty? xs then 0 else succ (length (tail xs))
At this point, we have a definition of the length function, though
it's not complete, since we don't know what value to use for the
variable.
Imagine now binding the mysterious variable, and calling the resulting
function `h`:
    h := \length \xs. if empty? xs then 0 else succ (length (tail xs))
Now we have no unbound variables, and we have complete non-recursive
definitions of each of the other symbols.
Call the function we are looking for `LEN`; it must satisfy `h LEN <~~> LEN`. This is just another way of saying that we are looking for a fixed point for `h`.
Replacing `h` with its definition, we have
    (\xs. if empty? xs then 0 else succ (LEN (tail xs))) <~~> LEN
If we can find a value for `LEN` that satisfies this constraint, we'll
have a function we can use to compute the length of an arbitrary list.
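The constraint is easy to check behaviorally in a strict language. In this Python sketch, `h` is an ordinary non-recursive function, and any correct length function (here Python's built-in `len`, an assumed stand-in for `LEN`) is a fixed point of it:

```python
# h takes a candidate length function and returns a length function:
# h := \length \xs. if empty? xs then 0 else succ (length (tail xs))
h = lambda length: lambda xs: 0 if not xs else 1 + length(xs[1:])

LEN = len                     # a correct length function, as a stand-in
print(h(LEN)([7, 8, 9]))      # → 3, the same answer LEN gives
print(h(LEN)([]))             # → 0
```

Feeding `h` anything that already computes lengths gives back something that computes lengths; the puzzle is how to get such a thing in the first place without recursion.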
The function `h` is *almost* a function that computes the
length of a list. Let's try applying `h` to itself. It won't quite
work, but examining the way in which it fails will lead to a solution.
    h h <~~> \xs. if empty? xs then 0 else 1 + h (tail xs)
The problem is that in the subexpression `h (tail xs)`, we've
applied `h` to a list, but `h` expects as its first argument the
length function.
So let's adjust `h`, calling the adjusted function `H`:
    H = \h \xs. if empty? xs then 0 else 1 + ((h h) (tail xs))
This is the key creative step. Instead of applying `h` to a list, we
apply it first to itself. After applying `h` to an argument, it's
ready to apply to a list. But what should we apply it to as an
argument? Based on the excerpt `(h h) (tail xs)`, it appears that
we need a function that takes itself as an argument, and that returns
a function that takes a list as an argument. `H` itself fits the bill:
    H H <~~> (\h \xs. if empty? xs then 0 else 1 + ((h h) (tail xs))) H
        <~~> \xs. if empty? xs then 0 else 1 + ((H H) (tail xs))
         ==  \xs. if empty? xs then 0 else 1 + ((\xs. if empty? xs then 0 else 1 + ((H H) (tail xs))) (tail xs))
        <~~> \xs. if empty? xs then 0
                  else 1 + (if empty? (tail xs) then 0 else 1 + ((H H) (tail (tail xs))))
We're in business!
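The self-application trick carries over as-is to a strict language. A Python sketch, with native lists standing in (as an illustrative assumption) for our list encoding:

```python
# H takes (a copy of) itself as its first argument; the recursive call
# is spelled (h h), so H's own body is completely non-recursive.
# H = \h \xs. if empty? xs then 0 else 1 + ((h h) (tail xs))
H = lambda h: lambda xs: 0 if not xs else 1 + h(h)(xs[1:])

length = H(H)                  # applying H to itself yields length
print(length([10, 20, 30]))    # → 3
print(length([]))              # → 0
```

No divergence occurs because `h(h)` sits inside the body of a lambda: it is only evaluated when the recursion actually needs another unfolding.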
So what could a fixed point of the successor function possibly be like?
Well, you might think, only some of the formulas that we might give to `successor` as arguments would really represent numbers. If we said something like:
    successor make-pair
who knows what we'd get back? Perhaps there's some non-number-representing formula such that when we feed it to `successor` as an argument, we get the same formula back.
With the right setup, we can ensure that we *do* get terminating fixed points. And this gives
us a principled, fully general strategy for doing recursion. It lets
us define even functions like the Ackermann function, which were until
now out of our reach. It would also let us define arithmetic and list
functions on the "version 1" and "version 2" encodings, where it
wasn't always clear how to force the computation to "keep going."
###Varieties of fixed-point combinators###
Of course, if you define your `\self. BODY` stupidly, your formula will never terminate. For example, consider `Ψ (\self (\n. self n))`.

When you try to evaluate the application of that to some argument `M`, it's going to try to give you back:
    (\n. self n) M
where `self` is equivalent to the very formula `\n. self n` that contains it. So the evaluation will proceed:
    (\n. self n) M ~~>
    self M ~~>
    (\n. self n) M ~~>
    self M ~~>
    ...
You've written an infinite loop!
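In a strict language you can watch this failure happen. In the Python sketch below, the Z combinator (a call-by-value fixed-point combinator) is an assumed stand-in for `Ψ`; tying the knot on `\self. \n. self n` yields a function whose body does no work before recurring:

```python
import sys

# Z: a strict fixed-point combinator; the eta-expansion lambda v: ...
# delays the self-application x(x) until it is actually called.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

loop = Z(lambda self: lambda n: self(n))   # the "stupid" BODY

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(200)                 # keep the inevitable failure quick
try:
    loop(5)
    outcome = "terminated"
except RecursionError:
    outcome = "looped until the recursion limit"
finally:
    sys.setrecursionlimit(old_limit)
print(outcome)  # → looped until the recursion limit
```

Each unfolding of `self` just asks for another unfolding, so nothing ever bottoms out.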
However, when we evaluate the application of our:
`Ψ (\self (\xs. (empty? xs) 0 (succ (self (tail xs))) ))`
to some list `L`, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of:
    \xs. (empty? xs) 0 (succ (self (tail xs)))
to *the tail* of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the encodings we're using don't permit otherwise), at some point one will get a list whose tail is empty, and then the evaluation of that formula to that tail will return `0`. So the recursion eventually bottoms out in a base value.
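Tying the well-behaved definition with an actual combinator can be sketched in Python; here the Z combinator (a call-by-value variant of Y) stands in for `Ψ`, and native lists with `not xs` / `xs[1:]` stand in for `empty?` and `tail`:

```python
# Z is a strict (call-by-value) fixed-point combinator: the inner
# eta-expansion lambda v: ... delays the self-application x(x).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# The same body as above: \self. \xs. (empty? xs) 0 (succ (self (tail xs)))
length = Z(lambda self: lambda xs: 0 if not xs else 1 + self(xs[1:]))

print(length(['a', 'b', 'c', 'd']))  # → 4
```

Because each recursive call receives the tail of the previous list, the chain of calls is exactly as long as the list, and then stops.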
##Fixed-point Combinators Are a Bit Intoxicating##
I used `Ψ` above to stand in for an arbitrary fixed-point combinator.
As we said, there are many other fixed-point combinators as well. For example, Jan Willem Klop pointed out that if we define `L` to be:
    \a b c d e f g h i j k l m n o p q s t u v w x y z r. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
then this is a fixed-point combinator:
    L L L L L L L L L L L L L L L L L L L L L L L L L L
##Watching Y in action##
The factorial of `n` is `n` times the factorial of `n-1`. But if we leave out the base case, we get an infinite regress.
That's why it's crucial to declare that 0! = 1, in which case the
recursive rule does not apply. In our terms,
    fac = Y (\fac n. zero? n 1 (fac (predecessor n)))
If `n` is 0, `fac` reduces to 1, without computing the recursive case.
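Under call-by-value evaluation the textbook Y never terminates, so the Python sketch below substitutes the Z combinator (the eta-expanded, strict variant of Y); `zero?` and `predecessor` become `== 0` and `n - 1`:

```python
# Z: strict fixed-point combinator (Y with the self-application guarded
# by an eta-expansion, so it is only unfolded when actually applied).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# fac = Z (\fac n. zero? n 1 (fac (predecessor n)))
fac = Z(lambda fac: lambda n: 1 if n == 0 else n * fac(n - 1))

print(fac(0))  # → 1
print(fac(5))  # → 120
```

When `n` is `0` the base case fires immediately, so the recursive branch is never unfolded.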
Recall that the Y combinator gives us `Y X <~~> X (Y X)`, for any choice of `X` whatsoever.
So the Y combinator is only guaranteed to give us one fixed point out
of infinitely many---and not always the intuitively most useful
one. (For instance, the squaring function has `0` as a fixed point,
since `0 * 0 = 0`, and `1` as a fixed point, since `1 * 1 = 1`, but `Y
(\x. mul x x)` doesn't give us `0` or `1`.) So with respect to the
truth-teller paradox, why in the reasoning we've
just gone through should we be reaching for just this fixed point at
just this juncture?