From: jim Date: Thu, 19 Feb 2015 18:06:47 +0000 (-0500) Subject: another round of cleanups X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=commitdiff_plain;h=fcd4682b70fd70f1f34698896a0b2706655ac4f1;ds=sidebyside another round of cleanups --- diff --git a/topics/_week4_fixed_point_combinator.mdwn b/topics/_week4_fixed_point_combinator.mdwn index 77244b92..39747b0e 100644 --- a/topics/_week4_fixed_point_combinator.mdwn +++ b/topics/_week4_fixed_point_combinator.mdwn @@ -265,7 +265,7 @@ function `h`: h ≡ \length \xs. (empty? xs) 0 (succ (length (tail xs))) Now we have no unbound variables, and we have complete non-recursive -definitions of each of the other symbols. +definitions of each of the other symbols (`empty?`, `0`, `succ`, and `tail`). So `h` takes an argument, and returns a function that accurately computes the length of a list --- as long as the argument we supply is @@ -305,27 +305,31 @@ The problem is that in the subexpression `h (tail list)`, we've applied `h` to a list, but `h` expects as its first argument the length function. -So let's adjust h, calling the adjusted function H: +So let's adjust `h`, calling the adjusted function `H`. (We'll use `u` as the variable +that expects to be bound to `H`'s fixed point, rather than `length`. This will make it easier +to discuss generalizations of this strategy.) - H = \h \xs. (empty? xs) 0 (succ ((h h) (tail xs))) + H ≡ \u \xs. (empty? xs) 0 (succ ((u u) (tail xs))) -This is the key creative step. Instead of applying `h` to a list, we -apply it first to itself. After applying `h` to an argument, it's -ready to apply to a list, so we've solved the problem just noted. +This is the key creative step. Instead of applying `u` to a list, we +apply it first to itself. After applying `u` to an argument, it's +ready to apply to a list, so we've solved the problem noted above with `h (tail list)`. 
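To see the key creative step in action, here is a sketch of the self-application trick in Python (not from the notes themselves), using Python's native lists in place of the lambda-calculus list encodings the notes assume:

```python
# H never calls itself by name; the recursion comes entirely from
# applying its argument u to itself, just as in
#   H = \u \xs. (empty? xs) 0 (succ ((u u) (tail xs)))
H = lambda u: (lambda xs: 0 if not xs else 1 + u(u)(xs[1:]))

length = H(H)  # H applied to itself behaves as the length function
print(length([10, 20, 30]))  # prints 3
```

Note that `u(u)` is only evaluated when the list is non-empty, which is why the computation bottoms out instead of looping.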
We're not done yet, of course; we don't yet know what argument to give to `H` that will behave in the desired way. So let's reason about `H`. What exactly is `H` expecting as its first -argument? Based on the excerpt `(h h) (tail l)`, it appears that -`H`'s argument, `h`, should be a function that is ready to take itself +argument? Based on the excerpt `(u u) (tail xs)`, it appears that +`H`'s argument, `u`, should be a function that is ready to take itself as an argument, and that returns a function that takes a list as an argument. `H` itself fits the bill: H H <~~> (\u \xs. (empty? xs) 0 (succ ((u u) (tail xs)))) H <~~> \xs. (empty? xs) 0 (succ ((H H) (tail xs))) - ≡ \xs. (empty? xs) 0 (succ ((\xs. (empty? xs) 0 (succ ((H H) (tail xs)))) (tail xs))) - <~~> \xs. (empty? xs) 0 - (succ ((empty? (tail xs)) 0 (succ ((H H) (tail (tail xs)))))) + ≡ \xs. (empty? xs) 0 (succ ( + (\xs. (empty? xs) 0 (succ ((H H) (tail xs)))) + (tail xs) )) + <~~> \xs. (empty? xs) 0 (succ ( + (empty? (tail xs)) 0 (succ ((H H) (tail (tail xs)))) )) We're in business! @@ -342,7 +346,7 @@ Since `H H` turns out to be the length function, we can think of `H` by itself as half of the length function (which is why we called it `H`, of course). Can you think up a recursion strategy that involves "dividing" the recursive function into equal thirds `T`, such that the -length function <~~> T T T? +length function <~~> `T T T`? We've started with a particular recursive definition, and arrived at a fixed point for that definition. What's the general recipe? 3. Then compute `H H ≡ ((\u . h (u u)) (\u . h (u u)))` 4. That's the fixed point, the recursive function we're trying to define -So here is a general method for taking an arbitrary h-style recursive function +So here is a general method for taking an arbitrary `h`-style recursive function and returning a fixed point for that function: Y ≡ \h. (\u. h (u u)) (\u. h (u u)) @@ -386,12 +390,11 @@ ignores its second argument.
That means that no matter what we give `K X` as an argument, it will just return `X`. So if `KX <~~> X`, `X` had also better ignore its first argument. But we also have `KX ≡ (\xy.x)X ~~> \y.X`. This means that if `X` ignores its first argument, then `\y.X` will ignore its first two arguments. -So once again, if `KX <~~> X`, `X` also had better ignore at least its +So once again, if `KX <~~> X`, `X` also had better ignore (at least) its first two arguments. Repeating this reasoning, we realize that `X` -must be a function that ignores an infinite series of arguments. +must be a function that ignores as many arguments as you give it. Our expectation, then, is that our recipe for finding fixed points -will build us a function that somehow manages to ignore an infinite -series of arguments. +will build us a term that somehow manages to ignore arbitrarily many arguments. h ≡ \xy.x H ≡ \u.h(uu) ≡ \u.(\xy.x)(uu) ~~> \uy.uu @@ -446,15 +449,15 @@ Yes! That's exactly right. And which formula this is will depend on the particul One (by now obvious) upshot is that the recipes that enable us to name fixed points for any given formula aren't *guaranteed* to give us -*terminating* fixed points. They might give us formulas X such that -neither `X` nor `f X` have normal forms. (Indeed, what they give us -for the square function isn't any of the Church numerals, but is +*terminating* fixed points. They might give us formulas `ξ` such that +neither `ξ` nor `f ξ` have normal forms. (Indeed, what they give us +for the `square` function isn't any of the Church numerals, but is rather an expression with no normal form.) However, if we take care we can ensure that we *do* get terminating fixed points. And this gives us a principled, fully general strategy for doing recursion. It lets us define even functions like the Ackermann function, which were until -now out of our reach.
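The point about *terminating* fixed points can be checked in a strict language. Transcribing `Y ≡ \h. (\u. h (u u)) (\u. h (u u))` literally into Python loops forever, because a strict evaluator reduces `u u` eagerly; but the eta-expanded variant (often called `Z`) delays the self-application until it is needed. A sketch, again substituting Python lists for our list encodings:

```python
# Z is the eta-expanded Y: (u u) is wrapped as \v. (u u) v, so the
# self-application is only reduced when the recursive call is made.
Z = lambda h: (lambda u: h(lambda v: u(u)(v)))(lambda u: h(lambda v: u(u)(v)))

# The h-style recursive definition of length from earlier in the notes:
h = lambda length: (lambda xs: 0 if not xs else 1 + length(xs[1:]))

length = Z(h)  # a terminating fixed point of h
print(length([10, 20, 30]))  # prints 3
```

Since `length` is a fixed point, `h(length)` computes the same function as `length` itself, which you can verify by applying both to the same list.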
It would also let us define list functions on [[the encodings we discussed last week|week3_lists#other-lists]], where it wasn't always clear how to force the computation to "keep going." ###Varieties of fixed-point combinators### @@ -503,19 +506,19 @@ However, when we evaluate the application of our: Ψ (\self (\xs. (empty? xs) 0 (succ (self (tail xs))) )) -to some list `L`, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of: +to some list, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of: \xs. (empty? xs) 0 (succ (self (tail xs))) -to *the tail* of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the encodings we're using don't permit otherwise), at some point one will get a list whose tail is empty, and then the evaluation of that formula to that tail will return `0`. So the recursion eventually bottoms out in a base value. +to *the tail* of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the encodings we've been using so far don't permit otherwise), at some point one will get a list whose tail is empty, and then the evaluation of that formula to that tail will return `0`. So the recursion eventually bottoms out in a base value. ##Fixed-point Combinators Are a Bit Intoxicating## -![tatoo](/y-combinator-fixed.jpg) +[[tattoo|/images/y-combinator-fixed.jpg]] There's a tendency for people to say "Y-combinator" to refer to fixed-point combinators generally. We'll probably fall into that usage ourselves. Speaking correctly, though, the Y-combinator is only one of many fixed-point combinators. -I used `Ψ` above to stand in for an arbitrary fixed-point combinator. I don't know of any broad conventions for this. But this seems a useful one.
+We used `Ψ` above to stand in for an arbitrary fixed-point combinator. We don't know of any broad conventions for this. But this seems a useful one. As we said, there are many other fixed-point combinators as well. For example, Jan Willem Klop pointed out that if we define `L` to be: @@ -701,7 +704,7 @@ for any choice of `X` whatsoever. So the `Y` combinator is only guaranteed to give us one fixed point out of infinitely many --- and not always the intuitively most useful -one. (For instance, the squaring function has `0` as a fixed point, +one. (For instance, the squaring function `\x. mul x x` has `0` as a fixed point, since `0 * 0 = 0`, and `1` as a fixed point, since `1 * 1 = 1`, but `Y (\x. mul x x)` doesn't give us `0` or `1`.) So with respect to the truth-teller paradox, why in the reasoning we've