+++ /dev/null
-[[!toc]]
-
-#Q: How do you know that every term in the untyped lambda calculus has a fixed point?#
-
-A: That's easy: let `T` be an arbitrary term in the lambda calculus. To
-say that `T` *has* a fixed point is just to say that there is some `X`
-such that `X <~~> T X`. And we can construct such an `X` directly:
-
-<pre><code>let L = \x. T (x x) in
-let X = L L in
-X ≡ L L ≡ (\x. T (x x)) L ~~> T (L L) ≡ T X
-</code></pre>
-
-Please slow down and make sure that you understand what justifies each
-of the equivalences and the reduction step in the last line.
-
-#Q: How do you know that for any term `T`, `Y T` is a fixed point of `T`?#
-
-A: Note that in the proof given in the previous answer, we chose `T`
-and then set `X = L L = (\x. T (x x)) (\x. T (x x))`. If we abstract over
-`T`, we get the Y combinator, `\T. (\x. T (x x)) (\x. T (x x))`. No matter
-what argument `T` we feed `Y`, it returns some `X` that is a fixed point
-of `T`, by the reasoning in the previous answer.
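-
-If you want to experiment with this on a machine, here is a minimal
-sketch in Python (not part of the original notes; the names `Z` and `T`
-below are just for illustration). Python evaluates arguments eagerly,
-so a direct transcription of the `Y` above would unfold forever before
-returning; the usual workaround is the eta-expanded variant, often
-called `Z`, which wraps the self-application in a lambda so that it is
-only unfolded on demand:
-
-<pre><code># eta-expanded fixed-point combinator (the Z combinator)
-Z = lambda T: (lambda x: T(lambda v: x(x)(v)))(lambda x: T(lambda v: x(x)(v)))
-
-# a sample T: given a candidate function f, build the function that
-# doubles its argument by recursing down to zero
-T = lambda f: lambda n: 0 if n == 0 else 2 + f(n - 1)
-
-# Z T is a fixed point of T in the observable sense:
-# Z T and T (Z T) compute the same results
-assert Z(T)(10) == T(Z(T))(10) == 20
-</code></pre>
-
-The eta-expansion plays the same role that a lazy evaluation order
-plays in the discussion of `fac` below: it postpones pumping out
-another copy of `T` until the result is actually needed.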
-
-#Q: So if every term has a fixed point, even `Y` has a fixed point.#
-
-A: Right:
-
-<pre><code>let Y = \T. (\x. T (x x)) (\x. T (x x)) in
-Y Y ≡ (\T. (\x. T (x x)) (\x. T (x x))) Y
-~~> (\x. Y (x x)) (\x. Y (x x))
-~~> Y ((\x. Y (x x)) (\x. Y (x x)))
-~~> Y (Y ((\x. Y (x x)) (\x. Y (x x))))
-<~~> Y (Y (Y (...(Y (Y Y))...)))</code></pre>
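-
-The same blow-up is easy to watch in Python (a side note, not from the
-original text, and using the eta-expanded `Z` variant of `Y` introduced
-in the sketch above, since Python is eager): applying `Z` to itself
-just keeps pumping out more copies until the interpreter gives up.
-
-<pre><code>Z = lambda T: (lambda x: T(lambda v: x(x)(v)))(lambda x: T(lambda v: x(x)(v)))
-
-try:
-    Z(Z)   # keeps unfolding, much like the expansion above
-except RecursionError:
-    print("Z Z never settles down")
-</code></pre>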
-
-
-#Q: Ouch! Stop hurting my brain.#
-
-A: Is that a question?
-
-Let's come at it from the direction of arithmetic. Recall that we
-claimed that even `succ`---the function that adds one to any
-number---has a fixed point. How could there be an `X` such that `X = X + 1`?
-That would imply that
-
-    X <~~> succ X <~~> succ (succ X) <~~> succ (succ (succ X)) <~~> succ (... (succ X) ...)
-
-In other words, a fixed point of `succ` is a term that is its own
-successor. Let's just check that `X <~~> succ X`:
-
-<pre><code>let succ = \n s z. s (n s z) in
-let X = (\x. succ (x x)) (\x. succ (x x)) in
-X
-≡ (\x. succ (x x)) (\x. succ (x x))
-~~> succ ( (\x. succ (x x)) (\x. succ (x x)) )
-≡ succ X
-~~> succ (succ ( (\x. succ (x x)) (\x. succ (x x)) ))
-≡ succ (succ X)
-</code></pre>
-
-You should see the close similarity with `Y Y` here.
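-
-If it helps to see the numeral machinery concretely, here is a small
-Python sketch of the same Church encoding (not from the original notes;
-`zero`, `to_int`, and the other names are chosen just for this
-illustration). A numeral takes `s` and `z` and applies `s` some number
-of times to `z`, and `succ` wraps one more `s` around:
-
-<pre><code>zero = lambda s: lambda z: z                       # \s z. z
-succ = lambda n: lambda s: lambda z: s(n(s)(z))    # \n s z. s (n s z)
-
-# read a Church numeral back as a Python int by choosing
-# s = "add one" and z = 0
-to_int = lambda n: n(lambda k: k + 1)(0)
-
-two = succ(succ(zero))
-assert to_int(two) == 2         # \s z. s (s z)
-assert to_int(succ(two)) == 3   # succ adds one more s
-</code></pre>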
-
-
-#Q: So `Y` applied to `succ` returns a number that is not finite!#
-
-A: Yes! Let's see why it makes sense to think of `Y succ` as a Church
-numeral:
-
- [same definitions]
- succ X
- = (\n s z. s (n s z)) X
- = \s z. s (X s z)
- = succ (\s z. s (X s z)) ; using fixed-point reasoning
- = \s z. s ([succ (\s z. s (X s z))] s z)
- = \s z. s ([\s z. s ([succ (\s z. s (X s z))] s z)] s z)
-    = \s z. s (s ([succ (\s z. s (X s z))] s z))
-
-So `succ X` looks like a numeral: it takes two arguments, `s` and `z`,
-and returns a sequence of nested applications of `s`...
-
-You should be able to prove that `add 2 (Y succ) <~~> Y succ`, and
-likewise for `mult`, `minus`, and `pow`. What happens if we try
-`minus (Y succ) (Y succ)`? What would you expect infinity minus
-infinity to be? (Hint: choose your evaluation strategy so that you add
-two `s`s to the first number for every `s` that you add to the second
-number.)
-
-This is amazing, by the way: we're proving things about a term that
-represents arithmetic infinity.
-
-It's important to bear in mind that the term in question is not itself
-infinite:
-
- Y succ = (\f. (\x. f (x x)) (\x. f (x x))) (\n s z. s (n s z))
-
-The way that infinity enters into the picture is that this term has
-no normal form: no matter how many times we perform beta reduction,
-there will always be an opportunity for more beta reduction. (Lather,
-rinse, repeat!)
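-
-In an eager language the same point shows up as non-termination in a
-slightly different place. Here is a hedged Python sketch (not from the
-original notes; `Z` is again the eta-expanded stand-in for `Y`, and
-`inf` is just an illustrative name): the term corresponding to `Y succ`
-is a perfectly finite value, but any attempt to read it off as a
-concrete number (that is, to drive it to something like a normal form)
-never returns.
-
-<pre><code>Z = lambda T: (lambda x: T(lambda v: x(x)(v)))(lambda x: T(lambda v: x(x)(v)))
-succ = lambda n: lambda s: lambda z: s(n(s)(z))   # \n s z. s (n s z)
-
-inf = Z(succ)   # a finite expression; builds without any trouble
-print(inf)      # just a function value
-
-try:
-    inf(lambda k: k + 1)(0)   # forcing it adds one more s, forever
-except RecursionError:
-    print("no normal form: the successors never bottom out")
-</code></pre>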
-
-
-#Q: That reminds me, what about [[evaluation order]]?#
-
-A: For a recursive function that has a well-behaved base case, such as
-the factorial function, evaluation order is crucial. In the following
-computation, we will arrive at a normal form. Watch for the moment at
-which we have to make a choice about which beta reduction to perform
-next: one choice leads to a normal form, the other choice leads to
-endless reduction:
-
- let prefac = \f n. iszero n 1 (mult n (f (pred n))) in
- let fac = Y prefac in
- fac 2
- = [(\f.(\x.f(xx))(\x.f(xx))) prefac] 2
- = [(\x.prefac(xx))(\x.prefac(xx))] 2
- = [prefac((\x.prefac(xx))(\x.prefac(xx)))] 2
- = [prefac(prefac((\x.prefac(xx))(\x.prefac(xx))))] 2
- = [(\f n. iszero n 1 (mult n (f (pred n))))
- (prefac((\x.prefac(xx))(\x.prefac(xx))))] 2
- = [\n. iszero n 1 (mult n ([prefac((\x.prefac(xx))(\x.prefac(xx)))] (pred n)))] 2
- = iszero 2 1 (mult 2 ([prefac((\x.prefac(xx))(\x.prefac(xx)))] (pred 2)))
- = mult 2 ([prefac((\x.prefac(xx))(\x.prefac(xx)))] 1)
- ...
- = mult 2 (mult 1 ([prefac((\x.prefac(xx))(\x.prefac(xx)))] 0))
- = mult 2 (mult 1 (iszero 0 1 ([prefac((\x.prefac(xx))(\x.prefac(xx)))] (pred 0))))
- = mult 2 (mult 1 1)
- = mult 2 1
- = 2
-
-The crucial moment comes three steps from the end, at the line
-containing `iszero 0 1 (...)`. There we have a choice: we can evaluate
-the test `iszero 0 1 ...`, which evaluates to `1` no matter what the
-`...` contains; or we can evaluate the `Y` pump,
-`(\x.prefac(xx))(\x.prefac(xx))`, to produce another copy of `prefac`.
-If we postpone evaluating the `iszero` test, we'll pump out copy after
-copy of `prefac`, and never realize that we've bottomed out in the
-recursion. But if we adopt a leftmost/call-by-name/normal-order
-evaluation strategy, we'll always start with the `iszero` predicate,
-and only produce a fresh copy of `prefac` if we are forced to.
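-
-The same moral applies if you want to run this example in a language
-whose evaluation order is fixed. Python is call-by-value, so in the
-sketch below (not from the original notes; the names are illustrative)
-the delay has to be built into the combinator itself: the eta-expanded
-`Z` postpones the pump, and the `if`/`else` evaluates only the branch
-it needs, which together play the role of the "check `iszero` first"
-discipline described above. A direct transcription of `Y` would make
-the other choice and pump forever.
-
-<pre><code># eta-expanded fixed-point combinator: the lambda wrapped around x(x)
-# postpones the pump until the recursive call is actually demanded
-Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
-
-# prefac as in the notes: the if/else is the iszero test, and it only
-# evaluates the branch it needs
-prefac = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)
-
-fac = Z(prefac)
-print(fac(2))   # 2
-print(fac(5))   # 120
-
-# The undelayed version pumps out copies of prefac without ever letting
-# the base-case test fire; Y(prefac) blows the stack before we can even
-# hand it an argument:
-#   Y = lambda f: (lambda x: f(x(x)))(lambda x: f(x(x)))
-#   Y(prefac)   # RecursionError
-</code></pre>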
-