`T` has a fixed point, then there exists some `X` such that `X <~~>
TX` (that's what it means to *have* a fixed point).
Please slow down and make sure that you understand what justified each
of the equalities in the last line.
and then set `X = L L = (\x. T (x x)) (\x. T (x x))`. If we abstract over
`T`, we get the Y combinator, `\T. (\x. T (x x)) (\x. T (x x))`. No matter
what argument `T` we feed `Y`, it returns some `X` that is a fixed point
of `T`, by the reasoning in the previous answer.
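This fixed-point recipe can be run directly in a strict language. Below is a minimal Python sketch (the names `fix`, `prefac`, and `fac` are ours, for illustration): the eta-expansion `lambda v: x(x)(v)` delays the self-application so that a call-by-value evaluator doesn't loop forever, and `fix(prefac)` behaves as a fixed point of `prefac`.

```python
# A strict-language analogue of Y; the eta-expansion delays self-application.
fix = lambda t: (lambda x: t(lambda v: x(x)(v)))(lambda x: t(lambda v: x(x)(v)))

# A non-recursive "step" function whose fixed point is factorial.
prefac = lambda f: lambda n: 1 if n == 0 else n * f(n - 1)

fac = fix(prefac)
print(fac(5))  # 120
```

Because `fac = fix(prefac)` satisfies `fac = prefac(fac)` at every call, the recursion bottoms out exactly as in the hand reductions below.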
#Q: So if every term has a fixed point, even `Y` has a fixed point.#
A: Right:
<pre><code>let Y = \T. (\x. T (x x)) (\x. T (x x)) in
Y Y ≡ \T. (\x. T (x x)) (\x. T (x x)) Y
~~> (\x. Y (x x)) (\x. Y (x x))
~~> Y ((\x. Y (x x)) (\x. Y (x x)))
~~> Y (Y ((\x. Y (x x)) (\x. Y (x x))))
~~> Y (Y (Y (...(Y (Y Y))...)))</code></pre>

claimed that even `succ`---the function that added one to any
number---had a fixed point. How could there be an X such that X = X+1?
That would imply that
In other words, the fixed point of `succ` is a term that is its own
successor. Let's just check that `X = succ X`:
<pre><code>let succ = \n s z. s (n s z) in
let X = (\x. succ (x x)) (\x. succ (x x)) in
succ X
≡ succ ((\x. succ (x x)) (\x. succ (x x)))
~~> succ (succ ((\x. succ (x x)) (\x. succ (x x))))
≡ succ (succ X)
</code></pre>

You should see the close similarity with `Y Y` here.
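The idea of a term that is its own successor can also be modeled with explicit laziness. Here is a hedged Python sketch (the names `inf` and `peel` are ours): a number is a thunk that, when forced, yields a pair of a successor tag and a predecessor thunk, and `inf` forces to a successor of itself.

```python
# inf is its own successor: forcing it always yields another 'S' layer.
def inf():
    return ('S', inf)

def peel(k, n):
    """Force up to k successor layers; inf never runs out of them."""
    seen = 0
    while seen < k:
        _tag, n = n()   # strip one 'S', keep the predecessor thunk
        seen += 1
    return seen

print(peel(5, inf))  # 5
```

However many layers we peel, another successor is waiting underneath, which is just the `X ~~> succ X` behavior above.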
#Q. So `Y` applied to `succ` returns a number that is not finite!#
A. Yes! Let's see why it makes sense to think of `Y succ` as a Church
numeral:
    [same definitions]
    succ X
    = (\n s z. s (n s z)) X
    = \s z. s (X s z)
    = succ (\s z. s (X s z)) ; using fixed-point reasoning
    = \s z. s ([succ (\s z. s (X s z))] s z)
    = \s z. s ([\s z. s ([succ (\s z. s (X s z))] s z)] s z)
    = \s z. s (s ([succ (\s z. s (X s z))] s z))
So `succ X` looks like a numeral: it takes two arguments, `s` and `z`,
and returns a sequence of nested applications of `s`...
The way that infinity enters into the picture is that this term has
no normal form: no matter how many times we perform beta reduction,
there will always be an opportunity for more beta reduction. (Lather,
rinse, repeat!)
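Even so, under a normal-order discipline we can observe any finite prefix of this endless tower of `s`'s. A Python sketch of that idea (all names ours; the thunk in `csucc` plays the role of delayed, normal-order evaluation):

```python
# Strict-language fixed-point combinator, as before.
fix = lambda t: (lambda x: t(lambda v: x(x)(v)))(lambda x: t(lambda v: x(x)(v)))

# Church successor, with the numeral's body wrapped in a thunk so that
# evaluation can stop early instead of reducing forever.
csucc = lambda n: lambda s: lambda z: s(lambda: n(s)(z))

ysucc = fix(csucc)   # applies s without end: s (s (s ...))

# An s that stops forcing its argument once a budget runs out.
s = lambda thunk: lambda budget: 0 if budget == 0 else 1 + thunk()(budget - 1)
z = lambda budget: 0

print(ysucc(s)(z)(4))  # 4: four nested s's observed, more always available
```

No budget ever exhausts the term, which is exactly what "no normal form" amounts to here.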
#Q. That reminds me, what about [[evaluation order]]?#
A. For a recursive function that has a well-behaved base case, such as
    let prefac = \f n. iszero n 1 (mult n (f (pred n))) in
    let fac = Y prefac in
    fac 2
    = [(\f. (\x. f (x x)) (\x. f (x x))) prefac] 2
    = [(\x. prefac (x x)) (\x. prefac (x x))] 2
    = [prefac ((\x. prefac (x x)) (\x. prefac (x x)))] 2
    = [prefac (prefac ((\x. prefac (x x)) (\x. prefac (x x))))] 2
    = [(\f n. iszero n 1 (mult n (f (pred n))))
       (prefac ((\x. prefac (x x)) (\x. prefac (x x))))] 2
    = [\n. iszero n 1 (mult n ([prefac ((\x. prefac (x x)) (\x. prefac (x x)))] (pred n)))] 2
    = iszero 2 1 (mult 2 ([prefac ((\x. prefac (x x)) (\x. prefac (x x)))] (pred 2)))
    = mult 2 ([prefac ((\x. prefac (x x)) (\x. prefac (x x)))] 1)
    ...
    = mult 2 (mult 1 ([prefac ((\x. prefac (x x)) (\x. prefac (x x)))] 0))
    = mult 2 (mult 1 (iszero 0 1 ([prefac ((\x. prefac (x x)) (\x. prefac (x x)))] (pred 0))))
    = mult 2 (mult 1 1)
    = mult 2 1
    = 2
no matter what the ... contains;
or we can evaluate the `Y` pump, `(\x. prefac (x x)) (\x. prefac (x x))`, to
produce another copy of `prefac`. If we postpone evaluating the
realize that we've bottomed out in the recursion. But if we adopt a
leftmost/call-by-name/normal-order evaluation strategy, we'll always
    A(m,n) =
    | when m == 0 -> n + 1
    | else when n == 0 -> A(m-1,1)
    | else -> A(m-1, A(m,n-1))
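The three clauses above transcribe directly into runnable code; a sketch in Python:

```python
# Ackermann's function, following the three clauses above.
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9
```

Even tiny arguments produce deeply nested recursive calls (`ack(4, 2)` already has tens of thousands of digits), which is why Ackermann's function is the standard example of a total computable function that is not primitive recursive.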
along to the continue-leftwards handler.
A **version 5** list encodes the kind of fold operation we're envisaging here, in