or `\m y n. m n y`, which is the `C` combinator. So in such a
context, (2) might denote

    Y C ≡
    (\h. (\u. h (u u)) (\u. h (u u))) C ~~>
    (\u. C (u u)) (\u. C (u u)) ~~>
    C ((\u. C (u u)) (\u. C (u u))) ~~>
    C (C ((\u. C (u u)) (\u. C (u u)))) ~~>
    C (C (C ((\u. C (u u)) (\u. C (u u))))) ~~>
    ...

An infinite sequence of `C`s, each one negating the remainder of the
sequence. Yep, that feels like a reasonable representation of the
liar paradox.
A: Right:

    let Y = \h. (\u. h (u u)) (\u. h (u u)) in
    Y Y ≡
    (\h. (\u. h (u u)) (\u. h (u u))) Y ~~>
    (\u. Y (u u)) (\u. Y (u u)) ~~>
    Y ((\u. Y (u u)) (\u. Y (u u))) ~~>
    Y (Y ((\u. Y (u u)) (\u. Y (u u)))) <~~>
    Y (Y (Y (...(Y (Y Y))...)))

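
This endless unfolding is exactly what you get if you transcribe `Y` literally into an eager language. A quick sketch in Python (nothing here is assumed beyond the definition above): under call-by-value, `u u` is evaluated before `h` is ever reached, so applying `Y` to any function at all exhausts the stack.

```python
# Y = \h. (\u. h (u u)) (\u. h (u u)), transcribed literally into Python.
Y = lambda h: (lambda u: h(u(u)))(lambda u: h(u(u)))

# Under call-by-value, u(u) unfolds forever before h is ever applied,
# so even a harmless argument like the identity function diverges.
try:
    Y(lambda x: x)
    diverged = False
except RecursionError:
    diverged = True

print(diverged)  # True
```
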
A: Is that a question?
Let's come at it from the direction of arithmetic. Recall that we
claimed that even `succ` --- the function that added one to any
number --- had a fixed point. How could there be a `ξ` such that `ξ <~~> succ ξ`?
That would imply that

    ξ <~~> succ ξ <~~> succ (succ ξ) <~~> succ (succ (succ ξ)) <~~> succ (...(succ ξ)...)

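
In a lazy setting such a `ξ` really can be built. Here is a hedged sketch in Python, using zero-argument thunks to delay evaluation; the `("S", thunk)` encoding of numerals is our own illustrative choice, not anything from the notes.

```python
# Delay evaluation with thunks so the "infinite numeral" can exist.
def succ(thunk):
    """Wrap a delayed numeral in one more successor."""
    return ("S", thunk)

def xi():
    """xi = succ xi: peeling off a successor gives back xi itself."""
    return succ(xi)

# Peel off three successors; at every step another `S` appears,
# and what remains is still xi.
n = xi()
for _ in range(3):
    tag, rest = n
    assert tag == "S"
    n = rest()
```
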
`A 1 x` is to `A 0 x` as addition is to the successor function;
`A 2 x` is to `A 1 x` as multiplication is to addition;
`A 3 x` is to `A 2 x` as exponentiation is to multiplication ---
so `A 4 x` is to `A 3 x` as hyper-exponentiation is to exponentiation...
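
For concreteness, here is the standard Ackermann--Péter definition in Python. The notes' `A` may index its arguments slightly differently, so treat the exact output values as illustrative of the "each level iterates the one below" pattern rather than definitive.

```python
import sys
sys.setrecursionlimit(100_000)  # A recurses deeply even on tiny inputs

def A(m, n):
    """Standard Ackermann-Peter function."""
    if m == 0:
        return n + 1              # level 0: the bare successor
    if n == 0:
        return A(m - 1, 1)
    return A(m - 1, A(m, n - 1))  # level m iterates level m-1

# Each level behaves like the next arithmetic operation:
print(A(1, 5))  # 7   -- roughly addition:        n + 2
print(A(2, 5))  # 13  -- roughly multiplication:  2n + 3
print(A(3, 5))  # 253 -- roughly exponentiation:  2^(n+3) - 3
```
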
## Q: What other questions should I be asking? ##
* What is it about the "primed" fixed-point combinators `Θ′` and `Y′` that
  makes them compatible with a call-by-value evaluation strategy?

* What *exactly* is primitive recursion?

* How do you know that the Ackermann function can't be computed
  using primitive recursion techniques?

* I hear that `Y` delivers the/a *least* fixed point. Least
according to what ordering? How do you know it's least?
Is leastness important?
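
On the first of these questions, one source of intuition: eta-expand the self-application so it only unfolds when the recursive call is actually made. A sketch in Python of the call-by-value variant (often written `Z` or `Y′`); the factorial is just our choice of test function.

```python
# Z = \h. (\u. h (\v. u u v)) (\u. h (\v. u u v))
# The inner \v. ... delays u(u) until a recursive call really happens,
# which is what makes Z safe under call-by-value where Y diverges.
Z = lambda h: (lambda u: h(lambda v: u(u)(v)))(lambda u: h(lambda v: u(u)(v)))

fact = Z(lambda f: lambda n: 1 if n == 0 else n * f(n - 1))
print(fact(5))  # 120
```
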