+ X <~~> ...X...
+
+So on the right-hand side we have a complex expression, that contains some occurrences of whatever our fixed-point `X` is, and `X` is convertible with *that very complex, right-hand side expression.*
+
+So we really *can* define `get_length` in the way we were initially attempting, in the bare lambda calculus, where Scheme and OCaml's souped-up `let rec` constructions aren't primitively available. (In fact, what we're doing here is the natural way to implement `let rec`.)
+
+This all turns on having a way to generate a fixed-point for our "starting formula":
+
+ \self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) )
+
+Where do we get it?
+
+Suppose we have some **fixed-point combinator**
+<code>Ψ</code>. That is, some function that returns, for any expression `f` we give it as argument, a fixed point for `f`. In other words:
+
+<pre><code>Ψ f <~~> f (Ψ f)</code></pre>
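+
+Here's a minimal sketch of such a <code>Ψ</code> in OCaml (the name `fix` is ours, and we're cheating by using OCaml's built-in `let rec`, which the bare lambda calculus lacks, just to make the equation executable):
+
+<pre><code>(* fix f x = f (fix f) x, so fix f behaves just like f (fix f):
+   exactly the equation displayed above. The extra argument x is
+   needed because OCaml is call-by-value; more on that below. *)
+let rec fix f x = f (fix f) x
+(* val fix : (('a -> 'b) -> 'a -> 'b) -> 'a -> 'b *)</code></pre>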
+
+Then applying <code>Ψ</code> to the "starting formula" displayed above would give us our fixed point `X` for the starting formula:
+
+<pre><code>Ψ (\self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) ))</code></pre>
+
+And this is the fully general strategy for
+defining recursive functions in the lambda calculus. You begin with a "body formula":
+
+ ...self...
+
+containing free occurrences of `self` that you treat as being equivalent to the body formula itself. In the case we're considering, that was:
+
+ \lst. (isempty lst) zero (add one (self (extract-tail lst)))
+
+You bind the free occurrences of `self` as: `\self. BODY`. And then you generate a fixed point for this larger expression:
+
+<pre><code>Ψ (\self. BODY)</code></pre>
+
+using some fixed-point combinator <code>Ψ</code>.
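+
+Transcribed into OCaml, as a sketch: `fix` from above plays the role of <code>Ψ</code>, and OCaml's native list primitives stand in for `isempty`, `zero`, `add one`, and `extract-tail`:
+
+<pre><code>let rec fix f x = f (fix f) x
+
+(* Ψ (\self. \lst. (isempty lst) zero (add one (self (extract-tail lst)))) *)
+let get_length = fix (fun self lst ->
+  match lst with
+  | [] -> 0                      (* (isempty lst) zero *)
+  | _ :: tail -> 1 + self tail)  (* add one (self (extract-tail lst)) *)
+
+let () = assert (get_length [10; 20; 30] = 3)</code></pre>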
+
+Isn't that cool?
+
+##Okay, then give me a fixed-point combinator, already!##
+
+Many fixed-point combinators have been discovered. (And some fixed-point combinators give us models for building infinitely many more, non-equivalent fixed-point combinators.)
+
+Two of the simplest:
+
+<pre><code>Θ′ ≡ (\u f. f (\n. u u f n)) (\u f. f (\n. u u f n))
+Y′ ≡ \f. (\u. f (\n. u u n)) (\u. f (\n. u u n))</code></pre>
+
+<code>Θ′</code> has the advantage that <code>Θ′ f</code> really *reduces to* <code>f (Θ′ f)</code> (modulo the eta-expansion visible in its definition). Whereas <code>Y′ f</code> is only *convertible with* <code>f (Y′ f)</code>; that is, there's a common formula they both reduce to. For most purposes, though, either will do.
+
+You may notice that both of these formulas have eta-redexes inside them: why can't we simplify the two `\n. u u f n` inside <code>Θ′</code> to just `u u f`? And similarly for <code>Y′</code>?
+
+Indeed you can, getting the simpler:
+
+<pre><code>Θ ≡ (\u f. f (u u f)) (\u f. f (u u f))
+Y ≡ \f. (\u. f (u u)) (\u. f (u u))</code></pre>
+
+I stated the more complex formulas for the following reason: in a language whose evaluation order is *call-by-value*, the evaluation of <code>Θ (\self. BODY)</code> and `Y (\self. BODY)` will in general not terminate. But evaluation of the eta-unreduced primed versions will.
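+
+OCaml is itself a call-by-value language, so we can watch the difference bite. Here's a sketch (all the names are ours):
+
+<pre><code>(* Unguarded: to apply bad_fix to f, call-by-value first evaluates
+   the argument f (bad_fix f), which first evaluates bad_fix f,
+   which ... loops before f ever gets to run. *)
+let rec bad_fix f = f (bad_fix f)
+
+(* Eta-expanding the recursive occurrence, as in Θ′ and Y′, wraps
+   it in a lambda. Lambdas are values, so the recursion is delayed
+   until an argument actually arrives. *)
+let rec fix f x = f (fix f) x
+
+(* We can even write Y′ with no let rec at all, smuggling the
+   self-application past the type checker with a wrapper type: *)
+type ('a, 'b) mu = Roll of (('a, 'b) mu -> 'a -> 'b)
+let unroll (Roll w) = w
+
+(* Y′ ≡ \f. (\u. f (\n. u u n)) (\u. f (\n. u u n)) *)
+let y' f =
+  let w u = f (fun n -> unroll u u n) in
+  w (Roll w)
+
+let () = assert (y' (fun self n -> if n = 0 then 0 else 1 + self (n - 1)) 5 = 5)</code></pre>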
+
+Of course, if you define your `\self. BODY` stupidly, the resulting evaluation will never terminate. For example, it doesn't matter what fixed-point combinator you use for <code>Ψ</code> in:
+
+<pre><code>Ψ (\self. \n. self n)</code></pre>
+
+When you try to evaluate the application of that to some argument `M`, it's going to try to give you back:
+
+ (\n. self n) M
+
+where `self` is equivalent to the very formula `\n. self n` that contains it. So the evaluation will proceed:
+
+ (\n. self n) M ~~>
+ self M ~~>
+ (\n. self n) M ~~>
+ self M ~~>
+ ...
+
+You've written an infinite loop!
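+
+In OCaml terms (again a sketch, with `fix` as before):
+
+<pre><code>let rec fix f x = f (fix f) x
+
+(* Ψ (\self. \n. self n): every call just makes the same call again *)
+let loop = fix (fun self n -> self n)
+
+(* Evaluating   loop m   for any m cycles forever, so don't run:
+   let _ = loop 0 *)</code></pre>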
+
+However, when we evaluate the application of our:
+
+<pre><code>Ψ (\self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) ))</code></pre>
+
+to some list `L`, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of:
+
+ \lst. (isempty lst) zero (add one (self (extract-tail lst)))
+
+to *the tail* of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the implementations we're using don't permit otherwise), at some point the recursive call will be applied to an empty list, and evaluating the formula on that list will return `zero` with no further recursion. So the recursion eventually bottoms out in a base value.
+
+##Fixed-point Combinators Are a Bit Intoxicating##
+
+![tattoo](/y-combinator-fixed.jpg)
+
+There's a tendency for people to say "Y-combinator" to refer to fixed-point combinators generally. We'll probably fall into that usage ourselves. Speaking correctly, though, the Y-combinator is only one of many fixed-point combinators.
+
+I used <code>Ψ</code> above to stand in for an arbitrary fixed-point combinator. I don't know of any broad convention for this, but it seems a useful one.
+
+As we said, there are many other fixed-point combinators as well. For example, Jan Willem Klop pointed out that if we define `L` to be:
+
+ \a b c d e f g h i j k l m n o p q s t u v w x y z r. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))
+
+then this is a fixed-point combinator:
+
+ L L L L L L L L L L L L L L L L L L L L L L L L L L
+
+
+##Watching Y in action##
+
+For those of you who like to watch ultra slow-mo movies of bullets
+piercing apples, here's a stepwise computation of the application of a
+recursive function. We'll use a function `sink`, which takes one
+argument. If the argument is boolean true (i.e., `\x y.x`), it
+returns itself (a copy of `sink`); if the argument is boolean false
+(`\x y. y`), it returns `I`. That is, we want the following behavior:
+
+ sink false ~~> I
+ sink true false ~~> I
+ sink true true false ~~> I
+ sink true true true false ~~> I
+
+So we make `sink = Y (\f b. b f I)`:
+
+ 1. sink false
+ 2. Y (\fb.bfI) false
+ 3. (\f. (\h. f (h h)) (\h. f (h h))) (\fb.bfI) false
+ 4. (\h. [\fb.bfI] (h h)) (\h. [\fb.bfI] (h h)) false
+ 5. [\fb.bfI] ((\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))) false
+ 6. (\b.b[(\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))]I) false
+ 7. false [(\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))] I
+ --------------------------------------------
+ 8. I
+
+So far so good. The crucial thing to note is that as long as we
+always reduce the outermost redex first, we never have to get around
+to computing the underlined redex: because `false` ignores its first
+argument, we can throw it away unreduced.
+
+Now we try the next most complex example:
+
+ 1. sink true false
+ 2. Y (\fb.bfI) true false
+ 3. (\f. (\h. f (h h)) (\h. f (h h))) (\fb.bfI) true false
+ 4. (\h. [\fb.bfI] (h h)) (\h. [\fb.bfI] (h h)) true false
+ 5. [\fb.bfI] ((\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))) true false
+ 6. (\b.b[(\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))]I) true false
+ 7. true [(\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))] I false
+ 8. [(\h. [\fb.bfI] (h h))(\h. [\fb.bfI] (h h))] false
+
+We've now arrived at line (4) of the first computation, so the result
+is again `I`.
+
+You should be able to see that `sink` will consume as many `true`s as
+we throw at it, then turn into the identity function after it
+encounters the first `false`.
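+
+Here's a typed OCaml sketch of the same behavior. The wrapper type and the names are ours: OCaml can't directly type a function that returns itself, so we box the result in a variant:
+
+<pre><code>type result = Self of (bool -> result) | I
+
+let rec fix f x = f (fix f) x
+
+(* sink = Y (\f b. b f I), with Church booleans traded for if/else *)
+let sink : bool -> result =
+  fix (fun self b -> if b then Self self else I)
+
+(* sink true true false ~~> I *)
+let () =
+  match sink true with
+  | I -> assert false
+  | Self s ->
+      match s true with
+      | I -> assert false
+      | Self s' ->
+          match s' false with
+          | I -> ()              (* the recursion has stopped *)
+          | Self _ -> assert false</code></pre>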
+
+The key to the recursion is that, thanks to Y, the definition of
+`sink` contains within it the ability to fully regenerate itself as
+many times as is necessary. The key to *ending* the recursion is that
+the behavior of `sink` is sensitive to the nature of the input: if the
+input is the magic function `false`, the self-regeneration machinery
+will be discarded, and the recursion will stop.
+
+That's about as simple as recursion gets.
+
+##Base cases, and their lack##
+
+As any functional programmer quickly learns, writing a recursive
+function divides into two tasks: figuring out how to handle the
+recursive case, and remembering to insert a base case. The
+interesting and enjoyable part is figuring out the recursive pattern,
+but the base case cannot be ignored, since leaving out the base case
+creates a program that runs forever. For instance, consider computing
+a factorial: `n!` is `n * (n-1) * (n-2) * ... * 1`. The recursive
+case says that the factorial of a number `n` is `n` times the
+factorial of `n-1`. But if we leave out the base case, we get
+
+ 3! = 3 * 2! = 3 * 2 * 1! = 3 * 2 * 1 * 0! = 3 * 2 * 1 * 0 * -1! ...
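+
+In OCaml the same omission is just as easy to write down, and just as fatal (a sketch; the name is ours):
+
+<pre><code>(* No base case: bad_fac 3 = 3 * 2 * 1 * 0 * -1 * ... forever *)
+let rec bad_fac n = n * bad_fac (n - 1)
+(* let _ = bad_fac 3     <- would recurse without end *)</code></pre>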
+
+That's why it's crucial to declare that 0! = 1, in which case the
+recursive rule does not apply. In our terms,
+
+ fac = Y (\fac n. iszero n 1 (mult n (fac (predecessor n))))
+
+If `n` is 0, `fac` reduces to 1, without computing the recursive case.
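+
+Here's that definition transcribed into OCaml, with `fix` again standing in for the fixed-point combinator (a sketch):
+
+<pre><code>let rec fix f x = f (fix f) x
+
+(* fac = Y (\fac n. iszero n 1 (mult n (fac (predecessor n)))) *)
+let fac = fix (fun fac n -> if n = 0 then 1 else n * fac (n - 1))
+
+let () = assert (fac 3 = 6)</code></pre>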
+
+There is a well-known problem in philosophy and natural language
+semantics that has the flavor of a recursive function without a base
+case: the truth-teller paradox (and related paradoxes).
+
+(1) This sentence is true.
+
+If we assume that the complex demonstrative "this sentence" can refer
+to (1), then the proposition expressed by (1) will be true just in
+case the thing referred to by *this sentence* is true. Thus (1) will
+be true just in case (1) is true, and (1) is true just in case (1) is
+true, and so on. If (1) is true, then (1) is true; but if (1) is not
+true, then (1) is not true.
+
+Without pretending to give a serious analysis of the paradox, let's
+assume that sentences can have for their meaning boolean functions
+like the ones we have been working with here. Then the sentence *John
+is John* might denote the function `\x y. x`, our `true`.
+
+Then (1) denotes a function from whatever the referent of *this
+sentence* is to a boolean. So (1) denotes `\f. f true false`, where
+the argument `f` is the referent of *this sentence*. Of course, if
+`f` is a boolean, `f true false <~~> f`, so for our purposes, we can
+assume that (1) denotes the identity function `I`.
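+
+We can spot-check that collapse with Church booleans in OCaml (a sketch; the names are ours):
+
+<pre><code>(* Church booleans, and the claim that f true false <~~> f: *)
+let ctrue x y = x
+let cfalse x y = y
+let denote f = f ctrue cfalse   (* the meaning \f. f true false *)
+
+let () = assert (denote ctrue 1 2 = 1)    (* acts like ctrue  *)
+let () = assert (denote cfalse 1 2 = 2)   (* acts like cfalse *)</code></pre>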
+
+If we use (1) in a context in which *this sentence* refers to the
+sentence in which the demonstrative occurs, then we must find a
+meaning `m` for the sentence such that `m = I m`: the sentence's
+meaning is the result of applying its denotation `I` to the meaning
+of its subject, and in this context that subject's meaning is `m`
+itself. In other words, `m` is a fixed point for the denotation of
+the sentence (when used in the appropriate context).
+
+That means that in a context in which *this sentence* refers to the
+sentence in which it occurs, the sentence denotes a fixed point for
+the identity function. Here's a fixed point for the identity
+function:
+
+<pre><code>Y I
+(\f. (\h. f (h h)) (\h. f (h h))) I
+(\h. I (h h)) (\h. I (h h))
+(\h. (h h)) (\h. (h h))
+ω ω
+Ω
+</code></pre>
+
+Oh. Well! That feels right. The meaning of *This sentence is true*
+in a context in which *this sentence* refers to the sentence in which
+it occurs is <code>Ω</code>, our prototypical infinite loop...
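+
+The OCaml rendering agrees (a sketch; names ours): a fixed point of the identity function is a perfectly well-typed term whose application never returns.
+
+<pre><code>let rec fix f x = f (fix f) x
+
+(* the truth-teller's meaning: a fixed point of the identity *)
+let tt = fix (fun m -> m)
+(* let _ = tt 0    <- diverges: fix id x = id (fix id) x = fix id x *)</code></pre>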
+
+What about the liar paradox?
+
+(2) This sentence is false.
+
+Used in a context in which *this sentence* refers to the utterance of
+(2) in which it occurs, (2) will denote a fixed point for `\f.neg f`,
+or `\f l r. f r l`, which is the `C` combinator. So in such a
+context, (2) might denote
+
+ Y C
+ (\f. (\h. f (h h)) (\h. f (h h))) C
+ (\h. C (h h)) (\h. C (h h))
+ C ((\h. C (h h)) (\h. C (h h)))
+ C (C ((\h. C (h h))(\h. C (h h))))
+ C (C (C ((\h. C (h h))(\h. C (h h)))))
+ ...
+
+An infinite sequence of `C`s, each one negating the remainder of the
+sequence. Yep, that feels like a reasonable representation of the
+liar paradox.
+
+See Barwise and Etchemendy's 1987 OUP book, [The Liar: An Essay on
+Truth and Circularity](http://tinyurl.com/2db62bk), for an approach
+that is similar, but expressed in terms of non-well-founded sets
+rather than recursive functions.
+
+##However...##