OK, so how do we make use of this?

Recall our initial, abortive attempt above to define the `get_length` function in the lambda calculus. We said "What we really want to do is something like this:

    \lst. (isempty lst) zero (add one (... (extract-tail lst)))

where this very same formula occupies the `...` position."

Now, what if we *were* somehow able to get ahold of this formula, as an additional argument? We could take that argument and plug it into the `...` position. Something like this:

    \self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) )
This is an abstraction of the form:

    \self. body

where `body` is the expression:

    \lst. (isempty lst) zero (add one (self (extract-tail lst)))

containing an occurrence of `self`.
Now consider: what would be a fixed point of our expression `\self. body`? It would be some expression `X` such that:

    X <~~> (\self. body) X

Beta-reducing the right-hand side, we get:

    X <~~> body [self := X]

Think about what this says. It says that if you substitute `X` for `self` in our formula `body`:

    \lst. (isempty lst) zero (add one (X (extract-tail lst)))

what you get is "equivalent" to (that is, convertible with) `X` itself. That is, the `X` inside the above expression is equivalent to the whole expression. So the expression *does*, in a sense, contain itself!
Let's go over that again. If we had a fixed point `X` for our expression `\self. ...self...`, then by the definition of a fixed point, this has to be true:

    X <~~> (\self. ...self...) X

but beta-reducing the right-hand side, we get something of the form:

    X <~~> ...X...

So on the right-hand side we have a complex expression, that contains some occurrences of whatever our fixed point `X` is, and `X` is convertible with *that very complex, right-hand side expression.*

So we really *can* define `get_length` in the way we were initially attempting, in the bare lambda calculus, where Scheme and OCaml's souped-up `let rec` constructions aren't primitively available. (In fact, what we're doing here is the natural way to implement `let rec`.)
This all turns on having a way to generate a fixed point for our "starting formula":

    \self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) )

Where do we get it?

Suppose we have some **fixed-point combinator** <code>Ψ</code>: that is, some function that returns, for any expression `f` we give it as argument, a fixed point for `f`. In other words:

<pre><code>Ψ f <~~> f (Ψ f)</code></pre>
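To make this concrete, here is a sketch in Python, a stand-in for the lambda calculus. The `fix` below is a transcription of the eta-expanded <code>Y′</code> combinator given further down (Python evaluates call-by-value, so the unexpanded versions wouldn't terminate), and since Python can't test two function terms for convertibility, we check the two sides of the equation extensionally, by applying them to sample arguments:

```python
# A fixed-point combinator: the eta-expanded Y', transcribed from
# \f. (\u. f (\n. u u n)) (\u. f (\n. u u n)). The \n. ... n wrapper
# delays the self-application so call-by-value evaluation terminates.
fix = lambda f: (lambda u: f(lambda n: u(u)(n)))(lambda u: f(lambda n: u(u)(n)))

# Any functional f will do for the check; this one's fixed point
# computes the length of a Python list (our stand-in for lambda
# calculus lists).
f = lambda self: lambda lst: 0 if not lst else 1 + self(lst[1:])

lhs = fix(f)     # Psi f
rhs = f(fix(f))  # f (Psi f)
for sample in ([], ["a"], ["a", "b", "c"]):
    assert lhs(sample) == rhs(sample)
```

Agreement on the arguments we try is of course weaker than convertibility, but it's the best a strict language can show us mechanically.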

Then applying <code>Ψ</code> to the "starting formula" displayed above would give us our fixed point `X` for the starting formula:

<pre><code>Ψ (\self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) ))</code></pre>

And this is the fully general strategy for defining recursive functions in the lambda calculus. You begin with a "body formula":

    ...self...

containing free occurrences of `self` that you treat as being equivalent to the body formula itself. In the case we're considering, that was:

    \lst. (isempty lst) zero (add one (self (extract-tail lst)))

You bind the free occurrences of `self` as: `\self. body`. And then you generate a fixed point for this larger expression:
+
+<pre><code>Ψ (\self. body)</code></pre>
+
+using some fixed-point combinator <code>Ψ</code>.
+
+Isn't that cool?
+
+##Okay, then give me a fixed-point combinator, already!##
+
+Many fixed-point combinators have been discovered. (And given a fixed-point combinators, there are ways to use it as a model to build infinitely many more, non-equivalent fixed-point combinators.)
+
+Two of the simplest:
+
+<pre><code>Θ′ ≡ (\u f. f (\n. u u f n)) (\u f. f (\n. u u f n))
+Y′ ≡ \f. (\u. f (\n. u u n)) (\u. f (\n. u u n))</code></pre>
+
Θ′ has the advantage that <code>Θ′ f</code> really *reduces to* <code>f (\n. Θ′ f n)</code>, an expression literally containing <code>Θ′ f</code>.

<code>Y′ f</code>, by contrast, never reduces to an expression containing <code>Y′ f</code> itself; it and <code>f (Y′ f)</code> are merely interconvertible. For most purposes, though, either will do.

You may notice that both of these formulas have eta-redexes inside them: why can't we simplify the two `\n. u u f n` inside <code>Θ′</code> to just `u u f`? And similarly for <code>Y′</code>?

Indeed you can, getting the simpler:

<pre><code>Θ ≡ (\u f. f (u u f)) (\u f. f (u u f))
Y ≡ \f. (\u. f (u u)) (\u. f (u u))</code></pre>

I stated the more complex formulas for the following reason: in a language whose evaluation order is *call-by-value*, the evaluation of <code>Θ (\self. body)</code> and <code>Y (\self. body)</code> will in general not terminate. But evaluation of the eta-unreduced primed versions will.
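You can watch this happen in Python, which evaluates call-by-value. In the sketch below (with a Python list standing in for a lambda calculus list), the direct transcription of the unprimed <code>Y</code> overflows the stack before it ever returns a function, while the primed version behaves:

```python
# Curry's Y, transcribed directly: \f. (\u. f (u u)) (\u. f (u u)).
Y = lambda f: (lambda u: f(u(u)))(lambda u: f(u(u)))

# The eta-expanded Y': \f. (\u. f (\n. u u n)) (\u. f (\n. u u n)).
# The \n. ... n wrapper postpones the self-application u u until an
# argument is actually supplied.
Y_prime = lambda f: (lambda u: f(lambda n: u(u)(n)))(lambda u: f(lambda n: u(u)(n)))

length_body = lambda self: lambda lst: 0 if not lst else 1 + self(lst[1:])

print(Y_prime(length_body)(["a", "b", "c"]))  # -> 3

# Under call-by-value, the argument u(u) must be fully evaluated
# before f ever sees it, so Y(f) recurses without bound:
try:
    Y(length_body)
except RecursionError:
    print("Y(length_body) never terminates; Python cut it off")
```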

Of course, if you define your `\self. body` stupidly, your formula will never terminate. For example, it doesn't matter what fixed-point combinator you use for <code>Ψ</code> in:

<pre><code>Ψ (\self. \n. self n)</code></pre>

When you try to evaluate the application of that to some argument `M`, it's going to try to give you back:

    (\n. self n) M

where `self` is equivalent to the very formula `\n. self n` that contains it. So the evaluation will proceed:

    (\n. self n) M ~~>
    self M ~~>
    (\n. self n) M ~~>
    self M ~~>
    ...

You've written an infinite loop!
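Transcribed into Python (again using the eta-expanded <code>Y′</code> as our <code>Ψ</code>, since Python is call-by-value), the fixed point itself is constructed without trouble, but applying it to any argument loops exactly as the reduction sequence above predicts:

```python
# Eta-expanded Y', safe under call-by-value evaluation.
fix = lambda f: (lambda u: f(lambda n: u(u)(n)))(lambda u: f(lambda n: u(u)(n)))

# \self. \n. self n: a body that does nothing but re-invoke itself
# on the very same argument.
loop = fix(lambda self: lambda n: self(n))  # building it is fine

try:
    loop(42)  # (\n. self n) M ~~> self M ~~> (\n. self n) M ~~> ...
except RecursionError:
    print("infinite loop; Python's recursion limit cut it off")
```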

However, when we evaluate the application of our:

<pre><code>Ψ (\self. (\lst. (isempty lst) zero (add one (self (extract-tail lst))) ))</code></pre>

to some list `L`, we're not going to go into an infinite evaluation loop of that sort. At each cycle, we're going to be evaluating the application of:

    \lst. (isempty lst) zero (add one (self (extract-tail lst)))

to *the tail* of the list we were evaluating its application to at the previous stage. Assuming our lists are finite (and the implementations we're using don't permit otherwise), at some point we'll reach the empty list, and the application of the formula to it will simply return `zero`. So the recursion eventually bottoms out in a base value.
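Here is the same bottoming-out, traced in a Python sketch (the eta-expanded <code>Y′</code> as <code>Ψ</code>, with Python lists and `lst[1:]` standing in for our lambda calculus lists and `extract-tail`):

```python
# Eta-expanded Y', safe under call-by-value evaluation.
fix = lambda f: (lambda u: f(lambda n: u(u)(n)))(lambda u: f(lambda n: u(u)(n)))

def body(self):
    def step(lst):
        print("applying to", lst)
        # (isempty lst) zero (add one (self (extract-tail lst)))
        return 0 if not lst else 1 + self(lst[1:])
    return step

get_length = fix(body)
print(get_length(["a", "b", "c"]))
# applying to ['a', 'b', 'c']
# applying to ['b', 'c']
# applying to ['c']
# applying to []        <- isempty: the recursion bottoms out at zero
# 3
```

Each cycle applies `step` to the tail of the previous list, so on a finite list the trace shrinks to `[]` and returns `0`, just as described.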

##Fixed-point Combinators Are a Bit Intoxicating##

![tattoo](/y-combinator.jpg)

There's a tendency for people to say "Y-combinator" to refer to fixed-point combinators generally. We'll probably fall into that usage ourselves. Speaking correctly, though, the Y-combinator is only one of many fixed-point combinators.

I used <code>Ψ</code> above to stand in for an arbitrary fixed-point combinator. I don't know of any broad convention for this, but it seems a useful one.

As we said, there are many other fixed-point combinators as well. For example, Jan Willem Klop pointed out that if we define `L` to be:

    \a b c d e f g h i j k l m n o p q s t u v w x y z r. (r (t h i s i s a f i x e d p o i n t c o m b i n a t o r))

then this is a fixed-point combinator:

    L L L L L L L L L L L L L L L L L L L L L L L L L L