``successor ∘ square``
and in general:
``(s ∘ f) z``
should be understood as: ``s (f z)``

Now consider the following series:

```
z
s z
s (s z)
s (s (s z))
...
```

Remembering that I is the identity combinator, this could also be written:
```
(I) z
(s) z
(s ∘ s) z
(s ∘ s ∘ s) z
...
```
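The pattern in this series — composing s with itself some number of times — can be captured directly. Here is a minimal Haskell sketch (the helper name `ntimes` is mine, not from the text):

```haskell
-- n-fold self-composition: ntimes n f = f ∘ f ∘ ... ∘ f (n copies),
-- with ntimes 0 f being the identity function I.
ntimes :: Int -> (a -> a) -> (a -> a)
ntimes 0 _ = id
ntimes n f = f . ntimes (n - 1) f

main :: IO ()
main = print (ntimes 3 (+ 1) 0)  -- apply successor three times to 0; prints 3
```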
And we might adopt the following natural shorthand for this:
```
s^0 z
s^1 z
s^2 z
s^3 z
...
```
We haven't introduced any new constants 0, 1, 2 into the object language, nor any new form of syntactic combination. This is all just a metalanguage abbreviation for:

```
z
s z
s (s z)
s (s (s z))
...
```

Church had the idea to implement the number *n* by an operation that accepted an arbitrary function `s` and base value `z` as arguments, and returned `s^n z` as a result. In other words:
```
zero  ≡ \s z. s^0 z ≡ \s z. z
one   ≡ \s z. s^1 z ≡ \s z. s z
two   ≡ \s z. s^2 z ≡ \s z. s (s z)
three ≡ \s z. s^3 z ≡ \s z. s (s (s z))
...
```
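As a concrete sanity check, this encoding can be transcribed into Haskell — a sketch of my own, with the numerals specialized to `Int` for simplicity, and a hypothetical helper `toInt` for reading a numeral back out:

```haskell
-- A Church numeral takes a successor function s and a base value z,
-- and applies s to z n times.
type Church = (Int -> Int) -> Int -> Int

zero, one, two, three :: Church
zero  _ z = z
one   s z = s z
two   s z = s (s z)
three s z = s (s (s z))

-- Read a numeral back as an ordinary Int: choose s = (+1) and z = 0.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (map toInt [zero, one, two, three])  -- prints [0,1,2,3]
```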
This is a very elegant idea. Implementing numbers this way, we'd let the successor function be:
``succ ≡ \n. \s z. s (n s z)``
So, for example:
```
    succ two
  ≡ (\n. \s z. s (n s z)) (\s z. s (s z))
~~> \s z. s ((\s z. s (s z)) s z)
~~> \s z. s (s (s z))
```
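Continuing the Haskell sketch from above (encoding mine; the name `succC` just avoids clashing with the Prelude's `succ`):

```haskell
-- Church numerals specialized to Int, as in the earlier sketch.
type Church = (Int -> Int) -> Int -> Int

-- succ ≡ \n. \s z. s (n s z): one more application of s on top of n's.
succC :: Church -> Church
succC n s z = s (n s z)

two :: Church
two s z = s (s z)

toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (succC two))  -- prints 3, mirroring succ two ~~> three
```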
Adding *m* to *n* is a matter of applying the successor function to *n* *m* times. And we know how to apply an arbitrary function s to *n* *m* times: we just give that function s, and the base-value *n*, to *m* as arguments, because that's what the function we're using to implement *m* *does*. Hence **add** can be defined to be, simply:

``add ≡ \m \n. m succ n``

Isn't that nice?

How would we tell whether a number was 0? Well, look again at the implementations of the first few numbers:
```
zero  ≡ \s z. s^0 z ≡ \s z. z
one   ≡ \s z. s^1 z ≡ \s z. s z
two   ≡ \s z. s^2 z ≡ \s z. s (s z)
three ≡ \s z. s^3 z ≡ \s z. s (s (s z))
...
```
We can see that with the non-zero numbers, the function s is always applied to an argument at least once. With zero, on the other hand, we just get back the base-value. Hence we can determine whether a number is zero as follows:

``some-number (\x. false) true``

If some-number is zero, this will evaluate to the base value true. If some-number is non-zero, then it will evaluate to the result of applying (\x. false) to the result of applying ... to the result of applying (\x. false) to the base value true. But the result of applying (\x. false) to any argument is always false. So when some-number is non-zero, this expression evaluates to false. Perhaps not as elegant as addition, but still decently principled.

Multiplication is even more elegant. Consider that applying an arbitrary function s to a base value z *m × n* times is a matter of applying s to z *n* times, and then doing that again, and again, and so on... for *m* repetitions. In other words, it's a matter of applying the function (\z. n s z) to z *m* times. In other words, *m × n* can be represented as:
```
    \s z. m (\z. n s z) z
~~> \s z. m (n s) z
```

which eta-reduces to: ``\s. m (n s)``, that is, to the composition ``m ∘ n``. Isn't that nice?

However, at this point the elegance gives out. The predecessor function is substantially more difficult to construct on this implementation. As with all of these operations, there are several ways to do it, but they all take at least a bit of ingenuity. If you're only first learning programming right now, it would be unreasonable to expect you to be able to figure out how to do it. If, on the other hand, you do have some experience programming, consider how you might construct a predecessor function for numbers implemented in this way, using only the resources we've so far discussed. (So you have no general facility for performing recursion, for instance.)
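To close the loop, the arithmetic operations discussed above — `add`, the zero test, and `mult` — can all be sketched in Haskell. The encoding and names are mine; note that typing `m succ n` requires numerals that work at *any* base type (Bool for the zero test, numerals themselves for `add`), hence the `newtype` wrapper and the RankNTypes extension:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A numeral must be usable at any base type a, so wrap the
-- polymorphic function type in a newtype.
newtype Church = Church (forall a. (a -> a) -> a -> a)

zero, two, three :: Church
zero  = Church (\_ z -> z)
two   = Church (\s z -> s (s z))
three = Church (\s z -> s (s (s z)))

succC :: Church -> Church
succC (Church n) = Church (\s z -> s (n s z))

-- add ≡ \m \n. m succ n: apply the successor function to n, m times.
add :: Church -> Church -> Church
add (Church m) n = m succC n

-- some-number (\x. false) true
isZero :: Church -> Bool
isZero (Church n) = n (\_ -> False) True

-- m × n ≡ \s z. m (n s) z: do "apply s n times" m times over.
mult :: Church -> Church -> Church
mult (Church m) (Church n) = Church (\s z -> m (n s) z)

toInt :: Church -> Int
toInt (Church n) = n (+ 1) 0

main :: IO ()
main = do
  print (toInt (add two three))    -- prints 5
  print (isZero zero, isZero two)  -- prints (True,False)
  print (toInt (mult two three))   -- prints 6
```

A predecessor function is deliberately left out here, per the closing challenge.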