`successor ∘ square`

and in general:
`(s ∘ f) z`

should be understood as:
`s (f z)`
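To make the composition convention concrete, here is one way to transcribe it into Python (the names `compose`, `successor`, and `square` are my own, used only for illustration):

```python
# Function composition: (s . f) z is understood as s (f z)
def compose(s, f):
    return lambda z: s(f(z))

successor = lambda x: x + 1
square = lambda x: x * x

# (successor . square) applied to 3 is successor (square 3) = 10
print(compose(successor, square)(3))  # 10
```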
Now consider the following series:
z
s z
s (s z)
s (s (s z))
...
Remembering that I is the identity combinator, this could also be written:
```
(I) z
(s) z
(s ∘ s) z
(s ∘ s ∘ s) z
...
```

And we might adopt the following natural shorthand for this:
```
s^{0} z
s^{1} z
s^{2} z
s^{3} z
...
```

We haven't introduced any new constants 0, 1, 2 into the object language, nor any new form of syntactic combination. This is all just a metalanguage abbreviation for:
z
s z
s (s z)
s (s (s z))
...
Church had the idea to implement the number *n* by an operation that accepted an arbitrary function `s` and base value `z` as arguments, and returned `s^{n} z`

as a result. In other words:
```
zero  ≡ \s z. s^{0} z ≡ \s z. z
one   ≡ \s z. s^{1} z ≡ \s z. s z
two   ≡ \s z. s^{2} z ≡ \s z. s (s z)
three ≡ \s z. s^{3} z ≡ \s z. s (s (s z))
...
```
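If you want to experiment with these encodings, here is one way to transcribe them into curried Python; `to_int` is a helper of my own that reads off a native integer by counting how many times `s` gets applied:

```python
# Church numerals, curried as in the definitions above
zero  = lambda s: lambda z: z
one   = lambda s: lambda z: s(z)
two   = lambda s: lambda z: s(s(z))
three = lambda s: lambda z: s(s(s(z)))

# Recover a native int by applying "add one" to 0, n times
def to_int(n):
    return n(lambda x: x + 1)(0)

print(to_int(three))  # 3
```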

This is a very elegant idea. Implementing numbers this way, we'd let the successor function be:
`succ ≡ \n. \s z. s (n s z)`

So, for example:
```
succ two
≡ (\n. \s z. s (n s z)) (\s z. s (s z))
~~> \s z. s ((\s z. s (s z)) s z)
~~> \s z. s (s (s z))
```
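The same reduction can be replayed in Python (a sketch; `to_int` is a helper of my own for reading off the result):

```python
# succ = \n. \s z. s (n s z), transcribed into curried Python
succ = lambda n: lambda s: lambda z: s(n(s)(z))

two = lambda s: lambda z: s(s(z))

# Count applications of s to recover a native int
def to_int(n):
    return n(lambda x: x + 1)(0)

print(to_int(succ(two)))  # 3, i.e. succ two reduces to three
```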

Adding *m* to *n* is a matter of applying the successor function to *n*, *m* times. And we know how to apply an arbitrary function to something *m* times: we give that function, and the base value, to *m* as arguments, because that's exactly what the function we're using to implement *m* *does*. Hence **add** can be defined to be, simply:
\m n. m succ n
Isn't that nice?
Alternatively, one could do:
\m n. \s z. m s (n s z)
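Both definitions can be checked directly in the Python transcription (again with a `to_int` helper of my own):

```python
succ  = lambda n: lambda s: lambda z: s(n(s)(z))
two   = lambda s: lambda z: s(s(z))
three = lambda s: lambda z: s(s(s(z)))

# add = \m n. m succ n: apply succ to n, m times
add  = lambda m: lambda n: m(succ)(n)
# the alternative: \m n. \s z. m s (n s z)
add2 = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

def to_int(n):
    return n(lambda x: x + 1)(0)

print(to_int(add(two)(three)), to_int(add2(two)(three)))  # 5 5
```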
How would we tell whether a number was 0? Well, look again at the implementations of the first few numbers:
```
zero  ≡ \s z. s^{0} z ≡ \s z. z
one   ≡ \s z. s^{1} z ≡ \s z. s z
two   ≡ \s z. s^{2} z ≡ \s z. s (s z)
three ≡ \s z. s^{3} z ≡ \s z. s (s (s z))
...
```

We can see that with the non-zero numbers, the function s is always applied to an argument at least once. With zero, on the other hand, we just get back the base-value. Hence we can determine whether a number is zero as follows:
some-number (\x. false) true
If some-number is zero, this will evaluate to the base value true. If some-number is non-zero, then it will evaluate to the result of applying (\x. false) to the result of applying ... to the result of applying (\x. false) to the base value true. But the result of applying (\x. false) to any argument is always false. So when some-number is non-zero, this expression evaluates to false.
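In the Python transcription, using Python's native booleans as stand-ins for the encoded true and false:

```python
zero = lambda s: lambda z: z
two  = lambda s: lambda z: s(s(z))

# iszero = \n. n (\x. false) true
iszero = lambda n: n(lambda x: False)(True)

print(iszero(zero), iszero(two))  # True False
```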
Perhaps not as elegant as addition, but still decently principled.
Multiplication is even more elegant. Consider that applying an arbitrary function s to a base value z *m × n* times is a matter of applying s to z *n* times, and then doing that again, and again, and so on...for *m* repetitions. In other words, it's a matter of applying the function (\z. n s z) to z *m* times. In other words, *m × n* can be represented as:
\s z. m (\z. n s z) z
which can be eta-reduced to:
\s. m (n s)
and we might abbreviate that as:
`m ∘ n`

Isn't that nice?
And if we *apply* `m` to `n` instead of composing them, we get an implementation of exponentiation (the result represents *n*^{m}).
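Both multiplication and exponentiation can be checked in the Python transcription (a sketch; `to_int` is my own helper):

```python
two   = lambda s: lambda z: s(s(z))
three = lambda s: lambda z: s(s(s(z)))

# mult = \m n. \s. m (n s), i.e. m composed with n
mult = lambda m: lambda n: lambda s: m(n(s))
# exponentiation: apply m to n; the result represents n^m
expt = lambda m: lambda n: m(n)

def to_int(n):
    return n(lambda x: x + 1)(0)

print(to_int(mult(two)(three)))  # 6
print(to_int(expt(two)(three)))  # 9, i.e. 3^2
```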
However, at this point the elegance gives out. The predecessor function is substantially more difficult to construct on this implementation. As with all of these operations, there are several ways to do it, but they all take at least a bit of ingenuity. If you're only first learning programming right now, it would be unreasonable to expect you to be able to figure out how to do it.
However, if on the other hand you do have some experience programming, consider how you might construct a predecessor function for numbers implemented in this way. Using only the resources we've so far discussed. (So you have no general facility for performing recursion, for instance.)
Lists, version 3
----------------
It's possible to follow the same design for implementing lists, too. To see this, let's first step back and consider some of the more complex things you might do with a list. We don't need to think specifically inside the confines of the lambda calculus right now. These are general reflections.
Assume you have a list of five integers, which I'll write using the OCaml notation: `[1; 2; 3; 4; 5]`.
Now one thing you might want to do with the list is to double every member. Another thing you might want to do is to increment every number. More generally, given an arbitrary function `f`, you might want to get the list which is `[f 1; f 2; f 3; f 4; f 5]`. Computer scientists call this **mapping** the function `f` over the list `[1; 2; 3; 4; 5]`.
Another thing you might want to do with the list is to retrieve every member which is even. Or every member which is prime. Or, given an arbitrary function f, you might want to **filter** the original list to a shorter list containing only those elements `x` for which `f x` evaluates to true.
These are very basic, frequently-used operations on lists.
Another operation on lists is a bit harder to get a mental hold of, but is even more fundamental than the two just mentioned. An example of this operation would be if you were to **sum up** the members of the list. What would you do? Well, you'd start with the first element of the list. Actually, for generality, let's say you start with a *seed value*. In this case the seed value can be 0. Then you take the first element of the list and add it to the seed value. Now you have 1. You take the second element of the list, and add it to the result so far. Now you have 3. You take the third element of the list, and add it to the result so far. And so on.
This general form of operation is known as **folding** an operation---in this case, the addition operation---over the list. Addition is symmetric, so it doesn't matter whether you start at the left side of the list or the right. But we can't in general rely on the operations to be symmetric. So there are two notions. This is the **left-fold** of an operation f over our list `[1; 2; 3; 4; 5]` given a seed value z:
f (f (f (f (f z 1) 2) 3) 4) 5
and this is the **right-fold**:
f 1 (f 2 (f 3 (f 4 (f 5 z))))
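Here are the two folds written out iteratively in Python, over a native list (an illustration only); a non-symmetric operation like subtraction shows why the two directions differ:

```python
# Left fold: f (f (f (f (f z 1) 2) 3) 4) 5
def fold_left(f, z, xs):
    for x in xs:
        z = f(z, x)
    return z

# Right fold: f 1 (f 2 (f 3 (f 4 (f 5 z))))
def fold_right(f, xs, z):
    for x in reversed(xs):
        z = f(x, z)
    return z

xs = [1, 2, 3, 4, 5]
print(fold_left(lambda z, x: z - x, 0, xs))   # ((((0-1)-2)-3)-4)-5 = -15
print(fold_right(lambda x, z: x - z, xs, 0))  # 1-(2-(3-(4-(5-0)))) = 3
```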
Church's proposal for implementing the numbers identified the essential behavior of a number *m* to be applying an arbitrary function s to a base value z *m* times. In a similar spirit, we can identify the essential behavior of a list to be folding an arbitrary operation f over the elements of the list and a seed value z. In other words, we could represent the list `[1; 2; 3; 4; 5]` as a function that accepted arbitrary `f` and `z` as arguments, and returned one of the folds above.
You could do this using either sort of fold, but choosing the right fold gives us an implementation closest to Church's encoding of the numbers. Then we'd define `[1; 2; 3; 4; 5]` to be:
\f z. f 1 (f 2 (f 3 (f 4 (f 5 z))))
Compare Church's definition of the number five:
\s z. s (s (s (s (s z))))
This has real elegance, and it makes it easy to implement a number of primitive list operations. For example, checking whether a list implemented in this way is empty is easy. So too is extracting the head of a list known to be non-empty. However, other operations require some ingenuity. Extracting the tail of a list is about as difficult as retrieving the predecessor of a Church number. (This should not be surprising, given how similar in design these implementations are.)
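To see the encoding in action, here is a Python transcription of the fold-based list, along with the easy operations just mentioned (the helper names are my own):

```python
# [1; 2; 3; 4; 5] as its own right-fold, and the empty list
lst   = lambda f: lambda z: f(1)(f(2)(f(3)(f(4)(f(5)(z)))))
empty = lambda f: lambda z: z

# Summing is just folding addition over the list with seed 0
total = lst(lambda x: lambda acc: x + acc)(0)

# isempty: fold (\x _. false) with base value true
isempty = lambda l: l(lambda x: lambda _: False)(True)

# head of a non-empty list: fold (\x _. x); the base value
# is only returned when the list is empty
head = lambda l: l(lambda x: lambda _: x)(None)

print(total, isempty(empty), isempty(lst), head(lst))  # 15 True False 1
```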