Combinators and Combinatory Logic
=================================

Combinatory logic is of interest here in part because it provides a
useful computational system that is equivalent to the Lambda Calculus,
but different from it. In addition, Combinatory Logic has a number of
applications in natural language semantics. Exploring Combinatory
Logic will involve defining a notion of reduction different from the
one we have been using for the Lambda Calculus. This will provide us
with a second parallel example when we're thinking through
topics such as evaluation strategies and recursion.

Lambda expressions that have no free variables are known as **combinators**. Here are some common ones:

> **I** is defined to be: `\x. x`. It's the identity function: it takes one argument and returns it unchanged.

> **K** is defined to be: `\x y. x`. It takes two arguments and returns the first, discarding the second.

> **S** is defined to be: `\f g x. f x (g x)`. This is a more
complicated operation, but is extremely versatile and useful
(see below): it copies its third argument and distributes it
over the first two arguments.

> **fst** was our function for extracting the first element of an ordered pair: `\a b. a`. Compare this to `K` and `true` as well.

> **snd** was our function for extracting the second element of an ordered pair: `\a b. b`. Compare this to our definition of `false`.

> **B** is defined to be: `\f g x. f (g x)`. (So `B f g` is the composition `\x. f (g x)` of `f` and `g`.)

> **C** is defined to be: `\f x y. f y x`. (So `C f` is a function like `f` except it expects its first two (curried) arguments in flipped order.)

> **T** is defined to be: `\x y. y x`. (So `C` and `T` both reorder arguments, just in different ways.)

> **W** is defined to be: `\f x. f x x`. (So `W f` accepts one argument and gives it to `f` twice. What is the meaning of `W multiply`?)

> **ω** (that is, lowercase omega) is defined to be: `\x. x x`. Sometimes this combinator is called **M**. It and `W` both duplicate arguments, just in different ways.

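Since each of these is a closed lambda term, we can transcribe them directly into any language that has lambdas. Here's a quick sketch in Haskell (our own illustration, not part of the notes; the lowercase names are ours, and `ω` is left out because `\x -> x x` isn't well-typed in Haskell without some tricks):

    -- The named combinators as ordinary Haskell functions (a sketch).
    i :: a -> a
    i x = x

    k :: a -> b -> a
    k x y = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    b :: (b -> c) -> (a -> b) -> a -> c
    b f g x = f (g x)

    c :: (a -> b -> c) -> b -> a -> c
    c f x y = f y x

    w :: (a -> a -> b) -> a -> b
    w f x = f x x

    t :: a -> (a -> b) -> b
    t x y = y x

    -- For instance, `W multiply` squares its argument:
    --   w (*) 5      ==  25
    -- and `C` and `T` both reorder arguments:
    --   c (-) 2 10   ==  8
    --   t 2 (+ 1)    ==  3
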
It's possible to build a logical system equally powerful as the Lambda
Calculus (and readily intertranslatable with it) using just
combinators, considered as *primitive operations*. (That is, we
refrain from defining them in terms of lambda expressions, as we did
above.) Such a language doesn't have any variables in it: not just no
free variables, but no variables (or "bound positions") at all.

One can do that with a very spare set of basic combinators. These days
the standard base is just three combinators: `S`, `K`, and `I`.

For instance, Szabolcsi argues that reflexive pronouns are argument
duplicators:

    everyone              hit              himself
    S/(S!NP)              (S!NP)/NP        (S!NP)!((S!NP)/NP)
    \f∀x[fx]              \y\z[HIT y z]    \h\u[huu]

      here "hit" is an argument to "himself"
    S!NP  \u[HIT u u]

      here "hit himself" is an argument to "everyone"
    S  ∀x[HIT x x]

Here, "A" is our crude markdown approximation of the universal quantifier.

Notice that the semantic value of *himself* is exactly `W`. The reflexive
pronoun in direct object position combines with the transitive verb "hit". The
result is an intransitive verb phrase "hit himself" that takes a subject argument `u`, duplicates
that argument, and feeds the two copies to the transitive verb meaning.

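To make that concrete, here is a tiny sketch in Haskell (our own illustration; the type `Entity` and the verb `hit` are made-up placeholders, not part of the fragment above):

    -- The reflexive as the duplicator W (a sketch with made-up model data).
    data Entity = John | Mary deriving (Eq, Show)
    type Prop = Bool

    -- hit y z  ==  "z hit y"  (object argument first, as in the derivation above)
    hit :: Entity -> Entity -> Prop
    hit y z = (z, y) `elem` [(John, John), (Mary, John)]

    himself :: (Entity -> Entity -> Prop) -> Entity -> Prop
    himself h u = h u u                  -- this is exactly W

    hitHimself :: Entity -> Prop         -- \u. HIT u u
    hitHimself = himself hit
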
Note that `W <~~> S(CI)`:

    S(CI) ≡
    S ((\f x y. f y x) (\x. x)) ~~>
    S (\x y. (\x. x) y x) ~~>
    S (\x y. y x) ≡
    (\f g x. f x (g x)) (\x y. y x) ~~>
    \g x. (\x y. y x) x (g x) ~~>
    \g x. (g x) x ≡
    W

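Though this isn't a proof, we can also spot-check the equivalence by running both sides as Haskell functions (a sketch, reusing the transcriptions from above):

    -- W and S(CI) agree extensionally on sample inputs (a sketch, not a proof).
    s f g x = f x (g x)
    c f x y = f y x
    i x = x
    w f x = f x x

    -- w (+) 3          ==  6
    -- s (c i) (+) 3    ==  6
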
###A different set of reduction rules###

Instead of defining combinators in terms of antecedently understood lambda terms, we want to consider the view that takes the combinators as primitive, and understands them in terms of *what they do*. If we have the `I` combinator followed by any expression `X`,
`I` will take that expression as its argument and return that same expression as the result. Diagrammatically:

    IX ~~> X

That is, assume that `X` stands in for any expression. Then if `X`
happens to be the expression `I`, this schematic pattern guarantees
that `II ~~> I`; if `X` happens to be the expression `SK`, the pattern
guarantees that `I(SK) ~~> SK`; and so on. In other words, `X` here is a
metavariable over expressions.

Thinking of this as a reduction rule, we can perform the following computation:

    II(IX) ~~> I(IX) ~~> IX ~~> X

The reduction rule for `K` is also straightforward:

    KXY ~~> X

That is, `K` throws away its second argument. The reduction rule for `S` can be constructed by examining
the defining lambda term:

    S ≡ \f g x. f x (g x)

`S` takes three arguments, duplicates the third argument, and feeds one copy to the first argument and the second copy to the second argument. So:

    SXYZ ~~> XZ(YZ)

For instance, we can show how the `I` combinator's behavior is simulated by a
certain crafty combination of `S`s and `K`s:

    SKKX ~~> KX(KX) ~~> X

So the combinator `SKK` is equivalent to the combinator `I`. (Really, it could be `SKY` for any `Y`. Hindley & Seldin, p. 26, point to discussion later in their book of why it's theoretically more elegant to keep `I` around anyway.)

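Here is one way to make the "purely syntactic" point below concrete: a small sketch of CL terms as a data structure, with the three reduction rules applied to the leftmost redex (our own illustration; the names `CL`, `step`, and `reduce` are made up):

    -- CL terms and the S/K/I reduction rules as a tiny interpreter (a sketch).
    data CL = S | K | I | App CL CL | Var String
        deriving (Eq, Show)

    -- Contract the leftmost redex, if there is one.
    step :: CL -> Maybe CL
    step (App I x)                 = Just x
    step (App (App K x) _)         = Just x
    step (App (App (App S f) g) x) = Just (App (App f x) (App g x))
    step (App m n)                 = case step m of
        Just m' -> Just (App m' n)
        Nothing -> fmap (App m) (step n)
    step _                         = Nothing

    -- Keep stepping until no redex remains (may loop on terms with no normal form).
    reduce :: CL -> CL
    reduce t = maybe t reduce (step t)

    -- For example, SKK applied to a variable X reduces to X:
    --   reduce (App (App (App S K) K) (Var "X"))  ==  Var "X"
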
These reduction rules have the same status with respect to Combinatory
Logic as beta-reduction and eta-reduction have with respect to
the Lambda Calculus: they are purely syntactic rules for transforming
one sequence of symbols (e.g., a redex) into another (a reduced
form). It's worth noting that the reduction rules for Combinatory
Logic are considerably simpler than, say, beta-reduction. Also, since
there are no variables in Combinatory Logic, there is no need to worry
about variables colliding when we substitute.

Combinatory Logic is what you have when you choose a set of
combinators and regulate their behavior with a set of reduction
rules. As we said, the most common system uses `S`, `K`, and `I` as
defined here.

###The equivalence of the untyped Lambda Calculus and Combinatory Logic###

We've claimed that Combinatory Logic is "equivalent to" the Lambda Calculus. If
that's so, then `S`, `K`, and `I` must be enough to accomplish any computational task
imaginable. Actually, `S` and `K` must suffice, since we've just seen that we can
simulate `I` using only `S` and `K`. In order to get an intuition about what it
takes to be Turing Complete, recall our discussion of the Lambda Calculus in
terms of a text editor. A text editor has the power to transform any arbitrary
text into any other arbitrary text.
The way it does this is by deleting, copying, and reordering characters. We've
seen that the combinators can likewise delete, copy, and reorder their arguments,
which suggests that they, too, should be enough to define arbitrary functions.

We've already established that the behavior of combinatory terms can be
perfectly mimicked by lambda terms: just replace each combinator with its
equivalent lambda term, i.e., replace `I` with `\x. x`, replace `K` with `\x y. x`,
and replace `S` with `\f g x. f x (g x)`. So the behavior of any combination of
combinators in Combinatory Logic can be exactly reproduced by a lambda term.
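
Written out as code, this direction is just a few lines. Here's a sketch (our own illustration; the datatypes `CL` and `Lam` and the function `toLambda` are made up for the purpose):

    -- The easy direction: replace each combinator by its defining lambda term.
    data CL  = S | K | I | CApp CL CL
    data Lam = Var String | Abs String Lam | App Lam Lam

    toLambda :: CL -> Lam
    toLambda I          = Abs "x" (Var "x")
    toLambda K          = Abs "x" (Abs "y" (Var "x"))
    toLambda S          = Abs "f" (Abs "g" (Abs "x"
                            (App (App (Var "f") (Var "x")) (App (Var "g") (Var "x")))))
    toLambda (CApp m n) = App (toLambda m) (toLambda n)
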
How about the other direction? Here is a method for converting an arbitrary
lambda term into an equivalent Combinatory Logic term using only `S`, `K`, and `I`.

Besides the intrinsic beauty of such mappings, and the importance of what it
says about the nature of binding and computation, it is possible to hear an
echo of computing with continuations in this conversion strategy (though you
wouldn't be able to hear these echoes until we've covered a considerable portion
of the rest of the course). In addition, there is a direct linguistic
application of this mapping in chapter 17 of Barker and Shan 2014, where it is
used to establish a correspondence between two natural language grammars, one
of which is based on lambda-like abstraction, the other of which is based on
Combinatory Logic-like manipulations.

(*Warning*: This is a different mapping from the Lambda Calculus to Combinatory Logic than we presented in class (and was posted here earlier). It now matches the presentation in Barendregt 1984, in Hankin Chapter 4 (esp. pp. 61, 65), and in Hindley & Seldin Chapter 2 (esp. p. 26). In some ways this translation is cleaner and more elegant, which is why we're presenting it.)

In order to establish the correspondence, we need to get a bit more
official about what counts as an expression in CL. Of course, we count
the primitive combinators `S`, `K`, and `I` as expressions in CL. But
we will also endow CL with an infinite stock of *variable symbols*, just like the Lambda
Calculus, including `x`, `y`, and `z`. Finally, `(XY)` is in CL for any CL
expressions `X` and `Y`. So examples of CL expressions include
`x`, `(xy)`, `Sx`, `SK`, `(x(SK))`, `(K(IS))`, and so on. When we
omit parentheses, we assume left associativity, so
`XYZ ≡ ((XY)Z)`.

It may seem weird to allow variables in CL. The reason this is
necessary is that we're trying to show that *every* lambda term can
be translated into an equivalent CL term. Since some lambda terms
contain *free* variables, we need to provide a translation in CL for those free
variables. As you might expect, it will turn out that whenever the
lambda term in question contains *no* free variables (i.e., is a Lambda Calculus
combinator), its translation in CL will *also* contain no variables, but will
instead just be made up of primitive combinators and parentheses.

Let's say that for any lambda term `T`, `[T]` is the equivalent Combinatory
Logic term. Then we define the `[.]` mapping as follows:

    1. [a] = a
    2. [(\aX)] = @a[X]
    3. [(XY)] = ([X][Y])

Wait, what is that `@a ...` business? Well, that's another operation on (a variable and) a CL expression, which we can define like this:

    4. @aa = I
    5. @aX = KX        if a is not in X
    6. @a(Xa) = X      if a is not in X
    7. @a(XY) = S(@aX)(@aY)

Think of `@aX` as a pseudo-lambda abstract. (Hankin and Barendregt write it as `λ* a. X`; Hindley & Seldin write it as `[a] X`.) It is possible to omit rule (6), and some presentations do, but Hindley & Seldin observe that this "enormously increases" the length of "most" translations.

It's easy to understand these rules based on what `S`, `K`, and `I` do.

Rule (1) says that variables are mapped to themselves. If the original
lambda expression had no free variables in it, then any such
translations will only be temporary. The variable will later get
eliminated by the application of other rules.

Rule (2) says that the way to translate a lambda abstract is to
first translate the body (i.e., `[X]`), and then prefix a kind of
temporary pseudo-lambda built from `@` and the original variable.

Rule (3) says that the translation of an application of `X` to `Y` is
the application of the translation of `X` to the translation of `Y`.

As we'll see, the first three rules sweep through the lambda term,
changing each lambda to an `@`.

Rules (4) through (7) tell us how to eliminate all the `@`'s.

In rule (4), if we have `@aa`, we need a CL expression that behaves
like the lambda term `\aa`. Obviously, `I` is the right choice here.

In rule (5), if we're binding into an expression that doesn't contain
any variables that need binding, then we need a CL term that behaves
the same as `\aX` would if `X` didn't contain `a` as a free variable.
Well, how does `\aX` behave? When `\aX` occurs in the head position
of a redex, then no matter what argument it occurs with, it throws
away its argument and returns `X`. In other words, `\aX` is a
constant function returning `X`, which is exactly the behavior
we get by prefixing `K`.

Rule (6) should be intuitive; and as we said, we could in principle omit it and just handle such cases under the final rule.

The easiest way to grasp rule (7) is to consider the following claim:

    \a(XY) <~~> S(\aX)(\aY)

To prove it to yourself, just consider what would happen when each term is applied to an argument `a`. Or substitute `\x y a. x a (y a)` in for `S`
and reduce.

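For instance, here is a quick extensional spot-check in Haskell (a sketch; `bigX` and `bigY` are made-up stand-ins for `X` and `Y`, each with `a` occurring free):

    -- Spot-checking  \a. (X Y)  <~~>  S (\a. X) (\a. Y)  on sample values.
    s f g x = f x (g x)
    bigX a = (a +)        -- plays the role of X
    bigY a = a * 2        -- plays the role of Y

    -- (\a -> bigX a (bigY a)) 3   ==  9
    -- s bigX bigY 3               ==  9
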
Persuade yourself that if the original lambda term contains no free
variables (i.e., is a Lambda Calculus combinator), then the translation will
consist only of `S`, `K`, and `I` (plus parentheses).

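Here is the whole translation written out as a short program, so you can experiment with it. This is a sketch under our own choice of representations (the names `Lam`, `CL`, `translate`, `abstract`, and `freeIn` are all made up); it follows rules (1) through (7) above, trying rule (6) before falling back on rule (7):

    -- A sketch of the [.] translation, following rules (1)-(7).
    data Lam = LVar String | LAbs String Lam | LApp Lam Lam

    data CL = Var String | S | K | I | App CL CL
        deriving (Eq, Show)

    -- Rules (1)-(3): sweep through the term, turning each lambda into a
    -- pseudo-lambda, which `abstract` (our "@") then eliminates.
    translate :: Lam -> CL
    translate (LVar a)   = Var a                            -- rule 1
    translate (LAbs a x) = abstract a (translate x)         -- rule 2
    translate (LApp x y) = App (translate x) (translate y)  -- rule 3

    -- Rules (4)-(7): the pseudo-lambda @a X.
    abstract :: String -> CL -> CL
    abstract a (Var b) | a == b           = I               -- rule 4
    abstract a (App x (Var b))
        | a == b && not (a `freeIn` x)    = x               -- rule 6
    abstract a x | not (a `freeIn` x)     = App K x         -- rule 5
    abstract a (App x y)                  = App (App S (abstract a x)) (abstract a y)  -- rule 7
    abstract _ x                          = x               -- not reachable given the clauses above

    freeIn :: String -> CL -> Bool
    freeIn a (Var b)   = a == b
    freeIn a (App x y) = a `freeIn` x || a `freeIn` y
    freeIn _ _         = False

For example, `translate (LAbs "y" (LAbs "n" (LVar "n")))` gives `App K I`, i.e. `KI`, matching the hand translation of `false` below. And `translate` applied to the **T** term `\x (\y (yx))` gives `S(K(SI))K`, which is shorter than the hand derivation below only because the code applies rule (6) at the `@x(Kx)` step.
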
Various, slightly differing translation schemes from Combinatory Logic to the
Lambda Calculus are also possible. These generate different metatheoretical
correspondences between the two calculi. Consult Hindley & Seldin for
details.

Also, note that the combinatorial proof theory needs to be
strengthened with axioms beyond anything we've here described in order to make
`[M]` convertible with `[N]` whenever the original lambda terms `M` and `N` are
convertible. But then, we've been a bit cavalier about giving the full set of
reduction rules for the Lambda Calculus in a similar way.

For instance, one issue we mentioned in the notes on [[Reduction
Strategies|week3_reduction_strategies]] is whether reduction rules (in
either the Lambda Calculus or Combinatory Logic) apply to embedded
expressions. Often, we do want that to happen, but making it happen
requires adding explicit axioms.

Let's see the translation rules in action. We'll start by translating
the combinator we use to represent `false`:

    [\y (\n n)]
    == @y [\n n]       rule 2
    == @y (@n n)       rule 2
    == @y I            rule 4
    == KI              rule 5

Let's check that the translation of the `false` boolean behaves as expected by feeding it two arbitrary arguments:

    KIXY ~~> IY ~~> Y

Throws away the first argument, returns the second argument. Yep, it works.

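The same check can be run on the Haskell transcriptions of the combinators (a sketch; `k` and `i` as before):

    -- KI behaves like false: it ignores its first argument.
    k x y = x
    i x = x

    -- k i "first" "second"  ==  "second"
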
Here's a more elaborate example of the translation. Let's say we want
to establish that combinators can reverse order, so we set out to
translate the **T** combinator (`\x y. y x`):

    [\x(\y(yx))]
    == @x[\y(yx)]
    == @x(@y[yx])
    == @x(@y([y][x]))
    == @x(@y(yx))
    == @x(S(@yy)(@yx))
    == @x(S I (@yx))
    == @x(S I (Kx))
    == S (@x(SI)) (@x(Kx))
    == S (K(SI)) (S(@xK)(@xx))
    == S (K(SI)) (S(KK)I)

By now, you should realize that all rules (1) through (3) do is sweep
through the lambda term turning lambdas into `@`'s.

We can test this translation by seeing if it behaves like the original lambda term does.
The original lambda term "lifts" its first argument `x`, in the sense of wrapping it into a "one-tuple", or a package that accepts an operation `y` as a further argument and then applies `y` to `x`. (Or just think of **T** as reversing the order of its two arguments.)

    S (K(SI)) (S(KK)I) X Y ~~>
    (K(SI))X ((S(KK)I) X) Y ~~>
    SI ((KK)X (IX)) Y ~~>
    SI (KX) Y ~~>
    IY (KXY) ~~>
    Y X

Voilà: the combinator takes any X and Y as arguments, and returns Y applied to X.

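And once more, we can spot-check the result by running it, with the combinators as Haskell functions (a sketch):

    -- The translated term behaves like T = \x y. y x:
    s f g x = f x (g x)
    k x y = x
    i x = x

    -- s (k (s i)) (s (k k) i) 3 (+ 1)  ==  4      -- i.e. (+ 1) applied to 3
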
One very nice property of Combinatory Logic is that there is no need to worry about alphabetic variance, or
variable collision: since there are no (bound) variables, there is no possibility of accidental variable capture,
and so reduction can be performed without any fear of variable collision. We haven't mentioned the intricacies of
alpha equivalence or safe variable substitution, but they are in fact quite intricate. (The best way to gain
an appreciation of that intricacy is to write a program that performs lambda reduction.)

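To get a sense of what's involved, here is a sketch of capture-avoiding substitution, the core of such a program (our own illustration; the names and representation are made up, and a real implementation would worry more about efficiency):

    -- A sketch of capture-avoiding substitution for the lambda calculus.
    import Data.List (nub)

    data Lam = Var String | Abs String Lam | App Lam Lam
        deriving Show

    freeVars :: Lam -> [String]
    freeVars (Var v)   = [v]
    freeVars (Abs v b) = filter (/= v) (freeVars b)
    freeVars (App m n) = nub (freeVars m ++ freeVars n)

    -- Pick a variant of v that doesn't occur in `used`.
    fresh :: [String] -> String -> String
    fresh used v = head [v' | v' <- candidates, v' `notElem` used]
      where candidates = v : [v ++ show n | n <- [1 :: Int ..]]

    -- subst x s t: replace free occurrences of x with s in t, renaming
    -- bound variables that would otherwise capture a free variable of s.
    subst :: String -> Lam -> Lam -> Lam
    subst x s (Var v)
        | v == x    = s
        | otherwise = Var v
    subst x s (App m n) = App (subst x s m) (subst x s n)
    subst x s (Abs v b)
        | v == x              = Abs v b        -- x is shadowed here; leave it alone
        | v `elem` freeVars s = Abs v' (subst x s (subst v (Var v') b))
        | otherwise           = Abs v (subst x s b)
      where v' = fresh (x : freeVars s ++ freeVars b) v
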
Back to linguistic applications: one consequence of the equivalence between the Lambda Calculus and Combinatory
Logic is that anything that can be done by binding variables can just as well be done with combinators.
This has given rise to a style of semantic analysis called Variable-Free Semantics (in addition to
Szabolcsi's papers, see, for instance,
Pauline Jacobson's 1999 *Linguistics and Philosophy* paper, "Towards a Variable-Free Semantics").

Somewhat ironically, reading strings of combinators is so difficult that most practitioners of variable-free semantics
express their meanings using the Lambda Calculus rather than Combinatory Logic. Perhaps they should call their
enterprise *Free Variable*-Free Semantics.

A philosophical connection: Quine went through a phase in which he developed a variable-free logic.

> Quine, Willard. 1960. "Variables explained away." *Proceedings of the American Philosophical Society* 104: 343-347. Also in W. V. Quine. 1960. *Selected Logical Papers*. Random House: New York. 227-235.

The reason this was important to Quine is similar to the worry that using
non-referring expressions such as `Santa Claus` might commit one to believing in
non-existent things. Quine's slogan was that "to be is to be the value of a
variable." What this was supposed to mean is that if and only if an object
could serve as the value of some variable, we are committed to recognizing the
existence of that object in our ontology. Obviously, if there *are* no
variables, this slogan has to be rethought.

Quine did not appear to appreciate that Schönfinkel had already invented Combinatory Logic, though
he later wrote an introduction to Schönfinkel's key paper reprinted in Jean
van Heijenoort (ed.) 1967, *From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931*.

Cresswell also developed a variable-free approach of some philosophical and linguistic interest
in two books in the 1990s.

A final linguistic application: Steedman's Combinatory Categorial Grammar, where the "Combinatory" is
from Combinatory Logic (see especially his 2012 book, *Taking Scope*). Steedman attempts to build
a syntax/semantics interface using a small number of combinators, including `T` (`\x y. y x`), `B` (`\f g x. f (g x)`),
and our friend `S`. Steedman used Smullyan's fanciful bird
names for these combinators: Thrush, Bluebird, and Starling.

Many of these combinatory logics, in particular the SKI system,
are Turing Complete. In other words: every computation we know how to describe can be represented in a logical system consisting of only primitive combinators, even some systems with only a *single* primitive combinator.

###A connection between Combinatory Logic and Sentential Logic###

The combinators `K` and `S` correspond to two well-known axioms of sentential logic:

    AK: A ⊃ (B ⊃ A)
    AS: (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))

When these two axiom schemas are combined with the rule of modus ponens (from `A` and `A ⊃ B`, conclude `B`), the resulting proof system
is complete for the "implicational fragment" of intuitionistic logic. (That is, the part of intuitionistic logic you get when `⊃` is your only connective. To get a complete proof system for *classical* sentential logic, you
need only add one more axiom schema, constraining the behavior of a new connective `¬`.)

The way we'll favor viewing the relationship between these axioms
and the `S` and `K` combinators is that the axioms correspond to *type
schemas* for the combinators. This will become more clear once we have
a theory of types in view.
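
We can already get a glimpse of the correspondence in Haskell, where the most general types of `k` and `s` look just like AK and AS, with `⊃` read as the function arrow (a sketch, anticipating the discussion of types):

    -- The types of K and S mirror the axiom schemas AK and AS.
    k :: a -> (b -> a)                               -- AK: A ⊃ (B ⊃ A)
    k x y = x

    s :: (a -> (b -> c)) -> ((a -> b) -> (a -> c))   -- AS: (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))
    s f g x = f x (g x)

    -- Function application plays the role of modus ponens: applying something
    -- of type  a -> b  to something of type  a  yields something of type  b.
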
Here's more to read about Combinatory Logic. Surely the most entertaining exposition is Smullyan's [[!wikipedia To_Mock_a_Mockingbird]].
Other sources include:

* [[!wikipedia Combinatory logic]] at Wikipedia
* [Combinatory logic](http://plato.stanford.edu/entries/logic-combinatory/) at the Stanford Encyclopedia of Philosophy
* [[!wikipedia SKI combinatory calculus]]
* [[!wikipedia B,C,K,W system]]
* [Chris Barker's Iota and Jot](http://semarch.linguistics.fas.nyu.edu/barker/Iota/)
* Jeroen Fokker, "The Systematic Construction of a One-combinator Basis for Lambda-Terms." *Formal Aspects of Computing* 4 (1992), pp. 776-780.