Syntactic equality, reduction, convertibility
=============================================

Define T to be `(\x. x y) z`. Then T and `(\x. x y) z` are syntactically equal, and we're counting them as syntactically equal to `(\z. z y) z` as well, which we will write as:
T ≡ (\x. x y) z ≡ (\z. z y) z
[Fussy note: the justification for counting `(\x. x y) z` as equivalent to `(\z. z y) z` is that when a lambda binds a set of occurrences, it doesn't matter which variable serves to carry out the binding. Either way, the function does the same thing and means the same thing. Look in the standard treatments for discussions of alpha equivalence for more detail.]

This:

    T ~~> z y

means that T beta-reduces to `z y`. This:

    M <~~> T

means that M and T are beta-convertible, that is, that there's something they both reduce to in zero or more steps.


Combinators and Combinatorial Logic
===================================

Lambda expressions that have no free variables are known as **combinators**. Here are some common ones:

>   **I** is defined to be `\x. x`

>   **K** is defined to be `\x y. x`. That is, it throws away its second argument. So `K x` is a constant function from any (further) argument to `x`. ("K" for "constant".) Compare K to our definition of **true**.

>   **get-first** was our function for extracting the first element of an ordered pair: `\fst snd. fst`. Compare this to **K** and **true** as well.

>   **get-second** was our function for extracting the second element of an ordered pair: `\fst snd. snd`. Compare this to our definition of **false**.

>   **ω** is defined to be: `\x. x x`

It's possible to build a logical system equally powerful as the lambda calculus (and readily intertranslatable with it) using just combinators, considered as atomic operations. Such a language doesn't have any variables in it: not just no free variables, but no variables at all.

One can do that with a very spare set of basic combinators. These days the standard base is just three combinators: K and I from above, and also one more, **S**, which behaves the same as the lambda expression `\f g x. f x (g x)`. But it's possible to be even more minimalistic, and get by with only a single combinator. (And there are different single-combinator bases you can choose.)
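As a quick sanity check on these definitions, here is a sketch of I, K, and S as curried Python lambdas. (This is just an illustration, not part of the standard treatments; Python evaluates eagerly, but for terminating terms like these the behavior lines up with beta-reduction.)

```python
# The I, K, S combinators as curried Python lambdas.
I = lambda x: x                                # \x. x
K = lambda x: lambda y: x                      # \x y. x
S = lambda f: lambda g: lambda x: f(x)(g(x))   # \f g x. f x (g x)

# K throws away its second argument:
assert K("first")("second") == "first"

# I is definable from S and K alone:
#   S K K x ~~> K x (K x) ~~> x
assert S(K)(K)(42) == I(42) == 42
```

The last assertion illustrates why the base {S, K} already suffices: I is redundant, since `S K K` behaves exactly like I.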
There are some well-known linguistic applications of Combinatory Logic, due to Anna Szabolcsi, Mark Steedman, and Pauline Jacobson. Szabolcsi supposed that the meanings of certain expressions could be insightfully expressed in the form of combinators. A couple more combinators:

>   **C** is defined to be: `\f x y. f y x` [swap arguments]

>   **W** is defined to be: `\f x. f x x` [duplicate argument]

For instance, Szabolcsi argues that reflexive pronouns are argument duplicators.

![Szabolcsi's analysis of *himself* as the duplicator combinator](szabolcsi-reflexive.png)

These systems are Turing complete. In other words: every computation we know how to describe can be represented in a logical system consisting of only a single primitive operation!

Here's more to read about combinatorial logic. Surely the most entertaining exposition is Smullyan's [[!wikipedia To_Mock_a_Mockingbird]]. Other sources include

*   [[!wikipedia Combinatory logic]] at Wikipedia
*   [Combinatory logic](http://plato.stanford.edu/entries/logic-combinatory/) at the Stanford Encyclopedia of Philosophy
*   [[!wikipedia SKI combinatory calculus]]
*   [[!wikipedia B,C,K,W system]]
*   [Chris Barker's Iota and Jot](http://semarch.linguistics.fas.nyu.edu/barker/Iota/)
*   Jeroen Fokker, "The Systematic Construction of a One-combinator Basis for Lambda-Terms," Formal Aspects of Computing 4 (1992), pp. 776-780.


Evaluation Strategies and Normalization
=======================================

In the assignment we asked you to reduce various expressions until it wasn't possible to reduce them any further. For two of those expressions, this was impossible to do. One of them was this:

    (\x. x x) (\x. x x)

As we saw above, each of the halves of this formula is the combinator ω; so this can also be written:
    ω ω
This compound expression---the self-application of ω---is named Ω. It has the form of an application of an abstract (ω) to an argument (which also happens to be ω), so it's a redex and can be reduced. But when we reduce it, we get ω ω again. So there's no stage at which this expression has been reduced to a point where it can't be reduced any further. In other words, evaluation of this expression "never terminates." (This is the standard language; however, it has the unfortunate connotation that evaluation is a process or operation that is performed in time. You shouldn't think of it like that. Evaluation of this expression "never terminates" in the way that the decimal expansion of π never terminates. These are static, atemporal facts about their mathematical properties.)

There are infinitely many formulas in the lambda calculus that have this same property. Ω is the syntactically simplest of them. In our meta-theory, it's common to assign such formulas a special value, ⊥, pronounced "bottom." When we get to discussing types, you'll see that this value is counted as belonging to every type. To say that a formula has the bottom value means that the computation that formula represents never terminates and so doesn't evaluate to any orthodox, computed value.

From a "Fregean" or "weak Kleene" perspective, if any component of an expression fails to be evaluable (to an orthodox, computed value), then the whole expression should be unevaluable as well. However, in some such cases it seems *we could* sensibly carry on evaluation. For instance, consider:

    (\x. y) (ω ω)
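One way to see how evaluation could sensibly carry on here is the following Python sketch. Python is strict, so calling `omega(omega)` directly would recurse forever (in practice, it raises a `RecursionError`); but if we delay the argument by hand, wrapping it in a zero-argument thunk, the constant function simply discards it unevaluated. (The names `omega` and `const_y` are invented for this illustration.)

```python
# Why (\x. y) (ω ω) could still be evaluated: the constant function
# never looks at its argument, so the non-terminating subterm is
# never touched if we avoid evaluating the argument first.

omega = lambda x: x(x)      # ω, i.e. \x. x x
const_y = lambda x: "y"     # \x. y -- ignores its argument

# Pass Ω as an unevaluated thunk instead of computing it eagerly:
result = const_y(lambda: omega(omega))
assert result == "y"        # evaluation terminates with y
```

This mirrors reducing the outermost redex first: `\x. y` discards its argument, so the non-terminating subterm Ω is never reduced at all.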