[[!toc]]

##The simply-typed lambda calculus##

The untyped lambda calculus is pure. Pure in many ways: nothing but variables and lambdas, with no constants or other special symbols; also, all functions without any types. As we'll see eventually, pure also in the sense of having no side effects, no mutation, just pure computation. But we live in an impure world. It is much more common for practical programming languages to be typed, either implicitly or explicitly. Likewise, systems used to investigate philosophical or linguistic issues are almost always typed. Types will help us reason about our computations. They will also facilitate a connection between logic and computation.

From a linguistic perspective, types are generalizations of (parts of) programs. To make this comment more concrete: types are to (e.g., lambda) terms as syntactic categories are to expressions of natural language. If it makes sense to gather a class of expressions together into a set of Nouns, or of Verbs, it may also make sense to gather classes of terms into a set labelled with some computational type.

To develop this analogy just a bit further, syntactic categories determine which expressions can combine with which other expressions. If a word is a member of the category of prepositions, it had better not try to combine (merge) with an expression in the category of, say, auxiliary verbs, since *under has* is not a well-formed constituent in English. Likewise, types in formal languages will determine which expressions can be sensibly combined.

Now, of course it is common linguistic practice to supply an analysis of natural language both with syntactic categories and with semantic types. And there is a large degree of overlap between these type systems. However, there are mismatches in both directions: there are syntactic distinctions that do not correspond to any salient semantic difference (why can't adjectives behave syntactically like verb phrases, since they both denote properties with (extensional) type `e --> t`?); and in some analyses there are semantic differences that do not correspond to any salient syntactic distinctions (as in any analysis that involves silent type-shifters, such as Herman Hendriks' theory of quantifier scope, in which expressions change their semantic type without any effect on the expressions they can combine with syntactically). We will return to the relationship between syntactic categories and semantic types later in the course.

Soon we will consider polymorphic type systems. First, however, we will consider the simply-typed lambda calculus.

[Pedantic on. Why "*simply* typed"? Well, the type system is particularly simple. As mentioned to us by Koji Mineshima, Church tells us that "The simple theory of types was suggested as a modification of Russell's ramified theory of types by Leon Chwistek in 1921 and 1922 and by F. P. Ramsey in 1926." This footnote appears in Church's 1940 paper [A formulation of the simple theory of types](church-simple-types.pdf). In this paper, Church writes types by simple apposition, without the ugly angle brackets and commas used by Montague. Furthermore, he omits parentheses under the convention that types associate to the *left*---the opposite of the modern convention. This is ok, however, because he also reverses the order, so that `te` is a function from objects of type `e` to objects of type `t`. Cool paper! If you ever want to see Church numerals in their native setting--but I'm getting ahead of my story. Pedantic off.]
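To make the combination point above concrete in code, here is a tiny sketch in Haskell (which shows up again later in these notes); the base types `E` and `T` and the predicate `sleeps` are made-up illustrations of ours, not part of the calculus we are about to define:

    data E = John | Mary     -- a stand-in for the type of individuals
    data T = Yes | No        -- a stand-in for the type of truth values

    sleeps :: E -> T         -- a one-place predicate, type e --> t
    sleeps _ = Yes

    fine :: T
    fine = sleeps John       -- well-typed: an E argument for an E -> T function

    -- bad = sleeps (sleeps John)
    --   rejected by the type checker: `sleeps John` has type T, not E,
    --   just as *under has* fails to merge in English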
There's good news and bad news: the good news is that the simply-typed lambda calculus is strongly normalizing: every term has a normal form. We shall see that self-application is outlawed, so Ω can't even be written, let alone undergo reduction. The bad news is that fixed-point combinators are also forbidden, so recursion is neither simple nor direct.

#Types#

We will have at least one ground type. For the sake of linguistic familiarity, we'll use two: `e`, the type of individuals, and `t`, the type of truth values. In addition, there will be a recursively-defined class of complex types `T`, the smallest set such that

*	ground types, including `e` and `t`, are in `T`

*	for any types σ and τ in `T`, the type σ --> τ is in `T`.

For instance, here are some types in `T`:

    e
    e --> t
    e --> e --> t
    (e --> t) --> t
    (e --> t) --> e --> t

and so on.

#Typed lambda terms#

Given a set of types `T`, we define the set of typed lambda terms Λ_T, the smallest set such that

*	each type σ has an infinite set of distinct variables of that type, x^σ_1, x^σ_2, x^σ_3, ..., each of which is a term of type σ

*	if a term `M` has type σ --> τ, and a term `N` has type σ, then the application `(M N)` has type τ

*	if a variable `a` has type σ, and a term `M` has type τ, then the abstract (λa. M) has type σ --> τ.

The definitions of types and of typed terms should be highly familiar to semanticists, except that instead of writing σ --> τ, linguists write <σ, τ>. We will use the arrow notation, since it is more iconic.

Some examples (assume that `x` has type `o`, where `o` is some ground type):

    x               o
    \x.x            o --> o
    ((\x.x) x)      o

Exercise: write down terms that have the following types:

    o --> o --> o
    (o --> o) --> o --> o
    (o --> o --> o) --> o

#Associativity of types versus terms#

As we have seen many times, in the lambda calculus, function application is left associative, so that `f x y z == (((f x) y) z)`. Types, *THEREFORE*, are right associative: if `x`, `y`, and `z` have types `a`, `b`, and `c`, respectively, then `f` has type `a --> b --> c --> d == (a --> (b --> (c --> d)))`, where `d` is the type of the complete term. It is a serious faux pas to associate to the left for types. You may as well use your salad fork to stir your tea.

#The simply-typed lambda calculus is strongly normalizing#

If `M` is a term with type τ in Λ_T, then `M` has a normal form. The proof is not particularly complex, but we will not present it here; see Barendregt or Hankin.

Since Ω does not have a normal form, it follows that Ω cannot have a type in Λ_T. We can easily see why:

    Ω = (\x.xx)(\x.xx)

Assume Ω has type τ, and the right-hand `\x.xx` has type σ. Then because the left-hand `\x.xx` takes an argument of type σ and returns something of type τ, it must have type σ --> τ. But the two occurrences are the same term, so σ would have to equal σ --> τ; repeating the reasoning, σ would also have to equal (σ --> τ) --> τ, and so on. Since every variable has a single, finite type, there is no way to choose a type for the variable `x` that satisfies all of the requirements imposed on it.

In general, there is no way for a function to have a type that allows it to take itself as an argument. It follows that there is no way to define the identity function in such a way that it can take itself as an argument. Instead, there must be many different identity functions, one for each type. Some of those types are themselves function types, and an identity function at such a type can apply to (type-restricted) identity functions of lower type; but a simply-typed identity function can never apply to itself.

#Typing numerals#

The Church numerals are well behaved with respect to types. They can all be given the type (σ --> σ) --> σ --> σ.
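To see this typing at work, here is a small sketch in Haskell standing in for the simply-typed calculus; the names (`zero`, `succ'`, `add`, `mul`, `toInt`) are ours, chosen for illustration. Haskell's types are implicitly polymorphic, but each use below fixes a single `s`, which is all the simply-typed calculus requires.

    -- Each Church numeral can be given the type (s -> s) -> s -> s.
    zero, one, two :: (s -> s) -> s -> s
    zero = \s z -> z
    one  = \s z -> s z
    two  = \s z -> s (s z)

    succ' :: ((s -> s) -> s -> s) -> (s -> s) -> s -> s
    succ' n = \s z -> s (n s z)

    -- Addition and multiplication keep both numeral arguments at the same
    -- type, so they are typeable in the simply-typed fragment as well
    -- (compare the Barendregt quote in the next section).
    add, mul :: ((s -> s) -> s -> s) -> ((s -> s) -> s -> s) -> (s -> s) -> s -> s
    add m n = \s z -> m s (n s z)
    mul m n = \s z -> m (n s) z

    -- Convert to an ordinary Int by choosing s = Int.
    toInt :: ((Int -> Int) -> Int -> Int) -> Int
    toInt n = n (+ 1) 0

    -- toInt (mul two (add one two))  ==  6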
## Predecessor and lists are not representable in simply typed lambda-calculus ##

The predecessor of a Church-encoded numeral, and, more generally, the encoding of a list with the car and cdr operations, are both impossible in the simply typed lambda-calculus. Henk Barendregt's "The impact of the lambda-calculus in logic and computer science" (The Bulletin of Symbolic Logic, vol. 3, no. 2, June 1997) has the following phrase, on p. 186:

    Even for a function as simple as the predecessor lambda definability
    remained an open problem for a while. From our present knowledge it is
    tempting to explain this as follows. Although the lambda calculus was
    conceived as an untyped theory, typeable terms are more intuitive. Now
    the functions addition and multiplication are defineable by typeable
    terms, while [101] and [108] have characterized the lambda-defineable
    functions in the (simply) typed lambda calculus and the predecessor is
    not among them [the story of the removal of Kleene's four wisdom teeth
    is skipped...]

Ref. [108] is R. Statman, "The typed lambda calculus is not elementary recursive," Theoretical Computer Science, vol. 9 (1979), pp. 73-81.

Since a list is a generalization of a numeral -- with cons being the successor, append the addition, and tail (aka cdr) the predecessor -- it follows that lists cannot be encoded in the simply typed lambda-calculus either. To encode both operations (car and cdr), we need either inductive (more generally, recursive) types, or System F with its polymorphism. The first approach is the most common. Indeed, the familiar definition of a list

    data List a = Nil | Cons a (List a)

gives an (iso-)recursive data type in Haskell. (In ML, it is an inductive data type.)

Lists can also be represented in System F. As a matter of fact, we do not need the full System F (where type reconstruction is not decidable). We merely need the extension of the Hindley-Milner system with higher-ranked types, which requires a modicum of type annotations and yet is able to infer the types of all other terms. This extension is supported in Haskell and OCaml. With such an extension, we can represent a list by its fold, as sketched in the code below. It is less well known that this representation is faithful: we can implement all list operations, including tail, drop, and even zip.

See also [[Oleg Kiselyov on the predecessor function in the lambda calculus|http://okmij.org/ftp/Computation/lambda-calc.html#predecessor]].
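For concreteness, here is a sketch of the list-as-fold representation just described, in Haskell with the `RankNTypes` extension. The names (`FList`, `nil`, `cons`, `tailF`, and so on) are our own, and this is only an illustration of the idea, not Kiselyov's code:

    {-# LANGUAGE RankNTypes #-}

    -- A list represented by its fold: to be a list of a's is to know how
    -- to reduce yourself given any combining function c and seed n.
    newtype FList a = FList { fold :: forall r. (a -> r -> r) -> r -> r }

    nil :: FList a
    nil = FList (\_c n -> n)

    cons :: a -> FList a -> FList a
    cons x xs = FList (\c n -> c x (fold xs c n))

    append :: FList a -> FList a -> FList a
    append xs ys = FList (\c n -> fold xs c (fold ys c n))

    -- tail is the analogue of the predecessor: fold with a pair carrying
    -- both "the list built so far" and "that list minus its head".
    tailF :: FList a -> FList a
    tailF xs = fst (fold xs step (nil, nil))
      where step x (_, whole) = (whole, cons x whole)

    -- Convert to an ordinary Haskell list, for testing.
    toList :: FList a -> [a]
    toList xs = fold xs (:) []

    -- toList (tailF (cons 1 (cons 2 (cons 3 nil))))  ==  [2,3]

The pair trick in `tailF` is the same one used for the predecessor of Church numerals; elaborations of it handle the other operations mentioned above, such as drop and zip.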