# System F: the polymorphic lambda calculus
The simply-typed lambda calculus is beautifully simple, but it can't
even express the predecessor function, let alone full recursion. And
we'll see shortly that there is good reason to be unsatisfied with the
simply-typed lambda calculus as a way of expressing natural language
meaning. So we will need to get more sophisticated about types. The
next step in that journey will be to consider System F.
System F was discovered by Girard (the same guy who invented Linear
Logic), but it was independently proposed around the same time by
Reynolds, who called his version the *polymorphic lambda calculus*.
(Reynolds was also an early player in the development of
continuations.)
System F enhances the simply-typed lambda calculus with abstraction
over types. Normal lambda abstraction binds a variable ranging over
expressions (terms); type abstraction binds a variable ranging over
types.
In order to state System F, we'll need to adopt the
notational convention (which will last throughout the rest of the
course) that "<code>x:α</code>" represents an expression `x`
whose type is <code>α</code>.
Then System F can be specified as follows:
    types        τ ::= c | α | τ1 -> τ2 | ∀α.τ
    expressions  e ::= x | λx:τ.e | e1 e2 | Λα.e | e [τ]
In the definition of the types, "`c`" is a type constant. Type
constants play the role in System F that base types play in the
simply-typed lambda calculus. So in a linguistics context, type
constants might include `e` and `t`. "`α`" is a type variable. In
various discussions, type variables are distinguished by using letters
from the Greek alphabet (α, β, etc.), as we do here, or by
using capital Roman letters (X, Y, etc.), or by adding a tick mark
(`'a`, `'b`, etc.), as in OCaml. "`τ1 -> τ2`" is the type of a
function from expressions of type `τ1` to expressions of type `τ2`.
And "`∀α.τ`" is called a universal type, since it universally
quantifies over the type variable `α`. You can expect that in
`∀α.τ`, the type `τ` will usually have at least one free occurrence of
`α` somewhere inside of it.
In the definition of the expressions, we have variables "`x`" as usual.
Abstracts "`λx:τ.e`" are similar to abstracts in the simply-typed lambda
calculus, except that they have their bound variable annotated with a
type. Applications "`e1 e2`" are just like in the simply-typed lambda calculus.
In addition to variables, abstracts, and applications, we have two
additional ways of forming expressions: "`Λα.e`" is called a *type
abstraction*, and "`e [τ]`" is called a *type application*. The idea
is that <code>Λ</code> is a capital <code>λ</code>: just
like the lower-case <code>λ</code>, <code>Λ</code> binds
variables in its body, except that unlike <code>λ</code>,
<code>Λ</code> binds type variables instead of expression
variables. So in the expression

<code>Λ α (λ x:α. x)</code>

the <code>Λ</code> binds the type variable `α` that occurs in
the <code>λ</code> abstract.
This expression is a polymorphic version of the identity function. It
defines one general identity function that can be adapted for use with
expressions of any type. In order to get this identity function ready
to apply to, say, an expression of Boolean type, just do this:

<code>(Λ α (λ x:α. x)) [t]</code>
This type application (where `t` is a type constant for Boolean truth
values) specifies the value of the type variable `α`. Not
surprisingly, the type of the expression that results from this type
application is a function from Booleans to Booleans:

<code>((Λα (λ x:α . x)) [t]): (t->t)</code>
Likewise, if we had instantiated the type variable as an entity (base
type `e`), the resulting identity function would have been a function
from entities to entities:

<code>((Λα (λ x:α. x)) [e]): (e->e)</code>
Clearly, for any choice of a type `α`, the identity function can be
instantiated as a function from expressions of type `α` to expressions
of type `α`. In general, then, the type of the uninstantiated
(polymorphic) identity function is

<code>(Λα (λx:α . x)): (∀α. α->α)</code>
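OCaml offers a close analogue, which may help fix the idea: the inferred type `'a -> 'a` plays the role of `∀α. α->α`, and the instantiation that System F performs with explicit type application happens implicitly at each use site. A minimal sketch (the names `id`, `b`, `n` are ours):

```ocaml
(* The polymorphic identity: OCaml infers 'a -> 'a, the analogue of ∀α.α->α. *)
let id = fun x -> x

(* Type application is implicit at each use, where System F would
   write id [t] or id [e]: *)
let b = id true   (* here 'a is instantiated as bool *)
let n = id 42     (* here 'a is instantiated as int *)
```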
We saw that the predecessor function couldn't be expressed in the
simply-typed lambda calculus. It *can* be expressed in System F,
however. Here is one way:
    let N = ∀α.(α->α)->α->α in
    let Pair = (N->N->N)->N in

    let zero = Λα. λs:α->α. λz:α. z in
    let fst = λx:N. λy:N. x in
    let snd = λx:N. λy:N. y in
    let pair = λx:N. λy:N. λz:N->N->N. z x y in
    let suc = λn:N. Λα. λs:α->α. λz:α. s (n [α] s z) in
    let shift = λp:Pair. pair (suc (p fst)) (p fst) in
    let pre = λn:N. n [Pair] shift (pair zero zero) snd in

    pre (suc (suc (suc zero)));
[If you want to run this code in
[[Benjamin Pierce's type-checker and evaluator for
System F|http://www.cis.upenn.edu/~bcpierce/tapl/index.html]], the
relevant evaluator is called "fullpoly", and you'll need to
truncate the names of "suc(c)" and "pre(d)", since those are
reserved words in Pierce's system.]
Exercise: convince yourself that `zero` has type `N`.
The key to the extra expressive power provided by System F is evident
in the typing imposed by the definition of `pre`. The variable `n` is
typed as a Church number, i.e., as `∀α.(α->α)->α->α`. The type
application `n [Pair]` instantiates `n` in a way that allows it to
manipulate ordered pairs: `n [Pair]: (Pair->Pair)->Pair->Pair`. In
other words, the instantiation turns a Church number into a
pair-manipulating function, which is the heart of the strategy for
this version of predecessor.
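The same construction can be sketched in OCaml (which we turn to below), using a record with an explicitly polymorphic field to emulate the universal type `N`, since OCaml has no explicit Λ. The type `num` and the helper names (`to_int`, `first`, `second`) are our own choices, not part of the System F code above:

```ocaml
(* Church numerals at the System F type ∀α.(α->α)->α->α, via a
   record whose field stays polymorphic (rank-2 polymorphism). *)
type num = { run : 'a. ('a -> 'a) -> 'a -> 'a }

let zero = { run = fun _s z -> z }
let suc n = { run = fun s z -> s (n.run s z) }

(* convert to int for inspection *)
let to_int n = n.run (fun k -> k + 1) 0

(* Pairs of numerals, as in Pair = (N -> N -> N) -> N *)
let pair x y = fun sel -> sel x y
let first x _ = x
let second _ y = y

(* shift (x, y) = (suc x, x); pre n = second (n shift (zero, zero)) *)
let shift p = pair (suc (p first)) (p first)
let pre n = n.run shift (pair zero zero) second
```

The implicit instantiation of `n.run` at the pair type is exactly where System F would write `n [Pair]`.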
Could we try to build a system for doing Church arithmetic in which
the type for numbers always manipulated ordered pairs? The problem is
that the ordered pairs we need here are pairs of numbers. If we tried
to replace the type for Church numbers with a concrete (simple) type,
we would have to replace each `α` with the type for Pairs, `(N -> N ->
N) -> N`. But then we'd have to replace each of these `N`'s with the
type for Church numbers, `(α -> α) -> α -> α`. And then we'd have to
replace each of these `α`'s with... ad infinitum. If we had to choose
a concrete type built entirely from explicit base types, we'd be
stuck in an infinite regress.
[See Benjamin C. Pierce. 2002. *Types and Programming Languages*, MIT
Press.]
In fact, unlike in the simply-typed lambda calculus,
it is even possible to give a type for ω in System F.

<code>ω = λx:(∀α.α->α). x [∀α.α->α] x</code>
In order to see how this works, we'll apply ω to the identity
function:

<code>ω id == (λx:(∀α.α->α). x [∀α.α->α] x) (Λα.λx:α.x)</code>
Since the type of the identity function is `∀α.α->α`, it's the
right type to serve as the argument to ω. The definition of
ω instantiates the identity function by binding the type
variable `α` to the universal type `∀α.α->α`. Instantiating the
identity function in this way results in an identity function whose
type is (in some sense, only accidentally) the same as the original
fully polymorphic identity function.
So in System F, unlike in the simply-typed lambda calculus, it *is*
possible for a function to apply to itself!
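This self-application can also be sketched in OCaml, again wrapping the universal type in a record so that the field keeps its polymorphism; the type name `poly_id` and the rest of the scaffolding are our own assumptions:

```ocaml
(* A wrapper for the universal type ∀α.α->α. *)
type poly_id = { id : 'a. 'a -> 'a }

(* ω = λx:(∀α.α->α). x [∀α.α->α] x:
   x.id is implicitly instantiated at type poly_id, then applied to x itself. *)
let omega (x : poly_id) : poly_id = x.id x

(* Applying ω to the identity function returns an identity function. *)
let result = omega { id = fun v -> v }
```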
Does this mean that we can implement recursion in System F? Not at
all. In fact, despite its differences with the simply-typed lambda
calculus, one important property that System F shares with the
simply-typed lambda calculus is that they are both strongly
normalizing: *every* expression in either system reduces to a normal
form in a finite number of steps.
Not only does a fixed-point combinator remain out of reach, we can't
even construct an infinite loop. This means that although we found a
type for ω, there is no general type for Ω ≡ ω
ω. Furthermore, it turns out that no Turing complete system can
be strongly normalizing, from which it follows that System F is not
Turing complete.
## Polymorphism in natural language
Is the simply-typed lambda calculus enough for analyzing natural
language, or do we need polymorphic types? Or something even more expressive?
The classic case study motivating polymorphism in natural language
comes from coordination. (The locus classicus is Partee and Rooth
1983.)
    Ann left and Bill left.

    Ann read and reviewed the book.
In English (likewise, many other languages), *and* can coordinate
clauses, verb phrases, determiner phrases, transitive verbs, and many
other phrase types. In a garden-variety simply-typed grammar, each
kind of conjunct has a different semantic type, and so we would need
an independent rule for each one. Yet there is a strong intuition
that the contribution of *and* remains constant across all of these
uses. Can we capture this using polymorphic types?
    ann, bill: e
    left, slept: e -> t
    read, reviewed: e -> e -> t
With these basic types, we want to say something like this:
    and:t->t->t = λl:t. λr:t. l r false
    and = Λα.Λβ.λl:α->β.λr:α->β.λx:α. and [β] (l x) (r x)
The idea is that the basic *and* conjoins expressions of type `t`, and
when *and* conjoins functional types, it builds a function that
distributes its argument across the two conjuncts and conjoins the two
results. So `Ann left and slept` will evaluate to `(\x.and(left
x)(slept x)) ann`. Following the terminology of Partee and Rooth, the
strategy of defining the coordination of expressions with complex
types in terms of the coordination of expressions with less complex
types is known as Generalized Coordination.
But the definitions just given are not well-formed expressions in
System F. There are three problems. The first is that we have two
definitions of the same word. The intention is for one of the
definitions to be operative when the type of its arguments is type
`t`, but we have no way of conditioning evaluation on the *type* of an
argument. The second is that for the polymorphic definition, the term
*and* occurs inside of the definition. System F does not have
recursion, so a definition cannot mention the very term it defines.
The third problem is more subtle. The definition as given takes two
types as parameters: the type of the first argument expected by each
conjunct, and the type of the result of applying each conjunct to an
argument of that type. We would like to instantiate the recursive use
of *and* in the definition by using the result type. But fully
instantiating the definition as given requires type application to a
pair of types, not to just a single type. We want to somehow
guarantee that β will always itself be a complex type.
So conjunction and disjunction provide a compelling motivation for
polymorphism in natural language, but we don't yet have the ability to
build the polymorphism into a formal system.
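For comparison, here is how far OCaml's let-polymorphism gets us: each lifting step can be written once, polymorphically in the argument type, but the instantiated conjunction for the result type has to be passed in explicitly rather than recursively overloaded. The names `and_t`, `lift_and`, and the toy denotations are our own illustration:

```ocaml
(* Base case: conjunction at type t (bool in OCaml). *)
let and_t l r = l && r

(* One lifting step, mirroring
   Λα.Λβ. λl:α->β. λr:α->β. λx:α. and [β] (l x) (r x),
   except that the conjunction for β is an explicit extra argument: *)
let lift_and conj l r = fun x -> conj (l x) (r x)

(* Toy denotations, our own invention for illustration: *)
let left x = List.mem x ["ann"]
let slept x = List.mem x ["ann"; "bill"]

(* "Ann left and slept" *)
let left_and_slept = lift_and and_t left slept
let ann_left_and_slept = left_and_slept "ann"
```

Note that nothing here lets a single `and` choose its behavior by inspecting the type of its arguments, which is exactly the missing ingredient.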
And in fact, discussions of generalized coordination in the
linguistics literature are almost always left as meta-level
generalizations over a basic simply-typed grammar. For instance, in
Hendriks' 1992:74 dissertation, generalized coordination is
implemented as a method for generating a suitable set of translation
rules, which are in turn expressed in a simply-typed grammar.
Not incidentally, we're not aware of any programming language that
makes generalized coordination available, despite its naturalness and
ubiquity in natural language. That is, coordination in programming
languages is always at the sentential level. You might be able to
evaluate `(delete file1) and (delete file2)`, but never `delete (file1
and file2)`.
We'll return to thinking about generalized coordination as we get
deeper into types. There will be an analysis in terms of continuations
that will be particularly satisfying.
Type inference in OCaml
-----------------------

OCaml has type inference: the system can often infer what the type of
an expression must be, based on the types of other known expressions.
For instance, if we type

    # let f x = x + 3;;

The system replies with

    val f : int -> int = <fun>
Since `+` is only defined on integers, it has type `int -> int -> int`:

    # (+);;
    - : int -> int -> int = <fun>
The parentheses are there to turn off the trick that allows the two
arguments of `+` to surround it in infix (for linguists, SVO) argument
structure.
In general, tuples with one element are identical to their one
member, though OCaml, like many systems, refuses to try to prove
whether two functional objects may be identical:

    # (f) = f;;
    Exception: Invalid_argument "equal: functional value".
[Note: There is a limited way you can compare functions, using the
`==` operator instead of the `=` operator. Later when we discuss mutation,
we'll discuss the difference between these two equality operations.
Scheme has a similar pair, which it names `eq?` and `equal?`. In Python,
these are `is` and `==` respectively. It's unfortunate that OCaml uses
`==` for the opposite operation that Python and many other languages use
it for. In any case, OCaml will accept `(f) == f` even though it doesn't
accept `(f) = f`. However, don't expect it to figure out in general when
two functions are equivalent. (That question is not Turing computable.)
    # (f) == (fun x -> x + 3);;
    - : bool = false

Here OCaml says (correctly) that the two functions don't stand in the
`==` relation, which basically means they're not represented in the same
chunk of memory. However, as the programmer can see, the functions are
extensionally equivalent. The meaning of `==` is rather weird.]
Booleans in OCaml, and simple pattern matching
----------------------------------------------
Where we would write `true 1 2` in our pure lambda calculus and expect
it to evaluate to `1`, in OCaml Boolean values are not functions
(equivalently, they're functions that take zero arguments). Instead,
selection is accomplished as follows:
    # if true then 1 else 2;;
    - : int = 1

The types of the `then` clause and of the `else` clause must be the
same.
The `if` construction can be re-expressed by means of the following
pattern-matching expression:

    match <bool expression> with true -> <expression1> | false -> <expression2>
    # match true with true -> 1 | false -> 2;;
    - : int = 1
    # match 3 with 1 -> 1 | 2 -> 4 | 3 -> 9;;
    - : int = 9
All functions in OCaml take exactly one argument. Even this one:
    # let f x y = x + y;;
    val f : int -> int -> int = <fun>

Here's how to tell that `f` has been curry'd:

    # f 1;;
    - : int -> int = <fun>
After we've given our `f` one argument, it returns a function that is
still waiting for another argument.
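The desugared form makes the same point explicit (the names `f`, `g`, `sum` here are our own sketch):

```ocaml
(* let f x y = x + y is sugar for a function that returns a function: *)
let f = fun x -> (fun y -> x + y)

let g = f 1       (* int -> int: still waiting for its second argument *)
let sum = g 2     (* both arguments supplied: 1 + 2 *)
```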
There is a special type in OCaml called `unit`. There is exactly one
object in this type, written `()`. So

    # ();;
    - : unit = ()
Just as you can define functions that take constants for arguments,
you can also define functions that take the unit as their argument,
thus:

    val f : unit -> int = <fun>
Then the only argument of the correct type that you can possibly apply
`f` to is the unit, `()`.
Now why would that be useful?
Let's have some fun: think of `rec` as our `Y` combinator. Then

    # let rec f n = if (0 = n) then 1 else (n * (f (n - 1)));;
    val f : int -> int = <fun>
We can't define a function that is exactly analogous to our ω.
We could try `let rec omega x = x x;;`. What happens?
[Note: if you want to learn more OCaml, you might come back here someday and try:

    # let id x = x;;
    val id : 'a -> 'a = <fun>
    # let unwrap (`Wrap a) = a;;
    val unwrap : [< `Wrap of 'a ] -> 'a = <fun>
    # let omega ((`Wrap x) as y) = x y;;
    val omega : [< `Wrap of [> `Wrap of 'a ] -> 'b as 'a ] -> 'b = <fun>
    # unwrap (omega (`Wrap id)) == id;;
    - : bool = true
    # unwrap (omega (`Wrap omega));;
    <Infinite loop, need to control-c to interrupt>

But we won't try to explain this now.]
Even if we can't (easily) express omega in OCaml, we can do this:
    # let rec blackhole x = blackhole x;;

By the way, what's the type of this function?

If you then apply this `blackhole` function to an argument, the
interpreter goes into an infinite loop, and you have to type control-c
to interrupt it.
Oh, one more thing: lambda expressions look like this:

    # (fun x -> x);;
    - : 'a -> 'a = <fun>
    # (fun x -> x) true;;
    - : bool = true

(But `(fun x -> x x)` still won't work.)
You may also see this:

    # (function x -> x);;
    - : 'a -> 'a = <fun>

This works the same as `fun` in simple cases like this, and slightly
differently in more complex cases. If you learn more OCaml, you'll read
about the difference between them.
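One concrete difference, as a sketch (the names `describe` and `describe'` are ours): `function` opens a pattern match on its single argument, so multiple patterns can follow directly, whereas `fun` needs an explicit `match`.

```ocaml
(* `function` pattern-matches its one argument directly: *)
let describe = function true -> 1 | false -> 2

(* with `fun`, the match must be written out: *)
let describe' = fun b -> match b with true -> 1 | false -> 2
```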
We can try our usual tricks:

    # (fun x -> true) blackhole;;
    - : bool = true

OCaml declined to try to fully reduce the argument before applying the
lambda function. Question: why is that? Didn't we say that OCaml is a
call-by-value/eager language?
Remember that `blackhole` is a function too, so we can
reverse the order of the arguments:

    # blackhole (fun x -> true);;
    <Infinite loop, need to control-c to interrupt>
Now consider the following variations in behavior:

    # let test = blackhole blackhole;;
    <Infinite loop, need to control-c to interrupt>

    # let test () = blackhole blackhole;;
    val test : unit -> 'a = <fun>

    # test;;
    - : unit -> 'a = <fun>

    # test ();;
    <Infinite loop, need to control-c to interrupt>
We can use functions that take arguments of type `unit` to control
execution. In Scheme parlance, functions on the `unit` type are called
*thunks* (which I've always assumed was a blend of "think" and "chunk").
Question: why do thunks work? We know that `blackhole ()` doesn't
terminate, so why do expressions like

    let f = fun () -> blackhole ()

terminate?
Bottom type, divergence
-----------------------
Expressions that don't terminate all belong to the **bottom type**. This is a subtype of every other type. That is, anything of bottom type belongs to every other type as well. More advanced type systems have more examples of subtyping: for example, they might make `int` a subtype of `real`. But the core type system of OCaml doesn't have any general subtyping relations. (Neither does System F.) Just this one: that expressions of the bottom type also belong to every other type. It's as if every type definition in OCaml, even the built in ones, had an implicit extra clause:
    type 'a option = None | Some of 'a;;
    type 'a option = None | Some of 'a | bottom;;
Here are some exercises that may help better understand this. Figure out what is the type of each of the following:
    let rec blackhole x = blackhole x in blackhole;;

    let rec blackhole x = blackhole x in blackhole 1;;

    let rec blackhole x = blackhole x in fun (y:int) -> blackhole y y y;;

    let rec blackhole x = blackhole x in (blackhole 1) + 2;;

    let rec blackhole x = blackhole x in (blackhole 1) || false;;

    let rec blackhole x = blackhole x in 2 :: (blackhole 1);;
By the way, what's the type of this:

    let rec blackhole (x:'a) : 'a = blackhole x in blackhole
Back to thunks: the reason you'd want to control evaluation with
thunks is to manipulate when "effects" happen. In a strongly
normalizing system, like the simply-typed lambda calculus or System F,
there are no "effects." In Scheme and OCaml, on the other hand, we can
write programs that have effects. One sort of effect is printing.
Another sort of effect is mutation, which we'll be looking at soon.
Continuations are yet another sort of effect. None of these are yet on
the table though. The only sort of effect we've got so far is
*divergence*, or non-termination. So the only thing thunks are useful
for so far is controlling whether an expression that would diverge if
fully evaluated actually does diverge. As we consider richer
languages, thunks will become more useful.
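As a preview of how thunks interact with effects other than divergence, here is a sketch that uses a mutable counter (mutation is introduced later in these notes; the names `counter`, `tick`, `before`, `after` are our own) to observe exactly when an effect fires:

```ocaml
(* A counter we can inspect to see when the effect happens. *)
let counter = ref 0

(* Defining the thunk performs no effect... *)
let tick = fun () -> (counter := !counter + 1)

let before = !counter    (* still 0: the effect hasn't fired yet *)

(* ...forcing the thunk with () is what fires the effect. *)
let () = tick ()
let after = !counter     (* now 1 *)
```

Just as wrapping `blackhole ()` in a thunk postpones divergence, wrapping an effectful expression in a thunk postpones its effect until the thunk is forced.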