[[!toc levels=2]]

# System F and recursive types

In the simply-typed lambda calculus, we write types like σ -> τ. This looks like logical implication. We'll take that resemblance seriously when we discuss the Curry-Howard correspondence. In the meantime, note that types respect modus ponens:
    Expression    Type      Implication
    -----------------------------------
    fn            α -> β    α ⊃ β
    arg           α         α
    ------        ------    --------
    (fn arg)      β         β

The implication in the right-hand column is modus ponens, of course.
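To anticipate the OCaml material later on this page, here is a minimal sketch of the same pattern in OCaml; the names `fn`, `arg`, and `result` are purely illustrative:

    (* applying something of type int -> bool to something of type int
       yields something of type bool, just as α ⊃ β and α together yield β *)
    let fn : int -> bool = fun n -> n > 0    (* α -> β *)
    let arg : int = 3                        (* α      *)
    let result : bool = fn arg               (* β      *)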
System F was discovered by Girard (the same guy who invented Linear Logic), but it was independently proposed around the same time by Reynolds, who called his version the *polymorphic lambda calculus*. (Reynolds was also an early player in the development of continuations.)

System F enhances the simply-typed lambda calculus with abstraction over types. Ordinary lambda abstraction binds a variable ranging over expressions (terms); type abstraction binds a variable ranging over types.

In order to state System F, we'll need to adopt the notational convention (which will last throughout the rest of the course) that "x:α" represents an expression `x` whose type is α.

Then System F can be specified as follows (choosing notation that will match up with usage in OCaml, whose type system is based on System F):

    System F:
    ---------
    types       τ ::= c | 'a | τ1 -> τ2 | ∀'a. τ
    expressions e ::= x | λx:τ. e | e1 e2 | Λ'a. e | e [τ]

In the definition of the types, "`c`" is a type constant. Type constants play the role in System F that base types play in the simply-typed lambda calculus. So in a linguistics context, type constants might include `e` and `t`. "`'a`" is a type variable. The tick mark just indicates that the variable ranges over types rather than over values; in various discussions below and later, type variables can be distinguished by using letters from the Greek alphabet (α, β, etc.), or by using capital Roman letters (X, Y, etc.). "`τ1 -> τ2`" is the type of a function from expressions of type `τ1` to expressions of type `τ2`. And "`∀'a. τ`" is called a universal type, since it universally quantifies over the type variable `'a`. You can expect that in `∀'a. τ`, the type `τ` will usually have at least one free occurrence of `'a` somewhere inside of it.

In the definition of the expressions, we have variables "`x`" as usual. Abstracts "`λx:τ. e`" are similar to abstracts in the simply-typed lambda calculus, except that they have their bound variable annotated with a type. Applications "`e1 e2`" are just like in the simply-typed lambda calculus. In addition to variables, abstracts, and applications, we have two additional ways of forming expressions: "`Λ'a. e`" is called a *type abstraction*, and "`e [τ]`" is called a *type application*. The idea is that Λ is a capital λ: just like the lower-case λ, Λ binds variables in its body, except that unlike λ, Λ binds type variables instead of expression variables. So in the expression

    Λ 'a (λ x:'a . x)

the Λ binds the type variable `'a` that occurs in the λ abstract. Of course, as long as type variables are carefully distinguished from expression variables (by tick marks, Grecification, or capitalization), there is no need to distinguish expression abstraction from type abstraction by also changing the shape of the lambda.

The expression just displayed is a polymorphic version of the identity function. It defines one general identity function that can be adapted for use with expressions of any type. To get it ready to apply to, say, a variable of Boolean type, just do this:

    (Λ 'a (λ x:'a . x)) [t]

This type application (where `t` is a type constant for Boolean truth values) specifies the value of the type variable `'a`. Not surprisingly, the type of this type application is a function from Booleans to Booleans:

    ((Λ 'a (λ x:'a . x)) [t]): (t -> t)

Likewise, if we had instantiated the type variable as an entity (base type `e`), the resulting identity function would have been a function of type `e -> e`:

    ((Λ 'a (λ x:'a . x)) [e]): (e -> e)

Clearly, for any choice of a type `'a`, the identity function can be instantiated as a function from expressions of type `'a` to expressions of type `'a`. In general, then, the type of the uninstantiated (polymorphic) identity function is

    (Λ 'a (λ x:'a . x)): (∀ 'a . 'a -> 'a)
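Since OCaml's type system is based on System F (as just noted), the polymorphic identity function can be sketched directly in OCaml. This aside is not from the original notes: OCaml infers the type instantiations itself, and the annotations below merely play the role of the explicit type applications `[t]` and `[e]`, with `bool` and `int` standing in for the base types:

    (* the polymorphic identity; OCaml reports its type as 'a -> 'a,
       which corresponds to the System F type ∀'a. 'a -> 'a *)
    let id = fun x -> x

    (* annotations instantiate the type variable, as [t] and [e] do above *)
    let bool_id = (id : bool -> bool)
    let int_id  = (id : int -> int)

    let _ = bool_id true    (* : bool *)
    let _ = int_id 3        (* : int  *)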
Pred in System F
----------------

We saw that the predecessor function couldn't be expressed in the simply-typed lambda calculus. It *can* be expressed in System F, however. Here is one way, coded in [[Benjamin Pierce's type-checker and evaluator for System F|http://www.cis.upenn.edu/~bcpierce/tapl/index.html]] (the relevant evaluator is called "fullpoly"):

    N = All X . (X->X)->X->X;
    Pair = (N -> N -> N) -> N;

    let zero = lambda X . lambda s:X->X . lambda z:X. z in
    let fst = lambda x:N . lambda y:N . x in
    let snd = lambda x:N . lambda y:N . y in
    let pair = lambda x:N . lambda y:N . lambda z:N->N->N . z x y in
    let suc = lambda n:N . lambda X . lambda s:X->X . lambda z:X . s (n [X] s z) in
    let shift = lambda p:Pair . pair (suc (p fst)) (p fst) in
    let pre = lambda n:N . n [Pair] shift (pair zero zero) snd in

    pre (suc (suc (suc zero)));

We've truncated the names of "suc(c)" and "pre(d)", since those are reserved words in Pierce's system. Note that in this code, there is no typographic distinction between ordinary lambda and type-level lambda, though the difference is encoded in whether the variables are lower case (for ordinary lambda) or upper case (for type-level lambda).

The key to the extra expressive power provided by System F is evident in the typing imposed by the definition of `pre`. The variable `n` is typed as a Church number, i.e., as `All X . (X->X)->X->X`. The type application `n [Pair]` instantiates `n` in a way that allows it to manipulate ordered pairs: `n [Pair]: (Pair->Pair)->Pair->Pair`. In other words, the instantiation turns a Church number into a pair-manipulating function, which is the heart of the strategy for this version of predecessor.

Could we try to build a system for doing Church arithmetic in which the type for numbers always manipulated ordered pairs? The problem is that the ordered pairs we need here are pairs of numbers. If we tried to replace the type for Church numbers with a concrete (simple) type, we would have to replace each `X` with the type for pairs, `(N -> N -> N) -> N`. But then we'd have to replace each of these `N`'s with the type for Church numbers, `(X -> X) -> X -> X`. And then we'd have to replace each of these `X`'s with... ad infinitum. If we had to choose a concrete type built entirely from explicit base types, we'd be unable to proceed.

[See Benjamin C. Pierce. 2002. *Types and Programming Languages*, MIT Press, chapter 23.]
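The pair-shifting strategy itself doesn't depend on System F. Here is a sketch of the same idea in OCaml over native ints rather than Church numbers (the names `shift`, `iterate`, and `pre` are just illustrative, not from Pierce's code): iterate `shift` n times starting from the pair (0, 0), then project out the second component.

    (* shift maps (m, _) to (m + 1, m); after n shifts starting from (0, 0)
       the pair holds (n, n - 1) for n >= 1, and pre 0 comes out as 0 *)
    let shift (m, _) = (m + 1, m)

    let pre n =
      let rec iterate k p = if k = 0 then p else iterate (k - 1) (shift p) in
      snd (iterate n (0, 0))

    let () = assert (pre 4 = 3 && pre 0 = 0)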
Typing ω
--------

In fact, unlike in the simply-typed lambda calculus, it is even possible to give a type for ω in System F.

    ω = lambda x:(All X . X->X) . x [All X . X->X] x

In order to see how this works, we'll apply ω to the identity function.

    ω id == (lambda x:(All X . X->X) . x [All X . X->X] x) (lambda X . lambda x:X . x)

Since the type of the identity function is `(All X . X->X)`, it's the right type to serve as the argument to ω. The definition of ω instantiates the identity function by binding the type variable `X` to the universal type `All X . X->X`. Instantiating the identity function in this way results in an identity function whose type is (in some sense, only accidentally) the same as the original fully polymorphic identity function.

So in System F, unlike in the simply-typed lambda calculus, it *is* possible for a function to apply to itself!

Does this mean that we can implement recursion in System F? Not at all. In fact, despite its differences with the simply-typed lambda calculus, one important property that System F shares with the simply-typed lambda calculus is that they are both strongly normalizing: *every* expression in either system reduces to a normal form in a finite number of steps.

Not only does a fixed-point combinator remain out of reach, we can't even construct an infinite loop. This means that although we found a type for ω, there is no general type for Ω ≡ ω ω. Furthermore, it turns out that no Turing-complete system can be strongly normalizing, from which it follows that System F is not Turing complete.

# Types in OCaml

OCaml has type inference: the system can often infer what the type of an expression must be, based on the type of other known expressions. For instance, if we type

    # let f x = x + 3;;

The system replies with

    val f : int -> int = <fun>

Since `+` is only defined on integers, it has type

    # (+);;
    - : int -> int -> int = <fun>

The parentheses are there to turn off the trick that allows the two arguments of `+` to surround it in infix (for linguists, SOV) argument order. That is,

    # 3 + 4 = (+) 3 4;;
    - : bool = true

In general, tuples with one element are identical to their one element:

    # (3) = 3;;
    - : bool = true

though OCaml, like many systems, refuses to try to prove whether two functional objects may be identical:

    # (f) = f;;
    Exception: Invalid_argument "equal: functional value".

Oh well.

[Note: There is a limited way you can compare functions, using the `==` operator instead of the `=` operator. Later when we discuss mutation, we'll discuss the difference between these two equality operations. Scheme has a similar pair, which they name `eq?` and `equal?`. In Python, these are `is` and `==` respectively. It's unfortunate that OCaml's `==` corresponds to Python's `is` rather than to Python's `==`. In any case, OCaml will accept `(f) == f` even though it doesn't accept `(f) = f`. However, don't expect it to figure out in general when two functions are equivalent. (That question is not Turing computable.)

    # (f) == (fun x -> x + 3);;
    - : bool = false

Here OCaml says (correctly) that the two functions don't stand in the `==` relation, which basically means they're not represented in the same chunk of memory. However, as the programmer can see, the functions are extensionally equivalent. The meaning of `==` is rather weird.]

Booleans in OCaml, and simple pattern matching
----------------------------------------------

Where we would write `true 1 2` in our pure lambda calculus and expect it to evaluate to `1`, in OCaml boolean types are not functions (equivalently, they're functions that take zero arguments). Instead, selection is accomplished as follows:

    # if true then 1 else 2;;
    - : int = 1

The types of the `then` clause and of the `else` clause must be the same.

The `if` construction can be re-expressed by means of the following pattern-matching expression:

    match <bool expression> with true -> <result if true> | false -> <result if false>

That is,

    # match true with true -> 1 | false -> 2;;
    - : int = 1

Compare with

    # match 3 with 1 -> 1 | 2 -> 4 | 3 -> 9;;
    - : int = 9
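A small aside, not part of the original notes: the match over 1, 2, and 3 just shown is not exhaustive, so the toplevel warns about it, and it would raise `Match_failure` on any other integer. A sketch of a total version using a wildcard pattern:

    let square n = match n with
      | 1 -> 1
      | 2 -> 4
      | 3 -> 9
      | _ -> n * n   (* the wildcard catches every other int *)

    let () = assert (square 3 = 9 && square 5 = 25)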
Unit and thunks
---------------

All functions in OCaml take exactly one argument. Even this one:

    # let f x y = x + y;;
    # f 2 3;;
    - : int = 5

Here's how to tell that `f` has been curry'd:

    # f 2;;
    - : int -> int = <fun>

After we've given our `f` one argument, it returns a function that is still waiting for another argument.

There is a special type in OCaml called `unit`. There is exactly one object in this type, written `()`. So

    # ();;
    - : unit = ()

Just as you can define functions that take constants for arguments

    # let f 2 = 3;;
    # f 2;;
    - : int = 3

you can also define functions that take the unit as their argument, thus

    # let f () = 3;;
    val f : unit -> int = <fun>

Then the only argument you can possibly apply `f` to that is of the correct type is the unit:

    # f ();;
    - : int = 3

Now why would that be useful?

Let's have some fun: think of `rec` as our `Y` combinator. Then

    # let rec f n = if (0 = n) then 1 else (n * (f (n - 1)));;
    val f : int -> int = <fun>
    # f 5;;
    - : int = 120

We can't define a function that is exactly analogous to our ω. We could try `let rec omega x = x x;;` — what happens?

[Note: if you want to learn more OCaml, you might come back here someday and try:

    # let id x = x;;
    val id : 'a -> 'a = <fun>
    # let unwrap (`Wrap a) = a;;
    val unwrap : [< `Wrap of 'a ] -> 'a = <fun>
    # let omega ((`Wrap x) as y) = x y;;
    val omega : [< `Wrap of [> `Wrap of 'a ] -> 'b as 'a ] -> 'b = <fun>
    # unwrap (omega (`Wrap id)) == id;;
    - : bool = true
    # unwrap (omega (`Wrap omega));;

But we won't try to explain this now.]

Even if we can't (easily) express omega in OCaml, we can do this:

    # let rec blackhole x = blackhole x;;

By the way, what's the type of this function?

If you then apply this `blackhole` function to an argument,

    # blackhole 3;;

the interpreter goes into an infinite loop, and you have to type control-c to break the loop.

Oh, one more thing: lambda expressions look like this:

    # (fun x -> x);;
    - : 'a -> 'a = <fun>
    # (fun x -> x) true;;
    - : bool = true

(But `(fun x -> x x)` still won't work.)

You may also see this:

    # (function x -> x);;
    - : 'a -> 'a = <fun>

This works the same as `fun` in simple cases like this, and slightly differently in more complex cases. If you learn more OCaml, you'll read about the difference between them.

We can try our usual tricks:

    # (fun x -> true) blackhole;;
    - : bool = true

OCaml declined to try to fully reduce the argument before applying the lambda function. Question: Why is that? Didn't we say that OCaml is a call-by-value/eager language?

Remember that `blackhole` is a function too, so we can reverse the order of the arguments:

    # blackhole (fun x -> true);;

Infinite loop.

Now consider the following variations in behavior:

    # let test = blackhole blackhole;;
    # let test () = blackhole blackhole;;
    val test : unit -> 'a = <fun>
    # test;;
    - : unit -> 'a = <fun>
    # test ();;

We can use functions that take arguments of type `unit` to control execution. In Scheme parlance, functions on the `unit` type are called *thunks* (which I've always assumed was a blend of "think" and "chunk").

Question: why do thunks work? We know that `blackhole ()` doesn't terminate, so why do expressions like

    let f = fun () -> blackhole () in true

terminate?
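As something to experiment with before answering that question, here is a small self-contained sketch (the names are illustrative, not from the notes) that you can paste into a file or the toplevel:

    (* building a thunk around a diverging expression terminates;
       only forcing the thunk with () would diverge *)
    let rec blackhole x = blackhole x

    let safe = fun () -> blackhole blackhole   (* defined instantly *)

    let b = (fun _ -> true) safe               (* evaluates to true *)

    (* let _ = safe ()   (* uncommenting this line would loop forever *) *)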
Bottom type, divergence
-----------------------

Expressions that don't terminate all belong to the **bottom type**. This is a subtype of every other type. That is, anything of bottom type belongs to every other type as well. More advanced type systems have more examples of subtyping: for example, they might make `int` a subtype of `real`. But the core type system of OCaml doesn't have any general subtyping relations. (Neither does System F.) Just this one: expressions of the bottom type also belong to every other type. It's as if every type definition in OCaml, even the built-in ones, had an implicit extra clause:

    type 'a option = None | Some of 'a;;
    type 'a option = None | Some of 'a | bottom;;

Here are some exercises that may help you better understand this. Figure out what the type of each of the following is:

    fun x y -> y;;
    fun x (y:int) -> y;;
    fun x y : int -> y;;
    let rec blackhole x = blackhole x in blackhole;;
    let rec blackhole x = blackhole x in blackhole 1;;
    let rec blackhole x = blackhole x in fun (y:int) -> blackhole y y y;;
    let rec blackhole x = blackhole x in (blackhole 1) + 2;;
    let rec blackhole x = blackhole x in (blackhole 1) || false;;
    let rec blackhole x = blackhole x in 2 :: (blackhole 1);;

By the way, what's the type of this:

    let rec blackhole (x:'a) : 'a = blackhole x in blackhole

Back to thunks: the reason you'd want to control evaluation with thunks is to manipulate when "effects" happen. In a strongly normalizing system, like the simply-typed lambda calculus or System F, there are no "effects." In Scheme and OCaml, on the other hand, we can write programs that have effects. One sort of effect is printing. Another sort of effect is mutation, which we'll be looking at soon. Continuations are yet another sort of effect. None of these are yet on the table, though. The only sort of effect we've got so far is *divergence* or non-termination. So the only thing thunks are useful for yet is controlling whether an expression that would diverge if we tried to fully evaluate it does diverge. As we consider richer languages, thunks will become more useful.
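Since printing is mentioned above as one of the upcoming sorts of effect, here is a hedged sketch (not from the original notes) of the same thunk trick controlling when, and how often, a printing effect happens:

    (* defining the thunk prints nothing; each forcing prints once *)
    let noisy = fun () -> print_endline "performing an effect"

    let () = noisy ()    (* prints *)
    let () = noisy ()    (* prints again *)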