[[!toc levels=2]]

# System F: the polymorphic lambda calculus

The simply-typed lambda calculus is beautifully simple, but it can't
even express the predecessor function, let alone full recursion. And
we'll see shortly that there is good reason to be unsatisfied with the
simply-typed lambda calculus as a way of expressing natural language
meaning. So we will need to get more sophisticated about types. The
next step in that journey will be to consider System F.

System F was discovered by Girard (the same guy who invented Linear
Logic), but it was independently proposed around the same time by
Reynolds, who called his version the *polymorphic lambda calculus*.
(Reynolds was also an early player in the development of
continuations.)

System F enhances the simply-typed lambda calculus with abstraction
over types. Normal lambda abstraction abstracts (binds) an expression
(a term); type abstraction abstracts (binds) a type.

In order to state System F, we'll need to adopt the notational
convention (which will last throughout the rest of the course) that
"x:α" represents an expression `x` whose type is α.

Then System F can be specified as follows:

    System F:
    ---------
    types       τ ::= c | α | τ1 -> τ2 | ∀α.τ
    expressions e ::= x | λx:τ.e | e1 e2 | Λα.e | e [τ]

In the definition of the types, "`c`" is a type constant. Type
constants play the role in System F that base types play in the
simply-typed lambda calculus. So in a linguistics context, type
constants might include `e` and `t`. "α" is a type variable. In
various discussions, type variables are distinguished by using letters
from the Greek alphabet (α, β, etc.), as we do here, or by
using capital roman letters (X, Y, etc.), or by adding a tick mark
(`'a`, `'b`, etc.), as in OCaml. "`τ1 -> τ2`" is the type of a
function from expressions of type `τ1` to expressions of type `τ2`.
And "`∀α.τ`" is called a universal type, since it universally
quantifies over the type variable `α`. You can expect that in
`∀α.τ`, the type `τ` will usually have at least one free occurrence of
`α` somewhere inside of it.

In the definition of the expressions, we have variables "`x`" as usual.
Abstracts "`λx:τ.e`" are similar to abstracts in the simply-typed lambda
calculus, except that the bound variable is annotated with a
type. Applications "`e1 e2`" are just like in the simply-typed lambda
calculus.

In addition to variables, abstracts, and applications, we have two
additional ways of forming expressions: "`Λα.e`" is called a *type
abstraction*, and "`e [τ]`" is called a *type application*. The idea
is that Λ is a capital λ: just like the lower-case λ, Λ binds
variables in its body, except that unlike λ, Λ binds type variables
instead of expression variables. So in the expression

    Λα (λ x:α. x)

the Λ binds the type variable `α` that occurs in the λ abstract.

This expression is a polymorphic version of the identity function. It
defines one general identity function that can be adapted for use with
expressions of any type.
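
For comparison, and as a preview of the OCaml material later in these
notes, here is roughly how the same thing looks in OCaml. There the
type abstraction is left implicit: the type variable `'a` is tacitly
bound by a ∀ out front, and instantiation happens silently at each
use, rather than through an explicit type application:

    # let id = fun x -> x;;
    val id : 'a -> 'a = <fun>
    # id true;;
    - : bool = true
    # id 3;;
    - : int = 3

In System F, by contrast, both the abstraction and the instantiation
are written out explicitly.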

In order to get this identity function ready to apply to, say, an
expression of type Boolean, we perform a type application:

    (Λα (λ x:α. x)) [t]

This type application (where `t` is a type constant for Boolean truth
values) specifies the value of the type variable `α`. Not
surprisingly, the type of the expression that results from this type
application is a function from Booleans to Booleans:

    ((Λα (λ x:α. x)) [t]): (t->t)

Likewise, if we had instantiated the type variable as an entity (base
type `e`), the resulting identity function would have been a function
of type `e -> e`:

    ((Λα (λ x:α. x)) [e]): (e->e)

Clearly, for any choice of a type `α`, the identity function can be
instantiated as a function from expressions of type `α` to expressions
of type `α`. In general, then, the type of the uninstantiated
(polymorphic) identity function is

    (Λα (λ x:α. x)): (∀α. α->α)

Pred in System F
----------------

We saw that the predecessor function couldn't be expressed in the
simply-typed lambda calculus. It *can* be expressed in System F,
however. Here is one way:

    let N = ∀α.(α->α)->α->α in
    let Pair = (N->N->N)->N in

    let zero = Λα. λs:α->α. λz:α. z in
    let fst = λx:N. λy:N. x in
    let snd = λx:N. λy:N. y in
    let pair = λx:N. λy:N. λz:N->N->N. z x y in
    let succ = λn:N. Λα. λs:α->α. λz:α. s (n [α] s z) in
    let shift = λp:Pair. pair (succ (p fst)) (p fst) in
    let pred = λn:N. n [Pair] shift (pair zero zero) snd in

    pred (succ (succ (succ zero)));

[If you want to run this code in
[[Benjamin Pierce's type-checker and evaluator for
System F|http://www.cis.upenn.edu/~bcpierce/tapl/index.html]], the
relevant evaluator is called "fullpoly", and you'll need to truncate
the names "succ" and "pred" to "suc" and "pre", since the full names
are reserved words in Pierce's system.]

Exercise: convince yourself that `zero` has type `N`.

The key to the extra expressive power provided by System F is evident
in the typing imposed by the definition of `pred`. The variable `n`
is typed as a Church number, i.e., as `N ≡ ∀α.(α->α)->α->α`.
The type application `n [Pair]` instantiates `n` in a way that allows
it to manipulate ordered pairs: `n [Pair]: (Pair->Pair)->Pair->Pair`.
In other words, the instantiation turns a Church number into a certain
pair-manipulating function, which is the heart of the strategy for
this version of computing the predecessor function.

Could we try to accommodate the needs of the predecessor function by
building a system for doing Church arithmetic in which the type for
numbers always manipulated ordered pairs? The problem is that the
ordered pairs we need here are pairs of numbers. If we tried to
replace the type for Church numbers with a concrete (simple) type, we
would have to replace each `N` with the type for pairs, `(N -> N -> N)
-> N`. But then we'd have to replace each of these `N`'s with the
type for Church numbers, which we're imagining is `(Pair -> Pair) ->
Pair -> Pair`. And then we'd have to replace each of these `Pair`'s
with... ad infinitum. If we had to choose a concrete type built
entirely from explicit base types, we'd be unable to proceed.

[See Benjamin C. Pierce. 2002. *Types and Programming Languages*, MIT
Press, chapter 23.]
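
Before moving on, here is a sketch of the same pair-shifting strategy
in OCaml, using OCaml's native tuples rather than the encoded `Pair`
type; OCaml's `let`-polymorphism quietly does the work of the explicit
`[Pair]` instantiation. (The `to_int` helper is our own, just for
display.)

    let zero = fun s z -> z;;
    let succ n = fun s z -> s (n s z);;

    (* shift maps a pair (m, _) to (m+1, m); applying it k times to
       (zero, zero) yields (k, k-1) once k >= 1 *)
    let pred n =
      let shift (m, _) = (succ m, m) in
      snd (n shift (zero, zero));;

    (* display a Church number as an OCaml int *)
    let to_int n = n (fun x -> x + 1) 0;;

    to_int (pred (succ (succ (succ zero))));;

Entered into the OCaml toplevel, the last line should report
`- : int = 2`.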

Typing ω
--------

In fact, unlike in the simply-typed lambda calculus, it is even
possible to give a type for ω in System F.

    ω = λx:(∀α.α->α). x [∀α.α->α] x

In order to see how this works, we'll apply ω to the identity
function.

    ω id ≡ (λx:(∀α.α->α). x [∀α.α->α] x) (Λα.λx:α.x)

Since the type of the identity function is `∀α.α->α`, it's the
right type to serve as the argument to ω. The definition of
ω instantiates the identity function by binding the type
variable `α` to the universal type `∀α.α->α`. Instantiating the
identity function in this way results in an identity function whose
type is (in some sense, only accidentally) the same as the original
fully polymorphic identity function.

So in System F, unlike in the simply-typed lambda calculus, it *is*
possible for a function to apply to itself!

Does this mean that we can implement recursion in System F? Not at
all. In fact, despite its differences with the simply-typed lambda
calculus, one important property that System F shares with the
simply-typed lambda calculus is that they are both strongly
normalizing: *every* expression in either system reduces to a normal
form in a finite number of steps.

Not only does a fixed-point combinator remain out of reach, we can't
even construct an infinite loop. This means that although we found a
type for ω, there is no general type for Ω ≡ ω ω. In fact, it turns
out that no Turing-complete system can be strongly normalizing, from
which it follows that System F is not Turing complete.


## Polymorphism in natural language

Is the simply-typed lambda calculus enough for analyzing natural
language, or do we need polymorphic types? Or something even more
expressive?

The classic case study motivating polymorphism in natural language
comes from coordination. (The locus classicus is Partee and Rooth
1983.)

                                       Type of the argument of "and":
    Ann left and Bill left.            t
    Ann left and slept.                e->t
    Ann and Bill left.                 (e->t)->t (i.e., generalized quantifiers)
    Ann read and reviewed the book.    e->e->t

In English (likewise, many other languages), *and* can coordinate
clauses, verb phrases, determiner phrases, transitive verbs, and many
other phrase types. In a garden-variety simply-typed grammar, each
kind of conjunct has a different semantic type, and so we would need
an independent rule for each one. Yet there is a strong intuition
that the contribution of *and* remains constant across all of these
uses.

Can we capture this using polymorphic types?

    Ann, Bill        e
    left, slept      e -> t
    read, reviewed   e -> e -> t

With these basic types, we want to say something like this:

    and:t->t->t = λl:t. λr:t. l r false
    gen_and = Λα.Λβ.λf:β->β->β.λl:α->β.λr:α->β.λx:α. f (l x) (r x)

The idea is that the basic *and* (the one defined in the first line)
conjoins expressions of type `t`. But when *and* conjoins functional
types (the definition in the second line), it builds a function that
distributes its argument across the two conjuncts and then applies the
appropriate lower-order instance of *and*.

    and (Ann left) (Bill left)
    gen_and [e] [t] and left slept
    gen_and [e] [e->t] (gen_and [e] [t] and) read reviewed

Following the terminology of Partee and Rooth, this strategy of
defining the coordination of expressions with complex types in terms
of the coordination of expressions with less complex types is known as
Generalized Coordination, which is why we call the polymorphic part of
the definition `gen_and`.
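
Here is a sketch of how `gen_and` can be rendered in OCaml, with
entities modeled as strings and truth values as `bool` (the little
lexicon and the names below are our own toy stand-ins). As in the
System F definition, the lower-order *and* still has to be supplied to
`gen_and` explicitly:

    (* distribute the shared argument over both conjuncts, then conjoin with f *)
    let gen_and f l r = fun x -> f (l x) (r x);;

    (* the basic, truth-functional "and" *)
    let and_t (l : bool) (r : bool) = l && r;;

    (* toy lexicon *)
    let left x = (x = "Ann" || x = "Bill");;
    let slept x = (x = "Ann");;
    let read x y = (x = "Ann" && y = "the book");;
    let reviewed x y = (x = "Ann" && y = "the book");;

    (* "left and slept", of type e -> t *)
    let left_and_slept = gen_and and_t left slept;;

    (* "read and reviewed", of type e -> e -> t; its f argument is itself
       built by gen_and from the lower-order and *)
    let read_and_reviewed = gen_and (gen_and and_t) read reviewed;;

    left_and_slept "Ann";;
    read_and_reviewed "Ann" "the book";;

Both of the last two lines evaluate to `true`. Because `gen_and` is
`let`-bound, OCaml generalizes it to a polymorphic type along the
lines of `('a -> 'b -> 'c) -> ('d -> 'a) -> ('d -> 'b) -> 'd -> 'c`,
which is what lets a single definition serve both the `e->t` and the
`e->e->t` cases.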

In the first line, the basic *and* is ready to conjoin two truth
values. In the second line, the polymorphic definition of `gen_and`
makes explicit exactly how the meaning of *and* when it coordinates
verb phrases depends on the meaning of the basic truth connective.
Likewise, when *and* coordinates transitive verbs of type `e->e->t`,
the generalized *and* depends on the `e->t` version constructed for
dealing with coordinated verb phrases.

On the one hand, this definition accurately expresses the way in which
the meaning of the conjunction of more complex types relates to the
meaning of the conjunction of simpler types. On the other hand, it's
awkward to have to explicitly supply an expression each time that
builds up the meaning of the *and* that coordinates the expressions of
the simpler types. We'd like to have that automatically handled by
the polymorphic definition; but that would require writing code that
behaved differently depending on the types of its type arguments,
which goes beyond the expressive power of System F.

And in fact, discussions of generalized coordination in the
linguistics literature are almost always left as meta-level
generalizations over a basic simply-typed grammar. For instance, in
Hendriks's dissertation (1992: 74), generalized coordination is
implemented as a method for generating a suitable set of translation
rules, which are in turn expressed in a simply-typed grammar.

There is some work using System F to express generalizations about
natural language: Ponvert, Elias. 2005. Polymorphism in English Logical
Grammar. In *Lambda Calculus Type Theory and Natural Language*: 47--60.

Not incidentally, we're not aware of any programming language that
makes generalized coordination available, despite its naturalness and
ubiquity in natural language. That is, coordination in programming
languages is always at the sentential level. You might be able to
evaluate `(delete file1) and (delete file2)`, but never `delete (file1
and file2)`.

We'll return to thinking about generalized coordination as we get
deeper into types. There will be an analysis in terms of continuations
that will be particularly satisfying.


# Types in OCaml

OCaml has type inference: the system can often infer what the type of
an expression must be, based on the types of other known expressions.

For instance, if we type

    # let f x = x + 3;;

the system replies with

    val f : int -> int = <fun>

Since `+` is only defined on integers, it has type

    # (+);;
    - : int -> int -> int = <fun>

The parentheses are there to turn off the trick that allows the two
arguments of `+` to surround it in infix (for linguists, SVO) argument
order. That is,

    # 3 + 4 = (+) 3 4;;
    - : bool = true

Wrapping a single expression in parentheses doesn't create a
one-element tuple in OCaml; the parenthesized expression is simply
identical to the expression itself:

    # (3) = 3;;
    - : bool = true

though OCaml, like many systems, refuses to try to prove whether two
functional objects may be identical:

    # (f) = f;;
    Exception: Invalid_argument "equal: functional value".

Oh well.

[Note: There is a limited way you can compare functions, using the
`==` operator instead of the `=` operator. Later, when we discuss
mutation, we'll discuss the difference between these two equality
operations. Scheme has a similar pair, which it names `eq?` and
`equal?`. In Python, these are `is` and `==` respectively. It's
unfortunate that OCaml uses `==` for the operation that Python and
many other languages express with `is`, rather than the one they
express with `==`. In any case, OCaml will accept `(f) == f` even
though it doesn't accept `(f) = f`. However, don't expect it to figure
out in general when two functions are equivalent. (That question is
not Turing computable.)

    # (f) == (fun x -> x + 3);;
    - : bool = false

Here OCaml says (correctly) that the two functions don't stand in the
`==` relation, which basically means they're not represented in the
same chunk of memory. However, as the programmer can see, the
functions are extensionally equivalent. The meaning of `==` is rather
weird.]


Booleans in OCaml, and simple pattern matching
----------------------------------------------

Where we would write `true 1 2` in our pure lambda calculus and expect
it to evaluate to `1`, in OCaml the Boolean values are not functions
(equivalently, they're functions that take zero arguments). Instead,
selection is accomplished as follows:

    # if true then 1 else 2;;
    - : int = 1

The types of the `then` clause and of the `else` clause must be the
same.

The `if` construction can be re-expressed by means of the following
pattern-matching expression:

    match <bool expression> with true -> <expression1> | false -> <expression2>

That is,

    # match true with true -> 1 | false -> 2;;
    - : int = 1

Compare with

    # match 3 with 1 -> 1 | 2 -> 4 | 3 -> 9;;
    - : int = 9

Unit and thunks
---------------

All functions in OCaml take exactly one argument. Even this one:

    # let f x y = x + y;;
    # f 2 3;;
    - : int = 5

Here's how to tell that `f` has been curry'd:

    # f 2;;
    - : int -> int = <fun>

After we've given our `f` one argument, it returns a function that is
still waiting for another argument.

There is a special type in OCaml called `unit`. There is exactly one
object in this type, written `()`. So

    # ();;
    - : unit = ()

Just as you can define functions that take constants for arguments

    # let f 2 = 3;;
    # f 2;;
    - : int = 3

you can also define functions that take the unit as their argument, thus

    # let f () = 3;;
    val f : unit -> int = <fun>

Then the only argument of the correct type that you can possibly apply
`f` to is the unit:

    # f ();;
    - : int = 3

Now why would that be useful?

Let's have some fun: think of `rec` as our `Y` combinator. Then

    # let rec f n = if (0 = n) then 1 else (n * (f (n - 1)));;
    val f : int -> int = <fun>
    # f 5;;
    - : int = 120

We can't define a function that is exactly analogous to our ω. We
could try `let rec omega x = x x;;`. What happens?

[Note: if you want to learn more OCaml, you might come back here someday and try:

    # let id x = x;;
    val id : 'a -> 'a = <fun>
    # let unwrap (`Wrap a) = a;;
    val unwrap : [< `Wrap of 'a ] -> 'a = <fun>
    # let omega ((`Wrap x) as y) = x y;;
    val omega : [< `Wrap of [> `Wrap of 'a ] -> 'b as 'a ] -> 'b = <fun>
    # unwrap (omega (`Wrap id)) == id;;
    - : bool = true
    # unwrap (omega (`Wrap omega));;
    <Infinite loop, need to control-c to interrupt>

But we won't try to explain this now.]


Even if we can't (easily) express omega in OCaml, we can do this:

    # let rec blackhole x = blackhole x;;

By the way, what's the type of this function?

If you then apply this `blackhole` function to an argument,

    # blackhole 3;;

the interpreter goes into an infinite loop, and you have to type
control-c to break the loop.

Oh, one more thing: lambda expressions look like this:

    # (fun x -> x);;
    - : 'a -> 'a = <fun>
    # (fun x -> x) true;;
    - : bool = true

(But `(fun x -> x x)` still won't work.)

You may also see this:

    # (function x -> x);;
    - : 'a -> 'a = <fun>

This works the same as `fun` in simple cases like this, and slightly
differently in more complex cases. If you learn more OCaml, you'll
read about the difference between them.
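
To give a small taste of the difference now: `function` begins a
pattern match on its single argument directly, whereas with `fun` you
write the `match` yourself. The two expressions below define the same
function:

    # (function [] -> 0 | x :: _ -> x);;
    - : int list -> int = <fun>
    # (fun xs -> match xs with [] -> 0 | x :: _ -> x);;
    - : int list -> int = <fun>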

We can try our usual tricks:

    # (fun x -> true) blackhole;;
    - : bool = true

OCaml declined to try to fully reduce the argument before applying the
lambda function. Question: Why is that? Didn't we say that OCaml is a
call-by-value/eager language?

Remember that `blackhole` is a function too, so we can reverse the
order of the arguments:

    # blackhole (fun x -> true);;

Infinite loop.

Now consider the following variations in behavior:

    # let test = blackhole blackhole;;
    <Infinite loop, need to control-c to interrupt>

    # let test () = blackhole blackhole;;
    val test : unit -> 'a = <fun>

    # test;;
    - : unit -> 'a = <fun>

    # test ();;
    <Infinite loop, need to control-c to interrupt>

We can use functions that take arguments of type `unit` to control
execution. In Scheme parlance, functions on the `unit` type are called
*thunks* (which I've always assumed was a blend of "think" and
"chunk").

Question: why do thunks work? We know that `blackhole ()` doesn't
terminate, so why do expressions like

    let f = fun () -> blackhole ()
    in true

terminate?

Bottom type, divergence
-----------------------

Expressions that don't terminate all belong to the **bottom type**.
This is a subtype of every other type. That is, anything of bottom
type belongs to every other type as well. More advanced type systems
have more examples of subtyping: for example, they might make `int` a
subtype of `real`. But the core type system of OCaml doesn't have any
general subtyping relations. (Neither does System F.) Just this one:
expressions of the bottom type belong to every other type. It's as if
every type definition in OCaml, even the built-in ones, had an
implicit extra clause:

    type 'a option = None | Some of 'a;;
    type 'a option = None | Some of 'a | bottom;;

Here are some exercises that may help you understand this better.
Figure out the type of each of the following:

    fun x y -> y;;

    fun x (y:int) -> y;;

    fun x y : int -> y;;

    let rec blackhole x = blackhole x in blackhole;;

    let rec blackhole x = blackhole x in blackhole 1;;

    let rec blackhole x = blackhole x in fun (y:int) -> blackhole y y y;;

    let rec blackhole x = blackhole x in (blackhole 1) + 2;;

    let rec blackhole x = blackhole x in (blackhole 1) || false;;

    let rec blackhole x = blackhole x in 2 :: (blackhole 1);;

By the way, what's the type of this:

    let rec blackhole (x:'a) : 'a = blackhole x in blackhole

Back to thunks: the reason you'd want to control evaluation with
thunks is to manipulate when "effects" happen. In a strongly
normalizing system, like the simply-typed lambda calculus or System F,
there are no "effects." In Scheme and OCaml, on the other hand, we can
write programs that have effects. One sort of effect is printing.
Another sort of effect is mutation, which we'll be looking at soon.
Continuations are yet another sort of effect. None of these are yet on
the table though. The only sort of effect we've got so far is
*divergence* or non-termination. So the only thing thunks are useful
for yet is controlling whether an expression that would diverge if we
tried to fully evaluate it does diverge. As we consider richer
languages, thunks will become more useful.
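
To make that last point concrete, here is one more sketch (the
`choose` helper is our own), with divergence as the only "effect" in
play: by packaging computations as thunks and forcing only the one we
need, we decide whether the diverging computation ever runs.

    let rec blackhole x = blackhole x;;

    (* force whichever thunk the Boolean selects, and only that one *)
    let choose b (t1 : unit -> 'a) (t2 : unit -> 'a) =
      if b then t1 () else t2 ();;

    (* terminates with 42, because the diverging thunk is never forced *)
    choose true (fun () -> 42) (fun () -> blackhole ());;

Had we passed `blackhole ()` directly in argument position instead of
wrapping it in a thunk, OCaml's call-by-value evaluation would have
diverged before any selection could happen.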