# Assignment 7

There is no separate assignment 6. (There was a single big assignment for weeks 5 and 6, and we're going to keep the assignment numbers in sync with the weeks.)

## Evaluation order in Combinatory Logic

1.  Give a term that the lazy evaluators (either [[the Haskell evaluator|code/ski_evaluator.hs]], or the lazy version of [[the OCaml evaluator|code/ski_evaluator.ml]]) do not evaluate all the way to a normal form, that is, whose result still contains a redex somewhere inside of it after it has been reduced.

2.  One of the [[criteria we established for classifying reduction strategies|topics/week3_evaluation_order]] is whether they reduce subterms hidden under lambdas. That is, for a term like `(\x y. x z) (\x. x)`, do we reduce to `\y.(\x.x) z` and stop, or do we reduce further to `\y.z`? Explain what the corresponding question would be for CL. Using the eager version of the OCaml CL evaluator (`reduce_try2`), prove that the evaluator does reduce terms inside of at least some "functional" CL terms. Then provide a modified evaluator that does not perform reductions in those positions. (Just give the modified version of your recursive reduction function.)

## Evaluation in the untyped lambda calculus: substitute-and-repeat

Having sketched the issues with a discussion of Combinatory Logic, we're going to begin to construct an evaluator for a simple language that includes lambda abstraction. In this problem set, we're going to work through the issues twice: once with a function that does substitution in the obvious way, and keeps reducing-and-repeating until there are no more eligible redexes. You'll see it's somewhat complicated. The complications come from the need to worry about variable capture. (Seeing these complications should give you an inkling of why we presented the evaluation order discussion using Combinatory Logic, since we don't need to worry about variables in CL.)

We're not going to ask you to write the entire program yourself.
Instead, we're going to give you almost the complete program, with a few gaps in it that you have to complete. You have to understand enough to add the last pieces to make the program function.

You can find the skeleton code [[here|/code/untyped_evaluator.ml]].

The first place you need to add code is in the `free_in` function. You already wrote such a function [[in a previous homework|assignment5#occurs_free]], so this part should be easy. The intended behavior is for the function to return results like this:

    # free_in "x" (App (Lambda ("x", Var "x"), Var "x"));;
    - : bool = true

The second place you need to add code is in the `reduce_head_once` function. As we explain in the code comments, this is similar to the `reduce_if_redex` function in the combinatory logic interpreters. Three of the clauses near the end of this function are incomplete.

As we explain in the code comments, this interpreter uses an eager/call-by-value reduction strategy. What are its other evaluation order properties? Does it perform beta-reduction underneath lambdas? Does it perform eta-reduction?

## Evaluation in the untyped lambda calculus: environments

The previous interpreter strategy is nice because it corresponds so closely to the reduction rules we give when specifying our lambda calculus. (Including specifying evaluation order, which redexes it's allowed to reduce, and so on.) But keeping track of free and bound variables, and computing fresh variables when needed, is all a pain.

Here's a better strategy. Instead of keeping all of the information about which variables have been bound or are still free implicitly inside of the terms, we'll keep a separate scorecard, which we will call an "environment". This is a familiar strategy for philosophers of language and for linguists, since it amounts to evaluating terms relative to an assignment function. The difference between the substitute-and-repeat approach above and this approach is one huge step towards monads.
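To make the "scorecard" idea concrete, here is a minimal sketch of one way an environment could be represented: an association list from variable names to the terms they're bound to. The type and function names here are illustrative assumptions, not the ones the skeleton code actually uses:

```ocaml
(* A tiny lambda-term datatype, just for illustration *)
type term = Var of string | Lambda of string * term | App of term * term

(* An environment maps variable names to the terms they're bound to *)
type env = (string * term) list

(* Add a binding; it shadows any earlier binding of the same variable *)
let push (x : string) (v : term) (e : env) : env = (x, v) :: e

(* Look up a variable's current binding, if any *)
let lookup (x : string) (e : env) : term option = List.assoc_opt x e

let () =
  let e = push "y" (Var "w") (push "w" (Var "two") []) in
  assert (lookup "y" e = Some (Var "w"));   (* a naive lookup stops at w *)
  assert (lookup "w" e = Some (Var "two"))
```

The examples below show why a naive `lookup` like this one isn't by itself enough.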
The skeleton code for this is at the [[same link as before|/code/untyped_evaluator.ml]]. This part of the exercise is the "VB" part of that code.

You'll see that in the `eval` function, a new kind of value `Closure (ident) (term) (env)` is used. What's that about?

The `Closure`s are for handling the binding of terms that have locally free variables in them. Let's see how this works. For exposition, I'll pretend that the code you're working with has primitive numbers in it, though it doesn't. (But the fuller code it's simplified from does; see below.) Now consider:

    term                             environment
    ----                             -----------
    (\w.(\y.y) w) 2                  []
    (\y.y) w                         [w->2]
    y                                [y->w, w->2]

In the first step, we bind `w` to the term `2`, by saving this association in our environment. In the second step, we bind `y` to the term `w`. In the third step, we would like to replace `y` with whatever its current value is according to our scorecard/environment. But a naive handling of this would replace `y` with `w`; and that's not the right result, because `w` should itself be mapped onto `2`. On the other hand, in:

    term                             environment
    ----                             -----------
    (\x w. x) w 2                    []
    (\w. x) 2                        [x->w]
    x                                [w->2, x->w]

Now our final term _should_ be replaced with `w`, not with `2`. So evidently it's not so simple as just recursively re-looking up variables in the environment.

A good strategy for handling these cases would be not to bind `y` to the term `w`, but rather to bind it to _what the term `w` then fully evaluates to_. Then in the first case we'd get:
    term                             environment
    ----                             -----------
    (\w.(\y.y) w) 2                  []
    (\y.y) w                         [w->2]
    y                                [y->2, w->2]
And at the next step, `y` will evaluate directly to `2`, as desired.

In the other example, though, `x` gets bound to `w` when `w` is already free. (In fact the code skeleton we gave you won't permit that to happen; it refuses to perform any applications except when the arguments are "result values", and it doesn't count free variables as such. As a result, other variables can never get bound to free variables.)

So far, so good. But now consider the term:

    (\f. x) ((\x y. y x) 0)

Since the outermost head `(\f. x)` is already a `Lambda`, we begin by evaluating the argument `((\x y. y x) 0)`. This results in:

    term                             environment
    ----                             -----------
    (\f. x) (\y. y x)                [x->0]

But that's not right, since it will result in the variable `x` in the head also being associated with the argument `0`. Instead, we want the binding of `x` to `0` to be local to the argument term `(\y. y x)`. For the moment, let's notate that like this:
    term                             environment
    ----                             -----------
    (\f. x)1 (\y. y x)2              []1   [x->0]2
Now, let's suppose the head is more complex, like so:

    (\f. (\x. f x) I) ((\x y. y x) 0)

That might be easier to understand if you transform it to:

    let f = (let x = 0 in \y. y x) in
    let x = I in
    f x

Since the outermost head `(\f. (\x. f x) I)` is already a `Lambda`, we begin by evaluating the argument `((\x y. y x) 0)`. This results in:
    term                             environment
    ----                             -----------
    (\f. (\x. f x) I)1 (\y. y x)2    []1 [x->0]2
Now the argument is not itself any longer an `App`, and so we are ready to evaluate the outermost application, binding `f` to the argument term. So we get:
    term                             environment
    ----                             -----------
    ((\x. f x) I)1                   [f->(\y. y x)2]1 [x->0]2
Next we bind `x` to the argument `I`, getting:
    term                             environment
    ----                             -----------
    (f x)1                           [x->I, f->(\y. y x)2]1 [x->0]2
Now we have to apply the value that `f` is bound to to the value that `x` is bound to. But notice there is a free variable `x` in the term that `f` is bound to. How should we interpret that term? Should the evaluation proceed:
    (\y. y x1) x1                    [x->I, ...]1 [x->0]2
    y x1                             [y->I, x->I, ...]1 [x->0]2
    I I
    I
using the value that `x` is bound to in context1, _where the `f` value is applied_? Or should it proceed:
    (\y. y x2) x1                    [x->I, ...]1 [x->0]2
    y x2                             [y->I, x->I, ...]1 [x->0]2
    I 0
    0
using the value that `x` was bound to in context2, _where `f` was bound_?

In fact, when we specified rules for the Lambda Calculus, we committed ourselves to taking the second of these strategies. But both of the kinds of binding described here are perfectly coherent. The first is called "dynamic binding" or "dynamic scoping" and the second is called "lexical" or "static" binding/scoping. Neither is intrinsically more correct or appropriate than the other. The first is somewhat easier for the people who write implementations of programming languages to handle; so historically it used to predominate. But the second is easier for programmers to reason about. In Scheme, variables are bound in the lexical/static way by default, just as in the Lambda Calculus; but there is special vocabulary for dealing with dynamic binding too, which is useful in some situations. (As far as I'm aware, Haskell and OCaml only provide the lexical/static binding.)

In any case, if we're going to have the same semantics as the untyped Lambda Calculus, we're going to have to make sure that when we bind the variable `f` to the value `\y. y x`, that (locally) free variable `x` remains associated with the value `0` that `x` was bound to in the context where `f` is bound, not the possibly different value that `x` may be bound to later, when `f` is applied.

One thing we might consider doing is _evaluating the body_ of the abstract that we want to bind `f` to, using the then-current environment to evaluate the variables that the abstract doesn't itself bind. But that's not a good idea. What if the body of that abstract never terminates? The whole program might be OK, because it might never go on to apply `f`. But we'll be stuck trying to evaluate `f`'s body anyway, and will never get to the rest of the program.

Another thing we could consider doing is to substitute the `0` in for the variable `x`, and then bind `f` to `\y. y 0`.
That would work, but the whole point of this evaluation strategy is to avoid doing those complicated (and inefficient) substitutions.

Can you think of a third idea?

What we will do is have our environment associate `f` not just with a `Lambda` term, but also with the environment that was in place _when_ `f` was bound. Then later, if we ever do use `f`, we have that saved environment around to look up any free variables in. More specifically, we'll associate `f` not with the _term_:

    Lambda ("y", BODY)

but instead with the `Closure` structure:

    Closure ("y", BODY, SAVED_ENV)

where `BODY` is the term that constituted the body of the corresponding `Lambda`, and `SAVED_ENV` is the environment in place when `f` is being bound. (In this simple implementation, we will just save the _whole environment_ then in place. But in a more efficient implementation, we'd sift through it and only keep those bindings for variables that are free in `BODY`. That would take up less space.)

So that's what's going on with those `Closure`s. In the simple code we gave you to work with, we just made these another clause in the `term` datatype. That's really not correct. `Closure`s aren't terms. The syntax for our language doesn't have any constituents that get parsed into `Closure`s. `Closure`s are only created *during the course of evaluating terms*: specifically, when a variable gets bound to an abstract, which may itself contain variables that are locally free (not bound by the abstract itself).

So really we should have two datatypes, one for terms, and another for the *results* (sometimes called "values") that terms can evaluate to. `Closure`s are results, but they aren't terms. `App`s are terms, but not results. If we had primitive numbers or other constants in our language, they might be both terms and results. In the fuller code from which your homework is simplified, this is how the types are in fact defined. But it makes things more complicated.
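To make the two-datatype picture concrete, here is a sketch of how such a split might be declared. The constructor names follow the text, but these exact declarations are an illustrative assumption, not necessarily what the fuller code contains:

```ocaml
(* Terms are what the parser produces *)
type term =
  | Var of string
  | Lambda of string * term
  | App of term * term

(* Results are what terms evaluate to. A Closure packages a Lambda's
   bound variable and body together with the environment that was
   saved at the moment of binding. Note that results and environments
   are mutually recursive, hence the `and`. *)
type result = Closure of string * term * env
and env = (string * result) list

let () =
  (* f bound to \y. y x, with x's binding saved inside the closure *)
  let saved = [("x", Closure ("z", Var "z", []))] in
  match Closure ("y", App (Var "y", Var "x"), saved) with
  | Closure (v, _, e) -> assert (v = "y" && e == saved)
```

With this split, `eval` would take a `term` and an `env` and return a `result`, and the type checker itself would rule out `Closure`s ever appearing in parsed syntax.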
So to keep things simple for the homework, we just pretended that `Closure`s were a new, exotic kind of `term`.

In any case, now you know what's going on with the `Closure`s, and you should be able to complete the missing pieces of the `eval` function in the code skeleton linked above. If you've completed all the (eight) missing parts correctly, then you should be able to compile the code skeleton, and use it as described in the comments at the start of the code.

## Fuller interpreter

We've also prepared a fuller version of the interpreter, with user-friendly input and printing of results. We'll provide a link to that shortly. It will be easiest for you to understand that code if you've completed the gaps in the simplified skeleton linked above.

There's nothing you need to do with this; it's just for you to play with. If you're interested, you can compare the code you completed for the previous two segments of the homework to the (only moderately more complex) code in the `engine.ml` file of this fuller program.

## Monads

Mappables (functors), MapNables (applicative functors), and Monads (composables) are ways of lifting computations from unboxed types into boxed types. Here, a "boxed type" is a type function with one unsaturated hole (which may have several occurrences, as in `(α,α) tree`). We can think of the box type as a function from a type to a type.

Recall that a monad requires a singleton function `⇧ (* mid *) : P -> ◻P`, and a composition operator like `>=> : (P -> ◻Q) -> (Q -> ◻R) -> (P -> ◻R)`.

As we said in the notes, we'll move freely back and forth between using `>=>` and using `<=<` (aka `mcomp`), which is just `>=>` with its arguments flipped. `<=<` has the virtue that it corresponds more closely to the ordinary mathematical symbol `○`. But `>=>` has the virtue that its types flow more naturally from left to right.
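To see the flip concretely, here is one way the two operators might be written out for the list monad in OCaml (a sketch using the definitions from the exercise below; the course notes may spell things differently):

```ocaml
(* Kleisli composition >=> for the list monad *)
let (>=>) (j : 'a -> 'b list) (k : 'b -> 'c list) : 'a -> 'c list =
  fun a -> List.flatten (List.map k (j a))

(* <=< is just >=> with its arguments flipped *)
let (<=<) (k : 'b -> 'c list) (j : 'a -> 'b list) : 'a -> 'c list =
  j >=> k

let () =
  let j a = [a; a + 1] in
  let k b = [b * b; b + b] in
  (* the two spellings compose the same functions in the same order *)
  assert ((j >=> k) 7 = (k <=< j) 7)
```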
Anyway, `mid`/`⇧` and (let's say) `<=<` have to obey the Monad Laws:

    ⇧ <=< k == k
    k <=< ⇧ == k
    j <=< (k <=< l) == (j <=< k) <=< l

For example, the Identity monad has the identity function `I` for `⇧` and ordinary function composition `○` for `<=<`. It is easy to prove that the laws hold for any terms `j`, `k`, and `l` whose types are suitable for `⇧` and `<=<`:

    ⇧ <=< k == I ○ k == \p. I (k p) ~~> \p. k p ~~> k

    k <=< ⇧ == k ○ I == \p. k (I p) ~~> \p. k p ~~> k

    (j <=< k) <=< l == (\p. j (k p)) ○ l == \q. (\p. j (k p)) (l q) ~~> \q. j (k (l q))

    j <=< (k <=< l) == j ○ (k ○ l) == j ○ (\p. k (l p)) == \q. j ((\p. k (l p)) q) ~~> \q. j (k (l q))

1.  On a number of occasions, we've used the Option/Maybe type to make our conceptual world neat and tidy (for instance, think of [[our discussion of Kaplan's Plexy|topics/week6_plexy]]). As we learned in class, there is a natural monad for the Option type. Using the vocabulary of OCaml, let's say that `'a option` is the type of a boxed `'a`, whatever type `'a` is. More specifically,

        type 'a option = None | Some of 'a

    (If you have trouble keeping straight what is the OCaml terminology for this and what is the Haskell terminology, don't worry, we do too.)

    Now the obvious singleton for the Option monad is `\p. Some p`. Give (or reconstruct) either of the composition operators `>=>` or `<=<`. Show that your composition operator obeys the Monad Laws.

2.  Do the same with lists. That is, given an arbitrary type `'a`, let the boxed type be `['a]` or `'a list`, that is, lists of values of type `'a`. The `⇧` is the singleton function `\p. [p]`, and the composition operator is:

        let (>=>) (j : 'a -> 'b list) (k : 'b -> 'c list) : 'a -> 'c list =
          fun a -> List.flatten (List.map k (j a))

    For example:

        let j a = [a; a+1];;
        let k b = [b*b; b+b];;
        (j >=> k) 7 (* which OCaml evaluates to: - : int list = [49; 14; 64; 16] *)

    Show that these obey the Monad Laws.
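The list-monad example above can also be checked mechanically. The following sketch spot-checks the three Monad Laws at a sample point, using the exercise's own definitions (spot-checking is not a proof, of course; the exercise asks for the general argument):

```ocaml
let mid p = [p]                                  (* the singleton ⇧ *)

(* the composition operator from the exercise *)
let (>=>) (j : 'a -> 'b list) (k : 'b -> 'c list) : 'a -> 'c list =
  fun a -> List.flatten (List.map k (j a))

let j a = [a; a + 1]
let k b = [b * b; b + b]
let l c = [c; c * 10]

let () =
  (* Left identity: mid >=> k == k *)
  assert ((mid >=> k) 7 = k 7);
  (* Right identity: k >=> mid == k *)
  assert ((k >=> mid) 7 = k 7);
  (* Associativity: (j >=> k) >=> l == j >=> (k >=> l) *)
  assert (((j >=> k) >=> l) 7 = (j >=> (k >=> l)) 7);
  (* And the worked example from the text: *)
  assert ((j >=> k) 7 = [49; 14; 64; 16])
```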