One strategy for evaluating or interpreting terms avoids the tedious and inefficient _substituting_
we've become familiar with. Instead, when a variable becomes bound to a result value, we'll
keep track of that binding in a separate scorecard, which we'll call an "environment."
This strategy has to introduce a new kind of structure, called a `Closure`, to handle
binding to terms that have locally free variables in them. Let's
see how this works. For exposition, I'll pretend that the code you're working with has
primitive numbers in it, though [[the homework assignment|/exercises/assignment6-7]] doesn't.
(But [[the fuller code|/topics/week7_untyped_evaluator]] it's simplified from does.)
Now consider:

    term                environment
    ----                -----------
    (\w.(\y.y) w) 2     []
    (\y.y) w            [w->2]
    y                   [y->w, w->2]

In the first step, we bind `w` to the term `2`, by saving this association in our environment. In the second step, we bind `y` to the term `w`. In the third step, we would like to replace `y` with whatever its current value is according to our scorecard/environment. But a naive handling of this would replace `y` with `w`; and that's not the right result, because `w` should itself be mapped to `2`. On the other hand, in:

    term                environment
    ----                -----------
    (\x w. x) w 2       []
    (\w. x) 2           [x->w]
    x                   [w->2, x->w]

Now our final term _should_ be replaced with `w`, not with `2`. So evidently it's not so simple as just recursively re-looking up variables in the environment.
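To see the tension concretely, here is a toy sketch in Python (terms are just strings here, and `lookup_recursive` is my own name for the naive strategy, not anything from the assignment code):

```python
# Environments from the two traces above, as plain dicts mapping
# variable names to terms (represented as strings for illustration):
env1 = {"y": "w", "w": "2"}   # first example: w -> 2, then y -> w
env2 = {"x": "w", "w": "2"}   # second example: x -> the free w, then an inner w -> 2

def lookup_recursive(var, env):
    # naively keep re-looking a variable up until it's no longer bound
    while var in env:
        var = env[var]
    return var

print(lookup_recursive("y", env1))  # 2 -- right: y should evaluate to 2
print(lookup_recursive("x", env2))  # 2 -- wrong: x should stay as the free w
```

The name-based environment conflates the two different `w`s in the second example, which is exactly the problem.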
A good strategy for handling these cases would be not to bind `y` to the term `w`, but rather to bind it to _what the term `w` then fully evaluates to_. Then in the first case we'd get:

    term                environment
    ----                -----------
    (\w.(\y.y) w) 2     []
    (\y.y) w            [w->2]
    y                   [**y->2**, w->2]

And at the next step, `y` will evaluate directly to `2`, as desired. In the other example, though, `x` would get bound to `w` while `w` is still free. (In fact the code skeleton we gave you won't permit that to happen; it refuses to perform any applications except when the arguments are "result values", and it doesn't count free variables as such. As a result, other variables can never get bound to free variables.)
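In dict terms, the strategy looks like this (a toy sketch with terms as strings, not the assignment's code, which as just noted refuses the free-variable case):

```python
# Strategy: when binding a variable, store what the argument term *evaluates to*
# in the current environment, not the raw term itself.
def lookup(var, env):
    return env.get(var, var)   # a free variable evaluates to itself

env = {"w": "2"}
env["y"] = lookup("w", env)    # bind y to w's current value, i.e. "2"
assert env["y"] == "2"         # so y now evaluates directly to 2

# Second example: x is bound while w is still free...
env2 = {}
env2["x"] = lookup("w", env2)  # w is free here, so x would get "w" itself
env2["w"] = "2"                # a later, different w binding
assert env2["x"] == "w"        # x still maps to the free w, not to 2
```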
So far, so good.
But now consider the term:

    (\f. x) ((\x y. y x) 0)

Since the outermost head `(\f. x)` is already a `Lambda`, we begin by evaluating the argument `((\x y. y x) 0)`. This results in:

    term                  environment
    ----                  -----------
    (\f. x) (\y. y x)     [x->0]

But that's not right, since it will result in the variable `x` in the head also being associated with the argument `0`. Instead, we want the binding of `x` to `0` to be local to the argument term `(\y. y x)`. For the moment, let's notate that like this:

    term                                 environment
    ----                                 -----------
    (\f. x)_{1} (\y. y x)_{2}            []_{1} [x->0]_{2}

Now, let's suppose the head is more complex, like so:

    (\f. (\x. f x) I) ((\x y. y x) 0)

That might be easier to understand if you transform it to:

    let f = (let x = 0 in \y. y x) in
    let x = I in
    f x

Since the outermost head `(\f. (\x. f x) I)` is already a `Lambda`, we begin by evaluating the argument `((\x y. y x) 0)`. This results in:

    term                                           environment
    ----                                           -----------
    (\f. (\x. f x) I)_{1} (\y. y x)_{2}            []_{1} [x->0]_{2}

Now the argument is no longer itself an `App`, and so we are ready to evaluate the outermost application, binding `f` to the argument term. So we get:

    term                  environment
    ----                  -----------
    ((\x. f x) I)_{1}     [f->(\y. y x)_{2}]_{1} [x->0]_{2}

Next we bind `x` to the argument `I`, getting:

    term         environment
    ----         -----------
    (f x)_{1}    [x->I, f->(\y. y x)_{2}]_{1} [x->0]_{2}

Now we have to apply the value that `f` is bound to to the value that `x` is bound to. But notice there is a free variable `x` in the term that `f` is bound to. How should we interpret that term? Should the evaluation proceed:

    (\y. y **x**_{1}) x_{1}    [x->I, ...]_{1} [x->0]_{2}
    y **x**_{1}                [y->I, x->I, ...]_{1} [x->0]_{2}
    I I
    I

using the value that `x` is bound to in context_{1}, _where the `f` value is applied_? Or should it proceed:

    (\y. y **x**_{2}) x_{1}    [x->I, ...]_{1} [x->0]_{2}
    y **x**_{2}                [y->I, x->I, ...]_{1} [x->0]_{2}
    I 0
    0

using the value that `x` was bound to in context_{2}, _where `f` was bound_?
In fact, when we specified rules for the Lambda Calculus, we committed ourselves to the second of these strategies. But both kinds of binding described here are perfectly coherent. The first is called "dynamic binding" or "dynamic scoping"; the second is called "lexical" or "static" binding/scoping. Neither is intrinsically more correct or appropriate than the other. The first is somewhat easier for implementers of programming languages to handle, so historically it used to predominate. But the second is easier for programmers to reason about.
> In Scheme, variables are bound in the lexical/static way by default, just as in the Lambda Calculus; but there is special vocabulary for dealing with dynamic binding too, which is useful in some situations. As far as I'm aware, Haskell and OCaml only provide the lexical/static binding. *Shell scripts*, on the other hand, only use dynamic binding. If you type this at a shell prompt: `( x=0; foo() { echo $x; }; bar() { local x=1; foo; }; bar )`, it will print `1` not `0`.
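Python is also lexically scoped, so we can replay the example above directly. A minimal sketch (the helper names `make_f` and `call_site` are mine, just to mark the two contexts):

```python
I = lambda v: v

def make_f():
    x = 0
    return lambda y: y(x)   # the body \y. y x; this x is lexically bound to 0

f = make_f()

def call_site():
    x = I        # a different x, in scope where f is *applied*
    return f(I)  # under lexical scoping, the x inside f's body is still 0

print(call_site())  # 0, matching the Lambda Calculus result
```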
In any case, if we're going to have the same semantics as the untyped Lambda Calculus, we're going to have to make sure that when we bind the variable `f` to the value `\y. y x`, that (locally) free variable `x` remains associated with the value `0` that `x` was bound to in the context where `f` was bound, not the possibly different value that `x` may be bound to later, when `f` is applied. One thing we might consider doing is _evaluating the body_ of the abstract that we want to bind `f` to, using the then-current environment to evaluate the variables that the abstract doesn't itself bind. But that's not a good idea. What if the body of that abstract never terminates? The whole program might be OK, because it might never go on to apply `f`. But we'll be stuck trying to evaluate `f`'s body anyway, and will never get to the rest of the program. Another thing we could consider doing is to substitute the `0` in for the variable `x`, and then bind `f` to `\y. y 0`. That would work, but the whole point of this evaluation strategy is to avoid doing those complicated (and inefficient) substitutions. Can you think of a third idea?
What we will do is have our environment associate `f` not just with a `Lambda` term, but also with the environment that was in place _when_ `f` was bound. Then later, if we ever do use `f`, we have that saved environment around to look up any free variables in. More specifically, we'll associate `f` not with the _term_:

    Lambda("y", BODY)

but instead with the `Closure` structure:

    Closure("y", BODY, SAVED_ENV)

where `BODY` is the term that constituted the body of the corresponding `Lambda`, and `SAVED_ENV` is the environment in place when `f` is being bound. (In this simple implementation, we will just save the _whole environment_ then in place. But in a more efficient implementation, we'd sift through it and only keep those bindings for variables that are free in `BODY`. That would take up less space.)
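To make this concrete, here is a minimal sketch of such an evaluator in Python (not the assignment's code; the constructor names mirror the ones above, and `Num` is my stand-in for primitive numbers):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Var:
    name: str

@dataclass
class Num:
    value: int

@dataclass
class Lambda:
    param: str
    body: Any

@dataclass
class App:
    head: Any
    arg: Any

@dataclass
class Closure:
    param: str
    body: Any
    saved_env: dict

def evaluate(term, env):
    if isinstance(term, Num):
        return term
    if isinstance(term, Var):
        # a free variable evaluates to itself
        return env.get(term.name, term)
    if isinstance(term, Lambda):
        # don't evaluate the body now; package it with the current env
        return Closure(term.param, term.body, env)
    if isinstance(term, App):
        head = evaluate(term.head, env)
        arg = evaluate(term.arg, env)
        assert isinstance(head, Closure)
        # extend the closure's SAVED env, not the current one: lexical scoping
        return evaluate(head.body, {**head.saved_env, head.param: arg})

# (\f. (\x. f x) I) ((\x y. y x) 0), with I = \v. v
I = Lambda("v", Var("v"))
term = App(
    Lambda("f", App(Lambda("x", App(Var("f"), Var("x"))), I)),
    App(Lambda("x", Lambda("y", App(Var("y"), Var("x")))), Num(0)))
print(evaluate(term, {}))  # Num(value=0): lexical scoping gives 0, not I
```

Using the current environment instead of `head.saved_env` in the `App` case would give the dynamic-scoping behavior traced earlier.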