From 497a680a341e5dbacc2b312f7029128ed1f166db Mon Sep 17 00:00:00 2001
From: Chris
Date: Sun, 8 Mar 2015 14:32:55 -0400
Subject: [PATCH] edits

---
 topics/_week7_eval_cl.mdwn | 161 ++++++++++++++++++++++++---------------------
 1 file changed, 86 insertions(+), 75 deletions(-)

diff --git a/topics/_week7_eval_cl.mdwn b/topics/_week7_eval_cl.mdwn
index ec88fda7..09a480b6 100644
--- a/topics/_week7_eval_cl.mdwn
+++ b/topics/_week7_eval_cl.mdwn
@@ -5,31 +5,34 @@
 # Reasoning about evaluation order in Combinatory Logic

 We've discussed evaluation order before, primarily in connection with
-the untyped lambda calculus. Whenever a term has more than one redex,
-we have to choose which one to reduce, and this choice can make a
-difference. For instance, in the term `((\x.I)(ωω)`, if we reduce the
-leftmost redex first, the term reduces to the normal form `I` in one
-step. But if we reduce the left most redex instead (namely, `(ωω)`),
-we do not arrive at a normal form, and are in danger of entering an
-infinite loop.
+the untyped lambda calculus. Whenever a term contains more than one
+redex, we have to choose which one to reduce, and this choice can make
+a difference. For instance, in the term `(\x.I)(ωω)`, if we reduce
+the leftmost redex first, the term reduces to the normal form `I` in
+one step. But if we reduce the rightmost redex instead (namely,
+`(ωω)`), we do not arrive at a normal form, and are in danger of
+entering an infinite loop.

 Thanks to the introduction of sum types (disjoint union), we are now
-in a position to gain a deeper understanding by writing a program that
-allows us to experiment with different evaluation order strategies.
+in a position to gain a deeper understanding of evaluation order by
+writing a program that lets us experiment with different evaluation
+order strategies.

 One thing we'll see is that it is all too easy for the evaluation
 order properties of an evaluator to depend on the evaluation order
 properties of the programming language in which the evaluator is
 written. We will seek to write an evaluator in which the order of
 evaluation is insensitive to the evaluator language. The goal is to
-find an order-insensitive way to reason about evaluation order.
+find an order-insensitive way to reason about evaluation order. We
+will not fully succeed in this first attempt, but we will make good
+progress.

-The first evaluator we develop will evaluate terms in Combinatory
-Logic. That will significantly simplify the program, since we won't
-need to worry about variables or substitution. As we develop and
-extend our evaluator in future weeks, we'll switch to lambdas, but for
-now, working with the elegant simplicity of Combinatory Logic will
-make the evaluation order issues easier to grasp.
+The first evaluator we will develop will evaluate terms in Combinatory
+Logic. This significantly simplifies the program, since we won't need
+to worry about variables or substitution. As we develop and extend
+our evaluator in future weeks, we'll switch to lambdas, but for now,
+working with the simplicity of Combinatory Logic will make the
+evaluation order issues easier to grasp.

 A brief review: a term in CL is the combination of three basic
 expressions, `S`, `K`, and `I`, governed by the following reduction
@@ -40,36 +43,39 @@ rules:
     Sabc ~~> ac(bc)

 where `a`, `b`, and `c` stand for an arbitrary term of CL. We've seen
-that how to embed the untyped lambda calculus in CL, so it's no
-surprise that the same evaluation order issues arise in CL.
+how to embed the untyped lambda calculus in CL, so it's no
+surprise that the same evaluation order issues arise in CL. To
+illustrate, we'll use the following definition:

     skomega = SII(SII) ~~> I(SII)(I(SII)) ~~> SII(SII)

-Instead of the lambda term `Ω`, we'll use the CL term skomega, defined
-here, though we'll use the same symbol, `Ω`. If we consider the term
+Instead of the lambda term `Ω`, we'll use the CL term skomega. We'll
+use the same symbol, `Ω`, though: in a lambda term, `Ω` refers to
+omega, but in a CL term, `Ω` refers to skomega as defined here.
+
+If we consider the term

     KIΩ == KI(SII(SII))

 we can choose to reduce the leftmost redex by firing the reduction
 rule for `K`, in which case the term reduces to the normal form `I`
 in one step; or we can choose to reduce the skomega part, by firing the
-reduction rule fo `S`, in which case we do not get a normal form,
-we're headed towards an infinite loop.
+reduction rule for the leftmost `S`, in which case we do not get a
+normal form, and we're headed towards an infinite loop.

 With sum types, we can define terms in CL in OCaml as follows:

     type term = I | S | K | FA of (term * term)
-    let skomega = FA (FA (FA (S,I), I), FA (FA (S,I), I))
-    let test1 = FA (FA (K,I), skomega)
+    let skomega = FA (FA (FA (S, I), I), FA (FA (S, I), I))

 This recursive type definition says that a term in CL is either one
 of the three simple expressions, or else a pair of CL expressions.
 Following Heim and Kratzer, `FA` stands for Functional Application.
 With this type definition, we can encode skomega, as well as other
-terms whose reduction behavior we want to control.
+terms whose reduction behavior we want to try to control.

 Using pattern matching, it is easy to code the one-step reduction
 rules for CL:
@@ -85,11 +91,11 @@ rules for CL:
     # reduce_one_step skomega;;
     - : term = FA (FA (I, FA (FA (S, I), I)), FA (I, FA (FA (S, I), I)))

-The need to explicitly insert the type constructor `FA` obscures
-things a bit, but it's still possible to see how the one-step
-reduction function is just the reduction rules for CL. The
-OCaml interpreter shows us that the function faithfully recognizes
-that `KSI ~~> S`, and `skomega ~~> I(SII)(I(SII))`.
+The type constructor `FA` obscures things a bit, but it's still
+possible to see how the one-step reduction function is just the
+reduction rules for CL. The OCaml interpreter shows us that the
+function faithfully recognizes that `KSI ~~> S`, and `skomega ~~>
+I(SII)(I(SII))`.

 We can now say precisely what it means to be a redex in CL.

@@ -119,18 +125,18 @@ no redexes.
 In order to fully reduce a term, we need to be able to reduce redexes
 that are not at the top level of the term.

-    (KKI)SI ~~> KSI ~~> S
+    (II)K ~~> IK ~~> K

-That is, we want to be able to first evaluate the redex `KKI` that is
-a proper subpart of the larger term to produce a new intermediate term
+That is, we want to be able to first evaluate the redex `II` that is
+a proper subpart of the larger term, to produce a new intermediate term
 that we can then evaluate to the final normal form.

 Because we need to process subparts, and because the result after
 processing a subpart may require further processing, the recursive
-structure of our evaluation function has to be quite subtle. To truly
-understand it, you will need to do some sophisticated thinking about
-how recursion works. We'll show you how to keep track of what is
-going on by constructing an recursive execution trace of inputs and
+structure of our evaluation function has to be somewhat subtle. To
+truly understand it, you will need to do some sophisticated thinking
+about how recursion works. We'll show you how to keep track of what
+is going on by examining a recursive execution trace of inputs and
 outputs.

 We'll develop our full reduction function in stages. Once we have it
@@ -162,8 +168,8 @@ Since the initial input (`I(IK)`) is a redex, the result after the
 one-step reduction is `IK`. We recursively call `reduce1` on this
 input. Since `IK` is itself a redex, the result after one-step
 reduction is `K`. We recursively call `reduce1` on this input. Since
-`K` is not a redex, the recursion bottoms out, and we simply return
-it.
+`K` is not a redex, the recursion bottoms out, and we return
+the result.

 But this function doesn't do enough reduction.

@@ -184,10 +190,10 @@ complex term.
       else t'

 Since we need access to the subterms, we do pattern matching on the
-input term. If the input is simple, we return it. If the input is
-complex, we first process the subexpressions, and only then see if we
-have a redex. To understand how this works, follow the trace
-carefully:
+input term. If the input is simple, we return it (the first three
+match cases). If the input is complex, we first process the
+subexpressions, and only then see if we have a redex. To understand
+how this works, follow the trace carefully:

     # reduce2 (FA(FA(I,I),K));;
     reduce2 <-- FA (FA (I, I), K)
@@ -241,10 +247,10 @@ enough,
     # reduce2 (FA(FA(K,I),skomega));;
     C-c C-cInterrupted.

-instead of performing the leftmost reduction first, and recognizing
+Running the evaluator with this input leads to an infinite loop, and
+the only way to get out is to kill the interpreter with control-c.
+
+Instead of performing the leftmost reduction first, and recognizing
 that this term reduces to the normal form `I`, we get lost endlessly
 trying to reduce skomega.

+## Laziness is hard to overcome
+
 To emphasize that our evaluation order here is at the mercy of the
 evaluation order of OCaml, here is the exact same program translated
 into Haskell. We'll put them side by side to emphasize the exact
 parallel.
@@ -254,35 +265,35 @@ OCaml Haskell ========================================================== ========================================================= type term = I | S | K | FA of (term * term) data Term = I | S | K | FA Term Term deriving (Eq, Show) - -let skomega = FA (FA (FA (S,I), I), FA (FA (S,I), I)) skomega = (FA (FA (FA S I) I) (FA (FA S I) I)) - - reduce_one_step :: Term -> Term -let reduce_one_step (t:term):term = match t with reduce_one_step t = case t of - FA(I,a) -> a FA I a -> a - | FA(FA(K,a),b) -> a FA (FA K a) b -> a - | FA(FA(FA(S,a),b),c) -> FA(FA(a,c),FA(b,c)) FA (FA (FA S a) b) c -> FA (FA a c) (FA b c) - | _ -> t _ -> t - - is_redex :: Term -> Bool -let is_redex (t:term):bool = not (t = reduce_one_step t) is_redex t = not (t == reduce_one_step t) - - reduce2 :: Term -> Term -let rec reduce2 (t:term):term = match t with reduce2 t = case t of - I -> I I -> I - | K -> K K -> K - | S -> S S -> S - | FA (a, b) -> FA a b -> - let t' = FA (reduce2 a, reduce2 b) in let t' = FA (reduce2 a) (reduce2 b) in + +let skomega = FA (FA (FA (S,I), I), FA (FA (S,I), I)) skomega = (FA (FA (FA S I) I) (FA (FA S I) I)) + + reduce_one_step :: Term -> Term +let reduce_one_step (t:term):term = match t with reduce_one_step t = case t of + FA(I,a) -> a FA I a -> a + | FA(FA(K,a),b) -> a FA (FA K a) b -> a + | FA(FA(FA(S,a),b),c) -> FA(FA(a,c),FA(b,c)) FA (FA (FA S a) b) c -> FA (FA a c) (FA b c) + | _ -> t _ -> t + + is_redex :: Term -> Bool +let is_redex (t:term):bool = not (t = reduce_one_step t) is_redex t = not (t == reduce_one_step t) + + reduce2 :: Term -> Term +let rec reduce2 (t:term):term = match t with reduce2 t = case t of + I -> I I -> I + | K -> K K -> K + | S -> S S -> S + | FA (a, b) -> FA a b -> + let t' = FA (reduce2 a, reduce2 b) in let t' = FA (reduce2 a) (reduce2 b) in if (is_redex t') then reduce2 (reduce_one_step t') if (is_redex t') then reduce2 (reduce_one_step t') - else t' else t' + else t' else t' There are some differences in the way types are made explicit, and in the way terms are specified (`FA(a,b)` for Ocaml versus `FA a b` for Haskell). But the two programs are essentially identical. -Yet the Haskell program finds the normal form for KI --> +Yet the Haskell program finds the normal form for `KIΩ`: *Main> reduce2 (FA (FA K I) skomega) I @@ -293,10 +304,9 @@ asked it to construct t' by evaluating `reduce2 a` and `reduce2 b`, it doesn't bother computing `reduce2 b`. Instead, it waits to see if we ever really need to use the result. -So despite intuitions, the program as written does NOT determine -evaluation order behavior. At this stage, we have defined an -evaluation order that still depends on the evaluation order of the -underlying interpreter. +So the program as written does NOT fully determine evaluation order +behavior. At this stage, we have defined an evaluation order that +still depends on the evaluation order of the underlying interpreter. There are two questions we could ask: Can we adjust the OCaml evaluator to exhibit lazy behavior? and Can we adjust the Haskell @@ -305,7 +315,7 @@ is easy and interesting, and we'll give it right away. The answer to the second question is also interesting, but not easy. There are various tricks available in Haskell we could use (such as the `seq` operator), but a fully general, satisifying resolution will have to -wait until we have Continuation Passing Transforms. +wait until we have Continuation Passing Style transforms. 
 The answer to the first question (Can we adjust the OCaml evaluator
 to exhibit lazy behavior?) is quite simple:

@@ -321,10 +331,11 @@ let rec reduce3 (t:term):term = match t with
         else t'

-There is only one small difference: instead of setting `t'` to
-`FA (reduce a, reduce b)`, we have `FA (reduce a, b)`. That is, we
-don't evaluate the right-hand subexpression at all. Ever! The only
-way to get evaluated is to somehow get into functor position.
+There is only one small difference: instead of setting `t'` to
+`FA (reduce2 a, reduce2 b)`, we omit one recursive call and set it
+to `FA (reduce3 a, b)`. That is, we don't evaluate the right-hand
+subexpression at all. Ever! The only way for it to get evaluated
+is to somehow end up in functor position.

     # reduce3 (FA(FA(K,I),skomega));;
     - : term = I
-- 
2.11.0
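To see both halves of the trade-off in one place, here is a minimal, self-contained OCaml sketch. It simply collects the definitions from the notes above (with `reduce3` reconstructed from the prose description of its one small difference from `reduce2`); the two test terms at the end are our own illustrations, not examples taken from the notes.

    (* Sketch: the CL evaluator definitions gathered in one file, so the
       behavior of reduce3 can be checked directly in the OCaml toplevel. *)

    type term = I | S | K | FA of (term * term)

    let reduce_one_step (t:term):term = match t with
        FA(I,a) -> a
      | FA(FA(K,a),b) -> a
      | FA(FA(FA(S,a),b),c) -> FA(FA(a,c),FA(b,c))
      | _ -> t

    let is_redex (t:term):bool = not (t = reduce_one_step t)

    (* reduce3, reconstructed from the prose: like reduce2, except that
       the argument position b is never reduced. *)
    let rec reduce3 (t:term):term = match t with
        I -> I
      | K -> K
      | S -> S
      | FA (a, b) ->
          let t' = FA (reduce3 a, b) in
            if (is_redex t') then reduce3 (reduce_one_step t')
            else t'

    let skomega = FA (FA (FA (S, I), I), FA (FA (S, I), I))

    (* KIΩ now reaches its normal form, because the argument Ω is never
       evaluated: *)
    let lazy_ok = reduce3 (FA (FA (K, I), skomega))      (* ==> I *)

    (* The price of this laziness: a redex that never reaches functor
       position is left unreduced.  K(II) comes back unchanged, even
       though its normal form is KI: *)
    let unreduced = reduce3 (FA (K, FA (I, I)))   (* ==> FA (K, FA (I, I)) *)

The second result is one concrete sense in which this first attempt does not fully succeed: `reduce3` never reduces an argument unless that argument eventually ends up in functor position.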