X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blobdiff_plain;f=topics%2F_week3_eval_order.mdwn;h=26a4b04d42486d894c17cac4d51a95de038c3e27;hp=5944cb3b1485d81cd80724798201e19c99a37ce8;hb=cbf0c724914abae08e00292bdcbd3dea1f9460cd;hpb=31cebc8050836005ee17dd1d20ae81b2ab9afa3c

diff --git a/topics/_week3_eval_order.mdwn b/topics/_week3_eval_order.mdwn
index 5944cb3b..26a4b04d 100644
--- a/topics/_week3_eval_order.mdwn
+++ b/topics/_week3_eval_order.mdwn
@@ -1,4 +1,3 @@
-
 Evaluation Strategies and Normalization
 =======================================
 
@@ -46,7 +45,14 @@ This question highlights that there are different choices to make about how eval
 
 > where x does not occur free in `M`, to `M`?
 
-With regard to Q3, it should be intuitively clear that `\x. M x` and `M` will behave the same with respect to any arguments they are given. It can also be proven that no other functions can behave differently with respect to them. However, the logical system you get when eta-reduction is added to the proof theory is importantly different from the one where only beta-reduction is permitted.
+With regard to Q3, it should be intuitively clear that `\x. M x` and
+`M` will behave the same with respect to any arguments they are
+given. It can also be proven that no other functions can behave
+differently with respect to them (that is, no function can behave
+differently depending on whether its argument is `M` versus `\x. M
+x`). However, the logical system you get when eta-reduction is added
+to the proof theory is importantly different from the one where only
+beta-reduction is permitted.
 
 If we answer Q2 by permitting reduction inside abstracts, and we also permit eta-reduction, then where none of y1, ..., yn occur free in M, this:
 
@@ -56,7 +62,7 @@ will eta-reduce by n steps to:
 
 \x. M
 
-When we add eta-reduction to our proof system, we end up reconstruing the meaning of `~~>` and `<~~>` and "normal form", all in terms that permit eta-reduction as well.
Sometimes these expressions will be annotated to indicate whether only beta-reduction is allowed (~~>β) or whether both beta- and eta-reduction is allowed (~~>βη).
+When we add eta-reduction to our proof system, we end up reconstruing the meaning of `~~>` and `<~~>` and "normal form", all in terms that permit eta-reduction as well. Sometimes these expressions will be annotated to indicate whether only beta-reduction is allowed (`~~>`β) or whether both beta- and eta-reduction is allowed (`~~>`βη).
 
 The logical system you get when eta-reduction is added to the proof system has the following property:
 
@@ -120,12 +126,30 @@ As we'll see in several weeks, there are techniques for *forcing* call-by-value
 
 Call-by-value and call-by-name have different pros and cons.
 
-One important advantage of normal-order evaluation in particular is that it can compute orthodox values for:
+One important advantage of normal-order evaluation in particular is
+that it can compute orthodox values for (\x. y) (ω
+ω).
 
-

-(\x. y) (ω ω)

-\z. (\x. y) (ω ω)
-

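As an editorial aside (not part of the patch itself), the behavioral point can be simulated in Python, whose names here are purely illustrative: wrapping an argument in a thunk mimics call-by-name, so a function that discards its argument, like `\x. y`, never evaluates the diverging term standing in for ω ω.

```python
def omega():
    # Stands in for the diverging term ω ω: evaluating it never terminates.
    while True:
        pass

def discard_by_name(x_thunk):
    # Stands in for \x. y: the parameter is never forced,
    # so the diverging computation is never run.
    return "y"

# Pass the diverging computation as a thunk rather than evaluating it.
result = discard_by_name(lambda: omega())
print(result)  # prints "y"
```

Calling `discard_by_name(omega())` instead would loop forever: Python itself is call-by-value, and only the thunk wrapper recovers normal-order behavior for this example.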
+To see how normal-order evaluation delivers a normal
+form, consider all conceivable evaluation paths of the following term:
+
+
+(\x.I)((\x.xxx)(\x.xxx))
+ |      \
+ I       (\x.I)((\x.xxx)(\x.xxx)(\x.xxx))
+          /      \
+         I        (\x.I)((\x.xxx)(\x.xxx)(\x.xxx)(\x.xxx))
+                   /       \
+                  I         etc.
+
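As an editorial aside, the tree above can be replayed mechanically. A minimal sketch in Python (the term encoding and helper names are the editor's own, not the course's; substitution is naive, which is harmless here because the argument terms are closed):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    # Naive substitution (no capture avoidance); safe for closed arguments.
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        if term[1] == name:
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step_leftmost(term):
    """One normal-order step (contract the leftmost-outermost redex), or None."""
    if term[0] == 'app':
        _, fun, arg = term
        if fun[0] == 'lam':
            return subst(fun[2], fun[1], arg)
        r = step_leftmost(fun)
        if r is not None:
            return ('app', r, arg)
        r = step_leftmost(arg)
        if r is not None:
            return ('app', fun, r)
    elif term[0] == 'lam':
        r = step_leftmost(term[2])
        if r is not None:
            return ('lam', term[1], r)
    return None

def step_rightmost(term):
    """One step contracting the rightmost redex: work on arguments first."""
    if term[0] == 'app':
        _, fun, arg = term
        r = step_rightmost(arg)
        if r is not None:
            return ('app', fun, r)
        r = step_rightmost(fun)
        if r is not None:
            return ('app', r, arg)
        if fun[0] == 'lam':
            return subst(fun[2], fun[1], arg)
    elif term[0] == 'lam':
        r = step_rightmost(term[2])
        if r is not None:
            return ('lam', term[1], r)
    return None

# (\x.I)((\x.xxx)(\x.xxx)), with I left as a free variable
triple = ('lam', 'x', ('app', ('app', ('var', 'x'), ('var', 'x')), ('var', 'x')))
term = ('app', ('lam', 'x', ('var', 'I')), ('app', triple, triple))

print(step_leftmost(term))   # ('var', 'I') -- done in one step
bigger = step_rightmost(term)
print(bigger)                # (\x.I) applied to a strictly larger argument
```

One leftmost step reaches the normal form `I`; one rightmost step reproduces the right branch of the tree, `(\x.I)((\x.xxx)(\x.xxx)(\x.xxx))`.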
+
+If we start by evaluating the leftmost redex, we're done in one step.
+If we unwisely start by evaluating the rightmost redex, we arrive at a
+term that still has two redexes. Once again, we have a choice: either
+reduce the leftmost redex, in which case we're done, or reduce the
+rightmost redex, creating an even longer term. Clearly, in this
+situation, we want to prioritize reducing the leftmost redex in order
+to arrive at a stable result more quickly.
 
 Indeed, it's provable that if there's *any* reduction path that delivers a value for a given expression, the normal-order evaluation strategy will terminate with that value.
 
@@ -134,6 +158,33 @@ There's a sense in which you *can't get anything more out of* ω ω
 
 A computational system is said to be **confluent**, or to have the **Church-Rosser** or **diamond** property, if, whenever there are multiple possible evaluation paths, those that terminate always terminate in the same value. In such a system, the choice of which sub-expressions to evaluate first will only matter if some of them but not others might lead down a non-terminating path.
 
+The diamond property is named after the shape of diagrams like the following:
+
+
+                     ((\x.x)((\y.y) z))
+                       /      \
+                      /        \
+                     /          \
+                    /            \
+                    ((\y.y) z)   ((\x.x) z)
+                      \             /
+                       \           /
+                        \         /
+                         \       /
+                          \     /
+                           \   /
+                             z
+
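As an editorial aside, both branches of the diamond above can be checked exhaustively. A sketch in Python (term encoding and helper names are the editor's own): enumerate every one-step reduct of every reachable term and collect the normal forms; for this term they all coincide.

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    # Naive substitution (no capture avoidance); safe for this closed example.
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        if term[1] == name:
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def one_step_reducts(term):
    """All terms reachable by contracting any single redex."""
    results = []
    if term[0] == 'app':
        _, fun, arg = term
        if fun[0] == 'lam':
            results.append(subst(fun[2], fun[1], arg))
        results += [('app', f2, arg) for f2 in one_step_reducts(fun)]
        results += [('app', fun, a2) for a2 in one_step_reducts(arg)]
    elif term[0] == 'lam':
        results += [('lam', term[1], b2) for b2 in one_step_reducts(term[2])]
    return results

def normal_forms(term):
    """Every normal form reachable from term, by any reduction strategy."""
    seen, frontier, finals = set(), [term], set()
    while frontier:
        t = frontier.pop()
        if t in seen:
            continue
        seen.add(t)
        reducts = one_step_reducts(t)
        if not reducts:
            finals.add(t)
        else:
            frontier += reducts
    return finals

# ((\x.x)((\y.y) z))
term = ('app', ('lam', 'x', ('var', 'x')),
               ('app', ('lam', 'y', ('var', 'y')), ('var', 'z')))
print(normal_forms(term))  # {('var', 'z')} -- both paths converge on z
```

Whether we contract the outer `\x.x` redex first or the inner `\y.y` redex first, the only reachable normal form is `z`, just as the diagram shows.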
+
+Because the starting term contains more than one redex, we can imagine
+reducing the leftmost redex first (the left branch of the diagram) or
+else the rightmost redex (the right branch of the diagram). But
+because the lambda calculus is confluent, no matter which lambda you
+choose to target for reduction first, you end up at the same place.
+It's like travelling in Manhattan: if you walk uptown first and then
+head east, you end up in the same place as if you walk east and then
+head uptown.
+
 The untyped lambda calculus is confluent. So long as a computation terminates, it always terminates in the same way. It doesn't matter which order the sub-expressions are evaluated in.
 
 A computational system is said to be **strongly normalizing** if every permitted evaluation path is guaranteed to terminate. The untyped lambda calculus is not strongly normalizing: ω ω doesn't terminate by any evaluation path; and (\x. y) (ω ω) terminates only by some evaluation paths but not by others.
 
@@ -174,60 +225,12 @@ But is there any method for doing this in general---for telling, of any given co
 
 Interestingly, Church also set up an association between the lambda calculus and first-order predicate logic, such that, for arbitrary lambda formulas `M` and `N`, some formula would be provable in predicate logic iff `M` and `N` were convertible. So since the right-hand side is not decidable, questions of provability in first-order predicate logic must not be decidable either. This was the first proof of the undecidability of first-order predicate logic.
 
+## Efficiency
 
-##More on evaluation strategies##
-
-Here are notes on [[evaluation order]] that make the choice of which
-lambda to reduce next the selection of a route through a network of
-links.
-
-
-##More on evaluation strategies##
-
-Here (below) are notes on [[evaluation order]] that make the choice of which
-lambda to reduce next the selection of a route through a network of
-links.
-
-
-More on Evaluation Order
-========================
-
-This discussion elaborates on the discussion of evaluation order in the
-class notes from week 2. It makes use of the reduction diagrams drawn
-in class, which makes choice of evaluation strategy a choice of which
-direction to move through a network of reduction links.
-
-
-Some lambda terms can be reduced in different ways:
-
-                     ((\x.x)((\y.y) z))
-                       /      \
-                      /        \
-                     /          \
-                    /            \
-                    ((\y.y) z)   ((\x.x) z)
-                      \             /
-                       \           /
-                        \         /
-                         \       /
-                          \     /
-                           \   /
-                             z
-
-
 
-But because the lambda calculus is confluent (has the diamond
-property, named after the shape of the diagram above), no matter which
-lambda you choose to target for reduction first, you end up at the
-same place. It's like travelling in Manhattan: if you walk uptown
-first and then head east, you end up in the same place as if you walk
-east and then head uptown.
-
-But which lambda you target has implications for efficiency and for
-termination. (Later in the course, it will have implications for
-the order in which side effects occur.)
-
-First, efficiency:
+Which evaluation strategy you adopt has implications not only for
+decidability and termination, but for efficiency. (Later in the
+course, it will have implications for the order in which side effects
+occur.)
                       ((\x.w)((\y.y) z))
@@ -240,11 +243,11 @@ First, efficiency:
                               w
 
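As an editorial aside, the efficiency contrast can be made concrete by counting reduction steps. A sketch in Python (term encoding and helper names are the editor's own), using the discarding function `\x.w` from the diagram above and a duplicating function `\x.xx` assumed from the surrounding discussion, both applied to `(\y.y) z`:

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    # Naive substitution (no capture avoidance); safe for closed arguments.
    tag = term[0]
    if tag == 'var':
        return value if term[1] == name else term
    if tag == 'lam':
        if term[1] == name:
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term, leftmost=True):
    """One reduction step, preferring the leftmost or the rightmost redex."""
    if term[0] == 'app':
        _, fun, arg = term
        if leftmost:
            if fun[0] == 'lam':
                return subst(fun[2], fun[1], arg)
            r = step(fun, leftmost)
            if r is not None:
                return ('app', r, arg)
            r = step(arg, leftmost)
            if r is not None:
                return ('app', fun, r)
        else:
            r = step(arg, leftmost)
            if r is not None:
                return ('app', fun, r)
            r = step(fun, leftmost)
            if r is not None:
                return ('app', r, arg)
            if fun[0] == 'lam':
                return subst(fun[2], fun[1], arg)
    elif term[0] == 'lam':
        r = step(term[2], leftmost)
        if r is not None:
            return ('lam', term[1], r)
    return None

def count_steps(term, leftmost=True):
    n = 0
    while (nxt := step(term, leftmost)) is not None:
        term, n = nxt, n + 1
    return n

identity = ('lam', 'y', ('var', 'y'))
arg = ('app', identity, ('var', 'z'))                      # (\y.y) z
discard = ('lam', 'x', ('var', 'w'))                       # \x.w
copy = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # \x.xx

print(count_steps(('app', discard, arg), leftmost=True))   # 1
print(count_steps(('app', discard, arg), leftmost=False))  # 2
print(count_steps(('app', copy, arg), leftmost=True))      # 3
print(count_steps(('app', copy, arg), leftmost=False))     # 2
```

For the discarding function, leftmost-first wins (1 step vs 2); for the copying function, rightmost-first wins (2 steps vs 3), matching the discussion that follows.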
-If a function discards its argument (as `\x.w` does), it makes sense
-to target that function for reduction, rather than wasting effort
-reducing the argument only to have the result of all that work thrown
-away. So in this situation, the strategy of "always reduce the
-leftmost reducible lambda" wins.
+If a function discards its argument, as `\x.w` does, it makes sense to
+prioritize redexes in which that function serves as the head, rather
+than wasting effort reducing the argument only to have the result of
+all that work thrown away. So in this situation, the strategy of
+"always reduce the leftmost reducible lambda" is more efficient.
 
 But:
 
@@ -276,44 +279,7 @@ In effect, when we copy the unreduced argument, we double the amount
 of work we need to do to deal with that argument.
 
 So when the function copies its argument, the "always reduce the
-rightmost reducible lambda" wins.
+rightmost reducible lambda" is more efficient.
 
 So evaluation strategies have a strong effect on how many reduction
 steps it takes to arrive at a stopping point (e.g., normal form).
-
-Now for termination:
-
-(\x.w)((\x.xxx)(\x.xxx))
- |      \
- |       (\x.w)((\x.xxx)(\x.xxx)(\x.xxx))
- |        /      \
- |       /        (\x.w)((\x.xxx)(\x.xxx)(\x.xxx)(\x.xxx))
- |      /          /       \
- .-----------------         etc.
- |
- w
-
-
-Here we have a function that discards its argument on the left, and a
-non-terminating term on the right. It's even more evil than Omega,
-because it not only never reduces, it generates more and more work
-with each so-called reduction. If we unluckily adopt the "always
-reduce the rightmost reducible lambda" strategy, we'll spend our days
-relentlessly copying out new copies of \x.xxx. But if we even once
-reduce the leftmost reducible lambda, we'll immediately jump to the
-normal form, `w`.
-
-We can roughly associate the "leftmost always" strategy with call by
-name (normal order), and the "rightmost always" strategy with call by
-value (applicative order). There are fine-grained distinctions among
-these terms of art that we will not pause to untangle here.
-
-If a term has a normal form (a reduction such that no further
-reduction is possible), then a leftmost-always reduction will find it.
-(That's why it's called "normal order": it's the evaluation order that
-guarantees finding a normal form if one exists.)
-
-Preview: if the evaluation of a function has side effects, then the
-choice of an evaluation strategy will make a big difference in which
-side effects occur and in which order.
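As an editorial aside, the preview about side effects can be simulated in Python (function and variable names here are illustrative only): under call-by-value the argument's side effects fire before the function body runs, while simulating call-by-name with a thunk delays them until the argument is actually forced.

```python
log = []

def noisy_arg():
    # An "argument" whose evaluation has an observable side effect.
    log.append('argument evaluated')
    return 42

def use_strict(x):
    # Call-by-value: by the time the body runs, the argument already did.
    log.append('body ran')
    return x

def use_by_name(x_thunk):
    # Simulated call-by-name: the argument's effect happens only here,
    # after the body has already started running.
    log.append('body ran')
    return x_thunk()

use_strict(noisy_arg())
strict_order = list(log)    # ['argument evaluated', 'body ran']

log.clear()
use_by_name(noisy_arg)
by_name_order = list(log)   # ['body ran', 'argument evaluated']

print(strict_order != by_name_order)  # True: same effects, different order
```

The same two side effects occur either way; only the evaluation strategy determines their order, which is the point the preview is making.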