Syntactic equality, reduction, convertibility
=============================================
Define T to be `(\x. x y) z`. Then T and `(\x. x y) z` are syntactically equal, and we're counting them as syntactically equal to `(\z. z y) z` as well. We write:

T ≡ `(\x. x y) z` ≡ `(\z. z y) z`
This:

<pre><code>T ~~> z y</code></pre>

means that T beta-reduces to `z y`. This:

<pre><code>M <~~> T</code></pre>

means that M and T are beta-convertible, that is, that there's something they both reduce to in zero or more steps.
Combinators and Combinatory Logic
=================================
Lambda expressions that have no free variables are known as **combinators**. Here are some common ones:
**I** is defined to be `\x. x`<p>
**K** is defined to be `\x y. x`. That is, it throws away its second argument. So `K x` is a constant function from any (further) argument to `x`. ("K" for "constant".) Compare K to our definition of **true**.<p>
**get-first** was our function for extracting the first element of an ordered pair: `\fst snd. fst`. Compare this to K and true as well.<p>
**get-second** was our function for extracting the second element of an ordered pair: `\fst snd. snd`. Compare this to our definition of **false**.<p>
**ω** is defined to be: `\x. x x`<p>
It's possible to build a logical system just as powerful as the lambda calculus (and straightforwardly intertranslatable with it) using only combinators, considered as atomic operations. Such a language doesn't have any variables in it: not just no free variables, but no variables at all.
One can do that with a very spare set of basic combinators. These days the standard base is just three combinators: K and I from above, plus one more, S, which behaves the same as the lambda expression `\f g x. f x (g x)`. But it's possible to be even more minimalistic, and get by with only a single combinator. (And there are different single-combinator bases you can choose.)
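Here's a quick sketch of I, K, and S written as ordinary Haskell functions (our own illustration; the types make them less general than the untyped combinators, but the behavior on arguments matches, and it lets you check identities like S K K ≡ I):

    -- I, K, and S as plain Haskell functions (a sketch).
    i :: a -> a
    i x = x

    k :: a -> b -> a
    k x _ = x

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)

    -- I is definable from S and K alone: S K K behaves just like I.
    main :: IO ()
    main = print (s k k (42 :: Int))   -- prints 42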
These systems are Turing complete. In other words: every computation we know how to describe can be represented in a logical system consisting of only a single primitive operation!
Here's more to read about combinatory logic:
In the assignment we asked you to reduce various expressions until it wasn't possible to reduce them any further. For two of those expressions, this was impossible to do. One of them was this:

<pre><code>(\x. x x) (\x. x x)</code></pre>
As we saw above, each half of this formula is the combinator ω; so this can also be written:
<pre><code>ω ω</code></pre>
This compound expression---the self-application of ω---is named Ω. It has the form of an application of an abstract (ω) to an argument (which also happens to be ω), so it's a redex and can be reduced. But when we reduce it, we get <code>ω ω</code> again. So there's no stage at which this expression has been reduced to a point where it can't be reduced any further. In other words, evaluation of this expression "never terminates." (This is standard language, but it has the unfortunate connotation that evaluation is a process or operation that is performed in time. You shouldn't think of it like that. Evaluation of this expression "never terminates" in the way that the decimal expansion of π never terminates. These are static, atemporal facts about their mathematical properties.)
There are infinitely many formulas in the lambda calculus that have this same property. Ω is the syntactically simplest of them. In our meta-theory, it's common to assign such formulas a special value, <code>⊥</code>, pronounced "bottom." When we get to discussing types, you'll see that this value is counted as belonging to every type. To say that a formula has the bottom value means that the computation that formula represents never terminates and so doesn't evaluate to any orthodox value.
From a "Fregean" or "weak Kleene" perspective, if any component of an expression fails to be evaluable (to an orthodox, computed value), then the whole expression should be unevaluable as well.
However, in some such cases it seems *we could* sensibly carry on evaluation. For instance, consider:

<pre><code>(\x. y) (ω ω)</code></pre>

Should we count this as unevaluable, because the reduction of <code>ω ω</code> never terminates? Or should we count it as evaluating to `y`?
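As a preview of how real languages answer this, here's a sketch of ours in Haskell, whose lazy strategy never demands the diverging argument, so the analogous program does deliver a value (`loop` plays the role of <code>ω ω</code>):

    -- loop never reduces to a value; constY is the analogue of \x. y.
    loop :: Int
    loop = loop

    constY :: a -> Char
    constY _ = 'y'

    main :: IO ()
    main = print (constY loop)   -- prints 'y'; a call-by-value language would hang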
This question highlights that there are different choices to make about how evaluation or computation proceeds. It's helpful to think of three questions in this neighborhood:

> Q1. When arguments are complex, such as <code>ω ω</code>, do we reduce them before or after substituting them into the abstracts to which they are arguments?
> Q2. Are we allowed to reduce inside abstracts? That is, can we reduce:

> <pre><code>(\x y. x z) (\x. x)</code></pre>

> only this far:

> <pre><code>\y. (\x. x) z</code></pre>

> or can we continue reducing to:

> <pre><code>\y. z</code></pre>
> Q3. Are we allowed to "eta-reduce"? That is, can we reduce expressions of the form:

> <pre><code>\x. M x</code></pre>

> where x does not occur free in `M`, to `M`? It should be intuitively clear that `\x. M x` and `M` will behave the same with respect to any arguments they are given. It can also be proven that no other functions can behave differently with respect to them. However, the logical system you get when eta-reduction is added to the proof theory is importantly different from the one where only beta-reduction is permitted.
The evaluation strategy which answers Q1 by saying "reduce arguments first" is known as **call-by-value**. The evaluation strategy which answers Q1 by saying "substitute arguments in unreduced" is known as **call-by-name** or **call-by-need** (the difference between these has to do with efficiency, not semantics).
When one has a call-by-value strategy that also permits reduction to continue inside unapplied abstracts, that's known as "applicative order" reduction. When one has a call-by-name strategy that permits reduction inside abstracts, that's known as "normal order" reduction. Consider an expression of the form:

<pre><code>((A B) (C D)) (E F)</code></pre>
Its syntax has the following tree:

<pre><code>
          ((A B) (C D)) (E F)
             /          \
   ((A B) (C D))       (E F)
      /     \           / \
   (A B)   (C D)       E   F
    / \     / \
   A   B   C   D
</code></pre>
Applicative order evaluation does what's called a "post-order traversal" of the tree: that is, we always go left and down whenever we can, and we process a node only after processing all its children. So `(C D)` gets processed before `((A B) (C D))` does, and `(E F)` gets processed before `((A B) (C D)) (E F)` does.
Normal order evaluation, on the other hand, will substitute the expression `(C D)` into the abstract that `(A B)` evaluates to, without first trying to compute what `(C D)` evaluates to. That computation may be done later.
When we have an expression like:

<pre><code>(\x. y) (C D)</code></pre>

the computation of `(C D)` won't ever have to be performed on a normal-order or call-by-name evaluation strategy. Instead, the whole expression reduces directly to `y`. This is so even if `(C D)` is the non-evaluable <code>(ω ω)</code>!
Most programming languages, including Scheme and OCaml, use the call-by-value evaluation strategy. (But they don't permit evaluation to continue inside an unapplied function.) There are techniques for making them model the other sort of behavior.

Some functional programming languages, such as Haskell, use the call-by-name (more precisely, call-by-need) evaluation strategy. Each strategy has its pros and cons.
The lambda calculus can be evaluated either way. You have to decide what the rules shall be.
One important advantage of the normal-order evaluation strategy is that it can compute an orthodox value for:

<pre><code>(\x. y) (ω ω)</code></pre>
Indeed, it's provable that if there's any reduction path that delivers a value for an expression, the normal-order evaluation strategy will terminate with that value.
An expression is said to be in **normal form** when it's not possible to perform any more reductions, even inside abstracts. There's a sense in which you can't get anything more out of <code>ω ω</code>, but it's not in normal form because it still has the form of a redex.
A computational system is said to be **confluent**, or to have the **Church-Rosser** or **diamond** property, if, whenever there are multiple possible evaluation paths, those that terminate always terminate in the same value. In such a system, the choice of which sub-expressions to evaluate first will only matter if some of them but not others might lead down a non-terminating path.

The untyped lambda calculus is confluent. So long as a computation terminates, it always terminates in the same way. It doesn't matter which order the sub-expressions are evaluated in.

A computational system is said to be **strongly normalizing** if every permitted evaluation path is guaranteed to terminate. The untyped lambda calculus is not strongly normalizing: <code>ω ω</code> doesn't terminate by any evaluation path, and <code>(\x. y) (ω ω)</code> terminates by some evaluation paths but not by others.
But the untyped lambda calculus enjoys some compensation for this weakness. It's Turing complete! It can represent any computation we know how to describe. (That's the cash value of being Turing complete, not the rigorous definition. We don't know how to rigorously define "any computation we know how to describe." There is however a rigorous definition for being Turing complete.) And in fact, it's been proven that you can't have both. If a language is Turing complete, it cannot be strongly normalizing.
A computational system is said to be **weakly normalizing** if there's always guaranteed to be *at least one* evaluation path that terminates. The untyped lambda calculus is not weakly normalizing either, as we've seen.
The *typed* lambda calculus that linguists traditionally work with, on the other hand, is strongly normalizing. (And as a result, is not Turing complete.) Its type system lets it draw distinctions the untyped lambda calculus can't, but it is also unable to represent some (terminating!) computations that the untyped lambda calculus can represent.
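A quick illustration of the type discipline at work (our example, using Haskell's type checker rather than the simply typed calculus linguists use, but the phenomenon is the same): self-application is ill-typed, which is exactly what excludes ω and Ω.

    -- GHC rejects the definition below with an occurs-check error
    -- (roughly: "cannot construct the infinite type: a ~ a -> b"),
    -- so ω, and hence ω ω, simply can't be written at any type.
    --
    --   selfApply x = x x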
Other more-powerful type systems we'll look at in the course will also fail to be Turing complete, though they will turn out to be pretty powerful.
(( combinatory logic ))
We'll write that like this:

<pre><code>((\x (y x)) z) ~~> (y z)</code></pre>
Different authors use different notations. Some authors use the term "contraction" for a single reduction step, and reserve the term "reduction" for the reflexive transitive closure of that, that is, for zero or more reduction steps. Informally, it seems easiest to us to say "reduction" for one or more reduction steps. So when we write:

<pre><code>M ~~> N</code></pre>

we'll mean that you can get from M to N by one or more reduction steps. Hankin uses the symbol <code><big><big>→</big></big></code> for one-step contraction, and the symbol <code><big><big>↠</big></big></code> for zero-or-more step reduction. Hindley and Seldin use <code><big><big><big>⊳</big></big></big><sub>1</sub></code> and <code><big><big><big>⊳</big></big></big></code>.
When M and N are such that there's some P that M reduces to by zero or more steps, and that N also reduces to by zero or more steps, then we say that M and N are **beta-convertible**. We'll write that like this:

<pre><code>M <~~> N</code></pre>

This is what plays the role of equality in the lambda calculus. Hankin uses the symbol `=` for this. So too do Hindley and Seldin. Personally, I keep confusing that with the relation to be described next, so let's use this notation instead. Note that `M <~~> N` doesn't mean that each of `M` and `N` is reducible to the other; that only holds when `M` and `N` are the same expression. (Or, with our convention of only saying "reducible" for one or more reduction steps, it never holds.)
In the metatheory, it's also sometimes useful to talk about formulas that are syntactically equivalent *before any reductions take place*. Hankin uses the symbol <code>≡</code> for this. So too do Hindley and Seldin. We'll use that too, and will avoid using `=` when discussing the metatheory. Instead we'll use `<~~>` as we said above. When we want to introduce a stipulative definition, we'll write it out longhand, as in:

> T is defined to be `(\x. x y) z`
combinators as lambda expressions

tuples = possibly type-heterogeneous ordered collections; different length -> different type

lists = type-homogeneous ordered collections; lists of different lengths >= 0 can be of the same type
1. Substitution; using alpha-conversion and other strategies
1. Conversion versus reduction
1. Different evaluation strategies (call by name, call by value, etc.)
1. Strongly normalizing vs weakly normalizing vs non-normalizing; Church-Rosser Theorem(s)
1. Lambda calculus compared to combinatory logic<p>
1. Church-like encodings of numbers, defining addition and multiplication
1. Defining the predecessor function; alternate encodings for the numbers
1. Homogeneous sequences or "lists"; how they differ from pairs, triples, etc.
1. Representing lists as pairs
1. Representing lists as folds
1. Typical higher-order functions: map, filter, fold<p>
1. Recursion exploiting the fold-like representation of numbers and lists ([[!wikipedia Deforestation (computer science)]], [[!wikipedia Zipper (data structure)]])
1. General recursion using omega
1. Eta reduction and "extensionality" ??
Undecidability of equivalence
=============================
There is no algorithm which takes as input two lambda expressions and outputs TRUE or FALSE depending on whether or not the two expressions are equivalent. This was historically the first problem for which undecidability could be proven. As is common for a proof of undecidability, the proof shows that no computable function can decide the equivalence. Church's thesis is then invoked to show that no algorithm can do so.

Church's proof first reduces the problem to determining whether a given lambda expression has a normal form, that is, an equivalent expression which cannot be reduced any further. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene, and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e which closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
1. The Y combinator(s); more on evaluation strategies<p>
1. Introducing the notion of a "continuation", a technique we'll already have used a few times by then
syntactic equality `===`
contract/reduce/`~~>`
strongly normalizable
"normal order" reduction vs "applicative order"
eval strategy choices
Reduction strategies
====================

(For more details on this topic, see the Wikipedia entry on evaluation strategy, from which the following is excerpted.)

Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. The distinction between reduction strategies relates to the distinction in functional programming languages between eager evaluation and lazy evaluation.
**Full beta reductions.** Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy: with regard to reducibility, "all bets are off".

**Applicative order.** The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself. Applicative order always attempts to apply functions to normal forms, even when this is not possible. Most programming languages (including Lisp, ML, and imperative languages like C and Java) are described as "strict", meaning that functions applied to non-normalising arguments are non-normalising. This is done essentially using applicative order, call-by-value reduction (see below), but is usually called "eager evaluation".

**Normal order.** The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.

**Call by name.** As normal order, but no reductions are performed inside abstractions. For example, λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.

**Call by value.** Only the outermost redexes are reduced: a redex is reduced only when its right-hand side has reduced to a value (a variable or lambda abstraction).

**Call by need.** As normal order, but function applications that would duplicate terms instead name the argument, which is then reduced only "when it is needed". Called in practical contexts "lazy evaluation". In implementations this "name" takes the form of a pointer, with the redex represented by a thunk.
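To make the contrast concrete, here is a minimal sketch of one-step normal-order and applicative-order reducers for the untyped lambda calculus, written in Haskell. It's our own illustration (all names like `Term`, `stepNormal`, and `stepApplicative` are ours), with `subst` doing capture-avoiding substitution by renaming:

    import Data.List (delete, union)

    -- Terms of the untyped lambda calculus, with variables as strings.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- The free variables of a term.
    free :: Term -> [String]
    free (Var x)   = [x]
    free (Lam x b) = delete x (free b)
    free (App f a) = free f `union` free a

    -- A variant of y that does not occur in `used`.
    freshFrom :: String -> [String] -> String
    freshFrom y used =
      head [v | v <- [y ++ show n | n <- [(1 :: Int) ..]], v `notElem` used]

    -- Capture-avoiding substitution: subst x s t computes t[x := s].
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (App f a) = App (subst x s f) (subst x s a)
    subst x s (Lam y b)
      | y == x          = Lam y b                  -- x is shadowed; stop
      | y `elem` free s = subst x s (Lam y' (subst y (Var y') b))
      | otherwise       = Lam y (subst x s b)
      where y' = freshFrom y (free s `union` free b)

    -- One step of normal-order (leftmost outermost) reduction, if any.
    stepNormal :: Term -> Maybe Term
    stepNormal (App (Lam x b) a) = Just (subst x a b)
    stepNormal (App f a) = case stepNormal f of
      Just f' -> Just (App f' a)
      Nothing -> fmap (App f) (stepNormal a)
    stepNormal (Lam x b) = fmap (Lam x) (stepNormal b)
    stepNormal (Var _)   = Nothing

    -- One step of applicative-order (leftmost innermost) reduction, if any.
    stepApplicative :: Term -> Maybe Term
    stepApplicative (Var _)   = Nothing
    stepApplicative (Lam x b) = fmap (Lam x) (stepApplicative b)
    stepApplicative (App f a) = case stepApplicative f of
      Just f' -> Just (App f' a)
      Nothing -> case stepApplicative a of
        Just a' -> Just (App f a')
        Nothing -> case f of
          Lam x b -> Just (subst x a b)
          _       -> Nothing

    -- Keep stepping until no redex remains.
    normalize :: (Term -> Maybe Term) -> Term -> Term
    normalize step t = maybe t (normalize step) (step t)

    -- (\x. y) (ω ω): normal order halts; applicative order would not.
    omega, example :: Term
    omega   = Lam "x" (App (Var "x") (Var "x"))
    example = App (Lam "x" (Var "y")) (App omega omega)

    main :: IO ()
    main = print (normalize stepNormal example)   -- Var "y"

`normalize stepNormal example` returns `Var "y"`, while `normalize stepApplicative example` would loop forever on the argument, just as the next paragraph describes.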
Applicative order is not a normalising strategy. The usual counterexample is as follows: define Ω = ωω where ω = λx.xx. This entire expression contains only one redex, namely the whole expression; its reduct is again Ω. Since this is the only available reduction, Ω has no normal form (under any evaluation strategy). Using applicative order, the expression KIΩ = (λx y. x) (λx. x) Ω never reaches a normal form: before the application that would discard Ω can be performed, Ω itself must be reduced to a normal form, and since Ω has none, applicative order fails to find a normal form for KIΩ.
In contrast, normal order is so called because it always finds a normalising reduction if one exists. In the above example, KIΩ reduces under normal order to I, a normal form. A drawback is that redexes in the arguments may be copied, resulting in duplicated computation (for example, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y) using this strategy; now there are two redexes, so full evaluation needs two more steps, but if the argument had been reduced first, there would now be none).
The positive tradeoff of using applicative order is that it does not cause unnecessary computation if all arguments are used, because it never substitutes arguments containing redexes and hence never needs to copy them (which would duplicate work). In the above example, in applicative order (λx.xx) ((λx.x)y) reduces first to (λx.xx)y and then to the normal form yy, taking two steps instead of three.
Most purely functional programming languages (notably Miranda and its descendants, including Haskell), and the proof languages of theorem provers, use lazy evaluation, which is essentially the same as call by need. This is like normal order reduction, but call by need manages to avoid the duplication of work inherent in normal order reduction using sharing. In the example given above, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y), which has two redexes, but in call by need they are represented using the same object rather than copied, so when one is reduced the other is too.
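That sharing is easy to observe in GHC (our sketch, using `Debug.Trace.trace` to make the single reduction visible):

    import Debug.Trace (trace)

    main :: IO ()
    main = do
      -- Under call-by-need the let-bound argument is reduced at most once,
      -- even though it is used twice below; the message appears one time.
      let arg = trace "reducing the argument" (2 + 2 :: Int)
      print (arg + arg)   -- prints the message once, then 8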
Strict evaluation
=================

In strict evaluation, the arguments to a function are always evaluated completely before the function is applied.

Under Church encoding, eager evaluation of operators maps to strict evaluation of functions; for this reason, strict evaluation is sometimes called "eager". Most existing programming languages use strict evaluation for functions.
Applicative order
-----------------

Applicative order (or leftmost innermost) evaluation refers to an evaluation strategy in which the arguments of a function are evaluated from left to right in a post-order traversal of reducible expressions (redexes). Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.

Call by value
-------------
Call-by-value evaluation (also referred to as pass-by-value) is the most common evaluation strategy, used in languages as different as C and Scheme. In call-by-value, the argument expression is evaluated, and the resulting value is bound to the corresponding variable in the function (frequently by copying the value into a new memory region). If the function or procedure is able to assign values to its parameters, only its local copy is assigned; that is, anything passed into a function call is unchanged in the caller's scope when the function returns.
Call-by-value is not a single evaluation strategy, but rather the family of evaluation strategies in which a function's argument is evaluated before being passed to the function. While many programming languages (such as Eiffel and Java) that use call-by-value evaluate function arguments left-to-right, some evaluate functions and their arguments right-to-left, and others (such as Scheme, OCaml and C) leave the order unspecified (though they generally require implementations to be consistent).
In some cases, the term "call-by-value" is problematic, as the value which is passed is not the value of the variable as understood by the ordinary meaning of value, but an implementation-specific reference to the value. The description "call-by-value where the value is a reference" is common (but should not be understood as being call-by-reference); another term is call-by-sharing. Thus the behaviour of call-by-value Java or Visual Basic and call-by-value C or Pascal are significantly different: in C or Pascal, calling a function with a large structure as an argument will cause the entire structure to be copied, potentially causing serious performance degradation, and mutations to the structure are invisible to the caller. However, in Java or Visual Basic only the reference to the structure is copied, which is fast, and mutations to the structure are visible to the caller.

Call by reference
-----------------
In call-by-reference evaluation (also referred to as pass-by-reference), a function receives an implicit reference to the argument, rather than a copy of its value. This typically means that the function can modify the argument, something that will be seen by its caller. Call-by-reference therefore has the advantage of greater time- and space-efficiency (since arguments do not need to be copied), as well as the potential for greater communication between a function and its caller (since the function can return information using its reference arguments), but the disadvantage that a function must often take special steps to "protect" values it wishes to pass to other functions.
Many languages support call-by-reference in some form or another, but comparatively few use it as a default; Perl and Visual Basic are two that do, though Visual Basic also offers a special syntax for call-by-value parameters. A few languages, such as C++ and REALbasic, default to call-by-value, but offer special syntax for call-by-reference parameters. C++ additionally offers call-by-reference-to-const. In purely functional languages there is typically no semantic difference between the two strategies (since their data structures are immutable, so there is no possibility for a function to modify any of its arguments), so they are typically described as call-by-value even though implementations frequently use call-by-reference internally for the efficiency benefits.

Even among languages that don't exactly support call-by-reference, many, including C and ML, support explicit references (objects that refer to other objects), such as pointers (objects representing the memory addresses of other objects), and these can be used to effect or simulate call-by-reference (but with the complication that a function's caller must explicitly generate the reference to supply as an argument).
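In Haskell, which we've been using for examples, the explicit reference is an `IORef` (a sketch of ours; the function name `bump` is hypothetical):

    import Data.IORef

    -- Passing a mutable cell explicitly simulates a by-reference parameter:
    -- the callee's mutation is visible to the caller afterwards.
    bump :: IORef Int -> IO ()
    bump cell = modifyIORef cell (+ 1)

    main :: IO ()
    main = do
      cell <- newIORef 0
      bump cell
      readIORef cell >>= print   -- prints 1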
Call by sharing
---------------

Call-by-sharing (also known as "call by object" or "call by object-sharing") is an evaluation strategy first named by Barbara Liskov et al. for the language CLU in 1974[1]. It is used by languages such as Python[2], Iota, Java (for object references)[3], Ruby, Scheme, OCaml, AppleScript, and many other languages. However, the term "call by sharing" is not in common use; the terminology is inconsistent across different sources. For example, in the Java community, they say that Java is pass-by-value, whereas in the Ruby community, they say that Ruby is pass-by-reference, even though the two languages exhibit the same semantics. Call-by-sharing implies that values in the language are based on objects rather than primitive types.
The semantics of call-by-sharing differ from call-by-reference in that assignments to function arguments within the function aren't visible to the caller (unlike by-reference semantics). However, since the function has access to the same object as the caller (no copy is made), mutations to those objects within the function are visible to the caller, which differs from call-by-value semantics.
Although this term has widespread usage in the Python community, identical semantics in other languages such as Java and Visual Basic are often described as call-by-value, where the value is implied to be a reference to the object.

Call by copy-restore
--------------------
Call-by-copy-restore, call-by-value-result, or call-by-value-return (as termed in the Fortran community) is a special case of call-by-reference where the provided reference is unique to the caller. If a parameter to a function call is a reference that might be accessible by another thread of execution, its contents are copied to a new reference that is not; when the function call returns, the updated contents of this new reference are copied back to the original reference ("restored").
The semantics of call-by-copy-restore also differ from those of call-by-reference where two or more function arguments alias one another; that is, point to the same variable in the caller's environment. Under call-by-reference, writing to one will affect the other; call-by-copy-restore avoids this by giving the function distinct copies, but leaves the result in the caller's environment undefined (depending on which of the aliased arguments is copied back first).
When the reference is passed to the callee uninitialized, this evaluation strategy may be called call-by-result.

Partial evaluation
------------------
In partial evaluation, evaluation may continue into the body of a function that has not been applied. Any sub-expressions that do not contain unbound variables are evaluated, and function applications whose argument values are known may be reduced. In the presence of side-effects, complete partial evaluation may produce unintended results; for this reason, systems that support partial evaluation tend to do so only for "pure" expressions (expressions without side-effects) within functions.

Non-strict evaluation
=====================
In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body.

Under Church encoding, lazy evaluation of operators maps to non-strict evaluation of functions; for this reason, non-strict evaluation is often referred to as "lazy". Boolean expressions in many languages use lazy evaluation; in this context it is often called short-circuiting. Conditional expressions also usually use lazy evaluation, albeit for different reasons.
Normal order
------------

Normal-order (or leftmost outermost) evaluation is the evaluation strategy where the outermost redex is always reduced, applying functions before evaluating function arguments. It differs from call-by-name in that call-by-name does not evaluate inside the body of an unapplied function.

Call by name
------------
In call-by-name evaluation, the arguments to functions are not evaluated at all; rather, function arguments are substituted directly into the function body using capture-avoiding substitution. If the argument is not used in the evaluation of the function, it is never evaluated; if the argument is used several times, it is re-evaluated each time. (See Jensen's Device.)
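Call-by-name can be simulated in Haskell (natively call-by-need) by passing the argument as an explicit thunk. In this sketch of ours, the trace message should appear once per use, assuming the compiler doesn't share the two calls:

    import Debug.Trace (trace)

    -- Passing the argument as a thunk re-runs it at every use,
    -- mimicking call-by-name rather than Haskell's default call-by-need.
    byName :: (() -> Int) -> Int
    byName arg = arg () + arg ()   -- the thunk is forced twice

    main :: IO ()
    main = print (byName (\() -> trace "evaluating argument" 21))
    -- expected (without optimizations that merge the calls):
    -- "evaluating argument" printed twice, then 42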
Call-by-name evaluation can be preferable over call-by-value evaluation because call-by-name evaluation always yields a value when a value exists, whereas call-by-value may not terminate if the function's argument is a non-terminating computation that is not needed to evaluate the function. Opponents of call-by-name claim that it is significantly slower when the function argument is used, and that in practice this is almost always the case, as a mechanism such as a thunk is needed.

Call by need
------------
Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. In a "pure" (effect-free) setting, this produces the same results as call-by-name; when the function argument is used two or more times, call-by-need is almost always faster.
Because evaluation of expressions may happen arbitrarily far into a computation, languages using call-by-need generally do not support computational effects (such as mutation) except through the use of monads and uniqueness types. This eliminates any unexpected behavior from variables whose values change prior to their delayed evaluation.

This is a kind of lazy evaluation.

Haskell is the most well-known language that uses call-by-need evaluation. R also uses a form of call-by-need.

Call by macro expansion
-----------------------
Call-by-macro-expansion is similar to call-by-name, but uses textual substitution rather than capture-avoiding substitution. Used incautiously, macro substitution may result in variable capture and lead to undesired behavior. Hygienic macros avoid this problem by checking for and replacing shadowed variables that are not parameters.
Eager evaluation
================

Eager evaluation or greedy evaluation is the evaluation strategy in most traditional programming languages.
In eager evaluation, an expression is evaluated as soon as it gets bound to a variable. The term is typically used to contrast lazy evaluation, where expressions are only evaluated when evaluating a dependent expression. Eager evaluation is almost exclusively used in imperative programming languages, where the order of execution is implicitly defined by the source code organization.
One advantage of eager evaluation is that it eliminates the need to track and schedule the evaluation of expressions. It also allows the programmer to dictate the order of execution, making it easier to determine when sub-expressions (including functions) within the expression will be evaluated, as these sub-expressions may have side-effects that will affect the evaluation of other expressions.
A disadvantage of eager evaluation is that it forces the evaluation of expressions that may not be necessary at run time, or it may delay the evaluation of expressions that have a more immediate need. It also forces the programmer to organize the source code for optimal order of execution.
Note that many modern compilers are capable of scheduling execution to better optimize processor resources and can often prevent unnecessary expressions from being executed at all. Therefore, the notions of purely eager or purely lazy evaluation may not be applicable in practice.
Lazy evaluation
===============

In computer programming, lazy evaluation is the technique of delaying a computation until the result is required.
The benefits of lazy evaluation include: performance increases due to avoiding unnecessary calculations, avoiding error conditions in the evaluation of compound expressions, the capability of constructing potentially infinite data structures, and the capability of defining control structures as abstractions instead of as primitives.
Languages that use lazy actions can be further subdivided into those that use a call-by-name evaluation strategy and those that use call-by-need. Most realistic lazy languages, such as Haskell, use call-by-need for performance reasons, but theoretical presentations of lazy evaluation often use call-by-name for simplicity.
The opposite of lazy actions is eager evaluation, sometimes known as strict evaluation. Eager evaluation is the evaluation behavior used in most programming languages.
Lazy evaluation refers to how expressions are evaluated when they are passed as arguments to functions, and entails the following three points:[1]

1. The expression is only evaluated if the result is required by the calling function, called delayed evaluation.[2]
2. The expression is only evaluated to the extent that is required by the calling function, called short-circuit evaluation.
3. The expression is never evaluated more than once, called applicative-order evaluation.[3]
Delayed evaluation
------------------
Delayed evaluation is used particularly in functional languages. When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but only when the evaluator is forced to produce the expression's value. That is, a statement such as x := expression; (i.e. the assignment of the result of an expression to a variable) clearly calls for the expression to be evaluated and the result placed in x, but what actually is in x is irrelevant until there is a need for its value, via a reference to x in some later expression whose evaluation could itself be deferred. Eventually the rapidly growing tree of dependencies is pruned in order to produce some symbol, rather than another, for the outside world to see.
Some programming languages delay evaluation of expressions by default, and some others provide functions or special syntax to delay evaluation. In Miranda and Haskell, evaluation of function arguments is delayed by default. In many other languages, evaluation can be delayed by explicitly suspending the computation using special syntax (as with Scheme's "delay" and "force" and OCaml's "lazy" and "Lazy.force") or, more generally, by wrapping the expression in a thunk. The object representing such an explicitly delayed evaluation is called a future or promise. Perl 6 uses lazy evaluation of lists, so one can assign infinite lists to variables and use them as arguments to functions, but unlike Haskell and Miranda, Perl 6 doesn't use lazy evaluation of arithmetic operators and functions by default.
Delayed evaluation has the advantage of being able to create calculable infinite lists without infinite loops or size limits interfering in computation. For example, one could create a function that creates an infinite list (often called a stream) of Fibonacci numbers. The calculation of the n-th Fibonacci number would be merely the extraction of that element from the infinite list, forcing the evaluation of only the first n members of the list.
For example, in Haskell, the list of all Fibonacci numbers can be written as:

<pre><code>fibs = 0 : 1 : zipWith (+) fibs (tail fibs)</code></pre>
In Haskell syntax, ":" prepends an element to a list, tail returns a list without its first element, and zipWith uses a specified function (in this case addition) to combine corresponding elements of two lists to produce a third.
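So forcing just a finite prefix of the infinite list terminates; for example (a usage sketch of ours):

    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    main :: IO ()
    main = print (take 10 fibs)   -- prints [0,1,1,2,3,5,8,13,21,34]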
Provided the programmer is careful, only the values that are required to produce a particular result are evaluated. However, certain calculations may result in the program attempting to evaluate an infinite number of elements; for example, requesting the length of the list or trying to sum the elements of the list with a fold operation would result in the program either failing to terminate or running out of memory.

Control structures
------------------
Even in most eager languages, if statements evaluate in a lazy fashion:

<pre><code>if a then b else c</code></pre>

evaluates (a); then if and only if (a) evaluates to true does it evaluate (b), otherwise it evaluates (c). That is, either (b) or (c) will not be evaluated. Conversely, in an eager language the expected behavior is that

<pre><code>define f(x, y) = 2 * x
set k = f(e, 5)</code></pre>

will still evaluate (e) and (f) when computing (k). However, user-defined control structures depend on exact syntax, so for example

<pre><code>define g(a, b, c) = if a then b else c
l = g(h, i, j)</code></pre>

(i) and (j) would both be evaluated in an eager language. While in

<pre><code>l' = if h then i else j</code></pre>

(i) or (j) would be evaluated, but never both.
Lazy evaluation allows control structures to be defined normally, and not as primitives or compile-time techniques. If (i) or (j) have side effects or introduce run time errors, the subtle differences between (l) and (l') can be complex. As most programming languages are Turing-complete, it is of course possible to introduce lazy control structures in eager languages, either as built-ins like C's ternary operator ?: or by other techniques such as clever use of lambdas, or macros.
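In Haskell, for instance, an if-like control structure can be an ordinary function, because the untaken branch is never demanded (`myIf` is our own example name):

    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    main :: IO ()
    main = print (myIf True (1 :: Int) (error "never evaluated"))   -- prints 1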
Short-circuit evaluation of Boolean control structures is sometimes called "lazy".

Controlling eagerness in lazy languages
---------------------------------------
In lazy programming languages such as Haskell, although the default is to evaluate expressions only when they are demanded, it is possible in some cases to make code more eager, or conversely, to make it more lazy again after it has been made more eager. This can be done by explicitly coding something which forces evaluation (which may make the code more eager) or avoiding such code (which may make the code more lazy). Strict evaluation usually implies eagerness, but they are technically different concepts.
However, there is an optimisation implemented in some compilers called strictness analysis, which, in some cases, allows the compiler to infer that a value will always be used. In such cases, this may render the programmer's choice of whether or not to force that particular value irrelevant, because strictness analysis will force strict evaluation.
In Haskell, marking constructor fields strict means that their values will always be demanded immediately. The seq function can also be used to demand a value immediately and then pass it on, which is useful if a constructor field should generally be lazy. However, neither of these techniques implements recursive strictness; for that, a function called deepSeq was invented.
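A small sketch of both devices (the `Pair` type is our example):

    -- The ! makes the first field strict: it is forced when the
    -- constructor itself is forced.
    data Pair = Pair !Int Int

    main :: IO ()
    main = do
      print (const 'a' (undefined :: Int))     -- fine: argument never forced
      -- print ((undefined :: Int) `seq` 'a')  -- uncommenting this crashes:
                                               -- seq demands its first argument
      let Pair x _ = Pair (1 + 1) undefined    -- lazy second field never forced
      print x                                  -- prints 2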
Also, pattern matching in Haskell 98 is strict by default, so the ~ qualifier has to be used to make it lazy.
confluence/Church-Rosser
663 "combinators", useful ones:
667 n-ary[sic] composition
668 "fold-based"[sic] representation of numbers
669 defining some operations, not yet predecessor
670 iszero,succ,add,mul,...?
explain differences between list and tuple (and stream)
FIFO queue, LIFO stack, etc.
"pair-based" representation of lists (1, 2, 3)
nil, cons, isnil, head, tail

explain operations like "map", "filter", "fold_left", "fold_right", "length", "reverse";
but we're not yet in position to implement them, because we don't know how to recurse
Another way to do lists is based on the model of how we did numbers:
"fold-based" representation of lists.
One virtue is we can do some recursion by exploiting the fold-based structure of our implementation; we don't (yet) need a general method for recursion.
Go back to numbers: how to do predecessor? (a few ways; a Haskell sketch of the pair-based trick follows below)
For some purposes it may be easier (to program, more efficient) to use a "pair-based" representation of numbers.
("More efficient", but these are still base-1 representations of numbers!)
In this case, too, you'd need a general method for recursion.
(You could also have a hybrid, pair-and-fold based representation of numbers, and a hybrid, pair-and-fold based representation of lists. Works quite well.)
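Here is the pair trick for predecessor transcribed into Haskell (our sketch; the `RankNTypes` extension lets Church numerals share one type, and all the names are ours):

    {-# LANGUAGE RankNTypes #-}

    type Church = forall a. (a -> a) -> a -> a

    zeroC :: Church
    zeroC _ z = z

    succC :: Church -> Church
    succC n s z = s (n s z)

    -- Predecessor via pairs: start from (0, 0) and step (i, j) to (i+1, i);
    -- after n steps the second component holds n - 1 (or 0 when n = 0).
    predC :: Church -> Church
    predC n s z = snd (n (\(i, _) -> (s i, i)) (z, z))

    toInt :: Church -> Int
    toInt n = n (+ 1) 0

    main :: IO ()
    main = print (toInt (predC (succC (succC (succC zeroC)))))   -- prints 2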
Even if we have a fold-based representation of numbers, and predecessor/equal/subtraction, some recursive functions are going to be out of our reach.
Need a general method, where f(n) doesn't just depend on f(n-1) (or <f(n-1), f(n-2), ...>). Example?
How to do recursion with omega.
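As a preview, here's that idea in Haskell (our sketch: bare self-application won't typecheck, so the `Mu` wrapper reintroduces it in a form the type checker accepts):

    newtype Mu a = Mu { unMu :: Mu a -> a }

    -- Curry's Y = \f. (\x. f (x x)) (\x. f (x x)), with the self-application
    -- x x written as unMu x x. Unfolding gives y f = f (y f).
    y :: (a -> a) -> a
    y f = g (Mu g)
      where g x = f (unMu x x)

    factorial :: Integer -> Integer
    factorial = y (\fact n -> if n == 0 then 1 else n * fact (n - 1))

    main :: IO ()
    main = print (factorial 5)   -- prints 120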
Next week: fixed point combinators