1. Substitution; using alpha-conversion and other strategies (see the sketch below)
1. Conversion versus reduction
1. Different evaluation strategies (call by name, call by value, etc.)
1. Strongly normalizing vs. weakly normalizing vs. non-normalizing; Church-Rosser Theorem(s)
1. Lambda calculus compared to combinatory logic
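As a preview of what capture-avoiding substitution involves, here is a minimal sketch in Haskell; the datatype and helper names are our own, not anything these notes presuppose:

    -- A sketch of capture-avoiding substitution for the untyped lambda calculus.
    data Term = Var String | Lam String Term | App Term Term
      deriving Show

    -- free variables of a term
    free :: Term -> [String]
    free (Var x)   = [x]
    free (Lam x b) = filter (/= x) (free b)
    free (App f a) = free f ++ free a

    -- subst x s t computes t[x := s], renaming a bound variable
    -- (alpha-conversion) whenever it would capture a free variable of s
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)
      | y == x    = s
      | otherwise = Var y
    subst x s (App f a) = App (subst x s f) (subst x s a)
    subst x s (Lam y b)
      | y == x          = Lam y b                 -- x is shadowed; stop here
      | y `elem` free s = Lam y' (subst x s b')   -- rename y to avoid capture
      | otherwise       = Lam y (subst x s b)
      where
        y' = fresh y (free s ++ free b)
        b' = subst y (Var y') b

    -- pick a variant of y that clashes with none of the given names
    fresh :: String -> [String] -> String
    fresh y avoid = head [v | v <- iterate (++ "'") y, v `notElem` avoid]

    -- subst "x" (Var "y") (Lam "y" (Var "x"))  ==>  Lam "y'" (Var "y"),
    -- not the capturing Lam "y" (Var "y").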

1. Church-like encodings of numbers, defining addition and multiplication (see the sketch after this list)
1. Defining the predecessor function; alternate encodings for the numbers
1. Homogeneous sequences or "lists"; how they differ from pairs, triples, etc.
1. Representing lists as pairs
1. Representing lists as folds
1. Typical higher-order functions: map, filter, fold
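These encodings can be transcribed into a real language. A hedged sketch in Haskell (relying on type inference; the names are ours):

    -- Church numerals: the number n is the function that applies f n times.
    zero f x = x
    suc n f x = f (n f x)          -- successor: one more application of f
    add m n f x = m f (n f x)      -- m applications of f on top of n of them
    mul m n f = m (n f)            -- m-fold iteration of n-fold iteration

    -- Collapse a numeral to a built-in Int, just for inspecting results:
    toInt n = n (+ 1) (0 :: Int)

    -- toInt (add (suc zero) (suc (suc zero)))               ==  3
    -- toInt (mul (suc (suc zero)) (suc (suc (suc zero))))   ==  6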

1. Recursion exploiting the fold-like representation of numbers and lists ([[!wikipedia Deforestation (computer science)]], [[!wikipedia Zipper (data structure)]])
1. General recursion using omega
1. Eta reduction and "extensionality"
1. Undecidability of equivalence

    There is no algorithm which takes as input two lambda expressions and outputs TRUE or FALSE depending on whether or not the two expressions are equivalent. This was historically the first problem for which undecidability could be proven. As is common for a proof of undecidability, the proof shows that no computable function can decide the equivalence; Church's thesis is then invoked to conclude that no algorithm can do so.

    Church's proof first reduces the problem to determining whether a given lambda expression has a normal form: an equivalent expression which cannot be reduced any further. He then assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene, and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression `e` which closely follows the proof of Gödel's first incompleteness theorem. If `e` is applied to its own Gödel number, a contradiction results.

1. The Y combinator(s); more on evaluation strategies (see the sketch below)
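Looking ahead to the Y combinator: in a typed language like Haskell the self-application inside Y is ill-typed, but a recursive newtype restores it. A sketch, with our own names:

    -- Wrapping the self-application in a recursive newtype makes it typable:
    newtype Rec a = In { out :: Rec a -> a }

    -- y f computes a fixed point of f; note that the body below uses no
    -- recursion of its own (only the type Rec is recursive):
    y :: (a -> a) -> a
    y f = w (In w)
      where w r = f (out r r)

    -- Example: factorial, written non-recursively and tied with y.
    fact :: Integer -> Integer
    fact = y (\rec n -> if n == 0 then 1 else n * rec (n - 1))

    -- fact 5  ==  120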

1. Introducing the notion of a "continuation", a technique we'll already have used a few times by then

Notation and terminology introduced so far:

* alpha-convertible; syntactic equality `===`
* contract/reduce/`~~>`
* convertible `<~~>`
* normalizing; weakly normalizable; strongly normalizable
* "normal order" vs. "applicative order" reduction; evaluation strategy choices

Reduction strategies

(For more detail, see the Wikipedia entry on evaluation strategies.)

Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. The distinction between reduction strategies relates to the distinction in functional programming languages between eager evaluation and lazy evaluation.

* Full beta reduction: any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy; with regard to reducibility, "all bets are off".
* Applicative order: the leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself. Applicative order always attempts to apply functions to normal forms, even when this is not possible. Most programming languages (including Lisp, ML, and imperative languages like C and Java) are described as "strict", meaning that functions applied to non-normalising arguments are themselves non-normalising. This is essentially applicative order, call-by-value reduction (see below), but usually called "eager evaluation".
* Normal order: the leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
* Call by name: as normal order, but no reductions are performed inside abstractions. For example, λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
* Call by value: only the outermost redexes are reduced: a redex is reduced only when its right-hand side has reduced to a value (a variable or lambda abstraction).
* Call by need: as normal order, but function applications that would duplicate terms instead name the argument, which is then reduced only "when it is needed"; this is called "lazy evaluation" in practical contexts. In implementations this "name" takes the form of a pointer, with the redex represented by a thunk.

Applicative order is not a normalising strategy. The usual counterexample is as follows: define Ω = ωω where ω = λx.xx. This entire expression contains only one redex, namely the whole expression; its reduct is again Ω. Since this is the only available reduction, Ω has no normal form (under any evaluation strategy). Now consider KIΩ = (λxy.x) (λx.x) Ω. Applicative order must reduce the argument Ω to a normal form before the function can be applied; since Ω has no normal form, applicative order fails to find a normal form for KIΩ. In contrast, normal order is so called because it always finds a normalising reduction if one exists: KIΩ reduces under normal order to I, a normal form. A drawback is that redexes in the arguments may be copied, resulting in duplicated computation. For example, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y) using this strategy; now there are two redexes, so full evaluation needs two more steps, whereas if the argument had been reduced first, there would now be none.
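The KIΩ example can be replayed in Haskell, whose call-by-need evaluation behaves like normal order here, while forcing the argument with `seq` mimics the applicative-order failure. A small sketch (the names are ours):

    -- K, and a looping computation standing in for Ω:
    k :: a -> b -> a
    k x _ = x

    omega :: Int
    omega = w ()               -- like ωω: each step just yields the same work
      where w x = w x

    main :: IO ()
    main = print (k 42 omega)
    -- Lazy (normal-order-like) evaluation never touches omega: prints 42.
    -- Forcing the argument first, as in  omega `seq` print (k 42 omega),
    -- mirrors applicative order and loops forever.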
The positive tradeoff of using applicative order is that it does not cause unnecessary computation if all arguments are used, because it never substitutes arguments containing redexes, and hence never needs to copy them (which would duplicate work). In the above example, applicative order reduces (λx.xx) ((λx.x)y) first to (λx.xx)y and then to the normal form yy, taking two steps instead of three.

Most purely functional programming languages (notably Miranda and its descendants, including Haskell), and the proof languages of theorem provers, use lazy evaluation, which is essentially the same as call by need. This is like normal order reduction, but call by need avoids the duplication of work inherent in normal order reduction by using sharing. In the example given above, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y), which has two redexes, but in call by need they are represented by the same object rather than copied, so when one is reduced the other is too.

Strict evaluation

In strict evaluation, the arguments to a function are always evaluated completely before the function is applied. Under Church encoding, eager evaluation of operators maps to strict evaluation of functions; for this reason, strict evaluation is sometimes called "eager". Most existing programming languages use strict evaluation for functions.

Applicative order

Applicative order (or leftmost innermost) evaluation refers to an evaluation strategy in which the arguments of a function are evaluated from left to right in a post-order traversal of reducible expressions (redexes). Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.

Call by value

Call-by-value evaluation (also referred to as pass-by-value) is the most common evaluation strategy, used in languages as different as C and Scheme. In call-by-value, the argument expression is evaluated, and the resulting value is bound to the corresponding variable in the function (frequently by copying the value into a new memory region). If the function or procedure is able to assign values to its parameters, only its local copy is assigned; that is, anything passed into a function call is unchanged in the caller's scope when the function returns.

Call-by-value is not a single evaluation strategy, but rather a family of evaluation strategies in which a function's argument is evaluated before being passed to the function. While many programming languages (such as Eiffel and Java) that use call-by-value evaluate function arguments left-to-right, some evaluate functions and their arguments right-to-left, and others (such as Scheme, OCaml and C) leave the order unspecified (though they generally require implementations to be consistent).

In some cases, the term "call-by-value" is problematic, as the value which is passed is not the value of the variable as understood by the ordinary meaning of value, but an implementation-specific reference to the value. The description "call-by-value where the value is a reference" is common (but should not be understood as being call-by-reference); another term is call-by-sharing.
Thus the behaviour of call-by-value Java or Visual Basic and call-by-value C or Pascal are significantly different: in C or Pascal, calling a function with a large structure as an argument will cause the entire structure to be copied, potentially causing serious performance degradation, and mutations to the structure are invisible to the caller. However, in Java or Visual Basic only the reference to the structure is copied, which is fast, and mutations to the structure are visible to the caller.

Call by reference

In call-by-reference evaluation (also referred to as pass-by-reference), a function receives an implicit reference to the argument, rather than a copy of its value. This typically means that the function can modify the argument, and those modifications will be seen by its caller. Call-by-reference therefore has the advantage of greater time- and space-efficiency (since arguments do not need to be copied), as well as the potential for closer communication between a function and its caller (since the function can return information using its reference arguments), but the disadvantage that a function must often take special steps to "protect" values it wishes to pass to other functions.

Many languages support call-by-reference in some form, but comparatively few use it as a default; Perl and Visual Basic are two that do, though Visual Basic also offers a special syntax for call-by-value parameters. A few languages, such as C++ and REALbasic, default to call-by-value, but offer special syntax for call-by-reference parameters. C++ additionally offers call-by-reference-to-const. In purely functional languages there is typically no semantic difference between the two strategies (since their data structures are immutable, so there is no possibility for a function to modify any of its arguments), so they are typically described as call-by-value even though implementations frequently use call-by-reference internally for the efficiency benefits. Even among languages that don't exactly support call-by-reference, many, including C and ML, support explicit references (objects that refer to other objects), such as pointers (objects representing the memory addresses of other objects), and these can be used to effect or simulate call-by-reference (with the complication that a function's caller must explicitly generate the reference to supply as an argument).

Call by sharing

Call-by-sharing (also known as "call by object" or "call by object-sharing") is an evaluation strategy first named by Barbara Liskov et al. for the language CLU in 1974. It is used by languages such as Python, Iota, Java (for object references), Ruby, Scheme, OCaml, AppleScript, and many others. However, the term "call by sharing" is not in common use, and the terminology is inconsistent across different sources: in the Java community it is said that Java is pass-by-value, whereas in the Ruby community it is said that Ruby is pass-by-reference, even though the two languages exhibit the same semantics. Call-by-sharing implies that values in the language are based on objects rather than primitive types. The semantics of call-by-sharing differ from call-by-reference in that assignments to function arguments within the function aren't visible to the caller (unlike by-reference semantics).
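Both halves of that contrast (mutation through the shared object is visible to the caller, reassignment of the parameter is not, as the next paragraph elaborates) can be sketched in Haskell, with `IORef`s loosely standing in for mutable objects; the function names are our own:

    import Data.IORef

    -- Mutating the shared object: the caller sees the change.
    mutate :: IORef [Int] -> IO ()
    mutate obj = modifyIORef obj (0 :)

    -- "Assigning to the parameter": the local name is merely rebound to a
    -- fresh object, and the caller's object is untouched.
    rebind :: IORef [Int] -> IO ()
    rebind _obj = do
      fresh <- newIORef [99]
      modifyIORef fresh (0 :)      -- affects only the function's own object

    main :: IO ()
    main = do
      obj <- newIORef [1, 2, 3]
      mutate obj
      readIORef obj >>= print      -- [0,1,2,3]: the mutation is visible
      rebind obj
      readIORef obj >>= print      -- still [0,1,2,3]: the "assignment" is not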
However, since the function has access to the same object as the caller (no copy is made), mutations to those objects within the function are visible to the caller, which differs from call-by-value semantics. Although the term has widespread usage in the Python community, identical semantics in other languages such as Java and Visual Basic are often described as call-by-value, where the value is implied to be a reference to the object.

Call by copy-restore

Call-by-copy-restore, call-by-value-result or call-by-value-return (as termed in the Fortran community) is a special case of call-by-reference where the provided reference is unique to the caller. If a parameter to a function call is a reference that might be accessible by another thread of execution, its contents are copied to a new reference that is not; when the function call returns, the updated contents of this new reference are copied back to the original reference ("restored"). The semantics of call-by-copy-restore also differ from those of call-by-reference when two or more function arguments alias one another, that is, point to the same variable in the caller's environment. Under call-by-reference, writing to one will affect the other; call-by-copy-restore avoids this by giving the function distinct copies, but leaves the result in the caller's environment undefined (depending on which of the aliased arguments is copied back first). When the reference is passed to the callee uninitialized, this evaluation strategy may be called call-by-result.

Partial evaluation

In partial evaluation, evaluation may continue into the body of a function that has not been applied. Any sub-expressions that do not contain unbound variables are evaluated, and function applications whose argument values are known may be reduced. In the presence of side-effects, complete partial evaluation may produce unintended results; for this reason, systems that support partial evaluation tend to do so only for "pure" expressions (expressions without side-effects) within functions.

Non-strict evaluation

In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body. Under Church encoding, lazy evaluation of operators maps to non-strict evaluation of functions; for this reason, non-strict evaluation is often referred to as "lazy". Boolean expressions in many languages use lazy evaluation; in this context it is often called short-circuiting. Conditional expressions also usually use lazy evaluation, albeit for different reasons.

Normal order

Normal-order (or leftmost outermost) evaluation is the evaluation strategy where the outermost redex is always reduced, applying functions before evaluating function arguments. It differs from call-by-name in that call-by-name does not evaluate inside the body of an unapplied function.

Call by name

In call-by-name evaluation, the arguments to functions are not evaluated at all; rather, function arguments are substituted directly into the function body using capture-avoiding substitution. If the argument is not used in the evaluation of the function, it is never evaluated; if the argument is used several times, it is re-evaluated each time. (See Jensen's Device.)
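Haskell itself is call-by-need, but call-by-name can be simulated with explicit thunks of type `() -> a`. A sketch, using `Debug.Trace` to watch evaluations happen (all names ours):

    import Debug.Trace (trace)

    -- Call-by-name simulated with an explicit thunk: every use re-runs it.
    twiceByName :: (() -> Int) -> Int
    twiceByName arg = arg () + arg ()        -- the suspension is entered twice

    -- Call-by-need is Haskell's native regime: the argument thunk is forced
    -- once and the result shared between both uses.
    twiceByNeed :: Int -> Int
    twiceByNeed arg = arg + arg

    main :: IO ()
    main = do
      print (twiceByName (\u -> trace "evaluated" (u `seq` 7)))
        -- traces "evaluated" twice: call-by-name repeats the work
        -- (the seq on u just keeps the compiler from sharing the body)
      print (twiceByNeed (trace "evaluated" 7))
        -- traces "evaluated" once: call-by-need shares the result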
Call-by-name evaluation can be preferable to call-by-value evaluation because call-by-name always yields a value when a value exists, whereas call-by-value may fail to terminate if the function's argument is a non-terminating computation that is not needed to evaluate the function. Opponents of call-by-name observe that it is significantly slower when the function argument is in fact used, and that in practice this is almost always the case, since a mechanism such as a thunk is needed.

Call by need

Call-by-need is a memoized version of call-by-name: if the function argument is evaluated, that value is stored for subsequent uses. In a "pure" (effect-free) setting, this produces the same results as call-by-name; when the function argument is used two or more times, call-by-need is almost always faster. Because evaluation of expressions may happen arbitrarily far into a computation, languages using call-by-need generally do not support computational effects (such as mutation) except through the use of monads and uniqueness types. This eliminates any unexpected behavior from variables whose values change prior to their delayed evaluation. Call-by-need is a kind of lazy evaluation. Haskell is the best-known language that uses call-by-need evaluation; R also uses a form of call-by-need.

Call by macro expansion

Call-by-macro-expansion is similar to call-by-name, but uses textual substitution rather than capture-avoiding substitution. Used incautiously, macro substitution may result in variable capture and lead to undesired behavior. Hygienic macros avoid this problem by checking for and renaming shadowed variables that are not parameters.

Eager evaluation

Eager evaluation (or greedy evaluation) is the evaluation strategy of most traditional programming languages: an expression is evaluated as soon as it gets bound to a variable. The term is typically used in contrast to lazy evaluation, where expressions are evaluated only when a dependent expression is evaluated. Eager evaluation is almost exclusively used in imperative programming languages, where the order of execution is implicitly defined by the source code organization.

One advantage of eager evaluation is that it eliminates the need to track and schedule the evaluation of expressions. It also allows the programmer to dictate the order of execution, making it easier to determine when sub-expressions (including functions) within the expression will be evaluated, since these sub-expressions may have side-effects that affect the evaluation of other expressions. A disadvantage is that eager evaluation forces the evaluation of expressions that may not be necessary at run time, and may delay the evaluation of expressions that have a more immediate need. It also forces the programmer to organize the source code for optimal order of execution. (Many modern compilers can schedule execution to better optimize processor resources, and can often eliminate unnecessary expressions from being executed entirely, so the notions of purely eager or purely lazy evaluation may not be applicable in practice.)

Lazy evaluation

In computer programming, lazy evaluation is the technique of delaying a computation until the result is required.
The benefits of lazy evaluation include: performance increases due to avoiding unnecessary calculations, avoiding error conditions in the evaluation of compound expressions, the ability to construct potentially infinite data structures, and the ability to define control structures as abstractions instead of as primitives.

Languages that use lazy evaluation can be further subdivided into those that use a call-by-name evaluation strategy and those that use call-by-need. Most realistic lazy languages, such as Haskell, use call-by-need for performance reasons, but theoretical presentations of lazy evaluation often use call-by-name for simplicity. The opposite of lazy evaluation is eager evaluation, sometimes known as strict evaluation, which is the evaluation behavior used in most programming languages.

Lazy evaluation refers to how expressions are evaluated when they are passed as arguments to functions, and entails the following three points:

1. The expression is only evaluated if the result is required by the calling function ("delayed evaluation").
2. The expression is only evaluated to the extent that is required by the calling function ("short-circuit evaluation").
3. The expression is never evaluated more than once (so-called "applicative-order evaluation").

Delayed evaluation

Delayed evaluation is used particularly in functional languages. When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but only when the evaluator is forced to produce the expression's value. That is, a statement such as `x := expression;` (the assignment of the result of an expression to a variable) clearly calls for the expression to be evaluated and the result placed in `x`, but what actually is in `x` is irrelevant until there is a need for its value, via a reference to `x` in some later expression whose evaluation could itself be deferred. Eventually the growing tree of dependencies is pruned in order to produce some result for the outside world to see.

Some programming languages delay evaluation of expressions by default, and others provide functions or special syntax to delay evaluation. In Miranda and Haskell, evaluation of function arguments is delayed by default. In many other languages, evaluation can be delayed by explicitly suspending the computation using special syntax (as with Scheme's `delay` and `force` and OCaml's `lazy` and `Lazy.force`) or, more generally, by wrapping the expression in a thunk. The object representing such an explicitly delayed evaluation is called a future or promise. Perl 6 uses lazy evaluation of lists, so one can assign infinite lists to variables and use them as arguments to functions, but unlike Haskell and Miranda, Perl 6 doesn't use lazy evaluation of arithmetic operators and functions by default.

Delayed evaluation has the advantage of making it possible to build calculable infinite lists without infinite loops or size limits interfering with the computation. For example, one could create a function that produces the infinite list (often called a stream) of Fibonacci numbers. Computing the n-th Fibonacci number is then merely the extraction of that element from the infinite list, forcing the evaluation of only the first n members of the list.
For example, in Haskell the list of all Fibonacci numbers can be written as

    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

In Haskell syntax, `:` prepends an element to a list, `tail` returns a list without its first element, and `zipWith` uses a specified function (in this case addition) to combine corresponding elements of two lists to produce a third. Provided the programmer is careful, only the values that are required to produce a particular result are evaluated. However, certain calculations may result in the program attempting to evaluate an infinite number of elements; for example, requesting the length of the list, or trying to sum its elements with a fold operation, would result in the program either failing to terminate or running out of memory.

Control structures

Even in most eager languages, `if` statements evaluate in a lazy fashion: `if a then b else c` evaluates (a), and then evaluates (b) if and only if (a) evaluates to true, otherwise it evaluates (c). That is, either (b) or (c) will not be evaluated. Conversely, in an eager language the expected behavior is that

    define f(x, y) = 2 * x
    set k = f(e, 5)

will still evaluate (e) and the call to (f) when computing (k). However, user-defined control structures depend on exact syntax, so for example with

    define g(a, b, c) = if a then b else c
    l = g(h, i, j)

(i) and (j) would both be evaluated in an eager language, while in

    l' = if h then i else j

either (i) or (j) would be evaluated, but never both. Lazy evaluation allows control structures to be defined normally, and not as primitives or compile-time techniques. If (i) or (j) have side effects or introduce run-time errors, the subtle differences between (l) and (l') can be complex. As most programming languages are Turing-complete, it is of course possible to introduce lazy control structures in eager languages, either as built-ins like C's ternary operator `?:`, or by other techniques such as clever use of lambdas, or macros. Short-circuit evaluation of Boolean control structures is sometimes called "lazy".

Controlling eagerness in lazy languages

In lazy programming languages such as Haskell, although the default is to evaluate expressions only when they are demanded, it is possible in some cases to make code more eager, or conversely, to make it more lazy again after it has been made more eager. This can be done by explicitly writing code that forces evaluation (which may make the code more eager) or by avoiding such code (which may make the code more lazy). Strict evaluation usually implies eagerness, but they are technically different concepts. Some compilers implement an optimisation called strictness analysis, which in some cases allows the compiler to infer that a value will always be used; in such cases, the programmer's choice of whether to force a particular value may become irrelevant, because strictness analysis will force strict evaluation anyway.

In Haskell, marking constructor fields strict means that their values will always be demanded immediately. The `seq` function can also be used to demand a value immediately and then pass it on, which is useful if a constructor field should generally be lazy. Neither of these techniques implements recursive strictness; for that, a function called `deepSeq` was invented. Also, pattern matching in Haskell 98 is strict by default, so the `~` qualifier has to be used to make it lazy.
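As a tiny illustration of forcing evaluation with `seq`: the strict accumulator below avoids building up a long chain of suspended additions. This is a standard idiom; the function names are ours:

    -- Summing with a lazy accumulator would build thunks like ((0+1)+2)+...;
    -- seq forces each intermediate sum, keeping the accumulator evaluated.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go acc []       = acc
        go acc (x : xs) = let acc' = acc + x
                          in acc' `seq` go acc' xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])   -- 500000500000, in constant space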
confluence/Church-Rosser

"Combinators", useful ones:

* I
* K (= true/get-first)
* false/get-second
* make-pair
* ω (dup), Ω
* S, B, C, W

(( combinatory logic ))

composition; n-ary composition

"Fold-based" representation of numbers: defining some operations (iszero, succ, add, mul, ...?), but not yet predecessor.

Lists? Explain the differences between a list and a tuple (and a stream); FIFO queue, LIFO stack, etc.

"Pair-based" representation of lists: (1, 2, 3); nil, cons, isnil, head, tail. Explain operations like map, filter, fold_left, fold_right, length, reverse, but we're not yet in a position to implement them, because we don't know how to recurse.

Another way to do lists is based on the model of how we did numbers: a "fold-based" representation of lists. One virtue is that we can do some recursion by exploiting the fold-based structure of our implementation; we don't (yet) need a general method for recursion. (See the sketch below.)

Go back to numbers: how to do predecessor? (There are a few ways.) For some purposes it may be easier (to program, or more efficient) to use a "pair-based" representation of numbers. ("More efficient", but these are still base-1 representations of numbers!) In this case, too, you'd need a general method for recursion. (You could also have a hybrid, pair-and-fold based representation of numbers, and a hybrid, pair-and-fold based representation of lists. Works quite well.)

Recursion: even if we have a fold-based representation of numbers, and predecessor/equality/subtraction, some recursive functions are going to be out of our reach. We need a general method, where f(n) doesn't just depend on f(n-1) (or ...). Example? How to do recursion with omega. Next week: fixed point combinators.
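To preview how far the fold-based representations get us before we have a general method for recursion, here is a hedged Haskell sketch (we borrow Haskell's own `Int`s at the edges for testing; all names are ours):

    -- Fold-based lists: a list just is its own right-fold.
    nil c n = n
    cons x xs c n = c x (xs c n)

    -- map and length fall out of the fold structure; no general recursion
    -- needed (map fuses f into the fold rather than rebuilding with cons):
    mapL f xs c n = xs (c . f) n
    lengthL xs = xs (\_ len -> len + 1) (0 :: Int)

    -- Fold-based numerals again (as in the earlier sketch):
    zero f x = x
    suc n f x = f (n f x)

    -- Pair-based predecessor: iterate (a, b) -> (b, b + 1) n times from
    -- (0, 0); after n steps the first component is n - 1. We let the pair
    -- carry plain Ints to keep Haskell's types simple.
    pair a b f = f a b
    first p = p (\a _ -> a)
    second p = p (\_ b -> b)

    predInt n = first (n step (pair (0 :: Int) 0))
      where step p = pair (second p) (second p + 1)

    -- lengthL (mapL (+ 1) (cons 1 (cons 2 nil)))  ==  2
    -- predInt (suc (suc (suc zero)))              ==  2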