From: Jim Date: Sat, 21 Feb 2015 14:15:57 +0000 (-0500) Subject: Merge branch 'working' X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=commitdiff_plain;h=6694165ac4d6edab602b3ad3651d0a5931b36a0e;hp=12b94057358619fd5ae7a48f685fedba8ce4d082 Merge branch 'working' * working: replace tattoo image --- diff --git a/content.mdwn b/content.mdwn index a546e9d8..b6f23c0b 100644 --- a/content.mdwn +++ b/content.mdwn @@ -80,4 +80,4 @@ Week 4: * [[Fixed point combinators|topics/week4_fixed_point_combinators]] * [[More about fixed point combinators|topics/week4_more_about_fixed_point_combinators]] * Towards types (in progress) - +* [[Homework for week 4|exercises/assignment4]] diff --git a/exercises/_assignment4.mdwn b/exercises/_assignment4.mdwn deleted file mode 100644 index 7df06052..00000000 --- a/exercises/_assignment4.mdwn +++ /dev/null @@ -1,146 +0,0 @@ -## Recursion ## - -## Basic fixed points ## - -1. Recall that ω := \f.ff, and Ω := -ω ω. Is Ω a fixed point for -ω? Find a fixed point for ω, -and prove that it is a fixed point. - -2. Consider Ω X for an arbitrary term `X`. -Ω is so busy reducing itself (the eternal solipsist) that it -never gets around to noticing whether it has an argument, let alone -doing anything with that argument. If so, how could Ω have a -fixed point? That is, how could there be an `X` such that -Ω X <~~> Ω(Ω X)? To answer this -question, begin by constructing . Prove that - is a fixed point for Ω. - -3. Find two different terms that have the same fixed point. That is, -find terms `F`, `G`, and `X` such that `F(X) <~~> F(F(X))` and `G(X) -<~~> G(G(X))`. (If you need a hint, reread the notes on fixed -points.) - -## Writing recursive functions ## - -4. Helping yourself to the functions given just below, -write a recursive function caled `fac` that computes the factorial. -The factorial `n! = n * (n - 1) * (n - 2) * ... * 3 * 2 * 1`. -For instance, `fac 0 ~~> 1`, `fac 1 ~~> 1`, `fac 2 ~~> 2`, `fac 3 ~~> -6`, and `fac 4 ~~> 24`. - - let true = \then else. then in - let false = \then else. else in - let iszero = \n. n (\x. false) true in - let pred = \n f z. n (\u v. v (u f)) (K z) I in - let succ = \n f z. f (n f z) in - let add = \n m .n succ m in - let mult = \n m.n(add m)0 in - let Y = \h . (\f . h (f f)) (\f . h (f f)) in - - let fac = ... in - - fac 4 - -## Arithmetic infinity? ## - -The next few questions involve reasoning about Church arithmetic and -infinity. Let's choose some arithmetic functions: - - succ = \nfz.f(nfz) - add = \m n. m succ n in - mult = \m n. m (add n) 0 in - exp = \m n . m (mult n) 1 in - -There is a pleasing pattern here: addition is defined in terms of -the successor function, multiplication is defined in terms of -addition, and exponentiation is defined in terms of multiplication. - -1. Find a fixed point `X` for the succ function. Prove it's a fixed -point, i.e., demonstrate that `succ X <~~> succ (succ X)`. - -We've had surprising success embedding normal arithmetic in the lambda -calculus, modeling the natural numbers, addition, multiplication, and -so on. But one thing that some versions of arithmetic supply is a -notion of infinity, which we'll write as `inf`. This object usually -satisfies the following constraints, for any (finite) natural number `n`: - - n + inf == inf - n * inf == inf - n ^ inf == inf - n leq inf == True - -2. Prove that `add 1 X <~~> X`, where `X` is the fixed -point you found in (1). What about `add 2 X <~~> X`? 
-Comment: a fixed point for the succ function is an object such that it
-is unchanged after adding 1 to it. It makes a certain amount of sense
-to use this object to model arithmetic infinity. For instance,
-depending on implementation details, it might happen that `leq n X` is
-true for all (finite) natural numbers `n`. However, the fixed point
-you found for succ is unlikely to be a fixed point for `mult n` or for
-`exp n`.
-
-
-#Mutually-recursive functions#
-
    -
  1. (Challenging.) One way to define the function `even` is to have it hand off -part of the work to another function `odd`: - - let even = \x. iszero x - ; if x == 0 then result is - true - ; else result turns on whether x's pred is odd - (odd (pred x)) - -At the same tme, though, it's natural to define `odd` in such a way that it -hands off part of the work to `even`: - - let odd = \x. iszero x - ; if x == 0 then result is - false - ; else result turns on whether x's pred is even - (even (pred x)) - -Such a definition of `even` and `odd` is called **mutually recursive**. If you -trace through the evaluation of some sample numerical arguments, you can see -that eventually we'll always reach a base step. So the recursion should be -perfectly well-grounded: - - even 3 - ~~> iszero 3 true (odd (pred 3)) - ~~> odd 2 - ~~> iszero 2 false (even (pred 2)) - ~~> even 1 - ~~> iszero 1 true (odd (pred 1)) - ~~> odd 0 - ~~> iszero 0 false (even (pred 0)) - ~~> false - -But we don't yet know how to implement this kind of recursion in the lambda -calculus. - -The fixed point operators we've been working with so far worked like this: - - let X = Y T in - X <~~> T X - -Suppose we had a pair of fixed point operators, `Y1` and `Y2`, that operated on -a *pair* of functions `T1` and `T2`, as follows: - - let X1 = Y1 T1 T2 in - let X2 = Y2 T1 T2 in - X1 <~~> T1 X1 X2 and - X2 <~~> T2 X1 X2 - -If we gave you such a `Y1` and `Y2`, how would you implement the above -definitions of `even` and `odd`? - - -
  2. (More challenging.) Using our derivation of Y from the [Week3 -notes](/week3/#index4h2) as a model, construct a pair `Y1` and `Y2` that behave -in the way described. - -(See [[hints/Assignment 4 hint 3]] if you need some hints.) - diff --git a/exercises/assignment4.mdwn b/exercises/assignment4.mdwn new file mode 100644 index 00000000..8bbeffe4 --- /dev/null +++ b/exercises/assignment4.mdwn @@ -0,0 +1,214 @@ +## Fixed points ## + +1. Recall that `ω ≡ \x. x x`, and `Ω ≡ ω ω`. Is `Ω` a fixed point for `ω`? Find a fixed point for `ω`, and prove that it is a fixed point. + +2. Consider `Ω ξ` for an arbitrary term `ξ`. +`Ω` is so busy reducing itself (the eternal narcissist) that it +never gets around to noticing whether it has an argument, let alone +doing anything with that argument. If so, how could `Ω` have a +fixed point? That is, how could there be an `ξ` such that +`Ω ξ <~~> ξ`? To answer this +question, begin by constructing `Y Ω`. Prove that +`Y Ω` is a fixed point for `Ω`. + +3. Find two different terms that have the same fixed point. That is, +find terms `F`, `G`, and `ξ` such that `F ξ <~~> ξ` and `G ξ +<~~> ξ`. (If you need a hint, reread the notes on fixed +points.) + +4. Assume that `Ψ` is some fixed point combinator; we're not telling you which one. (You can just write `Psi` in your homework if you don't know how to generate the symbol `Ψ`.) Prove that `Ψ Ψ` is a fixed point of itself, that is, that `Ψ Ψ <~~> Ψ Ψ (Ψ Ψ)`. + + + + +## Writing recursive functions ## + +5. Helping yourself to the functions given below, +write a recursive function called `fact` that computes the factorial. +The factorial `n! = n * (n - 1) * (n - 2) * ... * 3 * 2 * 1`. +For instance, `fact 0 ~~> 1`, `fact 1 ~~> 1`, `fact 2 ~~> 2`, `fact 3 ~~> +6`, and `fact 4 ~~> 24`. + + let true = \y n. y in + let false = \y n. n in + let pair = \a b. \v. v a b in + let fst = \a b. a in ; aka true + let snd = \a b. b in ; aka false + let zero = \s z. z in + let succ = \n s z. s (n s z) in + let zero? = \n. n (\p. false) true in + let pred = \n. n (\p. p (\a b. pair (succ a) a)) (pair zero zero) snd in + let add = \l r. r succ l in + let mult = \l r. r (add l) 0 in + let Y = \h. (\u. h (u u)) (\u. h (u u)) in + + let fact = ... in + + fact 4 + +6. For this question, we want to implement **sets** of numbers in terms of lists of numbers, where we make sure as we construct those lists that they never contain a single number more than once. (It would be even more efficient if we made sure that the lists were always sorted, but we won't try to implement that refinement here.) To enforce the idea of modularity, let's suppose you don't know the details of how the lists are implemented. You just are given the functions defined below for them (but pretend you don't see the actual definitions). These define lists in terms of [[one of the new encodings discussed last week|/topics/week3_lists#v5-lists]]. + + ; all functions from the previous question, plus + ; `num_cmp x y lt eq gt` returns lt when xy + let num_cmp = (\base build consume. \l r. r consume (l build base) fst) + ; where base is + (pair (\a b c. b) (K (\a b c. a))) + ; and build is + (\p. pair (\a b c. c) p) + ; and consume is + (\p. p fst p (p snd) (p snd)) in + let num_equal? = \x y. num_cmp x y false true false in + let neg = \b y n. b n y in + let empty = \f n. n in + let cons = \x xs. \f n. f x xs in + let empty? = \xs. xs (\y ys. false) true in + let tail = \xs. xs (\y ys. ys) empty in + let append = Y (\append. \xs zs. xs (\y ys. 
(cons y (append ys zs))) zs) in + let take_while = Y (\take_while. \p xs. xs (\y ys. (p y) (cons y (take_while p ys)) empty) empty) in + let drop_while = Y (\drop_while. \p xs. xs (\y ys. (p y) (drop_while p ys) xs) empty) in + ... + + The functions `take_while` and `drop_while` work as described in [[Week 1's homework|assignment1]]. + + Using those resources, define a `set_cons` and a `set_equal?` function. The first should take a number argument `x` and a set argument `xs` (implemented as a list of numbers assumed to have no repeating elements), and return a (possibly new) set argument which contains `x`. (But make sure `x` doesn't appear in the result twice!) The `set_equal?` function should take two set arguments `xs` and `ys` and say whether they represent the same set. (Be careful, the lists `[1, 2]` and `[2, 1]` are different lists but do represent the same set. Hence, you can't just use the `list_equal?` function you defined in last week's homework.) + + Here are some tips for getting started. Use `drop_while`, `num_equal?`, and `empty?` to define a `mem?` function that returns `true` if number `x` is a member of a list of numbers `xs`, else returns `false`. Also use `take_while`, `drop_while`, `num_equal?`, `tail` and `append` to define a `without` function that returns a copy of a list of numbers `xs` that omits the first occurrence of a number `x`, if there be such. You may find these functions `mem?` and `without` useful in defining `set_cons` and `set_equal?`. Also, for `set_equal?`, you are probably going to want to define the function recursively... as now you know how to do. + + +7. Linguists often analyze natural language expressions into trees. We'll need trees in future weeks, and tree structures provide good opportunities for learning how to write recursive functions. Making use of our current resources, we might approximate trees as follows. Instead of words or syntactic categories, we'll have the nodes of the tree labeled with Church numbers. We'll think of a tree as a list in which each element is itself a tree. For simplicity, we'll adopt the convention that a tree of length 1 must contain a number as its only element. + + Then we have the following representations: + + . + /|\ + / | \ + 1 2 3 + + [[1], [2], [3]] + + . + / \ + /\ 3 + 1 2 + + [[[1], [2]], [3]] + + . + / \ + 1 /\ + 2 3 + + [[1], [[2], [3]]] + + Some limitations of this scheme: there is no easy way to label an inner, branching node (for example with a syntactic category like VP), and there is no way to represent a tree in which a mother node has a single daughter. + + When processing a tree, you can test for whether the tree is a leaf node (that is, contains only a single number), by testing whether the length of the list is 1. This will be your base case for your recursive definitions that work on these trees. + + Your assignment is to write a Lambda Calculus function that expects a tree, encoded in the way just described, as an argument, and returns the sum of its leaves as a result. So for all of the trees listed above, it should return `1 + 2 + 3`, namely `6`. You can use any Lambda Calculus implementation of lists you like. + + + +8. The **fringe** of a leaf-labeled tree is the list of values at its leaves, ordered from left-to-right. For example, the fringe of all three trees displayed above is the same list, `[1, 2, 3]`. We are going to return to the question of how to tell whether trees have the same fringe several times this course. 
We'll discover more interesting and more efficient ways to do it as our conceptual toolboxes get fuller. For now, we're going to explore the straightforward strategy. Write a function that expects a tree as an argument, and returns the list which is its fringe. Next write a function that expects two trees as arguments, converts each of them into their fringes, and then determines whether the two lists so produced are equal. (Convert your `list_equal?` function from last week's homework into the Lambda Calculus for this last step.) + + + +## Arithmetic infinity? ## + +The next few questions involve reasoning about Church arithmetic and +infinity. Let's choose some arithmetic functions: + + succ = \n s z. s (n s z) + add = \l r. r succ l in + mult = \l r. r (add l) 0 in + exp = \base r. r (mult base) 1 in + +There is a pleasing pattern here: addition is defined in terms of +the successor function, multiplication is defined in terms of +addition, and exponentiation is defined in terms of multiplication. + + +9. Find a fixed point `ξ` for the successor function. Prove it's a fixed +point, i.e., demonstrate that `succ ξ <~~> ξ`. + + We've had surprising success embedding normal arithmetic in the Lambda +Calculus, modeling the natural numbers, addition, multiplication, and +so on. But one thing that some versions of arithmetic supply is a +notion of infinity, which we'll write as `inf`. This object sometimes +satisfies the following constraints, for any finite natural number `n`: + + n + inf == inf + n * inf == inf + n ^ inf == inf + leq n inf == true + + (Note, though, that with *some* notions of infinite numbers, like [[!wikipedia ordinal numbers]], operations like `+` are defined in such a way that `inf + n` is different from `n + inf`, and does exceed `inf`; similarly for `*` and `^`. With other notions of infinite numbers, like the [[!wikipedia cardinal numbers]], even less familiar arithmetic operations are employed.) + +10. Prove that `add ξ 1 <~~> ξ`, where `ξ` is the fixed +point you found in (1). What about `add ξ 2 <~~> ξ`? + +Comment: a fixed point for the successor function is an object such that it +is unchanged after adding 1 to it. It makes a certain amount of sense +to use this object to model arithmetic infinity. For instance, +depending on implementation details, it might happen that `leq n ξ` is +true for all (finite) natural numbers `n`. However, the fixed point +you found for `succ` and `(+n)` (recall this is shorthand for `\x. add x n`) may not be a fixed point for `(*n)`, for example. + + +## Mutually-recursive functions ## + +11. (Challenging.) One way to define the function `even?` is to have it hand off +part of the work to another function `odd?`: + + let even? = \x. (zero? x) + ; if x == 0 then result is + true + ; else result turns on whether x-1 is odd + (odd? (pred x)) + + At the same tme, though, it's natural to define `odd?` in such a way that it +hands off part of the work to `even?`: + + let odd? = \x. (zero? x) + ; if x == 0 then result is + false + ; else result turns on whether x-1 is even + (even? (pred x)) + + Such a definition of `even?` and `odd?` is called **mutually recursive**. If you +trace through the evaluation of some sample numerical arguments, you can see +that eventually we'll always reach a base step. So the recursion should be +perfectly well-grounded: + + even? 3 + ~~> (zero? 3) true (odd? (pred 3)) + ~~> odd? 2 + ~~> (zero? 2) false (even? (pred 2)) + ~~> even? 1 + ~~> (zero? 1) true (odd? (pred 1)) + ~~> odd? 0 + ~~> (zero? 
0) false (even? (pred 0)) + ~~> false + + But we don't yet know how to implement this kind of recursion in the Lambda +Calculus. + + The fixed point operators we've been working with so far worked like this: + + let ξ = Y h in + ξ <~~> h ξ + + Suppose we had a pair of fixed point operators, `Y1` and `Y2`, that operated on +a *pair* of functions `h` and `g`, as follows: + + let ξ1 = Y1 h g in + let ξ2 = Y2 h g in + ξ1 <~~> h ξ1 ξ2 and + ξ2 <~~> g ξ1 ξ2 + + If we gave you such a `Y1` and `Y2`, how would you implement the above +definitions of `even?` and `odd?`? + + +12. (More challenging.) Using our derivation of `Y` from [[this week's notes|topics/week4_fixed_point_combinators#deriving-y]] as a model, construct a pair `Y1` and `Y2` that behave in the way described above. + + Here is one hint to get you started: remember that in the notes, we constructed a fixed point for `h` by evolving it into `H` and using `H H` as `h`'s fixed point. We suggested the thought exercise, how might you instead evolve `h` into some `T` and then use `T T T` as `h`'s fixed point. Try solving this problem first. It may help give you the insights you need to define a `Y1` and `Y2`. [[Here are some hints|assignment4_hint]]. diff --git a/exercises/assignment4_hint.mdwn b/exercises/assignment4_hint.mdwn new file mode 100644 index 00000000..666b4603 --- /dev/null +++ b/exercises/assignment4_hint.mdwn @@ -0,0 +1,125 @@ +## Hint/solution for Y1, Y2 ## + +(Perhaps you might read a few bullet points to get some momentum, then stop reading and see how much further you can get on your own.) + +* First, the thought exercise about `T T T`. Recall that `h` was of the form `\body. ...body...`, and we created one fixed point `ξ` like this: + + H ≡ \u. h (u u) + ξ ≡ H H + + Now we want to create another fixed point `ξ′` that works like this: + + T ≡ \u v. h ... + ξ′ ≡ T T T + ≡ (\u v. h ...) T T + <~~> h (T T T) + + The last line is justified because we know that `ξ′ ≡ T T T` is supposed to be a fixed point of `h`. How could you fill in the `...` so that the next to last line reduced to the last line? + +* We're going to give the answer now, so try to solve it without looking... + +* Answer: `T ≡ (\u v. h (u u v))`. (Really it could be three occurrences of any mix of `u`s or `v`s. You could also use, for example `(\u v. h (v u v)`.) You'll see that when `u` is bound to `T` and so is `v`, then `(u u v) <~~> T T T`. + +* Ok, now let's tackle `Y1` and `Y2`. Begin by trying to specify `ξ1` and `ξ2` directly, then worry about making operators `Y1` and `Y2` such that: + + ξ1 ≡ Y1 h g + ξ2 ≡ Y2 h g + + later. Remember how `ξ1` and `ξ2` need to work: + + ξ1 <~~> h ξ1 ξ2 and + ξ2 <~~> g ξ1 ξ2 + +* First hint: `ξ1` is going to have the form `X1 H G`, for three terms `X1`, `H`, and `G`. Don't worry yet about what the terms `H` and `G` are. + +* Second hint: `ξ2` is going to have the form `X2 H G`. + +* So evidently, `X1` will be an abstract that accepts (at least) two arguments, and so too `X2`. We can write them schematically like this: + + X1 ≡ \u v. ... + X2 ≡ \u v. ... + + (What fills in the `...` in the two cases will presumably be different.) And thus: + + ξ1 ≡ (\u v. ...) H G + ξ2 ≡ (\u v. ...) H G + +* Can you solve it now? + +* Another hint. `h` is going to be a function like `even?` that needs to refer to `odd?`, and possibly also to itself. (In the particular case of `even?` it doesn't need to refer to itself, but in general a mutually recursive function might.) So we can imagine `h` to have the form `\h g. 
...`, and similarly, we can imagine `g` to have the form `\h g. ...`. For `g`, the `...` will presumably be different than it is for `h`. (We adopt the convention of always having the `\h` come first and the `\g` come second in *both* `h` and `g`, just because that makes some of the patterns that will emerge easier to see. You could do it differently if you wanted. Then your answer would be different than the one we're developing here.) + +* What `h` should be using for its `\h` variable is the fixed point `ξ1`; and for its `\g` variable is the fixed point `ξ2`. Same for `g`. + +* So `X1` will have the form: + + X1 ≡ \u v. h ξ1 ξ2 + ≡ \u v. h (X1 H G) (X2 H G) + + And `X2` will have the form: + + X2 ≡ \u v. g ξ1 ξ2 + ≡ \u v. g (X1 H G) (X2 H G) + + Thus: + + ξ1 ≡ (\u v. h (X1 H G) (X2 H G)) H G + ξ2 ≡ (\u v. g (X1 H G) (X2 H G)) H G + + where `X1` happens also to be the lambda term that is the head of `ξ1`; and `X2` happens also to be the lambda term that is the head of `ξ2`. + +* Where we have `H` in `X1` (the head of `ξ1`), there we could just use the bound variable `u`, right? Similarly for `G` we could just use the bound variable `v`, right? And similarly for `X2` (the head of `ξ2`). So: + + ξ1 ≡ (\u v. h (X1 u v) (X2 u v)) H G + ξ2 ≡ (\u v. g (X1 u v) (X2 u v)) H G + +* Can you solve it yet? + +* Don't bring in `Y` or any other already-familiar fixed point combinator; that's a false path. You don't need it. + +* The question you should be focusing on is: what are `H` and `G`? What are their relations to `X1` and `X2`? These might not be four distinct lambda terms. How many lambda terms do you think those four labels collectively designate? + +* They designate only two lambda terms. + +* Can you figure out what they are? + +* `H` is `X1` and `G` is `X2`. + +* Now can you figure out how to get rid of the explicit self-referencing whereby `X1` contains the free variable `X1`, meant to refer to the lambda abstract that contains it? If `X1` is just `H`, also known inside the body of `X1` as `u`, we could just write `ξ1` like this, right: + + ξ1 ≡ (\u v. h (u u v) (X2 u v)) H G + + And similarly, if `X2` is just `G`, also known inside the body of `X1` as `v`, we could just write `ξ1` like this, right: + + ξ1 ≡ (\u v. h (u u v) (v u v)) H G + + The lambda abstract that is the head of `ξ1` is just an ordinary, non-self-referring lambda term, and it is what we've been calling `X1` and `H`. + +* Now can you write `ξ2`? + +* Answer: + + ξ2 ≡ (\u v. g (u u v) (v u v)) H G + +* So in short, `ξ1 ≡ H H G` and `ξ2 ≡ G H G`, where `H` and `G` are the terms `(\u v. h (u u v) (v u v))` and `(\u v. g (u u v) (v u v))`, respectively. If you do the substitutions, you will see that these definitions do indeed satisfy our constraint that: + + ξ1 <~~> h ξ1 ξ2 and + ξ2 <~~> g ξ1 ξ2 + +* Ok, so that provides a `ξ1` and a `ξ2` for our specific choice of the functions `h` and `g`. Now can you generalize this, so that + + ξ1 ≡ Y1 h g + ξ2 ≡ Y2 h g + +* We're going to give the answer. Do you want to try again to solve it yourself? By this point you should be able to do it. + +* If you're confused, read this all through again until you understand it. + +* Ready for the answer? + +* Ok, here it is: + + Y1 ≡ \h g. (\H G. H H G) (\u v. h (u u v) (v u v)) (\u v. g (u u v) (v u v)) + Y2 ≡ \h g. (\H G. G H G) (\u v. h (u u v) (v u v)) (\u v. g (u u v) (v u v)) + + You can reduce those a bit further, but it makes the pattern harder to see. 
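* If you want to double-check that, here is a sketch of the verification (using the same abbreviations as above, `H ≡ (\u v. h (u u v) (v u v))` and `G ≡ (\u v. g (u u v) (v u v))`):

        Y1 h g ~~> (\H G. H H G) H G
               ~~> H H G
               ≡  (\u v. h (u u v) (v u v)) H G
               ~~> h (H H G) (G H G)

    And since `Y1 h g <~~> H H G` and `Y2 h g <~~> G H G`, that last line is just `h (Y1 h g) (Y2 h g)`, as required. The same steps with `(\H G. G H G)` at the head show that `Y2 h g <~~> g (Y1 h g) (Y2 h g)`.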
+ diff --git a/index.mdwn b/index.mdwn index 8f0e0c6e..a122c1da 100644 --- a/index.mdwn +++ b/index.mdwn @@ -123,7 +123,8 @@ Reduction Strategies and Normal Forms (in progress); > Topics: [[!img images/tabletop_roleplaying.png size="240x240" alt="Hey, no recursing"]] [[Yes, recursing|topics/week4_fixed_point_combinators]]; [[More about fixed point combinators|topics/week4_more_about_fixed_point_combinators]]; -Towards types (in progress) +Towards types (in progress); +[[Homework|exercises/assignment4]] > Now you can read Sections 3.1 and 6.1 of Hankin; and browse the rest of Hankin Chapter 6, which should look somewhat familiar. diff --git a/topics/week4_fixed_point_combinators.mdwn b/topics/week4_fixed_point_combinators.mdwn index a16d7c59..ed22cfff 100644 --- a/topics/week4_fixed_point_combinators.mdwn +++ b/topics/week4_fixed_point_combinators.mdwn @@ -293,6 +293,7 @@ The strategy we will present will turn out to be a general way of finding a fixed point for any lambda term. + ## Deriving Y, a fixed point combinator ## How shall we begin? Well, we need to find an argument to supply to diff --git a/topics/week4_more_about_fixed_point_combinators.mdwn b/topics/week4_more_about_fixed_point_combinators.mdwn index 33c2f077..78ffc89d 100644 --- a/topics/week4_more_about_fixed_point_combinators.mdwn +++ b/topics/week4_more_about_fixed_point_combinators.mdwn @@ -195,9 +195,9 @@ successor. Let's just check that `ξ = succ ξ`: You should see the close similarity with `Y Y` here. -## Q. So `Y` applied to `succ` returns a number that is not finite? ## +## Q: So `Y` applied to `succ` returns a number that is not finite? ## -A. Well, if it makes sense to think of it as a number at all. It doesn't have the same structure as our encodings of finite Church numbers. But let's see if it behaves like they do: +A: Well, if it makes sense to think of it as a number at all. It doesn't have the same structure as our encodings of finite Church numbers. But let's see if it behaves like they do: ; assume same definitions as before succ ξ @@ -231,9 +231,9 @@ there will always be an opportunity for more beta reduction. (Lather, rinse, repeat!) -## Q. That reminds me, what about [[evaluation order]]? ## +## Q: That reminds me, what about [[evaluation order]]? ## -A. For a recursive function that has a well-behaved base case, such as +A: For a recursive function that has a well-behaved base case, such as the factorial function, evaluation order is crucial. In the following computation, we will arrive at a normal form. Watch for the moment at which we have to make a choice about which beta reduction to perform @@ -270,10 +270,10 @@ start with the `zero?` predicate, and only produce a fresh copy of `prefact` if we are forced to. -## Q. You claimed that the Ackermann function couldn't be implemented using our primitive recursion techniques (such as the techniques that allow us to define addition and multiplication). But you haven't shown that it is possible to define the Ackermann function using full recursion. ## +## Q: You claimed that the Ackermann function couldn't be implemented using our primitive recursion techniques (such as the techniques that allow us to define addition and multiplication). But you haven't shown that it is possible to define the Ackermann function using full recursion. ## -A. OK: +A: OK: A(m,n) = | when m == 0 -> n + 1 @@ -297,7 +297,7 @@ So for instance: `A 3 x` is to `A 2 x` as exponentiation is to multiplication--- so `A 4 x` is to `A 3 x` as hyper-exponentiation is to exponentiation... 
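To make the "full recursion" claim concrete, here is a sketch of how that recurrence might be transcribed using `Y` (assuming the usual remaining clauses `A(m,0) = A(m-1,1)` and `A(m,n) = A(m-1, A(m,n-1))`, together with the `zero?`, `succ`, `pred`, and `Y` definitions from this week's homework, and a normal-order reduction strategy):

    ; a sketch only: zero?, succ, pred, and Y as defined in this week's homework
    let A = Y (\A m n. (zero? m)
                       (succ n)            ; A(0,n) = n+1
                       ((zero? n)
                          (A (pred m) 1)   ; A(m,0) = A(m-1,1)
                          (A (pred m) (A m (pred n))))) in   ; A(m,n) = A(m-1, A(m,n-1))
    A 2 2    ; ~~> 7 under normal-order reduction, since A 2 n behaves like 2*n + 3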
-## Q. What other questions should I be asking? ## +## Q: What other questions should I be asking? ## * What is it about the variant fixed-point combinators that makes them compatible with a call-by-value evaluation strategy?
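A piece of evidence to chew on while thinking about that last question (just a sketch; `Z` is our label here for an eta-expanded variant of `Y`, chosen for illustration):

    ; Y h ~~> h (U U), where U ≡ \u. h (u u); under call-by-value the argument
    ; U U has to be evaluated before h can be applied, and U U just keeps
    ; unfolding, so the computation never reaches h:
    Y ≡ \h. (\u. h (u u)) (\u. h (u u))
    ; eta-expanding the self-application wraps it in a lambda abstract, which
    ; already counts as a value, so a call-by-value evaluator stops there and
    ; only unfolds the recursion when that abstract is later applied:
    Z ≡ \h. (\u. h (\x. u u x)) (\u. h (\x. u u x))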