From 24cb96add7cd59e52f6c3b20be219353d8687224 Mon Sep 17 00:00:00 2001 From: jim Date: Sat, 21 Mar 2015 13:02:38 -0400 Subject: [PATCH] most answers done --- exercises/assignment4_answers.mdwn | 56 ++++++++++++++++++++++++++++++++++++-- 1 file changed, 54 insertions(+), 2 deletions(-) diff --git a/exercises/assignment4_answers.mdwn b/exercises/assignment4_answers.mdwn index 3786a61b..844ce38d 100644 --- a/exercises/assignment4_answers.mdwn +++ b/exercises/assignment4_answers.mdwn @@ -2,6 +2,8 @@ 1. Recall that `ω ≡ \x. x x`, and `Ω ≡ ω ω`. Is `Ω` a fixed point for `ω`? Find a fixed point for `ω`, and prove that it is a fixed point. + ANSWER: Most of you just stated that `Ω` is not a fixed point for `ω`, but didn't explain why. The mere fact that `ω Ω` doesn't immediately reduce to `Ω` doesn't show that the two terms are not convertible. However, one can argue that `Ω` syntactically has two lambdas, and that upon reduction it always yields itself; so by Church-Rosser, if `ω Ω` were convertible with `Ω`, the two would have to share a common reduct, which could only be `Ω` itself. So it suffices to show that neither `ω Ω` nor anything it reduces to can have two lambdas. And indeed `ω Ω` has three lambdas, and anything it reduces to will either have three or four lambdas. For the last part of the question, `Y X` is a fixed point for `X`, for any `X`, as we've already demonstrated in the notes. + 2. Consider `Ω ξ` for an arbitrary term `ξ`. `Ω` is so busy reducing itself (the eternal narcissist) that it never gets around to noticing whether it has an argument, let alone @@ -11,14 +13,18 @@ fixed point? That is, how could there be an `ξ` such that question, begin by constructing `Y Ω`. Prove that `Y Ω` is a fixed point for `Ω`. + ANSWER: Already demonstrated in the notes. We don't need `Ω ξ` to reduce to `ξ`, because `Y Ω` is a `ξ` that can do its own reducing. + 3. Find two different terms that have the same fixed point. That is, find terms `F`, `G`, and `ξ` such that `F ξ <~~> ξ` and `G ξ <~~> ξ`. (If you need a hint, reread the notes on fixed points.) 
+ ANSWER: Everything is a fixed point of `I`, so in particular `I` is. `I` is also a fixed point for `K I`. There are many other examples. + 4. Assume that `Ψ` is some fixed point combinator; we're not telling you which one. (You can just write `Psi` in your homework if you don't know how to generate the symbol `Ψ`.) Prove that `Ψ Ψ` is a fixed point of itself, that is, that `Ψ Ψ <~~> Ψ Ψ (Ψ Ψ)`. - + ANSWER: By the definition of a fixed point operator, `Ψ Ψ` is a fixed point for `Ψ`; and `Ψ (Ψ Ψ)` is a fixed point for `Ψ Ψ`. That is: (a) `Ψ (Ψ Ψ) <~~> Ψ Ψ`; and (b) `Ψ Ψ (Ψ (Ψ Ψ)) <~~> Ψ (Ψ Ψ)`. Now a fact we did not discuss in class, but which Hankin and other readings do discuss, is that substitution of convertible subterms preserves convertibility (and as part of that, `<~~>` is transitive). Hence in the lhs of (b), we can substitute `Ψ Ψ` for the subterm `Ψ (Ψ Ψ)`, because of (a), getting (c) `Ψ Ψ (Ψ Ψ) <~~> Ψ (Ψ Ψ)`. But then by (a) and transitivity of `<~~>`, we get (d) `Ψ Ψ (Ψ Ψ) <~~> Ψ Ψ`. Which states that `Ψ Ψ` is a fixed point for itself. ## Writing recursive functions ## @@ -46,6 +52,8 @@ For instance, `fact 0 ~~> 1`, `fact 1 ~~> 1`, `fact 2 ~~> 2`, `fact 3 ~~> fact 4 + ANSWER: `let fact = Y (\fact n. (zero? n) 1 (mult n (fact (pred n))))`. + 6. For this question, we want to implement **sets** of numbers in terms of lists of numbers, where we make sure as we construct those lists that they never contain a single number more than once. (It would be even more efficient if we made sure that the lists were always sorted, but we won't try to implement that refinement here.) To enforce the idea of modularity, let's suppose you don't know the details of how the lists are implemented. You just are given the functions defined below for them (but pretend you don't see the actual definitions). These define lists in terms of [[one of the new encodings discussed last week|/topics/week3_lists#v5-lists]]. 
; all functions from the previous question, plus @@ -82,6 +90,8 @@ For instance, `fact 0 ~~> 1`, `fact 1 ~~> 1`, `fact 2 ~~> 2`, `fact 3 ~~> * The native Scheme function that most resembles the `mem?` you're defining is `memv`, though that is defined for more than just numbers, and when that `memv` finds a match it returns a *list* starting with the match, rather than `#t`. + ANSWER: TODO + 7. Linguists often analyze natural language expressions into **trees**. We'll need trees in future weeks, and tree structures provide good opportunities for learning how to write recursive functions. Making use of our current resources, we might approximate trees as follows. Instead of words or syntactic categories, we'll have the nodes of the tree labeled with Church numbers. We'll think of a tree as a list in which each element is itself a tree. For simplicity, we'll adopt the convention that a tree of length 1 must contain a number as its only element. Then we have the following representations: @@ -113,10 +123,13 @@ For instance, `fact 0 ~~> 1`, `fact 1 ~~> 1`, `fact 2 ~~> 2`, `fact 3 ~~> Your assignment is to write a Lambda Calculus function that expects a tree, encoded in the way just described, as an argument, and returns the sum of its leaves as a result. So for all of the trees listed above, it should return `1 + 2 + 3`, namely `6`. You can use any Lambda Calculus implementation of lists you like. + ANSWER: TODO 8. The **fringe** of a leaf-labeled tree is the list of values at its leaves, ordered from left-to-right. For example, the fringe of all three trees displayed above is the same list, `[1, 2, 3]`. We are going to return to the question of how to tell whether trees have the same fringe several times this course. We'll discover more interesting and more efficient ways to do it as our conceptual toolboxes get fuller. For now, we're going to explore the straightforward strategy. 
Write a function that expects a tree as an argument, and returns the list which is its fringe. Next write a function that expects two trees as arguments, converts each of them into their fringes, and then determines whether the two lists so produced are equal. (Convert your `list_equal?` function from last week's homework into the Lambda Calculus for this last step.) + ANSWER: TODO + ## Arithmetic infinity? ## @@ -150,16 +163,35 @@ satisfies the following constraints, for any finite natural number `n`: (Note, though, that with *some* notions of infinite numbers, like [[!wikipedia ordinal numbers]], operations like `+` are defined in such a way that `inf + n` is different from `n + inf`, and does exceed `inf`; similarly for `*` and `^`. With other notions of infinite numbers, like the [[!wikipedia cardinal numbers]], even less familiar arithmetic operations are employed.) + ANSWER: Let `H ≡ \u. succ (u u)`, and `X ≡ H H ≡ (\u. succ (u u)) H`. Note that `X ≡ H H ~~> succ (H H)`. Hence `X` is a fixed point for `succ`. + 10. Prove that `add ξ 1 <~~> ξ`, where `ξ` is the fixed point you found in (1). What about `add ξ 2 <~~> ξ`? -Comment: a fixed point for the successor function is an object such that it + Comment: a fixed point for the successor function is an object such that it is unchanged after adding 1 to it. It makes a certain amount of sense to use this object to model arithmetic infinity. For instance, depending on implementation details, it might happen that `leq n ξ` is true for all (finite) natural numbers `n`. However, the fixed point you found for `succ` and `(+n)` (recall this is shorthand for `\x. add x n`) may not be a fixed point for `(*n)`, for example. + ANSWER: Prove that `add X 1 <~~> X`: + + add X 1 == (\m n. n succ m) X 1 + ~~> 1 succ X + == (\s z. s z) succ X + ~~> succ X + + Which by the previous problem is convertible with `X`. (In particular, `X ~~> succ X`.) What about `add X 2`? + + add X 2 == (\m n. n succ m) X 2 + ~~> 2 succ X + == (\s z. 
s (s z)) succ X + ~~> succ (succ X) + + And we know the inner term will be convertible with `X`, and hence we get that the whole result is convertible with `succ X`, which we already said is convertible with `X`. We can readily see that `add X n <~~> X` for all (finite) natural numbers `n`. + + ## Mutually-recursive functions ## @@ -215,7 +247,27 @@ a *pair* of functions `h` and `g`, as follows: If we gave you such a `Y1` and `Y2`, how would you implement the above definitions of `even?` and `odd?`? + ANSWER: + +
+    let proto_even = \even odd. \n. (zero? n) true (odd (pred n)) in
+    let proto_odd = \even odd. \n. (zero? n) false (even (pred n)) in
+    let even = Y1 proto_even proto_odd in
+    let odd = Y2 proto_even proto_odd in
+    ...
+    
+ + By the definitions of `Y1` and `Y2`, we know that `even <~~> proto_even even odd`, and `odd <~~> proto_odd even odd`. Hence the bound variables `even` and `odd` inside the `proto_...` functions have the values we want: `even` will end up bound to (something convertible with) `Y1 proto_even proto_odd` itself, and `odd` will end up bound to (something convertible with) `Y2 proto_even proto_odd`. 12. (More challenging.) Using our derivation of `Y` from [[this week's notes|topics/week4_fixed_point_combinators#deriving-y]] as a model, construct a pair `Y1` and `Y2` that behave in the way described above. Here is one hint to get you started: remember that in the notes, we constructed a fixed point for `h` by evolving it into `H` and using `H H` as `h`'s fixed point. We suggested the thought exercise, how might you instead evolve `h` into some `T` and then use `T T T` as `h`'s fixed point. Try solving this problem first. It may help give you the insights you need to define a `Y1` and `Y2`. [[Here are some hints|assignment4_hint]]. + + ANSWER: One solution is given in the hint. Here is another: + + let Y1 = \f g . (\u v . f (u u v)(v v u)) + (\u v . f (u u v)(v v u)) + (\v u . g (u u v)(v v u)) in + let Y2 = \f g . (\v u . g (u u v)(v v u)) + (\v u . g (u u v)(v v u)) + (\u v . f (u u v)(v v u)) in + ... + + Writing `A` for `\u v . f (u u v)(v v u)` and `B` for `\v u . g (u u v)(v v u)`, we have `Y1 f g ≡ A A B ~~> f (A A B)(B B A)` and `Y2 f g ≡ B B A ~~> g (A A B)(B B A)`, which is just what we required of `Y1` and `Y2`. + -- 2.11.0
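Terms like the `Y`, `Y1`, and `Y2` above are easy to get wrong by hand, so here is a rough Python analogue of the same strategy, offered only as a sanity check (all names are our own, not part of the assignment). Python evaluates arguments eagerly, so each recursive self-application is eta-expanded into `lambda n: ...`, the same adjustment one makes when turning `Y` into the strict-language `Z` combinator:

```python
# Strict-language sketch of the fixed-point answers above (hypothetical names).

# Y combinator for single recursion (question 5's factorial). The inner
# lambda n: u(u)(n) delays the self-application so evaluation terminates.
Y = lambda f: (lambda u: f(lambda n: u(u)(n)))(lambda u: f(lambda n: u(u)(n)))
fact = Y(lambda fact: lambda n: 1 if n == 0 else n * fact(n - 1))

# Paired combinators for mutual recursion (questions 11-12).
# Y1 f g builds f's fixed point and Y2 f g builds g's; each proto-function
# receives both fixed points, in the order (even, odd).
def Y1(f, g):
    A = lambda u, v: f(lambda n: u(u, v)(n), lambda n: v(v, u)(n))
    B = lambda v, u: g(lambda n: u(u, v)(n), lambda n: v(v, u)(n))
    return A(A, B)   # analogue of A A B ~~> f (A A B) (B B A)

def Y2(f, g):
    A = lambda u, v: f(lambda n: u(u, v)(n), lambda n: v(v, u)(n))
    B = lambda v, u: g(lambda n: u(u, v)(n), lambda n: v(v, u)(n))
    return B(B, A)   # analogue of B B A ~~> g (A A B) (B B A)

proto_even = lambda even, odd: lambda n: True if n == 0 else odd(n - 1)
proto_odd  = lambda even, odd: lambda n: False if n == 0 else even(n - 1)

even = Y1(proto_even, proto_odd)
odd  = Y2(proto_even, proto_odd)

print(fact(5))            # 120
print(even(10), odd(10))  # True False
```

In a lazy language the eta-expansions could be dropped, giving terms much closer to the Lambda Calculus originals; here they only prevent Python from looping while building the fixed points.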