From: Jim Pryor
Date: Sat, 18 Sep 2010 20:03:54 +0000 (-0400)
Subject: Merge branch 'pryor'
X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=commitdiff_plain;h=c03eda2ac73fee36b28ef3f81520c688f9cc8a3c;hp=7743c45c8440825886fa216aadf1f788b1b2783e

Merge branch 'pryor'
---

diff --git a/week2.mdwn b/week2.mdwn
index 47a46667..659ffe75 100644
--- a/week2.mdwn
+++ b/week2.mdwn
@@ -36,11 +36,11 @@ Lambda expressions that have no free variables are known as **combinators**. Her
 > **K** is defined to be `\x y. x`. That is, it throws away its
   second argument. So `K x` is a constant function from any
   (further) argument to `x`. ("K" for "constant".) Compare K
-  to our definition of **true**.
+  to our definition of `true`.
 
-> **get-first** was our function for extracting the first element of an ordered pair: `\fst snd. fst`. Compare this to **K** and **true** as well.
+> **get-first** was our function for extracting the first element of an ordered pair: `\fst snd. fst`. Compare this to K and `true` as well.
 
-> **get-second** was our function for extracting the second element of an ordered pair: `\fst snd. snd`. Compare this to our definition of **false**.
+> **get-second** was our function for extracting the second element of an ordered pair: `\fst snd. snd`. Compare this to our definition of `false`.
 
 > **B** is defined to be: `\f g x. f (g x)`. (So `B f g` is the composition `\x. f (g x)` of `f` and `g`.)
 
@@ -48,7 +48,7 @@ Lambda expressions that have no free variables are known as **combinators**. Her
 
 > **W** is defined to be: `\f x . f x x`. (So `W f` accepts one argument and gives it to `f` twice. What is the meaning of `W multiply`?)
 
-> **ω** is defined to be: `\x. x x`
+> **ω** (that is, lower-case omega) is defined to be: `\x. x x`
 
 It's possible to build a logical system equally powerful as the lambda calculus (and readily intertranslatable with it) using just combinators, considered as atomic operations. Such a language doesn't have any variables in it: not just no free variables, but no variables at all.
 
@@ -65,7 +65,7 @@ duplicators.
 
 ![reflexive](http://lambda.jimpryor.net/szabolcsi-reflexive.jpg)
 
-Notice that the semantic value of *himself* is exactly W.
+Notice that the semantic value of *himself* is exactly `W`.
 
 The reflexive pronoun in direct object position combines first with the transitive verb (through compositional magic we won't go into here). The result is an intransitive verb phrase that takes a subject argument, duplicates that argument, and feeds the two copies to the transitive verb meaning. Note that `W <~~> S(CI)`:
 
@@ -103,14 +103,14 @@ S takes three arguments, duplicates the third argument, and feeds one copy to th
     SFGX ~~> FX(GX)
 
 If the meaning of a function is nothing more than how it behaves with respect to its arguments,
-these reduction rules capture the behavior of the combinators S,K, and I completely.
-We can use these rules to compute without resorting to beta reduction. For instance, we can show how the I combinator is equivalent to a certain crafty combination of S's and K's:
+these reduction rules capture the behavior of the combinators S, K, and I completely.
+We can use these rules to compute without resorting to beta reduction. For instance, we can show how the I combinator is equivalent to a certain crafty combination of Ss and Ks:
 
     SKKX ~~> KX(KX) ~~> X
 
-So the combinator SKK is equivalent to the combinator I.
+So the combinator `SKK` is equivalent to the combinator I.
 
-Combinatory Logic is what you have when you choose a set of combinators and regulate their behavior with a set of reduction rules. The most common system uses S,K, and I as defined here.
+Combinatory Logic is what you have when you choose a set of combinators and regulate their behavior with a set of reduction rules. The most common system uses S, K, and I as defined here.
 
 ###The equivalence of the untyped lambda calculus and combinatory logic###
 
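
As a supplementary illustration of the week 2 material (this sketch is not part of the patch itself, and the lower-case names are only there to satisfy Haskell's syntax), the combinators above can be written as ordinary Haskell functions, and the reduction `SKKX ~~> X` can then be checked simply by evaluating:

    -- Illustrative sketch: the week 2 combinators as plain Haskell functions.
    i :: a -> a
    i x = x                       -- I: identity

    k :: a -> b -> a
    k x y = x                     -- K: throw away the second argument

    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s f g x = f x (g x)           -- S: duplicate the third argument

    b :: (b -> c) -> (a -> b) -> a -> c
    b f g x = f (g x)             -- B: composition

    c :: (a -> b -> c) -> b -> a -> c
    c f x y = f y x               -- C: swap two arguments (standard definition)

    w :: (a -> a -> b) -> a -> b
    w f x = f x x                 -- W: feed one argument to f twice

    -- In GHCi:
    --   s k k 42       ==> 42    (so  S K K  behaves like  I)
    --   w (*) 3        ==> 9     (W multiply squares its argument)
    --   s (c i) (*) 3  ==> 9     (illustrating  W <~~> S(CI))
    -- ω = \x. x x is the one combinator above with no simple Haskell type:
    -- a first hint that self-application resists straightforward typing.
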
diff --git a/week3.mdwn b/week3.mdwn
index fe7c3153..990ef4bf 100644
--- a/week3.mdwn
+++ b/week3.mdwn
@@ -1,28 +1,59 @@
-Even with a fold-based representation of numbers, and pred/equal/subtraction, some computable functions are going to be out of our reach.
+[Give recursive definition for computing the length of a list.]
 
-Fibonacci: doable without Y, but takes some ingenuity
+Our fold-based implementation of lists, and Church's implementations of numbers, have an internal structure that mirrors common recursive operations we'd use lists and numbers for.
 
-And so on...
+As we said, it takes some ingenuity to define functions like `extract-tail` or `predecessor` for these implementations; however, it can be done. (And it's not *that* difficult.) Given those functions, we can go on to define other functions like numeric equality, subtraction, and so on, just by exploiting the structure already present in our implementation of lists and numbers.
 
-Need a general method, where f(n) doesn't just depend on f(n-1) or (f(n-1),f(n-2),..).
+With sufficient ingenuity, a great many functions can be defined in the same way. For example, the factorial function is straightforward. The function which returns the nth term in the Fibonacci series is a bit more difficult, but also achievable.
 
-Looks like Ackermann function is simplest example that MUST be done with Y, Everything simpler could be done using only fixed iteration limits.
+However, some computable functions are just not definable in this way. The simplest function that *simply cannot* be defined using the resources we've so far developed is the Ackermann function:
 
-A(y,x) =
- | when x == 0 -> y + 1
- | when y == 0 -> A(x-1,1)
- | _ -> A(x-1, A(x,y-1))
+    A(m,n) =
+      | when m == 0 -> n + 1
+      | else when n == 0 -> A(m-1,1)
+      | else -> A(m-1, A(m,n-1))
 
-A(0,y) = y+1
-A(1,y) = y+2
-A(2,y) = 2y + 3
-A(3,y) = 2^(y+3) -3
-A(4,y) = 2^(2^(2^...2)) [y+3 2s] - 3
+    A(0,n) = n+1
+    A(1,n) = n+2
+    A(2,n) = 2n + 3
+    A(3,n) = 2^(n+3) - 3
+    A(4,n) = 2^(2^(2^...2)) [where there are n+3 2s] - 3
+    ...
 
+Simpler functions always *could* be defined using the resources we've so far developed, although those definitions won't always be very efficient or easily intelligible.
 
-Some algorithms can also be done more efficiently / intelligibly with general mechanism for recursion.
+But functions like the Ackermann function require us to develop a more general technique for doing recursion---and having developed it, it will often be easier to use it even in the cases where, in principle, we didn't have to.
 
-How to do recursion with omega.
+##How to do recursion with lower-case omega##
+
+##Generalizing##
+
+In general, a **fixed point** of a function `f` is a value *x* such that `f x` is equivalent to *x*. For example, what is a fixed point of the function from natural numbers to their squares? What is a fixed point of the successor function?
+
+In the lambda calculus, we say a fixed point of an expression `f` is any formula `X` such that:
+
+    X <~~> f X
+
+What is a fixed point of the identity combinator I?
+
+It's a theorem of the lambda calculus that every formula has a fixed point. In fact, it will have infinitely many, syntactically distinct fixed points. And we don't just know that they exist: for any given formula, we can name many of them.
+
+Yes, even the formula that you're using to define the successor function will have a fixed point. Isn't that weird? Think about how it might be true.
+
+Well, you might think, only some of the formulas that we might give to the `successor` as arguments would really represent numbers. If we said something like:
+
+    successor make-pair
+
+who knows what we'd get back? Perhaps there's some non-number-representing formula such that when we feed it to `successor` as an argument, we get the same formula back.
+
+Yes! That's exactly right. And which formula this is will depend on the particular way you've implemented the successor function.
+
+Moreover, the recipes that enable us to name fixed points for any given formula aren't *guaranteed* to give us *terminating* fixed points. They might give us formulas `X` such that neither `X` nor `f X` have normal forms. (Indeed, what they give us for the square function isn't any of the Church numbers, but is rather an expression with no normal form.) However, if we take care, we can ensure that we *do* get terminating fixed points. And this gives us a principled, fully general strategy for doing recursion. It lets us define even functions like the Ackermann function, which were until now out of our reach. It would let us define arithmetic and list functions on the "version 1" and "version 2" implementations, where it wasn't always clear how to force the computation to "keep going."
+
+[Explain in terms of an arbitrary fixed point combinator Ψ.]
+
+[Give some examples: first, versions of Y and Θ usable with call-by-value. Then do the internal eta-reductions and say these work for call-by-name only.]
+
+[Explain how what we've done relates to the version using lower-case ω.]
 
-fixed point combinators
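
Two further illustrative sketches of the week 3 material, again in Haskell and again not part of the patch. First, the Ackermann function transcribed directly from the clauses above; evaluating it confirms the small entries in the table (for instance `ack 2 3` gives `2*3 + 3 = 9`, and `ack 3 3` gives `2^6 - 3 = 61`):

    -- The Ackermann function, transcribed from the definition above.
    ack :: Integer -> Integer -> Integer
    ack 0 n = n + 1                         -- when m == 0 -> n + 1
    ack m 0 = ack (m - 1) 1                 -- else when n == 0 -> A(m-1,1)
    ack m n = ack (m - 1) (ack m (n - 1))   -- else -> A(m-1, A(m,n-1))

    -- ack 1 5  ==> 7    (n + 2)
    -- ack 2 3  ==> 9    (2n + 3)
    -- ack 3 3  ==> 61   (2^(n+3) - 3)

Second, a sketch of a genuine fixed-point combinator. Haskell's type system blocks the bare self-application in ω and Y, but wrapping the self-applied argument in a `newtype` (a standard workaround, not anything from the notes) yields a `y` whose definition never mentions itself; it is Haskell's lazy, call-by-name-like evaluation that lets it terminate, echoing the remark above that the eta-reduced versions of Y and Θ work for call-by-name only:

    -- A wrapper allowing well-typed self-application: a Rec a, fed to itself,
    -- yields an a.
    newtype Rec a = Rec { unRec :: Rec a -> a }

    -- A fixed-point combinator in the spirit of Y. Note that neither y nor g
    -- mentions itself: all the recursion comes from self-application.
    y :: (a -> a) -> a
    y f = g (Rec g)
      where g x = f (unRec x x)   -- compare  \f. (\x. f (x x)) (\x. f (x x))

    -- Recursive functions defined without writing any recursive definitions:
    factorial :: Integer -> Integer
    factorial = y (\self n -> if n == 0 then 1 else n * self (n - 1))

    len :: [a] -> Integer
    len = y (\self xs -> case xs of { [] -> 0 ; _ : rest -> 1 + self rest })

    -- factorial 5  ==> 120
    -- len "abc"    ==> 3

In a strict, call-by-value language the same trick needs the eta-expanded, Θ-style variant (protecting the self-application under an extra lambda) so that the fixed point isn't evaluated too eagerly.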