From 153d79ff07fa1d6fb0d7673e768181ada64b01ba Mon Sep 17 00:00:00 2001
From: Jim Pryor
Date: Wed, 15 Sep 2010 16:30:21 -0400
Subject: [PATCH] continue week1 summary, add week2 pages

Signed-off-by: Jim Pryor
---
 chris_notes  |  87 ++++++++++
 damn.mdwn    |  17 ++
 lists.mdwn   |  80 ++++++++++
 numbers.mdwn |  32 ++++
 week1.mdwn   | 420 +++++++++++++++++++++++++-----------------------
 week2.mdwn   | 511 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 945 insertions(+), 202 deletions(-)
 create mode 100644 chris_notes
 create mode 100644 lists.mdwn
 create mode 100644 numbers.mdwn
 create mode 100644 week2.mdwn

diff --git a/chris_notes b/chris_notes
new file mode 100644
index 00000000..0f640e12
--- /dev/null
+++ b/chris_notes
@@ -0,0 +1,87 @@
+
+1. Understanding the meaning (use) of programming languages
+   helps understanding the meaning (use) of natural languages
+
+   1. Richard Montague. 1970. Universal Grammar. _Theoria_ 34:375--98.
+      "There is in my opinion no important theoretical difference
+      between natural languages and the artificial languages of
+      logicians; indeed, I consider it possible to comprehend the
+      syntax and semantics of both kinds of languages within a
+      single natural and mathematically precise theory."
+
+   2. Similarities:
+
+      Function/argument structure:
+          f(x)
+          kill(it)
+      Pronominal binding:
+          x := x + 1
+          John is his own worst enemy
+      Quantification:
+          foreach x in [1..10] print x
+          Print every number from 1 to 10
+
+   3. Possible differences:
+
+      Parentheses:
+          3 * (4 + 7)
+          ?It was four plus seven that John computed 3 multiplied by
+          (compare: John computed 3 multiplied by four plus seven)
+      Ambiguity:
+          3 * 4 + 7
+          Time flies like an arrow, fruit flies like a banana.
+      Vagueness:
+          3 * 4
+          A cloud near the mountain
+      Unbounded numbers of distinct pronouns:
+          f(x1) + f(x2) + f(x3) + ...
+          He saw her put it in ...
+          [In ASL, dividing up the signing space...]
+
+
+2. Standard methods in linguistics are limited.
+
+   1. First-order predicate calculus
+
+      Invented for reasoning about mathematics (Frege's quantification)
+
+      Alethic, order insensitive: phi & psi == psi & phi
+      But: John left and Mary left too /= Mary left too and John left
+
+   2. Simply-typed lambda calculus
+
+      Can't express the Y combinator
+
+
+3. Meaning is computation.
+
+   1. Semantics is programming
+
+   2. Good programming is good semantics
+
+      1. Example
+
+         1. Programming technique
+
+            Exceptions
+                throw (raise)
+                catch (handle)
+
+         2. Application to linguistics
+            presupposition
+            expressives
+
+            Develop application:
+                fn application
+                    divide by zero
+                    test and repair
+                    raise and handle
+
+                fn application
+                    presupposition failure
+                    build into meaning of innocent predicates?
+                expressives
+                    throw
+                    handle
+                    resume computation
+
diff --git a/damn.mdwn b/damn.mdwn
index a6851621..6913c283 100644
--- a/damn.mdwn
+++ b/damn.mdwn
@@ -1,3 +1,20 @@
+1. Sentences have truth conditions.
+
+2. If "John read the book" is true, then
+   John read something,
+   Someone read the book,
+   John did something to the book,
+   etc.
+
+3. If "John read the damn book",
+   all the same entailments follow.
+   To a first approximation, "damn" does not affect at-issue truth
+   conditions.
+
+4. "Damn" does contribute information about the attitude of the speaker
+   towards some aspect of the situation described by the sentence.
+
 Expressives such as "damn" have side effects that don't affect the at-issue value of the sentence in which they occur. What this claim says is unpacked at some length here: .
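To make the "side effects" claim concrete, here is a minimal sketch in Scheme (our own illustration, not part of the course files). The names `attitude-log` and `read-sentence` are made up for the example; the point is only that `damn` passes its at-issue argument through unchanged while leaving a record on the side:

    (define attitude-log '())          ; hypothetical side channel for speaker attitude

    (define (damn x)
      (set! attitude-log (cons 'speaker-disapproval attitude-log))
      x)                               ; the at-issue value passes through unchanged

    (define (read-sentence subject book)
      (list subject 'read book))

    (read-sentence 'john (damn 'the-book))
    ; => (john read the-book), exactly the value we'd get without "damn";
    ; but attitude-log is now (speaker-disapproval)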
diff --git a/lists.mdwn b/lists.mdwn
new file mode 100644
index 00000000..01dc76c7
--- /dev/null
+++ b/lists.mdwn
@@ -0,0 +1,80 @@
+(list?)
+nil
+cons
+nil?, (pair?)
+head
+tail
+
+Chris's lists:
+    nil = (t,N) = \f. f true N
+    [a] = (f,(a,nil))
+    [b,a] = (f,(b,[a]))
+
+isnil = get-first
+head = L get-second get-first
+tail = L get-second get-second
+
+would be nice if nil tail = nil
+    nil tail = (t, N) get-second get-second = N get-second
+    so N get-second should be (t,N)
+    e.g. N = K (t,N)
+         = (\n. K (t,n)) N
+    How to do that?
+    a fixed point g_ of g : g_ = (g g_)
+    (Y g) = g (Y g)
+    N = (\n. K (t,n)) N is of the form N = g N
+    So if we just set N = Y g, we'll have what we want. N = (Y g) = g (Y g) = g N
+
+    i.e. N = Y g = Y (\n. K (t,n))
+    and nil = (t, N) = (t, Y (\n. K (t, n)))
+
+    nil get-second get-second ~~>
+    Y (\n. K (t, n)) get-second ~~>
+    (\n. K (t, n)) (Y (\n. K (t,n))) get-second ~~>
+    K (t, Y (\n. K (t,n))) get-second ~~>
+    (t, Y (\n. K (t,n)))
+
+
+Lists 2:
+    nil = false
+    [a] = (a,nil)
+
+L (\h\t.K deal_with_h_and_t) if-nil
+
+We've already seen enumerations: true | false, red | green | blue
+What if you want one or more of the elements to have associated data? e.g. red | green | blue
+
+could handle like this:
+    the-value if-red if-green (\n. handler-if-blue-to-degree-n)
+
+    then red = \r \g \b-handler. r
+    green = \r \g \b-handler. g
+    make-blue = \degree. \r \g \b-handler. b-handler degree
+
+A list is basically: empty | non-empty
+
+    empty = \non-empty-handler \if-empty. if-empty = false
+    cons = \h \t. \non-empty-handler \if-empty. non-empty-handler h t
+
+    so [a] = cons a empty = \non-empty-handler \_. non-empty-handler a empty
+
+
+Lists 3:
+[a; tl] isnil == (\f. f a tl) (\h \t.false) a b ~~> false a b
+
+nil isnil == (\f. M) (\h \t. false) a b ~~> M[f:=isnil] a b == a
+
+    so M could be \a \b. a, i.e. true
+    so nil = \f. true == K true == K K = \_ K
+
+    nil = K true
+    [a] = (a,nil)
+    [b,a] = (b,[a])
+
+isnil = (\x\y.false)
+
+nil tail = K true tail = true = \x\y.x = \f.f? such that f? = Kx. there is no such.
+nil head = K true head = true. could mislead.
+
diff --git a/numbers.mdwn b/numbers.mdwn
new file mode 100644
index 00000000..8d554245
--- /dev/null
+++ b/numbers.mdwn
@@ -0,0 +1,32 @@
+
+Church figured out how to encode integers and arithmetic operations
+using lambda terms. Here are the basics:
+
+0 = \f\x.x
+1 = \f\x.fx
+2 = \f\x.f(fx)
+3 = \f\x.f(f(fx))
+...
+
+Adding two integers involves applying a special function + such that
+(+ 1) 2 = 3. Here is a term that works for +:
+
++ = \m\n\f\x.(m f)((n f) x)
+
+So (+ 1) 1 =
+
+(((\m\n\f\x.(m f)((n f) x)) ;+
+  \f\x.fx)                  ;1
+  \f\x.fx)                  ;1
+
+~~>_beta targeting m for beta conversion
+
+((\n\f\x.([\f\x.fx] f)((n f) x))
+  \f\x.fx)
+
+~~>_beta targeting n
+
+\f\x.([\f\x.fx] f)(([\f\x.fx] f) x)
+
+~~>_beta
+
+\f\x.(\x.fx)((\x.fx) x)
+
+~~>_beta
+
+\f\x.(\x.fx)(fx)
+
+~~>_beta
+
+\f\x.f(fx)
+
+which is 2, as desired.
diff --git a/week1.mdwn b/week1.mdwn
index be1b714d..08286dbb 100644
--- a/week1.mdwn
+++ b/week1.mdwn
@@ -30,297 +30,313 @@ From philosophy
 * issues in that same neighborhood will help us better understand proposals like Kit Fine's that semantics is essentially coordinated, and that `R a a` and `R a b` can differ in interpretation even when `a` and `b` don't

+Declarative/functional vs Imperatival/dynamic models of computation
+===================================================================
+
+Many of you, like us, will have grown up thinking the paradigm of computation is a sequence of changes. Let go of that.
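(Before the week1 summary continues: that mutation-based picture of computation, "a sequence of changes," is easy to exhibit in Scheme. This sketch is ours, not part of the patch; we'll return to the contrast below, when we discuss the ways in which order can matter.)

    ; A computation as a sequence of changes: order matters.
    (define x 0)
    (set! x 2)
    (set! x (+ x 1))
    (= x 3)             ; => #t

    ; The same two assignments in the opposite order:
    (define y 0)
    (set! y (+ y 1))
    (set! y 2)
    (= y 3)             ; => #f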
It will take some care to separate the operative notion of "sequencing" here from other notions close to it, but once that's done, you'll see that languages that have no significant notions of sequencing or changes are Turing complete: they can perform any computation we know how to describe. In itself, that only puts them on equal footing with more mainstream, imperatival programming languages like C and Java and Python, which are also Turing complete. But further, the languages we want you to become familiar with can reasonably be understood to be more fundamental. They embody the elemental building blocks that computer scientists use when reasoning about and designing other languages.

+Jim offered the metaphor: think of imperatival languages, which include "mutation" and "side-effects" (we'll flesh out these keywords as we proceed), as the pâté of computation. We want to teach you about the meat and potatoes, where, as it turns out, there is no sequencing and no changes. There's just the evaluation or simplification of complex expressions.

-1. Declarative vs imperatival models of computation.
-2. Variety of ways in which "order can matter."
-3. Variety of meanings for "dynamic."
-4. Schoenfinkel, Curry, Church: a brief history
-5. Functions as "first-class values"
-6. "Curried" functions

+Now, when you ask the Scheme interpreter to simplify an expression for you, that's a kind of dynamic interaction between you and the interpreter. You may wonder then why these languages should not also be understood imperatively. The difference is that in a purely declarative or functional language, there are no dynamic effects in the language itself. It's just a static semantic fact about the language that one expression reduces to another. You may have verified that fact through your dynamic interactions with the Scheme interpreter, but that's different from saying that there are dynamic effects in the language itself.

-1. Beta reduction
-1. Encoding pairs (and triples and ...)
-1. Encoding booleans

+What the latter would amount to will become clearer as we build our way up to languages which are genuinely imperatival or dynamic.
+
+Many of the slogans and keywords we'll encounter in discussions of these issues call for careful interpretation. They mean various different things.
+
+For example, you'll encounter the claim that declarative languages are distinguished by their **referential transparency.** What's meant by this is not always exactly the same, and as a cluster, it's related to, but not the same as, what these terms mean for philosophers and linguists.
+
+The notion of "function" that we'll be working with will be one that, by default, sometimes counts functions as non-identical even when they map all the same inputs to the very same outputs. For example, two functions from jumbled decks of cards to sorted decks of cards may use different algorithms and hence be different functions.
+
+It's possible to enhance the lambda calculus so that functions do get identified when they map all the same inputs to the same outputs. This is called making the calculus **extensional**. Church called languages which didn't do this "intensional." If you try to understand this in terms of functions from worlds to extensions (an idea also associated with Church), you will hurt yourself. So too if you try to understand it in terms of mental stereotypes, another notion sometimes designated by "intension."
+
+It's often said that dynamic systems are distinguished because they are the ones in which **order matters**.
However, there are many ways in which order can matter. If we have a trivalent boolean system, for example---easily had in a purely functional calculus---we might choose to give a truth-table like this for "and":

+	true and true   = true
+	true and *      = *
+	true and false  = false
+	* and true      = *
+	* and *         = *
+	* and false     = *
+	false and true  = false
+	false and *     = false
+	false and false = false

+And then we'd notice that `* and false` has a different interpretation than `false and *`. (The same phenomenon is already present with the material conditional in bivalent logics; but seeing that a non-symmetric semantics for `and` is available even for functional languages is instructive.)

+Another way in which order can matter that's present even in functional languages is that the interpretation of some complex expressions can depend on the order in which sub-expressions are evaluated. Evaluated in one order, the computations might never terminate (and so semantically we interpret them as having "the bottom value"---we'll discuss this). Evaluated in another order, they might have a perfectly mundane value. Here's an example, though we'll reserve discussion of it until later:

+	(\x. y) ((\x. x x) (\x. x x))

+Again, these facts are all part of the metatheory of purely functional languages. But *there is* a different sense of "order matters" such that it's only in imperatival languages that order so matters.

+	x := 2
+	x := x + 1
+	x == 3

+Here the comparison in the last line will evaluate to true.

+	x := x + 1
+	x := 2
+	x == 3

+Here the comparison in the last line will evaluate to false. (This is the contrast the Scheme sketch earlier in this patch transcribed.) One of our goals for this course is to get you to understand what that new sense is, such that order only matters in that way in imperatival languages.

+Finally, you'll see the term **dynamic** used in a variety of ways in the literature for this course:

+	* dynamic versus static typing

+	* dynamic versus lexical scoping

- Order matters

+	* dynamic versus static control operators

-Declarative versus imperative:

+	* finally, we ourselves are used to talking about dynamic versus static semantics

-In a pure declarative language, the order in which expressions are
-evaluated (reduced, simplified) does not affect the outcome.

+For the most part, these uses are only loosely connected to each other. We'll tend to use "imperatival" to describe the kinds of semantic properties made available in dynamic semantics, languages which have robust notions of sequencing changes, and so on.

-(3 + 4) * (5 + 11) = 7 * (5 + 11) = 7 * 16 = 112
-(3 + 4) * (5 + 11) = (3 + 4) * 16 = 7 * 16 = 112

+Map
+===

-In an imperative language, order makes a difference.

-x := 2
-x := x + 1
-x == 3
-[true]

+Rosetta Stone
+=============

-x := x + 1
-x := 2
-x == 3
-[false]

-Declaratives: assertions of statements.
-No matter what order you assert true facts, they remain true:
-The value is the product of x and y.
-x is the sum of 3 and 4.
-y is the sum of 5 and 11.
-The value is 112.

+Basics of Lambda Calculus
+=========================

-Imperatives: performative utterances expressing a deontic or bouletic
-modality ("Be polite", "shut the door")
-Resource-sensitive, order sensitive:

+The lambda calculus we'll be focusing on for the first part of the course has no types. (Some prefer to say it instead has a single type---but if you say that, you have to say that functions from this type to this type also belong to this type. Which is weird.)

-Make x == 2.
-Add one to x.
-See if x == 3.
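Since the calculus has no types, nothing stops an expression from applying to itself; that's what makes terms like `(\x (x x))` grammatical. Scheme, also untyped, is happy to run the analogue. (A sketch of ours, not from the notes; most typed languages would reject the `(x x)` below outright:)

    ; Self-application is grammatical in an untyped language.
    ; This is the Scheme analogue of applying (\x (x x)) to a function.
    (define (self-apply x) (x x))

    (self-apply (lambda (y) 'done))   ; => done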
+Here is its syntax: ----------------------------------------------------- + Variables: x, y, z, ... -Untype (monotyped) lambda calculus +Each variable is an expression. For any expressions M and N and variable a, the following are also expressions: -Syntax: +
+ Abstract: ( λa M ) -Variables: x, x', x'', x''', ... -(Cheat: x, y, z, x1, x2, ...) + Application: ( M N ) +
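For those who want to experiment as we go: this syntax lines up with Scheme almost symbol for symbol. The gloss below is ours, and rough, since Scheme also fixes a particular evaluation order (something we'll discuss later):

    ; lambda calculus          Scheme
    ;   x                        x
    ;   ( λa M )                 (lambda (a) M)
    ;   ( M N )                  (M N)

    ((lambda (x) x) 'y)   ; the Scheme analogue of ((\x x) y); evaluates to y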
-Each variable is a term.
-For all terms M and N and variable a, the following are also terms:

+We'll tend to write ( λa M ) as just `( \a M )`.

-(M N)	The application of M to N
-(\a M)	The abstraction of a over M

+Some authors reserve the term "term" for just variables and abstracts. We won't participate in that convention; we'll probably just say "term" and "expression" indiscriminately.

-Examples of terms:

+Examples of expressions:

+	x
+	(y x)
+	(x x)
+	(\x y)
+	(\x x)
+	(\x (\y x))
+	(x (\x x))
+	((\x (x x)) (\x (x x)))

-x
-(y x)
-(x x)
-(\x y)
-(\x x)
-(\x (\y x))
-(x (\x x))
-((\x (x x))(\x (x x)))

-Reduction/conversion/equality:

+The lambda calculus has an associated proof theory. For now, we can regard the proof theory as having just one rule, called the rule of "beta-reduction" or "beta-contraction". Suppose you have some expression of the form:

-Lambda terms express recipes for combining terms into new terms.
-The key operation in the lambda calculus is beta-conversion.

+	((\a M) N)

-((\a M) N) ~~>_beta M{a := N}

+that is, an application of an abstract to some other expression. This compound form is called a **redex**, meaning it's a "beta-reducible expression." `(\a M)` is called the **head** of the redex; `N` is called the **argument**, and `M` is called the **body**.

-The term on the left of the arrow is an application whose first
-element is a lambda abstraction. (Such an application is called a
-"redex".) The beta reduction rule says that a redex is
-beta-equivalent to a term that is constructed by replacing every
-(free) occurrence of a in M by a copy of N. For example,

+The rule of beta-reduction permits a transition from that expression to the following:

-((\x x) z) ~~>_beta z
-((\x (x x)) z) ~~>_beta (z z)
-((\x x) (\y y)) ~~>_beta (\y y)

+	M {a:=N}
+
+What this means is just `M`, with any *free occurrences* inside `M` of the variable `a` replaced with the term `N`.
+
+What is a free occurrence?
+
+>	An occurrence of a variable `a` is **bound** in T if T has the form `(\a N)`.
+
+>	If T has the form `(M N)`, any occurrences of `a` that are bound in `M` are also bound in T, and so too any occurrences of `a` that are bound in `N`.
+
+>	Similarly, if T has the form `(\b N)`, where `b` is a variable distinct from `a`, any occurrences of `a` that are bound in `N` are also bound in T.
+
+>	An occurrence of a variable is **free** if it's not bound.

-Beta reduction is only allowed to replace *free* occurrences of a variable.
-An occurrence of a variable a is BOUND in T if T has the form (\a N).
-If T has the form (M N), and the occurrence of a is in M, then a is
-bound in T just in case a is bound in M; if the occurrence of a is in
-N, than a is bound in T just in case a is bound in N. An occurrence
-of a variable a is FREE in a term T iff it is not bound in T. For instance:

-T = (x (\x (\y (x (y z)))))

+For example:
+
+>	T is defined to be `(x (\x (\y (x (y z)))))`

-The first occurrence of x in T is free. The second occurrence of x
-immediately follows a lambda, and is bound. The third occurrence of x
-occurs within a form that begins with "\x", so it is bound as well.
-Both occurrences of y are bound, and the only occurrence of z is free.

+The first occurrence of `x` in `T` is free. The `\x` we won't regard as being an occurrence of `x`. The next occurrence of `x` occurs within a form that begins with `\x`, so it is bound as well. The occurrence of `y` is bound; and the occurrence of `z` is free.
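It's a useful exercise to make these definitions mechanical. Here's a sketch of ours in Scheme (not part of the notes), computing the free variables of a term encoded as nested lists, with `(lambda a M)` for abstracts; it assumes no variable is itself named `lambda`:

    ; free-vars: a symbol is a variable; (lambda a M) is an abstract;
    ; any other two-element list (M N) is an application.
    (define (free-vars t)
      (cond ((symbol? t) (list t))
            ((eq? (car t) 'lambda)
             ; occurrences of the bound variable are not free in the abstract
             (remove-var (cadr t) (free-vars (caddr t))))
            (else (append (free-vars (car t)) (free-vars (cadr t))))))

    (define (remove-var v vs)
      (cond ((null? vs) '())
            ((eq? v (car vs)) (remove-var v (cdr vs)))
            (else (cons (car vs) (remove-var v (cdr vs))))))

    (free-vars '(x (lambda x (lambda y (x (y z))))))   ; => (x z)

The result agrees with the hand computation above: only the leading `x` and the `z` occur free in `T`.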
-(\x x)	the identity function: given any argument M, this function
-simply returns M: ((\x x) M) ~~>_beta M.

+Here's an example of beta-reduction:

+	((\x (y x)) z)

+beta-reduces to:

+	(y z)

-(\x (x x))	duplicates its argument:
-((\x (x x)) M) ~~> (M M)

+We'll write that like this:

+	((\x (y x)) z) ~~> (y z)

-(\x (\y x))	throws away its second argument:
-(((\x (\y x)) M) N) ~~> M

+Different authors use different notations. Some authors use the term "contraction" for a single reduction step, and reserve the term "reduction" for the reflexive transitive closure of that, that is, for zero or more reduction steps. Informally, it seems easiest to us to say "reduction" for one or more reduction steps. So when we write:

-and so on.

+	M ~~> N

+We'll mean that you can get from M to N by one or more reduction steps. Hankin uses the symbol -> for one-step contraction, and the symbol ->> for zero-or-more step reduction. Hindley and Seldin use ▷1 and ▷.

+When M and N are such that there's some P that M reduces to by zero or more steps, and that N also reduces to by zero or more steps, then we say that M and N are **beta-convertible**. We'll write that like this:

+	M <~~> N

+This is what plays the role of equality in the lambda calculus. Hankin uses the symbol `=` for this. So too do Hindley and Seldin.

+In the metatheory, it's also sometimes useful to talk about formulas that are syntactically equivalent *before any reductions take place*. Hankin uses the symbol ≡ (three bars) for this. So too do Hindley and Seldin. We'll use that too, and will avoid using `=` when discussing metatheory for the lambda calculus. Instead we'll use `<~~>` as we said above. When we want to introduce a stipulative definition, we'll write it out longhand, as in:

+>	T is defined to be `(M N)`.

+We'll regard the following two expressions:

+	(\x x y)

+	(\z z y)

+as syntactically equivalent, since they only involve a typographic change of a bound variable. Read Hankin section 2.3 for discussion of different attitudes one can take about this.

+Note that neither of those expressions is identical to:

+	(\x x w)

+because here it's a free variable that's been changed. Nor are they identical to:

+	(\y y y)

+because here the second occurrence of `y` is no longer free.

+There is plenty of discussion of this, and the fine points of how substitution works, in Hankin and in various of the tutorials we've linked to about the lambda calculus. We expect you have a good intuitive understanding of what to do already, though, even if you're not able to articulate it rigorously.

+
+Shorthand
+---------
+
+The grammar we gave for the lambda calculus leads to some verbosity. There are several informal conventions in widespread use, which enable the language to be written more compactly. (If you like, you could instead articulate a formal grammar which incorporates these additional conventions. Instead of showing it to you, we'll leave it as an exercise for those so inclined.)

-It is easy to see that distinct lambda terms can represent the same
-function. For instance, (\x x) and (\y y) both express the same
-function, namely, the identity function.

-----------------------------------------

+Dot notation: dot means "put a left paren here, and put the right paren as far to the right as possible without creating unbalanced parentheses".
So:

+	(\x (\y (x y)))

+can be abbreviated as:

+	(\x (\y. x y))

+and:

+	(\x \y. (z y) z)

+would abbreviate:

+	(\x \y ((z y) z))

+This on the other hand:

+	((\x \y. (z y)) z)

+would abbreviate:

+	((\x (\y (z y))) z)

+Parentheses: outermost parentheses around applications can be dropped. Moreover, applications will associate to the left, so `M N P` will be understood as `((M N) P)`. Finally, you can drop parentheses around abstracts, but not when they're part of an application. So you can abbreviate:

+	(\x x y)

+as:

+	\x. x y

+but you should include the parentheses in:

+	(\x. x y) z

+and:

+	z (\x. x y)

+Merging lambdas: an expression of the form `(\x (\y M))`, or equivalently, `(\x. \y. M)`, can be abbreviated as:

+	(\x y. M)

+Similarly, `(\x (\y (\z M)))` can be abbreviated as:

+	(\x y z. M)

-----------------------------------------------------

-Church figured out how to encode integers and arithmetic operations
-using lambda terms. Here are the basics:

-0 = \f\x.fx
-1 = \f\x.f(fx)
-2 = \f\x.f(f(fx))
-3 = \f\x.f(f(f(fx)))
-...

-Adding two integers involves applying a special function + such that
-(+ 1) 2 = 3. Here is a term that works for +:

-+ = \m\n\f\x.m(f((n f) x))

-So (+ 0) 0 =
-(((\m\n\f\x.m(f((n f) x))) ;+
-  \f\x.fx) ;0
-  \f\x.fx) ;0

-~~>_beta targeting m for beta conversion

-((\n\f\x.[\f\x.fx](f((n f) x)))
-  \f\x.fx)

-\f\x.[\f\x.fx](f(([\f\x.fx] f) x))

-\f\x.[\f\x.fx](f(fx))

-\f\x.\x.[f(fx)]x

-\f\x.f(fx)

-----------------------------------------------------

-A concrete example: "damn" side effects

-1. Sentences have truth conditions.
-2. If "John read the book" is true, then
-   John read something,
-   Someone read the book,
-   John did something to the book,
-   etc.
-3. If "John read the damn book",
-   all the same entailments follow.
-   To a first approximation, "damn" does not affect at-issue truth
-   conditions.
-4. "Damn" does contribute information about the attitude of the speaker
-   towards some aspect of the situation described by the sentence.

+Lambda terms represent functions
+--------------------------------
+
+All (recursively computable) functions can be represented by lambda terms (the untyped lambda calculus is Turing complete). For some lambda terms, it is easy to see what function they represent:
+
+(\x x) represents the identity function: given any argument M, this function simply returns M: ((\x x) M) ~~> M.
+
+(\x (x x)) duplicates its argument: ((\x (x x)) M) ~~> (M M)
+
+(\x (\y x)) throws away its second argument: (((\x (\y x)) M) N) ~~> M
+
+and so on.
+
+It is easy to see that distinct lambda expressions can represent the same function, considered as a mapping from inputs to outputs. Obviously:
+
+	(\x x)
+
+and:
+
+	(\z z)
+
+both represent the same function, the identity function. However, we said above that we would be regarding these expressions as syntactically equivalent, so they aren't yet really examples of *distinct* lambda expressions representing a single function. However, all three of these are distinct lambda expressions:
+
+	(\y x. y x) (\z z)
+
+	(\x. (\z z) x)
+
+	(\z z)
+
+yet when applied to any argument M, all of these will always return M. So they have the same extension. It's also true, though you may not yet be in a position to see this, that no other expression can differentiate between them when they're supplied as arguments to it. However, these expressions are all syntactically distinct.
+
+The first two expressions are *convertible*: in particular the first reduces to the second. So they can be regarded as proof-theoretically equivalent even though they're not syntactically identical. However, the proof theory we've given so far doesn't permit you to reduce the second expression to the third. So these lambda expressions are non-equivalent.
+
+There's an extension of the proof-theory we've presented so far which does permit this further move. And in that extended proof theory, all computable functions with the same extension do turn out to be equivalent (convertible). However, at that point, we still won't be working with the traditional mathematical notion of a function as a set of ordered pairs. One reason is that the latter but not the former permits uncomputable functions. A second reason is that the latter but not the former prohibits functions from applying to themselves. We discussed this some at the end of seminar (and further discussion is best pursued in person).
+
+
+Booleans and pairs
+==================
+
+Our definition of these is reviewed in [[Assignment1]].

1. Declarative vs imperatival models of computation.
2. Variety of ways in which "order can matter."
3. Variety of meanings for "dynamic."
4. Schoenfinkel, Curry, Church: a brief history
5. Functions as "first-class values"
6. "Curried" functions

1. Beta reduction
1. Encoding pairs (and triples and ...)
1. Encoding booleans

------------------------------------------

-Old notes, no longer operative:

-1. Theoretical computer science is beautiful.
-
-   Google search for "anagram": Did you mean "nag a ram"?
-   Google search for "recursion": Did you mean "recursion"?
-
-   Y = \f.(\x.f (x x)) (\x.f (x x))
-
-1. Understanding the meaning(use) of programming languages
-   helps understanding the meaning(use) of natural langauges
-
-   1. Richard Montague. 1970. Universal Grammar. _Theoria_ 34:375--98.
- "There is in my opinion no important theoretical difference - between natural languages and the artificial languages of - logicians; indeed, I consider it possible to comprehend the - syntax and semantics of both kinds of languages within a - single natural and mathematically precise theory." +yet when applied to any argument M, all of these will always return M. So they have the same extension. It's also true, though you may not yet be in a position to see, that no other argument can differentiate between them when they're supplied as an argument to it. However, these expressions are all syntactically distinct. - 2. Similarities: +The first two expressions are *convertible*: in particular the first reduces to the second. So they can be regarded as proof-theoretically equivalent even though they're not syntactically identical. However, the proof theory we've given so far doesn't permit you to reduce the second expression to the third. So these lambda expressions are non-equivalent. - Function/argument structure: - f(x) - kill(it) - pronominal binding: - x := x + 1 - John is his own worst enemy - Quantification: - foreach x in [1..10] print x - Print every number from 1 to 10 +There's an extension of the proof-theory we've presented so far which does permit this further move. And in that extended proof theory, all computable functions with the same extension do turn out to be equivalent (convertible). However, at that point, we still won't be working with the traditional mathematical notion of a function as a set of ordered pairs. One reason is that the latter but not the former permits uncomputable functions. A second reason is that the latter but not the former prohibits functions from applying to themselves. We discussed this some at the end of seminar (and further discussion is best pursued in person). - 3. Possible differences: - Parentheses: - 3 * (4 + 7) - ?It was four plus seven that John computed 3 multiplied by - (compare: John computed 3 multiplied by four plus seven) - Ambiguity: - 3 * 4 + 7 - Time flies like and arrow, fruit flies like a banana. - Vagueness: - 3 * 4 - A cloud near the mountain - Unbounded numbers of distinct pronouns: - f(x1) + f(x2) + f(x3) + ... - He saw her put it in ... - [In ASL, dividing up the signing space...] - - -2. Standard methods in linguistics are limited. - 1. First-order predicate calculus - Invented for reasoning about mathematics (Frege's quantification) - Alethic, order insensitive: phi & psi == psi & phi - But: John left and Mary left too /= Mary left too and John left +Booleans and pairs +================== - 2. Simply-typed lambda calculus +Our definition of these is reviewed in [[Assignment1]]. - Can't express the Y combinator -3. Meaning is computation. - 1. Semantics is programming - 2. Good programming is good semantics - 1. Example +1. Declarative vs imperatival models of computation. +2. Variety of ways in which "order can matter." +3. Variety of meanings for "dynamic." +4. Schoenfinkel, Curry, Church: a brief history +5. Functions as "first-class values" +6. "Curried" functions + +1. Beta reduction +1. Encoding pairs (and triples and ...) +1. Encoding booleans - 1. Programming technique - Exceptions - throw (raise) - catch (handle) - 2. Application to linguistics - presupposition - expressives - Develop application: - fn application - divide by zero - test and repair - raise and handle - fn application - presupposition failure - build into meaning of innocent predicates? 
-              expressives
-                  throw
-                  handle
-                  resume computation
-
diff --git a/week2.mdwn b/week2.mdwn
new file mode 100644
index 00000000..c7e82303
--- /dev/null
+++ b/week2.mdwn
@@ -0,0 +1,511 @@
+1. Substitution; using alpha-conversion and other strategies
+1. Conversion versus reduction
+
+1. Different evaluation strategies (call by name, call by value, etc.)
+1. Strongly normalizing vs weakly normalizing vs non-normalizing; Church-Rosser Theorem(s)
+1. Lambda calculus compared to combinatory logic

+1. Church-like encodings of numbers, defining addition and multiplication +1. Defining the predecessor function; alternate encodings for the numbers +1. Homogeneous sequences or "lists"; how they differ from pairs, triples, etc. +1. Representing lists as pairs +1. Representing lists as folds +1. Typical higher-order functions: map, filter, fold

+1. Recursion exploiting the fold-like representation of numbers and lists ([[!wikipedia Deforestation (computer science)]], [[!wikipedia Zipper (data structure)]]) +1. General recursion using omega + +1. Eta reduction and "extensionality" ?? +Undecidability of equivalence + +There is no algorithm which takes as input two lambda expressions and outputs TRUE or FALSE depending on whether or not the two expressions are equivalent. This was historically the first problem for which undecidability could be proven. As is common for a proof of undecidability, the proof shows that no computable function can decide the equivalence. Church's thesis is then invoked to show that no algorithm can do so. + +Church's proof first reduces the problem to determining whether a given lambda expression has a normal form. A normal form is an equivalent expression which cannot be reduced any further under the rules imposed by the form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e which closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results. + + + +1. The Y combinator(s); more on evaluation strategies

+1. Introducing the notion of a "continuation", a technique we'll by then already have used a few times
+
+
+
+alpha-convertible
+
+syntactic equality `===`
+contract/reduce/`~~>`
+convertible `<~~>`
+
+normalizing
+    weakly normalizable
+    strongly normalizable
+    "normal order" reduction vs "applicative order"
+    eval strategy choices
+
    Reduction strategies

    Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. The distinction between reduction strategies relates to the distinction in functional programming languages between eager evaluation and lazy evaluation.

    Full beta reductions
        Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off".
    Applicative order
        The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself. Applicative order always attempts to apply functions to normal forms, even when this is not possible. Most programming languages (including Lisp, ML and imperative languages like C and Java) are described as "strict", meaning that functions applied to non-normalising arguments are non-normalising. This is done essentially using applicative order, call by value reduction (see below), but usually called "eager evaluation".
    Normal order
        The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
    Call by name
        As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
    Call by value
        Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).
    Call by need
        As normal order, but function applications that would duplicate terms instead name the argument, which is then reduced only "when it is needed". Called in practical contexts "lazy evaluation". In implementations this "name" takes the form of a pointer, with the redex represented by a thunk.

    Applicative order is not a normalising strategy. The usual counterexample is as follows: define Ω = ωω where ω = λx.xx. This entire expression contains only one redex, namely the whole expression; its reduct is again Ω. Since this is the only available reduction, Ω has no normal form (under any evaluation strategy). Using applicative order, the expression KIΩ = (λxy.x) (λx.x)Ω is reduced by first reducing Ω to normal form (since it is the leftmost redex), but since Ω has no normal form, applicative order fails to find a normal form for KIΩ.

    In contrast, normal order is so called because it always finds a normalising reduction if one exists. In the above example, KIΩ reduces under normal order to I, a normal form. A drawback is that redexes in the arguments may be copied, resulting in duplicated computation (for example, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y) using this strategy; now there are two redexes, so full evaluation needs two more steps, but if the argument had been reduced first, there would now be none).
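Scheme itself evaluates applicatively (call by value), so the KIΩ phenomenon is easy to reproduce. A sketch of ours; the commented-out line really does loop forever:

    ; omega thunks Ω = (λx.xx)(λx.xx); calling (omega) never terminates.
    (define (omega) ((lambda (x) (x x)) (lambda (x) (x x))))

    (define (K x y) x)    ; K = λxy.x

    ; Applicative order evaluates arguments first, so this diverges:
    ; (K 'I (omega))

    ; Wrapping the argument in a lambda (a thunk) simulates normal order /
    ; call by name: the thunk is already a value, and K never forces it.
    (K 'I (lambda () (omega)))   ; => I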
    The positive tradeoff of using applicative order is that it does not cause unnecessary computation if all arguments are used, because it never substitutes arguments containing redexes and hence never needs to copy them (which would duplicate work). In the above example, in applicative order (λx.xx) ((λx.x)y) reduces first to (λx.xx)y and then to the normal form yy, taking two steps instead of three.

    Most purely functional programming languages (notably Miranda and its descendants, including Haskell), and the proof languages of theorem provers, use lazy evaluation, which is essentially the same as call by need. This is like normal order reduction, but call by need manages to avoid the duplication of work inherent in normal order reduction using sharing. In the example given above, (λx.xx) ((λx.x)y) reduces to ((λx.x)y) ((λx.x)y), which has two redexes, but in call by need they are represented using the same object rather than copied, so when one is reduced the other is too.


    Strict evaluation

    In strict evaluation, the arguments to a function are always evaluated completely before the function is applied.

    Under Church encoding, eager evaluation of operators maps to strict evaluation of functions; for this reason, strict evaluation is sometimes called "eager". Most existing programming languages use strict evaluation for functions.

    Applicative order

    Applicative order (or leftmost innermost) evaluation refers to an evaluation strategy in which the arguments of a function are evaluated from left to right in a post-order traversal of reducible expressions (redexes). Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.

    Call by value

    Call-by-value evaluation (also referred to as pass-by-value) is the most common evaluation strategy, used in languages as different as C and Scheme. In call-by-value, the argument expression is evaluated, and the resulting value is bound to the corresponding variable in the function (frequently by copying the value into a new memory region). If the function or procedure is able to assign values to its parameters, only its local copy is assigned — that is, anything passed into a function call is unchanged in the caller's scope when the function returns.

    Call-by-value is not a single evaluation strategy, but rather the family of evaluation strategies in which a function's argument is evaluated before being passed to the function. While many programming languages (such as Eiffel and Java) that use call-by-value evaluate function arguments left-to-right, some evaluate functions and their arguments right-to-left, and others (such as Scheme, OCaml and C) leave the order unspecified (though they generally require implementations to be consistent).

    In some cases, the term "call-by-value" is problematic, as the value which is passed is not the value of the variable as understood by the ordinary meaning of value, but an implementation-specific reference to the value. The description "call-by-value where the value is a reference" is common (but should not be understood as being call-by-reference); another term is call-by-sharing.
Thus the behaviour of call-by-value Java or Visual Basic and call-by-value C or Pascal is significantly different: in C or Pascal, calling a function with a large structure as an argument will cause the entire structure to be copied, potentially causing serious performance degradation, and mutations to the structure are invisible to the caller. However, in Java or Visual Basic only the reference to the structure is copied, which is fast, and mutations to the structure are visible to the caller.

    Call by reference

    In call-by-reference evaluation (also referred to as pass-by-reference), a function receives an implicit reference to the argument, rather than a copy of its value. This typically means that the function can modify the argument, something that will be seen by its caller. Call-by-reference therefore has the advantage of greater time- and space-efficiency (since arguments do not need to be copied), as well as the potential for greater communication between a function and its caller (since the function can return information using its reference arguments), but the disadvantage that a function must often take special steps to "protect" values it wishes to pass to other functions.

    Many languages support call-by-reference in some form or another, but comparatively few use it as a default; Perl and Visual Basic are two that do, though Visual Basic also offers a special syntax for call-by-value parameters. A few languages, such as C++ and REALbasic, default to call-by-value, but offer special syntax for call-by-reference parameters. C++ additionally offers call-by-reference-to-const. In purely functional languages there is typically no semantic difference between the two strategies (since their data structures are immutable, so there is no possibility for a function to modify any of its arguments), so they are typically described as call-by-value even though implementations frequently use call-by-reference internally for the efficiency benefits.

    Even among languages that don't exactly support call-by-reference, many, including C and ML, support explicit references (objects that refer to other objects), such as pointers (objects representing the memory addresses of other objects), and these can be used to effect or simulate call-by-reference (but with the complication that a function's caller must explicitly generate the reference to supply as an argument).

    Call by sharing

    Also known as "call by object" or "call by object-sharing", this is an evaluation strategy first named by Barbara Liskov et al. for the language CLU in 1974[1]. It is used by languages such as Python[2], Iota, Java (for object references)[3], Ruby, Scheme, OCaml, AppleScript, and many other languages. However, the term "call by sharing" is not in common use; the terminology is inconsistent across different sources. For example, in the Java community, they say that Java is pass-by-value, whereas in the Ruby community, they say that Ruby is pass-by-reference, even though the two languages exhibit the same semantics. Call-by-sharing implies that values in the language are based on objects rather than primitive types.

    The semantics of call-by-sharing differ from call-by-reference in that assignments to function arguments within the function aren't visible to the caller (unlike by-reference semantics).
However, since the function has access to the same object as the caller (no copy is made), mutations to those objects within the function are visible to the caller, which differs from call-by-value semantics.

    Although this term has widespread usage in the Python community, identical semantics in other languages such as Java and Visual Basic are often described as call-by-value, where the value is implied to be a reference to the object.

    Call by copy-restore

    Call-by-copy-restore, call-by-value-result or call-by-value-return (as termed in the Fortran community) is a special case of call-by-reference where the provided reference is unique to the caller. If a parameter to a function call is a reference that might be accessible by another thread of execution, its contents are copied to a new reference that is not; when the function call returns, the updated contents of this new reference are copied back to the original reference ("restored").

    The semantics of call-by-copy-restore also differ from those of call-by-reference where two or more function arguments alias one another; that is, point to the same variable in the caller's environment. Under call-by-reference, writing to one will affect the other; call-by-copy-restore avoids this by giving the function distinct copies, but leaves the result in the caller's environment undefined (depending on which of the aliased arguments is copied back first).

    When the reference is passed to the callee uninitialized, this evaluation strategy may be called call-by-result.

    Partial evaluation

    In partial evaluation, evaluation may continue into the body of a function that has not been applied. Any sub-expressions that do not contain unbound variables are evaluated, and function applications whose argument values are known may be reduced. In the presence of side-effects, complete partial evaluation may produce unintended results; for this reason, systems that support partial evaluation tend to do so only for "pure" expressions (expressions without side-effects) within functions.

    Non-strict evaluation

    In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body.

    Under Church encoding, lazy evaluation of operators maps to non-strict evaluation of functions; for this reason, non-strict evaluation is often referred to as "lazy". Boolean expressions in many languages use lazy evaluation; in this context it is often called short circuiting. Conditional expressions also usually use lazy evaluation, albeit for different reasons.

    Normal order

    Normal-order (or leftmost outermost) evaluation is the evaluation strategy where the outermost redex is always reduced, applying functions before evaluating function arguments. It differs from call-by-name in that call-by-name does not evaluate inside the body of an unapplied function.

    Call by name

    In call-by-name evaluation, the arguments to functions are not evaluated at all — rather, function arguments are substituted directly into the function body using capture-avoiding substitution. If the argument is not used in the evaluation of the function, it is never evaluated; if the argument is used several times, it is re-evaluated each time. (See Jensen's Device.)
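The by-name/by-need contrast described here (and elaborated in the next paragraphs) can be simulated in Scheme with thunks and with R5RS `delay`/`force`, which memoize. A sketch of ours; `expensive` is a made-up stand-in for a costly argument:

    (define (expensive) (display "computing... ") 42)

    ; Call by name: the argument thunk is re-run at every use.
    (define (double-by-name th) (+ (th) (th)))
    (double-by-name (lambda () (expensive)))   ; prints "computing... " twice => 84

    ; Call by need: force caches the promise's value, so the work
    ; happens at most once.
    (define (double-by-need p) (+ (force p) (force p)))
    (double-by-need (delay (expensive)))       ; prints "computing... " once => 84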
    Call-by-name evaluation can be preferable over call-by-value evaluation because call-by-name evaluation always yields a value when a value exists, whereas call-by-value may not terminate if the function's argument is a non-terminating computation that is not needed to evaluate the function. Opponents of call-by-name claim that it is significantly slower when the function argument is used, and that in practice this is almost always the case as a mechanism such as a thunk is needed.

    Call by need

    Call-by-need is a memoized version of call-by-name where, if the function argument is evaluated, that value is stored for subsequent uses. In a "pure" (effect-free) setting, this produces the same results as call-by-name; when the function argument is used two or more times, call-by-need is almost always faster.

    Because evaluation of expressions may happen arbitrarily far into a computation, languages using call-by-need generally do not support computational effects (such as mutation) except through the use of monads and uniqueness types. This eliminates any unexpected behavior from variables whose values change prior to their delayed evaluation.

    This is a kind of lazy evaluation.

    Haskell is the most well-known language that uses call-by-need evaluation.

    R also uses a form of call-by-need.

    Call by macro expansion

    Call-by-macro-expansion is similar to call-by-name, but uses textual substitution rather than capture-avoiding substitution. With incautious use, macro substitution may result in variable capture and lead to undesired behavior. Hygienic macros avoid this problem by checking for and replacing shadowed variables that are not parameters.


    Eager evaluation or greedy evaluation is the evaluation strategy in most traditional programming languages.

    In eager evaluation an expression is evaluated as soon as it gets bound to a variable. The term is typically used in contrast with lazy evaluation, where expressions are only evaluated when evaluating a dependent expression. Eager evaluation is almost exclusively used in imperative programming languages where the order of execution is implicitly defined by the source code organization.

    One advantage of eager evaluation is that it eliminates the need to track and schedule the evaluation of expressions. It also allows the programmer to dictate the order of execution, making it easier to determine when sub-expressions (including functions) within the expression will be evaluated, as these sub-expressions may have side-effects that will affect the evaluation of other expressions.

    A disadvantage of eager evaluation is that it forces the evaluation of expressions that may not be necessary at run time, or it may delay the evaluation of expressions that have a more immediate need. It also forces the programmer to organize the source code for optimal order of execution.

    Note that many modern compilers are capable of scheduling execution to better optimize processor resources and can often eliminate unnecessary expressions from being executed entirely. Therefore, the notions of purely eager or purely lazy evaluation may not be applicable in practice.


    In computer programming, lazy evaluation is the technique of delaying a computation until the result is required.
    The benefits of lazy evaluation include: performance increases due to avoiding unnecessary calculations, avoiding error conditions in the evaluation of compound expressions, the capability of constructing potentially infinite data structures, and the capability of defining control structures as abstractions instead of as primitives.

    Languages that use lazy actions can be further subdivided into those that use a call-by-name evaluation strategy and those that use call-by-need. Most realistic lazy languages, such as Haskell, use call-by-need for performance reasons, but theoretical presentations of lazy evaluation often use call-by-name for simplicity.

    The opposite of lazy actions is eager evaluation, sometimes known as strict evaluation. Eager evaluation is the evaluation behavior used in most programming languages.

    Lazy evaluation refers to how expressions are evaluated when they are passed as arguments to functions and entails the following three points:[1]

    1. The expression is only evaluated if the result is required by the calling function, called delayed evaluation.[2]
    2. The expression is only evaluated to the extent that is required by the calling function, called short-circuit evaluation.
    3. The expression is never evaluated more than once, called applicative-order evaluation.[3]

    Delayed evaluation

    Delayed evaluation is used particularly in functional languages. When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but when the evaluator is forced to produce the expression's value. That is, a statement such as x:=expression; (i.e. the assignment of the result of an expression to a variable) clearly calls for the expression to be evaluated and the result placed in x, but what actually is in x is irrelevant until there is a need for its value via a reference to x in some later expression whose evaluation could itself be deferred, though eventually the rapidly-growing tree of dependencies would be pruned in order to produce some symbol rather than another for the outside world to see.

    Some programming languages delay evaluation of expressions by default, and some others provide functions or special syntax to delay evaluation. In Miranda and Haskell, evaluation of function arguments is delayed by default. In many other languages, evaluation can be delayed by explicitly suspending the computation using special syntax (as with Scheme's "delay" and "force" and OCaml's "lazy" and "Lazy.force") or, more generally, by wrapping the expression in a thunk. The object representing such an explicitly delayed evaluation is called a future or promise. Perl 6 uses lazy evaluation of lists, so one can assign infinite lists to variables and use them as arguments to functions, but unlike Haskell and Miranda, Perl 6 doesn't use lazy evaluation of arithmetic operators and functions by default.

    Delayed evaluation has the advantage of being able to create calculable infinite lists without infinite loops or size limits interfering in the computation. For example, one could create a function that creates an infinite list (often called a stream) of Fibonacci numbers.
The calculation of the n-th Fibonacci number would be merely the extraction of that element from the infinite list, forcing the evaluation of only the first n members of the list.

    For example, in Haskell, the list of all Fibonacci numbers can be written as

        fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    In Haskell syntax, ":" prepends an element to a list, tail returns a list without its first element, and zipWith uses a specified function (in this case addition) to combine corresponding elements of two lists to produce a third.

    Provided the programmer is careful, only the values that are required to produce a particular result are evaluated. However, certain calculations may result in the program attempting to evaluate an infinite number of elements; for example, requesting the length of the list or trying to sum the elements of the list with a fold operation would result in the program either failing to terminate or running out of memory.

    Control structures

    Even in most eager languages if statements evaluate in a lazy fashion.

        if a then b else c

    evaluates (a), then if and only if (a) evaluates to true does it evaluate (b), otherwise it evaluates (c). That is, either (b) or (c) will not be evaluated. Conversely, in an eager language the expected behavior is that

        define f(x,y) = 2*x
        set k = f(e,5)

    will still evaluate (e) and (f) when computing (k). However, user-defined control structures depend on exact syntax, so for example

        define g(a,b,c) = if a then b else c
        l = g(h,i,j)

    (i) and (j) would both be evaluated in an eager language. While in

        l' = if h then i else j

    (i) or (j) would be evaluated, but never both.

    Lazy evaluation allows control structures to be defined normally, and not as primitives or compile-time techniques. If (i) or (j) have side effects or introduce run time errors, the subtle differences between (l) and (l') can be complex. As most programming languages are Turing-complete, it is of course possible to introduce lazy control structures in eager languages, either as built-ins like C's ternary operator ?: or by other techniques such as clever use of lambdas, or macros.

    Short-circuit evaluation of Boolean control structures is sometimes called "lazy".

    Controlling eagerness in lazy languages

    In lazy programming languages such as Haskell, although the default is to evaluate expressions only when they are demanded, it is possible in some cases to make code more eager—or conversely, to make it more lazy again after it has been made more eager. This can be done by explicitly coding something which forces evaluation (which may make the code more eager) or avoiding such code (which may make the code more lazy). Strict evaluation usually implies eagerness, but they are technically different concepts.

    However, there is an optimisation implemented in some compilers called strictness analysis, which, in some cases, allows the compiler to infer that a value will always be used. In such cases, this may render the programmer's choice of whether to force that particular value or not, irrelevant, because strictness analysis will force strict evaluation.

    In Haskell, marking constructor fields strict means that their values will always be demanded immediately. The seq function can also be used to demand a value immediately and then pass it on, which is useful if a constructor field should generally be lazy.
However, neither of these techniques implements recursive strictness—for that, a function called deepSeq was invented.

    Also, pattern matching in Haskell 98 is strict by default, so the ~ qualifier has to be used to make it lazy.




confluence/Church-Rosser


"combinators", useful ones:

Useful combinators
I
K
omega
true/get-first/K
false/get-second
make-pair
S,B,C,W/dup,Omega

(( combinatory logic ))

composition
n-ary[sic] composition
"fold-based"[sic] representation of numbers
defining some operations, not yet predecessor
    iszero,succ,add,mul,...?

lists?
    explain differences between list and tuple (and stream)
    FIFO queue,LIFO stack,etc...
"pair-based" representation of lists (1,2,3)
nil,cons,isnil,head,tail

explain operations like "map","filter","fold_left","fold_right","length","reverse"
but we're not yet in a position to implement them because we don't know how to recurse

Another way to do lists is based on model of how we did numbers
"fold-based" representation of lists
One virtue is we can do some recursion by exploiting the fold-based structure of our implementation; don't (yet) need a general method for recursion (see the Scheme sketch at the very end of this patch)

Go back to numbers, how to do predecessor? (a few ways)
For some purposes may be easier (to program, more efficient) to use "pair-based" representation of numbers
("More efficient" but these are still base-1 representations of numbers!)
In this case, too, you'd need a general method for recursion
(You could also have a hybrid, pair-and-fold based representation of numbers, and a hybrid, pair-and-fold based representation of lists. Works quite well.)

Recursion
Even if we have fold-based representation of numbers, and predecessor/equal/subtraction, some recursive functions are going to be out of our reach
Need a general method, where f(n) doesn't just depend on f(n-1) (or ). Example?

How to do recursion with omega.


Next week: fixed point combinators


-- 
2.11.0
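Postscript (an editorial sketch of ours, not part of the patch): the "fold-based" representation of lists mentioned in the scratch notes above, transcribed into Scheme. A list is identified with the function that right-folds over itself, and then some recursion (sum, length, even map) comes for free, with no general recursion method needed yet:

    (define nil  (lambda (c n) n))
    (define (kons h t) (lambda (c n) (c h (t c n))))   ; fold-based cons

    (define l123 (kons 1 (kons 2 (kons 3 nil))))       ; the list (1,2,3)

    (l123 + 0)                            ; sum    => 6
    (l123 (lambda (h acc) (+ 1 acc)) 0)   ; length => 3

    (define (map-fl f l)                  ; map, via the fold structure
      (l (lambda (h acc) (kons (f h) acc)) nil))

    ((map-fl (lambda (x) (* x x)) l123) + 0)   ; sum of squares => 14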