X-Git-Url: http://lambda.jimpryor.net/git/gitweb.cgi?p=lambda.git;a=blobdiff_plain;f=week1.mdwn;h=af079edf543c975f90a646c8d4ea6a8625cd6454;hp=5018c711cd694565b0da85e9cb7181fb999bf2d3;hb=c6aa6ab0a05bdf464373ee31cb8cd93285079ca7;hpb=59081b2acb15b435e9a486ce18bc85c1abe232c2 diff --git a/week1.mdwn b/week1.mdwn index 5018c711..a419c877 100644 --- a/week1.mdwn +++ b/week1.mdwn @@ -1,756 +1,569 @@ -Here's what we did in seminar on Monday 9/13, (Sometimes these notes will expand on things mentioned only briefly in class, or discuss useful tangents that didn't even make it into class.) +These notes will recapitulate, make more precise, and to some degree expand what we did in the last hour of our first meeting, leading up to the definitions of the `factorial` and `length` functions. -Applications -============ +### Getting started ### -We mentioned a number of linguistic and philosophical applications of the tools that we'd be helping you learn in the seminar. (We really do mean "helping you learn," not "teaching you." You'll need to aggressively browse and experiment with the material yourself, or nothing we do in a few two-hour sessions will succeed in inducing mastery of it.) +We begin with a decidable fragment of arithmetic. Our language has some primitive literal values: -From linguistics ----------------- + 0, 1, 2, 3, ... -* generalized quantifiers are a special case of operating on continuations +In fact we could get by with just the primitive literal `0` and the `succ` function, but we will make things a bit more convenient by allowing literal expressions of any natural number. We won't worry about numbers being too big for our finite computers to handle. -* (Chris: fill in other applications...) +We also have some predefined functions: -* expressives -- at the end of the seminar we gave a demonstration of modeling [[damn]] using continuations...see the linked summary for more explanation and elaboration + succ, +, *, pred, - -From philosophy ---------------- +Again, we might be able to get by with just `succ`, and define the others in terms of it, but we'll be a bit more relaxed. Since we want to stick with natural numbers, not the whole range of integers, we'll make `pred 0` just be `0`, and `2-4` also be `0`. -* the natural semantics for positive free logic is thought by some to have objectionable ontological commitments; Jim says that thought turns on not understanding the notion of a "union type", and conflating the folk notion of "naming" with the technical notion of semantic value. We'll discuss this in due course. +Here's another set of functions: -* those issues may bear on Russell's Gray's Elegy argument in "On Denoting" + ==, <, >, <=, >=, != -* and on discussion of the difference between the meaning of "is beautiful" and "beauty," and the difference between the meaning of "that snow is white" and "the proposition that snow is white." +`==` is just what we non-programmers normally express by `=`. It's a relation that holds or not between two values. Here we'll treat it as a function that takes two values as arguments and returns a *boolean* value, that is a truth-value, as a result. The reason for using the doubled `=` symbol is that the single `=` symbol tends to get used in lots of different roles in programming, so we reserve `==` to express this meaning. I will deliberately try to minimize the uses of single `=` in this made-up language (but not eliminate it entirely), to reduce ambiguity and confusion. 
The `==` relation---or as we're treating it here, the `==` *function* that returns a boolean value---can at least take two numbers as arguments. Probably it makes sense for it to take other kinds of values as arguments, too. For example, it should operate on two truth-values as well. Maybe we'd want it to operate on a number and a truth-value, too? and always return false in that case? What about operating on two functions? Here we encounter the difficulty that the computer can't in general *decide* when two functions are equivalent. Let's not try to sort this all out just yet. We'll suppose that `==` can at least take two numbers as arguments, or two truth-values. -* the apparatus of monads, and techniques for statically representing the semantics of an imperatival language quite generally, are explicitly or implicitly invoked in dynamic semantics +As mentioned in class, we represent the truth-values like this: -* the semantics for mutation will enable us to make sense of a difference between numerical and qualitative identity---for purely mathematical objects! + 'true, 'false -* issues in that same neighborhood will help us better understand proposals like Kit Fine's that semantics is essentially coordinated, and that `R a a` and `R a b` can differ in interpretation even when `a` and `b` don't +These are instances of a broader class of literal values that I called *symbolic atoms*. We'll return to them shortly. The reason we write them with an initial `'` will also be explained shortly. For now, it's enough to note that the expression: + 1 + 2 == 3 -Declarative/functional vs Imperatival/dynamic models of computation -=================================================================== +evaluates to `'true`, and the expression: -Many of you, like us, will have grown up thinking the paradigm of computation is a sequence of changes. Let go of that. It will take some care to separate the operative notion of "sequencing" here from other notions close to it, but once that's done, you'll see that languages that have no significant notions of sequencing or changes are Turing complete: they can perform any computation we know how to describe. In itself, that only puts them on equal footing with more mainstream, imperatival programming languages like C and Java and Python, which are also Turing complete. But further, the languages we want you to become familiar with can reasonably be understood to be more fundamental. They embody the elemental building blocks that computer scientists use when reasoning about and designing other languages. + 1 + 0 == 3 -Jim offered the metaphor: think of imperatival languages, which include "mutation" and "side-effects" (we'll flesh out these keywords as we proceeed), as the pate of computation. We want to teach you about the meat and potatoes, where as it turns out there is no sequencing and no changes. There's just the evaluation or simplification of complex expressions. +evaluates to `'false`. Something else that evaluates to `'false` is the simple expression: -Now, when you ask the Scheme interpreter to simplify an expression for you, that's a kind of dynamic interaction between you and the interpreter. You may wonder then why these languages should not also be understood imperatively. The difference is that in a purely declarative or functional language, there are no dynamic effects in the language itself. It's just a static semantic fact about the language that one expression reduces to another. 
You may have verified that fact through your dynamic interactions with the Scheme interpreter, but that's different from saying that there are dynamic effects in the language itself. + 'false -What the latter would amount to will become clearer as we build our way up to languages which are genuinely imperatival or dynamic. +That is, literal values are a limiting case of expression, that evaluate to just themselves. More complex expressions like `1 + 0` don't evaluate to themselves, but rather down to literal values. -Many of the slogans and keywords we'll encounter in discussions of these issues call for careful interpretation. They mean various different things. +The functions `succ` and `pred` come before their arguments, like this: -For example, you'll encounter the claim that declarative languages are distinguished by their **referential transparency.** What's meant by this is not always exactly the same, and as a cluster, it's related to but not the same as this means for philosophers and linguists. + succ 1 -The notion of **function** that we'll be working with will be one that, by default, sometimes counts as non-identical functions that map all their inputs to the very same outputs. For example, two functions from jumbled decks of cards to sorted decks of cards may use different algorithms and hence be different functions. +On the other hand, the functions `+`, `*`, `-`, `==`, and so on come in between their arguments, like this: -It's possible to enhance the lambda calculus so that functions do get identified when they map all the same inputs to the same outputs. This is called making the calculus **extensional**. Church called languages which didn't do this "intensional." If you try to understand this in terms of functions from worlds to extensions (an idea also associated with Church), you will hurt yourself. So too if you try to understand it in terms of mental stereotypes, another notion sometimes designated by "intension." + x < y -It's often said that dynamic systems are distinguished because they are the ones in which **order matters**. However, there are many ways in which order can matter. If we have a trivalent boolean system, for example---easily had in a purely functional calculus---we might choose to give a truth-table like this for "and": +Functions of this latter sort are said to have an "infix" syntax. This is just a convenience for how we write them. Our language will have to keep rigorous track of which functions have infix syntax and which don't, but we'll just rely on context and our brains to make sense of this for now. Functions with the ordinary, non-infix syntax can take two arguments, as well. If we had defined the less-than relation (boolean function) in that style, we'd write it like this instead: - true and true = true - true and * = * - true and false = false - * and true = * - * and * = * - * and false = * - false and true = false - false and * = false - false and false = false + lessthan? (x, y) -And then we'd notice that `* and false` has a different intepretation than `false and *`. (The same phenomenon is already present with the mateial conditional in bivalent logics; but seeing that a non-symmetric semantics for `and` is available even for functional languages is instructive.) +or perhaps like this: -Another way in which order can matter that's present even in functional languages is that the interpretation of some complex expressions can depend on the order in which sub-expressions are evaluated. 
Evaluated in one order, the computations might never terminate (and so semantically we interpret them as having "the bottom value"---we'll discuss this). Evaluated in another order, they might have a perfectly mundane value. Here's an example, though we'll reserve discussion of it until later: + lessthan? x y - (\x. y) ((\x. x x) (\x. x x)) +We'll get more acquainted with the difference between these next week. For now, I'll just stick to the first form. -Again, these facts are all part of the metatheory of purely functional languages. But *there is* a different sense of "order matters" such that it's only in imperatival languages that order so matters. +Another set of operations we have are: - x := 2 - x := x + 1 - x == 3 + and, or, not -Here the comparison in the last line will evaluate to true. +The first two of these are infix functions that expect two boolean arguments, and gives a boolean result. The third is a function that expects only one boolean argument. Our earlier function `!=` means "doesn't equal", and: - x := x + 1 - x := 2 - x == 3 + x != y -Here the comparison in the last line will evaluate to false. +will in general be just another way to write: -One of our goals for this course is to get you to understand *what is* that new -sense such that only so matters in imperatival languages. + not (x == y) -Finally, you'll see the term **dynamic** used in a variety of ways in the literature for this course: +You see that you can use parentheses in the standard way. -* dynamic versus static typing +I've started throwing in some variables. We'll say variables are any expression that starts with a lower-case letter, then is followed by a sequence of 0 or more upper- or lower-case letters, or underscores (`_`). Then at the end you can optionally have a `?` or `!` or a sequence of `'`s, understood as "prime" symbols. Hence, all of these are legal variables: -* dynamic versus lexical scoping + x + x1 + x_not_y + xUBERANT + x' + x'' + x? + xs -* dynamic versus static control operators +We'll follow a *convention* of using variables with short names and a final `s` to represent collections like sequences (to be discussed below). But this is just a convention to help us remember what we're up to, not a strict rule of the language. We'll also follow a convention of only using variables ending in `?` to represent functions that return a boolean value. Thus, for example, `zero?` will be a function that expects a single number argument and returns a boolean corresponding to whether that number is `0`. `odd?` will be a function that expects a single number argument and returns a boolean corresponding to whether than number is odd. Above, I suggested we might use `lessthan?` to represent a function that expects *two* number arguments, and again returns a boolean result. -* finally, we're used ourselves to talking about dynamic versus static semantics +We also conventionally reserve variables ending in `!` for a different special class of functions, that we will explain later in the course. -For the most part, these uses are only loosely connected to each other. We'll tend to use "imperatival" to describe the kinds of semantic properties made available in dynamic semantics, languages which have robust notions of sequencing changes, and so on. +In fact you can think of `succ` and `pred` and `not` and all the rest as also being variables; it's just that these variables have been pre-defined in our language to be bound to special functions we designated in advance. 
You can even think of `==` and `<` as being variables, too, bound to other functions. But I haven't given you rules yet which would make them legal variables, because they don't start with a lower-case letter. We can make the rules more liberal later. -Map -=== +Only a few things in our language aren't variables. These include the **keywords** like `let` and `case` and so on that we'll discuss below. You can't use `let` as a variable, else the syntax of our language would become too hard to mechanically parse. (And probably too hard for our meager brains to parse, too.) -
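To make the "predefined functions are just pre-bound variables" idea concrete, here is a rough sketch in OCaml (one of the languages we'll be working with later). The bindings below are our own illustrations, not part of the made-up language above; note too that OCaml itself happens to spell its ordinary equality test `=`, and uses `==` for something different.

    let succ n = n + 1                       (* a variable pre-bound to a function *)
    let pred n = if n = 0 then 0 else n - 1  (* our convention: pred 0 is just 0 *)

    (* such a variable can be re-bound locally, like any other variable *)
    let rebinding_example =
      let succ n = n + 2 in   (* shadows the binding above *)
      succ 3                  (* evaluates to 5 under the local binding *)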
    Scheme (functional part)          lambda calculus
    OCaml (functional part)           combinatory logic
    C, Java, Pascal
    Scheme (imperative part)
    OCaml (imperative part)

    ----------------------- Turing complete -----------------------

                                      more advanced type systems,
                                      such as polymorphic types

                                      simply-typed lambda calculus
                                      (what linguists mostly use)
- &exists;x. (F x and &exists;x (not (F x))) -+just be sure to always supply that function with arguments that are two-element sequences whose second element is `10`. If you don't, you will have a pattern-matching failure and the interpretation of your expression will "crash". +Thus, you can now do things like this: -Some more comparisons between Scheme and OCaml ----------------------------------------------- +`let` + `f match` λ`(x, y). (x, x + y, x + 2*y, x + 3*y);` + `(a, b, c, d) match f (10, 1)` +`in (b, d)` -11. Simple predefined values +which evaluates `f (10, 1)` to `(10, 11, 12, 13)`, which it will match against the complex pattern `(a, b, c, d)`, binding all four of the contained variables, and then evaluate `(b, d)` under those bindings, giving us the result `(11, 13)`. - Numbers in Scheme: 2, 3 - In OCaml: 2, 3 +Notice that in the preceding expression, the variables `a` and `c` were never used. We're allowed to do that, but there's also a special syntax to indicate that we want to throw away a value like this. We use the special pattern `_`: - Booleans in Scheme: #t, #f - In OCaml: true, false +`let` + `f match` λ`(x, y). (x, x + y, x + 2*y, x + 3*y);` + `(_, b, _, d) match f (10, 1)` +`in (b, d)` - The eighth letter in the Latin alphabet, in Scheme: #\h - In OCaml: 'h' +The role of `_` here is just to occupy a slot in the complex pattern `(_, b, _, d)`, to make it a multivalue of four values, rather than one of only two. -12. Compound values +One last wrinkle. What if you tried to make a pattern like this: `[x, x]`, where some variable occurs multiple times. This is known as a "non-linear pattern". Some languages permit these (and require that the values being bound against `x` in the two positions be equal). Many languages don't permit that. Let's agree not to do this. - These are values which are built up out of (zero or more) simple values. +### Case and if/then/else ### - Ordered pairs in Scheme: '(2 . 3) - In OCaml: (2, 3) +In class we introduced this form of complex expression: - Lists in Scheme: '(2 3) - In OCaml: [2; 3] - We'll be explaining the difference between pairs and lists next week. +`if` φ `then` ψ `else` χ - The empty list, in Scheme: '() - In OCaml: [] +Here φ should evaluate to a boolean, and ψ and χ should evaluate to the same type. The result of the whole expression will be the same as ψ, if φ evaluates to `'true`, else to the result of χ. - The string consisting just of the eighth letter of the Latin alphabet, in Scheme: "h" - In OCaml: "h" +We said that that could be taken as shorthand for the following `case`-expression: - A longer string, in Scheme: "horse" - In OCaml: "horse" +`case` φ `of` + `'true then` ψ`;` + `'false then` χ +`end` - A shorter string, in Scheme: "" - In OCaml: "" +The `case`-expression has a list of patterns and expressions. Its initial expression φ is evaluated and then attempted to be matched against each of the patterns in turn. When we reach a pattern that can be matched---that doesn't result in a match-failure---then we evaluate the expression after the `then`, using the variable bindings in effect from the immediately preceding match. (Any match that fails has no effect on future variable bindings.) That is the result of the whole `case`-expression; we don't attempt to do any further pattern-matching after finding a pattern that succeeds. -13. Function application +If a `case`-expression gets to the end of its list of patterns, and *none* of them have matched its initial expression, the result is a pattern-matching failure. 
So it's good style to always include a final pattern that's guaranteed to match anything. You could use a simple variable for this, or the special pattern `_`: - Binary functions in OCaml: foo 2 3 - - Or: ( + ) 2 3 + case 4 of + 1 then 'true; + 2 then 'true; + x then 'false + end - These are the same as: ((foo 2) 3). In other words, functions in OCaml are "curried". foo 2 returns a 2-fooer, which waits for an argument like 3 and then foos 2 to it. ( + ) 2 returns a 2-adder, which waits for an argument like 3 and then adds 2 to it. + case 4 of + 1 then 'true; + 2 then 'true; + _ then 'false + end - In Scheme, on the other hand, there's a difference between ((foo 2) 3) and (foo 2 3). Scheme distinguishes between unary functions that return unary functions and binary functions. For our seminar purposes, it will be easiest if you confine yourself to unary functions in Scheme as much as possible. +will both evaluate to `'false`, without any pattern-matching failure. - Additionally, as said above, Scheme is very sensitive to parentheses and whenever you want a function applied to any number of arguments, you need to wrap the function and its arguments in a parentheses. +There's a superficial similarity between the `let`-constructions and the `case`-constructions. Each has a list whose left-hand sides are patterns and right-hand sides are expressions. Each also has an additional expression that stands out in a special position: in `let`-expressions at the end, in `case`-expressions at the beginning. But the relations of these different elements to each other is different. In `let`-expressions, the right-hand sides of the list supply the values that get bound to the variables in the patterns on the left-hand sides. Also, each pattern in the list will get matched, unless there's a pattern-match failure before we get to it. In `case`-expressions, on the other hand, it's the initial expression that supplies the value (or multivalues) that we attempt to match against the pattern, and we stop as soon as we reach a pattern that we can successfully match against. Then the variables in that pattern are bound when evaluating the corresponding right-hand side expression. +### Recursive let ### +Given all these tools, we're (almost) in a position to define functions like the `factorial` and `length` functions we defined in class. +Here's an attempt to define the `factorial` function: +`let` + `factorial match` λ `n. if n == 0 then 1 else n * factorial (n-1)` +`in factorial` +or, using `case`: -Computation = sequencing changes? +`let` + `factorial match` λ `n. case n of 0 then 1; _ then n * factorial (n - 1) end` +`in factorial` - Different notions of sequencing +But there's a problem here. What value does `factorial` have when evaluating the expression `factorial (n - 1)`? - Concatanation / syntactic complexity is not sequencing +As we said in class, the natural precedent for this with non-function variables would go something like this: - Shadowing is not mutating + let + x match 0; + y match x + 1; + x match x + 1; + z match 2 * x + in (y, z) - Define isn't mutating +We'd expect this to evaluate to `(1, 2)`, and indeed it does. That's because the `x` in the `x + 1` on the right-hand side of the third binding (`x match x + 1`) is evaluated under the scope of the first binding, of `x` to `0`. +We should expect the `factorial` variable in the right-hand side of our attempted definition to behave the same way. It will evaluate to whatever value it has before reaching this `let`-expression. 
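As a sanity check on that claim, here is the same sequence of bindings sketched in OCaml. This is just our own illustration of the point: OCaml's plain `let` behaves like the non-recursive `let` described above, so each right-hand side is evaluated using the bindings already in effect, and a later binding of `x` shadows the earlier one rather than changing it.

    let shadowing_demo =
      let x = 0 in
      let y = x + 1 in   (* y is 1 *)
      let x = x + 1 in   (* this x is 1; it shadows the earlier x, it doesn't mutate it *)
      let z = 2 * x in   (* z is 2 *)
      (y, z)             (* evaluates to (1, 2) *)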
We actually haven't said what the result of trying to evaluate unbound variables is, as in:

    let
      x match y + 0
    in x

    (let [(three 3) (two 2)] (+ 3 2))

Let's agree not to do that. We can consider such expressions only under the implied understanding that they are parts of larger expressions that assign a value to `y`, as for example in:

    let
      y match 1
    in let
      x match y + 0
    in x

Hence, let's understand our attempted definition of `factorial` to be part of such a larger expression:

Basics of Lambda Calculus
=========================

`let`
  `factorial match` λ `n. n`
`in let`
  `factorial match` λ `n. case n of 0 then 1; _ then n * factorial (n - 1) end`
`in factorial 4`

The lambda calculus we'll be focusing on for the first part of the course has no types. (Some prefer to say it instead has a single type---but if you say that, you have to say that functions from this type to this type also belong to this type. Which is weird.)

This would evaluate to what `4 * factorial 3` does, but with the `factorial` in the expression bound to the identity function λ `n. n`. In other words, we'd get the result `12`, not the correct answer `24`.

Here is its syntax:

For the time being, we will fix this problem by just introducing a special new construction `letrec` that works the way we want. Now in:
`let`
  `factorial match` λ `n. n`
`in letrec`
  `factorial match` λ `n. case n of 0 then 1; _ then n * factorial (n - 1) end`
`in factorial 4`

the initial binding of `factorial` to the identity function gets ignored, and the `factorial` in the right-hand side of our definition is interpreted to mean the very same function that we are hereby binding to `factorial`. Exactly how this works is a deep and exciting topic, which we will be looking at very closely in a few weeks. For the time being, let's just accept that `letrec` does what we intuitively want when defining functions recursively.

Variables: x, y, z ...

Each variable is an expression. For any expressions M and N and variable a, the following are also expressions:

Abstract: (λa M)
-
+**It's important to make sure you say letrec when that's what you want.** You may not *always* want `letrec`, though, if you're ever re-using variables (or doing other things) that rely on the bindings occurring in a specified order. With `letrec`, all the bindings in the construction happen simultaneously. This is why you can say, as Jim did in class:
-We'll tend to write (λa M) as just `(\a M)`, so we don't have to write out the markup code for the λ. You can yourself write (λa M) or `(\a M)` or `(lambda a M)`.
+`letrec`
+ `even? match` λ `n. case n of 0 then 'true; _ then odd? (n-1) end`
+ `odd? match` λ `n. case n of 0 then 'false; _ then even? (n-1) end`
+`in (even?, odd?)`
-
-Application: (M N)
-
+Here neither the `even?` nor the `odd?` pattern is matched before the other. They, and also the `odd?` and the `even?` variables in their right-hand side expressions, are all bound at once.
-Some authors reserve the term "term" for just variables and abstracts. We won't participate in that convention; we'll probably just say "term" and "expression" indiscriminately for expressions of any of these three forms.
+As we said, this is deep and exciting, and it will make your head spin before we're done examining it. But let's trust `letrec` to do its job, for now.
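For readers who want something they can type into an interpreter right away, here is a rough OCaml counterpart of these `letrec` definitions. This is only our own sketch: OCaml spells the construction `let rec`, joins simultaneous bindings with `and`, and doesn't allow `?` in variable names, so we've renamed `even?` and `odd?`.

    let rec factorial n =
      if n = 0 then 1 else n * factorial (n - 1)
    (* factorial 4 evaluates to 24, the answer we wanted *)

    let rec is_even n =
      if n = 0 then true else is_odd (n - 1)
    and is_odd n =
      if n = 0 then false else is_even (n - 1)
    (* the two bindings are made together, so each right-hand side can
       see the other; is_even 3 evaluates to false *)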
-Examples of expressions:
- x
- (y x)
- (x x)
- (\x y)
- (\x x)
- (\x (\y x))
- (x (\x x))
- ((\x (x x)) (\x (x x)))
+### Comparing recursive-style and iterative-style definitions ###
-The lambda calculus has an associated proof theory. For now, we can regard the proof theory as having just one rule, called the rule of **beta-reduction** or "beta-contraction". Suppose you have some expression of the form:
+Finally, we're in a position to revisit the two definitions of `length` that Jim presented in class. Here is the first:
- ((\a M) N)
+`letrec`
+ `length match` λ `xs. case xs of [] then 0; _:ys then 1 + length ys end`
+`in length`
-that is, an application of an abstract to some other expression. This compound form is called a **redex**, meaning it's a "beta-reducible expression." `(\a M)` is called the **head** of the redex; `N` is called the **argument**, and `M` is called the **body**.
+This function accepts a sequence `xs`, and if it's empty returns `0`; else it says that its length is `1` plus whatever is the length of its remainder when you take away the first element. In programming circles, this remainder is commonly called the sequence's "tail" (and the first element is its "head").
-The rule of beta-reduction permits a transition from that expression to the following:
+Thus if we evaluated `length [10, 20, 30]`, that would give the same result as `1 + length [20, 30]`, which would give the same result as `1 + (1 + length [30])`, which would give the same result as `1 + (1 + (1 + length []))`. But `length []` is `0`, so our original expression evaluates to `1 + (1 + (1 + 0))`, or `3`.
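Here is a rough OCaml rendering of this first, recursive-style definition, for comparison. The name `my_length` is our own (to avoid clashing with OCaml's built-in `List.length`), and `_ :: ys` is OCaml's spelling of the `_:ys` pattern.

    let rec my_length xs =
      match xs with
      | [] -> 0
      | _ :: ys -> 1 + my_length ys
    (* my_length [10; 20; 30] evaluates to 3, by the same steps traced above *)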
- M [a:=N]
+Here's another way to define the `length` function:
-What this means is just `M`, with any *free occurrences* inside `M` of the variable `a` replaced with the term `N`.
+`letrec`
+ `aux match` λ `(n, xs). case xs of [] then n; _:ys then aux (n + 1, ys) end`
+`in` λ `xs. aux (0, xs)`
-What is a free occurrence?
-
-> An occurrence of a variable `a` is **bound** in T if T has the form `(\a N)`.
-
-> If T has the form `(M N)`, any occurrences of `a` that are bound in `M` are also bound in T, and so too any occurrences of `a` that are bound in `N`.
-
-> An occurrence of a variable is **free** if it's not bound.
-
-For instance:
-
-
-> T is defined to be `(x (\x (\y (x (y z)))))`
-
-The first occurrence of `x` in T is free. The `\x` we won't regard as being an occurrence of `x`. The next occurrence of `x` occurs within a form that begins with `\x`, so it is bound. The occurrence of `y` is bound; and the occurrence of `z` is free.
-
-Here's an example of beta-reduction:
-
- ((\x (y x)) z)
-
-beta-reduces to:
-
- (y z)
-
-We'll write that like this:
-
- ((\x (y x)) z) ~~> (y z)
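If it helps to see these definitions made mechanical, here is a small, self-contained OCaml sketch of terms, free occurrences, substitution, and a single beta step. It is our own illustration, not part of the course materials, and its substitution is deliberately naive: it doesn't rename bound variables, so it is only trustworthy when no free variable of the argument would be captured (which is enough for examples like the one just given).

    type term =
      | Var of string
      | Lam of string * term    (* (\a M) *)
      | App of term * term      (* (M N)  *)

    (* does variable v occur free in t? (the definition given above;
       not used by beta below, just spelled out for reference) *)
    let rec free v t = match t with
      | Var x -> x = v
      | Lam (x, body) -> x <> v && free v body
      | App (m, n) -> free v m || free v n

    (* subst a n m  computes  M [a:=N] : replace free occurrences of a in m by n.
       Naive: performs no renaming to avoid capture. *)
    let rec subst a n m = match m with
      | Var x -> if x = a then n else m
      | Lam (x, body) -> if x = a then m else Lam (x, subst a n body)
      | App (p, q) -> App (subst a n p, subst a n q)

    (* one beta step on a top-level redex ((\a M) N) ~~> M [a:=N] *)
    let beta t = match t with
      | App (Lam (a, m), n) -> subst a n m
      | _ -> t

    (* ((\x (y x)) z) ~~> (y z) *)
    let beta_example = beta (App (Lam ("x", App (Var "y", Var "x")), Var "z"))
    (* beta_example is App (Var "y", Var "z"), that is, (y z) *)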
-
-Different authors use different notations. Some authors use the term "contraction" for a single reduction step, and reserve the term "reduction" for the reflexive transitive closure of that, that is, for zero or more reduction steps. Informally, it seems easiest to us to say "reduction" for one or more reduction steps. So when we write:
-
- M ~~> N
-
-We'll mean that you can get from M to N by one or more reduction steps. Hankin uses the symbol → for one-step contraction, and the symbol ↠ for zero-or-more step reduction. Hindley and Seldin use ⊳1 and ⊳.
-
-When M and N are such that there's some P that M reduces to by zero or more steps, and that N also reduces to by zero or more steps, then we say that M and N are **beta-convertible**. We'll write that like this:
-
- M <~~> N
-
-This is what plays the role of equality in the lambda calculus. Hankin uses the symbol `=` for this. So too do Hindley and Seldin. Personally, I keep confusing that with the relation to be described next, so let's use this notation instead. Note that `M <~~> N` doesn't mean that `M` and `N` are reducible to each other; that only holds when `M` and `N` are the same expression. (Or, with our convention of only saying "reducible" for one or more reduction steps, it never holds.)
-
-In the metatheory, it's also sometimes useful to talk about formulas that are syntactically equivalent *before any reductions take place*. Hankin uses the symbol ≡ for this. So too do Hindley and Seldin. We'll use that too, and will avoid using `=` when discussing metatheory for the lambda calculus. Instead we'll use `<~~>` as we said above. When we want to introduce a stipulative definition, we'll write it out longhand, as in:
-
-> T is defined to be `(M N)`.
-
-We'll regard the following two expressions:
-
- (\x (x y))
-
- (\z (z y))
-
-as syntactically equivalent, since they only involve a typographic change of a bound variable. Read Hankin section 2.3 for discussion of different attitudes one can take about this.
-
-Note that neither of those expressions is identical to:
-
- (\x (x w))
-
-because here it's a free variable that's been changed. Nor are they identical to:
-
- (\y (y y))
-
-because here the second occurrence of `y` is no longer free.
-
-There is plenty of discussion of this, and the fine points of how substitution works, in Hankin and in various of the tutorials we've linked to about the lambda calculus. We expect you have a good intuitive understanding of what to do already, though, even if you're not able to articulate it rigorously.
-
-
-Shorthand
----------
-
-The grammar we gave for the lambda calculus leads to some verbosity. There are several informal conventions in widespread use, which enable the language to be written more compactly. (If you like, you could instead articulate a formal grammar which incorporates these additional conventions. Instead of showing it to you, we'll leave it as an exercise for those so inclined.)
-
-
-**Dot notation** Dot means "put a left paren here, and put the right paren as far to the right as possible without creating unbalanced parentheses". So:
-
- (\x (\y (x y)))
-
-can be abbreviated as:
-
- (\x (\y. x y))
-
-and:
-
- (\x (\y. (z y) z))
-
-would abbreviate:
-
- (\x (\y ((z y) z)))
-
-This on the other hand:
-
- (\x (\y. z y) z)
-
-would abbreviate:
-
- (\x (\y (z y)) z)
-
-**Parentheses** Outermost parentheses around applications can be dropped. Moreover, applications will associate to the left, so `M N P` will be understood as `((M N) P)`. Finally, you can drop parentheses around abstracts, but not when they're part of an application. So you can abbreviate:
-
- (\x. x y)
-
-as:
-
- \x. x y
-
-but you should include the parentheses in:
-
- (\x. x y) z
-
-and:
-
- z (\x. x y)
-
-**Merging lambdas** An expression of the form `(\x (\y M))`, or equivalently, `(\x. \y. M)`, can be abbreviated as:
-
- (\x y. M)
-
-Similarly, `(\x (\y (\z M)))` can be abbreviated as:
-
- (\x y z. M)
-
-
-Lambda terms represent functions
---------------------------------
-
-All (recursively computable) functions can be represented by lambda
-terms (the untyped lambda calculus is Turing complete). For some lambda terms, it is easy to see what function they represent:
-
-> `(\x x)` represents the identity function: given any argument `M`, this function
-simply returns `M`: `((\x x) M) ~~> M`.
-
-> `(\x (x x))` duplicates its argument:
-`((\x (x x)) M) ~~> (M M)`
-
-> `(\x (\y x))` throws away its second argument:
-`(((\x (\y x)) M) N) ~~> M`
-
-and so on.
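Two of these are easy to mimic in OCaml, and doing so may help fix the ideas; the third already points to a difference we'll care about later. This is our own aside: the identity function and the function that discards its second argument type-check straightforwardly, but the duplicator `(\x (x x))` has no simple OCaml type, a first hint of how typed and untyped calculi come apart.

    let id = fun x -> x             (* (\x x)      *)
    let k  = fun x -> fun y -> x    (* (\x (\y x)) *)

    (* id 5 evaluates to 5;  k "kept" "ignored" evaluates to "kept" *)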
-
-It is easy to see that distinct lambda expressions can represent the same function, considered as a mapping from inputs to outputs. Obviously:
-
- (\x x)
-
-and:
-
- (\z z)
-
-both represent the same function, the identity function. However, we said above that we would be regarding these expressions as syntactically equivalent, so they aren't yet really examples of *distinct* lambda expressions representing a single function. On the other hand, all three of these are distinct lambda expressions:
-
- (\y x. y x) (\z z)
-
- (\x. (\z z) x)
-
- (\z z)
-
-yet when applied to any argument M, all of these will always return M. So they have the same extension. It's also true, though you may not yet be in a position to see, that no other function can differentiate between them when they're supplied as an argument to it. However, these expressions are all syntactically distinct.
-
-The first two expressions are *convertible*: in particular the first reduces to the second. So they can be regarded as proof-theoretically equivalent even though they're not syntactically identical. However, the proof theory we've given so far doesn't permit you to reduce the second expression to the third. So these lambda expressions are non-equivalent.
-
-There's an extension of the proof-theory we've presented so far which does permit this further move. And in that extended proof theory, all computable functions with the same extension do turn out to be equivalent (convertible). However, at that point, we still won't be working with the traditional mathematical notion of a function as a set of ordered pairs. One reason is that the latter but not the former permits uncomputable functions. A second reason is that the latter but not the former prohibits functions from applying to themselves. We discussed this some at the end of Monday's meeting (and further discussion is best pursued in person).
-
-
-
-Booleans and pairs
-==================
-
-Our definition of these is reviewed in [[Assignment1]].
-
-
-It's possible to do the assignment without using a Scheme interpreter; however, you should take this opportunity to [get Scheme installed on your computer](/how_to_get_the_programming_languages_running_on_your_computer), and [get started learning Scheme](/learning_scheme). It will help you test out proposed answers to the assignment.
-
-
-
-1. Declarative vs imperatival models of computation.
-2. Variety of ways in which "order can matter."
-3. Variety of meanings for "dynamic."
-4. Schoenfinkel, Curry, Church: a brief history
-5. Functions as "first-class values"
-6. "Curried" functions
-
-1. Beta reduction
-1. Encoding pairs (and triples and ...)
-1. Encoding booleans
+This may be a bit confusing. What we have here is a helper function `aux` (for "auxiliary") that accepts *two* arguments, the first being a counter of how long we've counted in the sequence so far, and the second argument being how much more of the sequence we have to inspect. If the sequence we have to inspect is empty, then we're finished and we can just return our counter. (Note that we don't return `0`.) If not, then we add `1` to the counter, and proceed to inspect the tail of the sequence, ignoring the sequence's first element. After the `in`, we can't just return the `aux` function, because it expects two arguments, whereas `length` should just be a function of a single argument, the sequence whose length we're inquiring about. What we do instead is return a λ-generated function that expects a single sequence argument `xs`, and then returns the result of calling `aux` with that sequence together with an initial counter of `0`.
+So for example, if we evaluated `length [10, 20, 30]`, that would give the same result as `aux (0, [10, 20, 30])`, which would give the same result as `aux (1, [20, 30])`, which would give the same result as `aux (2, [30])`, which would give the same result as `aux(3, [])`, which would give `3`. (This should make it clear why when `aux` is called with the empty sequence, it returns the result `n` rather than `0`.)
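Here, for comparison, is a rough OCaml rendering of this second, accumulator-style definition (again with our own names). Because the recursive call to the helper is the last thing the helper does, implementations can run definitions in this style in constant stack space; this is presumably the kind of efficiency difference the next paragraph gestures at.

    let my_length2 xs =
      let rec aux (n, xs) =
        match xs with
        | [] -> n
        | _ :: ys -> aux (n + 1, ys)
      in
      aux (0, xs)
    (* my_length2 [10; 20; 30] evaluates to 3, by the steps just traced *)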
+Programmers will sometimes define functions in the second style because it can be evaluated more efficiently than the first style. You don't need to worry about things like efficiency in this seminar. But you should become acquainted with, and comfortable with, both styles of recursive definition.
+It may be helpful to contrast these recursive-style definitions with the way one would more naturally define the `length` function in an imperatival language. This uses some constructs we haven't explained yet, but I trust their meaning will be intuitively clear enough.
+`let`
+ `empty? match` λ `xs.` *this definition left as an exercise*;
+ `tail match` λ `xs.` *this definition left as an exercise*;
+ `length match` λ `xs. let`
+ `n := 0;`
+ `while not (empty? xs) do`
+ `n := n + 1;`
+ `xs := tail xs`
+ `end`
+ `in n`
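And here, finally, is a rough OCaml counterpart of that imperative-style definition, since OCaml has mutable reference cells and while-loops alongside its functional core. The helper names are our own, and `ref`, `!`, and `:=` are OCaml's spellings of the assignment apparatus used above.

    let imperative_length xs =
      let n = ref 0 in           (* a mutable counter, initially 0 *)
      let rest = ref xs in       (* the part of the sequence still to inspect *)
      while !rest <> [] do
        n := !n + 1;
        rest := List.tl !rest    (* move past the head; safe because rest isn't empty *)
      done;
      !n
    (* imperative_length [10; 20; 30] evaluates to 3 *)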