These notes will recapitulate, make more precise, and to some degree expand what we did in the last hour of our first meeting, leading up to the definitions of the `factorial` and `length` functions.

### Getting started ###

We begin with a decidable fragment of arithmetic. Our language has some **literal values**:

    0, 1, 2, 3, ...

In fact we could get by with just the literal `0` and the `succ` function, but we will make things a bit more convenient by allowing literal expressions of any natural number. We won't worry about numbers being too big for our finite computers to handle.

We also have some predefined functions:

    succ, +, *, pred, -

Again, we might be able to get by with just `succ`, and define the others in terms of it, but we'll be a bit more relaxed. Since we want to stick with natural numbers, not the whole range of integers, we'll make `pred 0` just be `0`, and `2 - 4` also be `0`.

Here's another set of functions:

    ==, <, >, <=, >=, !=

`==` is just what we non-programmers normally express by `=`. It's a relation that holds or not between two values. Here we'll treat it as a function that takes two values as arguments and returns a **boolean** value, that is, a truth-value, as a result. The reason for using the doubled `=` symbol is that the single `=` symbol tends to get used in lots of different roles in programming, so we reserve `==` to express this meaning. I will deliberately try to minimize the uses of single `=` in this made-up language (but not eliminate it entirely), to reduce ambiguity and confusion.

The `==` relation---or as we're treating it here, the `==` *function* that returns a boolean value---can at least take two numbers as arguments. Probably it makes sense for it to take other kinds of values as arguments, too. For example, it should operate on two truth-values as well. Maybe we'd want it to operate on a number and a truth-value, too, and always return `'false` in that case? What about operating on two functions? Here we encounter the difficulty that the computer can't in general *decide* when two functions are equivalent. Let's not try to sort this all out just yet. We'll suppose that `==` can at least take two numbers as arguments, or two truth-values.

As mentioned in class, we represent the truth-values like this:

    'true, 'false

These are instances of a broader class of literal values that I called **symbolic atoms**. We'll return to them shortly. The reason we write them with an initial `'` will also be explained shortly. For now, it's enough to note that the expression:

    1 + 2 == 3

evaluates to `'true`, and the expression:

    1 + 0 == 3

evaluates to `'false`. Something else that evaluates to `'false` is the simple expression:

    'false

That is, literal values are a limiting case of expression, that evaluate to just themselves. More complex expressions like `1 + 0` don't evaluate to themselves, but rather down to literal values.

The functions `succ` and `pred` come before their arguments, like this:

    succ 1

On the other hand, the functions `+`, `*`, `-`, `==`, and so on come in between their arguments, like this:

    x < y

Functions of this latter sort are said to have an "infix" syntax. This is just a convenience for how we write them. Our language will have to keep rigorous track of which functions have infix syntax and which don't, but we'll just rely on context and our brains to make sense of this for now. Functions with the ordinary, non-infix syntax can take two arguments, as well.
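For example, mixing the two syntaxes, the expression:

    pred 4 < succ 2

evaluates to `'false`: `pred 4` evaluates to `3`, `succ 2` also evaluates to `3`, and `3 < 3` doesn't hold.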
If we had defined the less-than relation (boolean function) in that ordinary, non-infix style, we'd write it like this instead:

    lessthan? (x, y)

or perhaps like this:

    lessthan? x y

We'll get more acquainted with the difference between these next week. For now, I'll just stick to the first form.

Another set of operations we have are:

    and, or, not

The first two of these are infix functions that expect two boolean arguments, and give a boolean result. The third is a function that expects only one boolean argument. Our earlier function `!=` means "doesn't equal", and:

    x != y

will be just another way to write:

    not (x == y)

You see that you can use parentheses in the standard way.

By the way, `<=` means ≤ or "less than or equal to", and `>=` means ≥. Just in case you haven't seen them written this way before.

I've started throwing in some **variables**. We'll say variables are any expression that's written with an initial lower-case letter, then is followed by a sequence of zero or more upper- or lower-case letters, or numerals, or underscores (`_`). Then at the end you can optionally have a `?` or `!` or a sequence of `'`s, understood as "primes." Hence, all of these are legal variables:

    x    x1    x_not_y    xUBERANT    x'    x''    x?    xs

We'll follow a *convention* of using variables with short names and a final `s` to represent collections like sequences (to be discussed below). But this is just a convention to help us remember what we're up to, not a strict rule of the language.

We'll also follow a convention of only using variables ending in `?` to represent functions that return a boolean value. Thus, for example, `zero?` will be a function that expects a single number argument and returns a boolean corresponding to whether that number is `0`. `odd?` will be a function that expects a single number argument and returns a boolean corresponding to whether that number is odd. Above, I suggested we might use `lessthan?` to represent a function that expects *two* number arguments, and again returns a boolean result. We also conventionally reserve variables ending in `!` for a different special class of functions, that we will explain later in the course.

In fact you can think of `succ` and `pred` and `not` and the rest as also being variables; it's just that these variables have been pre-defined in our language to be bound to functions we agreed upon in advance. You can even think of `==` and `<` as being variables, too, bound to other functions. But I haven't given you parsing rules yet which would make them legal variables, because they don't start with a lower-case letter. We can make the parsing rules more liberal later.

Only a few simple expressions in our language aren't variables. These include the literal values, and also **keywords** like `let` and `case` and so on that we'll discuss below. You can't use `let` as a variable, else the syntax of our language would become too hard to mechanically parse. (And probably too hard for our meager brains to parse, too.)

The rule for symbolic atoms is that a single quote `'` followed by any single word that could be a legal variable expresses such an atom, a different atom for each different expression. Thus `'false` is a symbolic atom, but so too are `'x` and `'succ`. For the time being, I'll restrict myself to only talking about the symbolic atoms `'true` and `'false`. These constitute a special subclass of symbolic atoms that we call the **booleans** or truth-values. Nothing deep hangs on them being a subclass of a larger type in this way; it just seems elegant.
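Since `'true` and `'false` are just values that our functions can return and consume, expressions built from the comparison and boolean operations above can nest freely. For example:

    (1 < 2) and not (2 < 1)

evaluates to `'true`: `1 < 2` evaluates to `'true`, `2 < 1` evaluates to `'false`, so `not (2 < 1)` evaluates to `'true`, and the whole conjunction evaluates to `'true`.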
Some other languages make booleans their own special type, not a subclass of another type. Others make them a subclass of the numbers (yuck). We will stick with treating them as a special subclass of the symbolic atoms.

Note that when writing a symbolic atom there is no closing `'`, just a `'` at the beginning. That's enough to make the whole word, up to the next space (or whatever), count as expressing a symbolic atom. We use the initial `'` to make it easy for us to have a rich set of symbolic atoms, as well as a rich set of variables, without getting them mixed up. Variables never begin with `'`; symbolic atoms always do.

We call these things symbolic *atoms* because they aren't collections. Thus numbers are also atoms, but not symbolic ones. And functions are also atoms, but again, not symbolic ones.

Functions are another class of values we'll have in our language. They aren't "literal" values, though. Numbers and symbolic atoms are simple expressions in the language that evaluate to themselves. That's what we mean by calling them "literals." Functions aren't expressions in the language at all; they have to be generated from the evaluation of more complex expressions.

(By the way, I really am serious in thinking of *the numbers themselves* as being expressions in this language, rather than some "numerals" that aren't themselves numbers. We'll talk about this down the road. For now, don't worry about it too much.)

I said we wanted to start with a fragment of arithmetic, so we'll keep the function values off-stage for the moment, and also all the symbolic atoms except for `'true` and `'false`. So we've got numbers, truth-values, and some functions and relations (that is, boolean functions) defined on them.

We also help ourselves to a notion of bounded quantification, as in ∀`x < M.` φ, where `M` and φ are (simple or complex) expressions that evaluate to a number and a boolean, respectively. We limit ourselves to *bounded* quantification so that the fragment we're dealing with can be "effectively" or mechanically decided. (As we extend the language, we will lose that property, but it will be a topic for later discussion exactly when that happens.)

As I mentioned in class, I will sometimes write ∀ x : ψ . φ in my informal metalanguage, where the ψ clause represents the quantifier's *restrictor*. Other people write this like `[`∀ x : ψ `]` φ, or in various other ways. My notation is meant to parallel the notation some linguists (for example, Heim & Kratzer) use in writing λ x : ψ . φ, where the ψ clause restricts the range of arguments over which the function designated by the λ-expression is defined. Later we will see the colon used in a somewhat similar (but also somewhat different) way in our programming languages. But that's foreshadowing.

### Let and lambda ###

So we have bounded quantification as in ∀ `x < 10.` φ. Obviously we could also make sense of ∀ `x == 5.` φ in just the same way. This would evaluate φ but with the variable `x` now bound to the value `5`, ignoring whatever it may be bound to in broader contexts. I will express this idea in a more perspicuous vocabulary, like this: `let x be 5 in` φ. (I say `be` rather than `=` because, as I mentioned before, it's too easy for the `=` sign to get used for too many subtly different jobs.)

As one of you was quick to notice in class, when I shift to the `let`-vocabulary, I no longer restrict myself to just the case where φ evaluates to a boolean. I also permit myself expressions like this:

    let x be 5 in x + 1

which evaluates to `6`. That's right.
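Here's one more example of the same shape, with the steps spelled out:

    let x be succ 2 in x * x

The right-hand side `succ 2` evaluates to `3`, the variable `x` gets bound to `3`, and the body `x * x` then evaluates to `9` under that binding.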
I am moving beyond the ∀ `x == 5.` φ idea when I do this. But the rules for how to interpret this are just a straightforward generalization of our existing understanding for how to interpret bound variables. So there's nothing fundamentally novel here.

We can have multiple `let`-expressions embedded, as in:

    let y be (let x be 5 in x + 1) in 2 * y

    let x be 5 in let y be x + 1 in 2 * y

both of which evaluate to `12`. When we have a stack of `let`-expressions as in the second example, I will write it like this:

    let
      x be 5;
      y be x + 1
    in 2 * y

It's okay to also write it all inline, like so: `let x be 5; y be x + 1 in 2 * y`. The `;` represents that we have a couple of `let`-bindings coming in sequence. The earlier bindings in the sequence are considered to be in effect for the later right-hand expressions in the sequence. Thus in:

    let x be 0 in (let x be 5; y be x + 1 in 2 * y)

the `x + 1` that is evaluated to give the value that `y` gets bound to uses the (more local) binding of `x` to `5`, not the (previous, less local) binding of `x` to `0`. By the way, the parentheses in that displayed expression were just to focus your attention. It would have parsed and meant the same without them.

Now we can allow ourselves to introduce λ-expressions in the following way. If a λ-expression is applied to an argument, as in: `(`λ `x.` φ`) M`, for any (simple or complex) expressions φ and `M`, this means the same as: `let x be M in` φ. That is, the argument `M` to the λ-expression provides (when evaluated) a value for the variable `x` to be bound to, and then the result of the whole thing is whatever φ evaluates to, under that binding to `x`.

If we restricted ourselves to only that usage of λ-expressions, that is, when they were applied to all the arguments they're expecting, then we wouldn't have moved very far from the decidable fragment of arithmetic we began with. However, it's tempting to help ourselves to the notion of (at least partly) *unapplied* λ-expressions, too. If I can make sense of what:

    (λ x. x + 1) 5

means, then I can make sense of what:

    (λ x. x + 1)

means, too. It's just *the function* that waits for an argument and then returns the result of `x + 1` with `x` bound to that argument. This does take us beyond our (first-order) fragment of arithmetic, at least if we allow the bodies and arguments of λ-expressions to be any expressible value, including other λ-expressions. But we're having too much fun, so why should we hold back?

So now we have a new kind of value our language can work with, alongside numbers and booleans. We now have function values, too. We can bind these function values to variables just like other values:

    let id be λ x. x; y be id 5 in y

evaluates to `5`. In reaching that result, the variable `id` was temporarily bound to the identity function, that expects an argument, binds it to the variable `x`, and then returns the result of evaluating `x` under that binding.

This is what is going on, behind the scenes, with all the expressions like `succ` and `+` that I said could really be understood as variables. They have just been pre-bound to certain agreed-upon functions rather than others.

### Containers ###

So far, we've only been talking about *atomic* values. Our language will also have some *container* values, that have other values as members. One example is **ordered sequences**, like:

    [10, 20, 30]

This is a sequence of length 3. It's the result of *cons*ing the value `10` onto the front of the shorter, length-2 sequence `[20, 30]`.
In this made-up language, we'll represent the sequence-consing operation like this:

    10 & [20, 30]

If you want to know why we call it "cons", that's because this is what the operation is called in Scheme, and they call it that as shorthand for "constructing" the longer list (they call it a "list" rather than a "sequence") out of the components `10` and `[20, 30]`. The name is a bit unfortunate, though, because other structured values besides lists also get "constructed", but we don't say "cons" about them. Still, this is the tradition. Let's just take "cons" to be a nonsense label with an interesting back-history.

The sequence `[20, 30]` in turn is the result of:

    20 & [30]

and the sequence `[30]` is the result of consing `30` onto the empty sequence `[]`. Note that the sequence `[30]` is not the same as the number `30`. The former is a container value, with one element. The latter is an atomic value, and as such won't have any elements. If you try to do this:

    [30] + 1

it won't work. We haven't discussed what happens with illegal expressions like that, or like `'true + 1`. For the time being, I'll just say these "don't work", or that they "crash". We'll discuss the variety of ways these illegalities might be handled later.

Also, if you try to do this:

    20 & 30

it won't work. The consing operator `&` always requires a container (here, a sequence) on its right-hand side. And `30` is not a container.

We've said that:

    [10, 20, 30]

is the same as:

    10 & (20 & (30 & []))

and the latter can also be written without the parentheses. Our language knows that `&` should always be understood as "implicitly associating to the right", that is, that:

    10 & 20 & 30 & []

should be interpreted like the expression displayed before. Other operators like `-` should be understood as "implicitly associating to the left." If we write:

    30 - 2 - 1

we presumably want it to be understood as:

    (30 - 2) - 1

not as:

    30 - (2 - 1)

Other operators don't implicitly associate at all. For example, you may understand the expression:

    10 < x < 20

because we have familiar conventions about what it means. But what it means is not:

    (10 < x) < 20

The result of the parenthesized expression is either `'true` or `'false`, assuming `x` evaluates to a number. But `'true < 20` doesn't mean anything, much less what we expect `10 < x < 20` to mean. So `<` doesn't implicitly associate to the left. Neither does it implicitly associate to the right. If you want expressions like `10 < x < 20` to be meaningful, they will need their own special rules.

Sequences are containers that keep track of the order of their arguments, and also those arguments' multiplicity (how many times each one appears). Other containers might also keep track of these things, and more structural properties too, or they might keep track of less. Let's say we also have **set containers** too, like this:

    {10, 20, 30}

Whereas the sequences `[10, 20, 10]`, `[10, 20]`, and `[20, 10]` are three different sequences, `{10, 20, 10}`, `{10, 20}`, and `{20, 10}` would just be different ways of expressing a single set. We can let the `&` operator do extra duty, and express the "consing" relation for sets, too:

    10 & {20}

evaluates to `{10, 20}`, and so too does:

    10 & {10, 20}

As I mentioned in class, we'll let `&&` express the operation by which two sequences are appended or concatenated to each other:

    [10, 20] && [30, 40, 50]

evaluates to `[10, 20, 30, 40, 50]`.
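It may help to trace one of these consing chains step by step. Since `&` associates to the right:

    10 & 20 & 30 & []

is understood as `10 & (20 & (30 & []))`, and evaluates from the inside out: `30 & []` gives `[30]`, then `20 & [30]` gives `[20, 30]`, and finally `10 & [20, 30]` gives `[10, 20, 30]`.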
For sets, we'll let `and` and `or` and `-` do extra duty, and express set intersection, set union, and set subtraction, when their arguments are sets. If the arguments of `and` and `or` are booleans, on the other hand, or the arguments of `-` are numbers, then they express the functions we were understanding them to express before.

In addition to sequences, there's another kind of expression that might initially be confused with them. We might call these **tuples** or **multivalues**. They are written surrounded by parentheses rather than square brackets. Here's an example:

    (0, 'true, λ x. x)

That's a multivalue or tuple with 3 elements (also called a "triple").

In the programming languages and other formal systems we'll be looking at, tuples and sequences are usually understood and handled differently. This is because we apply different assumptions to them. In the case of a sequence, it's assumed that they will have homogeneously-typed elements, and that their length will be irrelevant to their own type. So you can have the sequence:

    [20, 30]

and the sequence:

    [30]

and even the sequence:

    []

and these will all be of the same type, namely a sequence of numbers. You can have sequences with other types of elements, too, for example a sequence of booleans:

    ['true, 'false, 'true]

or a sequence of sequences of numbers:

    [[10, 20], [], [30]]

An excellent question that came up in class is "How do we tell whether `[]` expresses the empty sequence of numbers or the empty sequence of something else?" We will discuss that question in later weeks. It's central to some of the developments we'll be exploring. For now, just put that question on a mental shelf and assume that somehow this just works out right.

Now whereas sequences expect homogeneously-typed elements, and their length is irrelevant to their own type, multivalues or tuples are the opposite in both respects. They may have elements of heterogeneous type, as our example:

    (0, 'true, λ x. x)

did. They need not, but they may. Also, the type of a multivalue or tuple does depend on its length, and moreover on the specific types of each of its elements. A tuple of length 2 (also called a "pair") whose first element is a number and second element is a boolean is a different type of thing than a tuple whose first element is a boolean and whose second element is a number. Most functions expecting the first as an argument will "crash" if you give them the second instead.

Earlier I said that we can call these things "multivalues or tuples". Here I'll make a technical comment, that in fact I'll understand these slightly differently. Really I'll understand the bare expression `(10, x)` to express a multivalue, and to express a tuple proper, you'll have to write `Pair (10, x)` or something like that. The difference between these is that only the tuple proper is a single value that can be bound to a single variable. The multivalue isn't a single value at all, but rather a plurality of values. This is a bit subtle, and other languages we're looking at this term don't always make this distinction. But the result is that they have to say complicated things elsewhere. If we permit ourselves this fine distinction here, many other things downstream will go more smoothly than they do in the languages that don't make it. Ours is just a made-up language, but I've thought this through carefully, so humor me.
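To keep the contrast concrete: the sequences

    [20, 30]
    [30]
    []

all have one and the same type (a sequence of numbers), no matter their length, whereas the pairs

    (0, 'true)
    ('true, 0)

are of two different types, since which type of element occupies each position is part of the type of the whole.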
We haven't yet introduced the apparatus to make sense of expressions like `Pair (10, x)`, so for the time being I'll just restrict myself to multivalues, not to tuples proper. The result will be that while we can say:

    let x be [10, 20] in ...

that is, sequences are first-class values in our language, we can't say:

    let x be (10, 'true) in ...

or even:

    let x be (10, 20) in ...

However, intuitively it ought to make sense to say:

    let (x, y) be (10, 'true) in ...

That should just bind the variable `x` to the value `10` and the variable `y` to the value `'true`, and go on to evaluate the rest of the expression with those bindings in place. In this particular example, we could equally have said:

    let x be 10; y be 'true in ...

but in other examples it will be substantially more convenient to be able to bind `x` and `y` simultaneously. Here's an example:

    let
      f be λ x. (x, 2*x);
      (x, y) be f 10
    in [x, y]

which evaluates to `[10, 20]`. Note that we have the function `f` returning two values, rather than just one, just by having its body evaluate to a multivalue rather than to a single value.

It's a little bit awkward to say `let (x, y) be ...`, so I propose we instead always say `let (x, y) match ...`. (This will be even more natural as we continue generalizing what we've done here, as we will in the next section.) For consistency, we'll say `match` instead of `be` in all cases, so that we write even this:

    let x match 10 in ...

rather than:

    let x be 10 in ...

### Patterns ###

What we just introduced is what's known in programming circles as a "pattern". Patterns can look superficially like expressions, but the context in which they appear determines that they are interpreted as patterns, not as expressions. The left-hand sides of the binding lists of a `let`-expression are always patterns. Simple variables are patterns. Interestingly, literal values are also patterns. So you can say things like this:

    let 0 match 0; [] match []; 'true match 'true in ...

(`[]` is also a literal value, like `0` and `'true`.) This isn't very useful in this example, but it will enable us to do interesting things later.

So variables are patterns and literal values are patterns. Also, a multivalue of patterns is itself a pattern. (Strictly speaking, it's only a multipattern, but I won't fuss about this here.) That's why we can have `(x, y)` on the left-hand side of a `let`-binding: it's a pattern, just like `x` is. Notice that `(x, 10)` is also a pattern. So we can say this:

    let (x, 10) match (2, 10) in x

which evaluates to `2`. What if you did, instead:

    let (x, 10) match (2, 100) in x

or, more perversely:

    let (x, 10) match 2 in x

Those will be pattern-matching failures. The pattern has to "fit" the value it's being matched against, and that requires having the same structure, and also having the same literal values in whatever positions the pattern specifies literal values. A pattern-matching failure in a `let`-expression makes the whole expression "crash." Shortly though we'll consider `case`-expressions, which can recover from pattern-match failures in a useful way.

We can also allow ourselves some other kinds of complex patterns. For example, if `p` and `ps` are two patterns, then `p & ps` will also be a pattern, that can match non-empty sequences and sets. When this pattern is matched against a non-empty sequence, we take the first value in the sequence and match it against the pattern `p`; we take the rest of the sequence and match it against the pattern `ps`.
(If either of those results in a pattern-matching failure, then `p & ps` fails to match too.) For example:

    let x & xs match [10, 20, 30] in (x, xs)

evaluates to the multivalue `(10, [20, 30])`.

When the pattern `p & ps` is matched against a non-empty set, we just arbitrarily choose one value in the set and match it against the pattern `p`; and match the rest of the set, with that value removed, against the pattern `ps`. You cannot control what order the values are chosen in. Thus:

    let x & xs match {10, 20, 30} in (x, xs)

might evaluate to `(20, {10, 30})` or to `(30, {10, 20})` or to `(10, {30, 20})`, or to one of these on Mondays and another on Tuesdays, and never to the third. You cannot control it or predict it. It's good style to only pattern match against sets when the final result will be the same no matter in what order the values are selected from the set.

A question that came up in class was whether `x + y` could also be a pattern. In this language (and most languages), no. The difference between `x & xs` and `x + y` is that `&` is a *constructor* whereas `+` is a *function*. We will be talking about this more in later weeks. For now, just take it that `&` is special. Not every way of forming a complex expression corresponds to a way of forming a complex pattern.

Since, as we said, `x & xs` is a pattern, we can let `x1 & x2 & xs` be a pattern as well, the same as `x1 & (x2 & xs)`. And since, when we're dealing with expressions, we said that:

    [x1, x2]

is the same as:

    x1 & x2 & []

we might as well allow this for patterns, too, so that:

    [x1, x2]

is a pattern, meaning the same as `x1 & x2 & []`. Note that while `x & xs` matches *any* non-empty sequence, of length one or more, `[x1, x2]` only matches sequences of length exactly two.

For the time being, these are the only patterns we'll allow. But since the definition of patterns is recursive, this permits very complex patterns. What would this evaluate to?

    let ([xs, ys], [z & zs, ws]) match ([[], [1]], [[10, 20, 30], [0]]) in z & ys

Also, we will permit complex patterns in λ-expressions, too. So you can write:

    λ (x, y). φ

as well as:

    λ x. φ

You can even write:

    λ [x, 10]. φ

Just be sure to always supply that function with arguments that are two-element sequences whose second element is `10`. If you don't, you will have a pattern-matching failure and the interpretation of your expression will "crash". Thus, you can now do things like this:

    let
      f match λ (x, y). (x, x + y, x + 2*y, x + 3*y);
      (a, b, c, d) match f (10, 1)
    in (b, d)

which evaluates `f (10, 1)` to `(10, 11, 12, 13)`, which it matches against the complex pattern `(a, b, c, d)`, binding all four of the contained variables, and then evaluates `(b, d)` under those bindings, giving us the result `(11, 13)`.

Notice that in the preceding expression, the variables `a` and `c` were never used. So the values they're bound to are ignored or discarded. We're allowed to do that, but there's also a special syntax to indicate that this is what we're up to. This uses the special pattern `_`:

    let
      f match λ (x, y). (x, x + y, x + 2*y, x + 3*y);
      (_, b, _, d) match f (10, 1)
    in (b, d)

The role of `_` here is just to occupy a slot in the complex pattern `(_, b, _, d)`, to make it a multivalue of four values, rather than one of only two.

One last wrinkle. What if you tried to make a pattern like this: `[x, x]`, where some variable occurs multiple times? This is known as a "non-linear pattern".
Some languages permit these (and require that the values being bound against `x` in the two positions be equal). Many languages don't permit it. Let's agree not to do this.

### Case and if ... then ... else ... ###

In class we introduced this form of complex expression:

    if φ then ψ else χ

Here φ should evaluate to a boolean, and ψ and χ should evaluate to the same type. The result of the whole expression will be the same as ψ, if φ evaluates to `'true`, else to the result of χ. We said that that could be taken as shorthand for the following `case`-expression:

    case φ of
      'true then ψ;
      'false then χ
    end

The `case`-expression has a list of patterns and expressions. Its initial expression φ is evaluated and then attempted to be matched against each of the patterns in turn. When we reach a pattern that can be matched---that doesn't result in a match-failure---then we evaluate the expression after the `then`, using any variable bindings in effect from the immediately preceding match. (Any match that fails has no effect on future variable bindings. In this example, there are no variables in our patterns, so it's irrelevant.) What that right-hand expression evaluates to becomes the result of the whole `case`-expression. We don't attempt to do any further pattern-matching after finding a pattern that succeeds.

If a `case`-expression gets to the end of its list of patterns, and *none* of them have matched its initial expression, the result is a pattern-matching failure. So it's good style to always include a final pattern that's guaranteed to match anything. You could use a simple variable for this, or the special pattern `_`:

    case 4 of
      1 then 'true;
      2 then 'true;
      x then 'false
    end

    case 4 of
      1 then 'true;
      2 then 'true;
      _ then 'false
    end

Both will evaluate to `'false`, without any pattern-matching failure.

There's a superficial similarity between the `let`-constructions and the `case`-constructions. Each has a list whose left-hand sides are patterns and right-hand sides are expressions. Each also has an additional expression that stands out in a special position: in `let`-expressions at the end, in `case`-expressions at the beginning. But the relations of these different elements to each other are different. In `let`-expressions, the right-hand sides of the list supply the values that get bound to the variables in the patterns on the left-hand sides. Also, each pattern in the list will get matched, unless there's a pattern-match failure before we get to it. In `case`-expressions, on the other hand, it's the initial expression that supplies the value (or multivalues) that we attempt to match against the patterns, and we stop as soon as we reach a pattern that we can successfully match against. Then the variables in that pattern are thereby bound when evaluating the corresponding right-hand side expression.

### Recursive let ###

Given all these tools, we're (almost) in a position to define functions like the `factorial` and `length` functions we defined in class. Here's an attempt to define the `factorial` function:

    let
      factorial match λ n. if n == 0 then 1 else n * factorial (n - 1)
    in factorial

or, using `case`:

    let
      factorial match λ n. case n of 0 then 1; _ then n * factorial (n - 1) end
    in factorial

But there's a problem here. What value does `factorial` have when evaluating the subexpression `factorial (n - 1)`?
As we said in class, the natural precedent for this with non-function variables would go something like this:

    let
      x match 0;
      y match x + 1;
      x match x + 1;
      z match 2 * x
    in (y, z)

We'd expect this to evaluate to `(1, 2)`, and indeed it does. That's because the `x` in the `x + 1` on the right-hand side of the third binding (`x match x + 1`) is evaluated under the scope of the first binding, of `x` to `0`.

We should expect the `factorial` variable in the right-hand side of our attempted definition to behave the same way. It will evaluate to whatever value it has before reaching this `let`-expression. We actually haven't said what is the result of trying to evaluate unbound variables, as in:

    let x match y + 0 in x

Let's agree not to do that. We can consider such expressions only under the implied understanding that they are parts of larger expressions that assign a value to `y`, as for example in:

    let y match 1 in let x match y + 0 in x

Hence, let's understand our attempted definition of `factorial` to be part of such a larger expression:

    let
      factorial match λ n. n
    in let
      factorial match λ n. case n of 0 then 1; _ then n * factorial (n - 1) end
    in factorial 4

This would evaluate to what `4 * factorial 3` does, but with the `factorial` in that expression bound to the identity function λ `n. n`. In other words, we'd get the result `12`, not the correct answer `24`.

For the time being, we will fix this problem by just introducing a special new construction `letrec` that works the way we want. Now in:

    let
      factorial match λ n. n
    in letrec
      factorial match λ n. case n of 0 then 1; _ then n * factorial (n - 1) end
    in factorial 4

the initial binding of `factorial` to the identity function gets ignored, and the `factorial` in the right-hand side of our definition is interpreted to mean the very same function that we are hereby binding to `factorial`. Exactly how this works is a deep and exciting topic, that we will be looking at very closely in a few weeks. For the time being, let's just accept that `letrec` does what we intuitively want when defining functions recursively. **It's important to make sure you say letrec when that's what you want.** You may not *always* want `letrec`, though, if you're ever re-using variables (or doing other things) that rely on the bindings occurring in a specified order. With `letrec`, all the bindings in the construction happen simultaneously. This is why you can say, as Jim did in class:

    letrec
      even? match λ n. case n of 0 then 'true; _ then odd? (n - 1) end;
      odd? match λ n. case n of 0 then 'false; _ then even? (n - 1) end
    in (even?, odd?)

Here neither the `even?` nor the `odd?` pattern is matched before the other. They, and also the `odd?` and the `even?` variables in their right-hand side expressions, are all bound at once. As we said, this is deep and exciting, and it will make your head spin before we're done examining it. But let's trust `letrec` to do its job, for now.

### Comparing recursive-style and iterative-style definitions ###

Finally, we're in a position to revisit the two definitions of `length` that Jim presented in class. Here is the first:

    letrec
      length match λ xs. case xs of [] then 0; _ & ys then 1 + length ys end
    in length

This function accepts a sequence `xs`, and if it's empty returns `0`, else it says that its length is `1` plus whatever is the length of its remainder when you take away the first element.
In programming circles, this remainder is commonly called the sequence's "tail" (and the first element is its "head"). Thus if we evaluated `length [10, 20, 30]`, that would give the same result as `1 + length [20, 30]`, which would give the same result as `1 + (1 + length [30])`, which would give the same result as `1 + (1 + (1 + length []))`. But `length []` is `0`, so our original expression evaluates to `1 + (1 + (1 + 0))`, or `3`.

Here's another way to define the `length` function:

    letrec
      aux match λ (n, xs). case xs of [] then n; _ & ys then aux (n + 1, ys) end
    in λ xs. aux (0, xs)

This may be a bit confusing. What we have here is a helper function `aux` (for "auxiliary") that accepts *two* arguments, the first being a counter of how long we've counted in the sequence so far, and the second argument being how much more of the sequence we have to inspect. If the sequence we have to inspect is empty, then we're finished and we can just return our counter. (Note that we don't return `0`.) If not, then we add `1` to the counter, and proceed to inspect the tail of the sequence, ignoring the sequence's first element. After the `in`, we can't just return the `aux` function, because it expects two arguments, whereas `length` should just be a function of a single argument, the sequence whose length we're inquiring about. What we do instead is return a λ-generated function, that expects a single sequence argument `xs`, and then returns the result of calling `aux` with that sequence together with an initial counter of `0`.

So for example, if we evaluated `length [10, 20, 30]`, that would give the same result as `aux (0, [10, 20, 30])`, which would give the same result as `aux (1, [20, 30])`, which would give the same result as `aux (2, [30])`, which would give the same result as `aux (3, [])`, which would give `3`. (This should make it clear why when `aux` is called with the empty sequence, it returns the result `n` rather than `0`.)

Programmers will sometimes define functions in the second style because it can be evaluated more efficiently than the first style. You don't need to worry about things like efficiency in this seminar. But you should become acquainted with, and comfortable with, both styles of recursive definition.

It may be helpful to contrast these recursive-style definitions to the way one would more naturally define the `length` function in an imperatival language. This uses some constructs we haven't explained yet, but I trust their meaning will be intuitively clear enough.

    let
      empty? match λ xs. (this definition left as an exercise);
      tail match λ xs. (this definition left as an exercise);
      length match λ xs. let
                           n := 0;
                           while not (empty? xs) do
                             n := n + 1;
                             xs := tail xs
                           end
                         in n
    in length

Here there is no recursion. Rather what happens is that we *initialize* the variable `n` with the value `0`, and then so long as our sequence variable `xs` is non-empty, we *increment* that variable `n`, and *overwrite* the variable `xs` with the tail of the sequence that it is then bound to, and repeat in a loop (the `while ... do ... end` construction).
This is similar to what happens in our second definition of `length`, using `aux`, but here it happens using *mutation* or *overwriting* the values of variables, and a special looping construction, whereas in the preceding definitions we achieved the same effect instead with recursion. We will be looking closely at mutation later in the term. For the time being, our focus will instead be on the recursive and *immutable* style of doing things---meaning no variables get overwritten.

It's helpful to observe that in expressions like:

    let
      x match 0;
      y match x + 1;
      x match x + 1;
      z match 2 * x
    in (y, z)

the variable `x` has not been *overwritten* (mutated). Rather, we have *two* variables `x`, and it's just that the second one is *hiding* the first so long as its scope is in effect. Once its scope expires, the original variable `x` is still in place, with its original binding. A different example should help clarify this. What do you think this:

    let
      x match 0;
      (y, z) match let x match x + 1 in (x, 2*x)
    in ([y, z], x)

evaluates to? Well, consider the right-hand side of the second binding:

    let x match x + 1 in (x, 2*x)

This expression evaluates to `(1, 2)`, because it uses the outer binding of `x` to `0` for the right-hand side of its own binding `x match x + 1`. That gives us a new binding of `x` to `1`, which is in place when we evaluate `(x, 2*x)`. That's why the whole thing evaluates to `(1, 2)`. So now returning to the outer expression, `y` gets bound to `1` and `z` to `2`. But now what is `x` bound to in the final line, `([y, z], x)`? The binding of `x` to `1` was in place only until we got to `(x, 2*x)`. After that its scope expired, and the original binding of `x` to `0` reappears. So the final line evaluates to `([1, 2], 0)`.

This is very like what happens in ordinary predicate logic if you say:

∃ `x. F x and (` ∀ `x. G x ) and H x`

The `x` in `F x` and in `H x` are governed by the outermost quantifier, and only the `x` in `G x` is governed by the inner quantifier.

### That's enough ###

This was a lot of material, and you may need to read it carefully and think about it, but none of it should seem profoundly different from things you're already accustomed to doing. What we worked our way up to was just the kind of recursive definitions of `factorial` and `length` that you volunteered in class, before learning any programming.

You have all the materials you need now to do this week's [[assignment|/exercises/assignment1]]. Some of you may find it easy. Many of you will not. But if you understand what we've done here, and give it your time and attention, we believe you can do it. There are also some [[advanced notes|topics/week1_kapulet_advanced]] extending this week's material.

### Summary ###

Here is the hierarchy of **values** that we've talked about so far.

*   Multivalues
*   Singular values, including:
    *   Atoms, including:
        *   Numbers: these are among the **literals**
        *   Symbolic atoms: these are also among the **literals**, and include:
            *   Booleans (or truth-values)
        *   Functions: these are not literals, but instead have to be generated by evaluating complex expressions
    *   Containers, including:
        *   the **literal containers** `[]` and `{}`
        *   Non-empty sequences, built using `&`
        *   Non-empty sets, built using `&`
        *   Tuples proper and other containers, to be introduced later

We've also talked about a variety of **expressions** in our language, that evaluate down to various values (if their evaluation doesn't "crash" or otherwise go awry).
These include:

*   All of the literal atoms and literal containers
*   Variables
*   Complex expressions that apply `&` or some variable understood to be bound to a function to some arguments
*   Various other complex expressions involving the keywords λ or `let` or `letrec` or `case`

The special syntaxes `[10, 20, 30]` are just shorthand for the more official syntax using `&` and `[]`, and likewise for `{10, 20, 30}`. The `if ... then ... else ...` syntax is just shorthand for a `case`-construction using the literal patterns `'true` and `'false`.

We also talked about **patterns**. These aren't themselves expressions, but form part of some larger expressions.
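To pull several of these pieces together, here is one last definition in the same style as `length`: a `sum` function that adds up the numbers in a sequence. (This isn't something we defined in class; it's assembled entirely from the constructs above.)

    letrec
      sum match λ xs. case xs of [] then 0; y & ys then y + sum ys end
    in sum [10, 20, 30]

This evaluates to what `10 + sum [20, 30]` does, and so on down to `10 + (20 + (30 + 0))`, or `60`. It uses a `letrec`, a `case`-expression, the literal pattern `[]`, and the complex pattern `y & ys`, all as described above.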