Declarative/functional vs Imperatival/dynamic models of computation
====================================================================

Many of you, like us, will have grown up thinking the paradigm of computation is a sequence of changes. Let go of that. It will take some care to separate the operative notion of "sequencing" here from other notions close to it, but once that's done, you'll see that languages that have no significant notions of sequencing or changes are Turing complete: they can perform any computation we know how to describe.

In itself, that only puts them on equal footing with more mainstream, imperatival programming languages like C and Java and Python, which are also Turing complete. But further, the languages we want you to become familiar with can reasonably be understood to be more fundamental. They embody the elemental building blocks that computer scientists use when reasoning about and designing other languages. Jim offered the metaphor: think of imperatival languages, which include "mutation" and "side-effects" (we'll flesh out these keywords as we proceed), as the pâté of computation. We want to teach you about the meat and potatoes, where, as it turns out, there is no sequencing and no changes. There's just the evaluation or simplification of complex expressions.

Now, when you ask the Scheme interpreter to simplify an expression for you, that's a kind of dynamic interaction between you and the interpreter. You may wonder, then, why these languages should not also be understood imperatively. The difference is that in a purely declarative or functional language, there are no dynamic effects in the language itself. It's just a static semantic fact about the language that one expression reduces to another. You may have verified that fact through your dynamic interactions with the Scheme interpreter, but that's different from saying that there are dynamic effects in the language itself. What the latter would amount to will become clearer as we build our way up to languages which are genuinely imperatival or dynamic.

Many of the slogans and keywords we'll encounter in discussions of these issues call for careful interpretation. They mean various different things. For example, you'll encounter the claim that declarative languages are distinguished by their **referential transparency**. What's meant by this is not always exactly the same, and as a cluster, it's related to but not the same as what this term means for philosophers and linguists.

The notion of **function** that we'll be working with will be one that, by default, sometimes counts functions as non-identical even when they map all the same inputs to the very same outputs. For example, two functions from jumbled decks of cards to sorted decks of cards may use different algorithms and hence be different functions. It's possible to enhance the lambda calculus so that functions do get identified when they map all the same inputs to the same outputs. This is called making the calculus **extensional**. Church called languages which didn't do this **intensional**. If you try to understand that kind of "intensionality" in terms of functions from worlds to extensions (an idea also associated with Church), you may hurt yourself. So too if you try to understand it in terms of mental stereotypes, another notion sometimes designated by "intension."
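To make the point about function identity concrete, here is a minimal Scheme sketch. It's our own illustration, and the procedure names are invented for the occasion:

    ; Two procedures that map every input to the very same output, but
    ; compute it by different routes. On the default, "intensional" view
    ; they count as different functions; an extensional calculus would
    ; identify them.
    (define (double-by-multiplying n) (* n 2))
    (define (double-by-adding n)      (+ n n))

    (double-by-multiplying 7)   ; => 14
    (double-by-adding 7)        ; => 14

Since any call to either procedure can be replaced by its value without changing the result of the surrounding expression, both are referentially transparent; what distinguishes them is only how they arrive at that value.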
It's often said that dynamic systems are distinguished because they are the ones in which **order matters**. However, there are many ways in which order can matter. If we have a trivalent boolean system, for example---easily had in a purely functional calculus---we might choose to give a truth-table like this for "and":

    true  and true  = true
    true  and *     = *
    true  and false = false
    *     and true  = *
    *     and *     = *
    *     and false = *
    false and true  = false
    false and *     = false
    false and false = false

And then we'd notice that `* and false` has a different interpretation than `false and *`. (The same phenomenon is already present with the material conditional in bivalent logics; but seeing that a non-symmetric semantics for `and` is available even for functional languages is instructive. A small Scheme sketch of this table appears at the end of this discussion.)

Another way in which order can matter that's present even in functional languages is that the interpretation of some complex expressions can depend on the order in which sub-expressions are evaluated. Evaluated in one order, the computations might never terminate (and so semantically we interpret them as having "the bottom value"---we'll discuss this). Evaluated in another order, they might have a perfectly mundane value. Here's an example, though we'll reserve discussion of it until later:

    (\x. y) ((\x. x x) (\x. x x))

Again, these facts are all part of the metatheory of purely functional languages. But *there is* a different sense of "order matters" such that it's only in imperatival languages that order so matters:

    x := 2
    x := x + 1
    x == 3

Here the comparison in the last line will evaluate to true.

    x := x + 1
    x := 2
    x == 3

Here the comparison in the last line will evaluate to false. One of our goals for this course is to get you to understand *what that new sense is*, such that order only so matters in imperatival languages.
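Before moving on, here is the promised Scheme sketch of the trivalent "and" table above. It's our own illustration: we encode the third truth value as the symbol `*` (any other marker would do), and the name `tri-and` is made up for the occasion.

    ; Trivalent "and", read off the table above. We represent the third
    ; truth value * as the Scheme symbol '*; #t and #f are the ordinary
    ; booleans. Notice that the result is settled by which argument
    ; comes first, even though no mutation is anywhere in sight.
    (define (tri-and p q)
      (cond ((eq? p #t) q)      ; true  and q = q
            ((eq? p #f) #f)     ; false and q = false, whatever q is
            (else '*)))         ; *     and q = *,     whatever q is

    (tri-and '* #f)   ; => *
    (tri-and #f '*)   ; => #f

So `tri-and` is a perfectly pure function, and yet `* and false` and `false and *` come out differently: the order-sensitivity lives in the argument positions, not in any sequencing of changes.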
Finally, you'll see the term **dynamic** used in a variety of ways in the literature for this course:

*   dynamic versus static typing
*   dynamic versus lexical [[!wikipedia Scope (programming) desc="scoping"]]
*   dynamic versus static control operators
*   finally, we're used ourselves to talking about dynamic versus static semantics

For the most part, these uses are only loosely connected to each other. We'll tend to use "imperatival" to describe the kinds of semantic properties made available in dynamic semantics: languages which have robust notions of sequencing, change, and so on.

To read further about the relation between declarative or functional programming, on the one hand, and imperatival programming on the other, you can begin here:

*   [[!wikipedia Declarative programming]]
*   [[!wikipedia Functional programming]]
*   [[!wikipedia Purely functional]]
*   [[!wikipedia Referential transparency (computer science)]]
*   [[!wikipedia Imperative programming]]
*   [[!wikipedia Side effect (computer science) desc="Side effects"]]

Map
===

Here, roughly, is how the languages we'll be working with relate to one another. Everything above the dividing line is Turing complete; the typed lambda calculi below it are not.

    Scheme (functional part)    OCaml (functional part)    C, Java, Python
    Scheme (imperative part)    OCaml (imperative part)
    untyped lambda calculus
    combinatory logic
    --------------------------- Turing complete ---------------------------
    more advanced type systems, such as polymorphic types
    simply-typed lambda calculus (what linguists mostly use)
What "sequencing" is and isn't ------------------------------ We mentioned before the idea that computation is a sequencing of some changes. I said we'd be discussing (fragments of, and in some cases, entire) languages that have no native notion of change. Neither do they have any useful notion of sequencing. But what this would be takes some care to identify. First off, the mere concatenation of expressions isn't what we mean by sequencing. Concatenation of expressions is how you build syntactically complex expressions out of simpler ones. The complex expressions often express a computation where a function is applied to one (or more) arguments, Second, the kind of rebinding we called "shadowing" doesn't involve any changes or sequencing. All the precedence facts about that kind of rebinding are just consequences of the compound syntactic structures in which it occurs. Third, the kinds of bindings we see in: (define foo A) (foo 2) Or even: (define foo A) (define foo B) (foo 2) don't involve any changes or sequencing in the sense we're trying to identify. As we said, these programs are just syntactic variants of (single) compound syntactic structures involving `let`s and `lambda`s. Since Scheme and OCaml also do permit imperatival constructions, they do have syntax for genuine sequencing. In Scheme it looks like this: (begin A B C) In OCaml it looks like this: begin A; B; C end Or this: (A; B; C) In the presence of imperatival elements, sequencing order is very relevant. For example, these will behave differently: (begin (print "under") (print "water")) (begin (print "water") (print "under")) And so too these: begin x := 3; x := 2; x end begin x := 2; x := 3; x end However, if A and B are purely functional, non-imperatival expressions, then: begin A; B; C end just evaluates to C (so long as A and B evaluate to something at all). So: begin A; B; C end contributes no more to a larger context in which it's embedded than C does. This is the sense in which functional languages have no serious notion of sequencing. We'll discuss this more as the seminar proceeds.