-There's no need for you to know this for our seminar. But in case you're interested...
-
-Others (and ourselves) will often talk about "functional programming languages." But it would be more appropriate to talk of functional *paradigms* or *programming patterns*. Most programming languages are hybrids that allow programmers to use any of several programming paradigms. The ones that get called "functional languages" are just ones that give functional paradigms a central place in their design, and make them very easy to use.
-
-We can divide functional languages into two classes: those that are *dynamically typed* and those that are *statically typed*.
-
-The dynamically typed languages give types more of a background role in the program. They include the Lisp family (which in turn includes all the variants of [[!wikipedia Scheme]], and also [[!wikipedia Common Lisp]], and [[!wikipedia Clojure]]). They also include [[!wikipedia Erlang]] and [[!wikipedia Joy]] and [[!wikipedia Pure]], and others.
-
-Although these languages are hospitable to functional programming, some of them also permit you to write imperatival code too. In Scheme, by convention, imperatival functions are named ending with a "!". So `(set-car! p 1)` is a Scheme expression that, when evaluated, *mutates* the pair p so that its first member changes to 1. For our purposes though, we'll mostly be working with the parts of Scheme that are purely functional. We'll be discussing the difference between functional and imperatival programming a lot during the seminar.
-
-We're using Scheme in parallel with our discussion of *untyped* lambda calculi. Scheme isn't really untyped. If you assign a string to variable x and then try to add x to 1, Scheme will realize that strings aren't the right type of value to add to integers, and will complain about it. However, Scheme will complain about it *at runtime*: that is, at the point when that particular instruction is about the be executed. This is what's meant by calling these languages "dynamically typed."
-
-In practice, dynamically typed languages allow the programmer to be more relaxed about the types of the values they're manipulating. For instance, it's trivial to create a list whose first member is a string, whose second member is an integer, and so on. You just have to keep track somehow so that you don't try doing anything with values of the wrong type, else you'll get an error at runtime.
-
-
-The other large class of languages are statically typed. This means that typing information is checked at *compile time*: that is, when you're converting your source code into a file that your computer knows how to directly execute. If you make type mistakes---for instance, you try to add a string to an integer---the compiler will choke on this so you never get to the point of even trying to run the program. Once you finally do get the program to compile, you can be more confident that errors of that sort have all been eliminated. They can't sneak up to bite you unawares while the program is running.
-
-Formerly, static typing required the programmer to add lots of annotations in her source code explicitly specifying what they type of each function argument is, what the type of the function's return value was, and so on. This is tedious, and partly for this reason dynamically typed languages have become popular and are thought of as easier to work with. However, nowadays statically typed languages tend to use "type inference": that is, you can let the computer do most of the work of figuring out what the types of everything are. For example, if you define a function like this:
+There's a lot more trivia and links here than anyone needs to know for this seminar. It's
+there for anyone who may be interested.
+
+Others (and ourselves) will often talk about "functional programming
+languages." But it would be more appropriate to talk of functional *paradigms*
+or *programming patterns*. Most programming languages are hybrids that allow
+programmers to use any of several programming paradigms. The ones that get
+called "functional languages" are just ones that give functional paradigms a
+central place in their design, and make them very easy to use.
+
+We can divide functional languages into two classes: those that are dynamically
+typed and those that are statically typed.
+
+The **dynamically typed** languages give types more of a background role in the
+program. They include the Lisp family (which in turn includes all the variants
+of [[!wikipedia Scheme (programming language) desc="Scheme"]], and also [[!wikipedia Common Lisp]], and [[!wikipedia
+Clojure]]). They also include [[!wikipedia Erlang (programming language) desc="Erlang"]] and [[!wikipedia Joy (programming language) desc="Joy"]] and
+[[!wikipedia Pure (programming language) desc="Pure"]], and others.
+
+Although these languages are hospitable to functional programming, some of them
+also permit you to write *imperatival* code (that is, code with *side-effects*)
+too. In Scheme, by convention, imperatival functions are named ending with a
+"!". So `(set-car! p 1)` is a Scheme expression that, when evaluated, *mutates*
+the pair `p` so that its first member changes to 1. For our purposes though,
+we'll mostly be working with the parts of Scheme that are purely functional.
+We'll be discussing the difference between functional and imperatival
+programming a lot during the seminar.
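+
+As a small illustration (a sketch, assuming an R5RS-style Scheme where pairs
+are mutable), compare the mutating and the purely functional ways of changing
+the first member of a pair:
+
+	; A sketch, assuming an R5RS-style Scheme with mutable pairs.
+	(define p (cons 0 2))       ; p is the pair (0 . 2)
+
+	; Imperatival style: set-car! mutates p in place.
+	(set-car! p 1)              ; now p is (1 . 2)
+
+	; Purely functional style: build a new pair, leaving the old one alone.
+	(define q (cons 1 (cdr p))) ; q is (1 . 2); p itself is untouched by this step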
+
+We're using Scheme in parallel with our discussion of *untyped* lambda calculi.
+Scheme isn't really untyped. If you assign a string to variable `x` and then try
+to add `x` to 1, Scheme will realize that strings aren't the right type of value
+to add to integers, and will complain about it. However, Scheme will complain
+about it *at runtime*: that is, at the point when that particular instruction
+is about to be executed. This is what's meant by calling these languages
+"dynamically typed."
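+
+For example (the exact error message varies from one Scheme implementation to
+another), the bad addition below only fails when it is actually evaluated:
+
+	; A sketch of a runtime type error; exact messages vary by implementation.
+	(define x "two")
+	(+ x 1)             ; error at runtime: + expects numbers, got a string
+
+	; The same mistake hidden in an unused branch is never caught,
+	; because that code never runs:
+	(if #f (+ x 1) 0)   ; evaluates to 0; the bad addition is never executed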
+
+In practice, dynamically typed languages allow the programmer to be more
+relaxed about the types of the values they're manipulating. For instance, it's
+trivial to create a list whose first member is a string, whose second member is
+an integer, and so on. You just have to keep track somehow so that you don't
+try doing anything with values of the wrong type, else you'll get an error at
+runtime.
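+
+For instance, a mixed list like the one below is perfectly fine in Scheme; you
+just have to check what you've got (here with standard runtime predicates like
+`string?` and `number?`) before operating on it:
+
+	; A heterogeneous list: a string, then an integer, then a boolean.
+	(define mixed (list "one" 2 #t))
+
+	; Keeping track of types yourself, using runtime predicates:
+	(define (describe v)
+	  (cond ((string? v) "a string")
+	        ((number? v) "a number")
+	        (else "something else")))
+
+	(describe (car mixed))   ; "a string"
+	(describe (cadr mixed))  ; "a number"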
+
+
+The other large class of languages is **statically typed**. This means that
+typing information is checked at *compile time*: that is, when you're
+converting your source code into a file that your computer knows how to
+directly execute. If you make type mistakes---for instance, you try to add a
+string to an integer---the compiler will choke on this so you never get to the
+point of even trying to run the program. Once you finally do get the program to
+compile, you can be more confident that errors of that sort have all been
+eliminated. They can't sneak up to bite you unawares while the program is
+running.
+
+Formerly, static typing required the programmer to add lots of annotations in
+her source code explicitly specifying what the type of each function argument
+is, what the type of the function's return value is, and so on. This is
+tedious, and partly for this reason dynamically typed languages have become
+popular and are thought of as easier to work with. However, nowadays statically
+typed languages tend to use "type inference": that is, you can let the computer
+do most of the work of figuring out what the types of everything are. For
+example, if you define a function like this: