This page aims to summarize the major "topics" (bits of conceptual technology) the course will cover, as well as some connecting "themes" that show up in different ways in several of the topics.
### Topics ###
* Basics of (especially "functional"-style) programming: including pattern matching, recursive definitions, abstract datatypes, **modularity, separating interfaces from implementation**
* The syntax and proof theory of the untyped lambda calculus; at least some discussion of its semantics
* Different **evaluation-order strategies** for formal systems, and the different "normalizing" properties they give rise to
* Combinatory logic, or the dispensability of variables
* Recursive definitions and recursive types, fixed-point combinators
* Other **type sophistication**: abstract datatypes, polymorphic types, perhaps dependent types
* **Mutation** and imperative/iterative idioms
* **Monads**, especially as a way to **represent "dynamic" computation using "static" or declarative idioms**
* The origin of monads in category theory
* **Controlling evaluation order and effects**, especially by means of **continuations**
* Giving **order-independent characterizations of an order of evaluation**
* Perhaps some of the relations between typed programming languages and logics (the "Curry-Howard" correspondence or "isomorphism" [sic])
* Going back and forth between **representing a computational idiom *as a function* versus *as a static structure* **
* We'll link to, but probably not discuss, material for JP's logic class about formal grammars, automata, and decidability
* The different styles in which computer scientists give **"semantics" for their languages**, and how this relates to what we think of as semantics
* If time permits, "indeterministic" or "preemptively parallel" computation and linear logic
* If time permits, "dynamic" or "epistemic" logics
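To give a preliminary flavor of two of the topics above (recursive definitions and fixed-point combinators), here is a minimal Haskell sketch; it is an illustration of the general technique, not an example taken from the course materials. Haskell's lazy evaluation lets us write the fixed-point combinator directly, and then define factorial without any explicit self-reference.

```haskell
-- A fixed-point combinator: fix f is a value x such that x = f x.
-- Laziness makes this direct definition terminate when f is productive.
fix :: (a -> a) -> a
fix f = f (fix f)

-- A non-recursive "step" function: it receives the eventual recursive
-- function as an argument instead of calling itself by name.
factStep :: (Integer -> Integer) -> (Integer -> Integer)
factStep recurse n = if n == 0 then 1 else n * recurse (n - 1)

-- Tying the knot: factorial is the fixed point of factStep.
factorial :: Integer -> Integer
factorial = fix factStep
```

Note the pattern: all of the self-reference is concentrated in `fix`; `factStep` itself is an ordinary, non-recursive function.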
### Themes ###
All of the bolded items in the above list represent recurring themes and special foci of the course:
1. various issues about order, mutation, and the relation between dynamic and static representations
2. various ways of having fun with, and learning from, the restrictions of a good type system
3. monads, especially as a tool for modularity, also their connections to issues 1 and 2
4. continuations, especially their connections to issue 1
5. in what ways are programming languages usable for what we think of as semantics? in what ways are the "semantics" given for those programming languages like semantics in our sense?
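Theme 3 can be previewed with a small Haskell sketch (hand-rolled here for illustration, rather than imported from a library): a computation that "mutates" a counter is represented statically, as an ordinary pure function from states to result/new-state pairs, with the monad instance threading the state through in order.

```haskell
-- A minimal State monad: a stateful computation is just a pure
-- function from an initial state to a (result, final state) pair.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  -- Bind runs the first computation, then feeds its result and the
  -- updated state to the second. Order of "effects" is fixed here.
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

-- An "imperative"-looking primitive: read the counter, then bump it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

-- do-notation reads dynamically, but denotes a single pure function.
twoTicks :: State Int (Int, Int)
twoTicks = do
  a <- tick
  b <- tick
  return (a, b)
```

Running `twoTicks` from an initial counter of `0` yields the pair of observed values together with the final counter, all without any genuine mutation.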
### Other lists ###
The logical systems we'll be looking at include:
* the so-called "pure" or untyped (monotyped?) lambda calculus
* combinatory logic
* the simply-typed lambda calculus
* polymorphic types with System F
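One bridge between the first and last items on this list can be sketched in Haskell: Church numerals are terms of the untyped lambda calculus, but they can be assigned a polymorphic, System F-style type. The names and encoding below are standard, though not taken from the course materials.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Church numeral n takes a "successor" and a "zero" and applies
-- the successor n times. The forall is a System F-style type.
type Church = forall a. (a -> a) -> a -> a

zeroC :: Church
zeroC = \_ z -> z

succC :: Church -> Church
succC n = \s z -> s (n s z)

-- Addition: apply s "m times" on top of n's result.
addC :: Church -> Church -> Church
addC m n = \s z -> m s (n s z)

-- Observe a numeral by instantiating the type variable at Int.
toInt :: Church -> Int
toInt n = n (+ 1) 0
```

For instance, `toInt (addC (succC zeroC) (succC (succC zeroC)))` computes 1 + 2 by pure function application.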
The "programming languages" we'll be looking at (which aren't in the list of "logical systems") include:
* Jim's made-up programming language
* Scheme (using the Racket and/or Chicken implementations)
* OCaml
* Haskell (using the GHC implementation)