Seminar in Semantics / Philosophy of Language
or: What Philosophers and Linguists Can Learn From Theoretical Computer Science But Didn't Know To Ask
This course is co-taught by Chris Barker and Jim Pryor. Linguistics calls it "LING-GA 3340" and Philosophy calls it "PHIL-GA 2296". The seminar meets in spring 2015 on Thursdays from 4 until a bit before 7 (with a short break in the middle), in the Linguistics building at 10 Washington Place, in room 103 (front of the first floor).
One student session to discuss homeworks will be held every Wednesday from 5-6, in Linguistics room 104 (back of the first floor).
Index of Main Content (lecture notes and more)
Untyped lambda calculus evaluator on this site
This wiki will be undergoing lots of changes throughout the semester, and particularly in these first few days as we get it set up, migrate over some of the content from the previous time we taught this course, and iron out various technical wrinkles. Please be patient. When you sit down to read the wiki, it's a good idea to always hit "Refresh" in your browser to make sure you're reading the latest additions and refinements of the website. (Sometimes these will be tweaks, other times very substantial. Updates will happen at miscellaneous hours, sometimes many times in a given day.)
If you're eager to learn, though, you don't have to wait on us to be ready to serve you. You can go look at the archived first version of this course. Just keep in mind that the text and links there haven't been updated. And/or you can get started on installing the software and ordering some of the books.
As we mentioned in class, if you're following the course and would like to be emailed occasionally, send an email to email@example.com, saying "lambda" in the subject line. Most often, we will just post announcements to this website, rather than emailing you. But occasionally an email might be more appropriate.
If you're interested in additional Q&A but can't make that time, let us know.
You should see these student sessions as opportunities to clear up lingering issues from material we've discussed, and help get a better footing for what we'll be doing the next week. It's expected you'll have made at least a serious start on that week's homework (due the following day) before the session.
Here is information about How to get the programming languages running on your computer. If those instructions seem overwhelming, note that it should be possible to do a lot of this course using only demonstration versions of these languages that run in your web browser.
Henceforth, unless we say otherwise, every homework will be "due" by the Wednesday morning after the Thursday seminar in which we refer to it. (Usually we'll post the assignment shortly before the seminar, but don't rely on this.) However, for every assignment there will be a "grace period" of one further week for you to continue working on it if you have trouble and aren't able to complete the assignment to your satisfaction by the due date. You shouldn't hesitate to talk to us---or each other!---about the assignments when you do have trouble. We don't mind so much if you come across answers to the assignment when browsing the web, or the Little Schemer book, or anywhere else, so long as you can reason yourself through the solutions and experience for yourself the insights they embody.
We reserve the right to ruthlessly require you to explain your solutions in conversation at any point, in section or in class.
You should always aim to complete the assignments by the "due" date, as this will fit best with the progress of the seminar.
The assignments will tend to be quite challenging. Again, you should by all means talk amongst yourselves, and to us, about strategies and questions that come up when working through them.
We will not always be able to predict accurately which problems are easy and which are hard. If we misjudge, and choose a problem that is too hard for you to complete to your own satisfaction, it is still very much worthwhile (and very much appreciated) if you would explain what is difficult, what you tried, why what you tried didn't work, and what you think you need in order to solve the problem.
(Week 1) Thursday 29 Jan 2015
Help on learning Scheme, OCaml, and Haskell; The differences between our made-up language and Scheme, OCaml, and Haskell; What do words like "interpreter" and "compiler" mean? (in progress)
(Lambda Evaluator) Usable in your browser. It can help you check whether your answer to some of the (upcoming) homework questions works correctly.
(Week 2) Thursday 5 February 2015
Also, if you're reading the Hankin book, try reading Chapters 1-3. You will most likely need to come back again and read it multiple times; but this would be a good time to make the first attempt.
We posted answers to Week 1's homework.
(Week 3) Thursday 12 February 2015
Also, by this point you should be able to handle all of The Little Schemer except for Chapters 9 and 10. Chapter 9 covers what is going on under the hood with letrec, and that will be our topic for next week. You can also read Chapter 4 of Hankin on Combinatory Logic.
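As a small preview of what letrec provides, here is the idea sketched in OCaml rather than Scheme (the function name `length` is just our own illustration, not from the book):

```ocaml
(* In OCaml, `let rec` plays the role Scheme's `letrec` plays: it
   makes the name being defined visible inside its own body, which
   is what permits the recursive call below. A plain `let` would
   not allow `length` to refer to itself. *)
let rec length xs =
  match xs with
  | [] -> 0
  | _ :: rest -> 1 + length rest

let () = assert (length [10; 20; 30] = 3)
```

Chapter 9 of The Little Schemer shows how this kind of self-reference can itself be built out of more primitive materials.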
We posted answers to Week 2's homework.
(Week 4) Thursday 19 February 2015
Now you can read Sections 3.1 and 6.1 of Hankin; and browse the rest of Hankin Chapter 6, which should look somewhat familiar.
If you're reading along in the Pierce book, we've now covered much of the material in his Chapters 1-7.
We posted answers to Week 3's homework.
(Week 5) Thursday 26 February 2015
There is some assigned reading for our next meeting. This comes in two batches. The first batch consists of this footnote from Kaplan's Demonstratives. Also recommended, but not mandatory, is this selection from Chapter 4 of Jeff King's 2007 book The Nature and Structure of Content. The second batch consists of this paper from Michael Rieppel, a recent Berkeley Philosophy PhD, on Frege's "concept horse" problem. Also recommended, but not mandatory, is this selection from Chapter 5 of King's book. (It reviews and elaborates his paper "Designating propositions".)
If you're interested in the scholarly background on Frege's "concept horse" problem, here is an entry point.
If you're reading along in Hankin, you can look at Chapter 7.
If you're reading along in the Pierce book, the chapters most relevant to this week's discussion are 22 and 23; though for context we also recommend at least Chapters 8, 9, 11, 20, and 29. We don't expect most of you to follow these recommendations now, or even to be comfortable enough yet with the material to be able to. We're providing the pointers as references that some might conceivably pursue now, and others later.
(Week 6) Thursday 5 March 2015
We will be discussing the readings posted above.
Topics: Kaplan on Plexy; King on that-clauses and "the proposition that P"; Rieppel on Frege and the concept HORSE
(Week 7) Thursday 12 March 2015
Many of these were updated or first posted on Mon 23 March.
(Week 8) Thursday 26 March 2015
(Week 9) Thursday 2 April 2015
Updated notes on Installing and Using the Juli8 Libraries on Sun 5 April. Continued to fix some bugs and improve the monad transformers. Latest version posted Monday evening, 13 April: v1.6. This version is needed to run the gsv2.ml code.
Reading for Week 10: Groenendijk, Stokhof, and Veltman, "Coreference and Modality" (1996)
(Week 10) Thursday 9 April 2015
Topics: We will be discussing the reading posted above. Here are notes and links to code.
(Week 11) Thursday 16 April 2015
We postponed class this week to enable people to attend the Partee talk.
(Week 12) Thursday 23 April 2015
(Week 13) Thursday 30 April 2015
(Week 14) Thursday 7 May 2015
Topics: Continuations (continued)
(Makeup class) Monday 11 May 2015, 2--5 pm
Topics: Linguistic applications of continuations
The overarching goal of this seminar is to introduce concepts and techniques from theoretical computer science and show how they can provide insight into established philosophical and linguistic problems.
This is not a seminar about any particular technology or software. Rather, it's about a variety of conceptual/logical ideas that have been developed in computer science and that linguists and philosophers ought to know, or may already be unknowingly trying to reinvent.
Philosophers and linguists tend to reuse the same familiar tools in ever more (sometimes spectacularly) creative ways. But when your only hammer is classical logic, every problem looks like modus ponens. In contrast, computer scientists have invested considerable ingenuity in studying the design of their conceptual tools (among other things), and they've made much progress that we can benefit from.
"Why shouldn't I reinvent some idea X for myself? It's intellectually rewarding!" Yes it is, but it also takes time you might have better spent elsewhere. After all, you can get anywhere you want to go by walking, but you can accomplish more with a combination of walking and strategic subway rides.
More importantly, the idiosyncrasies of your particular implementation may obscure what's fundamental to the idea you're working with. Your implementation may be buggy in corner cases you didn't think of; it may be incomplete and not trivial to generalize; its connection to existing literature and neighboring issues may go unnoticed. For all these reasons you're better off understanding the state of the art.
The theoretical tools we'll be introducing aren't part of the diet of most everyday programmers, but they are prominent in academic computer science, especially in the fields of functional programming and type theory.
Of necessity, this course will lay a lot of logical groundwork. But throughout we'll be aiming to mix that groundwork with real cases in our home subjects where these tools can (or already do, covertly) play central roles.
Our aim for the course is to enable you to make these tools your own: to have enough understanding of them to recognize them in use, to use them yourself at least in simple ways, and to be able to read more about them when appropriate.
Who Can Participate?
The course will not presume previous experience with programming. We will, however, discuss concepts embodied in specific programming languages, and we will encourage experimentation with running, modifying, and writing computer programs.
The course will not presume lots of mathematical or logical background, either. However, it will demand a certain amount of comfort working with such material; as a result, it will not be especially well-suited to be a first graduate-level course in formal semantics or philosophy of language. If you have concerns about your background, come discuss them with us.
If you hope to have the class satisfy the logic requirement for Philosophy PhD students, this needs to be discussed with us and approved in advance. If this would be your first or only serious engagement with graduate-level formal work you should consider carefully, and must discuss with us, (1) whether you'll be adequately prepared for this course, and (2) whether you'd be better served by taking a logic course with a more canonical syllabus. This term you could take PHIL-GA 1003, Logic for Philosophers, offered by Joel Hamkins on Wednesdays 12-2.
Faculty and students from outside of NYU Linguistics and Philosophy are welcome to audit, to the extent that this coheres well with the needs of our local students.
During the course, we'll be encouraging you to try out various things in Scheme and OCaml. Occasionally we will also make remarks about Haskell. All three of these are prominent functional programming languages. The term "functional" here means they have a special concern with functions, not just that they aren't broken. But what precisely is meant by "functional" is somewhat fuzzy, and even its various precisifications take some time to explain. We'll get clearer on this during the course. Another term used roughly the same as "functional" is "declarative." At a first pass, "functional" or "declarative" programming is primarily focused on complex expressions that get computationally evaluated to some (usually simpler) result. In class I gave the examples 1+2 (which gets evaluated in arithmetic to 3), 1+2 < 5 (which gets evaluated in arithmetic to a truth-value), and a third expression (which gets evaluated in arithmetic to 1). Also Google search strings, which get evaluated by Google servers to a list of links.
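To make the evaluation idea concrete, here is what it looks like in OCaml (a minimal sketch; the names `sum` and `small` are just our own labels for the results):

```ocaml
(* Each right-hand side is a complex expression that the language
   evaluates to a (usually simpler) result. *)
let sum = 1 + 2         (* evaluates in arithmetic to the number 3 *)
let small = 1 + 2 < 5   (* evaluates to the truth-value true *)

let () = assert (sum = 3)
let () = assert (small = true)
```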
In truth, nothing that gets marketed as a "programming language" is really completely 100% functional/declarative, and even the languages I called "imperatival" will have some "functional" fragments (they evaluate 1+2 to 3, also). So these labels aren't strictly exclusive. They are better thought of as concerning different styles or idioms of programming. Languages like Scheme and OCaml and especially Haskell get called "functional languages" because they're reasonably described as "more functional" than other languages, like C.
In any case, here is How to get the programming languages running on your computer. And here is some more context for the three languages we will be focusing on:
Scheme is one of two or three major dialects of Lisp, which is a large family of programming languages. Scheme is the cleaner and more minimalist dialect of Lisp, and is what's mostly used in academic circles. Scheme itself has umpteen different "implementations", which share most of their fundamentals, but have slightly different extensions and interact with the operating system differently. One major implementation is called Racket, and that is what we recommend you use. If you're already using or comfortable with another Scheme implementation, though, there's no compelling reason to switch.
Another good Scheme implementation is Chicken. For our purposes, this is in some respects superior to Racket, and in other respects inferior.
Racket and Chicken stand to Scheme in something like the relation Firefox stands to HTML.
Caml is one of two major dialects of ML, which is another large family of programming languages. Caml has only one active "implementation", OCaml, developed by the INRIA academic group in France. Sometimes we may refer to Caml or ML more generally; but you can assume that what we're talking about always works more specifically in OCaml.
Haskell is also used a lot in the academic contexts we'll be working through. Its surface syntax differs from Caml, and there are various important things one can do in each of Haskell and Caml that one can't (or can't as easily) do in the other. But these languages also have a lot in common, and if you're familiar with one of them, it's generally not hard to move between it and the other.
Like Scheme, Haskell has a couple of different implementations. The dominant one, and the one we recommend you install, is called GHC, short for "Glasgow Haskell Compiler".
It's not mandatory to purchase these books for the class. But they are good ways to get a more thorough and solid understanding of some of the more basic conceptual tools we'll be using. We especially recommend the first three of them.
An Introduction to Lambda Calculi for Computer Scientists, by Chris Hankin, currently $18 paperback on Amazon.
The Little Schemer, Fourth Edition, by Daniel P. Friedman and Matthias Felleisen, currently $29 paperback on Amazon. This is a classic text introducing the gentle art of programming, using the functional programming language Scheme. Many people love this book, but it has an unusual dialog format that is not to everybody's taste. Of particular interest for this course is the explanation of the Y combinator, available as a free sample chapter at the MIT Press web page for the book.
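The Y combinator that chapter explains lets you define recursive functions without any self-referential definition form. A call-by-value analogue can be sketched in OCaml, though OCaml's type system forces a detour through a wrapper type (this encoding is a standard trick, not something taken from the book itself):

```ocaml
(* The wrapper type lets a function be applied to itself without
   the type checker objecting to a circular type. *)
type 'a fix = Fix of ('a fix -> 'a)

(* A call-by-value fixed-point combinator: fix f behaves like
   f (fix f), eta-expanded to delay evaluation. *)
let fix f =
  let g (Fix x) = f (fun v -> x (Fix x) v) in
  g (Fix g)

(* Factorial written without any explicit recursion: the `self`
   argument is supplied by `fix`. *)
let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))

let () = assert (fact 5 = 120)
```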
The Seasoned Schemer, also by Daniel P. Friedman and Matthias Felleisen, currently $29 paperback on Amazon. This is a sequel to The Little Schemer, and it focuses on mutation and continuations in Scheme. We will be covering those topics in the second half of the course.
The Little MLer, by Matthias Felleisen and Daniel P. Friedman, currently $31 paperback / $29 kindle on Amazon. This covers much of the same introductory ground as The Little Schemer, but this time in a dialect of ML. It doesn't use OCaml, the dialect we'll be working with, but instead another dialect of ML called SML. The syntactic differences between these languages are slight. (Here's a translation manual between them.) Still, that does add an extra layer of interpretation, so those of you who are already more comfortable with OCaml (or with Haskell) than with Scheme might consider working through this book instead of The Little Schemer. For the rest of you, or those of you who want practice with Scheme, go with The Little Schemer.
The Haskell Road to Logic, Math and Programming, by Kees Doets and Jan van Eijck, currently $22 on Amazon, is a textbook teaching the parts of math and logic we cover in the first few weeks of Logic for Philosophers. (Notions like validity, proof theory for predicate logic, sets, sequences, relations, functions, inductive proofs and recursive definitions, and so on.) The math here should be accessible and familiar to all of you. What is novel about this book is that it integrates the exposition of these notions with a training in (part of) Haskell. It only covers the rudiments of Haskell's type system, and doesn't cover monads; but if you wanted to review this material and become comfortable with core pieces of Haskell in the process, this could be a good read. (The book also seems to be available online here.)
The rest of these are a bit more advanced, and are also looser suggestions:
Computational Semantics with Functional Programming, by Jan van Eijck and Christina Unger, currently $42 on Amazon. We own this but haven't read it yet. It looks like it's doing the same kind of thing this seminar aims to do: exploring how natural language meanings can be understood to be "computed". The text uses Haskell, and is aimed at linguists and philosophers as well as computer scientists. Definitely worth a look.
Another good book covering the same ground as the Hankin book, but more thoroughly, and in a more mathematical style, is Lambda-Calculus and Combinators: an Introduction, by J. Roger Hindley and Jonathan P. Seldin, currently $74 hardback / $65 kindle on Amazon. This book is substantial; and although it doesn't presuppose any specific mathematical background knowledge, it will be a good choice only if you're already comfortable reading advanced math textbooks. If you choose to read both the Hankin book and this book, you'll notice the authors made some different terminological/notational choices. At first, this makes comprehension slightly slower, but in the long run it's helpful because it makes the arbitrariness of those choices more salient.
Another good book, covering a bit of the same ground as the Hankin and the Hindley & Seldin, but focusing especially on typed lambda calculi, is Types and Programming Languages, by Benjamin Pierce, currently $77 hardback / $68 kindle on Amazon. This book has many examples in OCaml. It seems to be the standard textbook for CS students learning type theory.
The next two books focus on the formal semantics of typed programming languages, both in the "denotational" form that most closely corresponds to what we mean by semantics, and in the "operational" form very often used in CS. These are: The Formal Semantics of Programming Languages, by Glynn Winskel, currently $38 on Amazon, and Semantics of Programming Languages, by Carl Gunter, currently $41 on Amazon.
All wikis are supposed to have a SandBox, so this one does too.
This wiki is powered by ikiwiki.