We begin with some unfinished business from last week.

1

Re the contrast between "the standard (or intended) model" of the language of arithmetic and "non-standard models". Some models are neither. For example, there are models isomorphic to but different from the intended model. There are also models that disagree with the standard model about some non-logical claims.

We discussed non-standard models of true arithmetic: these will agree with the standard model on all sentences in true arithmetic, which given the way that theory is defined, means they will agree with the standard model on all sentences (and so be elementarily equivalent to it).

Helping ourselves to the (as yet unproved-by-us) fact that Peano arithmetic is a proper subset of true arithmetic: A broader class of models will be non-standard models merely of Peano arithmetic. These will agree with the standard model on all sentences in the theory of Peano arithmetic, but may disagree about other sentences.

2

The notion of an \(\omega\)-inconsistent set came up in our discussion of an example of Sam's in an earlier class. This is a set of formulas of the form \(\{\, \neg\forall x Fx, F(0), F(S0), F(SS0), \dots \,\}\) (or alternatively, one of the form \(\{\, \exists x Fx, \neg F(0), \neg F(S0), \neg F(SS0), \dots \,\}\)). We call a theory \(\omega\)-inconsistent if it can prove every member of an \(\omega\)-inconsistent set. If a theory is not \(\omega\)-inconsistent, then it is \(\omega\)-consistent.

A related notion is \(\omega\)-incompleteness: this is when a theory can prove each of \(F(0), F(S0), F(SS0), \dots\) but can't prove \(\forall x Fx\). (\(\omega\)-inconsistent theories go further: they prove the negation of \(\forall x Fx\).)

Our paradigm theories of arithmetic (minimal arithmetic, Peano arithmetic, true arithmetic) are \(\omega\)-consistent, and any theory whose sentences are all true on the standard model will also have to be \(\omega\)-consistent. But some consistent theories, including for example some consistent extensions of minimal arithmetic, can be \(\omega\)-inconsistent.

Let's say that a theory is arithmetically sound if it contains only sentences that are true in the standard model of arithmetic, that is, it is a subset of true arithmetic. Then (assuming Peano arithmetic to be consistent) the following entailments are all one-way:

contained in Peano arithmetic \(\Rightarrow\) is an arithmetically sound theory \(\Rightarrow\) is an \(\omega\)-consistent theory \(\Rightarrow\) is a deductively consistent theory

3

Re different theories of arithmetic.

  1. Smith discusses a theory he calls "baby arithmetic". This has as axioms schematic, unquantified versions of the core Peano axioms, without induction. It suffices to prove every quantifier-free sentence in true arithmetic, so it is deductively complete if you think of its language as being one that lacks quantifiers. It's also an effectively decidable theory.

  2. There are some extensions of baby arithmetic that we'll refer to later. These continue to lack quantifiers, and add additional axiom schemas, like the ones for \(+\) and \(*\), but for new arithmetical operations. They also add in a (quantifier-free) induction rule. One of these extensions is called "primitively recursive arithmetic", and an extension of that with a strong form of the induction rule was used by Gentzen to prove that Peano arithmetic is consistent.

  3. I mentioned in class last week a theory called Presburger arithmetic. This has quantifiers, but lacks multiplication. Otherwise it's like Peano, and includes induction. Like theory #1, this theory is also deductively complete (for its restricted language), and effectively decidable. Note that it's okay for the question What first-order sentences are theorems of this specific theory? to be decidable, even though the more general question What first-order sentences follow from arbitrary specified premises? is undecidable.

  4. Another theory is called Skolem arithmetic. This has quantifiers and multiplication, but lacks all of \(+\), \(S\) and \(<\). I don't know how one does induction in this theory (lacking \(S\)), but I've read that this theory is also deductively complete (for its restricted language), and effectively decidable. So it's not multiplication itself that will introduce undecidability and the like, but the combination of \(+\) and \(*\).

  5. The more familiar arithmetical theories have quantifiers and the full signature of the language of arithmetic: \((0, S, +, *, <)\), though some of these symbols may be defined in terms of others. The strongest of these theories is true arithmetic, which includes all the sentences that are true on the standard model of the language of arithmetic.

  6. A weaker theory (but we haven't yet ourselves proved that it's weaker) is Peano arithmetic, which we discussed last week. We assume we're dealing with the first-order version of Peano arithmetic. Many authors refer to Peano arithmetic as P.

  7. Still weaker theories are what I'll call minimal arithmetics like Robinson arithmetic or the minimal arithmetic discussed in Boolos Burgess and Jeffrey. Those two are distinct theories: they don't merely have different axioms, but each contains some theorems the other lacks. For our theoretical purposes, though, they're on a par. Annoyingly, Burgess calls his theory Q (and calls Robinson arithmetic, which he discusses only in passing, R); but many other authors use Q to refer to Robinson arithmetic. These theories are "minimal" in comparison to Peano arithmetic, but they are more powerful than, for example, theory #1 above. I will use \(Q\) to refer to either of these theories (you can choose which one).

    Despite being more powerful than "baby arithmetic", these minimal arithmetics are also "more ambitious", in having a more expressive language. So they fail to be deductively complete, and as we'll see later, they fail also to be decidable. As we proceed we'll see that the stronger theory Peano also fails to be deductively complete; but that takes Goedelian ingenuity to realize. In the case of minimal arithmetics, it's much more obvious that they aren't deductively complete. Although they have as an axiom that \(\forall x.x+0=x\), and although they can prove every instance of the unquantified mirror-image schema \(0+\alpha=\alpha\), these theories aren't capable of proving the generalization \(\forall x.0+x=x\). But neither do they prove its negation: their theorems are a subset of the theorems of Peano and true arithmetic. (See Smith section 13.3 for discussion; a sketch of how Peano proves the generalization follows this list.)
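
Here, for contrast, is a sketch (my own, using only the standard axioms) of how Peano arithmetic proves that generalization. The two addition axioms it appeals to are available already in the minimal arithmetics; it's exactly the induction step they lack:

Base case: \(0+0=0\), an instance of the axiom \(\forall x. x+0=x\).
Induction step: supposing \(0+x=x\), the axiom \(\forall x\forall y. x+Sy=S(x+y)\) gives \(0+Sx=S(0+x)=Sx\).
By the induction rule, \(\forall x. 0+x=x\).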

Smith discusses some of these arithmetics in Episodes 3 and 4 of his Goedel without (too many) tears.

You could complicate this map in other ways. For instance, you could add first-order set theory somewhere in between Peano arithmetic and true arithmetic. Although it lacks the signature of arithmetic, arithmetical claims can be translated into its language, and the Peano axioms will then be provable. On the other hand, Goedel's results will still apply and so (the translations of) some theorems of true arithmetic will remain unprovable.

One notable difference between theory #5 and most of the others is that the others have nice formal axiomatizations. (The theories described in #2 are an exception: they can't be fully axiomatized, because they essentially involve a new inference rule.) In some cases the others have infinitely many axioms, but these axioms still fit a few decidable patterns. It's not obvious --- and as it turns out, it won't be true --- that true arithmetic can be axiomatized in the same way. For some purposes, we'll want to confine our attention to theories that can be axiomatized in nice ways:

4

Let's explain the notion of a primitively recursive function on the natural numbers.

First we have the base functions, which are:

  1. the zero function \(\operatorname{zero}\), which returns \(0\);
  2. the successor function \(S\);
  3. the projection functions, such as \(\operatorname{first}(x_1,x_2,\dots)=x_1\) and \(\operatorname{second}(x_1,x_2,\dots)=x_2\), which return one of their arguments unchanged.

We also include as primitively recursive functions anything which is built up out of other primitively recursive functions via two operations:

  1. composition: \(\operatorname{compose}(f,g_1,\dots,g_k)\) is the function taking \(x_1,\dots,x_n\) to \(f(g_1(x_1,\dots,x_n),\dots,g_k(x_1,\dots,x_n))\);
  2. recursion: given \(f\) and \(g\), this yields the function \(h\) where \(h(x_1,\dots,x_n,0)=f(x_1,\dots,x_n)\) and \(h(x_1,\dots,x_n,Sy)=g(x_1,\dots,x_n,y,h(x_1,\dots,x_n,y))\).

Nothing else is a primitively recursive function.

As an example of how this works, here's how we define the factorial function:

\(\operatorname{fact}(0)\) = \(S(\operatorname{zero}())\) = \(\operatorname{compose}(S,\operatorname{zero})()\)
\(\operatorname{fact}(y+1)\) = \(\operatorname{mult}(S(y),\operatorname{fact}(y))\) = \(\operatorname{mult}(S(\operatorname{first}(y,\operatorname{fact}(y))),\operatorname{second}(y,\operatorname{fact}(y)))\) = \(\operatorname{compose}(\operatorname{mult},\operatorname{compose}(S,\operatorname{first}),\operatorname{second})(y,\operatorname{fact}(y))\)

It's just the recursive definition you're already familiar with, massaged into a canonical form --- which looks verbose and awkward in this case but is needed for full generality. What plays the role of \(f\) in the official definition of the recursion operation is \(\operatorname{compose}(S,\operatorname{zero})\); and what plays the role of \(g\) is \(\operatorname{compose}(\operatorname{mult},\operatorname{compose}(S,\operatorname{first}),\operatorname{second})\).
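
If it helps to see this executably, here's a minimal Python sketch of the base functions and the two operations (the names match the ones used above, with `succ` playing the role of \(S\); everything here is my own illustrative rendering):

```python
def zero(*args):                 # base: constant zero (any adicity)
    return 0

def succ(x):                     # base: successor
    return x + 1

def proj(i):                     # base: projection functions
    return lambda *args: args[i]

first, second = proj(0), proj(1)

def compose(f, *gs):             # operation 1: composition
    return lambda *xs: f(*(g(*xs) for g in gs))

def recurse(f, g):               # operation 2: primitive recursion
    # h(..., 0) = f(...);  h(..., Sy) = g(..., y, h(..., y))
    def h(*args):
        *xs, y = args
        acc = f(*xs)
        for i in range(y):       # a bounded loop: exactly y steps
            acc = g(*xs, i, acc)
        return acc
    return h

def mult(x, y):                  # itself p.r. (from addition), taken as given
    return x * y

fact = recurse(compose(succ, zero),
               compose(mult, compose(succ, first), second))

assert [fact(n) for n in range(6)] == [1, 1, 2, 6, 24, 120]
```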

All primitively recursive functions are total (defined for all arguments from \(\mathbb{N}^m\), for the appropriate adicity \(m\)) and effectively computable. There are also effectively computable total functions on the natural numbers that aren't primitively recursive (the Ackermann function is the standard example), but it would take you a while to come up with one. Most arithmetic functions you can think of --- exponentiation, factorial, greatest common divisor, is-this-number-a-prime?, what-is-the-nth-prime?, and so on --- are primitively recursive.

One notable characteristic of these functions, which I'll only take the time to state intuitively rather than rigorously, is this: though calculating their result for a given argument may require "looping" or "iterating" (think of how the factorial of \(n\) is determined, which requires multiplication, which itself involves iterated addition, which in turn requires iterated applications of the successor function), there is a finite bound on how many iterations will be required, and that bound could be specified "in advance," as a (p.r.) function of the supplied argument.

Here's a useful illustration of that. One operation we can define primitively recursively is bounded least zero, which takes some other primitively recursive function \(f\) and returns a function \(h\) where:

\(h(x_1,\dots,x_n,j) = \) the least \(i\) in \(0,\dots,j-1\) such that \(f(x_1,\dots,x_n, i)=0\), or \(j\) if there is no such \(i\)

You can see by the definition that we'll only need at most \(j\) many iterated applications of \(f\) to determine the answer; and that we'll always be able to provide an answer, so long as \(f\) itself always gives us an answer.
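
In the same Python idiom as before (again, just my sketch), bounded least zero and a use of it might look like this. It's written with a direct loop rather than via `recurse`, but since the loop is bounded by an argument, it could be massaged into the official form:

```python
def bounded_least_zero(f):
    # h(x1,...,xn,j) = least i < j with f(x1,...,xn,i) == 0, else j
    def h(*args):
        *xs, j = args
        for i in range(j):       # at most j applications of f
            if f(*xs, i) == 0:
                return i
        return j
    return h

# Example: least divisor >= 2 of x, searching below the bound j.
def nondivisor(x, i):
    return 0 if i >= 2 and x % i == 0 else 1

least_divisor = bounded_least_zero(nondivisor)
assert least_divisor(15, 15) == 3    # found one below the bound
assert least_divisor(7, 7) == 7      # none below the bound: return the bound
```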

5

The notion of a (not-necessarily-primitively) recursive function is an extension of the preceding. It adds in the ability to build new functions by yet another basic operation on other recursive functions, namely the operation of unbounded least zero. That is, give me the least \(i\) where \(f(\dots, i)=0\) period. Not just the least \(i\) up to some bound specified in advance. This introduces several connected novelties. First, we're allowing ourselves now to do unbounded iterations. For us to get an answer effectively, there has to be some finite number of steps after which the answer will be delivered. But we needn't be able to specify in advance, in a general way that abstracts from the specific \(f\) and auxiliary arguments we're working with, how many steps that will be.

Also, for some cases there may be no answer. What is the unbounded least zero of the successor function, or the function that returns 1 for every argument? There isn't any. In these cases it's obvious that there won't be, but in general it won't even be effectively decidable whether a recursive function does or doesn't have a least zero somewhere. The only way to tell might be to try to calculate it, which calculation needn't ever terminate.

Finally, once we've allowed in some recursive functions that sometimes have no defined answer --- that is, are partial functions rather than total functions on the natural numbers --- the possibility exists that \(f\) is itself such a function. So when we're trying to determine \(f\)'s least zero, another way we might fail to get an answer is if \(f\) turns out to be undefined for some argument before we reach an answer. (Consider the case where \(f(1)=0\) but \(f(0)\) is undefined.)
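
Here's the contrast in the same Python sketch: unbounded least zero is a while loop with no bound fixed in advance, and that's exactly where partiality creeps in:

```python
def unbounded_least_zero(f):
    # the mu-operator: least i with f(..., i) == 0, if there is one
    def h(*xs):
        i = 0
        while True:              # no bound specifiable in advance
            if f(*xs, i) == 0:
                return i
            i += 1
    return h

exact_quotient = unbounded_least_zero(lambda x, y, i: 0 if i * y == x else 1)
assert exact_quotient(12, 3) == 4    # least i with i*3 == 12
# exact_quotient(12, 5) would loop forever: no i satisfies i*5 == 12.
# Likewise unbounded_least_zero(succ)() would search forever:
# the successor function has no least zero.
```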

These (not-necessarily-primitively) recursive functions are sometimes called total or partial recursive functions (depending on whether they happen to be total or merely partial), or just recursive functions simpliciter. They're also sometimes called \(\mu\)-recursive functions, because \(\mu\) is used as a symbol for the unbounded least zero operation, also called the "minimization" operation. (If you browse around in philosophical logic and neighboring fields, you'll learn that \(\mu\) gets used in a variety of different unrelated ways, so be on guard about that.)

It turns out that the class of (not-necessarily-primitively) recursive functions on natural numbers is provably equivalent to the class of functions on natural numbers calculable by a Turing machine, and by various other proposed general calculation systems. For this and other reasons, it is broadly assumed that all of these classes coincide with our intuitive notion of an effectively computable function. Or at least with one interesting precisification of that intuitive idea, the one we now have the best understanding of.

Smith discusses primitively recursive and (not-necessarily-primitively) recursive functions in Episodes 5 and 6 of his Goedel without (too many) tears. See also Boolos Burgess and Jeffrey chapters 6 and 7.

6

The next thing I propose to do is to walk through an analogue of Goedel's First Incompleteness Theorem, but we'll do it not for any theory of arithmetic, but rather for a theory that talks about the syntax of its own language. So the intended model of this theory will have in its domain expressions in the theory's own language. It was folklore when I was a grad student that this would be a pedagogically simple and effective way to understand the heart of the First Incompleteness Theorem. I think Kripke was the one who proposed this. I've never seen this idea developed, but it still sounds like a good idea, so let's try it out. Afterwards, we can dig into the more complicated details of how to carry out the same kind of argument within a theory of arithmetic.

To begin, then, we'll have a language of first-order predicate logic with identity. It will be extended with some syntactic predicates and functors, like the predicate \(\operatorname{is a wff}\) and the functor \(\operatorname{the negation of}\). Now we'll also want the ability for some expressions to refer to themselves, as in the (intuitively true) sentence \(\textrm{This very sentence is not a conjunction}\). However, there's a certain awkwardness here. What if we conjoin that claim with some other truth, such as \(0=0\)? Should \(\textrm{this very sentence}\) then continue to refer merely to the first conjunct --- in which case the whole conjunction would intuitively be true? Or should \(\textrm{this very sentence}\) now refer to the largest sentence containing it --- which now is a conjunction, so that the first conjunct, and hence the whole sentence, would intuitively be false? This kind of ambiguity might remind you of the scope ambiguities that arise when we treat descriptions as syntactic terms. A similar kind of solution suggests itself here. We can introduce a new quantifier, exhibited in the two sentences:

\((\operatorname{Self} x. \neg(x~\operatorname{is a conjunction})) \wedge 0=0\)
\(\operatorname{Self} x ( \neg(x~\operatorname{is a conjunction}) \wedge 0=0 )\)

These illustrate the intuitively true and the intuitively false examples we just discussed.

I want to add one last element to our language. This will be a class of primitive predicates that I'll write as \(T\textrm{-provable}\), \(S\textrm{-provable}\), and so on. Their intended interpretation will be to apply to exactly those expressions in the domain that are sentences provable from the corresponding fixed set of sentences \(T, S, \dots\) of this language.

So much for the syntax. The semantics for this language will be as typical for a first-order language with equality, with the following additions.

\([\![\, \operatorname{Self} x \phi \,]\!]_{\mathscr{M}\, q} = [\![\, \phi \,]\!]_{\mathscr{M}\, q[x:=\mathtt{"}\operatorname{Self} x \phi\mathtt{"}]}\)

That should be understood as a stipulated part of our semantics, true for all the models whose domains contain sentences of this language.

\([\![\, T\textrm{-provable}~\alpha \,]\!]_{\mathscr{M}\, q} = \mathsf{True}\) when \([\![\, \alpha \,]\!]_{\mathscr{M}\, q}\) is some sentence \(\phi\) in the domain of \(\mathscr{M}\), and \(T\vdash \phi\) in our assumed proof system; else \(\mathsf{False}\)

That clause doesn't need to be a logical stipulation; you could take it just to be a characterization of the models we'll be interested in.

Okay, now against that background, let \(G\) abbreviate the sentence \(\operatorname{Self} x (\neg\,T\textrm{-provable}~x)\). Intuitively, this says \(\textrm{This very sentence is not provable from }T\).

Let us assume that \(T\) is a consistent theory. Might it be that \(G\) is provable from \(T\)? Well, when we say "provable" we're assuming some kind of proof system. Let's assume that the proof system we're working with is sound. Then for \(G\) to be provable from \(T\) it must be that \(T\models G\). Which models can make \(G\) true? Well, by the definition of \(G\) and the semantic clause for \(\operatorname{Self}\), these will be ones where \([\![\, \neg\,T\textrm{-provable}~x \,]\!]_{\mathscr{M}\, q[x:=G]}\) is true. Those will be ones where \([\![\, T\textrm{-provable}~x \,]\!]_{\mathscr{M}\, q[x:=G]}\) is false. Those will be ones where \(T\not\vdash G\) (which holds either for all models or for none). Now, since \(T\) is consistent it has at least one model; so if \(T\vdash G\), that model would have to be one where \(T\not\vdash G\). Hence, if \(T\) is consistent, then it can prove \(G\) only if it can't prove \(G\); that is, it can't prove \(G\).

So if \(T\) is consistent, it can't prove \(G\). Given our semantics, though, that means that \([\![\, G \,]\!]\) will be true for any model and assignment --- at least, restricting our attention to models that satisfy the semantic clauses we stated above. (This isn't surprising, given what \(G\) intuitively says and what those models honor.) So \(G\) will be true (on the intended models) but not provable from \(T\).

Might it be, on the other hand, that \(\neg G\) is provable from \(T\)? Given what we just said, it should be clear that if \(T\) is consistent and the proof system is sound, then this could not be. \(T\) cannot prove the negation of something that's true on every model.

(To show this more flat-footedly: we assume that \(T\) is consistent and our proof system is sound. Then if \(T\vdash \neg G\), that entails that \(T\models \neg G\). Which models can make \(\neg G\) true? By the definition of \(G\) and the semantic clauses for \(\neg\) and \(\operatorname{Self}\), these will be ones where \([\![\, \neg\,T\textrm{-provable}~x \,]\!]_{\mathscr{M}\, q[x:=G]}\) is false. Those will be ones where \([\![\, T\textrm{-provable}~x \,]\!]_{\mathscr{M}\, q[x:=G]}\) is true. Those will be ones where \(T\vdash G\) (which holds either for all models or for none). Hence, if \(T\) is consistent, then it can prove \(\neg G\) only if it can prove \(G\); that is, it can't prove \(\neg G\).)

Summarizing: if \(T\) is a consistent theory (and our proof system is sound), then:

  1. \(T\not\vdash G\);
  2. \(T\not\vdash \neg G\);
  3. \(G\) is true on all the models satisfying our stipulated semantic clauses.

If on the other hand \(T\) is inconsistent, then \(T\) can prove everything; and \(\not\models G\) (indeed \(\models \neg G\)).

7

That was cool. But you might suspect that we got the results we did only because we let in that strange new quantifier \(\operatorname{Self}\), or some other perversity like \(\textrm{this very sentence}\). As we'll see, there are more oblique ways for sentences to refer to themselves, that don't require any such obvious novelties. In fact, it will turn out that the ability to effect the necessary kind of self-reference, and also to express notions like \(T\textrm{-provable}\), are already present in our minimal arithmetics. But we'll get to that. First, let's see how we might conduct the same kind of argument in a system that doesn't have the quantifier \(\operatorname{Self}\). We will however retain \(T\textrm{-provable}\), and we'll be more explicit about some of the other syntactic functors and predicates in our language (and how they should be interpreted).

Some predicates in our language might be:

  1. \(\textrm{is a wff}\)
  2. \(\textrm{is a conjunction}\)
  3. \(\textrm{is a wff containing exactly one free variable}\)

Suppose \(B\) is some wff in the extension of the last predicate (on our intended models). For example, it might be the wff \(x~\textrm{is a wff containing exactly one free variable}\). (On the intended models) this wff will be false on an assignment where \(x\) is mapped to the wff \(y=z\). It will be true on an assignment where \(x\) is mapped to the wff \(y=y\). What about an assignment where \(x\) is mapped to the wff \(B\) itself? This will also be true, since \(B\) has exactly one free variable. That is, \(B\) is true when interpreted as a claim about itself. To force this interpretation, we'd have to have some kind of quotation device in the language, and then construct the sentence, call it \(B'\):

\(\mathtt{"}x~\textrm{is a wff containing exactly one free variable}\mathtt{"}~\textrm{is a wff containing exactly one free variable}\)

We can call the sentence \(B'\) a self-application of the formula \(B\) (wrt variable \(x\)). (Sometimes it's called a "diagonalization" of \(B\); see Smith section 32 for explanation.)

I want to assume that our domain contains some sentences like \(B'\) in it. However, I don't want to deal with the complexity of introducing quotation functors or term constants into the language. So the way I'll suppose we get to designate \(B'\) is by using the functor \(\operatorname{Sub}_x(\alpha,\beta)\). The intended interpretation of this is to pick out the wff you get by substituting a quoted name of the formula designated by term \(\beta\) for free occurrences of \(x\) in the formula designated by term \(\alpha\). (That is, it picks out the formula \([\![\, \alpha \,]\!]_{\mathscr{M}}[x\leftarrow \mathtt{"}[\![\, \beta \,]\!]_{\mathscr{M}}\mathtt{"}]\).) Then we can refer to the self-application of the wff designated by term \(\beta\) as \(\operatorname{Sub}_x(\beta,\beta)\). That is, if \(\beta\) is a term in the language designating formula \(B\), then \(\operatorname{Sub}_x(\beta,\beta)\) is a term designating sentence \(B'\).
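
As a toy illustration (strings for formulas, Python's `repr` as the quotation device, whole-word textual substitution instead of real parsing --- all my own simplifications), self-application looks like this:

```python
import re

def self_apply(formula, var="x"):
    # substitute a quoted name of `formula` for free occurrences of `var`
    return re.sub(rf"\b{var}\b", lambda m: repr(formula), formula)

B = "x is a wff containing exactly one free variable"
print(self_apply(B))
# 'x is a wff containing exactly one free variable' is a wff
# containing exactly one free variable
```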

In addition to terms that construct self-applications of formulas, we can presumably also have predicates that test for whether some argument is the self-application of a formula (wrt some variable \(x\)). Thus we could have a sentence like:

\(\operatorname{Sub}_x(\beta,\beta)~\textrm{is the self-application of}~\beta\)

which should come out true, on the intended interpretations of \(\operatorname{Sub}_x\) and \(\textrm{is the self-application of}\), so long as \(\beta\) designates the wff \(B\), or indeed any wff in which \(x\) occurs free.

Now let \(H\) abbreviate the formula:

\(\forall y. y~\textrm{is the self-application of}~x \supset \neg\,T\textrm{-provable}~y\).

Intuitively, this says that \(x\)'s self-application is not provable from \(T\). Or, phrased a bit differently: Nothing is a proof from \(T\) of the self-application of \(x\).

Now let \(G\) be the self-application of \(H\). That is, \(G\) says that "\(x\)'s self-application is not provable from \(T\)"'s self-application is not provable from \(T\). Or, going with the alternate phrasing: Nothing is a proof from \(T\) of the self-application of "Nothing is a proof from \(T\) of the self-application of \(x\)."

If you think it through, you'll see that \(G\) is the very sentence that it's saying isn't \(T\)-provable. Hence --- at least if we restrict our attention to models where our various syntactic predicates and functors have their intended interpretations --- we'll get the same results we got in the previous section. That is, so long as \(T\) is consistent:

  1. \(T\not\vdash G\);
  2. \(T\not\vdash \neg G\);
  3. \(G\) is true on all those models.

And we did this all without having any apparatus like the \(\operatorname{Self}\) quantifier, or self-referential terms like \(\textrm{this very sentence}\). The strategy used here in fact parallels the strategy used in Goedel's actual proof of the First Incompleteness Theorem. It's just that he does everything "coded" into arithmetic itself, instead of in a language whose intended models are construed as talking directly about the language's own syntax and proofs.

8

So let's start talking about how to "code" all this into arithmetic.

There are two key ideas here. The first is that we settle on some way of coding arbitrary terms and formulas of the language whose provability we want to discuss into single numbers. We also want to settle on some way of coding finite sequences of formulas (e.g., proofs) into numbers. It's okay if the code numbers for some multi-formula sequences are the same as the code numbers for some single formulas, so long as we keep track of what some code number we're working with is supposed to encode.

In these discussions, authors often typographically distinguish between mathematical terms and variables in their metalanguage, which they might write as \(m\), and official numerals for those terms in the language of arithmetic. I've been abbreviating the latter as \(S^m0\); other conventions are to write \(\overline{\textrm{m}}\) or \(\mathbf{m}\). It's also important to keep track of when we're talking about a linguistic expression and when we're talking about its code number. One convention is to use \(\#\xi\) to abbreviate the code number of expression \(\xi\); another is to use corner quotes, as in \(\ulcorner \xi\urcorner \). So:

  1. \(\xi\) is a linguistic expression;
  2. \(\#\xi\) (or \(\ulcorner \xi\urcorner \)) is its code number;
  3. \(S^{\#\xi}0\) is the numeral in the language of arithmetic designating that code number.

That's the first idea. You may have come across it in some form before. All that matters about the coding scheme is that we should have some effective procedure for encoding formulas (and sequences) into numbers, and then decoding numbers back into formulas (and sequences).
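
Here is one toy coding scheme in Python (the alphabet and the prime-power scheme are just illustrative choices; any scheme with effective encoding and decoding would do):

```python
SYMBOLS = "0S+*<=()~&v>Axy"      # a toy alphabet for the language
CODE = {s: i + 1 for i, s in enumerate(SYMBOLS)}

def primes():                    # unbounded prime generator (toy trial division)
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(expr):
    # code of s1 s2 ... sk is 2**CODE[s1] * 3**CODE[s2] * 5**CODE[s3] * ...
    g = 1
    for p, s in zip(primes(), expr):
        g *= p ** CODE[s]
    return g

def decode(g):
    out = []
    for p in primes():
        if g == 1:
            break
        e = 0
        while g % p == 0:
            g, e = g // p, e + 1
        out.append(SYMBOLS[e - 1])
    return "".join(out)

assert decode(encode("0=0")) == "0=0"
```

On this scheme a sequence of formulas can be coded the same way, taking the exponent on the \(i\)-th prime to be the code number of the \(i\)-th formula. (As noted above, such sequence codes can collide with formula codes; that's fine so long as we keep track of what a given number is supposed to encode.)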

The second idea is more complex. This is the idea that if you take some decidable function on numbers (perhaps a primitively recursive one), we'd like to be able to define or express it in our arithmetical theories. Similarly, if you have some decidable set or relation, we'd like to be able to define or express these in our arithmetical theories.

The way we do this is as follows. We say that some predicate \(\theta\) weakly arithmetically defines a relation \(R\) in a theory \(T\) when:

if \(R(m,n,\dots)\), then \(T\vdash \theta(S^m0,S^n0,\dots)\), and
if not, then \(T\not\vdash \theta(S^m0,S^n0,\dots)\)

By "predicate" here I mean to include any open formula whose free variables can be replaced by \(S^m0, S^n0, \dots\) We needn't be restricted to atomic predicates in the language.

We say that predicate \(\theta\) strongly arithmetically defines relation \(R\) in theory \(T\) (or for short, just arithmetically defines \(R\) in \(T\)) when:

if \(R(m,n,\dots)\), then \(T\vdash \theta(S^m0,S^n0,\dots)\), and
if not, then \(T\vdash \neg\theta(S^m0,S^n0,\dots)\)

We presume \(T\) is consistent, and doesn't just prove both of those claims in all cases; so the "if"s in the above can be read as "iff"s.
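
A quick illustration (my example, not Smith's or Burgess's): the formula \(\exists z(z+z=x)\) strongly arithmetically defines the set of even numbers in Peano arithmetic, since for even \(m\) Peano proves \(\exists z(z+z=S^m0)\) (take \(z\) to be \(S^{m/2}0\)), and for odd \(m\) it proves \(\neg\exists z(z+z=S^m0)\). A predicate that only weakly defined the set would leave the theory silent about the odd numbers, rather than refuting membership for them.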

Sadly, there is a variety of terminology for these notions. The terminology I'm using is close to Burgess's. Smith says instead that theory \(T\) captures relation \(R\) by the open formula \(\theta(x,y,\dots)\) (see his Definition 15).

Another variation in terminology is that sometimes we'll want to talk about predicates that (strongly) arithmetically define a relation in the theory of true arithmetic. Burgess would say that these predicates arithmetically define the relation simpliciter (with no reference to a specific theory). Smith says the relations are expressed by the corresponding formulas. (Again, with no reference to a specific theory, but merely to the relevant language; see his definitions 12 and 14. Note also that when Smith talks about a language, he understands it as coming together with its intended interpretation; so where he says "language of arithmetic", he means what I mean by "language of arithmetic together with its standard model".) The variation in usage is unfortunate. But I think it's clearest if we say explicitly "predicate \(\theta\) arithmetically defines relation \(R\) in true arithmetic" if that's what we want to be talking about.

The notion of a predicate arithmetically defining a set in theory \(T\), rather than a relation, is just a limiting case of the previous, where we have \(\theta(\alpha)\) instead of \(\theta(\alpha,\beta,\dots)\). We can also talk about a predicate arithmetically defining a function in theory \(T\), just when the predicate defines the graph of that function. That is:

if \(f(m,n,\dots)=r\), then \(T\vdash \theta(S^m0,S^n0,\dots,S^r0)\), and
if \(f(m,n,\dots)\ne r\), then \(T\vdash \neg\theta(S^m0,S^n0,\dots,S^r0)\)

As before, we presume \(T\) is consistent, so these "if"s can be read as "iff"s.

In the case of functions, it's sometimes convenient to work with a somewhat stronger notion. For predicate \(\theta\) to arithmetically define \(f\) as just described, \(T\) would have to prove \(\neg\theta(S^m0,S^n0,\dots,S^r0)\) for every case where \(f(m,n,\dots)\ne r\). But it wouldn't necessarily have to prove the generalization itself, which I just stated in the metalanguage. Our stronger notion will involve \(T\)'s also being able to prove the generalization. That is, we'll say that predicate \(\theta\) functionally defines \(f\) in theory \(T\) when:

if \(f(m,n,\dots)=r\), then \(T\vdash \theta(S^m0,S^n0,\dots,S^r0)\)
if \(f(m,n,\dots)=r\), then \(T\vdash \forall x.x\ne S^r0 \supset \neg\theta(S^m0,S^n0,\dots,x)\)
if \(f(m,n,\dots)\ne r\) then \(T\) proves neither of those things

Or equivalently:

\(f(m,n,\dots)=r\) iff \(T\vdash \forall x.x= S^r0 \subset\supset \theta(S^m0,S^n0,\dots,x)\)

Smith calls this notion captures* (see his Definition 54), and Burgess calls it arithmetically representing (rather than just defining) the function. I've seen others use terms like I'm proposing here, which seem to me to be more memorable. Again, it's a pity about the terminology.

One last bit of terminology here. I'll say that some sentence \(\phi\) in the language of arithmetic is an arithmetization of some claim about linguistic expressions, syntactic properties and manipulations of them, and provability from other formulas, iff: \(\phi\) ascribes some predicates that arithmetically define the relevant properties and functions, in a specified arithmetical theory \(T\), to the specified code numbers for those various linguistic expressions. This brings together both of the important ideas described in this section.

9

Now it turns out that our minimal arithmetics (I'll just speak of "theory \(Q\)", though as I said different authors mean slightly different things by that) are powerful enough to arithmetically define every (primitively or not) recursive function, and so too the relations and sets for which those are the characteristic functions. (We call the relations or sets "primitively recursive" when their characteristic functions are primitively recursive.) Our minimal arithmetics are also powerful enough to do the stronger thing of functionally defining the (primitively or not) recursive functions. Given the entrenched view that recursive functions exhaust the effectively calculable functions, this means that minimal arithmetic can define all the effectively calculable functions.

For Goedel's purposes, that strong result isn't really needed. It's enough that the more modest claim is also true, that minimal arithmetic is powerful enough to define all the primitively recursive functions. (See Smith's Theorem 27.) This is because all the syntactic properties and manipulations, and provability notions that we need to work with, are primitively recursive.

Here I'll mention some further notions that you may encounter when you read about these issues. They aren't necessary for the rest of our discussion here, though.

One kind of quantification we can call bounded. We might represent that as \(\exists x\lt n. \phi\) or \(\forall x\lt n.\phi\), where these are to be understood as abbreviating the following formulas in the language of arithmetic:

\(\exists x (x\lt S^n0 \wedge \phi)\)
\(\forall x (x\lt S^n0 \supset \phi)\)

Of course, we could have used \(\le\) instead of \(\lt\).

It's sometimes useful to talk about what can be arithmetically defined by formulas in the language of arithmetic that use no quantifiers, or, if they do use quantifiers, only use them in those "bounded" patterns. These formulas are sometimes said to have quantifier complexity \(\Delta\) (or \(\Delta_0\)). Burgess uses different language; he calls these formulas rudimentary.

Now consider formulas that can be put into prenex form (all quantifiers at the front), where they consist of a sequence of zero or more unbounded \(\exists\)s, followed by a formula of complexity \(\Delta\). These formulas are sometimes said to have quantifier complexity \(\Sigma_1\). Burgess calls them \(\exists\)-rudimentary. In fact we only need to think about a single unbounded \(\exists\); a sequence of more than one will be equivalent to a single unbounded \(\exists\) followed by further bounded \(\exists\)s. Note also that \(\Delta\) formulas count as \(\Sigma_1\), because we allow for the case where there are zero leading unbounded \(\exists\)s.

Similarly, formulas that in prenex form consist of a sequence of zero or more unbounded \(\forall\)s, followed by a formula that's of class \(\Delta\), are said to have quantifier complexity \(\Pi_1\) or to be \(\forall\)-rudimentary. Here too we really only need to think about a single unbounded \(\forall\).

The subscripts in \(\Sigma_1\) and \(\Pi_1\) are to allow for cases where we have a sequence of zero or more \(\forall\)s followed by a \(\Sigma_1\) formula. That would be a \(\Pi_2\) formula; and so on.
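
For orientation, here's roughly where some notions we've met land. \(x~\textrm{is even}\) can be written \(\exists z\lt Sx\,(z+z=x)\), a \(\Delta\) formula (here the bound is a term rather than a numeral, which the official definition can be stretched to allow). Assuming the proof relation from our coding gets a \(\Delta\) or \(\Sigma_1\) form (it does, for standard codings), the provability predicate \(\exists p.\,\mathsf{is~a~coded~proof~from}~T(p,y)\) is \(\Sigma_1\): one leading unbounded \(\exists\), with any further unbounded \(\exists\)s collapsing as just described. And a formula of the shape \(\forall p.\,\neg\,\mathsf{is~a~coded~proof~from}~T(p,y)\) --- the shape the Goedel sentence will take in the next section --- is correspondingly \(\Pi_1\).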

Some interesting facts:

There are also interesting relations between these syntactic categories and what kinds of functions and relations can be arithmetically defined in terms of them. I won't pause to explain that, though. You can read Smith or Boolos Burgess and Jeffrey for more details.

10

Okay, so now we're in a position to finally talk about Goedel's First Incompleteness Theorem. In popular discussion, there are really three different, closely related but importantly-different-in-the-details theorems here. So we might do better to talk about Goedel's First Incompleteness Triad. And even then, each corner of the triad can be formulated in subtly different ways; and furthermore, what's interesting is not just the Triad itself but all sorts of corollaries that quickly follow from it.

But I will exposit this in terms of a Triad. One of the corners of the Triad is closest to what Goedel strictly speaking proved in 1931.

In the following, I will sometimes talk of an axiomatizable theory. By this you can take me to mean a theory where the set of code numbers of the theory's axioms, for our specified coding scheme, is primitively recursive. Correlates of our results would also hold if the axioms were merely decidable, but that would complicate our discussion. In any event, the only theories we're dealing with have primitively recursive axiomatizations, and I will just call these theories "axiomatizable."

In some cases I'll want to discuss theories that may not be axiomatizable in that way, such as true arithmetic. (Indeed we'll see it isn't so axiomatizable.) In that event, I'll omit the adjective "axiomatizable." In other cases I'll want to talk more specifically about theories (like our minimal arithmetics \(Q\)) that are finitely axiomatizable.

I'll call the three corners of our Triad A, B, and C. Let's begin with Theorem B. (I'm staying consistent with the labeling I used in class, but am now discussing the corners in a different order.)

B. If \(T\) is an axiomatizable theory in the language of arithmetic that is deductively consistent and strong enough to arithmetically define all decidable relations on the numbers, then \(T\) is deductively incomplete.

As it turns out, minimal arithmetic and its extensions like Peano arithmetic are strong enough to arithmetically define all (not-necessarily-primitively) recursive relations on the numbers, which we assume is the same as all decidable relations on the numbers. So this theorem says that these theories are, if consistent, deductively incomplete. Hence they fall short of true arithmetic, which by definition includes all the sentences that are true in the standard model of arithmetic.

This theorem is what Smith calls his Theorem 5. He derives it as a quick result from two other theorems:

Smith-3. If \(T\) is an axiomatizable theory in the language of arithmetic that is deductively consistent and deductively complete, then it is decidable whether any given sentence is a theorem of \(T\).

(See also Burgess Theorem 15.7.) We already know that we can effectively enumerate the consequences of any (at least effectively enumerable) set of premises, so we could decide the question whether a given \(\phi\) is a theorem of \(T\) just by enumerating the consequences of \(T\)'s axioms; if \(T\) is deductively complete we're guaranteed to eventually see \(\phi\) or \(\neg\phi\) in the list, and then we can give our answer.
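
Here's that procedure as a Python sketch. The generator `theorems` and the function `negate` are hypothetical stand-ins for an effective enumeration of \(T\)'s theorems and for syntactic negation:

```python
def decide(phi, theorems, negate):
    # Decides theoremhood for a consistent, deductively complete theory:
    # completeness guarantees one of phi, ~phi eventually appears.
    neg_phi = negate(phi)
    for thm in theorems():
        if thm == phi:
            return True
        if thm == neg_phi:
            return False
```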

What if \(T\) is inconsistent? Well, in that case too it's easy to decide whether \(\phi\) is a theorem. Just always say "yes."

But what if we don't know whether \(T\) is consistent or not? This is a good point to get clear on: it doesn't matter. When we ask whether we can decide whether some formula belongs to some set, it doesn't matter how the set was described. If it's in fact the set that includes all the wffs in the language, then always saying "yes" is in fact an effective decision procedure for saying whether an arbitrary formula belongs to the set. Even if you don't know, and can't prove, that the way you in fact specified the set is equivalent to the set as specified here. If you want a finer-grained notion of deciding questions, you may well be able to find or develop such. But that is not the way the standard notions that we're working with are understood. To take another example, it is effectively decidable what numbers belong to this set:

\(\{\, x\mid x~\textrm{is even}\vee \textrm{God exists} \,\}\)

If God exists, then the decision procedure is to always say "yes"; if God doesn't exist, then the decision procedure is to say "yes" iff the number is even (which is a decidable question). Now which of those is the decision procedure? I wish I knew. But in either event, there is a decision procedure, which can effectively give the correct answer for any argument \(x\) in a finite number of steps. So the set --- whichever set it turns out to be --- is decidable.
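
In code, the point is that one of the two procedures below is a correct decider for that set; we may not know which, but each is perfectly effective:

```python
def decide_if_god_exists(x):
    return True                  # the correct decider if the second disjunct holds

def decide_otherwise(x):
    return x % 2 == 0            # the correct decider if it doesn't: a parity test
```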

In the present context too, we don't need to know or prove anything about whether \(T\) is consistent. If it is, then the decision procedure implied by Smith-3 is the more laborious one I described of enumerating all the theorems and seeing which of \(\phi\) or \(\neg\phi\) appears. If it isn't consistent, then the decision procedure is the easier one of always saying "yes, the formula you're asking about is provable."

The other theorem Smith uses to derive our Goedel Theorem B is:

Smith-4. If \(T\) is an axiomatizable theory in the language of arithmetic that is deductively consistent and strong enough to arithmetically define all decidable relations on the numbers, then \(T\) is not decidable.

See Smith sections 7 and 8 for an informal justification for this. (The introduction to Episode 6 is also relevant.)

It follows that if you have axiomatizable + consistent + strong-enough-to-define-the-decidable, you can't also have complete. That's what Goedel Theorem B says.

One downside of our Goedel Theorem B is that it doesn't actually construct a sentence that's neither provable nor disprovable. It just tells us that there is such a sentence. Also, it's somewhat difficult to get clear on a rigorous understanding of "effectively decidable," and to persuade ourselves that \(Q\) and Peano arithmetic are "strong-enough-to-define-the-decidable."

The other two corners of our triad won't have these shortcomings.

Back in section 7, we posited the formula \(H\), which was:

\(\forall y. y~\textrm{is the self-application of}~x \supset \neg\,T\textrm{-provable}~y\)

Notice that \(H\) is a formula with one free variable \(x\). We let \(G\) be the self-application of \(H\) wrt that variable. In effect, \(G\) says "I am not provable from \(T\)".

What Goedel did was (using the term I introduced at the end of section 8) arithmetize this construction.

To see how this works, I think it's helpful to separate the relevant notions into three parallel versions. We'll let \(Q\) be our relevant \(T\); then let's say that:

  1. \(\textrm{is a proof from}~Q~\textrm{of}\) is a relation between linguistic items: a sequence of formulas and the formula it proves;
  2. \(\textrm{is a coded proof from}~Q~\textrm{of}\) is the corresponding (primitively recursive) relation between numbers: it holds between \(m\) and \(n\) when \(m\) is the code of a proof from \(Q\) of the formula coded by \(n\);
  3. \(\mathsf{is~a~coded~proof~from}~Q(p,y)\) is a predicate in the language of arithmetic that arithmetically defines that numerical relation in \(Q\).

Similarly, let's say that:

  1. \(\textrm{is the self-application of}\) is a relation between linguistic items, as in section 7;
  2. \(\textrm{is the coded self-application of}\) is the corresponding (primitively recursive) numerical relation between their code numbers;
  3. \(\mathsf{is~the~coded~self{-}application}(x,y)\) is a predicate in the language of arithmetic that arithmetically defines that numerical relation in \(Q\).

At this point, we could define \(Q\textrm{-provable}~y\) as \(\exists p. \mathsf{is~a~coded~proof~from}~Q(p,y)\), then define \(H\) as we did before:

\(\forall y. \mathsf{is~the~coded~self{-}application}(x, y) \supset \neg\exists p. \mathsf{is~a~coded~proof~from}~Q(p, y)\)

Smith works instead with the equivalent \(U\), which we can understand as:

\(\forall p \forall y. \mathsf{is~the~coded~self{-}application}(x,y) \supset \neg\,\mathsf{is~a~coded~proof~from}~Q(p, y)\)

Both of these say, intuitively, that Nothing is a proof of \(x\)'s self-application. We then apply \(U\) to itself, and let \(G\) be the result. Now we'll be able to show, as in section 7, that if \(Q\) is consistent, then \(Q\not\vdash G\) and \(Q\not\vdash \neg G\).

The remaining corners of our First Incompleteness Triad differ in what assumptions they rely on to show this.

One of the corners is the theorem that:

A. If \(T\) is an axiomatizable theory in the language of arithmetic that is not merely deductively consistent but arithmetically sound (that is, all its axioms are true on the standard model of that language), then \(T\) will be deductively incomplete.

In other words, axiomatizable + arithmetically sound implies incomplete. This theorem doesn't tell us that true arithmetic is incomplete, because it's not axiomatizable. It does tell us that axiomatizable theories like minimal arithmetic and Peano arithmetic are incomplete, if their theorems are subsets of true arithmetic. (The most salient way in which they'd fail to be sound is if they were deductively inconsistent. But being sound is a stronger constraint than merely being consistent.)

This theorem approximates what Smith calls his Theorem 1. See also the refinements in his Theorems 37 and 38.

To prove this theorem, we first construct a Goedel sentence \(G\) that concerns provability in \(T\). We don't require the relations \(\textrm{is a coded proof from}~T~\textrm{of}\) and \(\textrm{is the coded self-application of}\) to be arithmetically definable in \(T\), but merely in true arithmetic.

Next we establish the lemma that \(G\) will be "standardly true" --- that is, true in the standard model of arithmetic --- iff it's not provable from \(T\). How? Observe that the lhs of this biconditional requires that for all proof-codes \(m\), \([\![\, \neg\,\mathsf{is~a~coded~proof~from}~T(S^m0, S^{\#G}0) \,]\!]_{\mathscr{N}}\) is \(\mathsf{True}\); which given what the embedded formula defines in true arithmetic, will be the case iff no \(m\) is the code for a proof from \(T\) of \(G\). That establishes our lemma.

Then we observe that if \(T\) could prove \(G\), it would (by our lemma) therefore be proving something that's not standardly true --- contradicting the assumption that \(T\) is arithmetically sound. So \(T\) can't prove \(G\), and (again by our lemma) this shows that \(G\) is standardly true. Hence if \(T\) is arithmetically sound, it can't prove \(\neg G\) either.

That's all it takes to prove Goedel Theorem A. The hard work is in the construction of \(G\), and defending the claim that the relevant notions can be arithmetically defined.

Though it's easy to prove Theorem A, this theorem relies on semantic notions like a theory's being arithmetically sound: its theorems being true in a certain model. So some philosophers (especially around the time Goedel was writing) would be suspicious of it for that reason. The third corner of our triad, on the other hand, only relies on non-semantic notions, and notions that (unlike those used in Theorem B) are entirely rigorous, like the notion of a primitively recursive function. Our theorem here is:

C. If \(T\) is an axiomatizable theory that is deductively consistent and extends \(Q\) (really, extends any theory of arithmetic strong enough to arithmetically define all primitively recursive relations; \(Q\) is such a theory), then there will be a "Goedel sentence" \(G\) such that \(T\not\vdash G\), and also (unless \(T\) is \(\omega\)-inconsistent) \(T\not\vdash \neg G\).

Simplifying, this says that axiomatizable + consistent + strong-enough-to-define-the-primitively-recursive implies incomplete. This is what Burgess calls Theorem 17.9. (His Theorem 17.7 approximates our Theorem B and Theorem C in different ways.) Theorem C approximates what Smith calls his Theorem 2; see also the refinements in his Theorems 44, 45, and 48.

This is what Goedel historically proved. He didn't talk about \(Q\), which hadn't at that point yet been formulated, but merely about theories being strong-enough-to-define-the-primitively-recursive.

A few years later Rosser came up with a more complicated sentence (the Rosser sentence, that says "For any proof of me, there's an earlier proof of my negation"), that has the advantage of allowing us to remove the hedge about \(\omega\)-consistency, and just rely on \(T\)'s merely being consistent. But it has the disadvantage of being harder to think about. So I'll just continue using the Goedel sentence.

First, we construct our sentence \(G\), concerning provability in \(Q\) and where the relevant relations are arithmetically definable in \(Q\).

Next, let's persuade ourselves that \(Q\not\vdash G\). Suppose for reductio that \(Q\) did prove \(G\). That is:

  (i) \(Q\vdash \forall p \forall y. \mathsf{is~the~coded~self{-}application}(S^{\#U}0,y) \supset \neg\,\mathsf{is~a~coded~proof~from}~Q(p, y)\)

Equivalently:

  (ii) \(Q\vdash \forall p. \neg\,\mathsf{is~a~coded~proof~from}~Q(p, S^{\#G}0)\)

Well, if \(Q\) did prove \(G\), then the proof would have a code number \(m\). Then by virtue of what \(\mathsf{is~a~coded~proof~from}~Q\) defines in \(Q\), that would mean that:

  (iii) \(Q\vdash \mathsf{is~a~coded~proof~from}~Q(S^m0, S^{\#G}0)\)

But (ii) and (iii) could only both be true if \(Q\) were inconsistent, which our Theorem assumes is not the case.

Next, let's ask whether \(Q\) might prove \(\neg G\). That is:

  (iv) \(Q\vdash \neg\forall p \forall y. \mathsf{is~the~coded~self{-}application}(S^{\#U}0,y) \supset \neg\,\mathsf{is~a~coded~proof~from}~Q(p, y)\)

Equivalently:

  (v) \(Q\vdash \neg\forall p. \neg\,\mathsf{is~a~coded~proof~from}~Q(p, S^{\#G}0)\)

However, we saw above that for no \(m\) could \(Q\) prove (iii). Indeed, it can be shown that \(Q\) will prove the negation of (iii) for every \(m\). Hence, if \(Q\) also proved (iv), then \(Q\) would be \(\omega\)-inconsistent.

That completes the proof of Theorem C.

All of this would be reproducible in any extension \(T\) of \(Q\) that remained axiomatizable and consistent. (We need \(T\) to be axiomatizable so that the notion of being a proof from \(T\) can be primitively recursive.)

If we rely not just on the fact that \(\textrm{is~the~coded~self{-}application~of}\) is arithmetically definable, but on the fact that it's also functionally definable, then we can prove that not only is \(G\) not provable in \(Q\), it's also provable in \(Q\) that

\(G \subset\supset \neg\,Q\textrm{-provable}(S^{\#G}0)\)

See Smith sections 38, 39, and 40.3; and Burgess Lemma 17.1.

11

Okay, so \(G\) is neither provable nor disprovable from some favored axiomatization of arithmetic; why don't we then just add \(G\) as a further axiom? You can do that. But then --- so long as your new theory is still axiomatizable and deductively consistent --- the First Incompleteness Theorem will still apply, and so there will be a new Goedel sentence that talks about its own unprovability in the new theory. Plus, since the new theory is an extension of the old one, the new Goedel sentence won't be provable from the old theory, either. (See Smith sections 3.2 and 34.4.)

12

Here are some of the other results that quickly follow from the First Incompleteness Theorem, or the apparatus we develop to prove it.

13

What about Goedel's Second Incompleteness Theorem?

Quickly, what's done here is to prove in a slight strengthening of \(Q\) --- let's call it \(Q'\) (see Smith Definition 63 for details) --- that:

\((\bot~\textrm{is not provable from}~Q') \supset G~\textrm{is not provable from}~Q'\)

(See Smith Theorem 57.) That just amounts to recapitulating part of the proof of the First Incompleteness Theorem but within \(Q'\), instead of in our informal metalanguage. But since the consequent of this conditional is (provably in \(Q\)) equivalent to the Goedel sentence for \(Q'\), this tells us:

\(Q'\vdash (\bot~\textrm{is not provable from}~Q') \supset G\)

and hence:

if \(Q'\vdash \bot~\textrm{is not provable from}~Q'\), then \(Q'\vdash G\).

However we know from the First Incompleteness Theorem that if \(Q'\) is consistent, then \(Q'\not\vdash G\). Result:

if \(Q'\) is consistent, then \(Q'\not\vdash \bot~\textrm{is not provable from}~Q'\).

(See Smith Theorem 58.) Another way to say this is: if \(Q'\) is consistent, it can't prove that fact about itself. The same holds for any axiomatizable extension of \(Q'\), such as Peano arithmetic. (See Burgess Theorem 18.1.)

That needn't mean that it's impossible to rigorously prove that these theories are consistent; only that it's not possible to prove it in the theories themselves. One might prove it in a stronger theory; or in a different theory, that's neither weaker nor stronger. (As mentioned in section 3, Gentzen proved the consistency of Peano arithmetic in a different theory, a strengthened version of "primitively recursive arithmetic.") But then it can fairly be asked, why are we entitled to the assumption that these other theories are consistent?

There are many serious and hard questions about the significance of the Second Incompleteness Theorem, that we won't try to sort out here.