Notably, this last clause includes its not mattering how many objects there are in the world. Except: in mainstream logic we always assume that there are *some* objects, which our quantifiers always range over and which our term constants always designate. Arguably this is not true for natural language.
* Some terms seem to be "empty," or to fail to designate anything. Perhaps the chemical term *phlogiston* works like this, or the name *Sherlock Holmes*, in some of its uses. (These examples and others are controversial.)
* Sometimes we seem to use quantifiers without committing ourselves to the reality of what we're talking about, as when we say:
> Some of the things posited by my colleagues don't exist.
or:
> Some of the children you might have had will never in fact exist.
Arguments can and should be had about what's really going on in such cases. I just observe that mainstream logic makes no attempt to accommodate them. Later in the term, we may consider **universally free** logics, which permit the domain of quantification to be empty and also permit term constants to be meaningful even when non-referring. (Those logics do still construe $\exists$ as being existentially committal.)

This semantic conception is not the only way that logical notions can be, and historically have been, analyzed. See Sider pp. 1--3, 6--top of 11, and 28--29 for the semantic conception, and pp. 1--2 and 8--9 for some alternatives; see also Gamut pp. 114--16. Even today, some formalisms are more productively first thought of in non-semantic terms. But this is by far the dominant contemporary understanding, and it is what we will use.
#. We assume, then, that the semantic properties of a language mathematically settle what its logical properties are. You can raise questions about *how we know* what the language's semantic properties are, especially in the case where it's not a language whose syntax and semantics we're formally stipulating. You can raise questions about whether the language has *enough* semantic properties to settle certain logical issues. You can raise questions about *how we know* what mathematically settles what --- what patterns we're entitled to follow in our metalinguistic reasoning about the language whose logical properties we're studying. Those are all decent questions. Perhaps some of them even belong in part to the "philosophy of logic." But they aren't part of what one does when engaging in the kind of metalogical inquiry we're getting familiar with in this course. So if any of these questions grip you or keep you up at night, file them away to come back to another time.
Likewise, if you think the nature of logical consequence has to in some way be metaphysically prior to this notion of "mathematically settling" that I'm helping myself to, that too may be a good question, but isn't one we're going to pursue here.
#. So, as I said, we assume that the semantic properties of a language mathematically settle that it has certain logical properties --- for instance, settle that some sentence is logically true, or settle that some set of sentences logically entail another sentence, or settle that they are logically incompatible with each other. Now though the logician ignores much of the epistemology of this, she does concern herself with questions about what "effective methods" there are to establish that such properties are present or absent.
I'll call the combination of a language and a specified semantics for it a **logical system**. (Usually the term **logic** is reserved for more than this: some at-least-sound proof system is also expected.)
In the best kind of case, we'll have a **decision procedure** for a logical system: some effective recipe or algorithm that's guaranteed to answer our questions about logical properties, yes-or-no, after some finite number of steps. (Though there's no guarantee it will be very fast or efficient.) **Proof or derivation systems** for a logic are generally not of that sort, because generally such decision procedures are unavailable. In practice, proof or derivation systems are instead frameworks that let you creatively search for an answer. The proof system will ensure that if you find one, you can rely on its being correct. But it may take some ingenuity and artistry on your part to do this efficiently.
What I just said is a good picture to start with. But it needs to be filled in and qualified in some ways. First, as I hinted, in some logical settings we do have decision procedures. When we're dealing just with **sentential logic**, for example --- also called propositional logic and some other names --- then questions about what's logically true and what logically entails what are decidable. The familiar truth-tables are one effective decision procedure. Some *fragments* of first-order **predicate logic** are also decidable. ([Monadic predicate logic](syntax.htm#monadic) is one such; there are others.) But first-order predicate logic in full generality is not decidable. Still, the logical truths of first-order predicate logic can at least be *enumerated* by a Turing machine. More on this in a moment. In some logical settings, we cannot even do that much. The logical truths of *some fragments* of second-order logic can also be enumerated; but not *all* of the logical truths of second-order logic.
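To make the truth-table procedure concrete, here is a minimal Python sketch (my own illustration, not from the readings; the tuple encoding of formulas is an assumption of the sketch) that decides whether a sentential formula is a tautology by checking every valuation:

```python
from itertools import product

# Formulas as nested tuples: ('p',) is a sentence letter; the others are
# ('not', f), ('and', f, g), ('or', f, g), ('if', f, g).

def atoms(f):
    """Collect the sentence letters occurring in a formula."""
    if len(f) == 1:
        return {f[0]}
    return set().union(*(atoms(sub) for sub in f[1:]))

def value(f, v):
    """Truth value of formula f under valuation v (a dict letter -> bool)."""
    op = f[0]
    if op == 'not':
        return not value(f[1], v)
    if op == 'and':
        return value(f[1], v) and value(f[2], v)
    if op == 'or':
        return value(f[1], v) or value(f[2], v)
    if op == 'if':
        return (not value(f[1], v)) or value(f[2], v)
    return v[op]  # a bare sentence letter

def is_tautology(f):
    """Decide logical truth by checking every row of the truth table."""
    letters = sorted(atoms(f))
    return all(value(f, dict(zip(letters, row)))
               for row in product([True, False], repeat=len(letters)))

# p -> (q -> p) is a tautology; p -> q is not.
print(is_tautology(('if', ('p',), ('if', ('q',), ('p',)))))  # True
print(is_tautology(('if', ('p',), ('q',))))                  # False
```

A formula with $n$ sentence letters has only $2^n$ valuations, so this always halts with a yes-or-no verdict: a genuine decision procedure, though an exponentially slow one.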
#. To get a better handle on this, let's think more carefully about what we mean by a formal proof. A formal proof will just be some string that conforms to certain formally specified rules. (Remember that most of the arguments you're producing in this class *aren't* formal proofs; though they should be rigorous in the way that informal arguments in mathematics are standardly expected to be.) It should be effectively decidable whether some string does constitute a formal proof of sentence $\phi$ in a given proof system. That's just part of what we mean by a "proof system."
Now generally a proof system is intended to track the logical properties of a specific logical system. (Though sometimes the proof system is devised first, and the corresponding semantics supplied later.) There are two important notions about how a proof system might relate to semantically-grounded logical properties of its intended logical system. We say that the proof system is **sound** when things that you can prove in it always do have the corresponding semantically-grounded logical properties. If a proof system is known to be unsound, we throw it away. No one wants that. In the other direction, we say that a proof system is **complete** when, if certain semantically-grounded logical properties are present, the corresponding results will be provable. This is harder to get. In the case of first-order predicate logic, the various proof systems we have are known to be both sound and complete. So here provability *does* entail semantically-grounded logical truth, and vice versa. But for some logical systems, such as second-order logic, we not only lack complete proof systems, we know it's not possible to get them. (If they're also sound. You can always get a complete proof system for cheap by making every sentence provable. But as I said, nobody wants a proof system that isn't sound.)
Proof systems that aren't complete can still be useful, because we may find proofs of sentences we didn't already know to be logical truths, and so finding a proof contributes to the advancement of our knowledge. But if the proof system isn't complete, there will always be some semantically-grounded logical truths that are "left out", that such proofs can never reach. (If there are multiple proof systems, the truths left out by one system may be different from those left out by another.)
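A toy illustration may help fix the ideas of soundness and completeness. The following Python sketch is my own invention, not from the readings: it stipulates a "semantics" on which the truths are the odd-length strings of 1s, and pairs that with a proof system whose only rule appends four 1s at a time, so the system is sound but incomplete:

```python
def theorem(s):
    """Toy 'semantics': the truths are the odd-length strings of '1's."""
    return set(s) <= {'1'} and len(s) % 2 == 1

def provable(s, depth=20):
    """Toy proof system: axiom '1'; rule: from t infer t + '1111'.
    Every provable string has odd length (lengths 1, 5, 9, ...), so the
    system is sound; but it is incomplete, since the truth '111' is
    unreachable. (depth just bounds the search for this illustration.)"""
    line = '1'
    for _ in range(depth):
        if line == s:
            return True
        line += '1111'
    return False

# Soundness holds: everything provable is a truth.
assert all(theorem(s) for s in ['1', '11111', '1' * 9] if provable(s))
# Completeness fails: '111' is a truth this system can never prove.
print(theorem('111'), provable('111'))  # True False
```

And as the main text notes, completeness alone is cheap: a "proof system" on which every string counts as provable would be complete for this semantics, but unsound and worthless.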
#. Sometimes logicians concern themselves not just with what's *logically true* in a system, but with *what's logically entailed by a given set of axioms*. It's natural to think that questions about decidability, or provability, with respect to the latter will go hand-in-hand with questions about decidability, or provability, in the background logic. However that is not so. Some fragments of arithmetic, for example, are decidable even though the logical properties of their language are not decidable in general. And other fragments of arithmetic, famously, must fall short of being able to prove all their truths, though their background logic does not have the corresponding failing. We will discuss this all later. For now, I'm just trying to give you an initial orientation.
#. Let's return to the difference between proof systems and decision procedures. The fundamental difference here is that a decision procedure is guaranteed eventually to give you a yes-or-no answer. Whereas, with a proof system that happens *not* to be a decision procedure, you lack the guarantee of getting one of the answers. You can try and try to prove something, but the mere fact that you've failed so far doesn't mean that success isn't still out there waiting around the corner to be had. Now in some cases we can (informally) prove that *particular claims* will be (formally) unprovable. But in general we can't rely on knowing that. The mere existence of a proof system won't *in general* enable us to effectively decide whether some claim that's so far unproven is in principle unprovable.
I said earlier that "in practice," proof systems require ingenuity and artistry. By their nature, decision procedures will be "mindless and mechanical." Theoretically, there's nothing about a proof system which *demands* creativity, and no sense in which creativity can *stretch the limits* of what the proof system may achieve via more mindless, not-necessarily-efficient, strategies. (Creativity may help you to discover a different proof system.) After all, formal proofs are just finite strings, so we can enumerate all the candidates. Moreover, we can effectively decide whether any given candidate really is a proof in a given system of a given sentence $\phi$. So if there's a proof of $\phi$ to be had, there's an effective enumerating procedure that's guaranteed eventually to yield it. It's just that, in the case of undecidable systems, we *won't* be able to effectively tell whether a proof we've so far failed to find will never arrive.
The place for "ingenuity" and "artistry" in the use of a proof system is in perhaps finding a proof of the desired conclusion faster than the effective enumerating procedure I just described would.
#. We'll come back to all of this. The important points for now are that: (a) facts about what's logically true, what logically entails what, and so on, are grounded in a language's semantics. (b) For some systems, we have effective procedures that are sound, complete, and moreover capable of *deciding* whether those properties are present. They'll not just enable us to prove the properties are present when and only when they are; they'll also enable us to tell that the properties are absent. (c) For other systems, like predicate logic, we only have the first two virtues. Our proof systems will enable us to prove the properties are present when and only when they are. (d) For other systems --- ones that we won't look at for a while, and then only briefly --- the gods are even less kind.
But since we're going to be talking largely about systems of the sort described in (b) and (c), these are cases where many semantically-grounded properties coincide with many proof-grounded properties. And it can be hard to keep track of which words go with which notions. For instance, is a "tautology" defined as semantically-guaranteed-to-be-true, or as provable-from-no-premises? Since the semantic properties and the provability properties happen to coincide in the region we'll first and mainly be exploring, the distinction is easy to lose sight of. Indeed authors use some of these words differently, and some authors aren't even consistent with their own usage. Nonetheless, I think this is worth putting some effort into keeping track of. When the context of inquiry changes, you'll want to know whether you're talking about a notion that is *stipulated* to be semantic, and may or may not as an interesting logical result turn out to coincide with something proof-theoretic, or whether it's the other way around.
* Standard practice is to use the terms **logical truth** and **logical consequence/entailment** for the semantically-grounded notions. Some authors say "semantic entailment" to emphasize this. The core idea here is that $\phi$ is logically true in system $S$ (written $\models_S\phi$, normally I will omit the subscript and leave it to context to settle what system is being discussed) when $\phi$ will remain true no matter how its non-logical predicate, sentence, functor, and term constants are interpreted, and no matter what the extra-linguistic world is like (so long as $S$'s domain of quantification, if it has quantifiers, is not empty). And a set of sentences $\Gamma$ logically entail $\phi$ (written $\Gamma\models\phi$) when no matter what variations of the sort just described are allowed, whenever $\Gamma$ are all true, so too is $\phi$. We will make these ideas precise later.
Logical truths are also called **valid** or **universally valid** sentences. This is confusing given the use of "valid" in mainstream philosophy to mean an argument whose premises logically entail its conclusion. (Amongst logicians, "valid" and "sound" are seldom used with their mainstream philosophical meanings.) When something is a logical truth (valid sentence) of sentential logic, it's called a **tautology**. Some authors also apply the term "tautology" to logical truths in other systems that have the form of a sentential tautology, such as $\forall x Fx \vee \neg\forall x Fx$. These count as logical truths because of structural properties they have that are represented by sentential logic.
* Standard practice is to say that two expressions (primarily, formulas) are **logically equivalent** (some say "semantically equivalent") when they come out having the same interpretation, no matter what variations of the sort previously described are allowed. We can write that $\phi$ and $\psi$ are logically equivalent as $\phi\bimodels\psi$. We'll see another way to express this same idea later, and also how to express some other claims that in the context of the logical systems we're looking at turn out to be equivalent.
See Sider pp. 33--37, 95--98; Bostock pp. 24--25; Gamut pp. 46, 48--52, 99; and Burgess pp. 119--20, 148 for expositions of these notions.
* One natural correlative notion would be of a **logical falsehood**: a sentence guaranteed to come out false, no matter what variations of the sort previously described are allowed. Gamut calls this a "contradiction" (pp. 52--53, 99). I suspect some authors use that term for a provability notion instead, but I haven't yet caught any of the ones we're looking at doing it.
* Another natural correlative notion would be of a set of sentences $\Gamma$ that are **logically incompatible**, that is, that are guaranteed *never* to come out simultaneously true, no matter what variations of the sort previously described are allowed. We can represent that as $\Gamma \models \bot$. Many authors (Burgess is an example) call such sets **unsatisfiable** (pp. 120, 126, 148).
Here "satisfying" is something that a model or interpretation does to a set of sentences. Earlier we used that verb in a different way: it was something that an object might do to an open formula like $Fx \wedge x