Sample answers in purple.

  1. Answer the following:

    1. If a relation is transitive and reflexive, must it be symmetric? No; for example, \(\le\) on the numbers is transitive and reflexive but not symmetric.
    2. If a relation is transitive and irreflexive, must it be asymmetric? Yes
    3. If a relation is transitive and symmetric, must it be reflexive? Tricky! The answer is no, because the antecedent permits there to be objects that don't stand in the relation to anything, including themselves.
    4. If a relation is asymmetric, must it be irreflexive? Yes
    5. What's a more familiar name for a symmetric pre-order? An equivalence relation --- it's transitive, reflexive, and symmetric. An unintuitive limiting case of a kind of "order" relation.
    6. What is the difference between a weak and a strict partial order? Weak orders are reflexive, like \(\le\); strict orders are irreflexive, like \(<\). For partial orders, the other requirements are transitivity and anti-symmetry. (Anti-symmetry plus irreflexivity, as in the case of a strict partial order, amounts to asymmetry.)
    7. What is the difference between a pre-order and a partial order? Partial orders add the requirement of anti-symmetry. So \(a\) and \(b\) can only stand in the relation symmetrically when \(a=b\). Pre-orders permit distinct objects to stand in the relation symmetrically.
    8. What is the difference between a partial order and a total order? Total orders are partial orders where any two distinct objects are comparable. For a strict order, this means we'll have trichotomy: exactly one of \(aRb\), \(a=b\), or \(bRa\) will be true.
    9. What does it mean for a relation to be trichotomous? See end of previous answer. Many of you didn't notice that trichotomy requires the options to exclude each other: that is, that one and only one of them obtains.
    10. What does it mean for a relation to be Euclidean? If \(aRb\) and \(aRc\) then \(bRc\) (and \(cRb\)).
    11. What does it mean for a relation to be serial? \(\forall x\exists y.xRy\)
    12. What does it mean for a relation to be dense? Whenever \(xRz\), there's some \(y\) (other than \(x\) and \(z\)) such that \(xRy\) and \(yRz\). Among familiar numbers, the rationals and reals are dense, on their standard ordering.
    13. What does it mean for \(a\) and \(b\) to be comparable wrt some relation? One of \(aRb\) or \(bRa\) is true.
    14. What is the "ancestral" or transitive closure of the relation "is a parent of"? Might it be, perhaps, the relation of being an "ancestor"? Often when taking the transitive closure of a relation one also allows the identity relation as a limiting case, which may make a difference if the relation wasn't reflexive to start with. In that case, the ancestral of "parent" would be "is identical to or an ancestor of".
    15. What is the equivalence class of Ben under the relation "has the same height as"? The class/set of people (or whatever the relevant universe is) who have the same height as Ben, including Ben himself.
    16. Is the relation "is at least as old as" symmetric, anti-symmetric, or asymmetric? Tricky! None of the above. Some of you chose anti-symmetric: but that would imply that if you and I are at least as old as each other (that is, if we have the same age), then we are numerically identical.
    17. Is the relation "is a brother of" symmetric? Depends on what the relevant domain or universe is. Amongst boys, yes. Amongst siblings of mixed genders, no.
    18. What is the difference between a maximal element, a greatest element, and a least upper bound (also called a "supremum") wrt some ordering relation? Maximal elements: nothing in the set is \(R\) (e.g. longer than) it. There may be several of these; consider the maximally long words in the dictionary. Greatest element: it's \(R\) everything else in the relevant set. When this exists, it is unique. (There could be a longest word in the dictionary, for all I know.) The previous notions are defined for a specified ordering relation and set. The notion of an upper bound is defined against a richer background: our target set needs to be a subset of a (potentially) larger set on which the ordering relation is also defined. An upper bound of the smaller set must be part of the larger set, and may or may not be part of the smaller set; it will be an object that everything (else) in the smaller set stands in the ordering relation to. A least upper bound will be an upper bound which stands in the ordering relation to any other upper bound. When a least upper bound is part of the smaller set, it is a greatest element with respect to that set and order.
    19. If an ordered set has any upper bounds, must it have a least upper bound? No. If the ordered set is dense, at least in the region where the smaller set stops and the larger set continues, there might be smaller and smaller upper bounds all of which are still outside the smaller set. Any interval of reals will have a least upper bound, on the standard ordering; but an interval of rationals need not. (This is the fundamental difference between rationals and reals.) To use an example one of you cited: let the smaller set be the subset of rationals less than \(\sqrt 2\). There is no largest rational in that set, and neither is there any smallest rational outside of that set. So the set has many upper bounds (any rational greater than \(\sqrt 2\)); but no least upper bound. Another example: consider the set of strings \(\{\, ab,ba,aba,bab,baba \,\}\) ordered by the relation "is a substring of". The smaller set \(\{\, ab,ba \,\}\) has three upper bounds in the larger set, none of which belong to the smaller set and none of which is least.
  2. What does it mean for an ordering relation on some set to be well-founded? Is the relation "is an initial substring of" on the set of strings over some alphabet well-founded? Is the set well-ordered by that relation? Well founded: any non-empty subset has some minimal element(s). Yes, that relation is well-founded on that set; but it might not be total, and so fail to be a well-ordering. If the strings are generated from a single letter, the relation will be total; but in general it won't be. Distinct strings can be such that neither is an initial substring of the other. When we have a well-founded relation that's total, and so a well-ordering, the minimal element in each non-empty subset will be, more strictly, a least element and so unique.
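    Several of the answers above can be checked mechanically once a relation is represented as a finite set of ordered pairs. Here's a sketch in Python (the property definitions are the standard ones; the family names are invented for illustration):

    ```python
    # A finite relation R is a set of ordered pairs over a domain `dom`.
    def reflexive(R, dom): return all((x, x) in R for x in dom)
    def symmetric(R):      return all((y, x) in R for (x, y) in R)
    def transitive(R):     return all((x, z) in R
                                      for (x, y) in R
                                      for (w, z) in R if w == y)

    # Answer 3's point: transitive and symmetric, yet not reflexive,
    # because 2 doesn't stand in the relation to anything, itself included.
    dom = {1, 2}
    R = {(1, 1)}
    print(transitive(R), symmetric(R), reflexive(R, dom))  # True True False

    # Answer 14: the transitive closure ("ancestral") of a relation,
    # built by composing pairs until a fixed point is reached.
    def transitive_closure(R):
        closure = set(R)
        while True:
            new = {(x, z) for (x, y) in closure
                          for (w, z) in closure if w == y}
            if new <= closure:
                return closure
            closure |= new

    parent = {("Alice", "Beth"), ("Beth", "Carl")}  # hypothetical family
    print(sorted(transitive_closure(parent)))
    # [('Alice', 'Beth'), ('Alice', 'Carl'), ('Beth', 'Carl')]
    ```

    Adding the identity pairs to the closure would give the "identical to or an ancestor of" variant mentioned in answer 14.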

  3. Try to explain each of the following in language that's less technical and which you're sure you understand, but which is as rigorous as is compatible with the first two requirements.

    1. some theory is deductively consistent The theory doesn't prove any sentence and its negation. (There are multiple ways to define consistency.) If "theories" are understood to be deductively closed, this will entail that the theory doesn't include any sentence and its negation; but the former definition is more general as it doesn't make any assumptions about whether "theories" are understood so as to always be deductively closed.
    2. some theory is deductively closed Any formula provable from premises in the theory is also in the theory.
    3. some theory is deductively complete This assumes that we've specified what the theory's language is. It means that for every wff in the language (or perhaps we restrict ourselves to sentences), either that formula or its negation is provable from the theory. As with the definition of consistent, this might be phrased in terms of being \(\in\) the theory rather than in terms of being provable from the theory.
    4. some theory is effectively decidable The question whether some arbitrary formula is provable from (or belongs to) the theory is answerable via some "effective" (mechanical/no-ingenuity-required) procedure, guaranteed to deliver the correct answer after a finite number of steps.
    5. some theory is effectively enumerable Some "effective" procedure can list the formulas provable from (or belonging to) the theory, in such a way that for any correct answer, eventually (after some finite number of steps) that answer will be listed. (And no incorrect answer is ever listed.) Generally it is not prohibited that the same answer be listed multiple times; but that can be prohibited if you like.
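    As a toy illustration of the "effective listing" idea (enumerating all strings over a two-letter alphabet, rather than the theorems of any particular theory): list by length, then alphabetically, so that every string is guaranteed to show up after finitely many steps, and nothing is listed twice.

    ```python
    from itertools import count, product

    def enumerate_strings(alphabet):
        """Yield every finite string over `alphabet`, shortest first."""
        for n in count(0):                            # n = 0, 1, 2, ...
            for letters in product(alphabet, repeat=n):
                yield "".join(letters)

    gen = enumerate_strings("ab")
    print([next(gen) for _ in range(7)])
    # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
    ```

    For a theory that's enumerable but not decidable, a procedure like this guarantees that a "yes" answer will eventually turn up, but waiting never yields a "no".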
  4. In Sider's proof of completeness (section 2.9, starting p. 62), he uses the notion of a "maximally consistent" set of sentences. Although he defines "maximal" and "consistent" separately, his definitions entail that a set is maximally consistent iff it's deductively consistent, and no further sentences can be consistently added (that is, for any \(\phi\notin \Gamma\), \(\Gamma\cup \{\, \phi \,\}\) is deductively inconsistent).

    1. Prove that a set is maximally consistent iff it's deductively consistent, deductively closed, and deductively complete.
      Let our set be \(\Gamma\).
      Right-to-left: Suppose \(\Gamma\) is consistent, closed, and complete; consistency is then immediate. Now assume \(\phi\notin \Gamma\). By closure it follows that \(\Gamma\not\vdash \phi\). By completeness it follows that \(\Gamma\vdash \neg\phi\). By the rule of thinning for \(\vdash \), we get \(\Gamma,\phi\vdash \neg\phi\). So \(\Gamma,\phi\) is not consistent. This was all shown for arbitrary \(\phi\); hence \(\Gamma\) counts as "maximally consistent."
      Left-to-right, for consistent: obvious.
      Left-to-right, for closed: Suppose for reductio that \(\Gamma\vdash \phi\) but \(\phi\notin \Gamma\). Then by the assumption that \(\Gamma\) is maximally consistent it follows that \(\Gamma\cup \{\, \phi \,\}\) is inconsistent. But then \(\Gamma\vdash \neg\phi\). (Proof omitted.) Since we're assuming that \(\Gamma\) also proves \(\phi\), this violates \(\Gamma\)'s being (maximally) consistent. Hence our reductio supposition fails.
      Left-to-right, for complete: Suppose \(\Gamma\not\vdash \phi\). Then \(\Gamma,\neg\phi\) will be consistent (otherwise \(\Gamma\vdash \phi\), contrary to our supposition); but since \(\Gamma\) is assumed to be maximally consistent, \(\neg\phi\) must therefore \({}\in\Gamma\). Hence \(\Gamma\vdash \neg\phi\). Discharging our supposition, we can conclude that either \(\Gamma\vdash \phi\) or \(\Gamma\vdash \neg\phi\).
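      To discharge the step marked "Proof omitted" above: if \(\Gamma\cup \{\, \phi \,\}\) is inconsistent, then for some \(\psi\) we have \(\Gamma,\phi\vdash \psi\) and \(\Gamma,\phi\vdash \neg\psi\). Assuming our proof system supports the deduction theorem, it follows that \(\Gamma\vdash \phi\rightarrow\psi\) and \(\Gamma\vdash \phi\rightarrow\neg\psi\), whence by propositional reasoning \(\Gamma\vdash \neg\phi\).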
    2. Prove that if a maximally consistent set contains a formula \(\phi\wedge\psi\), then it also contains \(\phi\) and contains \(\psi\). By its being maximally consistent, it must contain either \(\phi\) or \(\neg\phi\), but by its being maximally consistent it can't contain \(\phi\wedge\psi\) and also contain \(\neg\phi\). So it contains \(\phi\). The argument for \(\psi\) is similar.
    3. Prove that if a maximally consistent set contains a formula \(\neg(\phi\wedge\psi)\), then it either contains \(\neg\phi\) or it contains \(\neg\psi\). Suppose for reductio that it contains neither. Then by maximal consistency it must contain \(\phi\) and also contain \(\psi\). Then since we showed in part 1 that a maximally consistent set is deductively closed, it contains \(\phi\wedge\psi\). But this violates the assumption that the set is consistent and contains \(\neg(\phi\wedge\psi)\). So our reductio supposition fails.
  5. The following is an alternative formulation of one of the important logical facts we've discussed in the past several weeks. What is the short name of that fact?

    If \(\Gamma\cup \{\, \neg\phi \,\}\) is satisfiable, then every finite subset of \(\Gamma\cup \{\, \neg\phi \,\}\) is deductively consistent.

    Soundness. If this isn't obvious, then contrapose the above claim, and reflect that if (some finite subset of) \(\Gamma\cup \{\, \neg\phi \,\}\) is inconsistent, then \(\Gamma\vdash \phi\).
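    Spelled out, the contraposed chain runs as follows (relying on monotonicity of \(\vdash \), the reductio rule, and Soundness in its familiar form):

    \[
    \text{some finite } \Delta\subseteq\Gamma\cup \{\, \neg\phi \,\} \text{ is inconsistent}
    \;\Rightarrow\; \Gamma\cup \{\, \neg\phi \,\} \text{ is inconsistent}
    \;\Rightarrow\; \Gamma\vdash \phi
    \;\Rightarrow\; \Gamma\vDash \phi
    \;\Rightarrow\; \Gamma\cup \{\, \neg\phi \,\} \text{ is unsatisfiable.}
    \]

    The third arrow is where Soundness proper does its work; the others are bookkeeping.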

  6. On the webnotes page on Peano arithmetic and non-standard models, I discuss a proof from Compactness for the result that there must be models of true arithmetic that have an additional element in the domain, coming "after" (in the \(<\)-ordering) all the numbers designated by \(0, S0, SS0, \dots\) The proof begins "Here's how we can persuade ourselves that there are such models." Make sure you understand this proof. (a) In order to understand the proof, did you have to work out or spell out some of the details more explicitly? If so, please tell me what they were, preferably by presenting a fuller version of the proof. No, in fact I couldn't have said it better myself.

    1. Without any further details about such models, we already now know enough about them to prove they are not isomorphic to the intended model of arithmetic. How can we prove this? For all we've said so far, the model with the extra element may have a countable domain, just like the intended model. So why are the models not isomorphic? The isomorphism \(f\) has to be such that \(f([\![\, S^{n+1}0 \,]\!]_{\mathscr{N}})=[\![\, S \,]\!]_{\mathscr{M}}(f([\![\, S^n0 \,]\!]_{\mathscr{N}}))\), and so on, which given our assumptions about the construction of the models \(\mathscr{N}\) and \(\mathscr{M}\), in the end means that \(f\) maps the interpretation of \(S^n0\) in the one to the same element in the other. This will be true for every element in the domain of the standard model \(\mathscr{N}\), which has no elements but the ones designated by \(0, S0, SS0, \dots\) So which element in the standard model should get mapped to the new object \(i\) in the domain of model \(\mathscr{M}\)? Since \(f\) has to be a function between the models, it can't be any of \([\![\, 0 \,]\!]_{\mathscr{N}}, [\![\, S0 \,]\!]_{\mathscr{N}}, \dots\), whose image under \(f\) we just discussed. But then there's nothing left for \(f\) to map to \(i\). So \(f\) can't be onto the domain of the new model; which it has to be if it's an isomorphism.

    Suggestion: begin by spelling out for yourself what it takes for two models to be isomorphic. You should already be able to do this, on the basis of your understanding of what a model for the language of arithmetic will look like, and your understanding from earlier in the term of what it is for two algebraic structures to be isomorphic. Explicit definitions for what it takes for models to be isomorphic are also present in some of the big list of readings for 5 Nov. You can look at those to see if you got it right, or got stuck.

    Once you know what it is for two models to be isomorphic, prove that any model with that extra element isn't isomorphic to the intended model of arithmetic.

  7. At the end of class on 5 Nov, we discussed an argument from Sider's section 4.5 (p. 105) to the effect that there can't be any sentence in first-order logic which says exactly that there are finitely many objects; so neither can there be its negation, which would say exactly that there are infinitely many objects. At the same time, we also discussed a sentence that entails that there are infinitely many objects. Why doesn't an analogue of Sider's argument show that latter sentence to be impossible? The negation of that sentence is compatible with there still being infinitely many objects; hence there is no need for the whole set to be unsatisfiable, and thus no conflict with Compactness. We also discussed some sentences that entail that there are finitely many objects: give an example, and explain why an analogue of Sider's argument doesn't show these sentences to be impossible. Any sentence that says there are exactly \(n\), or at most \(n\), objects is an example. This will be incompatible with (many) single other sentences in Sider's set; so again, there is no conflict with Compactness, which would only kick in if all the finite subsets were satisfiable.

  8. Fill in the gaps in this proof that the power set of a set \(S\) always has a higher cardinality than \(S\) does.

    Suppose for reductio that the power set of \(S\) is no larger than \(S\) is. By definition of what it is for one set to be no larger than another, there must then be an injection \(f\) from the power set of \(S\) to \(S\). (Spell out what this consequence amounts to.) For each \(X\) in the power set of \(S\), \(f(X)\) is some member of \(S\), and this will either be a member of \(X\) or it won't. If \(f(X)\in X\), call \(X\) "happy"; else call \(X\) "unhappy". No member of the power set of \(S\) will be both happy and unhappy (why?). Because every \(f(X)\) will either be \(\in\) or \(\notin \) \(X\).

    Let \(U\) be the image under \(f\) of all the unhappy members of the power set of \(S\); that is, the set \(\{\, u\in S \mid \exists X\in 2^{S}. f(X)=u \wedge f(X) \notin X \,\}\). \(U\) is a member of the power set of \(S\) (why?); so \(U\) must be happy or unhappy. The answer here is that \(U\) is a set of members of \(S\), hence a subset of \(S\), and so a member of \(S\)'s power set.

    Suppose \(U\) is happy; then \(f(U)\in U\) (why?). By definition of "happy". But by definition of \(U\), it includes only those \(f(X)\) where \(X\) is unhappy. This is the step which glosses over the fact that we're relying on \(f\) being an injection. Strictly, the definition of \(U\) says that it includes all those \(f(X)\) where \(X\) is unhappy. If \(f\) is not an injection, some of those might also be \(f(Y)\) for some distinct \(Y\) which is happy. If \(f\) is an injection, though, that possibility can be excluded, so we can infer that for every \(f(X)\) in \(U\) there's a unique \(X\) whose image it is under \(f\), and by definition of \(U\) one such (that is, the unique such) \(X\) will be unhappy. So then \(U\) is unhappy.

    Suppose \(U\) is unhappy; then \(f(U)\) must be in \(U\) (why?). By definition of \(U\), it includes \(f(X)\) for every unhappy \(X\). But if \(f(U)\in U\), then by definition of "happy", \(U\) is happy.

    So \(U\) is happy iff \(U\) is unhappy; but as we said, no member of the power set of \(S\) can be both happy and unhappy.

    So our supposition that there is an injection \(f\colon 2^{S} \to S\) fails. So the power set of \(S\) must be larger than \(S\) after all.

    What step(s) in this argument relied on \(f\)'s being an injection, and would fail if we weren't allowed to assume that? Spell that step in the proof out more explicitly.
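    As a sanity check on the finite case (an illustrative sketch, not part of the proof): for a small finite \(S\) we can exhaustively verify that no function from the power set of \(S\) into \(S\) is an injection.

    ```python
    from itertools import combinations, product

    def powerset(s):
        """All subsets of s, as frozensets."""
        s = sorted(s)
        return [frozenset(c) for r in range(len(s) + 1)
                for c in combinations(s, r)]

    S = [0, 1]
    P = powerset(S)  # 4 subsets: {}, {0}, {1}, {0, 1}

    # Enumerate every function f : P(S) -> S as a tuple of values, one per
    # subset. An injection would need len(P) distinct values, but S has
    # only len(S) members, so none of the 2**4 = 16 candidates qualifies.
    injections = sum(1 for values in product(S, repeat=len(P))
                     if len(set(values)) == len(P))
    print(injections)  # 0
    ```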

  9. Fill in the gaps in this proof that if a proof system is semantically sound and (strongly) complete, then Compactness holds.

    Soundness for a proof system entails that if some finite subset of a set of sentences \(\Gamma\) can prove a contradiction, then \(\Gamma\) has no model (why were we entitled to say "finite" here?). (Strong) completeness entails the converse: that if \(\Gamma\) has no model, then from some finite subset one can prove a contradiction (why were we entitled to say "finite" here?). In both cases, this is because proofs in the mainstream logics we're considering always use only a finite number of premises. Taken together, this gives us that \(\Gamma\) has no model iff from some finite subset one can prove a contradiction. Equivalently, \(\Gamma\) has a model iff from every finite subset one cannot prove a contradiction. Any finite subset that has a model will be one that cannot prove a contradiction (why?). Soundness, see problem #98. So if every finite subset of \(\Gamma\) has a model, then \(\Gamma\) has a model (why does this follow from what we've said?) Every finite subset of \(\Gamma\) has a model \(\Rightarrow\) from every finite subset of \(\Gamma\) one cannot prove a contradiction \(\Rightarrow\) \(\Gamma\) has a model. That is Compactness.

  10. Fill in the gaps in this proof that if any two at-most-countable models of a deductively closed theory \(T\) are isomorphic, then \(T\) is deductively complete.

    Suppose for reductio that \(T\) is not deductively complete. Then for some sentence \(\psi\), both \(T\cup \{\, \psi \,\}\) and \(T\cup \{\, \neg\psi \,\}\) are deductively consistent. (I specified that the theory is deductively closed so that this will follow even if completeness is defined in terms of \(\in\).) So these sets are satisfiable (why?). Semantic completeness for our proof system tells us that unsatisfiable sets are inconsistent; equivalently, consistent sets like these are satisfiable. By Loewenheim-Skolem, these sets must have at-most-countable models \(\mathscr{M}_1\) and \(\mathscr{M}_2\). These models must be isomorphic (why?). By assumption of the proof, any two at-most-countable models of \(T\) are isomorphic. So they must make true all the same sentences (why?). Isomorphic models always do this: isomorphism for models is defined in such a way as to make no difference to the truth-values of formulas. So \(T\) must be deductively complete after all (why?). No model can make true both \(\psi\) and \(\neg\psi\); so our reductio supposition fails.