Phil 735: Take-home Final

You must work on this exam alone. You are free to consult any printed or online resources (from this course or not), but not to receive more direct help from any other agents. Matt and Jim can answer clarificatory questions but can’t offer you substantive guidance or hints.

Your answers are due before Friday, Dec 15, at 4pm. At that time we will meet (at least some of us online; we’ll let you know whether everyone will be) to review the exam and discuss outstanding philosophical issues that emerged during the semester.

  1. Show that □(p ⊃ ◊p) is a theorem of S4, but that p ⊃ □◊p is not.
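
    If you want to experiment with candidate frames while working on this, here is a minimal Kripke-model evaluator in Python (a sketch of our own; the frame, valuation, and function names are all hypothetical, and remember that a single model can refute a formula but never establish theoremhood):

    ```python
    # Minimal Kripke-model evaluator (an illustrative sketch of our own).
    # Formulas are nested tuples; R maps each world to the set of worlds
    # it accesses -- for S4, make sure R is reflexive and transitive.

    def holds(formula, world, R, V):
        """Evaluate a formula at a world.  Formulas: an atom like 'p',
        or ('not', f), ('implies', f, g), ('box', f), ('dia', f)."""
        if isinstance(formula, str):        # atomic sentence
            return world in V[formula]
        op = formula[0]
        if op == 'not':
            return not holds(formula[1], world, R, V)
        if op == 'implies':
            return (not holds(formula[1], world, R, V)
                    or holds(formula[2], world, R, V))
        if op == 'box':                     # true at every accessible world
            return all(holds(formula[1], w, R, V) for w in R[world])
        if op == 'dia':                     # true at some accessible world
            return any(holds(formula[1], w, R, V) for w in R[world])
        raise ValueError(op)

    # A hypothetical two-world frame; R here is reflexive and transitive.
    R = {1: {1, 2}, 2: {2}}
    V = {'p': {1}}                          # p is true at world 1 only
    print(holds(('box', ('implies', 'p', ('dia', 'p'))), 1, R, V))
    ```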

  2. You are considering five mutually exclusive and exhaustive hypotheses: s, t, w, x, and y. At first you accept that s ∨ t is true, along with all of its logical consequences, including one that we’ll call φ: s ∨ t ∨ w. Call the entire set of sentences you start out accepting 𝓚. But then, using the AGM framework, you wish to withdraw/contract the set of sentences you accept so as to no longer accept φ. Which of the following descriptions gives all of the “remainder sets,” that is, the maximal subsets of 𝓚 that don’t entail φ?

    1. the single set Cn(s ∨ t ∨ x ∨ y)
    2. the two sets Cn(s ∨ t ∨ x) and Cn(s ∨ t ∨ y)
    3. the two sets Cn(s ∨ x ∨ y) and Cn(t ∨ x ∨ y)
    4. the four sets Cn(s ∨ x), Cn(s ∨ y), Cn(t ∨ x), and Cn(t ∨ y)
    5. the single set Cn(x ∨ y)

    Justify your answer.
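
    Because the five hypotheses are mutually exclusive and exhaustive, each deductively closed theory Cn(ψ) can be represented by the set of hypotheses it leaves open, and entailment becomes a subset test. The Python helper below (our own illustrative sketch, with made-up names) lets you test each option’s sets mechanically; it doesn’t substitute for the AGM justification you’re asked to give.

    ```python
    from itertools import chain, combinations

    # Represent each theory Cn(ψ) by the set of hypotheses ("worlds")
    # it leaves open.  An illustrative helper of our own, not the answer.

    def power_set(s):
        return (set(c) for c in chain.from_iterable(
            combinations(sorted(s), r) for r in range(len(s) + 1)))

    def entails(theory_worlds, sentence_worlds):
        """A theory entails a sentence iff every world the theory
        leaves open is a world where the sentence is true."""
        return theory_worlds <= sentence_worlds

    K_WORLDS   = {'s', 't'}       # worlds left open by 𝓚 = Cn(s ∨ t)
    PHI_WORLDS = {'s', 't', 'w'}  # worlds where φ = s ∨ t ∨ w is true

    def is_remainder(candidate):
        """candidate: the worlds a subset-theory of 𝓚 leaves open (so it
        must include K_WORLDS).  It is a remainder iff it fails to entail
        φ while every strictly stronger subset-theory of 𝓚 entails φ
        (a stronger theory leaves open strictly fewer worlds)."""
        if not candidate >= K_WORLDS or entails(candidate, PHI_WORLDS):
            return False
        return all(entails(smaller, PHI_WORLDS)
                   for smaller in power_set(candidate)
                   if smaller >= K_WORLDS and smaller < candidate)

    # e.g., test the theory Cn(s ∨ t ∨ x ∨ y) from option 1:
    print(is_remainder({'s', 't', 'x', 'y'}))
    ```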

  3. You have a coin. You’re certain that it’s either (i) fair, (ii) biased with a ¹/₄ chance of coming up heads, or (iii) biased with a ³/₄ chance of coming up heads. At t0 you divide your credence equally among those three chance hypotheses, and you satisfy the Principal Principle. Then you start flipping the coin. Between t0 and t1 you flip it once and it comes up heads, between t1 and t2 it comes up tails, and then between t2 and t3 it comes up heads again.

    1. For each of the times t1, t2, and t3, calculate your updated credences in each of the three chance hypotheses, and your overall credence at each time that the next flip will come up heads.
    2. The first heads result you witness (between t0 and t1) changes your overall credence in “next will be heads” by a certain amount. The second heads result you witness (between t2 and t3) also changes your overall credence in “next will be heads” by a certain amount. How do these two amounts compare? (If you’ve been working in fractions, it may help to convert to decimals here.)
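
    A generic update sketch for problems of this shape, in Python (our own illustrative code, using exact fractions; it doesn’t replace showing your work):

    ```python
    from fractions import Fraction

    # Conditionalization over the three chance hypotheses, with
    # likelihoods supplied by the Principal Principle.  Illustrative only.

    CHANCES = {'fair': Fraction(1, 2),
               'low':  Fraction(1, 4),    # biased: 1/4 chance of heads
               'high': Fraction(3, 4)}    # biased: 3/4 chance of heads

    def update(cr, outcome):
        """Conditionalize the credences `cr` on one flip, 'H' or 'T'."""
        like = {h: (CHANCES[h] if outcome == 'H' else 1 - CHANCES[h])
                for h in cr}
        total = sum(cr[h] * like[h] for h in cr)
        return {h: cr[h] * like[h] / total for h in cr}

    def prob_next_heads(cr):
        """Overall credence that the next flip lands heads."""
        return sum(cr[h] * CHANCES[h] for h in cr)

    cr = {h: Fraction(1, 3) for h in CHANCES}   # the t0 credences
    print(prob_next_heads(cr))                  # sanity check: 1/2 at t0
    # then, for each observed flip:  cr = update(cr, 'H')  or  update(cr, 'T')
    ```
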
  4. For this problem, assume the standard Bayesian framework. You become certain that while you, Matt, and Jim were sleeping last night, Ram chose exactly one of you at random, snuck into that person’s home, and injected them with an undetectable poison. You have no further relevant information. You’re certain that, though Ram is a psychopath, he never lies. You run into Ram at time t.

    1. At t, you ask Ram whether Matt was poisoned, and he says No. What is your new credence that you were poisoned?

    2. At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just tell me that Matt wasn’t poisoned.” Ram says, “OK — Matt wasn’t poisoned.” What is your new credence that you were poisoned?

    3. At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just tell me that Matt wasn’t poisoned.” Ram says, “OK — Jim wasn’t poisoned.” What is your new credence that you were poisoned?

    4. At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just flip a fair coin (in a way that I cannot detect) to determine which person you will tell me wasn’t poisoned.” Ram says, “OK — Matt wasn’t poisoned.” What is your new credence that you were poisoned?

    5. At t, instead of speaking to Ram, you reason to yourself: “I know that at least one of Matt and Jim wasn’t poisoned. I hereby introduce the name ‘Lucky’ to refer to whichever of them wasn’t poisoned; if neither of them was poisoned, let ‘Lucky’ refer to Jim.” You thereby come to know that Lucky wasn’t poisoned. What is your new credence that you were poisoned?
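
    If you want to sanity-check an analysis of any of these protocols, a Monte Carlo harness like the one below can help (entirely our own sketch; the `announce` stub is a hypothetical placeholder for whichever answering rule you’re modeling, and a simulation is no substitute for the Bayesian calculation):

    ```python
    import random

    # Monte Carlo harness for the poisoning protocols (an illustrative
    # sketch of our own; fill in `announce` before calling `estimate`).

    PEOPLE = ['you', 'Matt', 'Jim']

    def announce(poisoned):
        """Placeholder: return the statement Ram makes, given who was
        poisoned -- e.g. "Matt wasn't poisoned".  Implement the
        question-and-answer rule from the sub-part you're checking."""
        raise NotImplementedError

    def estimate(target_statement, trials=100_000):
        """Estimate cr(you were poisoned | Ram says target_statement)."""
        said = said_and_you = 0
        for _ in range(trials):
            poisoned = random.choice(PEOPLE)    # chosen uniformly at random
            if announce(poisoned) == target_statement:
                said += 1
                said_and_you += (poisoned == 'you')
        return said_and_you / said
    ```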

  5. Let Old(.) be a probability distribution with the following values:

    Old( E ∧  F ∧  G) = 1/36
    Old( E ∧  F ∧ ¬G) = 2/36
    Old( E ∧ ¬F ∧  G) = 3/36
    Old( E ∧ ¬F ∧ ¬G) = 4/36
    Old(¬E ∧  F ∧  G) = 5/36
    Old(¬E ∧  F ∧ ¬G) = 6/36
    Old(¬E ∧ ¬F ∧  G) = 7/36
    Old(¬E ∧ ¬F ∧ ¬G) = 8/36

    1. If New(.) is the result of Jeffrey conditionalizing on the partition {E, ¬E}, such that the posterior New(E) = 30/56, what are the values of New(.) for the eight propositions specified above?

    2. What is the E:¬E Bayes Factor of the update from Old(.) to New(.)?
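
    A generic Jeffrey-conditionalization sketch in Python (our own illustrative code; apply it to the Old(.) table above and the target New(E)):

    ```python
    # Jeffrey conditionalization on the partition {E, ¬E}: rescale the
    # E-states to hit the target New(E) and the ¬E-states to hit New(¬E).
    # An illustrative sketch of our own; feed it the Old(.) table above,
    # with each state keyed by a tuple of truth values for (E, F, G).

    def jeffrey(old, new_E):
        """old: dict mapping each (E, F, G) truth-value tuple to its
        Old(.) probability.  Returns the New(.) dict."""
        old_E = sum(p for state, p in old.items() if state[0])   # Old(E)
        return {state: p * (new_E / old_E if state[0]
                            else (1 - new_E) / (1 - old_E))
                for state, p in old.items()}

    def bayes_factor(old, new):
        """The E:¬E Bayes factor: (New(E)/New(¬E)) ÷ (Old(E)/Old(¬E))."""
        old_E = sum(p for state, p in old.items() if state[0])
        new_E = sum(p for state, p in new.items() if state[0])
        return (new_E / (1 - new_E)) / (old_E / (1 - old_E))
    ```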

  6. Let Old(.) and New(.) be two probability distributions, and let E be a proposition in your algebra such that 0 < Old(E) < 1. Prove that the following claims entail each other.

    1. For all propositions H in your algebra, New(H) = Old(H|E)⋅New(E) + Old(H|¬E)⋅New(¬E).
    2. For all propositions H in your algebra, New(H|E) = Old(H|E) and New(H|¬E) = Old(H|¬E).
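
    This is a proof exercise, but a numerical spot-check can keep you honest about what each claim says. The sketch below (our own illustration on a made-up three-atom space) builds a New(.) that satisfies claim 2 by construction and then verifies claim 1 for every proposition in the algebra; checking one example in one direction is, of course, not the proof.

    ```python
    from fractions import Fraction
    from itertools import combinations

    # Toy check: build New by keeping the credences conditional on E and
    # on ¬E rigid (claim 2), then verify claim 1 for every proposition H.

    ATOMS = ['w1', 'w2', 'w3']
    OLD = {'w1': Fraction(1, 2), 'w2': Fraction(1, 3), 'w3': Fraction(1, 6)}
    E = {'w1', 'w2'}               # a proposition with 0 < Old(E) < 1
    NEW_E = Fraction(3, 4)         # an arbitrarily chosen new value for E

    def pr(dist, prop):
        return sum(dist[w] for w in prop)

    old_E = pr(OLD, E)
    NEW = {w: OLD[w] * (NEW_E / old_E if w in E else (1 - NEW_E) / (1 - old_E))
           for w in ATOMS}         # rigid update: claim 2 holds by construction

    # every proposition H is a set of atoms; check claim 1 for each
    for r in range(len(ATOMS) + 1):
        for H in map(set, combinations(ATOMS, r)):
            lhs = pr(NEW, H)
            rhs = ((pr(OLD, H & E) / old_E) * NEW_E
                   + (pr(OLD, H - E) / (1 - old_E)) * (1 - NEW_E))
            assert lhs == rhs
    print("claim 1 holds for all 8 propositions in this toy model")
    ```
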
  7. Asked to justify his decision to bring a life jacket on our department hike, Ram says, “I’d rather have it and not need it than need it and not have it.”

    1. Supposing the relevant states of the world are N (needing a life jacket) and ¬N, and the relevant acts are B (bringing life jacket) and ¬B, what two outcomes is Ram referring to, and how is he claiming their utilities compare for him?
    2. Explain why, on Savage’s utility theory, this fact about Ram’s utilities does not necessarily make his decision rationally permissible.

  8. Suppose an agent assigns cr(P) = ¹/₃ and sets her preferences according to Savage-style expected utilities. Explain how she might nevertheless prefer a guaranteed $10 to a gamble that pays $40 on P and nothing otherwise, if dollars have declining marginal utility for her.
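
    For a concrete illustration (a hypothetical utility function of our own choosing; any strictly concave u over dollars makes the same point):

    ```python
    import math

    # One hypothetical concave utility function: u($x) = sqrt(x).
    # Compare the Savage expected utilities of the sure $10 and the
    # gamble paying $40 on P (cr(P) = 1/3), $0 otherwise.

    def u(dollars):
        return math.sqrt(dollars)

    eu_sure   = u(10)                         # ≈ 3.162
    eu_gamble = (1/3) * u(40) + (2/3) * u(0)  # ≈ 2.108
    print(eu_sure > eu_gamble)                # True: the sure $10 wins
    ```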

  9. Imagine our algebra contains the propositions J, K, and L, and we want a “representor” that gives these three propositions credences such that:

    0.2 ≤ cred(J) ≤ 0.5
    0.4 ≤ cred(K) ≤ 0.7
    0.6 ≤ cred(L) ≤ 0.8

    and such that, for each of these three constraints, each endpoint value is assigned to the relevant proposition by at least one distribution in the representor.

    1. Specify a representor associated with those constraints in which every distribution assigns cred(J) < cred(K).
    2. Specify a representor associated with those constraints in which at least one distribution assigns cred(J) < cred(K) and at least one distribution assigns cred(J) > cred(K).
    3. What is the minimum number of distributions a representor could contain while satisfying the conditions in part 2? Explain why you need that many.
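
    If you write your representor down as a finite set of distributions, a checker like this can confirm the interval and endpoint conditions (our own illustrative sketch; it represents each distribution just by its credences in J, K, and L, ignoring whatever further coherence constraints the full algebra imposes):

    ```python
    # Illustrative checker of our own: each member of a finite representor
    # is given just by its credences in J, K, and L.

    BOUNDS = {'J': (0.2, 0.5), 'K': (0.4, 0.7), 'L': (0.6, 0.8)}

    def satisfies_constraints(representor):
        """representor: a list of dicts like {'J': .2, 'K': .7, 'L': .6}.
        Checks (i) every member respects the three intervals, and (ii)
        each endpoint of each interval is hit by at least one member."""
        for cr in representor:
            for prop, (lo, hi) in BOUNDS.items():
                if not lo <= cr[prop] <= hi:
                    return False
        for prop, (lo, hi) in BOUNDS.items():
            if not any(cr[prop] == lo for cr in representor):
                return False
            if not any(cr[prop] == hi for cr in representor):
                return False
        return True

    # e.g.:  satisfies_constraints([{'J': 0.2, 'K': 0.4, 'L': 0.6}, ...])
    ```
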
  10. Here are some of Mary’s credences:

    p(A) = .6
    p(E) = .4
    p(¬E) = .6
    p(A|E) = .8
    p(A|¬E) = .3

    Mary obeys the Ratio Formula, but she does not obey all of the probability axioms. Make a synchronic Dutch Book against her. Explain why the bets you describe are ones she’d regard as reasonable to sell or buy. You can assume she values each additional dollar the same, and that she values only money.
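
    In pricing your bets, a helper like this may be useful (an illustrative sketch of our own; it just encodes the usual rule that an agent who values only money, with constant marginal utility, is indifferent at a bet’s expected payout by her lights):

    ```python
    # Fair betting prices by Mary's lights (illustrative sketch of our
    # own).  A bet paying $stake if X, and $0 otherwise, is worth
    # p(X) * stake to her; a conditional bet on A given E is priced
    # using p(A|E) and called off (price refunded) if E turns out false.

    def fair_price(credence, stake):
        """The price at which she's indifferent between buying and
        selling a bet that pays $stake if the proposition is true."""
        return credence * stake

    # Mary's credences, as given:
    p = {'A': .6, 'E': .4, 'not-E': .6, 'A|E': .8, 'A|not-E': .3}

    # e.g., she'll sell for $0.60 a bet that pays $1 if A:
    print(fair_price(p['A'], 1))
    ```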

  11. Throughout this problem, use a scoring rule S that is similar to the Brier score, except without the square: it calculates the inaccuracy of a credence in a proposition at a world as the absolute value of the linear distance between the credence’s value and the truth value (0 or 1) of the proposition at that world.

    Suppose that my p(A)=.51 and my p(¬A)=.49, and that those are the only two propositions that my credences are defined over.

    1. If A is true, what is the inaccuracy of my actual credences?
    2. If A is false, what is the inaccuracy of my actual credences?
    3. What is my expected inaccuracy for my actual credences?
    4. What, in light of my current credences, is the expected inaccuracy of a credence distribution where p(A)=.6 and p(¬A)=.4?
    5. If I am trying to minimize my expected inaccuracy and remain probabilistically coherent, what does S tell me to do?
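
    The arithmetic in parts 1–4 is easy to check mechanically; here is a sketch (our own illustrative code, implementing the linear score S defined above):

    ```python
    # Linear (absolute-distance) score S, as defined above: the
    # inaccuracy of credence c in a proposition with truth value v
    # (1 for true, 0 for false) is |c - v|; a distribution's inaccuracy
    # at a world sums over the propositions it's defined on.

    def inaccuracy(credences, world):
        """credences: dict prop -> credence; world: dict prop -> 0 or 1."""
        return sum(abs(c - world[prop]) for prop, c in credences.items())

    def expected_inaccuracy(assessed, worlds):
        """Expected inaccuracy of the distribution `assessed`, where
        worlds is a list of (my probability, truth-value dict) pairs."""
        return sum(pr * inaccuracy(assessed, tv) for pr, tv in worlds)

    mine = {'A': .51, 'notA': .49}
    worlds = [(.51, {'A': 1, 'notA': 0}),     # A true
              (.49, {'A': 0, 'notA': 1})]     # A false
    # e.g., inaccuracy(mine, worlds[0][1]) computes part 1,
    # expected_inaccuracy(mine, worlds) part 3, and
    # expected_inaccuracy({'A': .6, 'notA': .4}, worlds) part 4.
    ```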