You must work on this exam alone. You are free to consult any printed or online resources (from this course or not), but not to receive more direct help from any other agents. Matt and Jim can answer clarificatory questions but can’t offer you substantive guidance or hints.
Your answers are due before Friday, Dec 15 at 4pm. At that time we will meet (some of us will join online; we’ll let you know if the whole meeting will be online) to review the exam and discuss outstanding philosophical issues that emerged during the semester.
Show that this is a theorem of S4: □(p ⊃ ◊p), but that this is not: p ⊃ □◊p.
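For the second half, a two-world countermodel is enough (the first half needs a derivation; reflexivity of the S4 accessibility relation is what makes it go). Below is a minimal sketch of a Kripke-model evaluator, with formulas encoded as nested tuples purely for illustration, that verifies such a countermodel:

```python
# A minimal Kripke-model evaluator (a sketch, not part of the exam). The frame
# below is reflexive and transitive (so it is an S4 model), and p ⊃ □◊p fails
# at world 1 even though □(p ⊃ ◊p) holds there.

def holds(fml, w, R, V):
    """Evaluate a formula at world w. Formulas are tuples:
    ('p',), ('not', f), ('implies', f, g), ('box', f), ('dia', f)."""
    op = fml[0]
    if op == 'p':
        return w in V['p']
    if op == 'not':
        return not holds(fml[1], w, R, V)
    if op == 'implies':
        return (not holds(fml[1], w, R, V)) or holds(fml[2], w, R, V)
    if op == 'box':
        return all(holds(fml[1], v, R, V) for v in R[w])
    if op == 'dia':
        return any(holds(fml[1], v, R, V) for v in R[w])

# Two worlds; R is reflexive and transitive but not symmetric: 1 sees 2, not back.
R = {1: {1, 2}, 2: {2}}
V = {'p': {1}}          # p true at world 1 only

box_p_implies_dia_p = ('box', ('implies', ('p',), ('dia', ('p',))))
p_implies_box_dia_p = ('implies', ('p',), ('box', ('dia', ('p',))))

print(holds(box_p_implies_dia_p, 1, R, V))   # True here, as S4 requires
print(holds(p_implies_box_dia_p, 1, R, V))   # False: a countermodel
```

Checking one model does not establish theoremhood, of course; the code only confirms that the second formula has an S4 countermodel.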
You are considering five different mutually exclusive and exhaustive hypotheses: s, t, w, x, and y. At first you accept that s ∨ t is true, and all the logical consequences of this, including this one that we’ll call φ: s ∨ t ∨ w. Call this entire set of sentences you start out accepting 𝓚. But then, using the AGM framework, you wish to withdraw/contract the set of sentences you accept so as to no longer accept φ. Which of the following descriptions gives all of the “remainder sets” that are maximal subsets of 𝓚 that don’t entail φ?
(a) Cn(s ∨ t ∨ x ∨ y)
(b) Cn(s ∨ t ∨ x) and Cn(s ∨ t ∨ y)
(c) Cn(s ∨ x ∨ y) and Cn(t ∨ x ∨ y)
(d) Cn(s ∨ x), Cn(s ∨ y), Cn(t ∨ x), and Cn(t ∨ y)
(e) Cn(x ∨ y)
Justify your answer.
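One way to sanity-check your answer is by brute force, under the usual finite modeling assumptions: identify each sentence (up to logical equivalence) with the set of hypotheses at which it is true, so 𝓚 = Cn(s ∨ t) comes out as eight propositions, and a set of them entails φ exactly when the intersection of its members lies inside φ:

```python
# A brute-force sketch: propositions are frozensets of the five hypotheses.
from itertools import combinations

WORLDS = frozenset('stwxy')

def entails(gamma, phi):
    """A set of propositions entails phi iff the intersection of its members
    (the whole space, for the empty set) is a subset of phi."""
    inter = set(WORLDS)
    for p in gamma:
        inter &= p
    return inter <= phi

K = [frozenset({'s', 't'}) | frozenset(extra)        # everything entailed by s v t
     for r in range(4) for extra in combinations('wxy', r)]
phi = frozenset('stw')                               # s v t v w

candidates = [g for r in range(len(K) + 1)
              for g in map(set, combinations(K, r))
              if not entails(g, phi)]
remainders = [g for g in candidates
              if not any(g < h for h in candidates)]  # keep only maximal ones

for rem in remainders:
    print(sorted(''.join(sorted(p)) for p in rem))
```

Each printed remainder lists its members as sets of hypotheses; translate them back into Cn(...) form to compare with the options above.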
You have a coin. You’re certain that it’s either (i) fair, (ii) biased with a 1/4 chance of coming up heads, or (iii) biased with a 3/4 chance of coming up heads. At t0 you divide your credence equally among those three chance hypotheses, and you satisfy the Principal Principle. Then you start flipping the coin. Between t0 and t1 you flip it once and it comes up heads, between t1 and t2 it comes up tails, and then between t2 and t3 it comes up heads again.
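As a sketch of the machinery at work here: the Principal Principle sets your credence in heads, conditional on each chance hypothesis, equal to the chance that hypothesis posits, and conditionalizing on each flip then updates your credence in the hypotheses themselves. The code below traces those credences at t1 through t3, assuming that is the computation being asked for:

```python
# Flip-by-flip conditionalization on the observed sequence H, T, H, with
# likelihoods fixed by the Principal Principle. Exact fractions throughout.
from fractions import Fraction as F

biases = {'fair': F(1, 2), 'quarter': F(1, 4), 'three_quarter': F(3, 4)}
cred = {h: F(1, 3) for h in biases}              # credences at t0

for i, flip in enumerate(['H', 'T', 'H'], start=1):
    likelihood = {h: (b if flip == 'H' else 1 - b) for h, b in biases.items()}
    total = sum(cred[h] * likelihood[h] for h in biases)
    cred = {h: cred[h] * likelihood[h] / total for h in biases}
    print(f"t{i}:", {h: str(c) for h, c in cred.items()})
```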
For this problem, assume the standard Bayesian framework. You become certain that while you, Matt, and Jim were sleeping last night, Ram chose exactly one of you at random, snuck into that person’s home, and injected them with an undetectable poison. You have no further relevant information. You’re certain that, though Ram is a psychopath, he never lies. You run into Ram at time t.
At t, you ask Ram whether Matt was poisoned, and he says No. What is your new credence that you were poisoned?
At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just tell me that Matt wasn’t poisoned.” Ram says, “OK — Matt wasn’t poisoned.” What is your new credence that you were poisoned?
At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just tell me that Matt wasn’t poisoned.” Ram says, “OK — Jim wasn’t poisoned.” What is your new credence that you were poisoned?
At t, you instead say to Ram, “Look, I already know that at least one of Matt and Jim wasn’t poisoned, since you only poisoned one of us. Can you please tell me which one wasn’t poisoned? If neither of them was poisoned, just flip a fair coin (in a way that I cannot detect) to determine which person you will tell me wasn’t poisoned.” Ram says, “OK — Matt wasn’t poisoned.” What is your new credence that you were poisoned?
At t, instead of speaking to Ram, you reason to yourself: “I know that at least one of Matt and Jim wasn’t poisoned. I hereby introduce the name ‘Lucky’ to refer to whichever of them wasn’t poisoned; if neither of them was poisoned, let ‘Lucky’ refer to Jim.” You thereby come to know that Lucky wasn’t poisoned. What is your new credence that you were poisoned?
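Each of the first four scenarios fixes a different protocol governing what Ram says, and the differences matter. Here is a sketch for checking your answers, with each protocol encoded as a map from the actual victim to Ram’s possible announcements and their probabilities (the encoding is an illustrative choice, not part of the problem):

```python
# Enumerate who was poisoned (probability 1/3 each), model what Ram says under
# each protocol, and conditionalize on the announcement actually heard.
from fractions import Fraction as F

PRIOR = F(1, 3)

def posterior_me(protocol, heard):
    """P(I was poisoned | Ram's announcement), given the announcement protocol."""
    joint = {}
    for victim in ['me', 'Matt', 'Jim']:
        for announcement, p in protocol[victim]:
            key = (victim, announcement)
            joint[key] = joint.get(key, 0) + PRIOR * p
    total = sum(p for (v, a), p in joint.items() if a == heard)
    return joint.get(('me', heard), F(0)) / total

# You asked point-blank whether Matt was poisoned; Ram answered truthfully.
direct = {'me': [('Matt not poisoned', 1)],
          'Matt': [('Matt poisoned', 1)],
          'Jim': [('Matt not poisoned', 1)]}
# "Name one of Matt/Jim who wasn't poisoned; if neither was, say Matt."
say_matt_default = {'me': [('Matt not poisoned', 1)],
                    'Matt': [('Jim not poisoned', 1)],
                    'Jim': [('Matt not poisoned', 1)]}
# "If neither was poisoned, flip a fair coin to pick whom to name."
coin_flip = {'me': [('Matt not poisoned', F(1, 2)), ('Jim not poisoned', F(1, 2))],
             'Matt': [('Jim not poisoned', 1)],
             'Jim': [('Matt not poisoned', 1)]}

print(posterior_me(direct, 'Matt not poisoned'))            # direct question
print(posterior_me(say_matt_default, 'Matt not poisoned'))  # default protocol, hears "Matt"
print(posterior_me(say_matt_default, 'Jim not poisoned'))   # default protocol, hears "Jim"
print(posterior_me(coin_flip, 'Matt not poisoned'))         # coin-flip protocol
```

The fifth scenario involves no announcement from Ram at all, which is worth bearing in mind when you compare it with the others.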
Let Old(.) be a probability distribution with the following values:
Old( E ∧ F ∧ G) = 1/36
Old( E ∧ F ∧ ¬G) = 2/36
Old( E ∧ ¬F ∧ G) = 3/36
Old( E ∧ ¬F ∧ ¬G) = 4/36
Old(¬E ∧ F ∧ G) = 5/36
Old(¬E ∧ F ∧ ¬G) = 6/36
Old(¬E ∧ ¬F ∧ G) = 7/36
Old(¬E ∧ ¬F ∧ ¬G) = 8/36
If New(.) is the result of Jeffrey conditionalizing on the partition {E, ¬E}, such that the posterior New(E) = 30/56, what are the values of New(.) for the eight propositions specified above?
What is the E:¬E Bayes Factor of the update from Old(.) to New(.)?
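A sketch of both computations, in exact fractions: Jeffrey conditionalization holds the conditional probabilities within E and within ¬E rigid and rescales each cell to the new weight of its side of the partition, and the Bayes factor is the posterior odds divided by the prior odds:

```python
# Jeffrey conditionalization on {E, ¬E}, followed by the E:¬E Bayes factor.
from fractions import Fraction as F

old = {('E', 'F', 'G'): F(1, 36),   ('E', 'F', '¬G'): F(2, 36),
       ('E', '¬F', 'G'): F(3, 36),  ('E', '¬F', '¬G'): F(4, 36),
       ('¬E', 'F', 'G'): F(5, 36),  ('¬E', 'F', '¬G'): F(6, 36),
       ('¬E', '¬F', 'G'): F(7, 36), ('¬E', '¬F', '¬G'): F(8, 36)}

old_E = sum(p for cell, p in old.items() if cell[0] == 'E')   # 10/36
new_E = F(30, 56)

# Rescale each cell so conditional probabilities within E and ¬E are unchanged.
new = {cell: (p / old_E * new_E if cell[0] == 'E'
              else p / (1 - old_E) * (1 - new_E))
       for cell, p in old.items()}
for cell, p in new.items():
    print(cell, p)

# Bayes factor of the update, E against ¬E: posterior odds over prior odds.
bayes_factor = (new_E / (1 - new_E)) / (old_E / (1 - old_E))
print(bayes_factor)
```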
Let Old(.) and New(.) be two probability distributions, and let E be a proposition in your algebra such that 0 < Old(E) < 1. Prove that the following two claims entail each other:
(a) For every proposition H in your algebra, New(H) = Old(H|E)⋅New(E) + Old(H|¬E)⋅New(¬E).
(b) For every proposition H in your algebra, New(H|E) = Old(H|E) and New(H|¬E) = Old(H|¬E).
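This is not a proof, but a concrete finite example can guide one direction of it: the sketch below builds New from Old by imposing claim (b) on a hypothetical four-atom algebra (the atoms, values, and choice of E are all assumptions for illustration) and confirms that claim (a) then holds for every proposition:

```python
# Impose rigidity (claim (b)) on a toy algebra, then verify the
# weighted-average formula (claim (a)) for all 16 propositions.
from fractions import Fraction as F
from itertools import combinations

atoms = ['a', 'b', 'c', 'd']
old = dict(zip(atoms, [F(1, 10), F(2, 10), F(3, 10), F(4, 10)]))
E = {'a', 'b'}                                   # a proposition with 0 < Old(E) < 1
new_E = F(7, 10)                                 # an arbitrary new weight for E

old_E = sum(old[w] for w in E)
new = {w: (old[w] / old_E * new_E if w in E
           else old[w] / (1 - old_E) * (1 - new_E)) for w in atoms}

def pr(dist, prop):
    return sum(dist[w] for w in prop)

for r in range(len(atoms) + 1):                  # every H in the algebra
    for H in map(set, combinations(atoms, r)):
        lhs = pr(new, H)
        rhs = (pr(old, H & E) / old_E * new_E
               + pr(old, H - E) / (1 - old_E) * (1 - new_E))
        assert lhs == rhs
print("claim (a) holds for all 16 propositions")
```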
Asked to justify his decision to bring a life jacket on our department hike, Ram says, “I’d rather have it and not need it than need it and not have it.”
If the relevant states are N (needing a life jacket) and ¬N, and the relevant acts are B (bringing the life jacket) and ¬B, what two outcomes is Ram referring to, and how is he claiming their utilities compare for him?

Suppose an agent assigns cr(P) = 1/3 and sets her preferences according to Savage-style expected utilities. Explain how she might nevertheless prefer a guaranteed $10 to a gamble that pays $40 on P and nothing otherwise, if dollars have declining marginal utility for her.
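A sketch of how that can happen, using u($x) = √x as a stand-in concave utility function (the particular function is an assumption, not given in the problem):

```python
# Expected dollars favor the gamble; expected utility favors the sure thing.
from math import sqrt

cr_P = 1 / 3
u = sqrt                      # declining marginal utility: concave in dollars

ev_gamble = cr_P * 40                            # ~13.33 expected dollars > 10
eu_gamble = cr_P * u(40) + (1 - cr_P) * u(0)     # ~2.11
eu_sure = u(10)                                  # ~3.16

print(ev_gamble, eu_gamble, eu_sure)
print("prefers the sure $10" if eu_sure > eu_gamble else "prefers the gamble")
```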
Imagine our algebra contains the propositions J, K, and L, and we want a “representor” that gives these three propositions credences such that:
0.2 ≤ cred(J) ≤ 0.5
0.4 ≤ cred(K) ≤ 0.7
0.6 ≤ cred(L) ≤ 0.8
and such that, for each of these three constraints, there is at least one distribution in the representor that assigns each endpoint value to the relevant proposition.
(a) Specify such a representor in which every probability distribution assigns p(J) < p(K).
(b) Specify such a representor in which at least one probability distribution assigns p(J) < p(K) and at least one probability distribution assigns p(J) > p(K).
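Here is a sketch of how one might check a candidate representor against these constraints. To guarantee that each member is a coherent distribution, the sketch makes J, K, and L probabilistically independent within each member, so a triple (p(J), p(K), p(L)) settles a whole joint distribution; the particular triples are assumptions for illustration:

```python
# Check a candidate representor: bounds respected, endpoints attained,
# and the p(J)-versus-p(K) comparisons queried by the sub-questions.
candidate = [(0.2, 0.4, 0.6),   # hits the lower endpoints
             (0.5, 0.7, 0.8),   # hits the upper endpoints
             (0.3, 0.5, 0.7)]

bounds = {'J': (0.2, 0.5), 'K': (0.4, 0.7), 'L': (0.6, 0.8)}

for dist in candidate:
    assert all(lo <= p <= hi for p, (lo, hi) in zip(dist, bounds.values()))
for i, (lo, hi) in enumerate(bounds.values()):
    assert any(d[i] == lo for d in candidate) and any(d[i] == hi for d in candidate)

print("every member has p(J) < p(K):", all(d[0] < d[1] for d in candidate))
print("some member has p(J) > p(K):", any(d[0] > d[1] for d in candidate))
```

This particular candidate fits condition (a) but not (b); adding a triple such as (0.5, 0.4, 0.7) would address (b).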
Here are some of Mary’s credences:
p(A) = .6
p(E) = .4
p(~E) = .6
p(A|E) = .8
p(A|~E) = .3
Mary obeys the Ratio Formula, but she does not obey all of the probability axioms. Make a synchronic Dutch Book against her. Explain why the bets you describe are ones she’d regard as reasonable to sell/buy. You can assume she values each additional dollar the same, and that she values only money.
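For comparison once you’ve built your own book, here is one package of bets (the stakes and prices are one choice among many), with each bet priced as fair by Mary’s announced credences, via the Ratio Formula for the conditional bets, and a guaranteed net loss in every world:

```python
# Verify a synchronic Dutch Book against Mary's credences:
# p(A)=.6, p(E)=.4, p(A|E)=.8, p(A|~E)=.3.
worlds = ['A&E', '~A&E', 'A&~E', '~A&~E']

def bet(wins_in, offs, stake, price):
    """Mary BUYS a bet: pays `price`, receives `stake` if the bet wins; in any
    world in `offs` the bet is called off and the price refunded.
    Returns Mary's net payoff in each world."""
    return {w: (0 if w in offs else (stake if w in wins_in else 0) - price)
            for w in worlds}

def sell(b):
    """Selling a bet is buying it with the signs flipped."""
    return {w: -x for w, x in b.items()}

package = [
    bet({'A&E', 'A&~E'}, set(), 1, 0.60),            # buys $1 on A at p(A) = .6
    sell(bet({'A&E'}, {'A&~E', '~A&~E'}, 1, 0.80)),  # sells $1 on A given E at p(A|E) = .8
    sell(bet({'A&~E'}, {'A&E', '~A&E'}, 1, 0.30)),   # sells $1 on A given ~E at p(A|~E) = .3
    sell(bet({'A&E', '~A&E'}, set(), 0.50, 0.20)),   # sells $0.50 on E at p(E) = .4
]

for w in worlds:
    print(w, round(sum(b[w] for b in package), 2))   # -0.1 in every world
```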
Throughout this problem, use a scoring rule S that is similar to the Brier score, except without the square: it calculates the inaccuracy of a credence in a proposition at a world as the absolute value of the linear distance between the credence’s value and the truth value (0 or 1) of the proposition at that world.
Suppose that my p(A) = .51 and my p(¬A) = .49, and that those are the only two propositions that my credences are defined over.
(a) If A is true, what is the inaccuracy of my actual credences?
(b) If A is false, what is the inaccuracy of my actual credences?
(c) What would the answers to (a) and (b) be if instead my p(A) = .6 and my p(¬A) = .4?
(d) If I want to minimize my expected inaccuracy, what does S tell me to do?
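A sketch of the relevant computations, including a scan over candidate credences that shows what is distinctive about S here (its expected-inaccuracy minimizer, by the lights of p(A) = .51, sits at an extreme):

```python
# Linear (absolute-distance) scoring rule S over the two propositions A, ¬A.
def inaccuracy(cred_A, a_is_true):
    """Total S-inaccuracy of the credences (cred_A in A, 1 - cred_A in ¬A)."""
    truth_A = 1 if a_is_true else 0
    return abs(cred_A - truth_A) + abs((1 - cred_A) - (1 - truth_A))

for cred in (0.51, 0.6):
    print(cred, inaccuracy(cred, True), inaccuracy(cred, False))

def expected_inaccuracy(cred_to_adopt, current_cred=0.51):
    """Expected S-inaccuracy of adopting cred_to_adopt, by my current lights."""
    return (current_cred * inaccuracy(cred_to_adopt, True)
            + (1 - current_cred) * inaccuracy(cred_to_adopt, False))

best = min([x / 100 for x in range(101)], key=expected_inaccuracy)
print(best, expected_inaccuracy(best))   # minimized at 1.0: S rewards jumping to certainty
```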