Near the end of the last webnotes, we summarized three major limits for the frameworks we’re looking at, and said we’d go on to describe two more limits and explore four doubts/challenges. The limits we’ve seen so far are:
For this week, we asked you to read Sturgeon’s Chapter 6 and a portion of Lin’s article. Sturgeon raises two complaints against AGM theory. The first of these will constitute our fourth limit for these frameworks. That is: these frameworks only issue norms about what agents should believe or disbelieve, and so have nothing to say about suspension of judgment, which Sturgeon argues is a committed attitude in its own right rather than the mere absence of belief and disbelief.
I have sympathy for Sturgeon’s claims that suspension can’t be reduced to what the agent does or doesn’t believe and disbelieve. That may have the result that the norms for rational suspension won’t be straightforward consequences of the norms for belief.
In fact, Sturgeon seems in a position to push this even harder than he does. Although he rejects the claim that suspension consists in the absence of belief and disbelief, he’s willing to allow the normative principle that a fully rational agent who considers the question whether P will do exactly one of: believe P, disbelieve P, or suspend judgment about P (p. 186). Arguably, Sturgeon’s picture of suspension as a “committed neutrality” challenges whether rational agents do have to fall under any of those headings, even when considering the question whether P. Isn’t it an option to not yet have any commitment, neither to the truth of P, nor to its falsity, nor to neutrality?
These issues are important to the overall dialectical structure of Sturgeon’s book. But when we focus on how they should shape our evaluation of the AGM framework, the proponents of AGM might reasonably reply:
Those are interesting claims that we can discuss. But even if you’re right that there are doxastic norms my theory doesn’t deliver, that just shows a respect in which my theory is incomplete. I wasn’t aiming to give any norms beyond the norms on what agents should and shouldn’t believe.
For this reason, we’ll count this first complaint of Sturgeon’s as concerning an (alleged) limit of the AGM framework — one that you may count as more or less serious, but that is still dialectically different from a doubt/challenge about whether the framework correctly does what it aims to do.
The other complaint that Sturgeon raises is related to an objection that Lin discusses, and will be the fourth of the doubts/challenges we discuss below.
The AGM framework also has a fifth limit, which is important and calls for more discussion. Your revision function ★ takes two inputs: a belief set and a sentence. How does its behavior when taking one belief set as input constrain its behavior when taking another? For example, how does the behavior of your 𝓚 ★ __ constrain the behavior of your (𝓚 ★ A) ★ __, or vice versa?
AGM Postulates 7–9 tell us a little about such constraints, but even all the AGM Postulates taken together don’t constrain this very much. Even if one knew exactly how your revision function worked with 𝓚 as input, that would tell one very little about how it works with 𝓚 ★ A (or other belief sets) as input. This limit of the AGM framework is often described by saying there are few constraints on how revision “iterates,” that is, on how the first revision from 𝓚 to 𝓚 ★ A relates to subsequent revisions.
Underlying this issue is the fact that we’ve discussed several different aspects of your acceptance/belief state: the coarse-grained belief set, which includes all the sentences you accept or believe, and also finer-grained properties like your ranking of which remainder sets are “best,” or your entrenchment ordering ⊑. These latter properties are finer-grained because they map many-to-one onto belief sets: agents can have different entrenchment orderings but the same belief set. We need the finer-grained tools to determine the specific behavior of your contraction function (and its paired revision function). But the output of your revision function is only a new coarse-grained belief set. We’re not told what your specific new entrenchment ordering should be.
Consider this example. I see a distant animal, and believe that it’s not a bird and can’t fly. However my entrenchment ordering is such that ¬bird ⊏ (¬bird ∨ fly), with the result that if I were to learn it is a bird, I’d come to believe it can fly. But in fact that’s not what I learn; I learn instead only that the animal can fly. So I revise my beliefs, without yet concluding the animal is a bird. In my new state, I now have an entrenchment ordering where ¬bird ⊏ (¬bird ∨ ¬fly), with the result that if I were to now learn the animal is a bird, I’d come to believe it cannot fly.
Without further explanation, changing my doxastic dispositions in that way seems rather perverse. But AGM makes no objection to it.
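To make the underdetermination vivid, here is a minimal computational sketch. It is my own illustration, not anything from the readings: it models the example above in the style of a ranking (“system of spheres”) semantics, where revision keeps the most plausible worlds compatible with the new information. The atoms bird and fly and the particular ranking functions are all invented for the illustration.

```python
from itertools import product

# Worlds assign truth values to the two atoms in the example: bird, fly.
WORLDS = [dict(bird=b, fly=f) for b, f in product([True, False], repeat=2)]

def minimal_worlds(rank, info=None):
    """The most plausible (lowest-ranked) worlds, optionally restricted to those
    satisfying the new information. The sentences true at all of these worlds
    make up the corresponding belief set."""
    candidates = [w for w in WORLDS if info is None or info(w)]
    best = min(rank(w) for w in candidates)
    return [w for w in candidates if rank(w) == best]

# Initial state: the most plausible worlds are not-bird & not-fly.
def rank0(w):
    if not w['bird'] and not w['fly']: return 0
    if not w['bird']:                  return 1
    if w['fly']:                       return 2   # bird & fly held more plausible
    return 3                                      # than bird & not-fly

# Revise by "fly". AGM tells us the new belief set (here, its minimal worlds)...
print(minimal_worlds(rank0, lambda w: w['fly']))   # [{'bird': False, 'fly': True}]

# ...but not the new ranking. These two candidate new rankings agree on that belief set:
def rankA(w): return 0 if (not w['bird'] and w['fly']) else (1 if w['fly'] else 2)
def rankB(w): return 0 if (not w['bird'] and w['fly']) else (1 if not w['fly'] else 2)
print(minimal_worlds(rankA) == minimal_worlds(rankB))    # True

# Yet they disagree about a *further* revision by "bird":
print(minimal_worlds(rankA, lambda w: w['bird']))        # bird & fly
print(minimal_worlds(rankB, lambda w: w['bird']))        # bird & not-fly
```

Both candidate rankings deliver exactly the belief set AGM requires after the first revision, yet they prescribe opposite responses to then learning that the animal is a bird. The AGM Postulates do nothing to decide between them.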
Some research has gone into extending AGM with further postulates, to address this. Darwiche and Pearl proposed further constraints in the style of the existing AGM Postulates 7–9. Others propose constraints on how entrenchment orderings should evolve alongside your belief sets.
Another possibility is more radical, and proposes moving away from the representation of the agent’s acceptance/belief state as a deductively closed set of sentences. Instead, let it just be a (possibly unclosed) set of sentences. The aim of these theories is not to get away from the assumption of logical omniscience. They still count agents as “accepting or believing” all the deductive consequences of the sets that represent their state. The point is instead to make finer discriminations among those states.
Suppose agent X has an acceptance/belief state consisting of {A, B}, and agent Y has an acceptance/belief state consisting of {A, A iff B}. These have the same deductive closure, so at this point their beliefs are the same. But although both agents believe B and believe A iff B, in the case of agent X intuitively the former is their reason (together with A) for believing the latter, whereas with agent Y it’s the other way around. This difference is not captured in their belief sets. Now if each agent learns ¬A, we should expect that agent X will end up with the new state {¬A, B}, whereas agent Y will end up with the new state {¬A, A iff B}. Agent X’s beliefs will then include B, but agent Y’s will include ¬B. Since we’re representing their states in a finer-grained way, we can expect it to capture more of the specific differences in how the two agents will revise upon learning ¬A.
It may even be that there’s a third agent Z who starts with the state {A, A iff B}, and then after many revisions and contractions has come around to the state {A, B}. Since those two states have the same deductive closure, and AGM revisions are functions taking belief sets as input, on the AGM framework the agent would now be required to revise in exactly the same way in response to information ¬A as they would have originally. (Unless it were ever rationally permissible to replace one’s revision function with another.) But that’s not obviously correct. If we instead represent the agent’s states in the finer-grained way now being considered, and let revision functions operate on those finer-grained inputs, we might make better sense of agents having the same total (deductively closed) belief set but nonetheless reasonably having different dispositions to respond to new information — even when the agents are different time-slices of a single person who hasn’t changed their revision strategies in any fundamental way.
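Here is a toy check of the point about bases, again my own illustration rather than anything in the readings. It represents sentences by their truth-conditions over the two atoms A and B, verifies that the bases {A, B} and {A, A iff B} have the same deductive closure, and then applies a deliberately naive base-revision policy (drop whatever member is inconsistent with the new sentence, then add it):

```python
from itertools import product

WORLDS = list(product([True, False], repeat=2))    # valuations of the atoms (A, B)

def models(sentence):                              # sentence :: (A, B) -> bool
    return {w for w in WORLDS if sentence(*w)}

def closure_models(base):
    """Worlds satisfying every member of the base; two bases have the same
    deductive closure just in case they have the same set of such worlds."""
    ms = set(WORLDS)
    for s in base:
        ms &= models(s)
    return ms

A     = lambda a, b: a
B     = lambda a, b: b
notA  = lambda a, b: not a
AiffB = lambda a, b: a == b

base_X = [A, B]
base_Y = [A, AiffB]
print(closure_models(base_X) == closure_models(base_Y))   # True: same belief set

def base_revise(base, new):
    """Naive base revision: drop members inconsistent with the new sentence, add it."""
    kept = [s for s in base if closure_models([s, new])]
    return kept + [new]

print(closure_models(base_revise(base_X, notA)))   # {(False, True)}:  B survives
print(closure_models(base_revise(base_Y, notA)))   # {(False, False)}: now ¬B
```

After learning ¬A the two agents end up with different (closed) belief sets, even though their belief sets beforehand were identical. That is exactly the extra discriminating power the finer-grained representation is meant to provide.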
These are interesting descendants of the AGM framework. But for the rest of this discussion, let’s stick with the base system, which represents acceptance/belief states as deductively closed sets and whose revision functions take those belief sets as inputs.
We’ll turn now to our four doubts/challenges to the AGM framework.
The first of these doubts concerns the revision operator ★.
Recall the Success Postulate for AGM’s revision functions, which says that A ∈ 𝓚 ★ A, that is, that newly learned sentences A are always included in the resulting belief set. If the prior belief set 𝓚 was incompatible with A, then it’s the belief set that has to yield (some of its sentences have to be withdrawn). It’s never the new information that yields.
Why should we think that tensions between old beliefs and new information must always be resolved in that way?
Can’t there be sentences A where, even if they’re consistent, they’re so implausible in light of our existing beliefs that we should instead keep the old beliefs, and reject the new information?
Or sometimes we may want to reject part of the new information, and only incorporate the rest of it into our beliefs.
The Success Postulate says those ways of changing our beliefs are never rational. That’s not obviously correct.
Some research has gone into replacing AGM’s Success Postulate with something weaker. But there is no consensus about whether it should be replaced, or what the best replacement is.
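Just to give a feel for what a weaker alternative could look like, here is a minimal sketch, entirely my own and not any particular proposal from that literature (though it is in the spirit of what is sometimes called “screened” or non-prioritized revision). Propositions are modelled as sets of possible worlds, and a protected “core” of beliefs can veto the incoming information:

```python
from itertools import product

def screened_revise(belief_worlds, core_worlds, new_worlds):
    """belief_worlds: worlds compatible with everything you believe;
    core_worlds: worlds compatible with the beliefs you refuse to give up;
    new_worlds: worlds compatible with the incoming information."""
    if not (core_worlds & new_worlds):       # new info contradicts the protected core:
        return belief_worlds                 # this time it's the new information that yields
    if belief_worlds & new_worlds:           # new info is compatible with all your beliefs:
        return belief_worlds & new_worlds    # ordinary expansion
    return core_worlds & new_worlds          # otherwise retreat to the core, then add it

# Example: worlds are (raining, cold) pairs.
W = set(product([True, False], repeat=2))
beliefs = {w for w in W if not w[0] and w[1]}      # you believe: not raining, and cold
core    = {w for w in W if w[1]}                   # you refuse to give up: cold
print(screened_revise(beliefs, core, {w for w in W if w[0]}))      # "raining" is accepted
print(screened_revise(beliefs, core, {w for w in W if not w[1]}))  # "not cold" is rejected
```

A policy like this violates Success whenever the new information contradicts the core. Whether anything along these lines is the right replacement is exactly what remains unsettled.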
The next doubt concerns the contraction operator ⊖, and in particular the Recovery Postulate:
Recovery: 𝓚 ⊆ Cn((𝓚 ⊖ C) ∪ {C})
Recall we noted that if a contraction function ⊖ is defined in terms of ★ using the Harper Identity, the result will always satisfy this Postulate.
The Recovery Postulate is very controversial. It can seem intuitively correct, at least in some cases. If you first doubt or lose a belief C and then regain it, shouldn’t you be back where you started?
But here is a case that makes trouble for it. Suppose you start with a belief set 𝓚, which includes the two beliefs:
(C) Tony is a criminal
(D) Tony is a criminal who commits drive-by shootings (he’s a drive-by shooter)
Then for some reason you withdraw your belief in C. Now your state is 𝓚 ⊖ C. You will in this new state also have withdrawn D, since D entails C.
Next you receive new information that makes you accept:
(S) Tony is a criminal who shoplifts (he’s a shoplifter)
S is compatible with 𝓚 ⊖ C: we’re not supposing that when you withdrew belief in C you came to accept its negation ¬C. So your resulting new belief set should be Cn((𝓚 ⊖ C) ∪ {S}). Since S entails C, Cn((𝓚 ⊖ C) ∪ {S}) will be a superset of Cn((𝓚 ⊖ C) ∪ {C}). By Recovery, this latter set must include everything in 𝓚, including D. So then its superset Cn((𝓚 ⊖ C) ∪ {S}) must also include D.
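To summarize the chain of inclusions just argued for (this recap is mine, in the notation already in play):
D ∈ 𝓚 ⊆ Cn((𝓚 ⊖ C) ∪ {C}) ⊆ Cn((𝓚 ⊖ C) ∪ {S})
where the first ⊆ is supplied by Recovery and the second holds because S entails C.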
Consider what that result means. You start off believing C and D; then you withdraw both beliefs; then you learn that Tony is a shoplifter (S). Of course you will now again believe he’s a criminal; that’s entailed by S. But we’ve just argued that Recovery says you must also restore your earlier withdrawn belief that Tony is a drive-by shooter (D). This seems an unwelcome result.
There may plausibly be some cases where learning S prompts you to reconsider whether you were right to withdraw your earlier belief in D. What’s troubling is that Recovery seems to say that in these cases you must always restore your belief in D.
Recall that AGM is in a sense equivalent to System R for nonmonotonic consequence, and this is in turn equivalent to System P with the additional rule of Rational Monotonicity. We’ll address that additional rule shortly, but first let’s focus on the fact that AGM is essentially a strengthening of System P.
We observed that one derived rule for System P is:
If G ∧ D ƕ Q, then G ƕ D ⊃ Q
Here’s a reason to hesitate about that. It seems to imply that a rational agent will never make any ampliative inferences at all.
For instance, suppose that after observing 100 ravens that were all black (let this be D), you would conclude that all ravens are black (let this be Q). But you haven’t yet made the observations. So your current ƕ is such that D ƕ Q. (We’re letting G be the empty conjunction, that is, the tautology ⊤.)
This derived rule then tells us that you must already accept that D ⊃ Q, or in other words, that if any ravens are non-black (¬Q), some of them will show up in the first 100 you observe (¬D).
This reflects the “epistemic conservatism” of System P (and the other frameworks we’re considering): their guiding idea that when you learn something D, you won’t expand your beliefs any more than you logically have to in order to accommodate the new information. Compare the AGM Inclusion Postulate, which says 𝓚 ★ A ⊆ Cn(𝓚 ∪ {A}).
Thus the only way that learning D could justify your concluding Q is if you already accepted D ⊃ Q.
This is an ironic result, for systems that aim to articulate minimal rationality constraints on nonmonotonic consequence.
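Here is a small sanity check of that claim, my own sketch rather than anything in the readings. It uses a simple ranked-worlds semantics (one standard way of modelling these consequence relations): premise ƕ conclusion holds just in case the conclusion is true at all the most plausible premise-worlds. The propositions D and Q are arbitrary stand-ins.

```python
from itertools import product
import random

WORLDS = list(product([True, False], repeat=3))     # valuations of three atoms

def entails_nm(rank, premise, conclusion):
    """premise ƕ conclusion: the conclusion is true at every minimally-ranked premise-world."""
    candidates = [w for w in WORLDS if premise(w)]
    if not candidates:
        return True
    best = min(rank[w] for w in candidates)
    return all(conclusion(w) for w in candidates if rank[w] == best)

D       = lambda w: w[0]                  # some proposition D
Q       = lambda w: w[1]                  # some proposition Q
top     = lambda w: True                  # the tautology (no background premises)
D_imp_Q = lambda w: (not w[0]) or w[1]    # the material conditional D ⊃ Q

random.seed(0)
for _ in range(1000):
    rank = {w: random.randrange(4) for w in WORLDS}   # a randomly chosen ranked model
    if entails_nm(rank, D, Q):
        assert entails_nm(rank, top, D_imp_Q)         # the derived rule: never fails
print("in every sampled model where D ƕ Q, the agent already accepts D ⊃ Q")
```

The assertion never fails: any most-plausible world that happens to satisfy D is thereby a most-plausible D-world, so it must satisfy Q; and most-plausible worlds that don’t satisfy D satisfy D ⊃ Q trivially.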
The doubt we want to spend the most time developing targets another aspect of this “epistemic conservatism.” For this, however, we will focus on the additional rule of Rational Monotonicity, and related commitments of these systems. (Our discussion here will overlap with Sturgeon’s second complaint against AGM, and the objection discussed in the Lin selection.)
That rule, in its general form, says:
If Γ ƕ̸ ¬D for every D ∈ Δ, and Γ ƕ Q, then also Γ, Δ ƕ Q.
Letting Δ contain a single sentence D, and letting G be the conjunction of the sentences in Γ, this becomes:
If G ƕ̸ ¬D, and G ƕ Q, then also G ∧ D ƕ Q.
Translating ƕ into belief-revision talk in the way we described earlier, this is equivalent to:
If ¬D ∉ 𝓚 ★ G, then if Q ∈ 𝓚 ★ G, then Q ∈ 𝓚 ★ (G ∧ D)
Or, generalizing over the choice of Q:
If ¬D ∉ 𝓚 ★ G, then 𝓚 ★ G ⊆ 𝓚 ★ (G ∧ D)
This is just our And-Preservation Postulate for AGM, with variables changed. (In a footnote, Lin even calls this postulate “Rational Monotonicity.”)
A special case (taking G to be a tautology, so that for a consistent 𝓚 the set 𝓚 ★ G is just 𝓚 itself) is the Preservation Postulate:
If D is consistent with the sentences in 𝓚 (that is, ¬D ∉ 𝓚), then 𝓚 ⊆ 𝓚 ★ D
Here’s an initial line of thought resisting these postulates. What if D is consistent with 𝓚, but undermines your support for some element of 𝓚? So now 𝓚 ★ D should lose some element of 𝓚? Couldn’t that happen, if you ever made a defeasible inference?
(Maybe the issues raised in our previous doubt should be teaching the lesson that these frameworks actually don’t want to allow any defeasible inferences. But we’ll consider the present argument on its own.)
Sturgeon’s “Canadian Case” (pp. 190–2) presses this line of thought. You start out believing that Pierre, like most other Canadians, speaks only English. (Call this Q.) Then you learn that Pierre is a Canadian from Montreal (that is our D). Your new information is compatible with your existing belief set, but intuitively you should now withdraw your belief that Pierre speaks only English (Q). Sturgeon writes:
Good reckoning can be spoilt by the introduction of new information which is logically consistent with the information to hand… (p. 192)
That is, your new information may give you reason to withdraw more than is logically necessary.
Lin develops this challenge in the selection we had you read (see pp. 351–3 and pp. 380–1). Lin’s argument is a variation of a famous counterexample that Stalnaker formulated against the rule of Rational Monotonicity. We’ll describe Stalnaker’s argument first, and then explain the very minor variation in Lin’s presentation.
Stalnaker’s example works like this. (He’s extending an example Quine used earlier to discuss counterfactuals.) You start out with these beliefs:
(V) Verdi is Italian
(B) Bizet is French
(S) Satie is French
Now suppose you were to learn:
(E) Verdi and Bizet are compatriots
Two things seem plausible. First, you could plausibly learn this in a way that made it appropriate for you to retain your belief that Satie is French (S). That is, your initial ƕ could be such that:
E ƕ S
Second, learning E wouldn’t put you in a position to rule out:
(E′) Verdi and Satie are compatriots
That is, we have:
E ƕ̸ ¬E′
But now, consider whether, if you learned both E and E′, you would be obliged to still believe that Satie is French (S). It seems not. The composers may all be French or may all be Italian. So:
E ∧ E′ ƕ̸ S
But this contradicts Rational Monotonicity.
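To locate the clash precisely, here is a sketch (my own, with an invented encoding of the nationalities) of a preferential model, the kind of model standardly associated with System P, in which all three of the judgments above hold together. What such models can do, and ranked models cannot, is violate Rational Monotonicity:

```python
from itertools import product

# Worlds assign a nationality, 'F'(rench) or 'I'(talian), to Verdi, Bizet, Satie in that order.
WORLDS = list(product('FI', repeat=3))

E      = lambda w: w[0] == w[1]      # Verdi and Bizet are compatriots
Eprime = lambda w: w[0] == w[2]      # Verdi and Satie are compatriots
S      = lambda w: w[2] == 'F'       # Satie is French

# A strict partial plausibility order: just two comparisons, everything else incomparable.
BELOW = {(('I','I','F'), ('I','I','I')), (('F','F','F'), ('F','F','I'))}

def entails_nm(premise, conclusion):
    """premise ƕ conclusion: the conclusion holds at every minimal premise-world."""
    ws = [w for w in WORLDS if premise(w)]
    minimal = [w for w in ws if not any((v, w) in BELOW for v in ws)]
    return all(conclusion(w) for w in minimal)

print(entails_nm(E, S))                                # True:  E ƕ S
print(entails_nm(E, lambda w: not Eprime(w)))          # False: so E ƕ̸ ¬E′
print(entails_nm(lambda w: E(w) and Eprime(w), S))     # False: so E ∧ E′ ƕ̸ S
```

So the three judgments are jointly satisfiable in a System P model; what they jointly rule out is the extra rule of Rational Monotonicity, and with it the ranked models (and AGM-style revision operators) that validate it.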
The only difference between Stalnaker’s version of this argument and Lin’s presentation is that Lin understands Stalnaker’s version to be one where you learn a single conjunctive sentence E ∧ E′, whereas in Lin’s own discussion, he’s supposing you first learn E, and is discussing what happens if you then go on to further learn E′. This difference doesn’t seem to affect the intuitive force of the cases.
As Lin sums up his presentation of the argument:
Let us focus on this agent’s second revision of beliefs, prompted by information E′. Information E′ is compatible with what she believes right before receiving this information, and she drops her belief in [S] nonetheless. So this agent’s second revision of beliefs violates Preservation. But there seems nothing in the specification of the scenario that prevents the agent from being perfectly rational. So this seems to be a counterexample to the Preservation Thesis. (p. 353)
(Lin goes on in the rest of the selection to discuss how the dialectic may continue, with the proponents of AGM trying to explain this apparent counter-example away.)