Wednesday, April 8, 2009

Bayesians and Probability One

It is often said that a Bayesian agent should not assign probability one to a proposition unless it is a logical truth. However, this principle is often appealed to without any reference or argument. I guess this is because people take the principle to be so self-evident that it doesn't need any support, but can anyone point me to any "standard" references or discussions of the principle?

10 comments:

  1. This principle seems dodgy to me. "2+2=4" isn't a logical truth. "I exist" isn't a logical truth. Is there a reason why I shouldn't assign a probability of 1 to these? What's the argument?

  2. I think this is largely because once you set the probability of a proposition to 1, conditionalization can never move it again. So, given our epistemic limitations, we want to be cautious about assigning probability 1 to a proposition and thereby giving up the option of revising that assignment in light of any evidence. On the other hand, we don't mind doing this for tautologies.
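    To make that explicit (a minimal sketch, assuming update by conditionalization on evidence $E$ with $\Pr(E) > 0$): if $\Pr(A) = 1$ then $\Pr(A \wedge E) = \Pr(E)$, and so
    $$\Pr(A \mid E) = \Pr(A \wedge E)/\Pr(E) = \Pr(E)/\Pr(E) = 1.$$
    No evidence of positive probability can dislodge an extremal assignment by conditioning alone.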

    On the other hand, Bayesians will have to admit that events of probability 1 can fail to occur and some events with Pr(0) can occur. So, maybe it wouldn't be a serious epistemic problem if we assigned some proposition Pr(1) and it turned out to be false.
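    A standard illustration of that point (my own toy example, not anything specific to this thread): let $X$ be drawn uniformly from $[0,1]$. For any particular value $c$, $\Pr(X = c) = 0$ and $\Pr(X \neq c) = 1$; yet some value is always realized, so a probability-0 event occurs and the corresponding probability-1 event fails. In such continuous settings "probability 1" means "almost surely", not "necessarily".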

  3. To be sure, most Bayesians do allow for extremal probabilities (0 or 1) when it comes to established empirical facts, although some people side with Jeffrey and say that we hardly ever know anything for sure, and that we should not give extremal probability to any contingent fact, even after observation.

    The principle that Gabriele talks about is often applied to prior probability, i.e., to the probability assignment that expresses our uncertain opinion before we have observed anything. Jeffrey, in line with the foregoing, advocates so-called "regular" or "strictly coherent" priors, which only assign extremal values to tautologies and contradictions.

    Regularity is a widely shared Bayesian principle. For instance, the famed convergence results by Gaifman and Snir depend on it. But it is also connected to serious problems for Bayesians, as highlighted in 'The well-calibrated Bayesian' by Philip Dawid. If we do not assign the true hypothesis some positive probability at the outset, we are forever stuck with false ones. And what if we have uncountably many mutually exclusive possibilities?
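    That last worry can be spelled out (a small sketch, assuming real-valued and at least finitely additive probability): if the hypotheses $H_i$ are mutually exclusive, then for each $n$ at most $n$ of them can satisfy $\Pr(H_i) \geq 1/n$, on pain of their disjunction exceeding probability 1. Hence at most countably many of the $H_i$ can receive positive probability, so with uncountably many mutually exclusive possibilities regularity cannot be satisfied by a standard real-valued prior.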

    As for references on strict coherence, have a look at Skyrms' paper in Philosophical Studies 77:1, 39-55, 1995: many references in there.

  4. Carnap was one of the first to really advocate regularity/strict coherence, in the form of the demand that no non-contradictory statement should be assigned a logical probability of zero (e.g., section 7 of his 'A Basic System of Inductive Logic', 1971). If, like Jeffrey, you think that the proper successor to Carnap's inductive probability is credence, then to carry over as much of that project as possible (as Jeffrey also apparently wanted) one might wish to retain regularity. The idea, one supposes, is that, as a requirement on initial credences (priors) at least, regularity displays an admirable open-mindedness to the possible future course of evidence and the hypotheses it will favour. One wouldn't want to rule out the truth accidentally ab initio.

    As a requirement on posterior credence, regularity has been less favoured, though some early writers (Kemeny, for one) did seem to like it. Hájek in his SEP article gives the following nice counterexample:

    "... someone who assigns probability 0.999 to this sentence ruling the universe can be judged rational, while someone who assigns it probability 0 is judged irrational. Note also that the requirement of regularity seems to afford a new argument for the non-existence of God as traditionally conceived: an omniscient agent, who gives probability 1 to all truths, would be convicted of irrationality. Thus regularity seems to require ignorance, or false modesty."

    Lewis also imposed regularity, but with a different explicit motivation. He says ('Subjectivist's Guide', p. 88 in Phil Papers II):

    "I should like to assume that it makes sense to conditionalize on any but the empty proposition. Therefore, I require that C is regular: C(B) is zero, and C(A/B) is undefined, only if B is the empty proposition, true at no worlds. You may protest that there are too many alternative possible worlds to permit regularity. But that is so only if we suppose, as I do not, that the values of the function C are restricted to the standard reals. Many propositions must have infinitesimal C-values, and C(A/B) often will be defined as a quotient of infinitesimals, each infinitely close but not equal to zero. (See Beinstein and Wattenberg (1969).) The assumption that C is regular will prove convenient, but it is not justified only as a convenience. Also it is required as a condition of reasonableness: one who started out with an irregular credence function (and who then learned from experience by conditionalizing) would stubbornly refuse to believe some propositions no matter what the evidence in their favor."

    You can see the Carnap/Jeffrey motivation at the end, but the primary reason is to permit well-defined conditional probabilities on any non-trivial proposition; within the Kolmogorov ratio definition of conditional probability, it is easy to see that regularity is needed to get this to work. But, as Jan-Willem hints too, there are generally too many non-trivial propositions to all get positive real credence, so Lewis also needs to opt for infinitesimals with all their problems... So to satisfy Lewis's aim, it looks better to follow Hájek's recent arguments for taking conditional probability as basic.
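    To make the ratio point concrete (a sketch of the standard definition, nothing beyond what Lewis assumes): $C(A \mid B) = C(A \wedge B)/C(B)$, which is undefined whenever $C(B) = 0$. Regularity, i.e. $C(B) > 0$ for every non-empty proposition $B$, is exactly what guarantees that this ratio is defined for conditioning on any non-trivial proposition. On the alternative Hájek favours, $C(A \mid B)$ is taken as primitive and the product rule $C(A \wedge B) = C(A \mid B)\,C(B)$ becomes a constraint rather than a definition, so conditioning on probability-0 propositions can still make sense.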

    Finally, one could argue for regularity not as a thesis about logical truths, but as one about a priority, so that one should assign probability one only to a priori truths. Again this doesn't look that plausible as a constraint on posterior credence. But it does seem to fit well with some older remarks about what one should be 'certain' of in non-probabilistic epistemological traditions, in particular, the idea that all and only a priori propositions are those which are certain and unrevisable. If 'unrevisability' is to be cashed out probabilistically, it seems that assigning probability 1 to them is the natural way to do it (see, e.g., this Spohn paper). So if one is swayed by these epistemological reflections, that might provide an argument for regularity.

  5. As far as I know, being a logical truth is a sufficient condition for having probability 1, but not a necessary condition: imagine I choose a natural number, x; the probability that x is not 4 is 1, but "x is not 4" is not a logical or mathematical truth.

  6. Thanks for all your comments.

    Anonymous @ 8:49

    I agree--the principle should apply to a posteriori truths (as Antony suggests), but my understanding is that it is usually formulated so as to apply to logical truths (am I wrong?)

    Jan-Willem,

    I definitely would side with Jeffrey on that (although I am less optimistic than he is about the prospects for formal epistemology once one concedes that). In particular, it strikes me as epistemologically implausible that there can be any "established empirical facts" that are in principle unrevisable. Moreover, as Jeffrey suggests in various places, if there is anything like empirical evidence, it would seem to be non-propositional.

    Antony,

    I think a few contemporary epistemologists would challenge the assumption that all a priori knowledge is unrevisable. Say I see what seems to be a valid proof of a certain true mathematical statement, M. That would seem to provide me with a priori evidence that M expresses a true proposition. But what if, on closer scrutiny, the proof turns out to contain a gap? Wouldn't this discovery undermine my a priori justification for believing that M?

  7. This comment has been removed by the author.

  8. Gabriele: I thought you were discussing Regularity, the principle (as you put it) that

    "a Bayesian agent should not assign probability one to a proposition unless it is a logical truth."If we replace 'logical' by 'a priori', we get the principle that only a priori truths should get probability 1. This is not the converse principle that all a priori truths should get probability 1; the example you give is plausible enough as a counterexample to Converse Regularity. So I don't quite see the relevance of your last remark.

    It seems not indefensible (though I'm not sure I want to defend it), considered as a constraint on priors, that only unrevisable propositions should ever get probability 1. If not, some revisable propositions could be assigned probability 1 and would thereby be unrevisable by conditioning alone, and thus (according to orthodox Bayesianism anyway) not revisable by rational means at all.

  9. Antony,

    My remark is relevant because you explicitly mentioned the traditional idea that all and only a priori propositions are those which are certain and unrevisable.

  10. Oh I see now. Sorry. I agree with you that the claim that all a priori sentences are unrevisable is not particularly defensible; I was just introducing the idea of a close connection between a priority and unrevisability as a way of motivating Regularity.

