I often want to believe that science and philosophy have been enriched by the introduction and deployment of various kinds of probability theory. I am also often attracted to the further thought that Bayes' theory should serve as a normative ideal in establishing how one should think about, say, evidence or risk.

Yet reflecting on this moving review, http://ndpr.nd.edu/review.cfm?id=15666, of Swinburne's recent book (Was Jesus God?), or on, say, Earman's earlier, well-known use of Bayes' theory in Hume's Abject Failure (not to mention the assignment of probabilities in risk models used by the financial industry, or the manipulation of, say, Bayes' theory in research applications by scientists), has made me wonder whether the following isn't true: if the competent use of a probability theory can lead one to assign a positive value to the probability that God has anthropomorphic qualities, this should be seen as a reductio ad absurdum of the theory. I know there are many good objections to this melancholic thought (not to mention that I may be unable to construct a valid argument for it). But at what point does one admit that the problem lies with the class of tools rather than with the uses made of them? Or is the National Rifle Association's claim that "Guns do not kill people...people kill people" really convincing?

This seems like a very strange objection. If anything, one of the great values of a Bayesian approach to reasoning is that one learns to avoid assigning extremal values in probability assessments. I take this to be an important corrective to overconfidence. The theory enforces a kind of (minor) epistemic humility that can serve us rather well. After all, if we grant that we might be wrong about our strongly held scientific (or, in this case, religious) beliefs, we might be in a better position to reorient ourselves when, on occasion, we turn out to be mistaken. I do not see the value in false certainty. It might feel good, but ultimately it seems to be epistemically pernicious.

Fair enough, Ryan. I agree with you in principle. But does the use of Bayes' theory really engender such (minor) epistemic humility in its users in practice? (There may be a nice experiment for the experimental philosophers/economists here.)

I don't know Swinburne's book (I'll try to find it; it looks funny), but my subjective prior probability of his argument being fallacious is close to one!

After all, Bayes' THEOREM does not (and cannot) say anything about the probability of single sentences: it only imposes a constraint on the MATHEMATICAL RELATIONS that can hold between the probabilities of different statements; so one can attach ANY probability one wants to SOME propositions by strategically manipulating the probabilities of OTHER propositions. Bayes' theorem is only useful for attaching probabilities (as opposed to CONNECTING probabilities) when one has some 'reasonable' epistemic constraint on some of the other probabilities (a constraint that cannot be justified by Bayes' theorem, or by the MATHEMATICAL theory of probability, for it is an empirical question).
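The point is easy to see with a small numerical sketch (the likelihood values here are hypothetical, chosen only for illustration): holding the evidence fixed, the theorem lets the prior drag the posterior anywhere one likes.

```python
def posterior(prior, likelihood_h, likelihood_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# Identical evidence (same likelihoods, 0.9 vs 0.1), very different priors:
for prior in (0.001, 0.5, 0.999):
    print(prior, posterior(prior, 0.9, 0.1))
# The posterior ranges from well under 1% to over 99%, driven
# entirely by the prior that was fed in, not by the evidence.
```

The theorem itself is silent on which prior is the 'reasonable' one; that constraint has to come from elsewhere.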

To be clear: from the review it does not seem Swinburne's book uses Bayes' theory at all.

The ontological argument doesn't make me doubt the legitimacy of deductive logic. Similarly, wacky arguments using Bayes' Theorem don't make me doubt Bayesianism. Of course: I do think people use Bayes' Theorem in cases where there may not be well-defined probabilities and where they couldn't really know the probabilities even if they were defined, just as people make unsound deductive arguments.

The NRA argument is not convincing. If spurious application of Bayes' theorem were actually being used to kill people, then I would fully support rigorous background checks and licensing procedures before allowing people to use Bayes' theorem.

Okay, I feel a bit funny defending a half-baked post, but let me make some suggestions here.

First, in response to P.D. Magnus (nice to meet you!): one way to understand the significance of Kant's old objection to Anselm's version of the ontological argument (that existence is not a predicate) is to raise questions about the logical tools available to Anselm.

Second (also in response to P.D.): the mere existence of many formal theories encourages people to pretend there are "well-defined probabilities" and to go ahead and assign values in all kinds of policy-oriented areas (or in applications to funding agencies). Related to this is the appeal (as Jesus Zamora notes) to the existence of some 'reasonable' constraints. But good judgment is a scarce good.

Third, within philosophy, we often treat arguments that are dressed up with such numerical values with a whole lot more seriousness than if such values are omitted.

Fourth, against Hitchcock (whose work on causation I admire): how confident are you that the spurious application of Bayes' theorem (and other approaches) does not cause terrible harms in the policy sciences? A lot of the very best risk models encouraged very damaging behavior in recent decades. (As an aside: there is plenty of evidence that negative economic growth tends to lead to reduced life expectancy, depression, etc. So your joke may be very apt!) Have you investigated the routine use of such models in industry and government? I wouldn't mind being reassured, but my own (current) field-work in finance does not lead me to be reassured.

I doubt that the problems are solely the result of mis-application of Bayes' theorem. Obviously my suggestion of regulating the use of Bayes' theorem was tongue-in-cheek. But the relevant analog here, that of regulating banking and investment, is something that I would strongly support. Countries that have strongly regulated banking systems (e.g. Canada) have not suffered the catastrophic failures experienced in countries with de-regulation (e.g. Iceland), although of course in the global economy, everyone is hurting.

But of course I digress from the original point of your thread. I will add to your list of complaints that I frequently see papers on the doomsday paradox, especially, that commit basic fallacies of confirmation theory of the sort pointed out by Carnap decades ago. (E.g. If E confirms H, and E' implies E, it doesn't follow that E' confirms H.)
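A minimal counterexample makes Carnap's point concrete (a toy four-outcome sample space, with hypothetical events chosen purely for illustration):

```python
from fractions import Fraction

# Uniform probability over four equally likely outcomes.
OMEGA = frozenset('abcd')

def p(event):
    """Unconditional probability of an event (a set of outcomes)."""
    return Fraction(len(event), len(OMEGA))

def p_given(event, given):
    """Conditional probability P(event | given) under the uniform measure."""
    return Fraction(len(event & given), len(given))

H  = {'a', 'b'}         # P(H) = 1/2
E  = {'a', 'b', 'c'}    # E confirms H: P(H|E) = 2/3 > 1/2
E2 = {'c'}              # E2 implies E (E2 is a subset of E) ...

assert E2 <= E                    # E2 entails E
assert p_given(H, E) > p(H)       # E confirms H
assert p_given(H, E2) < p(H)      # ... yet E2 DISCONFIRMS H: P(H|E2) = 0
```

So evidence entailed by confirming evidence can itself be maximally disconfirming; the inference pattern is simply invalid.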

I did not mean to imply that problems are generated solely by the use of Bayes' (or other such) theorems. I do think there is a general problem with the uses to which such theories are put in various contexts.

Canada is an interesting case because they have (in effect) a banking cartel. It's unclear to me whether it escaped the worst due to a lack of competition (so that profit margins on bread-and-butter banking stayed high) or to the sane regulation of banking and mortgages, not to mention saner monetary policy. (Of course, even though commodity prices have come down, Canada continues to benefit from strong demand in those areas. Until China implodes, Canada is a good place to be.)


Earman's book inspired me to write (= so infuriated me that I couldn't help writing) a paper that appeared in Hume Studies a few years ago, in which I argue that Hume _deliberately_ avoided applying mathematical concepts of probability to empirical questions. (Hume is often thought simply to have been ignorant of mathematical probability, or perverse in coming up with rules of probability that violate the axioms of mathematical probability--but this is not so.) I show that Hume's concept of evidential probability is part of a long line of thinking about probability that can be traced back to ancient Roman law.

All of which is to say: it is perhaps time to at least question the hegemony of mathematical probability in reasoning about evidence, at least in some contexts. (Hume can say perfectly sensible things about miracles, for example, without having to assign precise degrees of probability to the law of nature in question, the evidence of the senses, the reliability of the testimony, or anything else.) The mathematical apparatus gives us an illusion of precision in most cases, and it doesn't at all capture how humans actually reason about evidential probability. As for the workman-vs.-tools question in Eric's original post, my intuition is that the very framework of mathematical probability IS to blame for various mistakes/problems such as the one he mentions. The example in a reply of risk management and policy is even more deeply worrisome! Maybe Hume could help there....

Does anyone know who said that philosophers have to be careful to avoid the "premature regimentation of the explanandum"? I've always thought that was a nice phrase - I think it was one of the logical positivists or empiricists but I can't recall who.

Yes, Bill, your paper on Hume was important to my coming to see why folks may wish to resist applying mathematical concepts to certain questions. (Hume gets it from Spinoza, by the way; Spinoza thought that applying mathematics was an imaginative activity and thus could not secure the highest epistemic security.)

I still think that mathematical tools can be used in the human sciences and policy areas, but they should come with big warning labels: 'handle with care.'

I myself have been tempted toward something like Eric's reductio interpretation of Swinburne's use of Bayes' theorem. (I haven't read the book mentioned, but he has most certainly used Bayesian reasoning in his past arguments for the existence of God on the basis of fine-tuning and other phenomena, such as in his book _The Existence of God_.) But this is a base impulse for a frequentist like me, which I choose to resist! As it happens, Swinburne is not a subjectivist about probability but subscribes to a rather specific version of a logical-probability view. I think there are plenty of points at which one could object to his arguments (the ones I'm familiar with, anyway) that would not touch his use of Bayes' theorem.

Swinburne is really very smart and sophisticated -- I debated Bayesianism with him at length over lunch one day when he was a visitor in my department. But he is very committed to a certain kind of defense of theism, and uses his understanding of confirmation theory toward this aim. I have also seen some really ham-handed uses of Bayesian reasoning along similar lines, and I will say this much: the dominance of Bayesianism among philosophers of science seems to have led some philosophers not schooled in these debates to regard the deployment of Bayes's theorem in such settings as completely unproblematic -- they seem to be unaware that there is any debate over Bayesianism, much less that most statisticians and working scientists are NOT Bayesians. Philosophers outside of phil sci should certainly be aware of the strong arguments that have been advanced on behalf of Bayesian approaches, but also of the many criticisms that have been raised in the literature against them.

Kent, I think we agree (on the whole), and I like how you have extracted something sensible out of my original post.

However, it is my (very subjective!) impression that many scientists are Bayesians in their grant applications (maybe because grant agencies like to see this).

Unfortunately, false premises, flawed reasoning steps, and failed conditions of applicability can all be smothered in technical detail. This is so for any inferential system, Bayesianism not excluded.

For example, if I adopt the premise that "the moon is made of blue cheese", and the further premise that "if the moon is made of blue cheese, then Brigitte Bardot is a man", then I can deduce by classical logic that, indeed, Bardot is a man.

In the case of Swinburne's conclusions, the situation may be much the same. To many of us, the conclusion that God's existence is more probable than not will sound awkward. The first question to be asked is: what probability assignments did he take as input to his inferences? Or in other words, what were the premises?

This is not to say that there are unreasonable premises in Swinburne's book. And it is also not to say that Swinburne's inferences are fallacious. Let's suppose there are no flaws, and that the premises are all innocuous. Then we can still object to the conclusion.

After all, it is far from clear that Bayesian inference is the correct inferential system for the case at hand. For one, it seems very unrealistic to suppose that our doxastic attitudes towards the proposition "God exists" can be captured by a probability assignment.

In other words, the conditions for applying Bayesian inference might not be met. I take it that many of the above comments are actually concerned with that.

Hi Jan Willem (missed you at our last Quine reading group!), your comments are reasonable, open-minded, and fair. Nevertheless, I think they miss the point. My initial post was designed to introduce the worry that some inferential systems facilitate their own misguided application. This is why I raised the comparison with the NRA slogan. We all recognize that tools require skill in their application. What we find harder to discuss is how facility in operating certain tools may go hand in hand with bad judgment in their application(s). The very design of the tool, rather than its end-users, may be (part of) the problem. (Think of how the law treats children's toys, guns, cars, etc.) Moreover, in policy science we also need to worry about all the (perverse) incentives for experts and policy-makers that encourage the use of technical tools (when the manner of doing so can create potentially disastrous results). A blunt policy response would be to bar the use of certain tools; another would be extensive (and expensive) re-training/re-policing/re-monitoring of the users of tools, etc. Should philosophy avoid its responsibility for what happens during the widespread adoption of its favored tools?
