I received the following reply to an earlier blog post concerning whether it was possible for someone to rationally believe in the miraculous.
Dear Alan,
I am a Czech grad student in philosophy who wants to write a dissertation on contemporary analytical philosophy of religion, mainly the evidence for the resurrection of Jesus Christ. I am especially interested in R. Swinburne and W. L. Craig. Your post about Ehrman is very useful, so I will try to ask you for some advice.
Currently I have a problem with an argument against miraculous events in Jordan Howard Sobel’s Logic and Theism (2004, Cambridge UP).
There is, in Sobel, pp. 332f., a proof of a Bayesian proposition which Sobel calls Hume’s Theorem. It concerns conditions for the establishment of an event by testimony and, in adapted notation, reads as follows: [(P(tM) > 0) and (P(M/tM) > 0.5)] only if [P(M) > P(tM and not-M)]. P: probability; M: an event occurs; tM: a testimony for M occurs.
Nice to make your acquaintance, Vlastimil. I’ve not yet had the chance to read Sobel’s book, so I’m relying on your summary. My first question is why Sobel presents “Hume’s Theorem” (HT) as
(HT) [(P(tM) > 0) and (P(M/tM) > 0.5)] only if [P(M) > P(tM and not-M)]
and not as
(HT*) P(M/tM) > 0.5 only if [(P(tM) > 0) and (P(M) > P(tM and not-M))]
In other words, why put P(tM) > 0 on the LHS of the “only if” rather than on the RHS? It may not matter much, but (HT*) seems much more natural to me.
Now, if M is a miraculous event, P(M) is, according to Sobel, ALWAYS extremely small: nearly 0. Why? It seems Sobel gets the value of P(M) as the ratio of miraculous events of a certain kind to all events of that kind. E.g., the number of human water-walking events / (the number of human water-walking events + the number of failed human water-walking events); or the number of resurrected people / the number of people who have died.
So Sobel is relying on a simple ratio definition of probability (# of cases of a specified type / total # of cases). I can see why he would. Not only is that how Hume would do it, and not only does it make the probability estimates more straightforward, but it also easily gets him his desired conclusion, namely, that P(M) is very low. And if P(M) is inevitably really small, then it’s going to be very hard to satisfy (HT), because P(M) will very rarely, if ever, be greater than P(tM and not-M).
But why think that the simple ratio definition of probability is the appropriate conception of probability to apply here? Offhand, it seems to me that an epistemic definition of probability (e.g., betting quotients) makes more sense. Isn’t P(M/tM) supposed to be an epistemic probability? Isn’t it supposed to measure the degree of rational credence in M given tM? And obviously we should apply the same conception of probability on both sides of (HT). So if P(M/tM) is read as an epistemic probability, then so should P(M), and now it’s not at all clear that P(M) must invariably be low. In fact, depending on one’s background beliefs, P(M) may in some cases be moderately high, or at least high enough to beat out P(tM and not-M).
For this reason, I’m not much impressed with Sobel’s argument. By appealing to additional background beliefs here I am essentially endorsing a version of your (b) proposal, namely, that P(M) needs to be assessed in the light of one’s total relevant evidence, not simply in the light of past event-type ratios, as Sobel wants to do. But the relativization of P(M) to background beliefs means that whether P(M/tM) turns out to be greater than 0.5 may vary from person to person. In relation to the background beliefs of someone like Sobel, I suspect that P(M) is effectively zero, such that no amount of testimony could even in principle qualify a miracle-report as worthy of credence for him.
PS. The lottery example in your Appendix does a really good job of showing why a simple ratio definition of probability is the wrong one to use. Unsurprisingly, Sobel responds to the problem by tacitly shifting to an epistemic notion of probability when he appeals to the background assumption that “in common circumstances, notwithstanding the antecedent improbability, we should believe [a] report according to how we consider the reporter to be.”
Vlastimil and Alan, this (and the post it refers to) caught my eye because of the green and blue taxi example in Hacking. It’s fresh in my mind because I’m reading the Hacking book right now and just finished that chapter, “Bayes’ Rule”. I think one possible direction you could take is found in the very next chapter of Hacking, “Expected Value”.
Being interviewed by the police about which taxi you saw carries no value for you, neither positive nor negative. The witness answers the question and is free to go.
But in the circumstances in which the gospels were written, being a follower carried great risk to one’s life. Those who reported the miracles probably lost their lives for their beliefs.
If you don’t have Hacking, here’s a brief rundown. His notation:
(for two possible consequences)
Act: A
Consequences: C1, C2 (acts have consequences)
Utilities: U(C1), U(C2) (some consequences are desirable, some are not; call their value their ‘utility’)
Probabilities: Pr(C1/A), Pr(C2/A)
To find the expected value of an action, sum the products of each consequence’s utility and the probability of that consequence given the act:
Exp(A) = [Pr(C1/A)][U(C1)] + [Pr(C2/A)][U(C2)]
He leads off with a simple example:
Your aunt gives you a FREE lottery ticket; your two possible acts are accept and don’t accept.
The lottery has 100 tickets, with a prize of $90 for the drawn ticket.
Consequence1: your ticket is drawn
Utility of Consequence1: $90
Probability of Consequence1: 0.01
Consequence2: your ticket is not drawn
Utility of Consequence2: 0
Probability of Consequence2: 0.99
Expected value of accepting the ticket:
Exp(A) = (0.01)($90) + (0.99)(0) = 90 cents
Expected value of not accepting the ticket: 0
It’s a free ride because she offered the ticket for free.
Now suppose she offers to SELL you the ticket for $1, call it act B:
Exp(B) = [(0.01)($90 – $1)] + [(0.99)(-$1)] = -10 cents.
You would be at a disadvantage to buy the ticket from her for $1.
If you pay your aunt 90 cents for the ticket, the expected value would be 0.
Hence 90 cents seems to be a fair price for the ticket.
Exp(C) = [(0.01)($90 – $0.90)] + [(0.99)(-$0.90)] = 0
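If it helps, the same arithmetic can be run as a small Python sketch. The function and variable names are just my own labels, not Hacking’s notation:

```python
# A minimal sketch of the expected-value rule applied to the lottery example.
def expected_value(outcomes):
    """Sum of probability * utility over an act's possible consequences."""
    return sum(p * u for p, u in outcomes)

# Act A: accept the free ticket (100 tickets, $90 prize, so P(win) = 0.01).
act_a = [(0.01, 90.00), (0.99, 0.00)]
# Act B: buy the ticket for $1 (the price is subtracted from every outcome).
act_b = [(0.01, 90.00 - 1.00), (0.99, -1.00)]
# Act C: buy the ticket at its fair price of 90 cents.
act_c = [(0.01, 90.00 - 0.90), (0.99, -0.90)]

print(round(expected_value(act_a), 2))  # 0.9  -> 90 cents
print(round(expected_value(act_b), 2))  # -0.1 -> minus 10 cents
print(round(expected_value(act_c), 2))  # ~0.0 -> the fair price (floating-point noise aside)
```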
Now if we applied this to the benefits and costs of reporting a miracle that really didn’t happen, what kind of expected value would we get?
Let me try a quick and loose example:
Act, reporting a miracle done by someone recently executed by the state: rM
Consequence: cD – lose your life
Consequence: cL – not lose your life
Utility of consequence cD: 0.01 (not much value in losing your life for what you know didn’t happen)
Utility of consequence cL: 0.99
Probability of cD: high, say 0.9
Probability of cL: 1 – 0.9 = 0.1
Exp(rM) = (0.9)(0.01) + (0.1)(0.99) = 0.108
The expected value of reporting a miracle that didn’t really happen would seem to be quite low in this quick and simple example.
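And the same sort of sketch for this quick-and-loose case; the probabilities and utilities here are just my illustrative guesses, not data:

```python
# Expected value of reporting a miracle you know didn't happen (act rM),
# using the hypothetical numbers above (utilities on a 0-1 scale).
p_death, p_survive = 0.9, 0.1    # guessed probabilities of the two consequences
u_death, u_survive = 0.01, 0.99  # guessed utilities of dying vs. surviving

exp_rM = p_death * u_death + p_survive * u_survive
print(round(exp_rM, 3))  # 0.108 -> a low payoff for a false report
```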
I’m just suggesting this as one possible tack to take, using expected values.
The Hacking example they are using has no risk involved in it.
HammsBear
Alan,
1. Why (HT) rather than (HT*)? Sobel just uses a common assumption of apologists: it is possible that there is a testimony for a miracle, and the probability of the miracle on the testimony is greater than 0.5; in Bayesian notation, P(tM) > 0 and P(M/tM) > 0.5. This is shared by (almost) all apologists. Sobel (2004, pp. 332f.) then shows, in his 17-step proof, that the assumption entails that P(M) > P(tM and not-M).
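In case it helps, the core of that entailment can be compressed into a few lines and spot-checked numerically. This is just my rough sketch in Python, not Sobel’s own 17-step proof:

```python
# Rough numerical spot-check of Hume's Theorem (my sketch, not Sobel's proof).
# Core idea: if P(tM) > 0 and P(M/tM) > 0.5, then
#   P(M and tM) = P(M/tM) * P(tM) > 0.5 * P(tM), so
#   P(tM and not-M) = P(tM) - P(M and tM) < 0.5 * P(tM) < P(M and tM) <= P(M),
# hence P(M) > P(tM and not-M).
import random

def random_joint():
    """A random joint distribution over the four combinations of M and tM."""
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    # Order: (M and tM), (M and not-tM), (not-M and tM), (not-M and not-tM).
    return [w / total for w in weights]

for _ in range(100_000):
    p_M_tM, p_M_noT, p_noM_tM, p_noM_noT = random_joint()
    p_tM = p_M_tM + p_noM_tM          # P(tM)
    p_M = p_M_tM + p_M_noT            # P(M)
    if p_tM > 0 and p_M_tM / p_tM > 0.5:   # antecedent: P(tM) > 0 and P(M/tM) > 0.5
        assert p_M > p_noM_tM              # consequent: P(M) > P(tM and not-M)
print("no counterexample found")
```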
2. You say, „Sobel is relying on a simple ratio definition of probability.“ I don’t think so, but this misunderstanding is my fault. I shouldn’t have said that „Sobel gets the value of P(M) as the ratio…“, which is misleading. I should rather have said that „Sobel gets the value of P(M) as equal to the ratio…“ What’s the difference? According to Sobel, in the case of alleged miraculous events, P(M) is equal to some n, and n is equal to a certain frequency-ratio. But this does not entail that P(M) is shorthand for frequency-probability; P(M) can still be shorthand for epistemic probability, even if the epistemic probability value is equal to (or determined by) the frequency-probability value. E.g., if you have been tossing a fair coin for a long time, then the ratio of heads to tails that have already appeared is 1:1 (thus, the frequency-probability of heads is 0.5), and it is also natural to say that the epistemic probability that the next outcome will be heads is 0.5. Which, again, does not entail that in all possible cases frequency values and epistemic values coincide. Cf. R. Swinburne, Epistemic Justification, 2001, Oxford UP.
3. Yes, Sobel works with epistemic probability.
4. What is your definition of epistemic probability, Alan? Is it similar to Q. Smith’s definition?: „Personalist probability (with which I have identified epistemic probability) is a kind of objective, mind-independent probability, since it is defined counterfactually as what a perfectly rational finite mind would believe to a certain degree, if there were such a mind and belief. This does not require the factual existence of such a mind (this is the sense in which it is mind-independent). The truthmakers of the relevant counterfactuals are possible worlds in which the perfectly rational, finite mind and its beliefs exist. This is how I shall understand personalist probability.“ (http://qsmithwmu.com/time_began_with_a_timeless_point.htm) Or is your epistemic probability defined as a degree of belief of an infinite, perfectly rational mind? But what does the word „rational“ mean? One should not define a problematic concept by means of another problematic concept. Or is your epistemic probability, say, a degree of belief of some (possible) group of finite (or human) minds who master inductive logic and whose sole goal is to have many true beliefs and only few false beliefs concerning important questions?
5. Don’t you think there are some substantive problems in the Bayesian approach? I mean: there are general difficulties in assessing the plausibility (or probability, or rationality) of propositions (or beliefs) in the Bayesian manner, which is common in contemporary analytical philosophy. Suppose what I want to know is mainly the probability of some proposition (e.g., that Jesus was raised) on the total relevant evidence. The first problem is the total relevant evidence itself: what is included in it and should be accepted, and what is not included in it and should not be accepted? The second problem is that of stating the probability of a proposition with respect to the total relevant evidence. This task seems to be extraordinarily complex and (almost) infeasible because: (A) the total evidence is so complex; (B) probability is a hydra-headed notion (there are many interpretations of probability, each with its own specific problems); (C) one often does not know how to determine the probability of a given proposition (even if one is able to use vague intervals of probability values).
The Bayesian approach is nice in that it embodies nice epistemological principles (e.g., that one should care about probability in relation to the total relevant evidence) and it formalizes important epistemological notions (e.g., the in/dependence of pieces of evidence). But is Bayesianism usable? (The case of consequentialism is similar. I am not a consequentialist, but I think the principle of expected utility (choose the alternative with the highest sum of the products of the probabilities and values of its consequences) is nice, in the sense that it should be taken into account at least sometimes. But I do not know how to determine the values of the variables involved.)
Hammsbear,
your point is similar to one made by T. McGrew, http://homepages.wmich.edu/~mcgrew/plantinga.pdf, pp. 20-22. Take a look; it is really interesting.
However, a skeptic can still doubt your historical premise that there were those who reported the miracles and who probably lost their lives for their beliefs. Compare the debate between W. L. Craig and Ehrman at http://www.holycross.edu/departments/crec/website/resurrdebate.htm. Craig makes a critique of Ehrman which is similar to Alan’s critique.
As for historical premises, Craig, when arguing for the resurrection of Jesus, assumes:
#1: After his crucifixion Jesus was buried by Joseph of Arimathea in a tomb.
#2: On the Sunday after the crucifixion, Jesus’ tomb was found empty by a group of his women followers.
#3: On different occasions and under various circumstances, different individuals and groups of people experienced appearances of Jesus alive from the dead.
#4: The original disciples suddenly and sincerely came to believe that Jesus was risen from the dead despite their having every predisposition to the contrary; they came to believe so strongly that God had raised Jesus from the dead that they were willing to die for the truth of that belief.
Ehrman doubts all these historical premises. E.g., as for #4, he says: “an earlier point that Bill made was that the disciples were all willing to die for their faith. I didn’t hear one piece of evidence for that. I hear that claim a lot, but having read every Christian source from the first five hundred years of Christianity, I’d like him to tell us what the piece of evidence is that the disciples died for their belief in the resurrection.” Ehrman thinks there is no sufficient evidence for #4; at least he knows of no sufficient evidence for #4.
It seems that every (prospective) serious argument for a miracle from historical premise(s) requires historical research. As McGrew (p. 25) puts it: “the historical argument cannot be evaluated by proxy: it stands or falls not with the clamor of conflicting voices but with the strength of the evidence. There is a curious lack of communication on this issue between the epistemologists and the historians, even the apologists — between those who specialize in the structure of arguments and those with expertise in the evidence itself. Until we come to grips with that evidence in a detailed way we will inevitably undervalue and even fail to understand the long tradition of evidentialism in the philosophy of religion …”
However, Hammsbear and Alan,
is it possible for a professional historian to form a reliable, justified belief about historical premises like those above, given the vast relevant historical literature, the dissent displayed by this literature, and the temporal distance of the alleged miraculous events? And is it possible for a philosopher who is not a full-time historian? (Not a rhetorical question.)
Hi Vlastimil,
Ad. 2 and 3. Even if Sobel doesn’t define P(M) as a frequency ratio, I wouldn’t concede to him that P(M) is equal to the frequency ratio. Since epistemic probability is what’s at issue, P(M) has got to be assessed relative to a set of background assumptions. As far as I can see, there’s no reason to think a priori that P(M) is going to be equivalent to the frequency ratio.
Ad 4. My concept of epistemic probability is essentially that of your last proposal, viz. “a degree of belief of some (possible) group of finite (or human) minds who master inductive logic and whose sole goal is to have many true beliefs and only few false beliefs concerning important questions.”
Ad. 5. I grant that there are serious problems concerning the practical usability of Bayes’ theorem. It’s a wonderful heuristic tool, but more often than not there’s no way to get sufficiently precise numbers to crank out a solid result.
Is it possible for a professional historian to form a reliable, justified belief about historical premises like those above, given the vast relevant historical literature, the dissent displayed by this literature, and the temporal distance of the alleged miraculous events? And is it possible for a philosopher who is not a full-time historian?
I know of no reason why both questions cannot be answered affirmatively, though I would prefer to substitute “reasonable” for “reliable”. The degree to which our beliefs are reliable is to some extent beyond our control, a matter of epistemic luck, so to speak. But I do think that each of us can look at the available evidence and reasonably conclude that a given hypothesis is the best available explanation. In my view, however, this is not a purely intellectual or evidential matter. Volitional factors also come into play. And thus it may be impossible to arrive at an unambiguous, universal consensus that a given explanation is the best.