135 research outputs found
David Hume's no-miracles argument begets a valid No-Miracles Argument
Hume's essay ‘Of Miracles’ has been a focus of controversy ever since its publication. The challenge to Christian orthodoxy was only too evident, but the balance-of-probabilities criterion advanced by Hume for determining when testimony justifies belief in miracles has also been a subject of contention among philosophers. The temptation for those familiar with Bayesian methodology to show that Hume's criterion determines a corresponding balance of posterior probabilities in favour of miracles is understandable, but I will argue that their attempts fail. However, I show that his criterion generates a valid form of the so-called No-Miracles Argument appealed to by modern realist philosophers, whose own presentation of it, despite their possession of the probabilistic machinery Hume himself lacked, is invalid.
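For orientation, a standard Bayesian gloss on the balance-of-probabilities criterion (a reconstruction, not necessarily the formulation the paper defends or attacks) is that testimony T should make belief in a miracle M the more reasonable option only if the falsehood of the testimony would be more improbable than the miracle itself:

\[ P(M \mid T) > P(\neg M \mid T) \quad\text{iff}\quad P(T \mid M)\,P(M) > P(T \mid \neg M)\,P(\neg M), \]

which follows from Bayes's theorem, since both posteriors share the normalising factor P(T).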
What probability probably isn't
Joyce and others have claimed that degrees of belief are estimates of truth-values and that the probability axioms are conditions of admissibility for these estimates with respect to a scoring rule penalising inaccuracy. In this paper I argue that the claim that the rules of probability are truth-directed in this way depends on an assumption which is both implausible and unsupported by any evidence, strongly suggesting that the probability axioms have nothing intrinsically to do with truth-directedness.
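The scoring rule most often invoked in this literature is the Brier score; purely as an illustration of what ‘penalising inaccuracy’ means here (an assumption about which rule is at issue, not a claim about the paper's argument), the inaccuracy of a credence function b at a world w is

\[ I(b, w) = \sum_{i} \big(b(X_i) - v_w(X_i)\big)^2, \]

where v_w(X_i) \in \{0, 1\} is the truth-value of X_i at w. The admissibility claim is then that any non-probabilistic b is accuracy-dominated, i.e. some probabilistic b^* satisfies I(b^*, w) < I(b, w) at every w.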
Does information inform confirmation?
In a recent survey of the literature on the relation between information and confirmation, Crupi and Tentori (Stud Hist Philos Sci 47:81–90, 2014) claim that the former is a fruitful source of insight into the latter, with two well-known measures of confirmation being definable purely information-theoretically. I argue that of the two explicata of semantic information (due originally to Bar-Hillel and Carnap) which are considered by the authors, the one generating a popular Bayesian confirmation measure is a defective measure of information, while the other, although an admissible measure of information, generates a defective measure of confirmation. Some results are proved about the representation of measures on consequence-classes.
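The two Bar-Hillel–Carnap explicata at issue are, in their usual formulations (quoted here only for orientation),

\[ \mathrm{cont}(h) = 1 - P(h), \qquad \mathrm{inf}(h) = -\log P(h), \]

and one familiar information-theoretic route to a confirmation measure is the difference \mathrm{inf}(h) - \mathrm{inf}(h \mid e) = \log\big[P(h \mid e)/P(h)\big], the log-ratio measure; which explicatum, and which derived measure, is defective is precisely what the paper disputes.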
Repelling a Prussian charge with a solution to a paradox of Dubins
Pruss (Thought 1:81–89, 2012) uses an example of Lester Dubins to argue against the claim that appealing to hyperreal-valued probabilities saves probabilistic regularity from the objection that in continuum outcome-spaces and with standard probability functions all save countably many possibilities must be assigned probability 0. Dubins’s example seems to show that merely finitely additive standard probability functions allow reasoning to a foregone conclusion, and Pruss argues that hyperreal-valued probability functions are vulnerable to the same charge. Pruss’s argument relies on the rule of conditionalisation, however, and I show that in examples like Dubins’s involving nonconglomerable probabilities, conditionalisation is self-defeating.
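As background (a standard definition, not the paper's own example): a probability function P is conglomerable in a partition \{B_i\} when, for every event A,

\[ \inf_i P(A \mid B_i) \;\le\; P(A) \;\le\; \sup_i P(A \mid B_i). \]

Merely finitely additive functions can violate this in countable partitions, so that P(A \mid B_i) exceeds P(A) in every cell; an agent who plans to conditionalise then knows in advance that learning whichever cell obtains will raise her probability of A, which is the ‘foregone conclusion’ at issue.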
Regularity and infinitely tossed coins
Timothy Williamson has claimed to prove that regularity must fail even in a nonstandard setting, with a counterexample based on tossing a fair coin infinitely many times. I argue that Williamson’s argument is mistaken, and that a corrected version shows that it is not regularity which fails in the nonstandard setting but a fundamental property of shifts in Bernoulli processes.
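The shift property in question is presumably (an assumption added for context, since the abstract does not spell it out) the shift invariance of the Bernoulli product measure: for i.i.d. tosses X_1, X_2, \ldots the shifted sequence has the same distribution as the original, i.e.

\[ P\big((X_2, X_3, \ldots) \in S\big) = P\big((X_1, X_2, \ldots) \in S\big) \]

for every measurable set S of outcome sequences.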
How pseudo-hypotheses defeat a non-Bayesian theory of evidence: reply to Bandyopadhyay, Taper, and Brittan
Bandyopadhyay, Taper, and Brittan (BTB) advance a measure of evidential support that first appeared in the statistical and philosophical literature four decades ago and has been extensively discussed since. I have argued elsewhere, however, that it is vulnerable to a simple counterexample. BTB claim that the counterexample is flawed because it conflates evidence with confirmation. In this reply, I argue that the counterexample stands, and is fatal to their theory.
Timothy Williamson’s coin-flipping argument: refuted prior to publication
In a well-known paper, Timothy Williamson (Analysis 67:173–180, 2007) claimed to prove with a coin-flipping example that infinitesimal-valued probabilities cannot save the principle of Regularity, because on pain of inconsistency the event ‘all tosses land heads’ must be assigned probability 0, whether the probability function is hyperreal-valued or not. A premise of Williamson’s argument is that two infinitary events in that example must be assigned the same probability because they are isomorphic. It was argued by Howson (Eur J Philos Sci 7:97–100, 2017) that the claim of isomorphism fails, but a more radical objection to Williamson’s argument is that it had been, in effect, refuted long before it was published.
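The usual reconstruction of Williamson's derivation, given here only for orientation: write H_{1\ldots} for ‘every toss from the first onwards lands heads’ and H_{2\ldots} for ‘every toss from the second onwards lands heads’. Independence and fairness give

\[ P(H_{1\ldots}) = P(H_1)\,P(H_{2\ldots}) = \tfrac{1}{2}\,P(H_{2\ldots}), \]

so if the isomorphism premise forces P(H_{1\ldots}) = P(H_{2\ldots}) = p, then p = \tfrac{1}{2}p and hence p = 0, hyperreal values notwithstanding; it is exactly that identification which the objections cited here reject.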
Putting on the Garber style? Better not
This article argues that not only are there serious internal difficulties with both Garber’s and later ‘Garber-style’ solutions of the old-evidence problem, including a recent proposal of Hartmann and Fitelson, but that Garber-style approaches in general cannot solve the problem. It also follows the earlier lead of Rosenkrantz in pointing out that, despite the appearance to the contrary which inspired Garber’s nonclassical development of the Bayesian theory, there is a straightforward, classically Bayesian, solution.
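The difficulty Garber-style approaches target is the familiar one, restated here only for context: if evidence e is already known, then P(e) = 1, so P(e \mid h) = 1 for any h with P(h) > 0, and Bayes's theorem gives

\[ P(h \mid e) = \frac{P(e \mid h)\,P(h)}{P(e)} = P(h), \]

so old evidence apparently cannot raise the probability of any hypothesis.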
Fitting your theory to the facts: probably not such a bad thing after all
1 online resource (PDF, pages 224–244).
- …