
    Bayesianische Bestätigung des Irrationalen? Zum Problem der genuinen Bestätigung

    The conventional Bayesian concept of confirmation has the problem that, according to it, pseudo-scientific explanatory hypotheses are confirmed as well. An example is rationalized creationism, according to which the actual world is the way it is because God created it that way. Such pseudo-explanations are characterized by the fact that they can explain arbitrary experiences ex post, that is, after the fact. Intuitively, they are not capable of confirmation in the first place. Alternative concepts of confirmation that attempt to capture this intuition are the novel prediction (NP) and the use novelty (UN) criteria of confirmation. There are serious objections to both criteria. In this talk I develop the criterion of genuine confirmation, which solves the problem of pseudo-explanations in purely probabilistic terms and has further advantages over the concepts of confirmation proposed so far.
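    The underlying Bayesian point can be stated in one line (a minimal sketch of the standard incremental notion of confirmation, not of the genuine-confirmation criterion proposed in the talk): if a pseudo-explanatory hypothesis H is flexible enough that whatever evidence E is actually observed comes out as certain given H, i.e. P(E | H) = 1, then Bayes' theorem gives

    \[ P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)} \;=\; \frac{P(H)}{P(E)} \;>\; P(H) \qquad \text{whenever } 0 < P(E) < 1, \]

    so H counts as incrementally confirmed by any evidence that was not already certain, which is precisely the intuitively unwanted result.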

    Interactive Causes: Revising the Markov Condition

    This paper suggests a revision of the theory of causal nets (TCN). In Section 1 we introduce an axiomatization of TCN based on a realistic understanding. It is shown that the causal Markov condition entails three independent principles. In Section 2 we analyze indeterministic decay as the major counterexample to one of these principles: screening-off by common causes (SCC). We call (SCC)-violating common causes interactive causes. In Section 3 we develop a revised version of TCN, called TCN*, which accounts for interactive causes. It is shown that there are interactive causal models that admit of no faithful non-interactive reconstruction.
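    The tension between indeterministic decay and (SCC) can be made vivid with a stylized model (a minimal simulation sketch; the variable names, the 50% base rate and the decay probability q are invented for illustration and are not taken from the paper): the unstable state C either decays, producing both products A and B in one indeterministic event, or does not decay, producing neither. Then P(A, B | C) = q while P(A | C) · P(B | C) = q², so screening off by the common cause fails for 0 < q < 1.

```python
import random

random.seed(7)
N = 200_000
q = 0.5  # decay probability given the unstable state (invented parameter)

# Stylized indeterministic decay: the common cause C (the unstable state)
# either decays, producing BOTH products A and B, or produces neither.
rows = []
for _ in range(N):
    c = random.random() < 0.5                 # unstable state present or not
    decayed = c and (random.random() < q)     # a single indeterministic event
    rows.append((c, decayed, decayed))        # A and B occur together

def p(event, given=lambda r: True):
    sel = [r for r in rows if given(r)]
    return sum(1 for r in sel if event(r)) / len(sel)

given_c = lambda r: r[0]
# (SCC) would require P(A, B | C) = P(A | C) * P(B | C); the two sides differ:
print(p(lambda r: r[1] and r[2], given_c))                       # ~ q
print(p(lambda r: r[1], given_c) * p(lambda r: r[2], given_c))   # ~ q**2
```

    With q = 0.5 the two printed values come out near 0.5 and 0.25 respectively.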

    Unification and explanation: explanation as a prototype concept. A reply to Weber and van Dyck, Gijsbers, and de Regt

    This article analyzes unification as an explanatory virtue. In the first part I give a brief exposition of the unification approach of Schurz and Lambert (1994) and Schurz (1999). I illustrate the advantages of this approach over earlier ones, such as Friedman (1974) and Kitcher (1981). In the second part (sect. 3) I discuss several comments on and objections to the Schurz-Lambert approach raised by Weber and van Dyck (2002), Gijsbers (2007) and de Regt (2005). In the third and final part (sect. 4) I argue that explanation should be understood as a prototype concept which contains nomic expectability, causality and unification as prototypical virtues of explanations, although none of these virtues provides a "defining", sufficient and necessary, condition of explanation.

    Tacking by Conjunction, Genuine Confirmation and Bayesian Convergence

    Tacking by conjunction is a well-known problem for Bayesian confirmation theory. In the first section of the paper we point out disadvantages of orthodox Bayesian solution proposals to this problem and develop an alternative solution based on a strengthened concept of probabilistic confirmation, called genuine confirmation. In the second section we illustrate the application of the concept of genuine confirmation to Goodman-type counter-inductive generalizations and to post-facto speculations. In the final section we demonstrate that genuine confirmation is a necessary condition for Bayesian convergence to certainty based on the accumulation of conditionally independent pieces of evidence.
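    For readers unfamiliar with the problem, the tacking argument itself is a one-line Bayesian calculation (a minimal sketch of the standard formulation, not of the genuine-confirmation proposal developed in the paper): suppose E confirms H, i.e. P(E | H) > P(E), and let X be an arbitrary "tacked-on" hypothesis that is irrelevant to E given H, i.e. P(E | H ∧ X) = P(E | H). Then

    \[ P(H \wedge X \mid E) \;=\; \frac{P(E \mid H \wedge X)\, P(H \wedge X)}{P(E)} \;=\; \frac{P(E \mid H)}{P(E)}\, P(H \wedge X) \;>\; P(H \wedge X), \]

    so the conjunction H ∧ X is confirmed as well, however irrelevant X may be.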

    No Free Lunch Theorem, Inductive Skepticism, and the Optimality of Meta-Induction

    The no free lunch theorem (Wolpert 1996) is a radicalized version of Hume's induction skepticism. It asserts that, relative to a uniform probability distribution over all possible worlds, all computable prediction algorithms, whether 'clever' inductive methods or 'stupid' guessing methods, have the same expected predictive success. This theorem seems to be in conflict with results about meta-induction (Schurz 2008). According to these results, certain meta-inductive prediction strategies may dominate other (non-meta-inductive) methods in their predictive success (in the long run). In this paper this conflict is analyzed and dissolved by means of probabilistic analysis and computer simulation.
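    The flavor of the meta-induction results under discussion can be conveyed by a small simulation (a rough sketch, not Schurz's exact framework: the object-level methods, the linear scoring rule and the attractivity-style weighting below are simplified stand-ins chosen for illustration): a weighted meta-inductivist shifts weight toward whichever accessible method has been more successful so far, and its long-run success rate approaches that of the best method.

```python
import random

random.seed(1)

# Object-level prediction methods for a binary (0/1) sequence.
def always_zero(history): return 0.0
def always_one(history):  return 1.0
def follow_last(history): return float(history[-1]) if history else 0.5

methods = [always_zero, always_one, follow_last]

def natural_score(pred, outcome):
    # Linear score in [0, 1]: 1 minus the absolute prediction error.
    return 1.0 - abs(pred - outcome)

def simulate(env, rounds=5000):
    history, meta_sum = [], 0.0
    sums = [0.0] * len(methods)            # cumulative success of each method
    for t in range(1, rounds + 1):
        preds = [m(history) for m in methods]
        meta_rate = meta_sum / (t - 1) if t > 1 else 0.0
        rates = [s / (t - 1) if t > 1 else 0.0 for s in sums]
        # "Attractivity"-style weights: how far each method's success rate
        # exceeds the meta-predictor's own success rate so far (floored at 0).
        attr = [max(r - meta_rate, 0.0) for r in rates]
        if sum(attr) == 0.0:
            attr = [1.0] * len(methods)    # fall back to uniform weights
        meta_pred = sum(a * p for a, p in zip(attr, preds)) / sum(attr)
        outcome = env(history)
        for i, p in enumerate(preds):
            sums[i] += natural_score(p, outcome)
        meta_sum += natural_score(meta_pred, outcome)
        history.append(outcome)
    return {"methods": [s / rounds for s in sums], "meta": meta_sum / rounds}

# In a biased-coin world the meta-inductivist's long-run success approaches
# that of the best object-level method (here: always_one).
print(simulate(lambda h: 1 if random.random() < 0.8 else 0))
```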

    Ceteris Paribus Laws

    Laws of nature take center stage in philosophy of science. Laws are usually believed to stand in a tight conceptual relation to many important key concepts such as causation, explanation, confirmation, determinism, counterfactuals etc. Traditionally, philosophers of science have focused on physical laws, which were taken to be at least true, universal statements that support counterfactual claims. But, although this claim about laws might be true with respect to physics, laws in the special sciences (such as biology, psychology, economics etc.) appear to have, maybe not surprisingly, different features than the laws of physics. Special science laws (for instance, the economic law "Under the condition of perfect competition, an increase of demand of a commodity leads to an increase of price, given that the quantity of the supplied commodity remains constant" and, in biology, Mendel's Laws) are usually taken to "have exceptions", to be "non-universal" or "to be ceteris paribus laws". How and whether the laws of physics and the laws of the special sciences differ is one of the crucial questions motivating the debate on ceteris paribus laws. Another major, controversial question concerns the determination of the precise meaning of "ceteris paribus". Philosophers have attempted to explicate the meaning of ceteris paribus clauses in different ways. The question of meaning is connected to the problem of empirical content, i.e., the question whether ceteris paribus laws have non-trivial and empirically testable content. Since many philosophers have argued that ceteris paribus laws lack empirically testable content, this problem constitutes a major challenge to a theory of ceteris paribus laws.

    Causality and unification: how causality unifies statistical regularities

    Two key ideas of scientific explanation - explanations as causal information and explanation as unification - have frequently been set into mutual opposition. This paper proposes a "dialectical solution" to this conflict by arguing that causal explanations are preferable to non-causal explanations because they lead to a higher degree of unification at the level of the explanation of statistical regularities. The core axioms of the theory of causal nets (TC) are justified because they give the best, if not the only, unifying explanation of two statistical phenomena: screening off and linking up. Alternative explanation attempts are discussed and it is shown why they do not work. It is demonstrated that it is not the core of TC but extended versions of TC that have empirical content, by means of which they can generate independently testable predictions.
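    As a toy illustration of the kind of statistical patterns at issue (a minimal sketch with invented parameters; it illustrates screening off and the way a common cause produces a correlation between its effects, and is not meant to reproduce the paper's formal treatment of linking up): in a common-cause structure C -> A, C -> B the two effects are unconditionally correlated, and the correlation vanishes once we condition on C.

```python
import random

random.seed(42)
N = 200_000

# Common-cause structure C -> A, C -> B with invented parameters.
samples = []
for _ in range(N):
    c = random.random() < 0.5
    a = random.random() < (0.9 if c else 0.1)
    b = random.random() < (0.8 if c else 0.2)
    samples.append((c, a, b))

def p(event, given=lambda s: True):
    sel = [s for s in samples if given(s)]
    return sum(1 for s in sel if event(s)) / len(sel)

# Unconditionally the two effects are correlated (the dependence is produced
# by the common cause): P(A, B) differs from P(A) * P(B).
print(p(lambda s: s[1] and s[2]), p(lambda s: s[1]) * p(lambda s: s[2]))

# Conditional on C the dependence disappears (screening off).
given_c = lambda s: s[0]
print(p(lambda s: s[1] and s[2], given_c),
      p(lambda s: s[1], given_c) * p(lambda s: s[2], given_c))
```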

    Editors' Introduction
