
    Causal Factors, Causal Inference, Causal Explanation

    There are two concepts of cause: property causation and token causation. The principle I want to discuss describes an epistemological connection between the two concepts, which I call the Connecting Principle. The rough idea is that if a token event of type C is followed by a token event of type E, then the support for the hypothesis that the first event token caused the second increases as the strength of the property-causal relation of C to E does. I demonstrate the principle, illustrate its application to phylogenies, infections, and rumours, and discuss its consequences for the conceptual distinctness of causal processes from the events they connect. Although I am by no means confident that the Connecting Principle is ultimately correct, it seems to be a useful point of departure into an important aspect of the epistemology of causality.

    Inference to the Best Explanation and the Screening-Off Challenge

    We argue in Roche and Sober (2013) that explanatoriness is evidentially irrelevant in that Pr(H | O&EXPL) = Pr(H | O), where H is a hypothesis, O is an observation, and EXPL is the proposition that if H and O were true, then H would explain O. This is a “screening-off” thesis. Here we clarify that thesis, reply to criticisms advanced by Lange (2017), consider alternative formulations of Inference to the Best Explanation, discuss a strengthened screening-off thesis, and consider how it bears on the claim that unification is evidentially relevant.
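
    The screening-off equation can be made concrete with a toy calculation. The Python sketch below is not from the paper; the joint distribution over (H, O, EXPL) and all of its numbers are invented for illustration. It builds a distribution in which EXPL is probabilistically independent of H and O, and then verifies numerically that Pr(H | O&EXPL) = Pr(H | O), i.e., that conditioning on EXPL adds nothing once O is given.

        from itertools import product

        # Toy joint distribution over (H, O, E); all numbers are invented.
        # H: the hypothesis is true; O: the observation is true;
        # E (short for EXPL): "if H and O were true, H would explain O" is true.
        p_h, p_e = 0.3, 0.6                     # marginals; E independent of H and O
        p_o_given_h = {True: 0.9, False: 0.2}   # O depends on H only
        joint = {}
        for h, o, e in product([True, False], repeat=3):
            p_o = p_o_given_h[h] if o else 1 - p_o_given_h[h]
            joint[(h, o, e)] = (p_h if h else 1 - p_h) * p_o * (p_e if e else 1 - p_e)

        def pr(event):
            # Probability of the event picked out by `event` over (h, o, e) triples.
            return sum(p for (h, o, e), p in joint.items() if event(h, o, e))

        pr_h_o = pr(lambda h, o, e: h and o) / pr(lambda h, o, e: o)
        pr_h_oe = pr(lambda h, o, e: h and o and e) / pr(lambda h, o, e: o and e)
        print(f"Pr(H | O)     = {pr_h_o:.4f}")
        print(f"Pr(H | O & E) = {pr_h_oe:.4f}")  # identical: E is screened off by O

    Whether real cases have this independence structure is what the exchange with Lange (2017) turns on; the snippet only shows what the thesis asserts, not that it holds.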

    Is Explanatoriness a Guide to Confirmation? A Reply to Climenhaga

    We argued that explanatoriness is evidentially irrelevant in the following sense: Let H be a hypothesis, O an observation, and E the proposition that H would explain O if H and O were true. Then our claim is that Pr(H | O&E) = Pr(H | O). We defended this screening-off thesis (SOT) by discussing an example concerning smoking and cancer. Climenhaga argues that SOT is mistaken because it delivers the wrong verdict about a slightly different smoking-and-cancer case. He also considers a variant of SOT, called “SOT*”, and contends that it too gives the wrong result. We here reply to Climenhaga’s arguments and suggest that SOT provides a criticism of the widely held theory of inference called “inference to the best explanation”.

    Manifold Approximation by Moving Least-Squares Projection (MMLS)

    In order to avoid the curse of dimensionality frequently encountered in Big Data analysis, the field of linear and nonlinear dimension reduction has developed rapidly in recent years. These techniques (sometimes referred to as manifold learning) assume that the scattered input data lie on a lower-dimensional manifold, so that the high-dimensionality problem can be overcome by learning the lower-dimensional behavior. However, in real-life applications, data are often very noisy. In this work, we propose a method to approximate M, a d-dimensional C^{m+1} smooth submanifold of R^n (d ≪ n), based upon noisy scattered data points (i.e., a data cloud). We assume that the data points are located "near" the lower-dimensional manifold and suggest a non-linear moving least-squares projection onto an approximating d-dimensional manifold. Under some mild assumptions, the resulting approximant is shown to be infinitely smooth and of high approximation order (i.e., O(h^{m+1}), where h is the fill distance and m is the degree of the local polynomial approximation). The method presented here assumes no analytic knowledge of the approximated manifold, and the approximation algorithm is linear in the large dimension n. Furthermore, the approximating manifold can serve as a framework for performing operations directly on the high-dimensional data in a computationally efficient manner. In this way, the preparatory step of dimension reduction, which induces distortions in the data, can be avoided altogether.
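
    To make the projection step concrete, here is a minimal Python/NumPy sketch of a degree-one variant. It illustrates the moving least-squares idea rather than the authors’ MMLS algorithm: it uses Gaussian weights, a single weighted-PCA pass for the local coordinate system (the paper iterates this step), and a weighted linear fit in place of a degree-m polynomial; the function name and the bandwidth parameter h are our own choices.

        import numpy as np

        def mls_project(point, data, d, h):
            # Project `point` onto a local d-dimensional least-squares
            # approximation of the manifold underlying the noisy `data` cloud.
            # Gaussian weights localize the fit; h plays the role of the
            # fill distance.
            w = np.exp(-np.sum((data - point) ** 2, axis=1) / h**2)

            # Step 1: local coordinate system via weighted PCA about the
            # weighted mean q; the top-d eigenvectors estimate the tangent.
            q = (w[:, None] * data).sum(axis=0) / w.sum()
            centered = data - q
            cov = (w[:, None] * centered).T @ centered
            _, eigvecs = np.linalg.eigh(cov)
            basis = eigvecs[:, -d:]

            # Step 2: weighted linear fit of the ambient coordinates as a
            # function of the local d-dimensional coordinates, evaluated
            # at the local coordinates of `point`.
            coords = centered @ basis
            X = np.hstack([np.ones((len(data), 1)), coords])
            sw = np.sqrt(w)[:, None]
            coeff, *_ = np.linalg.lstsq(sw * X, sw * data, rcond=None)
            x0 = np.concatenate([[1.0], (point - q) @ basis])
            return x0 @ coeff

        # Usage: noisy samples near a unit circle in R^3 (d = 1, n = 3).
        rng = np.random.default_rng(0)
        t = rng.uniform(0, 2 * np.pi, 500)
        pts = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
        pts += 0.02 * rng.standard_normal(pts.shape)
        print(mls_project(np.array([1.05, 0.0, 0.03]), pts, d=1, h=0.3))

    The higher-degree local polynomials of the actual method are what deliver the O(h^{m+1}) order quoted above; the linear fit here corresponds to the m = 1 case.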

    Explanation = Unification? A New Criticism of Friedman’s Theory and a Reply to an Old One

    According to Michael Friedman’s theory of explanation, a law X explains laws Y1, Y2, …, Yn precisely when X unifies the Y’s, where unification is understood in terms of reducing the number of independently acceptable laws. Philip Kitcher criticized Friedman’s theory but did not analyze the concept of independent acceptability. Here we show that Kitcher’s objection can be met by modifying an element in Friedman’s account. In addition, we argue that there are serious objections to the use that Friedman makes of the concept of independent acceptability.

    Fodor’s Bubbe Meise Against Darwinism


    Hypotheses that attribute false beliefs: A two‐part epistemology

    Is there some general reason to expect organisms that have beliefs to have false beliefs? And after you observe that an organism occasionally occupies a given neural state that you think encodes a perceptual belief, how do you evaluate hypotheses about the semantic content of that state, where some of those hypotheses attribute beliefs that are sometimes false while others attribute beliefs that are always true? To address the first of these questions, we discuss evolution by natural selection and show how organisms that are risk-prone in the beliefs they form can be fitter than organisms that are risk-free. To address the second question, we discuss a problem that is widely recognized in statistics – the problem of over-fitting – and one influential device for addressing that problem, the Akaike Information Criterion (AIC). We then use AIC to solve epistemological versions of the disjunction and distality problems, two key problems concerning what it is for a belief state to have one semantic content rather than another.
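
    For readers unfamiliar with AIC, a toy computation illustrates the over-fitting point the abstract relies on. The data and models below are invented: a cubic signal plus Gaussian noise, fit by polynomials of increasing degree. AIC = 2k - 2 ln(L), where L is the maximized likelihood and k the number of adjustable parameters; it rewards goodness of fit but charges 2 per parameter, so the score tends to bottom out near the true complexity rather than at the wiggliest model.

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented data: a cubic signal plus Gaussian noise.
        n = 60
        x = np.linspace(-1, 1, n)
        y = 1.0 - 2.0 * x + 0.5 * x**3 + 0.3 * rng.standard_normal(n)

        def aic_poly(deg):
            # AIC = 2k - 2 ln(L) for a degree-`deg` polynomial fit with
            # Gaussian errors; k = (deg + 1) coefficients + 1 noise variance.
            coeffs = np.polyfit(x, y, deg)
            rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
            k = deg + 2
            log_like = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)  # Gaussian MLE
            return 2 * k - 2 * log_like

        for deg in range(1, 10):
            print(f"degree {deg}: AIC = {aic_poly(deg):7.2f}")
        # Residual error always shrinks as the degree grows, but AIC's
        # penalty keeps the cubic competitive with the over-fitted models.

    How the authors get from this penalty for complexity to the disjunction and distality problems is the substance of the paper; the snippet only illustrates the statistical tool.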