    Learning, conditionals, causation

    This dissertation is on conditionals and causation. In particular, we (i) propose a method for how an agent learns conditional information, and (ii) analyse causation in terms of a new type of conditional. Our starting point is Ramsey's (1929/1990) test: accept a conditional when you can infer its consequent upon supposing its antecedent. Inspired by this test, Stalnaker (1968) developed a semantics of conditionals. In Ch. 2, we define and apply our new method of learning conditional information. It says, roughly, that you learn conditional information by updating on the corresponding Stalnaker conditional. By generalising Lewis's (1976) updating rule to Jeffrey imaging, our learning method becomes applicable to both certain and uncertain conditional information. The method generates the correct predictions for all of Douven's (2012) benchmark examples and Van Fraassen's (1981) Judy Benjamin Problem. In Ch. 3, we prefix Ramsey's test with the suspension of judgment about antecedent and consequent. Unlike the Ramsey Test semantics of Stalnaker (1968) and Gärdenfors (1978), our strengthened semantics requires the antecedent to be inferentially relevant to the consequent. We exploit this asymmetric relation of relevance in a semantic analysis of the natural language conjunction 'because'. In Ch. 4, we devise an analysis of actual causation in terms of production, where production is understood along the lines of our strengthened Ramsey Test. Our analysis solves the problems of overdetermination, conjunctive scenarios, early and late preemption, switches, double prevention, and spurious causation -- a set of problems that still challenges counterfactual accounts of actual causation in the tradition of Lewis (1973c). In Ch. 5, we translate our analysis of actual causation into Halpern and Pearl's (2005) framework of causal models. As a result, our analysis is considerably simplified, at the cost of losing its reductiveness. The upshot is twofold: (i) Jeffrey imaging on Stalnaker conditionals emerges as an alternative to Bayesian accounts of learning conditional information; (ii) the analyses of causation in terms of our strengthened Ramsey Test conditional prove to be worthy rivals to contemporary counterfactual accounts of causation.
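    To make the updating idea of Ch. 2 concrete, here is a rough Python sketch (our illustration, not the dissertation's exact formalism): Lewis's (1976) imaging shifts each world's probability to its closest antecedent-world, and a Jeffrey-style generalisation mixes imaging on the antecedent and on its negation so that the antecedent ends up with a target probability. The selection function closest stands in for a Stalnaker-style similarity ordering and is an assumption of the sketch.

    def image(prob, worlds, a, closest):
        # Lewis (1976) imaging on proposition a (a set of worlds): each world
        # transfers its probability to its closest a-world, picked out by the
        # assumed selection function closest(w, a).
        new = {w: 0.0 for w in worlds}
        for w, p in prob.items():
            new[closest(w, a)] += p
        return new

    def jeffrey_image(prob, worlds, a, alpha, closest):
        # Jeffrey-style generalisation: mix imaging on a and on its complement
        # so that the updated probability of a equals the target value alpha,
        # covering uncertain as well as certain conditional information.
        not_a = frozenset(w for w in worlds if w not in a)
        on_a = image(prob, worlds, a, closest)
        on_not_a = image(prob, worlds, not_a, closest)
        return {w: alpha * on_a[w] + (1 - alpha) * on_not_a[w] for w in worlds}

    Learning the conditional 'If A, then C' then amounts, on this picture, to Jeffrey imaging on the set of worlds at which the corresponding Stalnaker conditional holds.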

    Ramsey's conditionals

    In this paper, we propose a unified account of conditionals inspired by Frank Ramsey. Most contemporary philosophers agree that Ramsey's account applies to indicative conditionals only. We observe against this orthodoxy that his account covers subjunctive conditionals as well, including counterfactuals. In light of this observation, we argue that Ramsey's account of conditionals resembles Robert Stalnaker's possible worlds semantics supplemented by a model of belief. The resemblance suggests reinterpreting the notion of conditional degree of belief in order to overcome a tension in Ramsey's account. The result of the reinterpretation is a tenable account of conditionals that covers indicative and subjunctive as well as qualitative and probabilistic conditionals.

    Learning and Pooling, Pooling and Learning

    We explore which types of probabilistic updating commute with convex IP pooling (Stewart and Ojea Quintana 2017). Positive results are stated for Bayesian conditionalization (and a mild generalization of it), imaging, and a certain parameterization of Jeffrey conditioning. This last observation is obtained with the help of a slight generalization of a characterization of (precise) externally Bayesian pooling operators due to Wagner (Log J IGPL 18(2):336--345, 2009). These results strengthen the case that pooling should proceed via imprecise probabilities, since no precise pooling method is as versatile.
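    As a toy illustration of why conditionalization can commute with convex IP pooling (our numerical check, not the paper's proof), the sketch below conditionalizes a credal set represented by two extreme points: updating a convex mixture of the priors lands back in the convex hull of the updated priors, only with a re-weighted mixing coefficient, so 'pool then update' and 'update then pool' determine the same credal set.

    def conditionalize(p, e):
        # Bayesian conditionalization of a distribution p (dict: world -> prob)
        # on evidence e (a set of worlds).
        pe = sum(q for w, q in p.items() if w in e)
        return {w: (q / pe if w in e else 0.0) for w, q in p.items()}

    def mix(p1, p2, lam):
        return {w: lam * p1[w] + (1 - lam) * p2[w] for w in p1}

    p1 = {'w1': 0.5, 'w2': 0.3, 'w3': 0.2}   # two extreme points of a credal set
    p2 = {'w1': 0.1, 'w2': 0.6, 'w3': 0.3}
    e = {'w1', 'w2'}                          # the evidence

    pool_then_update = conditionalize(mix(p1, p2, 0.5), e)
    u1, u2 = conditionalize(p1, e), conditionalize(p2, e)
    # Re-weighted coefficient: lam' = lam*p1(e) / (lam*p1(e) + (1 - lam)*p2(e)),
    # here with p1(e) = 0.8 and p2(e) = 0.7.
    lam_new = 0.5 * 0.8 / (0.5 * 0.8 + 0.5 * 0.7)
    print(pool_then_update)                   # {'w1': 0.4, 'w2': 0.6, 'w3': 0.0}
    print(mix(u1, u2, lam_new))               # the same distribution, up to rounding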

    Probability for epistemic modalities

    This paper develops an information-sensitive theory of the semantics and probability of conditionals and statements involving epistemic modals. The theory validates a number of principles linking probability and modality, including the principle that the probability of a conditional 'If A, then C' equals the probability of C, updated with A. The theory avoids so-called triviality results, which are standardly taken to show that principles of this sort cannot be validated. To achieve this, we deny that rational agents update their credences via conditionalization. We offer a new rule of update, Hyperconditionalization, which agrees with Conditionalization whenever nonmodal statements are at stake but differs for modal and conditional sentences.
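    A minimal numerical illustration of the linking principle (ours, using plain conditionalization, with which Hyperconditionalization agrees when, as here, A and C are nonmodal): the probability of 'If A, then C' is computed as the probability of C after updating with A.

    credence = {                 # a toy credence over four worlds, keyed by (A, C)
        (True, True): 0.3,
        (True, False): 0.1,
        (False, True): 0.2,
        (False, False): 0.4,
    }

    p_a = sum(p for (a, _), p in credence.items() if a)              # P(A) = 0.4
    p_a_and_c = sum(p for (a, c), p in credence.items() if a and c)  # P(A and C) = 0.3
    print(p_a_and_c / p_a)       # probability of "If A, then C" = 0.75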

    Bayesians Still Don't Learn from Conditionals

    One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (Mind 129(514):461-508, 2020) propose a strategy to resolve this question. They claim that their strategy provides a uniquely rational response to any given learning scenario. We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.

    Causal and Evidential Conditionals

    We put forth an account of when to believe causal and evidential conditionals. The basic idea is to embed a causal model in an agent's belief state. The evaluation of conditionals, after all, seems to be relative to beliefs about both particular facts and causal relations. We show that, unlike other attempts using causal models, ours accounts rather well not only for various causal conditionals but also for evidential ones.
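    For a sense of what embedding a causal model amounts to, here is a minimal structural-equations sketch (our illustration of the kind of model involved, not the paper's actual account): a causal conditional is read via an intervention on the model, an evidential one via observation.

    def model(rain, intervene_wet=None):
        # Structural equation of a toy model Rain -> WetGrass: the grass is wet
        # iff it rains, unless we intervene on WetGrass directly.
        wet = rain if intervene_wet is None else intervene_wet
        return {'rain': rain, 'wet': wet}

    # Causal reading of "if the grass were wet, ...": set WetGrass by intervention;
    # Rain keeps its actual value, so wet grass does not bring about rain.
    print(model(rain=False, intervene_wet=True))   # {'rain': False, 'wet': True}

    # Evidential reading: observing wet grass is evidence for rain, since the only
    # worlds of this toy model in which the grass is wet are rain-worlds.
    worlds = [model(rain=r) for r in (True, False)]
    print([w for w in worlds if w['wet']])          # [{'rain': True, 'wet': True}]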

    Possible Worlds Truth Table Task

    In this paper, a novel experimental task is developed for testing the highly influential, but experimentally underexplored, possible worlds account of conditionals (Stalnaker, 1968; Lewis, 1973). In Experiment 1, this new task is used to test both indicative and subjunctive conditionals. For indicative conditionals, five competing truth tables are compared, including the previously untested, multi-dimensional possible worlds semantics of Bradley (2012). In Experiment 2, these results are replicated, and it is shown that they cannot be accounted for by an alternative hypothesis proposed by our reviewers. In Experiment 3, individual variation in truth assignments to indicative conditionals is investigated via Bayesian mixture models that classify participants as following one of several competing models. A novel finding of this study is that the possible worlds semantics of Lewis and Stalnaker is capable of accounting for participants' aggregate truth value assignments in this task. Applied to indicative conditionals, we show across three experiments both that the theory captures participants' truth values at the aggregate level (Experiments 1 and 2) and that participants following it make up the largest subgroup in the analysis of individual variation in our experimental paradigm (Experiment 3).
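    For readers unfamiliar with the account under test, the following schematic sketch (ours, with an assumed similarity ordering over worlds) renders the Stalnaker-Lewis truth conditions that the task probes: 'If A, then C' is true at a world just in case C holds at the closest A-world.

    def true_at(w, antecedent, consequent, worlds, distance):
        # Stalnaker-style evaluation: find the antecedent-world closest to w (by
        # the assumed distance function) and check whether the consequent holds there.
        a_worlds = [v for v in worlds if antecedent(v)]
        if not a_worlds:
            return True                        # vacuously true: no antecedent-worlds
        closest = min(a_worlds, key=lambda v: distance(w, v))
        return consequent(closest)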