9 research outputs found

    Preface and introductory article to the republication of P. S. Poreckij's work "Solution of the General Problem of Probability Theory by Means of Mathematical Logic"

    Get PDF
    A preface and introductory article present a republication of an article by Platon Sergeevich Poreckij, a record of his lecture delivered on October 25, 1886. The preface gives a brief historical overview of P. S. Poreckij's work in mathematical logic and its application to other sciences, including probability theory. The introductory article aims to show how the foundations of the logic-and-probabilistic method (LPM) were laid at the end of the 19th century. The essence of the LPM is the valid transition from a logical equation between events to an algebraic equality between their probabilities. The article shows that the further development of the LPM in the 1960s was driven by the practical need to evaluate the reliability of digital circuits and the reliability and safety of structurally complex systems. Scientific debates over the possibility of combining mathematical logic and probability theory did not end in the 19th century; regular seminars and conferences are still devoted to the subject. We discuss the complex mathematical and philosophical question of the nature of two fundamentally different concepts: probabilistic logic (PL) and the logic of probability (LP).
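
    Below is a minimal sketch of the transition the abstract describes: from a logical equation between events to an algebraic equality between their probabilities. The two-component parallel system, the probabilities, and all names are illustrative assumptions of this note, not taken from Poreckij's paper; the events are assumed independent.

        # Sketch of the LPM transition (assumed toy example): enumerate the truth
        # table of a Boolean structure function and sum the probabilities of its
        # satisfying assignments, assuming independent events.
        from itertools import product

        def event_probability(boolean_fn, probs):
            """P(boolean_fn) for independent events with marginal probabilities `probs`."""
            total = 0.0
            for assignment in product([False, True], repeat=len(probs)):
                if boolean_fn(*assignment):
                    weight = 1.0
                    for value, p in zip(assignment, probs):
                        weight *= p if value else (1.0 - p)
                    total += weight
            return total

        # Logical equation: S = A or B (a two-component parallel system).
        # Algebraic equality: P(S) = P(A) + P(B) - P(A)P(B).
        p_a, p_b = 0.9, 0.8
        assert abs(event_probability(lambda a, b: a or b, [p_a, p_b])
                   - (p_a + p_b - p_a * p_b)) < 1e-12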

    How good is an explanation?

    Get PDF

    Learning, conditionals, causation

    Get PDF
    This dissertation is on conditionals and causation. In particular, we (i) propose a method for how an agent learns conditional information, and (ii) analyse causation in terms of a new type of conditional. Our starting point is Ramsey's (1929/1990) test: accept a conditional when you can infer its consequent upon supposing its antecedent. Inspired by this test, Stalnaker (1968) developed a semantics of conditionals. In Ch. 2, we define and apply our new method of learning conditional information. It says, roughly, that you learn conditional information by updating on the corresponding Stalnaker conditional. By generalising Lewis's (1976) updating rule to Jeffrey imaging, our learning method becomes applicable to both certain and uncertain conditional information. The method generates the correct predictions for all of Douven's (2012) benchmark examples and Van Fraassen's (1981) Judy Benjamin Problem. In Ch. 3, we prefix Ramsey's test by suspending judgment on antecedent and consequent. Unlike the Ramsey Test semantics by Stalnaker (1968) and Gärdenfors (1978), our strengthened semantics requires the antecedent to be inferentially relevant for the consequent. We exploit this asymmetric relation of relevance in a semantic analysis of the natural language conjunction 'because'. In Ch. 4, we devise an analysis of actual causation in terms of production, where production is understood along the lines of our strengthened Ramsey Test. Our analysis solves the problems of overdetermination, conjunctive scenarios, early and late preemption, switches, double prevention, and spurious causation -- a set of problems that still challenges counterfactual accounts of actual causation in the tradition of Lewis (1973c). In Ch. 5, we translate our analysis of actual causation into Halpern and Pearl's (2005) framework of causal models. As a result, our analysis is considerably simplified, at the cost of losing its reductiveness. The upshot is twofold: (i) Jeffrey imaging on Stalnaker conditionals emerges as an alternative to Bayesian accounts of learning conditional information; (ii) the analyses of causation in terms of our strengthened Ramsey Test conditional prove to be worthy rivals to contemporary counterfactual accounts of causation.
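
    As a toy illustration of the updating rules the abstract mentions, here is one common formulation of imaging and Jeffrey imaging over a finite set of worlds, with a closeness relation playing the role of Stalnaker's selection function. The worlds, weights, and similarity rankings below are assumptions for illustration, not models from the dissertation; in particular, "Jeffrey imaging" is rendered here as the alpha-weighted mixture of imaging on a proposition and on its complement.

        def image(prior, proposition, closest):
            """Lewis imaging: move each world's mass to its closest proposition-world."""
            posterior = {w: 0.0 for w in prior}
            for w, p in prior.items():
                posterior[closest(w, proposition)] += p
            return posterior

        def jeffrey_image(prior, proposition, alpha, closest):
            """Jeffrey imaging (uncertain input): mix imaging on the proposition and
            on its complement, weighted by the target probability alpha."""
            complement = frozenset(prior) - proposition
            on_a = image(prior, proposition, closest)
            on_not_a = image(prior, complement, closest)
            return {w: alpha * on_a[w] + (1 - alpha) * on_not_a[w] for w in prior}

        # Three worlds with stipulated similarity rankings (assumed, for illustration).
        prior = {1: 0.2, 2: 0.3, 3: 0.5}
        order = {1: [1, 2, 3], 2: [2, 3, 1], 3: [3, 2, 1]}
        closest = lambda w, prop: next(v for v in order[w] if v in prop)

        posterior = jeffrey_image(prior, frozenset({1, 2}), 0.9, closest)
        # posterior == {1: 0.18, 2: 0.72, 3: 0.1}: the proposition now has
        # probability 0.9, and alpha = 1 recovers plain (certain) imaging.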

    Special Issue on Combining Probability and Logic

    No full text
    This volume arose out of an international, interdisciplinary academic network.

    Information, Confirmation, and Conditionals

    No full text
    Loosely speaking, a proposition adds more information to a corpus b the greater the proportion of the possibilities left open by b that it rules out. Plausible qualitative constraints lead to the result that any measure of information-added is a strictly decreasing rescaling of a conditional probability function sending 1 to 0. The two commonest rescalings are −log P and 1−P. In a similar vein, e is favourable evidence for hypothesis h relative to background b if h rules out a smaller proportion of the possibilities left open by b and e jointly than left open by b alone. In terms of the underlying probability measure, this secures the familiar positive-relevance conception of confirmation, and that f is more favourable evidence for h than e iff h rules out a smaller proportion of the possibilities left open by b and f jointly than left open by b and e jointly.

    In these terms, a measure of confirmation should be a function of the information added by h to b∧e and to b, decreasing with the first and increasing with the second. When e = h, the possibilities that drop out as we narrow the focus with e are exactly the possibilities left open by b but excluded by h. Thus the extent to which h confirms h relative to b is a measure of the information h adds to b.

    Given a measure I of information added, we can think of I(a∧c,b) − I(a,b) as a measure of the “deductive gap”, relative to b, between a and a∧c. When I(a,b) = −log P(a|b), I(a∧c,b) − I(a,b) = −log P(c|a∧b), the amount of information the indicative conditional ‘if a then c’ adds to b on Ernest Adams' account of that conditional. When I(a,b) = 1−P(a|b), I(a∧c,b) − I(a,b) = I(a⊃c,b), where a⊃c is the material conditional. What, if anything, can be said in general about “information-theoretic” conditionals obtained from measures of information-added in this way? We find that, granted a couple of provisos, all satisfy modus ponens, and that the conditionals fall victim to Lewis-style triviality results if, and only if, I(a∧¬a,b) = ∞ (as happens with −log P(·|b)).

    The article appears in a Special Issue on Combining Probability and Logic to Solve Philosophical Problems.
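
    The two identities in the penultimate paragraph can be checked numerically. The four-world probability space and its weights below are illustrative assumptions of this note; b is taken to be the whole space, so P(·|b) = P(·).

        from math import log, isclose

        # Four worlds as truth-value pairs (a, c); the weights are assumed.
        P = {(True, True): 0.3, (True, False): 0.2,
             (False, True): 0.4, (False, False): 0.1}

        def prob(pred):
            return sum(p for w, p in P.items() if pred(w))

        p_a  = prob(lambda w: w[0])                # P(a)
        p_ac = prob(lambda w: w[0] and w[1])       # P(a ∧ c)
        p_mc = prob(lambda w: (not w[0]) or w[1])  # P(a ⊃ c), material conditional

        # I(x,b) = -log P(x|b):  I(a∧c,b) - I(a,b) = -log P(c|a∧b)
        assert isclose(-log(p_ac) + log(p_a), -log(p_ac / p_a))

        # I(x,b) = 1 - P(x|b):   I(a∧c,b) - I(a,b) = I(a⊃c,b)
        assert isclose((1 - p_ac) - (1 - p_a), 1 - p_mc)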