557 research outputs found

    Characterizing the principle of minimum cross-entropy within a conditional-logical framework

    Abstract: The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool to build up complete probability distributions when only partial knowledge is available. The inputs it may be applied to are a prior distribution P and some new information R, and it yields as a result the one distribution P∗ that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a “best” solution to the problem “How to adjust P to R?” In this paper, we show how probabilistic conditionals allow a new and constructive approach to this important principle. Though popular and widely used for knowledge representation, conditionals quantified by probabilities are not easily dealt with. We develop four principles that describe their handling in a reasonable and consistent way, taking into consideration the conditional-logical as well as the numerical and probabilistic aspects. Finally, the ME-principle turns out to be the only method for adjusting a prior distribution to new conditional information that obeys all these principles. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are revealed clearly.
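For the simplest kind of new information R — fixing the probability of a single event — the ME adjustment has a closed form: it rescales the prior inside and outside the event (Jeffrey conditionalization). A minimal numerical sketch of that special case; the function name and the example numbers are illustrative, not from the paper, and general conditional constraints would require iterative minimization instead:

```python
import numpy as np

def me_adjust(prior, in_A, target):
    """Return the distribution closest to `prior` in cross-entropy (KL
    divergence) among those that give event A probability `target`.
    For this single-event constraint, the ME solution simply rescales
    the prior inside and outside A (Jeffrey conditionalization)."""
    prior = np.asarray(prior, dtype=float)
    in_A = np.asarray(in_A, dtype=bool)
    pA = prior[in_A].sum()
    post = prior.copy()
    post[in_A] *= target / pA               # scale worlds inside A
    post[~in_A] *= (1 - target) / (1 - pA)  # scale worlds outside A
    return post

# Prior over four possible worlds; new information R: P*(A) = 0.7,
# where A consists of the first two worlds.
post = me_adjust([0.1, 0.2, 0.3, 0.4], [True, True, False, False], 0.7)
```

Note that the relative probabilities within A (and within its complement) are preserved, which is one concrete sense in which P∗ stays “closest” to P.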

    Bayesian Argumentation and the Value of Logical Validity

    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that (i) utilizes a new class of Bayesian learning methods that are better suited to modelling dynamic and conditional inferences than standard Bayesian conditionalization, (ii) is able to characterise the special value of logically valid argument schemes in uncertain reasoning contexts, (iii) greatly extends the range of inferences and argumentative phenomena that can be adequately described in a Bayesian framework, and (iv) undermines some influential theoretical motivations for dual function models of human cognition. We conclude that the probabilistic norms given by the Bayesian approach to rationality are not necessarily at odds with the norms given by classical logic. Rather, the Bayesian theory of argumentation can be seen as justifying and enriching the argumentative norms of classical logic.
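One concrete sense in which a logically valid scheme retains value under uncertainty: for modus ponens, probability theory guarantees P(q) ≥ P(p) + P(p → q) − 1 in every distribution, so confident premises force a confident conclusion. A small sketch checking this bound over random distributions (names are illustrative, and the conditional is read materially, which is itself an assumption):

```python
import itertools
import random

def prob(dist, event):
    """Probability of `event`, a predicate on the truth values of (p, q)."""
    return sum(pr for world, pr in dist.items() if event(*world))

rng = random.Random(0)
for _ in range(1000):
    # Random distribution over the four truth assignments to (p, q).
    w = [rng.random() for _ in range(4)]
    z = sum(w)
    dist = {pq: x / z for pq, x in
            zip(itertools.product([True, False], repeat=2), w)}
    p_p = prob(dist, lambda p, q: p)
    p_if = prob(dist, lambda p, q: (not p) or q)  # material p -> q
    p_q = prob(dist, lambda p, q: q)
    # Validity of modus ponens yields this probabilistic guarantee.
    assert p_q >= p_p + p_if - 1 - 1e-12
```

An invalid scheme such as affirming the consequent carries no analogous guarantee: P(p) can be arbitrarily low no matter how high P(q) and P(p → q) are.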

    Editorial

    Editorial

    Editorial

    Editorial

    Editorial

    Cognition and enquiry: The pragmatics of conditional reasoning


    Bayesian inference for the information gain model

    One of the most popular paradigms to use for studying human reasoning involves the Wason card selection task. In this task, the participant is presented with four cards and a conditional rule (e.g., “If there is an A on one side of the card, there is always a 2 on the other side”). Participants are asked which cards should be turned to verify whether or not the rule holds. In this simple task, participants consistently provide answers that are incorrect according to formal logic. To account for these errors, several models have been proposed, one of the most prominent being the information gain model (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). This model is based on the assumption that people independently select cards based on the expected information gain of turning a particular card. In this article, we present two estimation methods to fit the information gain model: a maximum likelihood procedure (programmed in R) and a Bayesian procedure (programmed in WinBUGS). We compare the two procedures and illustrate the flexibility of the Bayesian hierarchical procedure by applying it to data from a meta-analysis of the Wason task (Oaksford & Chater, Psychological Review, 101, 608–631, 1994). We also show that the goodness of fit of the information gain model can be assessed by inspecting the posterior predictives of the model. These Bayesian procedures make it easy to apply the information gain model to empirical data. Supplemental materials may be downloaded along with this article from www.springerlink.com.
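The model's core quantity can be sketched directly: treat the rule as a “dependence” hypothesis competing with an “independence” hypothesis, and score each card by the expected reduction in uncertainty about which hypothesis is true once the card's hidden side is revealed. A simplified sketch (the exact parameterization in Oaksford & Chater differs; the joint distributions and rarity-style marginals below are illustrative assumptions):

```python
import math

def entropy(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_gain(visible, value, hyp_joints, hyp_prior):
    """Expected information gained about the hypotheses from turning the
    card whose `visible` side ('p' or 'q') shows `value` (True/False)."""
    def sel(world, side):
        p, q = world
        return p if side == 'p' else q
    hidden_side = 'q' if visible == 'p' else 'p'
    # P(visible face | hypothesis): marginal of each joint on that side.
    margs = [sum(pr for w, pr in j.items() if sel(w, visible) == value)
             for j in hyp_joints]
    # Posterior over hypotheses after seeing the visible face.
    z = sum(pi * m for pi, m in zip(hyp_prior, margs))
    post_h = [pi * m / z for pi, m in zip(hyp_prior, margs)]
    gain = 0.0
    for hidden in (True, False):
        # P(hidden face | hypothesis, visible face).
        likes = [(sum(pr for w, pr in j.items()
                      if sel(w, visible) == value
                      and sel(w, hidden_side) == hidden) / m
                  if m > 0 else 0.0)
                 for j, m in zip(hyp_joints, margs)]
        p_out = sum(ph * lk for ph, lk in zip(post_h, likes))
        if p_out == 0:
            continue
        post = [ph * lk / p_out for ph, lk in zip(post_h, likes)]
        gain += p_out * (entropy(post_h) - entropy(post))
    return gain

# Rarity-style marginals: P(p) = 0.2, P(q) = 0.3 (illustrative values).
# Dependence hypothesis: q always accompanies p; independence: p, q unrelated.
DEP = {(True, True): 0.2, (True, False): 0.0,
       (False, True): 0.1, (False, False): 0.7}
IND = {(True, True): 0.06, (True, False): 0.14,
       (False, True): 0.24, (False, False): 0.56}

gains = {card: expected_gain(side, val, [DEP, IND], [0.5, 0.5])
         for card, (side, val) in
         {'p': ('p', True), 'not-p': ('p', False),
          'q': ('q', True), 'not-q': ('q', False)}.items()}
```

With these rare-antecedent values, the expected gains come out in the order p > q > not-q > not-p, reproducing the qualitative selection pattern the model was built to explain.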