
    Erratum to "Confirmation as partial entailment" [Journal of Applied Logic 11 (2013) 364–372]

    We provide a correction to the proof of the main result in Crupi and Tentori (2013).

    Trust and attitude in consumer food choices under risk

    In this paper, attitude and trust are studied in the context of a food scare (dioxin) with the aim of identifying the components of attitude and trust that significantly affect purchase decisions. A revised version of the model by Mayer et al. (1995) was tested for two types of food: salmon and chicken. The final model for salmon shows that trust is significantly determined by perceived competence, perceived shared values, truthfulness of information and the experiential attitude (the feeling that consuming salmon is positive), but trust has no impact on behavioural intentions. Consumer preferences seem to be determined by a positive experiential attitude and the perception that breeders, sellers and institutions have values similar to those of the consumer. The model for chicken gave very similar results.
    Keywords: trust, trust antecedents, attitude, food scare, purchase intention, Consumer/Household Economics, Food Consumption/Nutrition/Food Safety, Risk and Uncertainty

    Generalized information theory meets human cognition: Introducing a unified framework to model uncertainty and information search

    Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people’s goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics (Hartley, Quadratic, Tsallis, Rényi, and more) are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
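
    As a rough illustration of the unified formalism mentioned above, the sketch below uses one common parameterization of the Sharma-Mittal family, with an order parameter r and a degree parameter t, and shows how Shannon, Rényi, and Tsallis/quadratic entropies emerge as special or limiting cases. The function name and the numerical example are illustrative assumptions, not taken from the paper, whose own notation may differ.

        import numpy as np

        def sharma_mittal_entropy(p, order, degree):
            # Sharma-Mittal entropy of a probability vector (one common parameterization):
            #   H_{r,t}(p) = [ (sum_i p_i^r)^((1-t)/(1-r)) - 1 ] / (1 - t)
            # with order r and degree t; familiar entropies arise as limiting cases.
            p = np.asarray(p, dtype=float)
            p = p[p > 0]                                   # drop zero-probability outcomes
            r, t = order, degree
            shannon = -np.sum(p * np.log(p))               # Shannon entropy, in nats
            if np.isclose(r, 1.0) and np.isclose(t, 1.0):
                return shannon                             # r, t -> 1: Shannon
            if np.isclose(t, 1.0):
                return np.log(np.sum(p ** r)) / (1.0 - r)  # t -> 1: Renyi of order r
            if np.isclose(r, 1.0):
                return (np.exp((1.0 - t) * shannon) - 1.0) / (1.0 - t)  # r -> 1 limit
            return (np.sum(p ** r) ** ((1.0 - t) / (1.0 - r)) - 1.0) / (1.0 - t)

        p = [0.5, 0.25, 0.25]
        print(sharma_mittal_entropy(p, 1, 1))  # Shannon
        print(sharma_mittal_entropy(p, 2, 1))  # Renyi of order 2
        print(sharma_mittal_entropy(p, 2, 2))  # Tsallis/quadratic: 1 - sum p_i^2 = 0.625

    In the same parameterization, taking the order toward 0 with the degree toward 1 recovers the Hartley entropy (the log of the number of outcomes with non-zero probability).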

    New Axioms for Probability and Likelihood Ratio Measures

    Probability ratio and likelihood ratio measures of inductive support and related notions have appeared as theoretical tools for probabilistic approaches in the philosophy of science, the psychology of reasoning, and artificial intelligence. In an effort at conceptual clarification, several authors have pursued axiomatic foundations for these two families of measures. Such results have been criticized, however, as relying on unduly demanding or poorly motivated mathematical assumptions. We provide two novel theorems showing that probability ratio and likelihood ratio measures can be axiomatized in a way that overcomes these difficulties.
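
    For orientation, the prototypical members of the two families are standardly written (up to ordinal equivalence, e.g. up to taking logarithms) as follows; the notation is a common one and is not taken from the paper itself:

        r(H, E) = \frac{P(H \mid E)}{P(H)}, \qquad l(H, E) = \frac{P(E \mid H)}{P(E \mid \lnot H)}

    Here H is a hypothesis and E a piece of evidence; assuming the relevant probabilities are non-extreme, both measures exceed 1 exactly when E confirms H, that is, raises its probability.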

    What can the conjunction fallacy tell us about human reasoning?

    In this chapter, I will briefly summarize and discuss the main results obtained from more than three decades of studies on the conjunction fallacy (hereafter CF) and will argue that this striking and widely debated reasoning error is a robust phenomenon that can systematically affect laypeople’s as well as experts’ probabilistic inferences, with potentially relevant real-life consequences. I will then introduce what is, in my view, the best explanation for the CF and indicate how it allows the reconciliation of some classic probabilistic reasoning errors with the outstanding reasoning performances that humans have been shown to be capable of. Finally, I will tackle the open issue of the greater accuracy and reliability of evidential impact assessments over those of posterior probability and outline how further research on this topic might also contribute to the development of effective human-like computing.
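
    For context, the fallacy consists in judging a conjunction more probable than one of its own conjuncts, which the probability calculus rules out; a minimal worked illustration (with made-up numbers, not drawn from the chapter) is:

        P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \quad \text{e.g. } P(A) = 0.05,\ P(B \mid A) = 0.2 \ \Rightarrow\ P(A \wedge B) = 0.01 \le 0.05

    In the classic Linda problem, many participants nonetheless rank "bank teller and active feminist" as more probable than "bank teller" alone.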