
    A Semantic Information Formula Compatible with Shannon and Popper's Theories

    The semantic information conveyed by everyday language has been studied for many years, yet we still lack a practical formula to measure the information of a simple sentence or prediction, such as “There will be heavy rain tomorrow”. For practical purposes, this paper introduces a new formula, the Semantic Information Formula (SIF), which is based on L. A. Zadeh’s fuzzy set theory and P. Z. Wang’s random set falling shadow theory, and which carries forward the ideas of C. E. Shannon and K. Popper. The fuzzy set’s probability defined by Zadeh is treated as the logical probability sought by Popper, and the membership grade is treated as the truth value of a proposition and also as the posterior logical probability. The classical relative information formula, Information = log(Posterior probability / Prior probability), is revised into the SIF by replacing the posterior probability with the membership grade and the prior probability with the fuzzy set’s probability. The SIF can be read as “Information = Testing severity – Relative square deviation” and hence can serve as Popper’s information criterion for testing scientific theories or propositions. The information measure defined by the SIF also has the meaning of saved codeword length, as the classical information measure does. The paper further introduces the set-Bayes’ formula, which establishes the relationship between statistical probability and logical probability, derives the Fuzzy Information Criterion (FIC) for the optimization of the semantic channel, and discusses applications of the SIF and FIC in areas such as linguistic communication, prediction, estimation, testing, GPS, translation, and fuzzy reasoning. In particular, through a detailed example of reasoning, it is proved that the semantic channel can be improved with appropriate fuzziness so that the average semantic information increases toward its upper limit, the Shannon mutual information.
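    As a sketch of the revision described above (the notation here is assumed for illustration and is not quoted from the paper), the classical relative information measure and its SIF counterpart can be written as
    \[
      I(x; y_j) = \log \frac{P(y_j \mid x)}{P(y_j)}
      \quad\longrightarrow\quad
      I(x; \theta_j) = \log \frac{T(\theta_j \mid x)}{T(\theta_j)},
    \]
    where \(T(\theta_j \mid x)\) stands for the membership grade, read as the truth value of the proposition given the instance \(x\), and \(T(\theta_j)\) stands for the fuzzy set's probability, playing the role of Popper's logical probability.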

    Reasoning about Criminal Evidence: Revealing Probabilistic Reasoning Behind Logical Conclusions

    There are two competing theoretical frameworks with which cognitive science examines how people reason, broadly categorized as logic and probability. This paper reports two applied experiments that test which framework better explains how people reason about evidence in criminal cases. Logical frameworks predict that people derive conclusions from the presented evidence to endorse an absolute level of certainty such as ‘guilty’ or ‘not guilty’ (e.g., Johnson-Laird, 1999). Probabilistic frameworks, in contrast, predict that people derive conclusions from the presented evidence by using knowledge of prior instances to endorse a conclusion of guilt that varies in certainty (e.g., Tenenbaum, Griffiths, & Kemp, 2006). Experiment 1 showed that reasoning about evidence of prior instances, such as disclosed prior convictions, affected participants’ underlying ratings of guilt: guilt ratings increased in certainty with the number of disclosed prior convictions. Experiment 2 showed that reasoning about evidence of prior convictions together with some forensic evidence tended to lead participants to endorse biased ‘guilty’ verdicts even though, rationally, the evidence does not prove guilt. Both results are predicted by probabilistic frameworks. The paper considers the implications of these findings for logical and probabilistic frameworks of reasoning in the real world.
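    To make the probabilistic prediction concrete (a generic Bayesian formulation assuming conditionally independent evidence items; it is not the specific model tested in the paper), a graded guilt judgement can be written as a posterior that is updated by each piece of evidence:
    \[
      P(\text{guilty} \mid E_1, \dots, E_n) \;\propto\; P(\text{guilty}) \prod_{i=1}^{n} P(E_i \mid \text{guilty}),
    \]
    so each disclosed prior conviction \(E_i\) that is more likely under guilt than under innocence strengthens the posterior, whereas a purely logical framework predicts only the categorical endorsements ‘guilty’ or ‘not guilty’.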

    Characterizing the principle of minimum cross-entropy within a conditional-logical framework

    The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool to build up complete probability distributions when only partial knowledge is available. The inputs it may be applied to are a prior distribution P and some new information R, and it yields as a result the one distribution P∗ that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a “best” solution to the problem “How to adjust P to R?” In this paper, we show how probabilistic conditionals allow a new and constructive approach to this important principle. Though popular and widely used for knowledge representation, conditionals quantified by probabilities are not easily dealt with. We develop four principles that describe their handling in a reasonable and consistent way, taking into consideration the conditional-logical as well as the numerical and probabilistic aspects. Finally, the ME-principle turns out to be the only method for adjusting a prior distribution to new conditional information that obeys all these principles. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are revealed clearly.
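    In its standard form (the notation below is the usual cross-entropy/relative-entropy formulation and is not quoted from the paper), the ME-principle selects, among all distributions satisfying the new information R, the one closest to the prior P:
    \[
      P^{*} \;=\; \operatorname*{arg\,min}_{Q \,\models\, R} \; \sum_{\omega} Q(\omega) \log \frac{Q(\omega)}{P(\omega)},
    \]
    where the sum ranges over the possible worlds \(\omega\) and \(Q \models R\) means that Q satisfies the probabilistic conditionals in R.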

    Logical intuitions and heuristic reflections: rethinking the role of intuition in probability judgements

    This thesis aims to further our understanding of the role that intuition plays in human reasoning when people make probability judgements. It attempts to: a) gain a better understanding of the cognitive processes underlying these judgements, b) determine how individual differences affect the logicality of these judgements, and c) test original, theoretically driven ways to increase logical intuitions in probability judgements. Classically, it is assumed that people make biased judgements because they rely on an intuitive thinking system (System 1) and apply the representativeness heuristic when making conjunctive probability judgements. In contrast, logical judgements are assumed to arise from the use of deliberation (System 2) to overrule the prepotent heuristic response and replace it with a logical one. Recent research, however, has challenged this claim and instead proposes that our intuitions do not always lead us astray: they can reflect a sensitivity to logic that is implicit and potentially operates automatically, outside of awareness. This thesis takes that notion one step further and asks whether it is the slower, more deliberative thinking system that may be vulnerable to prior beliefs and biases. A series of five experiments examined the relative impact of heuristic and logical considerations on probability judgements. The results indicated that people readily detect the conflict between intuitive and deliberative assessments and engage in deliberative processing without undue effort, which suggests they are not simply cognitive misers who fail to reason in line with the principles of logic because they lack either the cognitive ability or the motivation to do so. The results also supported the idea that people can intuit logical judgements (i.e., judgements in accordance with the laws of probability) when they rely on System 1 thinking; it is when they deliberate, using System 2 thinking, that heuristics bias their judgements.
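    For reference, the law of probability at issue in conjunctive judgements is the conjunction rule (a standard consequence of the probability axioms, not a result established by the thesis):
    \[
      P(A \wedge B) \;\le\; \min\bigl(P(A), P(B)\bigr),
    \]
    so rating a conjunction as more probable than one of its conjuncts (the conjunction fallacy typically attributed to the representativeness heuristic) violates this rule, while a logical intuition respects it.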
