    An Ordinal View of Independence with Application to Plausible Reasoning

    An ordinal view of independence is studied in the framework of possibility theory. We investigate three possible definitions of dependence, of increasing strength. One of them is the counterpart to the multiplication law in probability theory, and the two others are based on the notion of conditional possibility. These two have enough expressive power to support the whole possibility theory, and a complete axiomatization is provided for the strongest one. Moreover we show that weak independence is well-suited to the problems of belief change and plausible reasoning, especially to address the problem of blocking of property inheritance in exception-tolerant taxonomic reasoning. Comment: Appears in Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence (UAI 1994).
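    The multiplication-law counterpart mentioned in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's exact definitions: it assumes a qualitative possibility distribution over worlds, min-based conditioning, and the min-decomposability test `Pi(A&B) == min(Pi(A), Pi(B))`; the toy bird/flies worlds are hypothetical.

```python
# Sketch of qualitative possibility theory (assumption: the paper's exact
# definitions of (in)dependence differ; this shows the min-based counterpart
# of the probabilistic multiplication law).

def possibility(pi, event):
    """Possibility of an event = max possibility over its worlds."""
    return max((pi[w] for w in event), default=0.0)

def conditional_possibility(pi, b, a):
    """Min-based conditioning: Pi(B|A) = 1 if Pi(A&B) == Pi(A), else Pi(A&B)."""
    pab = possibility(pi, a & b)
    return 1.0 if pab == possibility(pi, a) else pab

def min_independent(pi, a, b):
    """Counterpart of the multiplication law: Pi(A&B) == min(Pi(A), Pi(B))."""
    return possibility(pi, a & b) == min(possibility(pi, a), possibility(pi, b))

# Worlds encode (bird?, flies?) for an exception-tolerant toy taxonomy:
# flying birds are fully possible, non-flying birds only somewhat possible.
pi = {("bird", "flies"): 1.0, ("bird", "no-fly"): 0.5, ("nonbird", "no-fly"): 1.0}
birds = {w for w in pi if w[0] == "bird"}
fliers = {w for w in pi if w[1] == "flies"}
```

    Here conditioning on `birds` leaves flying fully possible but non-flying only possible to degree 0.5, which is how exceptional subclasses are handled without blocking inheritance.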

    Ignorance and indifference

    The epistemic state of complete ignorance is not a probability distribution. In it, we assign the same, unique, ignorance degree of belief to any contingent outcome and each of its contingent, disjunctive parts. That this is the appropriate way to represent complete ignorance is established by two instruments, each individually strong enough to identify this state. They are the principle of indifference (PI) and the notion that ignorance is invariant under certain redescriptions of the outcome space, here developed into the 'principle of invariance of ignorance' (PII). Both instruments are so innocuous as almost to be platitudes. Yet the literature in probabilistic epistemology has misdiagnosed them as paradoxical or defective since they generate inconsistencies when conjoined with the assumption that an epistemic state must be a probability distribution. To underscore the need to drop this assumption, I express PII in its most defensible form as relating symmetric descriptions and show that paradoxes still arise if we assume the ignorance state to be a probability distribution. Copyright 2008 by the Philosophy of Science Association. All rights reserved

    Epistemic Foundation of Stable Model Semantics

    Stable model semantics has become a very popular approach for the management of negation in logic programming. This approach relies mainly on the closed world assumption to complete the available knowledge, and its formulation has its basis in the so-called Gelfond-Lifschitz transformation. The primary goal of this work is to present an alternative, epistemic-based characterization of stable model semantics, as opposed to the one given by the Gelfond-Lifschitz transformation. In particular, we show that stable model semantics can be defined entirely as an extension of the Kripke-Kleene semantics. Indeed, we show that the closed world assumption can be seen as an additional source of `falsehood' to be added cumulatively to the Kripke-Kleene semantics. Our approach is purely algebraic and abstracts from the particular formalism of choice, as it is based on monotone operators (under the knowledge order) over bilattices only. Comment: 41 pages. To appear in Theory and Practice of Logic Programming (TPLP).
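    The Gelfond-Lifschitz transformation the abstract contrasts with is short enough to sketch. This is a hypothetical mini-interpreter, not the paper's epistemic construction: rules are triples `(head, positive_body, negated_body)`, and a candidate set is stable iff it equals the least model of its reduct.

```python
# Sketch of the Gelfond-Lifschitz transformation (reduct) for propositional
# normal logic programs; an illustration, not the paper's bilattice approach.

def reduct(program, candidate):
    """Drop rules whose negated body meets the candidate; erase 'not' literals."""
    return [(h, pos) for (h, pos, neg) in program if not (set(neg) & candidate)]

def least_model(definite_program):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_program:
            if set(pos) <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    """A set of atoms is a stable model iff it is the least model of its reduct."""
    return least_model(reduct(program, candidate)) == candidate

# p :- not q.   q :- not p.   (two stable models: {p} and {q})
prog = [("p", [], ["q"]), ("q", [], ["p"])]
```

    The two-rule example shows why the semantics is non-deterministic: both `{p}` and `{q}` are stable, while the empty set and `{p, q}` are not.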

    Assessing forensic evidence by computing belief functions

    We first discuss certain problems with the classical probabilistic approach for assessing forensic evidence, in particular its inability to distinguish between lack of belief and disbelief, and its inability to model complete ignorance within a given population. We then discuss Shafer belief functions, a generalization of probability distributions, which can deal with both these objections. We use a calculus of belief functions which does not use the much criticized Dempster rule of combination, but only the very natural Dempster-Shafer conditioning. We then apply this calculus to some classical forensic problems like the various island problems and the problem of parental identification. If we impose no prior knowledge apart from assuming that the culprit or parent belongs to a given population (something which is possible in our setting), then our answers differ from the classical ones when uniform or other priors are imposed. We can actually retrieve the classical answers by imposing the relevant priors, so our setup can and should be interpreted as a generalization of the classical methodology, allowing more flexibility. We show how our calculus can be used to develop an analogue of Bayes' rule, with belief functions instead of classical probabilities. We also discuss consequences of our theory for legal practice. Comment: arXiv admin note: text overlap with arXiv:1512.01249. Accepted for publication in Law, Probability and Risk.
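    The lack-of-belief versus disbelief distinction the abstract leads with can be made concrete with a tiny belief function. This is a hypothetical three-person "island" population, not the paper's model: putting all mass on the whole frame represents complete ignorance about who the culprit is.

```python
# Sketch of a Shafer belief function (mass assignment over subsets of a frame);
# hypothetical numbers illustrating lack of belief vs. disbelief.

def bel(mass, event):
    """Belief: total mass committed to subsets of the event."""
    return sum(m for focal, m in mass.items() if focal <= event)

def pl(mass, event):
    """Plausibility: total mass not committed against the event."""
    return sum(m for focal, m in mass.items() if focal & event)

frame = frozenset({"suspect", "other1", "other2"})
# All mass on the frame itself: we know only that the culprit is in the population.
mass = {frame: 1.0}
guilty = frozenset({"suspect"})
```

    Under complete ignorance, `bel(guilty)` and `bel(not guilty)` are both 0 while `pl(guilty)` is 1: no belief either way, which a single probability distribution (e.g. a uniform 1/3 prior) cannot express.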

    Matching bias in syllogistic reasoning: Evidence for a dual-process account from response times and confidence ratings

    We examined matching bias in syllogistic reasoning by analysing response times, confidence ratings, and individual differences. Roberts’ (2005) “negations paradigm” was used to generate conflict between the surface features of problems and the logical status of conclusions. The experiment replicated matching bias effects in conclusion evaluation (Stupple & Waterhouse, 2009), revealing increased processing times for matching/logic “conflict problems”. Results paralleled chronometric evidence from the belief bias paradigm indicating that logic/belief conflict problems take longer to process than non-conflict problems (Stupple, Ball, Evans, & Kamal-Smith, 2011). Individuals’ response times for conflict problems also showed patterns of association with the degree of overall normative responding. Acceptance rates, response times, metacognitive confidence judgements, and individual differences all converged in supporting dual-process theory. This is noteworthy because dual-process predictions about heuristic/analytic conflict in syllogistic reasoning generalised from the belief bias paradigm to a situation where matching features of conclusions, rather than beliefs, were set in opposition to logic.

    Characterizing and Reasoning about Probabilistic and Non-Probabilistic Expectation

    Expectation is a central notion in probability theory. The notion of expectation also makes sense for other notions of uncertainty. We introduce a propositional logic for reasoning about expectation, where the semantics depends on the underlying representation of uncertainty. We give sound and complete axiomatizations for the logic in the case that the underlying representation is (a) probability, (b) sets of probability measures, (c) belief functions, and (d) possibility measures. We show that this logic is more expressive than the corresponding logic for reasoning about likelihood in the case of sets of probability measures, but equi-expressive in the case of probability, belief, and possibility. Finally, we show that satisfiability for these logics is NP-complete, no harder than satisfiability for propositional logic. Comment: To appear in Journal of the ACM.
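    For case (b), expectation becomes an interval rather than a number. A minimal sketch, with hypothetical numbers: over a finite set of probability measures, the lower and upper expectations are the min and max of the ordinary expectations (for infinite convex credal sets these are inf and sup).

```python
# Sketch of expectation under a set of probability measures (a credal set);
# the distributions and random variable below are hypothetical.

def expectation(p, x):
    """Ordinary expectation of random variable x under distribution p."""
    return sum(p[w] * x[w] for w in p)

def lower_expectation(credal_set, x):
    """Lower expectation: worst-case expectation over the credal set."""
    return min(expectation(p, x) for p in credal_set)

def upper_expectation(credal_set, x):
    """Upper expectation: best-case expectation over the credal set."""
    return max(expectation(p, x) for p in credal_set)

x = {"a": 1.0, "b": 0.0}  # indicator of outcome "a"
credal = [{"a": 0.2, "b": 0.8}, {"a": 0.6, "b": 0.4}]
```

    For an indicator variable, the lower and upper expectations reduce to the lower and upper probabilities of the event, which is why the expectation logic subsumes the likelihood logic in this case.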

    Probabilistic Algorithmic Knowledge

    The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence; a randomized knowledge algorithm returning "Yes" to a query about a fact \phi provides evidence for \phi being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions. Comment: 26 pages. A preliminary version appeared in Proc. 9th Conference on Theoretical Aspects of Rationality and Knowledge (TARK'03).
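    One standard way to quantify the evidence carried by a noisy "Yes" answer is as a likelihood-ratio update; the error rates below are hypothetical, and the paper's formal notion of evidence differs in its details.

```python
# Sketch: Bayes update on observing "Yes" from a randomized knowledge
# algorithm with known error rates (hypothetical numbers).

def posterior_after_yes(prior, p_yes_if_true, p_yes_if_false):
    """Posterior probability of phi after the algorithm answers 'Yes'."""
    num = p_yes_if_true * prior
    return num / (num + p_yes_if_false * (1 - prior))

# The algorithm says "Yes" 90% of the time when phi holds, 20% when it does not,
# so a "Yes" answer raises a 50% prior to 9/11 (about 0.82).
p = posterior_after_yes(0.5, 0.9, 0.2)
```

    When the two rates coincide, a "Yes" answer carries no evidence and the prior is unchanged, matching the intuition that an uninformative algorithm confers no knowledge.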