
    The Basic Principles of Uncertain Information Fusion. An organized review of merging rules in different representation frameworks

    We propose and advocate basic principles for the fusion of incomplete or uncertain information items, which should apply regardless of the formalism adopted for representing the pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions or imprecise probabilities. We propose a general notion of information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility and to explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as the relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles, and compare them to postulates specific to the representation that have been proposed in the past. In the crudest (Boolean) representation setting (using a set of possible values), we show that whether the set is understood in terms of the most plausible values or in terms of the non-impossible ones matters for choosing a relevant fusion rule. In particular, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. We then consider several formal settings for incomplete or uncertain information items in which our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across various uncertainty representation settings.
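As a toy illustration of the maximal-consistent-subsets rule mentioned in the abstract, the following Python sketch (function name and data are our own, hypothetical choices) fuses set-valued items from several sources by intersecting every maximal consistent group of sources and taking the union of those intersections:

```python
from itertools import combinations

def maximal_consistent_fusion(items):
    """Fuse set-valued information items via maximal consistent subsets.

    items: list of sets of non-impossible values, one per source.
    Returns the union of the intersections of all maximal consistent
    groups of sources (groups whose intersection is non-empty and that
    cannot be enlarged without becoming inconsistent).
    """
    n = len(items)
    consistent = []  # (index-set, intersection) pairs, largest first
    for k in range(n, 0, -1):
        for idxs in combinations(range(n), k):
            inter = set.intersection(*(items[i] for i in idxs))
            # keep only groups not contained in an already-found group
            if inter and not any(set(idxs) <= big for big, _ in consistent):
                consistent.append((set(idxs), inter))
    result = set()
    for _, inter in consistent:
        result |= inter
    return result
```

For instance, with sources reporting {1, 2, 3}, {2, 3, 4} and {5}, the maximal consistent groups are the first two sources (intersection {2, 3}) and the isolated third source, so the fusion yields {2, 3, 5}.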

    Other uncertainty theories based on capacities

    The two main uncertainty representations in the literature that tolerate imprecision are possibility distributions and random disjunctive sets. This chapter devotes special attention to the theories that have emerged from them. The first part of the chapter discusses epistemic logic and derives the need for capturing imprecision in information representations. It bridges the gap between uncertainty theories and epistemic logic, showing that imprecise probabilities subsume modalities of possibility and necessity as much as probability. The second part presents possibility and evidence theories, their origins, assumptions and semantics, and discusses the connections between them and the general framework of imprecise probability. Finally, the chapter points out the remaining discrepancies between the different theories regarding various basic notions, such as conditioning, independence or information fusion, and the existing bridges between them.

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected works), Vol. 2

    This second volume dedicated to Dezert-Smarandache Theory (DSmT) in Information Fusion brings in new quantitative fusion rules (such as PCR1-6, where PCR5 for two sources performs the most mathematically exact redistribution of conflicting masses to the non-empty sets in the fusion literature), qualitative fusion rules, and the Belief Conditioning Rule (BCR), which differs from the classical conditioning rule used by the fusion community working with the Mathematical Theory of Evidence. Other fusion rules are constructed based on T-norms and T-conorms (hence using fuzzy logic and fuzzy sets in information fusion), or more general fusion rules based on N-norms and N-conorms (hence using neutrosophic logic and neutrosophic sets in information fusion), together with an attempt to unify the fusion rules and fusion theories. The known fusion rules are extended from the power set to the hyper-power set, and comparisons between rules are made on many examples. One defines the degree of intersection of two sets, the degree of union of two sets, and the degree of inclusion of two sets, which all help improve the existing fusion rules as well as the credibility, plausibility, and commonality functions. The book chapters are written by Frederic Dambreville, Milan Daniel, Jean Dezert, Pascal Djiknavorian, Dominic Grenier, Xinhan Huang, Pavlina Dimitrova Konstantinova, Xinde Li, Arnaud Martin, Christophe Osswald, Andrew Schumann, Tzvetan Atanasov Semerdjiev, Florentin Smarandache, Albena Tchamova, and Min Wang.
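The PCR5 redistribution mentioned above can be sketched for two sources; this minimal Python version is our reading of the commonly stated two-source formula, not code from the book. Basic belief assignments are dicts from frozensets to masses, and each partial conflict is sent back to the two sets that produced it, in proportion to their masses:

```python
def pcr5(m1, m2):
    """Two-source PCR5: conjunctive combination plus proportional
    redistribution of every partial conflict to its two culprit sets."""
    out = {}
    for y, a in m1.items():
        for z, b in m2.items():
            inter = y & z
            if inter:
                # non-conflicting product mass goes to the intersection
                out[inter] = out.get(inter, 0.0) + a * b
            elif a + b > 0:
                # split the conflicting mass a*b proportionally to a and b
                out[y] = out.get(y, 0.0) + a * a * b / (a + b)
                out[z] = out.get(z, 0.0) + a * b * b / (a + b)
    return out
```

Unlike Dempster's rule, no mass is discarded or renormalised away: the combined masses still sum to 1, which is the sense in which the redistribution is exact.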

    Learning, conditionals, causation

    This dissertation is on conditionals and causation. In particular, we (i) propose a method of how an agent learns conditional information, and (ii) analyse causation in terms of a new type of conditional. Our starting point is Ramsey's (1929/1990) test: accept a conditional when you can infer its consequent upon supposing its antecedent. Inspired by this test, Stalnaker (1968) developed a semantics of conditionals. In Ch. 2, we define and apply our new method of learning conditional information. It says, roughly, that you learn conditional information by updating on the corresponding Stalnaker conditional. By generalising Lewis's (1976) updating rule to Jeffrey imaging, our learning method becomes applicable to both certain and uncertain conditional information. The method generates the correct predictions for all of Douven's (2012) benchmark examples and Van Fraassen's (1981) Judy Benjamin Problem. In Ch. 3, we prefix Ramsey's test by suspending judgment on antecedent and consequent. Unlike the Ramsey Test semantics by Stalnaker (1968) and Gärdenfors (1978), our strengthened semantics requires the antecedent to be inferentially relevant for the consequent. We exploit this asymmetric relation of relevance in a semantic analysis of the natural language conjunction 'because'. In Ch. 4, we devise an analysis of actual causation in terms of production, where production is understood along the lines of our strengthened Ramsey Test. Our analysis solves the problems of overdetermination, conjunctive scenarios, early and late preemption, switches, double prevention, and spurious causation -- a set of problems that still challenges counterfactual accounts of actual causation in the tradition of Lewis (1973c). In Ch. 5, we translate our analysis of actual causation into Halpern and Pearl's (2005) framework of causal models. As a result, our analysis is considerably simplified, at the cost of losing its reductiveness.
The upshot is twofold: (i) Jeffrey imaging on Stalnaker conditionals emerges as an alternative to Bayesian accounts of learning conditional information; (ii) the analyses of causation in terms of our strengthened Ramsey Test conditional prove to be worthy rivals to contemporary counterfactual accounts of causation.
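The imaging step described for Ch. 2 can be illustrated with a toy model; the world set, probabilities, and selection function below are our own illustrative assumptions, not the dissertation's examples. Lewis imaging shifts each world's probability to its nearest A-world, and Jeffrey imaging mixes the images on A and on not-A according to the learned degree of belief alpha:

```python
def image(P, A, nearest):
    """Lewis imaging on proposition A (a set of worlds):
    each world sends its probability to its nearest A-world."""
    Q = dict.fromkeys(P, 0.0)
    for v, p in P.items():
        Q[nearest(v, A)] += p
    return Q

def jeffrey_image(P, A, alpha, nearest):
    """Jeffrey imaging: learn A to degree alpha by mixing the
    images on A and on its complement."""
    not_a = set(P) - set(A)
    on_a, on_not_a = image(P, A, nearest), image(P, not_a, nearest)
    return {w: alpha * on_a[w] + (1 - alpha) * on_not_a[w] for w in P}
```

With alpha = 1 this reduces to Lewis's original imaging rule, which matches the claim that the generalisation handles both certain and uncertain conditional information.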

    Collaborating

    This paper examines moral hazard in teams over time. Agents are collectively engaged in an uncertain project, and their individual efforts are unobserved. Free-riding leads not only to a reduction in effort, but also to procrastination. The collaboration dwindles over time, but never ceases as long as the project has not succeeded. In fact, the delay until the project succeeds, if it ever does, increases with the number of agents. We show why deadlines, but not necessarily better monitoring, help to mitigate moral hazard.
    Keywords: Moral hazard, teams, experimentation, collaboration, public goods, learning.

    Bounded rationality and spatio-temporal pedestrian shopping behavior


    Combination of Evidence in Dempster-Shafer Theory


    Borderline Contradictions

    Borderline contradictions have long been considered to be false, but recent developments at the intersection of the fields of experimental philosophy and linguistics have led to a consensus that ordinary speakers of natural language find borderline contradictions to be true. Furthermore, speakers are more likely to agree to disjunctive borderline contradictions (¬(A ∨ ¬A)) than to their conjunctive counterparts (A ∧ ¬A). We focus our attention on a series of studies of this inequality, culminating in Égré and Zehr's 2016 algorithmic account, which invokes strict and tolerant operators to predict that, while speakers are more likely to agree to disjunctive contradictions than to conjunctive contradictions, they are also more likely to agree to both of these than to their positive and negative subparts (A and ¬A). We present the results of three new studies, one a replication of Égré and Zehr's work, the results of which suggest that speakers only find the positive subsentence to be false, suggesting that some leading accounts of how speakers interpret borderline contradictions may require modification.
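To make the strict/tolerant distinction concrete, here is a minimal three-valued sketch (our own illustration, not Égré and Zehr's algorithm): borderline sentences take the value 1/2 under strong Kleene connectives, a strict assertion requires value 1, and a tolerant assertion requires value greater than 0. Note that plain Kleene evaluation assigns the conjunctive and disjunctive contradictions the same value, which is exactly why the account adds dedicated strict and tolerant operators to pull them apart:

```python
T, B, F = 1.0, 0.5, 0.0          # true, borderline, false

neg = lambda x: 1.0 - x          # strong Kleene negation
conj = min                       # strong Kleene conjunction
disj = max                       # strong Kleene disjunction

strict = lambda x: x == T        # strictly assertable
tolerant = lambda x: x > F       # tolerantly assertable

A = B  # a borderline sentence, e.g. "John is tall" for a borderline case
conjunctive = conj(A, neg(A))        # A and not-A
disjunctive = neg(disj(A, neg(A)))   # not (A or not-A)
```

Both contradictions come out tolerantly but not strictly assertable here, so the observed asymmetry between them cannot be predicted at this level alone.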