
    General combination rules for qualitative and quantitative beliefs

    Martin and Osswald \cite{Martin07} have recently proposed several generalizations of combination rules for quantitative beliefs in order to manage conflict and to account for the specificity of the experts' responses. Since experts usually express themselves in natural language with linguistic labels, Smarandache and Dezert \cite{Li07} have introduced a mathematical framework for dealing directly with qualitative beliefs as well. In this paper we recall some elements of our previous work and propose new combination rules developed for the fusion of both qualitative and quantitative beliefs.
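
    As background for these generalizations, here is a minimal sketch of the classical quantitative rule they extend, Dempster's rule of combination; the dict-of-frozensets representation of basic belief assignments (BBAs) and the toy frame {a, b} are illustrative choices, not the paper's notation.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Classical Dempster's rule for two basic belief assignments (BBAs).

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Returns the normalized combined BBA; raises on total conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {focal: mass / (1.0 - conflict) for focal, mass in combined.items()}

# Two experts over the frame {a, b}:
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
m2 = {frozenset("b"): 0.7, frozenset("ab"): 0.3}
print(dempster_combine(m1, m2))
```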

    A new weighting factor in combining belief function

    Dempster-Shafer evidence theory has been widely used in various applications. However, avoiding the counter-intuitive outcomes produced by the classical Dempster-Shafer combination rule when fusing conflicting evidences remains an open issue. Many approaches based on discounted evidence and weighted-average evidence have been investigated and have brought significant improvements. Nevertheless, all of these approaches have inherent flaws. In this paper, a new weighting factor is proposed to address this problem.
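
    The abstract does not spell out the proposed weighting factor, so the sketch below only illustrates the general weighted-average scheme such methods build on: average the BBAs under credibility weights, then combine the average repeatedly. The placeholder weights and the dict representation are assumptions.

```python
def weighted_average_bba(bbas, weights):
    """Average a list of BBAs with credibility weights summing to 1.
    The paper's new weighting factor is not reproduced here; fixed
    illustrative weights stand in for it."""
    avg = {}
    for m, w in zip(bbas, weights):
        for focal, mass in m.items():
            avg[focal] = avg.get(focal, 0.0) + w * mass
    return avg

# Two conflicting evidences over the frame {a, b}, weighted 0.7 / 0.3.
# In the usual weighted-average scheme, the averaged BBA is then combined
# (n - 1) times with itself, e.g. with Dempster's rule from the sketch above.
bbas = [{frozenset("a"): 0.9, frozenset("ab"): 0.1},
        {frozenset("b"): 0.8, frozenset("ab"): 0.2}]
print(weighted_average_bba(bbas, [0.7, 0.3]))
```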

    Distances in evidence theory: Comprehensive survey and generalizations

    The purpose of the present work is to survey the dissimilarity measures defined so far in the mathematical framework of evidence theory, and to propose a classification of these measures based on their formal properties. This research is motivated by the fact that while dissimilarity measures have been widely studied and surveyed in the fields of probability theory and fuzzy set theory, no comprehensive survey is yet available for evidence theory. The main results presented herein include a synthesis of the properties of the measures defined so far in the scientific literature; the generalizations proposed naturally extend the body of previously known measures, leading to the definition of numerous new ones. Building on this analysis, we have highlighted the fact that Dempster’s conflict cannot be considered a genuine dissimilarity measure between two belief functions and have proposed an alternative based on a cosine function. Other original results include the justification of the use of two-dimensional indexes as (cosine; distance) couples and a general formulation for this class of new indexes. We base our exposition on a geometrical interpretation of evidence theory and show that most of the dissimilarity measures published so far are based on inner products, in some cases degenerated. Experimental results based on Monte Carlo simulations illustrate interesting relationships between existing measures.
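
    As one concrete instance of the inner-product-based measures such a survey covers, here is a sketch of the well-known Jousselme distance between two BBAs; the frozenset representation of focal elements and the toy frame are illustrative assumptions, and the paper's own cosine-based index is not reproduced.

```python
import numpy as np
from itertools import combinations

def powerset(frame):
    """All non-empty subsets of the frame of discernment."""
    s = list(frame)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def jousselme_distance(m1, m2, frame):
    """Jousselme et al. distance, a representative inner-product-based
    dissimilarity: d = sqrt(0.5 * (v1 - v2)^T D (v1 - v2)), where
    D(A, B) = |A ∩ B| / |A ∪ B| (Jaccard index on focal elements)."""
    subsets = powerset(frame)
    v1 = np.array([m1.get(a, 0.0) for a in subsets])
    v2 = np.array([m2.get(a, 0.0) for a in subsets])
    D = np.array([[len(a & b) / len(a | b) for b in subsets] for a in subsets])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))

frame = {"a", "b"}
m1 = {frozenset("a"): 0.8, frozenset("ab"): 0.2}
m2 = {frozenset("b"): 0.8, frozenset("ab"): 0.2}
print(jousselme_distance(m1, m2, frame))  # 0.8 for these two BBAs
```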

    Alphabetic Letter Identification: Effects of perceivability, similarity, and bias

    The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of the literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each makes powerful contributions to visual letter identification.
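
    The abstract does not specify the new statistical model, but the bias and similarity constructs it names trace back to Luce's classic similarity-choice model; the sketch below shows that classic model only (without the paper's perceivability term), with a hypothetical toy similarity matrix.

```python
import numpy as np

def choice_probabilities(eta, bias):
    """Luce's similarity-choice model for letter confusion data:
    P(respond j | stimulus i) = b_j * eta_ij / sum_k b_k * eta_ik.

    eta: (n, n) symmetric similarity matrix with eta_ii = 1.
    bias: length-n vector of positive response biases.
    """
    weighted = eta * bias  # scale column j by response bias b_j
    return weighted / weighted.sum(axis=1, keepdims=True)

# Hypothetical 3-letter example: more similar letters confuse more often.
eta = np.array([[1.0, 0.5, 0.1],
                [0.5, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
bias = np.array([0.4, 0.3, 0.3])
print(choice_probabilities(eta, bias))  # rows: stimuli, columns: responses
```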

    Adaptive imputation of missing values for incomplete pattern classification

    In the classification of incomplete patterns, the missing values can either play a crucial role in determining the class or have little (or even no) influence on the classification result, depending on the context. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. At first, we attempt to classify the object (the incomplete pattern) using only the available attribute values. The underlying principle is that the missing information is not crucial for the classification if a specific class can be found for the object using the available information alone. In that case, the object is committed to this particular class. However, if the object cannot be classified without ambiguity, the missing values play a major role in achieving an accurate classification. In this case, the missing values are imputed using the K-nearest neighbor (K-NN) and self-organizing map (SOM) techniques, and the edited pattern obtained by imputation is then classified. The (original or edited) pattern is classified with respect to each training class, and the classification results, represented by basic belief assignments, are fused with proper combination rules to produce the credal classification. The object is allowed to belong, with different masses of belief, to specific classes and to meta-classes (which are particular disjunctions of several single classes). The credal classification captures well the uncertainty and imprecision of the classification, and effectively reduces the rate of misclassifications thanks to the introduction of meta-classes. The effectiveness of the proposed method with respect to other classical methods is demonstrated through several experiments on artificial and real data sets.
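
    A minimal sketch of the adaptive-imputation control flow described above follows, assuming complete training patterns and an sklearn-style classifier with predict_proba; the SOM-based imputation and the credal fusion of basic belief assignments are omitted, and the ambiguity margin is an illustrative stand-in for the paper's criterion.

```python
import numpy as np

def knn_impute(x, train_X, k=5):
    """Fill the missing entries (NaN) of x with the mean of its k nearest
    complete training neighbours, measured on the observed attributes only."""
    obs = ~np.isnan(x)
    dists = np.linalg.norm(train_X[:, obs] - x[obs], axis=1)
    nearest = train_X[np.argsort(dists)[:k]]
    x_filled = x.copy()
    x_filled[~obs] = nearest[:, ~obs].mean(axis=0)
    return x_filled

def classify_adaptively(x, train_X, train_y, clf, margin=0.2):
    """Classify on available attributes first; impute and re-classify only
    when the decision is ambiguous (top-2 probability gap below margin)."""
    obs = ~np.isnan(x)
    clf.fit(train_X[:, obs], train_y)          # classifier on observed attrs
    p = np.sort(clf.predict_proba([x[obs]])[0])[::-1]
    if p[0] - p[1] >= margin:                  # unambiguous: skip imputation
        return clf.predict([x[obs]])[0]
    x_full = knn_impute(x, train_X)            # ambiguous: impute, re-classify
    clf.fit(train_X, train_y)
    return clf.predict([x_full])[0]

# Usage sketch (any sklearn-style classifier with predict_proba):
# from sklearn.neighbors import KNeighborsClassifier
# label = classify_adaptively(x, train_X, train_y, KNeighborsClassifier())
```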

    Similarity-Based Models of Word Cooccurrence Probabilities

    In many applications of natural language processing (NLP) it is necessary to determine the likelihood of a given word combination. For example, a speech recognizer may need to determine which of the two word combinations ``eat a peach'' and ``eat a beach'' is more likely. Statistical NLP methods determine the likelihood of a word combination from its frequency in a training corpus. However, the nature of language is such that many word combinations are infrequent and do not occur in any given corpus. In this work we propose a method for estimating the probability of such previously unseen word combinations using available information on ``most similar'' words. We describe probabilistic word association models based on distributional word similarity, and apply them to two tasks, language modeling and pseudo-word disambiguation. In the language modeling task, a similarity-based model is used to improve probability estimates for unseen bigrams in a back-off language model. The similarity-based method yields a 20% perplexity improvement in the prediction of unseen bigrams and statistically significant reductions in speech-recognition error. We also compare four similarity-based estimation methods against back-off and maximum-likelihood estimation methods on a pseudo-word sense disambiguation task in which we controlled for both unigram and bigram frequency to avoid giving too much weight to easy-to-disambiguate high-frequency configurations. The similarity-based methods perform up to 40% better on this particular task.
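
    The core estimator can be sketched as follows: a similarity-weighted average of the conditional probabilities of the second word after words similar to the first. The neighbor sets, similarity function, and toy probabilities below are hypothetical stand-ins for the corpus-derived quantities the paper uses.

```python
def similarity_prob(w1, w2, neighbors, sim, cond_prob):
    """Schematic similarity-based estimate for an unseen bigram (w1, w2),
    used to fill the unseen-bigram case of a back-off language model:

        P_SIM(w2 | w1) = sum_{w1' in S(w1)} [sim(w1, w1') / Z] * P(w2 | w1')
    """
    z = sum(sim(w1, w) for w in neighbors[w1])
    return sum(sim(w1, w) / z * cond_prob(w2, w) for w in neighbors[w1])

# Toy example: the bigram ("munch", "peach") is unseen, so its probability
# is borrowed from distributionally similar conditioning words.
neighbors = {"munch": ["eat", "chew"]}
sim = lambda a, b: {("munch", "eat"): 0.7, ("munch", "chew"): 0.3}[(a, b)]
cond = lambda w2, w1: {("peach", "eat"): 0.10, ("peach", "chew"): 0.05}[(w2, w1)]
print(similarity_prob("munch", "peach", neighbors, sim, cond))  # 0.085
```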

    An Evidence Fusion Method with Importance Discounting Factors based on Neutrosophic Probability Analysis in DSmT Framework

    To obtain effective fusion results from multi-source evidences of different importance, an evidence fusion method with importance discounting factors based on neutrosophic probability analysis in the DSmT framework is proposed. First, the reasonable evidence sources are selected based on a statistical analysis of the pignistic probability functions of single focal elements. Secondly, the neutrosophic probability analysis is conducted based on the similarities of the pignistic probability functions drawn from the prior evidence knowledge of the reasonable evidence sources. Thirdly, the importance discounting factors of the reasonable evidence sources are obtained from the neutrosophic probability analysis, and the reliability discounting factors of the real-time evidences are calculated from probabilistic-based distances. Fourthly, the real-time evidences are discounted by the importance discounting factors, and the evidences carrying mass assignments of neutrosophic empty sets are then discounted by the reliability discounting factors. Finally, DSmT+PCR5 is applied to the importance-discounted evidences. Experimental examples show that the decision results of the proposed fusion method differ from those of existing fusion methods. Simulation experiments on recognition fusion are performed, and the superiority of the proposed method is confirmed by the simulation results.
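
    The neutrosophic derivation of the importance factors cannot be reconstructed from the abstract alone; the sketch below shows only the two standard ingredients the method builds on, classical reliability discounting (importance discounting, which transfers mass to the empty set instead, is not shown) and the PCR5 combination rule for two sources, with illustrative masses.

```python
from itertools import product

def discount(m, alpha, frame):
    """Classical (reliability) discounting: scale each mass by alpha and
    transfer the remainder to total ignorance (the full frame)."""
    d = {focal: alpha * mass for focal, mass in m.items()}
    d[frame] = d.get(frame, 0.0) + (1.0 - alpha)
    return d

def pcr5_combine(m1, m2):
    """PCR5 rule for two sources: conjunctive combination, with each piece
    of conflicting mass redistributed proportionally to the two focal
    elements that generated it."""
    fused = {}
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:  # proportional conflict redistribution
            fused[a] = fused.get(a, 0.0) + x * x * y / (x + y)
            fused[b] = fused.get(b, 0.0) + y * y * x / (x + y)
    return fused

frame = frozenset("ab")
m1 = discount({frozenset("a"): 0.9, frame: 0.1}, alpha=0.8, frame=frame)
m2 = discount({frozenset("b"): 0.6, frame: 0.4}, alpha=1.0, frame=frame)
print(pcr5_combine(m1, m2))
```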

    Full Issue


    Multi-source heterogeneous intelligence fusion
