
    Evidence combination based on credal belief redistribution for pattern classification

    Evidence theory, also called belief function theory, provides an efficient tool to represent and combine uncertain information for pattern classification. Evidence combination can be interpreted, in some applications, as classifier fusion. The sources of evidence corresponding to multiple classifiers usually exhibit different classification qualities, and they are often discounted using different weights before combination. In order to achieve the best possible fusion performance, a new Credal Belief Redistribution (CBR) method is proposed to revise such evidence. The rationale of CBR is to transfer belief from one class not just to other classes but also to the associated disjunctions of classes (i.e., meta-classes). As a given classifier's accuracy can also vary across objects, the evidence is revised according to prior knowledge mined from its training neighbors. If the selected neighbors are relatively close to the evidence, a large amount of belief will be discounted for redistribution; otherwise, only a small fraction of belief will enter the redistribution procedure. An imprecision matrix estimated from these neighbors is employed to redistribute the discounted beliefs. This matrix expresses the likelihood of misclassification (i.e., the probability of a test pattern belonging to a class different from the one assigned to it by the classifier). In CBR, the discounted beliefs are divided into two parts: one part is transferred between singleton classes, whereas the other is cautiously committed to the associated meta-classes. By doing this, one can efficiently reduce the chance of misclassification by modeling partial imprecision. The multiple revised pieces of evidence are finally combined by the Dempster-Shafer rule to reduce uncertainty and further improve classification accuracy. The effectiveness of CBR is extensively validated on several real datasets from the UCI repository and critically compared with that of other related fusion methods.
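
    The final combination step named in this abstract is the Dempster-Shafer rule. As a minimal sketch of that step only (not of the CBR revision itself), the following Python function implements Dempster's rule for mass functions represented as dicts mapping frozensets of class labels to masses; the example masses are illustrative, not from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: intersect the focal elements of two mass
    functions and renormalise by 1 - K, where K is the conflicting
    mass (the product mass that falls on the empty set)."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two classifiers over classes {a, b}; the mass on {'a', 'b'} is the
# meta-class (disjunction) expressing partial imprecision.
m1 = {frozenset({"a"}): 0.6, frozenset({"a", "b"}): 0.4}
m2 = {frozenset({"b"}): 0.3, frozenset({"a", "b"}): 0.7}
print(dempster_combine(m1, m2))
```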

    Adaptive imputation of missing values for incomplete pattern classification

    In the classification of incomplete patterns, the missing values can either play a crucial role in the class determination or, depending on the context, have little (or even no) influence on the classification results. We propose a credal classification method for incomplete patterns with adaptive imputation of missing values based on belief function theory. At first, we try to classify the object (incomplete pattern) based only on the available attribute values. As an underlying principle, we assume that the missing information is not crucial for the classification if a specific class for the object can be found using only the available information. In this case, the object is committed to this particular class. However, if the object cannot be classified without ambiguity, the missing values play a crucial role in achieving an accurate classification. In this case, the missing values are imputed based on the K-nearest neighbor (K-NN) and self-organizing map (SOM) techniques, and the edited (imputed) pattern is then classified. The (original or edited) pattern is classified with respect to each training class, and the classification results, represented by basic belief assignments, are fused with proper combination rules to produce the credal classification. The object is allowed to belong, with different masses of belief, to the specific classes and meta-classes (which are particular disjunctions of several single classes). The credal classification captures well the uncertainty and imprecision of classification, and effectively reduces the misclassification rate thanks to the introduction of meta-classes. The effectiveness of the proposed method with respect to other classical methods is demonstrated through several experiments on artificial and real data sets.
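
    As a rough illustration of the K-NN imputation stage only (the paper additionally uses SOM), the sketch below fills missing attributes from the K nearest complete training patterns, with distances computed on the observed attributes; the names `knn_impute` and `train_X` are hypothetical, not from the paper.

```python
import numpy as np

def knn_impute(x, train_X, k=5):
    """Fill NaN entries of pattern x by averaging its k nearest
    training patterns, with distance computed only on the attributes
    observed in x. train_X is assumed to be complete."""
    obs = ~np.isnan(x)
    d = np.linalg.norm(train_X[:, obs] - x[obs], axis=1)
    nearest = np.argsort(d)[:k]
    x_imp = x.copy()
    x_imp[~obs] = train_X[nearest][:, ~obs].mean(axis=0)
    return x_imp

train_X = np.array([[1.0, 2.0], [1.2, 2.2], [5.0, 9.0]])
x = np.array([1.1, np.nan])
print(knn_impute(x, train_X, k=2))  # imputes the missing attribute as 2.1
```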

    An Overview of Classifier Fusion Methods

    A number of classifier fusion methods have recently been developed, opening an alternative approach that can potentially improve classification performance. As there is little theory of information fusion itself, we are currently faced with different methods designed for different problems and producing different results. This paper gives an overview of classifier fusion methods and attempts to identify new trends that may dominate this area of research in the future. A taxonomy of fusion methods that attempts to bring some order into the existing “pudding of diversities” is also provided.

    A Distance-Based Decision in the Credal Level

    Belief function theory provides a flexible way to combine information provided by different sources. This combination is usually followed by decision making, which can be handled by a range of decision rules. Some rules help choose the most likely hypothesis; others allow a decision to be made on a set of hypotheses. In [6], we proposed a decision rule based on a distance measure. In this paper, we first demonstrate that our proposed decision rule is a particular case of the rule proposed in [4]. Second, we give experiments showing that our rule is able to decide on a set of hypotheses. Some experiments are carried out on randomly generated mass functions, others on real databases.
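
    A common form of such distance-based rules selects the (possibly non-singleton) subset of hypotheses whose categorical mass function is closest to the combined one. The sketch below uses the Jousselme distance for concreteness; whether this matches the exact rule of [6] is an assumption, and each candidate subset is assumed to appear in the ordered list of focal sets.

```python
import numpy as np

def jousselme_distance(m1, m2, focal_sets):
    """Jousselme distance between two mass vectors over the same ordered
    focal sets, weighting differences by Jaccard similarity of the sets."""
    D = np.array([[len(A & B) / len(A | B) for B in focal_sets]
                  for A in focal_sets])
    diff = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(0.5 * diff @ D @ diff)

def distance_decision(mass, focal_sets, candidates):
    """Pick the candidate subset whose categorical BBA (all mass on that
    subset) is closest to `mass`; candidates may include disjunctions,
    so the rule can return an imprecise decision."""
    cats = {A: [1.0 if F == A else 0.0 for F in focal_sets]
            for A in candidates}
    return min(candidates,
               key=lambda A: jousselme_distance(mass, cats[A], focal_sets))

focal = [frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})]
m = [0.45, 0.40, 0.15]                     # nearly balanced between a and b
print(distance_decision(m, focal, focal))  # -> frozenset({'a', 'b'})
```

    On this near-balanced example the rule decides on the meta-class {a, b} rather than forcing a choice between the two singletons, which is exactly the imprecise-decision behaviour the abstract describes.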

    Probabilistic classification of acute myocardial infarction from multiple cardiac markers

    Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued, and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78–0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1–6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI.
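
    As a hedged sketch of the general pipeline (discriminant preprocessing followed by logistic regression, assessed with a Brier score), the snippet below uses scikit-learn on synthetic data; this is not the paper's dataset, and the paper's normalised Brier score may be defined differently from sklearn's brier_score_loss.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for five marker concentrations
# (cTnI, CKMB, myoglobin, FABP, GPBB); y = 1 for AMI.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)),
               rng.normal(1.0, 1.0, (100, 5))])
y = np.r_[np.zeros(100), np.ones(100)]

# FDA preprocessing (as an LDA projection) feeding logistic regression.
model = make_pipeline(LinearDiscriminantAnalysis(n_components=1),
                      LogisticRegression())
model.fit(X, y)
p = model.predict_proba(X)[:, 1]              # estimated P(AMI | markers)
print("Brier score:", brier_score_loss(y, p))  # mean squared probability error
```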

    Inexpensive fusion methods for enhancing feature detection

    Recent successful approaches to high-level feature detection in image and video data have treated the problem as a pattern classification task. These typically leverage techniques from statistical machine learning, coupled with ensemble architectures that create multiple feature detection models. Once the models are created, co-occurrence between learned features can be captured to further boost performance. At multiple stages throughout these frameworks, various pieces of evidence can be fused together in order to boost performance. These approaches, whilst very successful, are computationally expensive and, depending on the task, require significant computational resources. In this paper we propose two fusion methods that aim to combine the output of an initial basic statistical machine learning approach with a lower-quality information source, in order to gain diversity in the classified results whilst requiring only modest computing resources. Our approaches, validated experimentally on TRECVid data, are designed to be complementary to existing frameworks and can be regarded as possible replacements for the more computationally expensive combination strategies used elsewhere.
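
    The paper's specific fusion methods are not detailed in the abstract; as an illustrative stand-in for inexpensive late fusion, the sketch below linearly combines per-class scores from a primary detector with a lower-quality auxiliary source. All names and the weight alpha are hypothetical.

```python
def weighted_late_fusion(primary, auxiliary, alpha=0.8):
    """Combine per-class confidence scores from a well-trained primary
    detector with a cheaper, lower-quality auxiliary source; alpha is
    the weight given to the primary detector."""
    classes = set(primary) | set(auxiliary)
    return {c: alpha * primary.get(c, 0.0) + (1 - alpha) * auxiliary.get(c, 0.0)
            for c in classes}

primary = {"outdoor": 0.7, "vehicle": 0.2}
auxiliary = {"outdoor": 0.4, "person": 0.6}    # noisy, inexpensive source
print(weighted_late_fusion(primary, auxiliary))
```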