
    Bayesian Augmentation of Deep Learning to Improve Video Classification

    Traditional automated video classification methods lack measures of uncertainty, meaning the network is unable to identify those cases in which its predictions are made with significant uncertainty. This leads to misclassification, as the traditional network classifies each observation with the same amount of certainty, no matter what the observation is. Bayesian neural networks remedy this issue by leveraging Bayesian inference to construct uncertainty measures for each prediction. Because exact Bayesian inference is typically intractable due to the large number of parameters in a neural network, Bayesian inference is approximated by utilizing dropout in a convolutional neural network. This research compared a traditional video classification neural network to its Bayesian equivalent based on performance and capabilities. The Bayesian network achieves higher accuracy than a comparable non-Bayesian video network, and it further provides uncertainty measures for each classification.
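    The dropout-based approximation described in this abstract is commonly realized as Monte Carlo dropout: dropout is kept active at inference time, and repeated stochastic forward passes yield a predictive mean and a spread that serves as an uncertainty measure. The sketch below uses a tiny two-layer toy network with made-up weights, not the paper's video architecture; it only illustrates the sampling mechanism.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy weights for a two-layer network (illustrative assumption; a real
    # video classifier would be a deep convolutional network).
    W1 = rng.normal(size=(8, 16))
    W2 = rng.normal(size=(16, 3))

    def forward(x, p_drop=0.5):
        """One stochastic forward pass with dropout kept ON at inference."""
        h = np.maximum(x @ W1, 0.0)              # ReLU
        mask = rng.random(h.shape) > p_drop      # random dropout mask
        h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())
        return e / e.sum()                       # softmax probabilities

    def mc_dropout_predict(x, T=100):
        """Approximate the Bayesian predictive distribution with T passes."""
        samples = np.stack([forward(x) for _ in range(T)])
        return samples.mean(axis=0), samples.std(axis=0)

    x = rng.normal(size=8)
    mean, std = mc_dropout_predict(x)
    ```

    A large per-class standard deviation flags predictions the network is uncertain about, which is exactly the capability the abstract says traditional networks lack.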

    Supervised Classification: Quite a Brief Overview

    The original problem of supervised classification considers the task of automatically assigning objects to their respective classes on the basis of numerical measurements derived from these objects. Classifiers are the tools that implement the actual functional mapping from these measurements---also called features or inputs---to the so-called class label---or output. The fields of pattern recognition and machine learning study ways of constructing such classifiers. The main idea behind supervised methods is that of learning from examples: given a number of example input-output relations, to what extent can the general mapping be learned that takes any new and unseen feature vector to its correct class? This chapter provides a basic introduction to the underlying ideas of how to approach a supervised classification problem. In addition, it provides an overview of some specific classification techniques, delves into the issues of object representation and classifier evaluation, and (very) briefly covers some variations on the basic supervised classification task that may also be of interest to the practitioner.
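    The learning-from-examples idea in this abstract can be made concrete with one of the simplest classifiers: a nearest-mean rule that learns one prototype per class from labelled examples and assigns a new feature vector to the class with the closest prototype. The data and class labels below are invented for illustration.

    ```python
    import numpy as np

    def fit_nearest_mean(X, y):
        """Learn one mean vector (prototype) per class from labelled examples."""
        classes = np.unique(y)
        return {c: X[y == c].mean(axis=0) for c in classes}

    def predict(means, x):
        """Assign a new feature vector to the class with the closest mean."""
        return min(means, key=lambda c: np.linalg.norm(x - means[c]))

    # Toy training set: two features, two classes (illustrative data).
    X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([0, 0, 1, 1])
    means = fit_nearest_mean(X, y)
    ```

    The `fit` step is the "learning from examples"; `predict` is the learned functional mapping from feature vector to class label.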

    The current approaches in pattern recognition


    Can I Trust My One-Class Classification?

    Contrary to binary and multi-class classifiers, the purpose of a one-class classifier for remote sensing applications is to map only one specific land use/land cover class of interest. Training these classifiers requires reference data exclusively for the class of interest, while training data for other classes is not required. Thus, the acquisition of reference data can be significantly reduced. However, one-class classification is fraught with uncertainty and full automation is difficult, due to the limited reference information that is available for classifier training. Thus, a user-oriented one-class classification strategy is proposed, based among other things on the visualization and interpretation of the one-class classifier outcomes during data processing. Careful interpretation of the diagnostic plots fosters understanding of the classification outcome, e.g., the class separability and the suitability of a particular threshold. In the absence of complete and representative validation data, which is typically the case in a real one-class classification application, such information is valuable for evaluating and improving the classification. The potential of the proposed strategy is demonstrated by classifying different crop types with hyperspectral data from Hyperion.
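    The core mechanics of one-class classification as described above---training on reference data for the class of interest only, then accepting or rejecting new samples against a threshold---can be sketched minimally. The distance-to-mean rule and the 95% acceptance quantile below are illustrative assumptions, not the classifier used in the paper.

    ```python
    import numpy as np

    def fit_one_class(X_pos, quantile=0.95):
        """Fit a one-class rule from positive samples only: the class mean plus
        a distance threshold chosen so that `quantile` of the training data
        would be accepted (assumed setup for illustration)."""
        mean = X_pos.mean(axis=0)
        dists = np.linalg.norm(X_pos - mean, axis=1)
        return mean, np.quantile(dists, quantile)

    def is_class_of_interest(mean, threshold, x):
        """Accept x as the class of interest if it lies within the threshold."""
        return np.linalg.norm(x - mean) <= threshold

    # Toy reference data for the single class of interest (invented values).
    X_pos = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.2]])
    mean, thr = fit_one_class(X_pos)
    ```

    Plotting the distances of all candidate pixels against the threshold is the kind of diagnostic visualization the abstract's user-oriented strategy relies on.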

    Classification avec contraintes : problématique et apprentissage d'une règle de décision par SVM (Classification with constraints: problem statement and learning of a decision rule by SVM)

    This work addresses the determination of a decision rule for a two-class problem with two constraints that set upper bounds on the class-conditional error probabilities. When decision rules exist that jointly satisfy the constraints, the chosen rule is the one that minimizes a cost combining the conditional decision probabilities. Otherwise, a rule with an additional reject class must be defined; the optimal rule is then the one that minimizes the probability of rejection. First, the decision rule is defined for the case where the probability densities of each class are known: it consists of comparing the likelihood ratio to one or two thresholds, depending on whether rejection is necessary. Second, an SVM-based method is proposed to construct a decision rule when the process is described only by a training set. The rule compares the SVM output to one or two thresholds, which are determined from the SVM outputs on a training set, either directly from the output values or from estimates of the probability densities. The bias and variance of the rejection and conditional error probabilities inherent in the size of the training set are studied.
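    The thresholding rule described in this abstract---compare a classifier score (likelihood ratio or SVM output) to one or two thresholds, rejecting in between---can be sketched as follows. The threshold values and scores are invented for illustration; in the paper they are estimated from the training set.

    ```python
    def decide(score, t_low, t_high):
        """Two-threshold reject-option rule: scores at or below t_low are
        assigned to class -1, scores at or above t_high to class +1, and
        anything in between is rejected (returned as 0)."""
        if score <= t_low:
            return -1
        if score >= t_high:
            return +1
        return 0  # reject: confidence too low to satisfy the error bounds

    # Illustrative SVM-style outputs and thresholds (assumed values).
    decisions = [decide(s, -0.2, 0.3) for s in (-0.5, 0.1, 0.8)]
    # decisions == [-1, 0, 1]
    ```

    With a single threshold (t_low == t_high) the reject region is empty, recovering the no-rejection case where the constraints are jointly satisfiable.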

    An Optimum Class-Rejective Decision Rule and Its Evaluation


    Accuracy-Rejection Curves (ARCs) for Comparing Classification Methods with a Reject Option

    Data extracted from microarrays are now considered an important source of knowledge about various diseases. Several studies based on microarray data and the use of receiver operating characteristic (ROC) graphs have compared supervised machine learning approaches. These comparisons are based on classification schemes in which all samples are classified, regardless of the degree of confidence associated with the classification of a particular sample by a given classifier. In the domain of healthcare, it is safer to refrain from classifying a sample if the confidence assigned to the classification is not high enough, rather than classifying all samples even when confidence is low. We describe an approach in which the performance of different classifiers is compared, with the possibility of rejection, based on several reject areas. Using the tradeoff between accuracy and rejection, we propose accuracy-rejection curves (ARCs) and identify three types of relationship between the ARCs of two classifiers for comparison purposes. Empirical results based on purely synthetic data, semi-synthetic data (generated from real data obtained from patients) and public microarray data for binary classification problems demonstrate the efficacy of this method.
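    The accuracy-rejection tradeoff behind an ARC can be sketched directly: progressively reject the least-confident samples and record the accuracy on those that remain. The confidence scores and correctness flags below are invented for illustration; the paper's curves are built from real classifier outputs.

    ```python
    import numpy as np

    def accuracy_rejection_curve(confidence, correct):
        """Sweep rejection from 0 upward: at each point, reject the
        least-confident samples and measure accuracy on the remainder."""
        order = np.argsort(-confidence)                 # most confident first
        correct_sorted = np.asarray(correct, dtype=float)[order]
        n = len(correct_sorted)
        points = []
        for kept in range(n, 0, -1):
            rejection_rate = 1.0 - kept / n
            accuracy = correct_sorted[:kept].mean()
            points.append((rejection_rate, accuracy))
        return points

    # Illustrative confidences and per-sample correctness (assumed values).
    conf = np.array([0.9, 0.8, 0.6, 0.55, 0.51])
    corr = np.array([1, 1, 1, 0, 0])
    arc = accuracy_rejection_curve(conf, corr)
    ```

    If one classifier's curve dominates another's at every rejection rate, it is preferable throughout; the paper's three relationship types cover dominance in each direction and crossing curves.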