
    Beyond One-hot Encoding: lower dimensional target embedding

    Target encoding plays a central role when training Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships among labels that could be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies on a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool to find such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random-projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates. (Comment: published at Image and Vision Computing.)
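    The random-projection variant of this idea is simple enough to sketch. The following is a minimal illustration, not the authors' code: targets are embedded with a Gaussian random matrix, the network would regress to the embedded target, and prediction decodes by the nearest class prototype. All names and the inner-product decoding step are illustrative assumptions.

```python
import numpy as np

def random_label_embedding(num_classes: int, dim: int, seed: int = 0) -> np.ndarray:
    """Embed one-hot targets with a Gaussian random matrix
    (Johnson-Lindenstrauss style); each row is one class prototype."""
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 1.0 / np.sqrt(dim), size=(num_classes, dim))

# Replace the C-dimensional one-hot target with its d-dimensional embedding
# and train the network with a regression loss against it.
num_classes, d = 1000, 64
E = random_label_embedding(num_classes, d)
labels = np.array([3, 17, 42])                 # integer class labels
targets = E[labels]                            # shape (3, d) embedded targets

# At test time, decode by the nearest class prototype (largest inner product).
outputs = targets + 0.01 * np.random.randn(*targets.shape)  # stand-in for net output
pred = np.argmax(outputs @ E.T, axis=1)
print(pred)  # nearest-prototype decoding recovers [3 17 42]
```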

    The severity of stages estimation during hemorrhage using error correcting output codes method

    As beneficial components with critical impact, computer-aided decision-making systems have spread into many fields, such as economics, medicine, architecture, and agriculture. Their potential for facilitating human work propels their rapid development, and the effective decisions they provide greatly reduce the expense of labor, energy, and budget. A computer-aided decision-making system for traumatic injuries supplies suggestive opinions when dealing with injuries resulting from accidents, battle, or illness. Its functions may include judging the type of illness, triaging the wounded according to battle injuries, grading the severity of symptoms, and managing resources in the context of traumatic events. The proposed computer-aided decision-making system aims at estimating the severity of blood volume loss. Specifically, severe hemorrhage, which accompanies many traumatic injuries, is a potentially life-threatening condition requiring immediate treatment: an ongoing, significant loss of blood volume that decreases blood and oxygen perfusion of vital organs. Hemorrhage and blood loss can occur at different levels, such as mild, moderate, or severe. The proposed system will assist physicians by estimating the severity of blood volume loss and hemorrhage, so that timely measures can be taken not only to save lives but also to reduce long-term complications and the cost caused by mismatched operations and treatments. The general framework of the proposed research comprises three tasks, with several novel and transformative concepts integrated into the system. The first is preprocessing of the raw signals: adaptive filtering is adopted and customized to remove noise, and two detection algorithms (QRS complex detection and systolic/diastolic wave detection) are designed. The second is feature extraction: the system combines features from the time domain, the frequency domain, nonlinear analysis, and multi-model analysis to better represent the patterns that arise when hemorrhage happens. Third, a novel machine learning algorithm, a new version of error-correcting output codes (ECOC), is designed and investigated for high-accuracy, real-time decision making; its features and characteristics are essential for the proposed computer-aided trauma decision-making system. The proposed system is tested against the Lower Body Negative Pressure (LBNP) dataset, and the results indicate its accuracy and reliability.
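    The abstract does not detail its modified ECOC, but the generic ECOC mechanics it builds on can be sketched: each class receives a binary codeword, one binary learner is trained per code bit, and a sample is assigned to the class with the nearest codeword. Below is a minimal, hedged illustration on synthetic data using scikit-learn's OutputCodeClassifier; the estimator choice and code length are assumptions, not the paper's design.

```python
# Generic ECOC illustration (not the authors' modified ECOC): train one
# binary learner per code bit, then decode to the nearest class codeword.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# code_size > 1 yields longer codewords, hence more error-correcting margin.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=4, random_state=0)
ecoc.fit(X[:500], y[:500])
print("held-out accuracy:", ecoc.score(X[500:], y[500:]))
```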

    Simultaneous class-modelling in chemometrics: A generalization of Partial Least Squares class modelling for more than two classes by using error correcting output code matrices

    The paper presents a new methodology within the framework of so-called compliant class-models, PLS2-CM, designed to improve the performance of class-modelling in settings with more than two classes. The improvement is achieved through multi-response PLS models with the classes encoded via Error-Correcting Output Codes (ECOC), instead of the traditional class indicator variables used in chemometrics. The proposed PLS2-CM decomposes a class-modelling problem into a series of binary learners, based on a family of code matrices of different code lengths, which are evaluated to obtain simultaneous compliant class-models with the best performance. The methodology develops both a new encoding system, based on multi-criteria optimization to search for optimal coding matrices, and a new decoding system, based on probability thresholds to assign objects to class-models. The whole procedure implies that the characteristics of the dataset at hand affect the final selection of the coding matrix, and therefore the built class-models, giving rise to a data-driven strategy. Applying PLS2-CM to a variety of cases (controlled data, experimental data, and repository datasets) results in enhanced class-modelling performance, as measured by the DMCEN (Diagonal Modified Confusion Entropy) index and by sensitivity-specificity matrices. The predictive ability of the compliant class-models has also been evaluated. This work is part of project BU052P20, financed by the Junta de Castilla y León, Consejería de Educación, with the aid of European Regional Development Funds.
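    The core coupling of multi-response PLS with ECOC targets can be sketched independently of the paper's full machinery. The following is a minimal illustration with an assumed fixed code matrix and nearest-codeword decoding; PLS2-CM itself searches for optimal code matrices and decodes via probability thresholds, neither of which is reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Toy ECOC code matrix for 3 classes, code length 5 (rows = class codewords).
# The paper optimizes such matrices; this one is fixed for illustration.
M = np.array([[ 1,  1,  1, -1, -1],
              [-1,  1, -1,  1, -1],
              [-1, -1,  1,  1,  1]], dtype=float)

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 10))
y = np.repeat([0, 1, 2], 30)
X[y == 1, :3] += 2.0                     # separate the classes a little
X[y == 2, 3:6] -= 2.0

Y = M[y]                                 # ECOC-encoded multi-response targets
pls = PLSRegression(n_components=3).fit(X, Y)
Y_hat = pls.predict(X)                   # continuous codeword estimates

# Decode: assign each object to the class with the nearest codeword.
pred = np.argmin(((Y_hat[:, None, :] - M[None, :, :]) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```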

    Hierarchical Label Partitioning for Large Scale Classification

    Extreme classification, where the number of classes is very large, has received considerable attention over the last decade. Standard efficient multi-class classification approaches were not designed to deal with such large numbers of classes. A particular issue in large-scale problems is the computational classification complexity: the best multi-class approaches generally have linear complexity with respect to the number of classes, which prevents them from scaling up. Recent works have focused on hierarchical classification processes to speed up the classification of new instances. A priori information on labels is not always available, nor always useful, for building hierarchical models. Finding a suitable hierarchical organization of the labels is thus a crucial issue, as the accuracy of the model depends heavily on how labels are assigned through the label tree. In this work we propose a new algorithm that iteratively builds a hierarchical label structure, using a partitioning algorithm that simultaneously optimizes the structure's classification complexity and the label partitioning in order to achieve high classification performance. Beginning from a flat tree structure, the algorithm iteratively selects a node to expand by adding a new level of nodes between the selected node and its children, which increases the speed-up of the classification process. Once the node is selected, the best partitioning of its classes has to be computed. We propose a measure based on maximizing the expected loss of the sub-levels in order to minimize the global error of the structure. This choice forces hardly separable classes to be grouped together in the same partitions at the first levels of the tree and delays errors to deeper levels of the structure, where they have no impact on the accuracy of other classes.
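    The grouping principle behind such label trees (confusable classes share a partition near the root, deferring hard separations to deeper levels) can be illustrated without the paper's exact expected-loss criterion. The sketch below clusters classes from a confusion matrix with spectral clustering; this is an assumed stand-in for the partitioning step, not the authors' algorithm.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def partition_labels(confusion: np.ndarray, n_groups: int, seed: int = 0):
    """Split classes into groups so that easily confused classes end up
    together, mirroring the idea of pushing hard separations deeper in the
    tree. (Illustrative; the paper optimizes an expected-loss criterion.)"""
    affinity = confusion + confusion.T     # symmetric class-affinity graph
    np.fill_diagonal(affinity, 0.0)
    sc = SpectralClustering(n_clusters=n_groups, affinity="precomputed",
                            random_state=seed)
    return sc.fit_predict(affinity)

# Toy confusion matrix over 6 classes: {0,1,2} and {3,4,5} confuse internally.
C = np.eye(6) * 50
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    C[a, b] = C[b, a] = 10
C[2, 3] = C[3, 2] = 1                      # weak cross-group confusion
print(partition_labels(C, n_groups=2))     # e.g. [0 0 0 1 1 1]
```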