
    Rectified softmax loss with all-sided cost sensitivity for age estimation

    In Convolutional Neural Network (ConvNet) based age estimation algorithms, softmax loss is usually chosen as the loss function directly, and the problems of Cost Sensitivity (CS), such as class imbalance and differing misclassification costs between classes, are not considered. Focusing on these problems, this paper constructs a rectified softmax loss function with all-sided CS and proposes a novel cost-sensitive ConvNet-based age estimation algorithm. Firstly, a loss function is established for each age category to address the imbalance in the number of training samples. Then, a cost matrix is defined to reflect the cost differences caused by misclassification between classes, yielding a new cost-sensitive error function. Finally, the two are merged into a rectified softmax loss function for the ConvNet model, and a corresponding Back Propagation (BP) training scheme is designed so that the ConvNet learns robust face representations for age estimation during training. It is also proved theoretically that the rectified softmax loss satisfies the general conditions required of a classification loss function. The effectiveness of the proposed method is verified by experiments on face image datasets covering different races. © 2013 IEEE
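
    A minimal sketch may help make the two ingredients concrete: a per-class weight to counter sample imbalance and a cost matrix over misclassifications. The NumPy snippet below is only an illustration under assumed names (cost_sensitive_softmax_loss, class_weights, cost_matrix); it is not the paper's rectified softmax loss or its BP training scheme.

        import numpy as np

        def cost_sensitive_softmax_loss(logits, label, class_weights, cost_matrix):
            # Numerically stable softmax over the class scores.
            z = logits - logits.max()
            p = np.exp(z) / np.exp(z).sum()
            # Cross-entropy term re-weighted by the true class's weight
            # (larger weights for under-represented age groups).
            ce = -class_weights[label] * np.log(p[label] + 1e-12)
            # Expected misclassification cost of the predicted distribution,
            # read from the cost matrix's row for the true class.
            expected_cost = float(np.dot(cost_matrix[label], p))
            return ce + expected_cost

        # Toy example: 4 age groups, group 0 is rare, and the cost of a
        # confusion grows with the age gap between true and predicted group.
        logits = np.array([1.0, 0.5, 0.2, -0.3])
        class_weights = np.array([2.0, 1.0, 1.0, 1.0])
        cost_matrix = np.abs(np.arange(4)[:, None] - np.arange(4)[None, :]).astype(float)
        print(cost_sensitive_softmax_loss(logits, 0, class_weights, cost_matrix))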

    Improved Multi-Class Cost-Sensitive Boosting via Estimation of the Minimum-Risk Class

    We present a simple unified framework for multi-class cost-sensitive boosting. The minimum-risk class is estimated directly, rather than via an approximation of the posterior distribution. Our method jointly optimizes binary weak learners and their corresponding output vectors, requiring classes to share features at each iteration. By training in a cost-sensitive manner, weak learners are invested in separating classes whose discrimination is important, at the expense of less relevant classification boundaries. Additional contributions are a family of loss functions, a proof that our algorithm is boostable in the theoretical sense, and an efficient procedure for growing decision trees for use as weak learners. We evaluate our method on a variety of datasets: a collection of synthetic planar data, common UCI datasets, MNIST digits, SUN scenes, and CUB-200 birds. Results show state-of-the-art performance across all datasets against several strong baselines, including non-boosting multi-class approaches.
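
    For orientation, the quantity being targeted is the classical Bayes minimum-risk class, the argmin over k of the expected cost sum_y C[y, k] p(y | x). The snippet below only shows this baseline posterior-based rule; the boosting method in the abstract estimates the minimum-risk class directly without first approximating the posterior. The names (minimum_risk_class, cost_matrix) are assumptions made for the example.

        import numpy as np

        def minimum_risk_class(posterior, cost_matrix):
            # Expected cost of predicting each class k: risk(k) = sum_y C[y, k] * p(y | x).
            risks = posterior @ cost_matrix
            return int(np.argmin(risks))

        # Toy 3-class example with asymmetric costs: class 1 is the
        # minimum-risk prediction even though class 0 has the highest posterior.
        posterior = np.array([0.5, 0.3, 0.2])
        cost_matrix = np.array([[0.0, 0.5, 4.0],
                                [1.0, 0.0, 1.0],
                                [2.0, 1.0, 0.0]])
        print(minimum_risk_class(posterior, cost_matrix))   # prints 1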

    Unified Binary and Multiclass Margin-Based Classification

    The notion of margin loss has been central to the development and analysis of algorithms for binary classification. To date, however, there remains no consensus as to the analogue of the margin loss for multiclass classification. In this work, we show that a broad range of multiclass loss functions, including many popular ones, can be expressed in the relative margin form, a generalization of the margin form of binary losses. The relative margin form is broadly useful for understanding and analyzing multiclass losses, as shown by our prior work (Wang and Scott, 2020, 2021). To further demonstrate the utility of this way of expressing multiclass losses, we use it to extend the seminal result of Bartlett et al. (2006) on classification-calibration of binary margin losses to multiclass. We then analyze the class of Fenchel-Young losses and expand the set of these losses that are known to be classification-calibrated.
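
    As a rough statement of the contrast the abstract draws, with notation assumed rather than taken from the paper: a binary margin loss depends on an example only through the margin, while a loss in relative margin form depends on the score vector only through the differences between the true class's score and the other scores.

        % Binary margin form: the loss is a function of the margin y f(x).
        \ell\bigl(y, f(x)\bigr) = \phi\bigl(y\, f(x)\bigr), \qquad y \in \{-1, +1\}

        % Relative margin form (multiclass analogue, notation assumed): the loss
        % depends on the score vector v = f(x) only through the relative margins.
        L(y, v) = \psi_y\bigl((v_y - v_k)_{k \neq y}\bigr), \qquad y \in \{1, \dots, K\}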