
    Surrogate regret bounds for generalized classification performance metrics

    We consider optimization of generalized performance metrics for binary classification by means of surrogate losses. We focus on a class of metrics which are linear-fractional functions of the false positive and false negative rates (examples include the $F_\beta$-measure, the Jaccard similarity coefficient, the AM measure, and many others). Our analysis concerns the following two-step procedure. First, a real-valued function $f$ is learned by minimizing a surrogate loss for binary classification on the training sample. It is assumed that the surrogate loss is a strongly proper composite loss function (examples include the logistic loss, squared-error loss, exponential loss, etc.). Then, given $f$, a threshold $\widehat{\theta}$ is tuned on a separate validation sample by direct optimization of the target performance metric. We show that the regret of the resulting classifier (obtained by thresholding $f$ at $\widehat{\theta}$), measured with respect to the target metric, is upper-bounded by the regret of $f$ measured with respect to the surrogate loss. We also extend our results to cover multilabel classification and provide regret bounds for micro- and macro-averaging measures. Our findings are further analyzed in a computational study on both synthetic and real data sets.
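    The two-step procedure described above can be illustrated with a short sketch. In the following Python snippet, the synthetic data, the logistic-regression model, and the choice of F1 as the target metric are illustrative assumptions rather than the paper's experimental setup: a real-valued scorer is first learned by minimizing the logistic loss (a strongly proper composite surrogate), and the threshold is then tuned on a validation split by directly maximizing F1.

    # A minimal sketch of the two-step procedure, on hypothetical synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0.8).astype(int)   # imbalanced labels

    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    # Step 1: learn a real-valued scorer f by minimizing the logistic (surrogate) loss.
    clf = LogisticRegression().fit(X_tr, y_tr)
    scores_val = clf.decision_function(X_val)

    # Step 2: tune the threshold on the validation sample by directly
    # optimizing the target performance metric (here, the F1-measure).
    thresholds = np.unique(scores_val)
    f1_values = [f1_score(y_val, (scores_val >= t).astype(int)) for t in thresholds]
    theta_hat = thresholds[int(np.argmax(f1_values))]
    print(f"tuned threshold: {theta_hat:.3f}, validation F1: {max(f1_values):.3f}")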

    Calibrated Surrogate Losses for Classification with Label-Dependent Costs

    We present surrogate regret bounds for arbitrary surrogate losses in the context of binary classification with label-dependent costs. Such bounds relate a classifier's risk, assessed with respect to a surrogate loss, to its cost-sensitive classification risk. Two approaches to surrogate regret bounds are developed. The first is a direct generalization of Bartlett et al. [2006], who focus on margin-based losses and cost-insensitive classification, while the second adopts the framework of Steinwart [2007] based on calibration functions. Nontrivial surrogate regret bounds are shown to exist precisely when the surrogate loss satisfies a "calibration" condition that is easily verified for many common losses. We apply this theory to the class of uneven margin losses, and characterize when these losses are properly calibrated. The uneven hinge, squared error, exponential, and sigmoid losses are then treated in detail.
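    As a hedged illustration of the label-dependent-cost setting discussed above (not the paper's exact parameterization of uneven margin losses), the sketch below defines a cost-weighted hinge surrogate alongside the cost-sensitive 0-1 loss it stands in for; the cost values c_fn and c_fp are illustrative placeholders.

    import numpy as np

    def cost_weighted_hinge(y, f, c_fn=2.0, c_fp=1.0):
        """Surrogate loss: y in {-1, +1}, f a real-valued score; costs are illustrative."""
        pos = c_fn * np.maximum(0.0, 1.0 - f)   # penalty when the true label is +1
        neg = c_fp * np.maximum(0.0, 1.0 + f)   # penalty when the true label is -1
        return np.where(y == 1, pos, neg)

    def cost_sensitive_01(y, f, c_fn=2.0, c_fp=1.0):
        """Target loss: misclassification costs that depend on the true label."""
        pred = np.sign(f)
        return np.where((y == 1) & (pred <= 0), c_fn,
                        np.where((y == -1) & (pred > 0), c_fp, 0.0))

    y = np.array([1, 1, -1, -1])
    f = np.array([0.4, -0.2, -1.5, 0.3])
    print(cost_weighted_hinge(y, f))   # surrogate losses per example
    print(cost_sensitive_01(y, f))     # cost-sensitive 0-1 losses per example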

    Threshold Choice Methods: the Missing Link

    Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relation among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we get the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC, and the refinement loss, respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. To assist in this procedure, we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
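    The score-driven and rate-driven threshold choice methods mentioned above can be sketched as follows. In this illustrative Python snippet the scores are synthetic and calibrated, and the loss normalization is a simplification rather than the paper's exact definition; the expected loss of each method is averaged over a uniform range of operating conditions, which the abstract relates to the Brier score and to the AUC, respectively.

    import numpy as np

    def cost_loss(c, y, y_hat):
        """Cost-weighted error at operating condition c (cost proportion of a false negative)."""
        fn = np.mean((y == 1) & (y_hat == 0))
        fp = np.mean((y == 0) & (y_hat == 1))
        return 2.0 * (c * fn + (1.0 - c) * fp)

    def score_driven(scores, c):
        return (scores >= c).astype(int)        # threshold the (calibrated) score at c

    def rate_driven(scores, c):
        t = np.quantile(scores, 1.0 - c)        # predict roughly a fraction c as positive
        return (scores > t).astype(int)

    rng = np.random.default_rng(1)
    scores = rng.uniform(size=1000)             # calibrated probability estimates
    y = rng.binomial(1, scores)                 # labels consistent with the scores

    cs = np.linspace(0.01, 0.99, 99)            # uniform range of operating conditions
    print("score-driven:", np.mean([cost_loss(c, y, score_driven(scores, c)) for c in cs]))
    print("rate-driven: ", np.mean([cost_loss(c, y, rate_driven(scores, c)) for c in cs]))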

    Improved Heat Demand Prediction of Individual Households

    One of the options to increase the energy efficiency of the current electricity network is the use of a Virtual Power Plant. By using multiple small (micro)generators distributed over the country, electricity can be produced more efficiently, since these small generators are more efficient and are located where the energy is needed. In this paper we focus on micro Combined Heat and Power generators. For such generators, the production capacity is determined and limited by the heat demand. To keep the global electricity network stable, information about the production capacity of the heat-driven generators is required in advance. We present methods to perform heat demand prediction for individual households based on neural network techniques. Using different input sets and a so-called sliding window, the quality of the predictions can be improved significantly. Simulations show that these improvements have a positive impact on controlling the distributed microgenerators.
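    The sliding-window idea can be sketched as follows: past demand samples form the input vector for predicting the next value. The data, window length, and network size in this Python snippet are hypothetical placeholders, not the configuration used in the paper.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    hours = np.arange(24 * 60)                              # 60 days of hourly samples
    demand = 2.0 + np.sin(2 * np.pi * hours / 24) + 0.3 * rng.normal(size=hours.size)

    window = 24                                             # previous 24 hours as input
    X = np.stack([demand[i:i + window] for i in range(demand.size - window)])
    y = demand[window:]

    split = int(0.8 * len(X))                               # simple chronological split
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X[:split], y[:split])
    print("test MAE:", np.mean(np.abs(model.predict(X[split:]) - y[split:])))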