4 research outputs found

    Half-AUC for the evaluation of sensitive or specific classifiers

    This paper describes a simple, non-parametric variant of the area under the receiver operating characteristic (ROC) curve (AUC), which we call half-AUC (HAUC). By measuring AUC in two halves, first where the true positive rate (TPR) is greater than the true negative rate (TNR) and then where TPR is less than TNR, we obtain measures of a classifier's overall sensitivity (HAUC(Se)) and specificity (HAUC(Sp)), respectively. We show that these HAUC measures can be interpreted as the probability of correct ranking under the constraint that one class must have a higher detection rate than the other. We then describe application domains where this constraint is appropriate and hence where HAUC may be superior to AUC. We show examples where HAUC discriminates between ROC curves both when one curve dominates another and when two curves cross but have equivalent AUC.
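    As an informal illustration (not code from the paper), the sketch below splits the empirical AUC at the anti-diagonal TPR = 1 - FPR: the area accumulated where TPR > TNR is taken as HAUC(Se) and the rest as HAUC(Sp), so the two halves sum to the ordinary AUC. The grid size, the piecewise-linear interpolation of the ROC curve, and the absence of any per-half rescaling are assumptions; the paper's exact normalization may differ.

```python
# A minimal sketch of half-AUC, assuming HAUC(Se)/HAUC(Sp) are the parts of
# the area under the empirical ROC curve lying on either side of the
# anti-diagonal TPR = 1 - FPR (i.e., TPR = TNR). Not the paper's code.
import numpy as np
from sklearn.metrics import roc_curve

def half_auc(y_true, y_score, n_grid=10_000):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    grid = np.linspace(0.0, 1.0, n_grid)
    roc = np.interp(grid, fpr, tpr)     # piecewise-linear ROC curve
    se_half = roc > 1.0 - grid          # region where TPR > TNR
    hauc_se = np.trapz(np.where(se_half, roc, 0.0), grid)
    hauc_sp = np.trapz(np.where(~se_half, roc, 0.0), grid)
    return hauc_se, hauc_sp             # sum equals AUC, up to grid error
```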

    Deep ROC Analysis and AUC as Balanced Average Accuracy to Improve Model Selection, Understanding and Interpretation

    Optimal performance is critical for decision-making tasks from medicine to autonomous driving; however, common performance measures may be too general or too specific. For binary classifiers, diagnostic tests, or prognosis at a time point, measures such as the area under the receiver operating characteristic curve or the area under the precision-recall curve are too general because they include unrealistic decision thresholds. On the other hand, measures such as accuracy, sensitivity, or the F1 score are taken at a single threshold and reflect a single probability or predicted risk rather than a range of individuals or risks. We propose a method in between, deep ROC analysis, which examines groups of probabilities or predicted risks for more insightful analysis. We translate esoteric measures into familiar terms: AUC and the normalized concordant partial AUC are balanced average accuracy (a new finding); the normalized partial AUC is average sensitivity; and the normalized horizontal partial AUC is average specificity. Along with post-test measures, we provide a method that can improve model selection in some cases and provide interpretation and assurance for patients in each risk group. We demonstrate deep ROC analysis in two case studies and provide a toolkit in Python.
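    As a concrete (unofficial) reading of those identities, the sketch below evaluates one risk group defined by an FPR window: the normalized partial AUC as average sensitivity, the normalized horizontal partial AUC over the matching TPR window as average specificity, and their mean as a balanced average accuracy. The helper name and the interpolation details are assumptions, not the authors' Python toolkit.

```python
# Hypothetical helper, not the authors' toolkit: group-wise averages for
# the risk group whose FPR lies in [fpr_lo, fpr_hi]. Assumes the empirical
# ROC curve rises over the window (so the TPR window is non-degenerate).
import numpy as np
from sklearn.metrics import roc_curve

def group_measures(y_true, y_score, fpr_lo, fpr_hi, n_grid=10_000):
    fpr, tpr, _ = roc_curve(y_true, y_score)
    f = np.linspace(fpr_lo, fpr_hi, n_grid)
    roc_f = np.interp(f, fpr, tpr)                     # TPR over the FPR window
    avg_sens = np.trapz(roc_f, f) / (fpr_hi - fpr_lo)  # normalized partial AUC
    # matching TPR window for the horizontal (specificity) average
    t = np.linspace(roc_f[0], roc_f[-1], n_grid)
    fpr_t = np.interp(t, tpr, fpr)                     # FPR as a function of TPR
    avg_spec = np.trapz(1.0 - fpr_t, t) / (t[-1] - t[0])   # normalized horizontal pAUC
    return avg_sens, avg_spec, 0.5 * (avg_sens + avg_spec)  # balanced average accuracy
```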

    Large-scale Optimization of Partial AUC in a Range of False Positive Rates

    The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning. However, it summarizes the true positive rates (TPRs) over all false positive rates (FPRs) in the ROC space, which may include FPRs with no practical relevance in some applications. The partial AUC, as a generalization of the AUC, summarizes the TPRs only over a specific range of FPRs and is thus a more suitable performance measure in many real-world situations. Although partial AUC optimization over a range of FPRs has been studied, existing algorithms are not scalable to big data and not applicable to deep learning. To address this challenge, we cast the problem as a non-smooth difference-of-convex (DC) program for any smooth predictive function (e.g., a deep neural network), which allows us to develop an efficient approximate gradient descent method based on the Moreau envelope smoothing technique, inspired by recent advances in non-smooth DC optimization. To process large data efficiently, we use a stochastic block coordinate update in our algorithm. Our proposed algorithm can also be used to minimize the sum of ranked range loss, which likewise lacks efficient solvers. We establish a complexity of $\tilde{O}(1/\epsilon^6)$ for finding a nearly $\epsilon$-critical solution. Finally, we numerically demonstrate the effectiveness of our proposed algorithms for both partial AUC maximization and sum of ranked range loss minimization.
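    The paper's DC program and Moreau-envelope smoothing are beyond a short snippet, but the objective can be made concrete. The sketch below is a naive differentiable surrogate for partial AUC over FPR in [alpha, beta]: it keeps only the negatives whose ranks fall in that FPR band and averages a pairwise logistic loss against the positives. The function name and the logistic surrogate are illustrative assumptions, not the paper's algorithm.

```python
# Naive pairwise surrogate for partial AUC in an FPR range; assumes
# 0 <= alpha < beta <= 1 and at least one negative in the band. This is
# NOT the paper's DC/Moreau-envelope method, only the quantity it targets.
import torch

def partial_auc_loss(pos_scores, neg_scores, alpha=0.0, beta=0.1):
    n = neg_scores.numel()
    lo, hi = int(alpha * n), max(int(beta * n), 1)
    # negatives ranked in [lo, hi) from the top correspond to FPR in [alpha, beta)
    band_neg = torch.topk(neg_scores, hi).values[lo:]
    # logistic pairwise loss: a smooth upper bound on the pAUC ranking error
    diffs = pos_scores.unsqueeze(1) - band_neg.unsqueeze(0)
    return torch.nn.functional.softplus(-diffs).mean()
```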

    Optimal cutoff points for classification in diagnostic studies: new contributions and software development

    Continuous diagnostic tests (biomarkers or risk markers) are often used to discriminate between healthy and diseased populations. For the clinical application of such tests, the key issue is how to select an appropriate cutpoint or discrimination value c that defines positive and negative test results. In general, individuals with a diagnostic test value smaller than c are classified as healthy and otherwise as diseased. Several methods have been proposed in the literature to select the threshold value c according to different criteria of optimality. Among them, one of the methods most used in clinical practice is the Symmetry point, which maximizes both types of correct classification simultaneously. From a graphical viewpoint, the Symmetry point corresponds to the operating point where the Receiver Operating Characteristic (ROC) curve intersects the diagonal line through the points (0,1) and (1,0). However, this cutpoint is valid only when the error of misclassifying a diseased patient has the same severity as the error of misclassifying a healthy patient. Since this may not be the case in practice, an important issue in assessing the clinical effectiveness of a biomarker is to take into account the costs associated with the decisions made when selecting the threshold value. Moreover, to facilitate the selection of the optimal cutoff point in clinical practice, it is essential to have software that implements the existing optimality criteria in a user-friendly environment. Another interesting issue arises when the marker shows an irregular distribution, with diseased subjects dominating in noncontiguous regions. Using a single cutpoint, as is common practice in traditional ROC analysis, would not be appropriate in these scenarios because it would lead to erroneous conclusions and would not take full advantage of the intrinsic classificatory capacity of the marker.
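    As a minimal sketch (not the thesis software), the snippet below picks the empirical Symmetry point: the candidate threshold at which sensitivity and specificity are closest, i.e., where the ROC curve meets the diagonal through (0,1) and (1,0). The optional cost_ratio parameter is an assumed generalization that reweights the two misclassification errors, in the spirit of the cost discussion above.

```python
# Minimal sketch of the Symmetry point on an empirical ROC curve.
# cost_ratio = 1 seeks Se == Sp; other values weight the two error types
# differently (an assumption about how costs would enter, not the thesis').
import numpy as np
from sklearn.metrics import roc_curve

def symmetry_point(y_true, y_score, cost_ratio=1.0):
    fpr, tpr, thr = roc_curve(y_true, y_score)
    sens, spec = tpr, 1.0 - fpr
    i = np.argmin(np.abs(cost_ratio * (1.0 - spec) - (1.0 - sens)))
    return thr[i], sens[i], spec[i]   # chosen cutoff and its Se, Sp
```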