196 research outputs found
Surrogate regret bounds for generalized classification performance metrics
We consider optimization of generalized performance metrics for binary
classification by means of surrogate losses. We focus on a class of metrics,
which are linear-fractional functions of the false positive and false negative
rates (examples of which include the $F_\beta$-measure, Jaccard similarity
coefficient, AM measure, and many others). Our analysis concerns the following
two-step procedure. First, a real-valued function $f$ is learned by minimizing
a surrogate loss for binary classification on the training sample. It is
assumed that the surrogate loss is a strongly proper composite loss function
(examples of which include logistic loss, squared-error loss, exponential loss,
etc.). Then, given $f$, a threshold $\hat{\theta}$ is tuned on a separate
validation sample, by direct optimization of the target performance metric. We
show that the regret of the resulting classifier (obtained from thresholding
$f$ on $\hat{\theta}$) measured with respect to the target metric is
upper-bounded by the regret of $f$ measured with respect to the surrogate loss.
We also extend our results to cover multilabel classification and provide
regret bounds for micro- and macro-averaging measures. Our findings are further
analyzed in a computational study on both synthetic and real data sets.
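To make the two-step procedure concrete, here is a minimal sketch in Python (assuming NumPy and scikit-learn are available), with logistic loss as the strongly proper composite surrogate and the F1-measure, a linear-fractional function of the false positives and false negatives, as the target metric; the synthetic data, the candidate-threshold grid, and all names are illustrative rather than the paper's exact protocol.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def f1(y_true, y_pred):
        # F1 = 2*TP / (2*TP + FP + FN): linear-fractional in FP and FN.
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        return 2.0 * tp / (2.0 * tp + fp + fn) if tp > 0 else 0.0

    def tune_threshold(scores, y, metric):
        # Direct optimization of the target metric on the validation
        # sample: every observed score is a candidate threshold.
        return max(np.unique(scores),
                   key=lambda t: metric(y, (scores >= t).astype(int)))

    # Synthetic, imbalanced data split into train and validation halves.
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.0).astype(int)
    X_tr, y_tr, X_va, y_va = X[:1000], y[:1000], X[1000:], y[1000:]

    # Step 1: learn a real-valued function f by minimizing a strongly
    # proper composite surrogate (logistic loss) on the training sample.
    f = LogisticRegression().fit(X_tr, y_tr)
    scores = f.decision_function(X_va)

    # Step 2: tune the threshold on the separate validation sample by
    # directly maximizing the target metric.
    theta = tune_threshold(scores, y_va, f1)
    print("threshold:", theta,
          "validation F1:", f1(y_va, (scores >= theta).astype(int)))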
Optimizing Nondecomposable Data Dependent Regularizers via Lagrangian Reparameterization offers Significant Performance and Efficiency Gains
Data dependent regularization is known to benefit a wide variety of problems
in machine learning. Often, these regularizers cannot be easily decomposed into
a sum over a finite number of terms, e.g., a sum over individual example-wise
terms. The $F_\beta$ measure, Area under the ROC curve (AUCROC) and Precision
at a fixed recall (P@R) are some prominent examples that are used in many
applications. We find that for most medium to large sized datasets, scalability
issues severely limit our ability to leverage the benefits of such
regularizers. Importantly, despite some recent progress, the key technical
impediment is that such objectives remain difficult to optimize via
backpropagation. While an efficient general-purpose strategy for
this problem still remains elusive, in this paper, we show that for many
data-dependent nondecomposable regularizers that are relevant in applications,
sizable gains in efficiency are possible with minimal code-level changes; in
other words, no specialized tools or numerical schemes are needed. Our
procedure involves a reparameterization followed by a partial dualization --
this leads to a formulation that has provably cheap projection operators. We
present a detailed analysis of runtime and convergence properties of our
algorithm. On the experimental side, we show that a direct use of our scheme
significantly improves the state-of-the-art IoU measures reported for the
MSCOCO Stuff segmentation dataset.
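As a rough illustration of the reparameterize-then-partially-dualize recipe (a generic sketch under assumed details, not the paper's actual algorithm), the Python snippet below trains a logistic model subject to a data-dependent recall constraint via a Lagrange multiplier: gradient descent on the weights alternates with dual ascent on the multiplier, and the projection step is cheap, here just clipping the multiplier to the nonnegative orthant. The smooth recall surrogate, the target value, and all names are assumptions made for the example.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data with a rare positive class.
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] > 0.8).astype(float)
    pos = y == 1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    w = np.zeros(4)          # primal variables (model weights)
    lam = 0.0                # dual variable for the dualized constraint
    target_recall = 0.9
    lr_w, lr_lam = 0.1, 0.05

    for step in range(200):
        p = sigmoid(X @ w)
        # Smooth surrogate for recall: mean predicted score on positives.
        surr_recall = p[pos].mean()
        # Primal descent on logistic loss plus the dualized constraint
        # term lam * (target_recall - surr_recall).
        grad_loss = X.T @ (p - y) / len(y)
        grad_con = -lam * X[pos].T @ (p[pos] * (1 - p[pos])) / pos.sum()
        w -= lr_w * (grad_loss + grad_con)
        # Dual ascent on the constraint violation, followed by the cheap
        # projection onto the feasible set of multipliers: lam >= 0.
        lam = max(0.0, lam + lr_lam * (target_recall - surr_recall))

    print("surrogate recall:", surr_recall, "multiplier:", lam)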