A Theoretical Analysis of NDCG Type Ranking Measures
A central problem in ranking is to design a ranking measure for evaluation of
ranking functions. In this paper we study, from a theoretical perspective, the
widely used Normalized Discounted Cumulative Gain (NDCG)-type ranking measures.
Although there are extensive empirical studies of NDCG, little is known about
its theoretical properties. We first show that, whatever the ranking function
is, the standard NDCG, which adopts a logarithmic discount, converges to 1 as
the number of items to rank goes to infinity. At first sight, this result
is very surprising: it seems to imply that NDCG cannot differentiate good and
bad ranking functions, contradicting the empirical success of NDCG in many
applications. In order to have a deeper understanding of ranking measures in
general, we propose a notion referred to as consistent distinguishability. This
notion captures the intuition that a ranking measure should have the following
property: for every pair of substantially different ranking functions, the
ranking measure can decide which one is better in a consistent manner on almost
all datasets. We show that NDCG with logarithmic discount has consistent
distinguishability although it converges to the same limit for all ranking
functions. We next characterize the set of all feasible discount functions for
NDCG according to the concept of consistent distinguishability. Specifically, we
show that whether NDCG has consistent distinguishability depends on how fast
the discount decays, and 1/r is a critical point. We then turn to the cut-off
version of NDCG, i.e., NDCG@k. We analyze the distinguishability of NDCG@k for
various choices of k and the discount functions. Experimental results on real
Web search datasets agree well with the theory.
Comment: COLT 2013
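
As a concrete illustration of the first result above, here is a minimal Python sketch (ours, not the paper's code) of NDCG with the standard logarithmic discount 1/log2(r+1); the exponential gain 2^rel - 1 and the function names are assumptions on our part. Even a uniformly random ranking drifts toward NDCG = 1 as the number of items grows, exactly as the theorem predicts, though the convergence is slow.

```python
import math
import random

def dcg(relevances):
    """Discounted cumulative gain with logarithmic discount 1/log2(rank+1)
    and the common exponential gain 2^rel - 1."""
    return sum((2 ** rel - 1) / math.log2(rank + 1)
               for rank, rel in enumerate(relevances, start=1))

def ndcg(relevances):
    """NDCG: DCG of the given order divided by the DCG of the ideal
    (relevance-sorted) order of the same items."""
    ideal_dcg = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

# Regardless of how the items are ranked (here: uniformly at random),
# NDCG with logarithmic discount creeps toward 1 as n grows.
random.seed(0)
for n in (10, 100, 10_000, 100_000):
    grades = [random.randint(0, 4) for _ in range(n)]  # graded relevance 0..4
    print(f"n={n:>6}  NDCG(random ranking)={ndcg(grades):.4f}")
```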
Convex Calibration Dimension for Multiclass Loss Matrices
We study consistency properties of surrogate loss functions for general
multiclass learning problems, defined by a general multiclass loss matrix. We
extend the notion of classification calibration, which has been studied for
binary and multiclass 0-1 classification problems (and for certain other
specific learning problems), to the general multiclass setting, and derive
necessary and sufficient conditions for a surrogate loss to be calibrated with
respect to a loss matrix in this setting. We then introduce the notion of
convex calibration dimension of a multiclass loss matrix, which measures the
smallest 'size' of a prediction space in which it is possible to design a
convex surrogate that is calibrated with respect to the loss matrix. We derive
both upper and lower bounds on this quantity, and use these results to analyze
various loss matrices. In particular, we apply our framework to study various
subset ranking losses, and use the convex calibration dimension as a tool to
show both the existence and non-existence of various types of convex calibrated
surrogates for these losses. Our results strengthen recent results of Duchi et
al. (2010) and Calauzenes et al. (2012) on the non-existence of certain types
of convex calibrated surrogates in subset ranking. We anticipate the convex
calibration dimension may prove to be a useful tool in the study and design of
surrogate losses for general multiclass learning problems.
Comment: Accepted to JMLR, pending editing
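
For readers new to calibration, the notion at the heart of this abstract can be sketched in one line (a standard excess-risk formulation in our notation; the paper's precise pointwise conditions may differ). A surrogate loss psi, used with a decoding map pred from the surrogate prediction space back to the loss matrix's predictions, is calibrated with respect to L when optimizing the surrogate forces optimality in the target loss:

```latex
% A standard excess-risk formulation of calibration (notation ours):
% for every data distribution D and every sequence of predictors f_m,
\[
  R^{D}_{\psi}(f_m) \longrightarrow \inf_{f} R^{D}_{\psi}(f)
  \quad \Longrightarrow \quad
  R^{D}_{L}\bigl(\mathrm{pred} \circ f_m\bigr) \longrightarrow \inf_{h} R^{D}_{L}(h),
\]
% where R^D_\psi and R^D_L denote the expected surrogate and target
% losses under D. The convex calibration dimension is then the smallest
% dimension of a surrogate prediction space admitting a convex such psi.
```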
On the (non-)existence of convex, calibrated surrogate losses for ranking
We study surrogate losses for learning to rank, in a framework where the rankings are induced by scores and the task is to learn the scoring function. We focus on the calibration of surrogate losses with respect to a ranking evaluation metric, where calibration is equivalent to the guarantee that near-optimal values of the surrogate risk imply near-optimal values of the risk defined by the evaluation metric. We prove that if a surrogate loss is a convex function of the scores, then it is not calibrated with respect to two evaluation metrics widely used for search engine evaluation, namely Average Precision and Expected Reciprocal Rank. We also show that such convex surrogate losses cannot be calibrated with respect to the Pairwise Disagreement, an evaluation metric used when learning from pairwise preferences. Our results cast light on the intrinsic difficulty of some ranking problems, as well as on the limitations of learning-to-rank algorithms based on the minimization of a convex surrogate risk.
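
Since the impossibility results are tied to specific evaluation metrics, a short sketch of how those metrics are computed may help. The following implements the standard definitions of Average Precision (binary relevance) and Expected Reciprocal Rank (Chapelle et al., 2009); it is our illustration, not code from the paper, and the function names are ours.

```python
def average_precision(rels):
    """Average Precision for binary relevance labels in ranked order:
    the mean of precision@r over the ranks r that hold relevant items."""
    hits, total = 0, 0.0
    for rank, rel in enumerate(rels, start=1):
        if rel:
            hits += 1
            total += hits / rank          # precision at this rank
    return total / hits if hits else 0.0

def expected_reciprocal_rank(grades, g_max=4):
    """Expected Reciprocal Rank for graded relevance in ranked order:
    ERR = sum_r (1/r) * R_r * prod_{i<r} (1 - R_i), where the stop
    probability is R_i = (2^g_i - 1) / 2^g_max."""
    err, p_continue = 0.0, 1.0
    for rank, g in enumerate(grades, start=1):
        stop = (2 ** g - 1) / 2 ** g_max
        err += p_continue * stop / rank
        p_continue *= 1.0 - stop
    return err

print(average_precision([1, 0, 1, 0]))         # (1/1 + 2/3) / 2 = 0.8333...
print(expected_reciprocal_rank([4, 0, 2, 1]))  # ~0.94
```

Note that both metrics depend on the scores only through the induced ordering, so they are piecewise constant as functions of the scores; this is the kind of target structure that, per the results above, no convex surrogate can be calibrated against.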