Extreme multi-label classification (XMLC) is the task of selecting a small
subset of relevant labels from a very large set of possible labels. As such, it is characterized by a long tail of labels, i.e., most labels have very few positive instances. Under standard performance measures such as precision@k, a classifier
can ignore tail labels and still report good performance. However, it is often argued that correct predictions in the tail are more "interesting" or "rewarding," yet the community has not settled on a metric that captures this intuition. The existing propensity-scored metrics fall short of this goal by conflating the problems of long-tail and missing labels. In this
paper, we analyze generalized metrics budgeted "at k" as an alternative
solution. To tackle the challenging problem of optimizing these metrics, we
formulate it in the expected test utility (ETU) framework, which aims to
optimize the expected performance on a fixed test set. We derive optimal
prediction rules and construct computationally efficient approximations with
provable regret guarantees and robustness against model misspecification. Our
algorithm, based on block coordinate ascent, scales effortlessly to XMLC
problems and obtains promising results in terms of long-tail performance.
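
For concreteness, the two central objects can be written down as follows; the notation ($m$ labels, budget $k$, metric $\Psi$, prediction matrix $\hat{Y}$) is ours and serves only as an illustration. For a single instance with labels $y \in \{0,1\}^m$ and a prediction $\hat{y} \in \{0,1\}^m$ constrained to $\|\hat{y}\|_1 = k$,

\[
  \mathrm{precision@}k(\hat{y}, y) = \frac{1}{k} \sum_{j=1}^{m} \hat{y}_j \, y_j ,
\]

and the ETU objective on a fixed test set $x_1, \dots, x_n$ asks for the joint prediction matrix

\[
  \hat{Y}^\star = \operatorname*{arg\,max}_{\hat{Y} \in \{0,1\}^{n \times m} :\, \|\hat{Y}_{i,:}\|_1 = k \;\forall i} \; \mathbb{E}_{Y \sim P(\cdot \mid x_1, \dots, x_n)} \big[ \Psi(\hat{Y}, Y) \big],
\]

where $\Psi$ is the generalized metric budgeted at $k$ (e.g., a macro-averaged utility) and the expectation is over the unknown test labels.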
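
The block coordinate ascent strategy can be sketched in code. The following is a minimal illustration, not the authors' implementation: it assumes estimated label marginals eta[i, j] ≈ P(y_ij = 1 | x_i) from some probabilistic classifier, takes k < m, and optimizes a simple expected macro-precision@k surrogate in which the expectation of each per-label ratio is approximated by the ratio of expectations.

import numpy as np

def bca_macro_precision_at_k(eta, k, n_epochs=3):
    """Block coordinate ascent (sketch) for an expected macro-precision@k
    surrogate.  eta[i, j] is an estimated marginal P(y_ij = 1 | x_i);
    one coordinate block = one instance's length-m prediction row."""
    n, m = eta.shape
    # Initialize each row with its k most probable labels.
    Y = np.zeros((n, m), dtype=bool)
    top = np.argpartition(-eta, k, axis=1)[:, :k]
    np.put_along_axis(Y, top, True, axis=1)

    tp = (Y * eta).sum(axis=0)            # expected true positives per label
    pred = Y.sum(axis=0).astype(float)    # number of predictions per label
    for _ in range(n_epochs):
        for i in range(n):
            # Take row i out of the per-label statistics.
            tp -= Y[i] * eta[i]
            pred -= Y[i]
            # Marginal gain in sum_j tp_j / pred_j from predicting label j here.
            old = np.divide(tp, pred, out=np.zeros_like(tp), where=pred > 0)
            gain = (tp + eta[i]) / (pred + 1.0) - old
            # Re-select the k labels with the largest gains; this block update
            # is exact because the surrogate decomposes over labels.
            Y[i] = False
            Y[i, np.argpartition(-gain, k)[:k]] = True
            # Put the updated row back.
            tp += Y[i] * eta[i]
            pred += Y[i]
    return Y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Y = bca_macro_precision_at_k(rng.random((100, 50)), k=5)
    assert (Y.sum(axis=1) == 5).all()  # every row respects the budget

Because each block update is exact for this decomposable surrogate, every pass can only improve the surrogate objective; a full pass costs O(nm) time plus the per-row top-k selections.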