We study the key framework of learning with abstention in the multi-class
classification setting, in which the learner can choose to abstain from
making a prediction at some pre-defined cost. We present a series of new
theoretical and algorithmic results for this learning problem in the
predictor-rejector framework. We introduce several new families of surrogate
losses for which we prove strong non-asymptotic and hypothesis set-specific
consistency guarantees, thereby positively resolving two existing open
questions. These guarantees provide upper bounds on the estimation error of the
abstention loss function in terms of that of the surrogate loss.
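For concreteness, the abstention loss in the predictor-rejector framework can
be written in its standard form (the notation below is illustrative rather
than taken verbatim from the paper):
\[
L_{\mathrm{abs}}(h, r, x, y) \;=\; \mathbb{1}_{h(x) \neq y}\,\mathbb{1}_{r(x) > 0} \;+\; c\,\mathbb{1}_{r(x) \leq 0},
\]
where $h$ is the predictor, $r$ is the rejector, abstention corresponds to
$r(x) \leq 0$, and $c \in (0, 1)$ is the pre-defined abstention cost.
Schematically, and up to minimizability-gap terms, the guarantees bound the
abstention estimation error by a non-decreasing function $\Gamma$ of the
surrogate estimation error:
\[
\mathcal{E}_{L_{\mathrm{abs}}}(h, r) - \mathcal{E}^{*}_{L_{\mathrm{abs}}}(\mathcal{H}, \mathcal{R}) \;\leq\; \Gamma\bigl(\mathcal{E}_{\ell}(h, r) - \mathcal{E}^{*}_{\ell}(\mathcal{H}, \mathcal{R})\bigr),
\]
for a surrogate loss $\ell$ and hypothesis sets $\mathcal{H}$ and $\mathcal{R}$.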
We analyze both a single-stage setting, where the predictor and rejector are
learned simultaneously, and a two-stage setting, crucial in applications,
where the predictor is first learned using a standard surrogate loss such as
cross-entropy and the rejector is learned in a second stage. These guarantees
suggest new multi-class abstention algorithms
based on minimizing these surrogate losses.
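As a rough illustration of the two-stage approach, the sketch below first fits
a predictor with the cross-entropy loss and then selects a confidence-threshold
rejector on held-out data against the abstention cost $c$; the threshold rule
is a simplification standing in for the learned rejectors analyzed here, and
the dataset and all names are illustrative assumptions.

```python
# Two-stage abstention sketch (illustrative, not the paper's exact algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

c = 0.2  # pre-defined abstention cost (illustrative value)

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Stage 1: learn the predictor with a standard surrogate loss (cross-entropy).
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stage 2: pick a rejection threshold on held-out confidence scores.
proba = clf.predict_proba(X_val)
conf = proba.max(axis=1)    # confidence of the top-scoring class
pred = proba.argmax(axis=1)

def abstention_risk(tau):
    """Empirical abstention loss: cost c on rejected points, 0/1 loss otherwise."""
    reject = conf < tau
    return np.where(reject, c, (pred != y_val).astype(float)).mean()

taus = np.linspace(0.0, 1.0, 101)
best_tau = min(taus, key=abstention_risk)
print(f"threshold {best_tau:.2f}, empirical abstention risk "
      f"{abstention_risk(best_tau):.3f}")
```

In the algorithms studied here, the second stage instead learns the rejector by
minimizing one of the proposed surrogate losses, which is the setting the stated
guarantees cover.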
We also report the results of extensive experiments comparing these algorithms
to current state-of-the-art algorithms on the CIFAR-10, CIFAR-100, and SVHN
datasets. Our results empirically demonstrate the benefit of our new surrogate
losses and show the remarkable performance of our broadly applicable two-stage
abstention algorithm.