Rigorous guarantees on the performance of predictive algorithms are
necessary to ensure their responsible use. Previous work has largely
focused on bounding the expected loss of a predictor, but this is not
sufficient in many risk-sensitive applications where the distribution of errors
is important. In this work, we propose a flexible framework to produce a family
of bounds on quantiles of the loss distribution incurred by a predictor. Our
method takes advantage of the order statistics of the observed loss values
rather than relying on the sample mean alone. We show that a quantile is an
informative way of quantifying predictive performance, and that our framework
applies to a variety of quantile-based metrics, each targeting important
subsets of the data distribution. We analyze the theoretical properties of our
proposed method and demonstrate its ability to rigorously control loss
quantiles on several real-world datasets.

Comment: 24 pages, 4 figures. Code is available at https://github.com/jakesnell/quantile-risk-contro
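To make the order-statistic idea concrete, the following is a minimal sketch of the classical distribution-free upper confidence bound on a loss quantile, built from the k-th order statistic and a binomial tail bound. It illustrates the general technique the abstract alludes to, not the paper's exact procedure; the function name and parameters are illustrative.

```python
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(losses, beta, delta):
    """Distribution-free (1 - delta)-confidence upper bound on the
    beta-quantile of the loss distribution, via order statistics.

    For i.i.d. losses, the count of samples falling below the true
    beta-quantile is stochastically dominated by Binomial(n, beta).
    Picking the smallest k with P[Binomial(n, beta) >= k] <= delta
    makes the k-th order statistic a valid upper bound on the
    beta-quantile with probability at least 1 - delta.
    """
    losses = np.sort(np.asarray(losses))
    n = len(losses)
    for k in range(1, n + 1):
        # binom.sf(k - 1, n, beta) = P[Binomial(n, beta) >= k]
        if binom.sf(k - 1, n, beta) <= delta:
            return losses[k - 1]  # k-th order statistic (1-indexed)
    # Too few samples for a non-trivial bound at this (beta, delta).
    return np.inf

# Example: bound the 0.9-quantile of the loss at 95% confidence.
rng = np.random.default_rng(0)
sample_losses = rng.exponential(scale=1.0, size=1000)
print(quantile_upper_bound(sample_losses, beta=0.9, delta=0.05))
```

This uses only the ranks of the observed losses, never the sample mean, which is why such bounds hold without distributional assumptions; per the abstract, the paper's framework extends this style of guarantee to a family of quantile-based metrics.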