Conformal prediction is a statistical framework that generates prediction
sets containing ground-truth labels with a desired coverage guarantee. The
predicted probabilities produced by machine learning models are generally
miscalibrated, leading to large prediction sets in conformal prediction. In
this paper, we empirically and theoretically show that disregarding the
probability values mitigates the undesirable effect of miscalibration. We then
propose a novel algorithm named Sorted Adaptive Prediction Sets (SAPS), which discards all the probability values
except for the maximum softmax probability. The key idea behind SAPS is to
minimize the dependence of the non-conformity score on the probability values
while retaining the uncertainty information. In this manner, SAPS can produce
sets of small size and communicate instance-wise uncertainty. Theoretically, we
provide a finite-sample coverage guarantee of SAPS and show that the expected
value of the set size from SAPS is always smaller than that of APS. Extensive experiments
validate that SAPS not only reduces the size of prediction sets but also broadly
enhances the conditional coverage rate and adaptation of prediction sets.
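
For intuition, the following is a minimal Python sketch of a SAPS-style non-conformity score consistent with the description above: it keeps only the maximum softmax probability and the label's rank, discarding the remaining probability values. The rank-penalty weight lam and the calibration snippet are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def saps_score(probs, label, lam=0.1, rng=None):
    # SAPS-style non-conformity score (sketch): uses only the maximum softmax
    # probability and the label's rank, ignoring the other probability values.
    # lam is an assumed rank-penalty hyperparameter chosen for illustration.
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform()                             # randomization term
    p_max = probs.max()                           # maximum softmax probability
    rank = int((probs > probs[label]).sum()) + 1  # 1-indexed rank of the label
    if rank == 1:
        return u * p_max                          # top-1 label: depends only on p_max
    return p_max + (rank - 2 + u) * lam           # other labels: rank-based penalty

def saps_prediction_set(probs, threshold, lam=0.1, rng=None):
    # Include every label whose score falls below the calibrated threshold.
    return [y for y in range(len(probs))
            if saps_score(probs, y, lam=lam, rng=rng) <= threshold]

# Illustrative calibration at 90% target coverage (probs_cal, labels_cal are
# held-out softmax outputs and true labels; placeholders, not real data):
#   scores = np.array([saps_score(p, y) for p, y in zip(probs_cal, labels_cal)])
#   n = len(scores)
#   q = np.ceil((n + 1) * 0.9) / n
#   threshold = np.quantile(scores, q, method="higher")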