On the Expected Size of Conformal Prediction Sets
While conformal predictors reap the benefits of rigorous statistical
guarantees on their error frequency, the size of their corresponding prediction
sets is critical to their practical utility. Unfortunately, there is currently
a lack of finite-sample analysis and guarantees for their prediction set sizes.
To address this shortfall, we theoretically quantify the expected size of the
prediction sets under the split conformal prediction framework. As this precise
formulation cannot usually be calculated directly, we further derive point
estimates and high-probability interval bounds that can be empirically
computed, providing a practical method for characterizing the expected set
size. We corroborate the efficacy of our results with experiments on real-world
datasets for both regression and classification problems.
Comment: International Conference on Artificial Intelligence and Statistics (AISTATS), 202
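The split conformal procedure the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's method: the toy data, the origin-constrained least-squares "model", and the miscoverage level alpha=0.1 are all assumptions made for the example. The interval width 2q is the prediction set size whose expectation the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: y = 2x + noise (illustrative, not from the paper).
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Split conformal: fit the model on one half, calibrate on the other.
x_fit, y_fit = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Stand-in predictor: least-squares slope through the origin.
slope = np.sum(x_fit * y_fit) / np.sum(x_fit ** 2)

# Nonconformity scores on the calibration set: absolute residuals.
scores = np.abs(y_cal - slope * x_cal)

# Conformal quantile at level ceil((n+1)(1-alpha))/n for miscoverage alpha.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for a new point is an interval of width 2q.
x_new = 0.5
interval = (slope * x_new - q, slope * x_new + q)
print(interval)
```

For regression the set size is simply the interval width 2q; the paper's contribution is characterizing the expectation of this quantity with finite-sample point estimates and interval bounds.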
Stochastic Activation Pruning for Robust Adversarial Defense
Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration.
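The pruning step described above can be sketched as follows. This is a simplified illustration under assumed details: activations are sampled with replacement in proportion to their magnitude, survivors are rescaled by an inverse-retention-probability factor so the layer output is preserved in expectation, and the `keep_frac` parameter and function name are inventions of this example, not from the paper.

```python
import numpy as np

def sap(activations, keep_frac=0.5, rng=None):
    """Sketch of Stochastic Activation Pruning: sample units to keep with
    probability proportional to magnitude, zero the rest, and rescale the
    survivors to compensate in expectation."""
    rng = rng or np.random.default_rng()
    a = activations.ravel()
    mags = np.abs(a)
    p = mags / mags.sum()                # sampling distribution over units
    k = max(1, int(keep_frac * a.size))  # number of draws

    # Draw k units with replacement; larger-magnitude units survive more often.
    drawn = rng.choice(a.size, size=k, replace=True, p=p)
    keep = np.zeros(a.size, dtype=bool)
    keep[drawn] = True

    # Rescale each survivor by 1 / P(unit is drawn at least once in k draws).
    out = np.zeros_like(a)
    out[keep] = a[keep] / (1.0 - (1.0 - p[keep]) ** k)
    return out.reshape(activations.shape)

x = np.array([0.1, -2.0, 0.05, 3.0, 0.2])
print(sap(x, keep_frac=0.6, rng=np.random.default_rng(0)))
```

Because the pruning mask is redrawn on every forward pass, the defended network plays a mixed strategy: an adversary optimizing a perturbation against one sampled mask has no guarantee it transfers to the next.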