Efficient Estimation of the Local Robustness of Machine Learning Models

Abstract

Machine learning models often need to be robust to noisy input data. Real-world noise (such as measurement noise) is often random, and the effect of such noise on model predictions is captured by a model's local robustness, i.e., the consistency of model predictions in a local region around an input. Local robustness is therefore an important characterization of real-world model behavior and can be useful for debugging models and establishing user trust. However, the naïve approach to computing local robustness based on Monte-Carlo sampling is statistically inefficient, especially for high-dimensional data, leading to prohibitive computational costs for large-scale applications. In this work, we develop the first analytical estimators to efficiently compute local robustness of multi-class discriminative models. These estimators linearize models in the local region around an input and compute the model's local robustness using the multivariate Normal cumulative distribution function. Through the derivation of these estimators, we show how local robustness is connected to concepts such as randomized smoothing and softmax probability. In addition, we show empirically that these estimators efficiently compute the local robustness of standard deep learning models and demonstrate these estimators' usefulness for various tasks involving local robustness, such as measuring robustness bias and identifying examples in a dataset that are vulnerable to noise perturbation. To our knowledge, this work is the first to investigate local robustness in a multi-class setting and to develop efficient analytical estimators for it. In doing so, this work not only advances the conceptual understanding of local robustness, but also makes its computation practical, enabling the use of local robustness in critical downstream applications.
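To make the approach described above concrete, the following sketch contrasts the Monte-Carlo baseline with an analytical estimate on a toy linear classifier, where the linearization is exact. The specific variable names and the use of an isotropic Gaussian noise model are illustrative assumptions, not the paper's exact formulation: under Gaussian input noise, the class-margin functions of a linearized model are jointly Gaussian, so the probability that the predicted class is preserved reduces to a multivariate Normal CDF evaluation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy 3-class linear classifier f(x) = W x + b (linearization is exact here).
d, k = 5, 3
W = rng.normal(size=(k, d))
b = rng.normal(size=k)
x = rng.normal(size=d)
sigma = 0.3  # std of isotropic Gaussian input noise (illustrative choice)

logits = W @ x + b
c = int(np.argmax(logits))  # predicted class at the clean input

# --- Monte-Carlo estimate: fraction of noisy inputs keeping the prediction ---
n = 20000
noise = rng.normal(scale=sigma, size=(n, d))
preds = np.argmax((x + noise) @ W.T + b, axis=1)
mc_rob = np.mean(preds == c)

# --- Analytical estimate via the multivariate Normal CDF ---
# Margin functions g_i(x) = f_c(x) - f_i(x) for i != c. Under noise eps,
# g(x + eps) = mean + J eps is Gaussian with covariance sigma^2 J J^T,
# where J stacks the margin gradients. Local robustness = P(all margins > 0).
others = [i for i in range(k) if i != c]
J = W[c] - W[others]                 # margin gradients (rows), exact for linear f
mean = logits[c] - logits[others]    # margin values at the clean input
cov = sigma**2 * (J @ J.T)
# P(mean + J eps > 0) = P(N(0, cov) < mean), an orthant MVN CDF
an_rob = multivariate_normal(mean=np.zeros(k - 1), cov=cov).cdf(mean)
```

The analytical estimate requires one CDF evaluation rather than thousands of forward passes; for a deep network, `J` and `mean` would come from a first-order Taylor expansion at `x` instead of being exact.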
