32 research outputs found
Tikhonov Regularization is Optimal Transport Robust under Martingale Constraints
Distributionally robust optimization has been shown to offer a principled way
to regularize learning models. In this paper, we find that Tikhonov
regularization is distributionally robust in an optimal transport sense (i.e.,
if an adversary chooses distributions in a suitable optimal transport
neighborhood of the empirical measure), provided that suitable martingale
constraints are also imposed. Further, we introduce a relaxation of the
martingale constraints which not only provides a unified viewpoint to a class
of existing robust methods but also leads to new regularization tools. To
realize these novel tools, tractable computational algorithms are proposed. As
a byproduct, the strong duality theorem proved in this paper can potentially be
applied to other problems of independent interest. Comment: Accepted by NeurIPS 202
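The Tikhonov (ridge) regularizer that the abstract shows to be distributionally robust can be sketched in its classical closed form. This is a minimal illustration of ordinary ridge regression only; the optimal transport neighborhood, the martingale constraints, and the duality result of the paper are not implemented here, and the names `X`, `y`, and `lam` are illustrative assumptions, not from the paper.

```python
import numpy as np

# Minimal Tikhonov (ridge) regression sketch.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # design matrix (synthetic data)
beta_true = np.array([1.0, -2.0, 0.5])  # ground-truth coefficients
y = X @ beta_true + 0.1 * rng.normal(size=50)

lam = 0.1  # Tikhonov regularization strength
# Closed-form Tikhonov solution: beta = (X^T X + lam I)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
```

With moderate noise and a small `lam`, `beta_hat` recovers the true coefficients closely; the paper's contribution is to reinterpret exactly this penalty as protection against an optimal-transport adversary under martingale constraints.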
Adversarial robustness of amortized Bayesian inference
Bayesian inference usually requires running potentially costly inference
procedures separately for every new observation. In contrast, the idea of
amortized Bayesian inference is to initially invest computational cost in
training an inference network on simulated data, which can subsequently be used
to rapidly perform inference (i.e., to return estimates of posterior
distributions) for new observations. This approach has been applied to many
real-world models in the sciences and engineering, but it is unclear how robust
the approach is to adversarial perturbations in the observed data. Here, we
study the adversarial robustness of amortized Bayesian inference, focusing on
simulation-based estimation of multi-dimensional posterior distributions. We
show that almost unrecognizable, targeted perturbations of the observations can
lead to drastic changes in the predicted posterior and highly unrealistic
posterior predictive samples, across several benchmark tasks and a real-world
example from neuroscience. We propose a computationally efficient
regularization scheme based on penalizing the Fisher information of the
conditional density estimator, and show how it improves the adversarial
robustness of amortized Bayesian inference.
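The proposed regularizer can be sketched as penalizing the sensitivity of the estimator's log-density to the observation. Below is a toy, hedged version: `f` stands in for a hypothetical amortized inference network, the posterior estimate is a unit-variance Gaussian, and the squared score norm (a single-sample estimate related to the Fisher information with respect to the observation) is computed by central finite differences. None of these names or choices come from the paper.

```python
import numpy as np

def f(x, w):
    """Toy stand-in for an amortized inference network: posterior mean as a linear map."""
    return w @ x

def log_q(theta, x, w):
    """Log-density of a unit-variance Gaussian posterior estimate q(theta | x)."""
    mu = f(x, w)
    return -0.5 * np.sum((theta - mu) ** 2)

def fisher_penalty(theta, x, w, eps=1e-5):
    """Squared norm of grad_x log q(theta | x), via central differences.

    Penalizing this quantity discourages the predicted posterior from
    changing sharply under small perturbations of the observation x.
    """
    g = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        g[i] = (log_q(theta, x + dx, w) - log_q(theta, x - dx, w)) / (2 * eps)
    return np.sum(g ** 2)
```

In practice such a penalty would be added to the training loss of the conditional density estimator and its gradient computed by automatic differentiation rather than finite differences; the sketch above only makes the quantity being penalized concrete.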