Learning Provably Robust Estimators for Inverse Problems via Jittering
Deep neural networks provide excellent performance for inverse problems such
as denoising. However, neural networks can be sensitive to adversarial or
worst-case perturbations. This raises the question of whether such networks can
be trained efficiently to be worst-case robust. In this paper, we investigate
whether jittering, a simple regularization technique that adds isotropic
Gaussian noise during training, is effective for learning worst-case robust
estimators for inverse problems. While well studied for prediction in
classification tasks, the effectiveness of jittering for inverse problems has
not been systematically investigated. Here, we present a novel analytical
characterization of the optimal $\ell_2$-worst-case robust estimator
for linear denoising and show that jittering yields optimal robust denoisers.
Furthermore, we examine jittering empirically via training deep neural networks
(U-nets) for natural image denoising, deconvolution, and accelerated magnetic
resonance imaging (MRI). The results show that jittering significantly enhances
the worst-case robustness, but can be suboptimal for inverse problems beyond
denoising. Moreover, our results imply that training on real data, which often
contains slight noise, is itself somewhat robustness-enhancing.
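As a rough illustration of the jittering idea described in the abstract, the sketch below adds extra isotropic Gaussian noise to the network inputs at every training step of a denoiser. This is a minimal PyTorch-style sketch, not the authors' implementation: the model, optimizer, data tensors, and the jitter level `sigma_jitter` are hypothetical placeholders.

```python
# Minimal sketch of jittering for a denoising network (illustrative only;
# not the paper's code). Assumes a PyTorch model mapping noisy images to
# clean images; `sigma_jitter` is a hypothetical hyperparameter.
import torch
import torch.nn.functional as F

def jittering_train_step(model, optimizer, noisy, clean, sigma_jitter=0.1):
    """One training step with jittering: perturb the inputs with extra Gaussian noise."""
    model.train()
    optimizer.zero_grad()
    jittered = noisy + sigma_jitter * torch.randn_like(noisy)  # isotropic Gaussian jitter
    loss = F.mse_loss(model(jittered), clean)                   # reconstruct the clean target
    loss.backward()
    optimizer.step()
    return loss.item()
```

The only difference from standard supervised training is the added noise term on the input; the jitter level acts as a regularization strength and would in practice be tuned to the perturbation level against which robustness is desired.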