Robust Conditional GAN from Uncertainty-Aware Pairwise Comparisons
Conditional generative adversarial networks have shown exceptional generation
performance over the past few years. However, they require large amounts of
annotated data. To address this problem, we propose a novel generative
adversarial network that uses weak supervision in the form of pairwise
comparisons (PC-GAN) for image attribute editing. By combining Bayesian
uncertainty estimation with noise-tolerant adversarial training, PC-GAN
estimates attribute ratings efficiently and remains robust to annotation noise.
Through extensive experiments, we show both qualitatively and quantitatively
that PC-GAN performs comparably with fully supervised methods and outperforms
unsupervised baselines.
Comment: Accepted for spotlight at AAAI-2
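The attribute-rating step this abstract describes can be illustrated with a classic pairwise-comparison model. The sketch below is not the paper's method: it fits a simple Bradley-Terry model by gradient ascent, and `fit_bradley_terry` and all of its parameters are illustrative assumptions, not names from PC-GAN.

```python
import math

def fit_bradley_terry(n_items, comparisons, lr=0.1, epochs=200):
    """Estimate latent attribute scores from pairwise comparisons.

    comparisons: list of (winner, loser) index pairs, meaning the
    winner was judged to show more of the attribute than the loser.
    Performs gradient ascent on the Bradley-Terry log-likelihood.
    """
    scores = [0.0] * n_items
    for _ in range(epochs):
        grads = [0.0] * n_items
        for i, j in comparisons:
            # P(i beats j) under the Bradley-Terry model
            p = 1.0 / (1.0 + math.exp(scores[j] - scores[i]))
            # gradient of log P(i beats j) w.r.t. each score
            grads[i] += 1.0 - p
            grads[j] -= 1.0 - p
        for k in range(n_items):
            scores[k] += lr * grads[k]
    return scores

# Comparisons consistently ranking item 2 above 1 above 0
comps = [(1, 0), (2, 1), (2, 0)] * 5
scores = fit_bradley_terry(3, comps)
```

The recovered scores only matter up to ordering; here `scores[2] > scores[1] > scores[0]`, matching the comparisons.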
Neural Networks with Recurrent Generative Feedback
Neural networks are vulnerable to input perturbations such as additive noise
and adversarial attacks. In contrast, human perception is much more robust to
such perturbations. The Bayesian brain hypothesis states that human brains use
an internal generative model to update the posterior beliefs of the sensory
input. This mechanism can be interpreted as a form of self-consistency between
the maximum a posteriori (MAP) estimation of an internal generative model and
the external environment. Inspired by this hypothesis, we enforce
self-consistency in neural networks by incorporating generative recurrent
feedback. We instantiate this design on convolutional neural networks (CNNs).
The proposed framework, termed Convolutional Neural Networks with Feedback
(CNN-F), introduces generative feedback with latent variables to existing CNN
architectures, where consistent predictions are made through alternating MAP
inference under a Bayesian framework. In the experiments, CNN-F shows
considerably improved adversarial robustness over conventional feedforward CNNs
on standard benchmarks.
Comment: NeurIPS 202
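The alternating-inference idea in this abstract can be caricatured with a toy loop: classify the input, regenerate it from the prediction, blend the two, and repeat until the prediction is self-consistent. This is a minimal sketch under strong simplifying assumptions (class prototypes stand in for the generative model, and `cnnf_style_inference` with its parameters is a hypothetical name), not the CNN-F architecture itself.

```python
import numpy as np

def cnnf_style_inference(x, prototypes, steps=5, alpha=0.5):
    """Toy alternating MAP-style inference with generative feedback.

    Alternates a feedforward step (pick the nearest class prototype,
    a stand-in for MAP label inference) with a feedback step (pull the
    input toward the reconstruction from that label), so the final
    prediction is self-consistent with the regenerated input.
    """
    z = np.array(x, dtype=float)
    label = None
    for _ in range(steps):
        # feedforward pass: MAP-style label estimate for the current input
        label = int(np.argmin(((prototypes - z) ** 2).sum(axis=1)))
        # generative feedback: reconstruction implied by the inferred label
        recon = prototypes[label]
        # enforce self-consistency by blending input and reconstruction
        z = alpha * z + (1.0 - alpha) * recon
    return label, z

# Two class "prototypes" and a perturbed input near class 0
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
label, denoised = cnnf_style_inference([0.9, 0.2], protos)
```

Here the loop labels the perturbed input as class 0 and pulls it back toward that class's prototype, mimicking (in miniature) how feedback can clean up a noisy input before the final prediction.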