Towards a Theoretical Understanding of the Robustness of Variational Autoencoders
We make inroads into understanding the robustness of Variational Autoencoders
(VAEs) to adversarial attacks and other input perturbations. While previous
work has developed algorithmic approaches to attacking and defending VAEs,
there remains a lack of formalization for what it means for a VAE to be robust.
To address this, we develop a novel criterion for robustness in probabilistic
models: r-robustness. We then use this to construct the first theoretical
results for the robustness of VAEs, deriving margins in the input space for
which we can provide guarantees about the resulting reconstruction. Informally,
we are able to define a region within which any perturbation will produce a
reconstruction that is similar to the original reconstruction. To support our
analysis, we show that VAEs trained using disentangling methods not only score
well under our robustness metrics, but that the reasons for this can be
interpreted through our theoretical results.
Comment: 8 pages
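The criterion named above (whose symbol was dropped in this extraction) can be paraphrased from the abstract's informal description, roughly: a perturbation is more likely to yield a similar reconstruction than a dissimilar one. The following LaTeX is a hedged reconstruction of that idea, not the paper's verbatim definition:

```latex
% A probabilistic operation f is called r-robust at an input x,
% with respect to a perturbation \delta, when a "similar" output
% (within distance r) is more probable than a "dissimilar" one:
P\big( \lVert f(x+\delta) - f(x) \rVert_2 \le r \big)
  \;>\;
P\big( \lVert f(x+\delta) - f(x) \rVert_2 > r \big)
```

Under this reading, the margins derived in the input space are the regions of perturbations δ for which the inequality is guaranteed to hold.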
Variance Loss in Variational Autoencoders
In this article, we highlight what appears to be a major issue with Variational
Autoencoders, evinced by extensive experimentation across different network
architectures and datasets: the variance of generated data is significantly
lower than that of training data. Since generative models are usually evaluated
with metrics such as the Fréchet Inception Distance (FID) that compare the
distributions of (features of) real versus generated images, the variance loss
typically results in degraded scores. This problem is particularly relevant in
a two stage setting, where we use a second VAE to sample in the latent space of
the first VAE. The reduced variance creates a mismatch between the actual
distribution of latent variables and those generated by the second VAE, which
hinders the beneficial effects of the second stage. By renormalizing the output
of the second VAE towards the expected spherical normal distribution, we obtain
a marked improvement in the quality of generated samples, also confirmed in
terms of FID.
Comment: Article accepted at the Sixth International Conference on Machine
Learning, Optimization, and Data Science. July 19-23, 2020 - Certosa di
Pontignano, Siena, Italy
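The renormalization step described above can be sketched as a simple per-dimension standardization of the latent samples towards a standard spherical Gaussian. This is a minimal illustration of the idea, not the paper's exact procedure; the sampler and array shapes are assumptions for the example:

```python
import numpy as np

def renormalize(z: np.ndarray) -> np.ndarray:
    """Rescale latent samples so their empirical distribution matches a
    standard spherical Gaussian: zero mean and unit variance in every
    latent dimension. z has shape (n_samples, latent_dim)."""
    z = z - z.mean(axis=0, keepdims=True)  # center each dimension
    z = z / z.std(axis=0, keepdims=True)   # scale each dimension to unit variance
    return z

# Stand-in for under-dispersed samples from a second-stage VAE
# (hypothetical; a real pipeline would call the second VAE's sampler).
rng = np.random.default_rng(0)
z = rng.normal(loc=0.5, scale=0.7, size=(10_000, 64))
z = renormalize(z)
```

After this correction the samples fed to the first VAE's decoder follow the spherical distribution its prior expects, which is what the abstract credits for the improved FID.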