Visualization of AE's Training on Credit Card Transactions with Persistent Homology
Auto-encoders are among the most popular neural network architectures for
dimension reduction. They are composed of two parts: an encoder, which maps the
model distribution to a latent manifold, and a decoder, which maps the latent
manifold to a reconstructed distribution. However, auto-encoders are known to
produce a chaotically scattered data distribution in the latent manifold,
resulting in an incomplete reconstructed distribution. Current distance
measures fail to detect this problem because they cannot account for the shape
of the data manifolds, i.e. their topological features, or the scale at which
the manifolds should be analyzed. We propose Persistent Homology for
Wasserstein Auto-Encoders, called PHom-WAE, a new methodology to assess and
measure the data distribution of a generative model. PHom-WAE minimizes the
Wasserstein distance between the true distribution and the reconstructed
distribution, and uses persistent homology, the study of the topological
features of a space at different spatial resolutions, to compare the nature of
the latent manifold and the reconstructed distribution. Our experiments
underline the potential of persistent homology for Wasserstein Auto-Encoders in
comparison to Variational Auto-Encoders, another type of generative model. The
experiments are conducted on a real-world data set that is particularly
challenging for traditional distance measures and auto-encoders. PHom-WAE is
the first methodology to propose a topological distance measure, the bottleneck
distance, for Wasserstein Auto-Encoders, used to compare high-quality decoded
samples in the context of credit card transactions.

Comment: arXiv admin note: substantial text overlap with arXiv:1905.0989
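To make the comparison concrete, the following is a minimal sketch of the kind of topological measurement described above, assuming the `gudhi` library; the point clouds `latent_codes` and `reconstructions` are hypothetical stand-ins for an encoder's latent samples and a decoder's output, not data from the paper.

```python
# Minimal sketch: persistence diagrams of two point clouds and their
# bottleneck distance, assuming the `gudhi` library is installed.
import numpy as np
import gudhi

def persistence_diagram(points, max_edge=1.0, dim=1):
    """Persistence intervals of a Vietoris-Rips filtration in dimension `dim`."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    tree = rips.create_simplex_tree(max_dimension=dim + 1)
    tree.persistence()  # compute births/deaths across all filtration scales
    return tree.persistence_intervals_in_dimension(dim)

rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(200, 2))      # hypothetical encoder output
reconstructions = rng.normal(size=(200, 2))   # hypothetical decoder output

diag_latent = persistence_diagram(latent_codes)
diag_recon = persistence_diagram(reconstructions)

# Bottleneck distance between the diagrams: small values indicate the two
# point clouds share topological features (here, 1-dimensional holes)
# at comparable scales.
print(gudhi.bottleneck_distance(diag_latent, diag_recon))
```

A small bottleneck distance between the two diagrams suggests the decoder has preserved the holes and connected components of the latent manifold; a large one flags the scattered, incomplete reconstruction the abstract warns about.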
Max-Sliced Wasserstein Distance and its use for GANs
Generative adversarial nets (GANs) and variational auto-encoders have
significantly improved our distribution modeling capabilities, showing promise
for dataset augmentation, image-to-image translation and feature learning.
However, to model high-dimensional distributions, sequential training and
stacked architectures are common, increasing the number of tunable
hyper-parameters as well as the training time. Nonetheless, the sample
complexity of the distance metrics remains one of the factors affecting GAN
training. We first show that the recently proposed sliced Wasserstein distance
has compelling sample complexity properties when compared to the Wasserstein
distance. To further improve the sliced Wasserstein distance, we then analyze
its 'projection complexity' and develop the max-sliced Wasserstein distance,
which enjoys compelling sample complexity while reducing projection complexity,
albeit at the cost of a max estimation. Finally, we illustrate that the
proposed distance easily trains GANs on high-dimensional images up to a
resolution of 256x256.

Comment: Accepted to CVPR 201
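To illustrate the two distances being compared, here is a minimal NumPy sketch of the sliced Wasserstein-2 distance between equal-size empirical samples, together with a crude max-sliced variant that simply takes the maximum over the same random directions; the paper instead estimates a single worst-case direction, so every name and parameter below is an illustrative assumption.

```python
# Minimal sketch of sliced vs. max-sliced Wasserstein-2 between two
# equal-size empirical samples X, Y of shape (n, d), using the closed
# form for 1-D Wasserstein-2 between sorted samples on each projection.
import numpy as np

def w2_1d(x, y):
    """Squared 1-D Wasserstein-2 between equal-size empirical samples."""
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def random_directions(n_proj, d, rng):
    """Unit vectors drawn uniformly on the (d-1)-sphere."""
    thetas = rng.normal(size=(n_proj, d))
    return thetas / np.linalg.norm(thetas, axis=1, keepdims=True)

def sliced_w2(X, Y, n_proj=500, seed=0):
    """Sliced Wasserstein-2: root of the mean projected 1-D cost."""
    rng = np.random.default_rng(seed)
    thetas = random_directions(n_proj, X.shape[1], rng)
    return np.sqrt(np.mean([w2_1d(X @ t, Y @ t) for t in thetas]))

def max_sliced_w2(X, Y, n_proj=500, seed=0):
    """Crude max-sliced estimate: worst cost over the random directions."""
    rng = np.random.default_rng(seed)
    thetas = random_directions(n_proj, X.shape[1], rng)
    return np.sqrt(max(w2_1d(X @ t, Y @ t) for t in thetas))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))           # hypothetical real samples
Y = rng.normal(loc=0.5, size=(1000, 64))  # hypothetical generated samples
print(sliced_w2(X, Y), max_sliced_w2(X, Y))
```

The intuition for the reduced projection complexity is that averaging over random directions dilutes the few projections that actually separate the two distributions, whereas the max concentrates the comparison on the single most discriminative direction.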
PHom-GeM: Persistent Homology for Generative Models
Generative neural network models, including Generative Adversarial Networks
(GAN) and Auto-Encoders (AE), are among the most popular neural network models
for generating adversarial data. The GAN model is composed of a generator that
produces synthetic data and a discriminator that discriminates between the
generator's output and the true data. AEs consist of an encoder, which maps the
model distribution to a latent manifold, and a decoder, which maps the latent
manifold to a reconstructed distribution. However, generative models are known
to produce chaotically scattered reconstructed distributions during training
and, consequently, incomplete generated adversarial distributions. Current
distance measures fail to address this problem because they cannot account for
the shape of the data manifold, i.e. its topological features, or the scale at
which the manifold should be analyzed. We propose Persistent Homology for
Generative Models, PHom-GeM, a new methodology to assess and measure the
distribution of a generative model. PHom-GeM minimizes an objective function
between the true and the reconstructed distributions and uses persistent
homology, the study of the topological features of a space at different
spatial resolutions, to compare the nature of the true and the generated
distributions. Our experiments underline the potential of persistent homology
for Wasserstein GAN in comparison to Wasserstein AE and Variational AE. The
experiments are conducted on a real-world data set that is particularly
challenging for traditional distance measures and generative neural network
models. PHom-GeM is the first methodology to propose a topological distance
measure, the bottleneck distance, for generative models, used to compare
adversarial samples in the context of credit card transactions.
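Since PHom-GeM pairs the topological comparison with an optimal transport objective between the true and reconstructed distributions, the sketch below evaluates such an objective on empirical samples, assuming the POT package (`ot`); `true_samples` and `generated_samples` are hypothetical placeholders, and the exact objective used by each model (Wasserstein GAN, Wasserstein AE, Variational AE) differs from this plain transport cost.

```python
# Minimal sketch: exact Wasserstein-2 distance between two empirical
# distributions, assuming the POT package (`pip install pot`).
import numpy as np
import ot

def empirical_w2(X, Y):
    """Wasserstein-2 between uniform empirical measures on X and Y."""
    cost = ot.dist(X, Y)               # pairwise squared Euclidean costs
    a = np.full(len(X), 1.0 / len(X))  # uniform weights on X
    b = np.full(len(Y), 1.0 / len(Y))  # uniform weights on Y
    return np.sqrt(ot.emd2(a, b, cost))  # exact OT cost, then square root

rng = np.random.default_rng(0)
true_samples = rng.normal(size=(256, 8))              # hypothetical true data
generated_samples = rng.normal(size=(256, 8)) * 1.5   # hypothetical generator output
print(empirical_w2(true_samples, generated_samples))
```

Scoring a model with this transport cost alongside the bottleneck distance of the earlier sketch mirrors the two-part evaluation the abstract describes.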