Truncated Variational Sampling for "Black Box" Optimization of Generative Models
We investigate the optimization of two probabilistic generative models with
binary latent variables using a novel variational EM approach. The approach
distinguishes itself from previous variational approaches by using latent
states as variational parameters. Here we use efficient and general-purpose
sampling procedures to vary the latent states, and investigate the "black box"
applicability of the resulting optimization procedure. For general-purpose
applicability, samples are drawn from approximate marginal distributions of the
considered generative model as well as from the model's prior distribution. As
such, variational sampling is defined in a generic form, and is directly
executable for a given model. As a proof of concept, we then apply the novel
procedure (A) to Binary Sparse Coding (a model with continuous observables),
and (B) to basic Sigmoid Belief Networks (which are models with binary
observables). Numerical experiments verify that the investigated approach
increases a variational free energy objective both efficiently and effectively,
without requiring any additional analytical steps.
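As a rough illustration of the core idea, the following NumPy sketch performs
one truncated variational E-step for Binary Sparse Coding (latents
z_h ~ Bern(pi), observables x | z ~ N(Wz, sigma^2 I)). The set size, the
prior-only proposals, and all names and defaults are illustrative assumptions;
the paper additionally draws proposals from approximate marginal distributions.

```python
import numpy as np

def log_joint(x, Z, W, sigma, pi):
    """log p(x, z) for each binary state z (rows of Z) under Binary Sparse
    Coding: z_h ~ Bern(pi), x | z ~ N(W z, sigma^2 I)."""
    on = Z.sum(axis=1)
    log_prior = on * np.log(pi) + (Z.shape[1] - on) * np.log(1 - pi)
    log_lik = -0.5 * np.sum((x - Z @ W.T) ** 2, axis=1) / sigma ** 2
    return log_prior + log_lik

def tvs_e_step(x, K, W, sigma, pi, n_new=20, rng=None):
    """Vary the truncated set K of latent states by sampling candidates from
    the prior and keeping the |K| states with the highest joint probability."""
    rng = rng or np.random.default_rng(0)
    H = W.shape[1]
    candidates = (rng.random((n_new, H)) < pi).astype(int)   # prior samples
    pool = np.unique(np.vstack([K, candidates]), axis=0)
    lj = log_joint(x, pool, W, sigma, pi)
    keep = np.argsort(lj)[-K.shape[0]:]                      # truncation
    q = np.exp(lj[keep] - lj[keep].max())
    return pool[keep], q / q.sum()   # new set and truncated posterior q(z)
```

Only log_joint is model-specific here; swapping in, e.g., a Sigmoid Belief
Network likelihood would leave the E-step untouched, which is the sense in
which such a sampling-based procedure can act as a "black box" optimizer.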
Quantum Generative Adversarial Networks for Learning and Loading Random Distributions
Quantum algorithms have the potential to outperform their classical
counterparts in a variety of tasks. The realization of the advantage often
requires the ability to load classical data efficiently into quantum states.
However, the best known methods require $\mathcal{O}(2^n)$ gates to
load an exact representation of a generic data structure into an $n$-qubit
state. This exponential scaling can easily dominate the complexity of a quantum
algorithm and, thereby, impair potential quantum advantage. Our work presents a
hybrid quantum-classical algorithm for efficient, approximate quantum state
loading. More precisely, we use quantum Generative Adversarial Networks (qGANs)
to facilitate efficient learning and loading of generic probability
distributions -- implicitly given by data samples -- into quantum states.
Through the interplay of a quantum channel, such as a variational quantum
circuit, and a classical neural network, the qGAN can learn a representation of
the probability distribution underlying the data samples and load it into a
quantum state. The loading requires $\mathcal{O}(\mathrm{poly}(n))$
gates and can thus enable the
use of potentially advantageous quantum algorithms, such as Quantum Amplitude
Estimation. We implement the qGAN distribution learning and loading method with
Qiskit and test it using a quantum simulation as well as actual quantum
processors provided by the IBM Q Experience. Furthermore, we employ quantum
simulation to demonstrate the use of the trained quantum channel in a quantum
finance application.
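For intuition, here is a schematic classical toy of the adversarial interplay,
not the paper's Qiskit implementation: a softmax over the $2^n$ basis states
stands in for the measurement probabilities of the variational quantum
circuit, and a logistic regression plays the classical discriminator. The
log-normal target, learning rates, and sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                   # number of qubits -> 2^n basis states
grid = np.arange(2 ** n)                # basis states encode grid points

# Target: a discretized log-normal, implicitly given via samples.
target_p = np.exp(-0.5 * (np.log(grid + 1) - 1.0) ** 2)
target_p /= target_p.sum()

theta = np.zeros(2 ** n)                # stand-in for circuit parameters
w, b = np.zeros(1), 0.0                 # logistic-regression discriminator

def gen_p(theta):
    e = np.exp(theta - theta.max())
    return e / e.sum()                  # stands in for |<i|psi(theta)>|^2

def disc(x, w, b):                      # D(x): probability sample is "real"
    return 1.0 / (1.0 + np.exp(-(w[0] * x + b)))

for step in range(2000):
    real = rng.choice(grid, size=64, p=target_p)
    fake = rng.choice(grid, size=64, p=gen_p(theta))
    # Discriminator ascent on mean log D(real) + mean log(1 - D(fake)).
    dr, df = disc(real, w, b), disc(fake, w, b)
    w[0] += 0.01 * (np.mean((1 - dr) * real) - np.mean(df * fake))
    b    += 0.01 * (np.mean(1 - dr) - np.mean(df))
    # Generator ascent on E_fake[log D(fake)] via the exact softmax gradient.
    p = gen_p(theta)
    reward = np.log(disc(grid, w, b) + 1e-12)
    theta += 0.1 * p * (reward - np.sum(p * reward))
```

In the actual method, the generator update differentiates a parameterized
quantum circuit rather than a softmax, so the trained state can feed
algorithms such as Quantum Amplitude Estimation directly.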
Direct Evolutionary Optimization of Variational Autoencoders With Binary Latents
Discrete latent variables are considered important for real-world data, which
has motivated research on Variational Autoencoders (VAEs) with discrete
latents. However, standard VAE training is not possible in this case, which has
motivated different strategies to manipulate discrete distributions in order to
train discrete VAEs similarly to conventional ones. Here we ask if it is also
possible to keep the discrete nature of the latents fully intact by applying a
direct discrete optimization for the encoding model. The approach
consequently diverges strongly from standard VAE training by sidestepping
sampling approximations, the reparameterization trick, and amortization. Discrete
optimization is realized in a variational setting using truncated posteriors in
conjunction with evolutionary algorithms. For VAEs with binary latents, we (A)
show how such a discrete variational method ties into gradient ascent for
network weights, and (B) how the decoder is used to select latent states for
training. Conventional amortized training is more efficient and applicable to
large neural networks. However, using smaller networks, we here find direct
discrete optimization to be efficiently scalable to hundreds of latents. More
importantly, we find the effectiveness of direct optimization to be highly
competitive in 'zero-shot' learning. In contrast to large supervised networks,
the here investigated VAEs can, e.g., denoise a single image without previous
training on clean data and/or training on large image datasets. More generally,
the studied approach shows that training of VAEs is indeed possible without
sampling-based approximation and reparameterization, which may be interesting
for the analysis of VAE training in general. For 'zero-shot' settings, a direct
optimization furthermore makes VAEs competitive where they have previously been
outperformed by non-generative approaches.
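As a minimal sketch of (A) the tie-in to gradient ascent on decoder weights
and (B) the decoder selecting latent states, here the fitness of a state is
its joint log-probability under the decoder. A linear toy decoder replaces the
paper's neural network; all names, mutation scheme, and step sizes are
hypothetical.

```python
import numpy as np

def log_joint(x, Z, W, sigma, pi):
    """log p(x, z) for binary latents z with a linear toy decoder
    x ~ N(W z, sigma^2 I), standing in for a neural decoder."""
    on = Z.sum(axis=1)
    log_prior = on * np.log(pi) + (Z.shape[1] - on) * np.log(1 - pi)
    log_lik = -0.5 * np.sum((x - Z @ W.T) ** 2, axis=1) / sigma ** 2
    return log_prior + log_lik

def evolve_states(x, K, W, sigma, pi, rng, n_children=2):
    """Evolutionary E-step: mutate each kept state by single bit flips and
    keep the fittest |K| states (fitness = joint log-probability)."""
    children = np.repeat(K, n_children, axis=0)
    flips = rng.integers(0, K.shape[1], size=len(children))
    children[np.arange(len(children)), flips] ^= 1      # bit-flip mutation
    pool = np.unique(np.vstack([K, children]), axis=0)
    lj = log_joint(x, pool, W, sigma, pi)
    keep = np.argsort(lj)[-K.shape[0]:]
    q = np.exp(lj[keep] - lj[keep].max())
    return pool[keep], q / q.sum()     # selected states + truncated posterior

def decoder_grad_step(x, K, q, W, sigma, lr=1e-2):
    """M-step sketch: gradient ascent of the truncated free energy
    F = sum_k q_k log p(x, z_k) with respect to the decoder weights W."""
    resid = x - K @ W.T                                 # (|K|, D) residuals
    grad = (q[:, None] * resid).T @ K / sigma ** 2      # dF/dW, linear decoder
    return W + lr * grad
```

For a neural decoder the same truncated free energy is instead differentiated
through the network, which is where the discrete E-step ties into ordinary
gradient ascent for the weights.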