Truncated Variational Sampling for "Black Box" Optimization of Generative Models
We investigate the optimization of two probabilistic generative models with
binary latent variables using a novel variational EM approach. The approach
distinguishes itself from previous variational approaches by using latent
states as variational parameters. Here we use efficient and general purpose
sampling procedures to vary the latent states, and investigate the "black box"
applicability of the resulting optimization procedure. For general purpose
applicability, samples are drawn from approximate marginal distributions of the
considered generative model as well as from the model's prior distribution. As
such, variational sampling is defined in a generic form, and is directly
executable for a given model. As a proof of concept, we then apply the novel
procedure (A) to Binary Sparse Coding (a model with continuous observables),
and (B) to basic Sigmoid Belief Networks (which are models with binary
observables). Numerical experiments verify that the investigated approach
increases a variational free energy objective both efficiently and
effectively, without requiring any additional analytical steps.
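The core idea above, using a truncated set of sampled latent states as the variational parameters, can be sketched for a toy Binary Sparse Coding model. All dimensions, parameter values, and the candidate-sampling scheme below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Binary Sparse Coding setup (hypothetical parameters, for illustration):
# one data point y = W @ s + noise, with a binary latent vector s in {0,1}^H.
H, D = 6, 8                        # number of latent causes / observed dims
W = rng.normal(size=(D, H))        # dictionary (would be updated in the M-step)
pi, sigma = 0.2, 0.5               # prior activation prob. and noise std
y = W @ (rng.random(H) < pi) + sigma * rng.normal(size=D)

def log_joint(s):
    """log p(y, s) for one binary latent state s (up to an additive constant)."""
    ll = -0.5 * np.sum((y - W @ s) ** 2) / sigma**2
    lp = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
    return ll + lp

# Truncated E-step: draw candidate states from the prior ("black box" sampling)
# and keep the K states with the highest joint probability.
K, n_samples = 10, 200
candidates = {tuple((rng.random(H) < pi).astype(int)) for _ in range(n_samples)}
truncated = sorted(candidates, key=lambda s: log_joint(np.array(s)))[-K:]

# Posterior expectations restricted to the truncated set (softmax weights).
logs = np.array([log_joint(np.array(s)) for s in truncated])
w = np.exp(logs - logs.max()); w /= w.sum()
E_s = sum(wi * np.array(si) for wi, si in zip(w, truncated))
print(E_s.round(3))                # approximate posterior mean of the latents
```

The truncated expectations would then feed a standard M-step update of W, with no model-specific analytical derivations needed beyond evaluating the joint.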
Towards Efficient Maximum Likelihood Estimation of LPV-SS Models
How to efficiently identify multiple-input multiple-output (MIMO) linear
parameter-varying (LPV) discrete-time state-space (SS) models with affine
dependence on the scheduling variable still remains an open question, as
identification methods proposed in the literature suffer heavily from the curse
of dimensionality and/or depend on over-restrictive approximations of the
measured signal behaviors. However, obtaining an SS model of the targeted
system is crucial for many LPV control synthesis methods, as these synthesis
tools are almost exclusively formulated for the aforementioned representation
of the system dynamics. Therefore, in this paper, we tackle the problem by
combining state-of-the-art LPV input-output (IO) identification methods with an
LPV-IO to LPV-SS realization scheme and a maximum likelihood refinement step.
The resulting modular LPV-SS identification approach achieves statistical
efficiency with a relatively low computational load. The method contains the
following three steps: 1) estimation of the Markov coefficient sequence of the
underlying system using correlation analysis or Bayesian impulse response
estimation, then 2) LPV-SS realization of the estimated coefficients by using a
basis reduced Ho-Kalman method, and 3) refinement of the LPV-SS model estimate
from a maximum-likelihood point of view by a gradient-based or an
expectation-maximization optimization methodology. The effectiveness of the
full identification scheme is demonstrated by a Monte Carlo study where our
proposed method is compared to existing schemes for identifying a MIMO LPV
system.
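Step 2 of the pipeline above (state-space realization from estimated Markov coefficients) can be sketched for the classical LTI special case of the Ho-Kalman method; the basis-reduced LPV variant in the paper extends this idea to scheduling-dependent matrices. The system matrices below are made up for illustration:

```python
import numpy as np

# A known 2nd-order system, used only to generate exact Markov parameters.
A = np.array([[0.8, 0.2], [0.0, 0.5]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

# Markov coefficients C A^k B (step 1 would estimate these from data).
N = 10
markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(N)]

# Hankel matrix of Markov parameters, then an SVD-based realization.
r = N // 2
Hk = np.block([[markov[i + j] for j in range(r)] for i in range(r)])
U, s, Vt = np.linalg.svd(Hk)
n_hat = int(np.sum(s > 1e-8 * s[0]))          # estimated order from the rank
Obs = U[:, :n_hat] * np.sqrt(s[:n_hat])       # observability factor
Con = np.sqrt(s[:n_hat])[:, None] * Vt[:n_hat]  # controllability factor

# Read off the realization: C from the first block row, B from the first
# block column, and A from the shift-invariance of the observability factor.
C_hat = Obs[:1, :]
B_hat = Con[:, :1]
A_hat = np.linalg.pinv(Obs[:-1, :]) @ Obs[1:, :]
```

The recovered (A_hat, B_hat, C_hat) reproduce the Markov coefficients up to a state-basis change; step 3 would then refine such an estimate by maximum likelihood.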
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
increased considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility and stability issues, ForneyLab appears to enjoy an
edge relative to its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate Reasoning.
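The message-passing idea behind this toolbox can be illustrated with a hand-rolled sum-product pass on a tiny chain factor graph. The factor tables below are made-up numbers, and this is not ForneyLab's API (which is a Julia toolbox that derives such schedules automatically from a model specification):

```python
import numpy as np

# Chain factor graph  p1(x1) f(x1, x2) g(x2, x3)  over binary variables:
# marginals follow from multiplying forward and backward messages.
p1 = np.array([0.6, 0.4])                    # prior factor on x1
f = np.array([[0.9, 0.1], [0.2, 0.8]])       # f[x1, x2]
g = np.array([[0.7, 0.3], [0.4, 0.6]])       # g[x2, x3]

# Forward (sum-product) messages along the chain.
m_f_x2 = p1 @ f                              # sum over x1 of p1(x1) f(x1, x2)
m_g_x3 = m_f_x2 @ g                          # predictive marginal of x3

# Condition on observing x3 = 0: the backward message from g to x2
# is the g-column selected by the observation.
m_back_x2 = g[:, 0]

# Marginal of x2 = normalized product of its incoming messages.
p_x2 = m_f_x2 * m_back_x2
p_x2 /= p_x2.sum()
print(p_x2)                                  # posterior over x2 given x3 = 0
```

In a Forney-style graph each edge carries such messages in both directions, and automating their derivation for a user-specified model is exactly what a toolbox like ForneyLab provides.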
A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues
Sequential data often possesses a hierarchical structure with complex
dependencies between subsequences, such as found between the utterances in a
dialogue. In an effort to model this kind of generative process, we propose a
neural network-based generative architecture, with latent stochastic variables
that span a variable number of time steps. We apply the proposed model to the
task of dialogue response generation and compare it with recent neural network
architectures. We evaluate the model performance through automatic evaluation
metrics and by carrying out a human evaluation. The experiments demonstrate
that our model improves upon recently proposed models and that the latent
variables facilitate the generation of long outputs and maintain the context.
Comment: 15 pages, 5 tables, 4 figures.
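The hierarchical generative process described above, one stochastic latent variable per utterance conditioning the decoder, can be sketched with random weights and made-up dimensions; this is a toy of the structure, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up dimensions and untrained random weights, for illustration only.
d_ctx, d_z, vocab = 8, 4, 10
W_prior = rng.normal(scale=0.1, size=(d_z, d_ctx))        # prior mean from context
W_dec = rng.normal(scale=0.1, size=(vocab, d_ctx + d_z))  # decoder projection
W_ctx = rng.normal(scale=0.1, size=(d_ctx, d_ctx))        # context transition

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

ctx = np.zeros(d_ctx)                  # dialogue-level context state
dialogue = []
for _ in range(3):                     # generate three utterances
    mu = W_prior @ ctx                 # context-dependent prior over z
    z = mu + rng.normal(size=d_z)      # sample the utterance-level latent
    probs = softmax(W_dec @ np.concatenate([ctx, z]))
    utterance = rng.choice(vocab, size=5, p=probs)  # decode 5 toy tokens
    dialogue.append(utterance)
    ctx = np.tanh(W_ctx @ ctx + 0.1 * z.mean())     # update context state
```

The key structural point is that z is sampled once per utterance and spans all of its time steps, rather than being resampled per token.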