Structured Optimal Variational Inference for Dynamic Latent Space Models
We consider a latent space model for dynamic networks, where our objective is
to estimate the pairwise inner products of the latent positions. To balance
posterior inference and computational scalability, we present a structured
mean-field variational inference framework, where the time-dependent properties
of the dynamic networks are exploited to facilitate computation and inference.
Additionally, an easy-to-implement block coordinate ascent algorithm is
developed with message-passing-style updates in each block, and the
per-iteration complexity is linear in the number of nodes and time points. To
facilitate learning of the pairwise latent distances, we adopt a Gamma prior
on the transition variance, in contrast to the priors commonly used in the
literature. To certify optimality, we demonstrate that the variational risk of
the proposed variational inference approach attains the minimax-optimal rate
under certain conditions. En route, we derive the minimax lower bound, which
may be of independent interest. To the best of our knowledge, this is the
first such result for dynamic latent space models. Simulations and real data
analysis demonstrate the efficacy of our methodology and the efficiency of our
algorithm. Finally, our proposed methodology can be readily extended to the
case where the scales of the latent nodes are learned in a nodewise manner.
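The block coordinate ascent with chain-structured message passing can be illustrated on a toy problem. The sketch below is a hypothetical simplification (a Gaussian observation model stands in for the network likelihood, and the Gamma transition prior is not reproduced): each node's whole trajectory is updated by a forward-backward sweep over the time chain, so a full sweep costs time linear in the number of nodes and time points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dynamic latent space data: n nodes, T time points, d latent dims.
# (Hypothetical Gaussian stand-in for the paper's network likelihood.)
n, T, d, sigma_obs, sigma_trans = 8, 20, 2, 0.5, 0.3
Z_true = np.cumsum(rng.normal(scale=sigma_trans, size=(T, n, d)), axis=0)
Y = (np.einsum('tid,tjd->tij', Z_true, Z_true)
     + rng.normal(scale=sigma_obs, size=(T, n, n)))

mu = rng.normal(size=(T, n, d))      # variational means
Q = np.eye(d) / sigma_trans**2       # random-walk (transition) precision

def update_node(i, mu):
    """Block update of node i's full trajectory given the other nodes.

    The conditional objective is quadratic with a tridiagonal (chain)
    structure, so it is solved exactly by one forward-backward sweep --
    a message-passing update costing O(T) solves per node."""
    others = [j for j in range(n) if j != i]
    A = np.empty((T, d, d)); b = np.empty((T, d))
    for t in range(T):
        M = mu[t, others]                       # current means of other nodes
        A[t] = M.T @ M / sigma_obs**2           # likelihood precision term
        b[t] = M.T @ Y[t, i, others] / sigma_obs**2
    # Forward pass: eliminate z_{i,t} along the chain (Thomas algorithm).
    P, h = A[0].copy(), b[0].copy()
    Ps, hs = [P], [h]
    for t in range(1, T):
        G = np.linalg.solve(P + Q, Q)           # message carried forward
        P = A[t] + Q - Q @ G
        h = b[t] + G.T @ h
        Ps.append(P); hs.append(h)
    # Backward pass: recover the updated means.
    mu[T - 1, i] = np.linalg.solve(Ps[-1], hs[-1])
    for t in range(T - 2, -1, -1):
        mu[t, i] = np.linalg.solve(Ps[t] + Q, hs[t] + Q @ mu[t + 1, i])

for sweep in range(20):                 # block coordinate ascent over nodes
    for i in range(n):
        update_node(i, mu)
```

After the sweeps, the estimated inner products `np.einsum('tid,tjd->tij', mu, mu)` approximate `Y`; the latent positions themselves are only identified up to rotation, which is why the abstract targets pairwise inner products rather than the positions.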
Coordinated Multi-Agent Imitation Learning
We study the problem of imitation learning from demonstrations of multiple
coordinating agents. One key challenge in this setting is that learning a good
model of coordination can be difficult, since coordination is often implicit in
the demonstrations and must be inferred as a latent variable. We propose a
joint approach that simultaneously learns a latent coordination model along
with the individual policies. In particular, our method integrates unsupervised
structure learning with conventional imitation learning. We illustrate the
power of our approach on a difficult problem of learning multiple policies for
fine-grained behavior modeling in team sports, where different players occupy
different roles in the coordinated team strategy. We show that having a
coordination model to infer the roles of players yields substantially improved
imitation loss compared to conventional baselines.
Comment: International Conference on Machine Learning 201
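The joint inference of latent roles alongside the learned behavior can be illustrated with a minimal alternating scheme on synthetic data. This is a hypothetical toy, not the paper's algorithm: each demonstration is matched to latent roles by searching over permutations, then the per-role model (here just role means, standing in for per-role policies) is refit.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Toy demonstrations: P plays by k coordinating agents in d dimensions.
# Agents appear in arbitrary, unlabeled order in every play, so the
# agent-to-role assignment is a latent variable to be inferred.
P, k, d = 200, 3, 2
role_means_true = np.array([[0., 0.], [4., 0.], [2., 3.]])
plays = np.stack([
    rng.permutation(role_means_true + rng.normal(scale=0.4, size=(k, d)))
    for _ in range(P)
])

role_means = plays[0].copy()        # initialize roles from one play
perms = list(permutations(range(k)))
for _ in range(10):
    assign = np.empty((P, k), dtype=int)
    for p in range(P):
        # Structure-learning step: best permutation of agents to roles.
        costs = [((plays[p] - role_means[list(pi)])**2).sum() for pi in perms]
        assign[p] = perms[int(np.argmin(costs))]
    # Imitation step: refit each role's model from its assigned observations.
    for r in range(k):
        role_means[r] = np.concatenate(
            [plays[p][assign[p] == r] for p in range(P)]).mean(axis=0)
```

Enumerating permutations is only feasible for small `k`; at the scale of team sports one would use a polynomial-time matching (e.g. the Hungarian algorithm), but the alternating structure is the same.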
A Factor Graph Approach to Automated Design of Bayesian Signal Processing Algorithms
The benefits of automating design cycles for Bayesian inference-based
algorithms are becoming increasingly recognized by the machine learning
community. As a result, interest in probabilistic programming frameworks has
increased considerably over the past few years. This paper explores a specific
probabilistic programming paradigm, namely message passing in Forney-style
factor graphs (FFGs), in the context of automated design of efficient Bayesian
signal processing algorithms. To this end, we developed "ForneyLab"
(https://github.com/biaslab/ForneyLab.jl) as a Julia toolbox for message
passing-based inference in FFGs. We show by example how ForneyLab enables
automatic derivation of Bayesian signal processing algorithms, including
algorithms for parameter estimation and model comparison. Crucially, due to the
modular makeup of the FFG framework, both the model specification and inference
methods are readily extensible in ForneyLab. In order to test this framework,
we compared variational message passing as implemented by ForneyLab with
automatic differentiation variational inference (ADVI) and Monte Carlo methods
as implemented by the state-of-the-art tools "Edward" and "Stan". In terms of
performance, extensibility, and stability, ForneyLab appears to hold an edge
over its competitors for automated inference in state-space models.
Comment: Accepted for publication in the International Journal of Approximate Reasoning
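ForneyLab itself is a Julia toolbox; as a language-agnostic illustration of the sum-product message passing that factor-graph schedulers automate, here is a sketch in Python on a small discrete state-space (hidden Markov) chain with hypothetical toy parameters. Forward and backward messages flow along the chain, and each posterior marginal is the normalized product of the two messages meeting at a variable.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small discrete state-space model: T time steps, S states.
T, S = 5, 3
prior = np.full(S, 1.0 / S)
trans = rng.dirichlet(np.ones(S), size=S)   # trans[i, j] = p(x_t=j | x_{t-1}=i)
emit = rng.dirichlet(np.ones(S), size=S)    # emit[i, y]  = p(y_t=y | x_t=i)
y = rng.integers(0, S, size=T)              # observations

# Forward messages along the chain.
fwd = np.empty((T, S))
fwd[0] = prior * emit[:, y[0]]
for t in range(1, T):
    fwd[t] = (fwd[t - 1] @ trans) * emit[:, y[t]]

# Backward messages along the chain.
bwd = np.ones((T, S))
for t in range(T - 2, -1, -1):
    bwd[t] = trans @ (emit[:, y[t + 1]] * bwd[t + 1])

# Posterior marginals: product of incoming messages, normalized.
marg = fwd * bwd
marg /= marg.sum(axis=1, keepdims=True)
```

On a chain this schedule computes every marginal exactly in time linear in `T`; toolboxes such as ForneyLab generalize the idea by deriving message updates and schedules automatically from a declarative model specification.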