Multi-Source Neural Variational Inference
Learning from multiple sources of information is an important problem in
machine-learning research. The key challenges are learning representations and
formulating inference methods that take into account the complementarity and
redundancy of various information sources. In this paper we formulate a
variational autoencoder based multi-source learning framework in which each
encoder is conditioned on a different information source. This allows us to
relate the sources via the shared latent variables by computing divergence
measures between the individual sources' posterior approximations. We explore a
variety of options to learn these encoders and to integrate the beliefs they
compute into a consistent posterior approximation. We visualise learned beliefs
on a toy dataset and evaluate our methods for learning shared representations
and structured output prediction, showing trade-offs of learning separate
encoders for each information source. Furthermore, we demonstrate how conflict
detection and redundancy can increase robustness of inference in a multi-source
setting.
Comment: AAAI 2019, Association for the Advancement of Artificial Intelligence (AAAI) 2019
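As a rough illustration of how per-source beliefs over shared latent variables can be integrated and compared, the sketch below fuses independent Gaussian posterior approximations with a product-of-experts rule and measures their disagreement with a KL divergence. This is a minimal assumption-laden sketch, not the paper's actual integration method; the function names and the choice of fusion rule are illustrative.

```python
import numpy as np

def fuse_gaussian_beliefs(means, variances):
    """Product-of-experts fusion of independent Gaussian posteriors.

    Each source i contributes N(mu_i, sigma_i^2); their normalized product
    is again Gaussian, with precision equal to the sum of the precisions.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum(axis=0)
    fused_mean = fused_var * (precisions * means).sum(axis=0)
    return fused_mean, fused_var

def kl_gaussian(mu1, var1, mu2, var2):
    """KL(N1 || N2) between univariate Gaussians; a large value between two
    sources' posteriors can flag conflicting (non-redundant) information."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

# Two roughly agreeing sources: low divergence, and the fused belief is
# sharper (lower variance) than either individual posterior.
mu, var = fuse_gaussian_beliefs([1.0, 1.2], [0.5, 0.5])
```

When the per-source KL divergences are small the sources are redundant and fusion simply tightens the posterior; when they are large, a conflict has been detected and inference can fall back to the more reliable source.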
Factored expectation propagation for input-output FHMM models in systems biology
We consider the problem of joint modelling of metabolic signals and gene
expression in systems biology applications. We propose an approach based on
input-output factorial hidden Markov models together with a structured
variational inference method to infer the structure and states of the model.
We start from the classical free-form structured variational mean-field
approach and use expectation propagation to approximate the expectations
needed in the variational loop. We show that this corresponds to a factored
expectation-constrained approximate inference scheme. We validate our model
through extensive simulations and demonstrate its applicability on a
real-world bacterial data set.
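In structured variational inference for factorial HMMs, the update for each chain typically reduces to exact forward-backward recursions under pseudo-evidence contributed by the other chains. The sketch below shows only the forward pass in log space for a single discrete chain; it is a generic building block under that assumption, not the paper's factored EP updates, and the interface is hypothetical.

```python
import numpy as np

def forward(log_init, log_trans, log_obs):
    """Forward recursion in log space for one discrete HMM chain.

    log_init:  (K,)   log initial state probabilities
    log_trans: (K, K) log_trans[i, j] = log p(z_t = j | z_{t-1} = i)
    log_obs:   (T, K) per-step log evidence; in a structured mean-field
                      scheme these would be expected log-likelihoods given
                      the current beliefs over the other chains
    Returns the log marginal likelihood of the evidence.
    """
    log_alpha = log_init + log_obs[0]
    for t in range(1, log_obs.shape[0]):
        # log-sum-exp over the previous state, shifted for stability
        m = log_alpha.max()
        log_alpha = m + np.log(np.exp(log_alpha - m) @ np.exp(log_trans))
        log_alpha += log_obs[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())
```

A full structured variational loop would alternate this recursion (plus the backward pass) across chains until the per-chain beliefs stop changing.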