Nonlinear State-Space Models for Microeconometric Panel Data
In applied microeconometric panel data analyses, time-constant random effects and first-order Markov chains are the most prevalent structures used to account for intertemporal correlations in limited dependent variable models. An example from health economics shows that adding a simple autoregressive error term leads to a more plausible and parsimonious model which also captures the dynamic features better. The computational problems encountered in the estimation of such models - and of a broader class formulated in the framework of nonlinear state-space models - hamper their widespread use. This paper discusses the application of different nonlinear filtering approaches developed in the time-series literature to these models and suggests that a straightforward algorithm based on sequential Gaussian quadrature can be expected to perform well in this setting. This conjecture is impressively confirmed by an extensive analysis of the example application.
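As a rough illustration of the kind of filtering recursion the abstract refers to, the following is a minimal sketch (not the paper's algorithm) of a Gauss-Hermite quadrature filter for a hypothetical binary-response panel model with an AR(1) latent error. The model specification, the function name, and the fixed-grid simplification are assumptions made for illustration only.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from scipy.stats import norm

def quadrature_filter_loglik(y, X, beta, rho, sigma_eta, n_nodes=20):
    """Log-likelihood for one panel unit in a hypothetical probit model with
    an AR(1) latent error, y_t = 1{x_t' beta + a_t > 0},
    a_t = rho * a_{t-1} + eta_t, with the latent path integrated out by a
    Gauss-Hermite quadrature filter on a fixed grid (a simplification of an
    adaptive, sequential quadrature scheme)."""
    nodes, weights = hermegauss(n_nodes)         # probabilists' Hermite rule
    weights = weights / np.sqrt(2.0 * np.pi)     # weights for the N(0,1) density
    sigma_a = sigma_eta / np.sqrt(1.0 - rho**2)  # stationary sd of a_t
    grid = sigma_a * nodes                       # evaluation grid for a_t
    filt = weights.copy()                        # filtering mass, starts at the stationary dist.
    loglik = 0.0
    for t in range(len(y)):
        p1 = norm.cdf(X[t] @ beta + grid)        # P(y_t = 1 | a_t = grid)
        like = p1 if y[t] == 1 else 1.0 - p1
        joint = filt * like
        c_t = joint.sum()                        # predictive p(y_t | y_{1:t-1})
        loglik += np.log(c_t)
        filt = joint / c_t                       # measurement update
        if t < len(y) - 1:
            # propagate through the AR(1) transition, discretized on the grid
            trans = norm.pdf(grid[:, None], loc=rho * grid[None, :], scale=sigma_eta)
            trans /= trans.sum(axis=0, keepdims=True)
            filt = trans @ filt
    return loglik
```

In an actual analysis one would sum such contributions over units and maximize the result numerically; a sequential quadrature scheme would typically place the nodes adaptively at each step rather than on a fixed grid as above.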
Lifelong Generative Modeling
Lifelong learning is the problem of learning multiple consecutive tasks in a
sequential manner, where knowledge gained from previous tasks is retained and
used to aid future learning over the lifetime of the learner. It is essential
towards the development of intelligent machines that can adapt to their
surroundings. In this work we focus on a lifelong learning approach to
unsupervised generative modeling, where we continuously incorporate newly
observed distributions into a learned model. We do so through a student-teacher
Variational Autoencoder architecture which allows us to learn and preserve all
the distributions seen so far, without the need to retain the past data nor the
past models. Through the introduction of a novel cross-model regularizer,
inspired by a Bayesian update rule, the student model leverages the information
learned by the teacher, which acts as a probabilistic knowledge store. The
regularizer reduces the effect of catastrophic interference that appears when
we learn over sequences of distributions. We validate our model's performance
on sequential variants of MNIST, FashionMNIST, PermutedMNIST, SVHN and Celeb-A
and demonstrate that our model mitigates the effects of catastrophic
interference faced by neural networks in sequential learning scenarios.
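To make the mechanics concrete, here is a hedged PyTorch-style sketch of one plausible form of such a student-teacher objective: a standard ELBO on newly observed data plus a KL term tying the student's posterior to the frozen teacher's posterior on teacher-generated data. The encode/decode interface, the weighting constants, and the Bernoulli likelihood are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def lifelong_vae_loss(x_new, x_teacher, student, teacher, beta=1.0, lam=1.0):
    """Hypothetical student-teacher VAE objective for lifelong learning:
    an ELBO on new data plus a cross-model term that keeps the student's
    posterior close to the frozen teacher's posterior on data replayed
    from the teacher's generative model."""
    # --- standard ELBO on the newly observed distribution's data ---
    mu_s, logvar_s = student.encode(x_new)            # assumed (mu, logvar) interface
    q_s = Normal(mu_s, torch.exp(0.5 * logvar_s))
    z = q_s.rsample()
    recon = student.decode(z)                         # assumed to output Bernoulli means
    rec_loss = F.binary_cross_entropy(recon, x_new, reduction='sum')
    kl_prior = kl_divergence(q_s, Normal(torch.zeros_like(mu_s),
                                         torch.ones_like(mu_s))).sum()
    elbo_new = rec_loss + beta * kl_prior

    # --- cross-model regularizer on data generated by the teacher ---
    with torch.no_grad():
        mu_t, logvar_t = teacher.encode(x_teacher)    # teacher acts as a knowledge store
    q_t = Normal(mu_t, torch.exp(0.5 * logvar_t))
    mu_st, logvar_st = student.encode(x_teacher)
    q_st = Normal(mu_st, torch.exp(0.5 * logvar_st))
    cross_reg = kl_divergence(q_st, q_t).sum()        # preserve previously learned posteriors

    return elbo_new + lam * cross_reg
```

In a sketch like this, the cross-model KL term is what counteracts catastrophic interference: gradients on new data are balanced against staying consistent with the teacher on samples from the distributions learned so far.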
Replica Conditional Sequential Monte Carlo
We propose a Markov chain Monte Carlo (MCMC) scheme to perform state
inference in non-linear non-Gaussian state-space models. Current
state-of-the-art methods to address this problem rely on particle MCMC
techniques and their variants, such as the iterated conditional Sequential Monte
Carlo (cSMC) scheme, which uses a Sequential Monte Carlo (SMC) type proposal
within MCMC. A deficiency of standard SMC proposals is that they only use
observations up to time t to propose states at time t, even when the entire
observation sequence is available. More sophisticated SMC proposals based on
lookahead techniques could be used, but they can be difficult to put into
practice. We propose here replica cSMC, where we build SMC proposals for one replica using
information from the entire observation sequence by conditioning on the states
of the other replicas. This approach is easily parallelizable and we
demonstrate its excellent empirical performance when compared to the standard
iterated cSMC scheme at fixed computational complexity.
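For context, the iterated cSMC kernel mentioned above can be sketched as follows. This is a generic bootstrap-proposal version for a scalar-state model; the replica construction, which swaps in proposals informed by the other replicas' conditioned states, is not shown. The function names init_sample, f_sample, and g_logpdf are placeholders for a user-specified model.

```python
import numpy as np

def csmc_sweep(y, x_ref, init_sample, f_sample, g_logpdf, N=64, rng=None):
    """One conditional SMC sweep with a bootstrap (prior) proposal for a
    scalar-state model; the reference trajectory x_ref is kept as particle
    N-1 throughout. Returns a freshly sampled trajectory.
    init_sample(n)       -> n draws from p(x_0)
    f_sample(x_prev, t)  -> draws from p(x_t | x_{t-1}) (vectorized over particles)
    g_logpdf(y_t, x)     -> log p(y_t | x) evaluated at an array of states
    """
    rng = rng or np.random.default_rng()
    T = len(y)
    X = np.zeros((T, N))                      # particle states
    A = np.zeros((T, N), dtype=int)           # ancestor indices
    X[0] = init_sample(N)
    X[0, -1] = x_ref[0]                       # pin the reference particle
    logw = g_logpdf(y[0], X[0])
    for t in range(1, T):
        w = np.exp(logw - logw.max()); w /= w.sum()
        A[t] = rng.choice(N, size=N, p=w)     # multinomial resampling
        A[t, -1] = N - 1                      # reference keeps its own ancestry
        X[t] = f_sample(X[t - 1, A[t]], t)    # bootstrap proposal: ignores future observations
        X[t, -1] = x_ref[t]                   # pin the reference state
        logw = g_logpdf(y[t], X[t])
    # trace back one trajectory drawn according to the final weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    k = rng.choice(N, p=w)
    traj = np.empty(T)
    traj[-1] = X[-1, k]
    for t in range(T - 1, 0, -1):
        k = A[t, k]
        traj[t - 1] = X[t - 1, k]
    return traj
```

Iterating this kernel, with each sweep conditioned on the trajectory returned by the previous one, gives the iterated cSMC sampler; per the abstract, the replica variant instead builds each replica's proposal from the entire observation sequence by conditioning on the states of the other replicas.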