Win-Stay, Lose-Sample: A simple sequential algorithm for approximating Bayesian inference
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm, "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario, and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments.
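The principle can be sketched in a deterministic setting, where a hypothesis "loses" when it assigns zero likelihood to a new observation. The function and all names below are illustrative assumptions, not the paper's code; the paper's probabilistic variant stays or resamples stochastically rather than only on outright failure.

```python
import random
from math import prod

def win_stay_lose_sample(hypotheses, prior, likelihood, data, rng=None):
    """Win-Stay, Lose-Sample over a discrete hypothesis space (a sketch).

    Keep the current hypothesis while it explains each new datum ("win-stay");
    when it fails, resample a hypothesis from the posterior given all data
    seen so far ("lose-sample").
    """
    rng = rng or random.Random(0)
    current = rng.choices(hypotheses, weights=[prior[h] for h in hypotheses])[0]
    seen, history = [], []
    for datum in data:
        seen.append(datum)
        if likelihood(current, datum) == 0:  # "lose": resample from the posterior
            weights = [prior[h] * prod(likelihood(h, d) for d in seen)
                       for h in hypotheses]
            current = rng.choices(hypotheses, weights=weights)[0]
        history.append(current)  # otherwise "win": stay with the current hypothesis
    return history
```

For example, with two causal hypotheses and evidence consistent with only one of them, the learner converges to the consistent hypothesis after at most one resampling step, which is what makes the scheme a cheap sequential approximation to full posterior updating.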
Variational Sequential Monte Carlo
Many recent advances in large scale probabilistic inference rely on
variational methods. The success of variational approaches depends on (i)
formulating a flexible parametric family of distributions, and (ii) optimizing
the parameters to find the member of this family that most closely approximates
the exact posterior. In this paper we present a new approximating family of
distributions, the variational sequential Monte Carlo (VSMC) family, and show
how to optimize it in variational inference. VSMC melds variational inference
(VI) and sequential Monte Carlo (SMC), providing practitioners with flexible,
accurate, and powerful Bayesian inference. The VSMC family is a variational
family that can approximate the posterior arbitrarily well, while still
allowing for efficient optimization of its parameters. We demonstrate its
utility on state space models, stochastic volatility models for financial data,
and deep Markov models of brain neural circuits.
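The objective VSMC works with can be illustrated by the SMC estimate of the log marginal likelihood: the expectation of this estimate is a lower bound on log p(y), and VSMC tunes the proposal's parameters to maximize it. Below is a minimal sketch for a toy one-dimensional state space model with a fixed bootstrap proposal; the model, names, and constants are illustrative assumptions, not the paper's code.

```python
import math
import random

def smc_log_evidence(ys, propose, log_weight, num_particles=100, rng=None):
    """SMC estimate of log p(y_1:T): the log-average unnormalized weight is
    accumulated at each step, with multinomial resampling in between.
    VSMC treats the expectation of this quantity as a variational lower
    bound and optimizes the proposal; here the proposal is fixed for brevity."""
    rng = rng or random.Random(0)
    particles = [None] * num_particles  # None marks the initial (empty) state
    log_z = 0.0
    for t, y in enumerate(ys):
        proposed = [propose(t, x, rng) for x in particles]
        logw = [log_weight(t, xp, x, y) for xp, x in zip(particles, proposed)]
        m = max(logw)
        # log of the average unnormalized weight contributes to log Z-hat.
        log_z += m + math.log(sum(math.exp(lw - m) for lw in logw) / num_particles)
        weights = [math.exp(lw - m) for lw in logw]
        particles = rng.choices(proposed, weights=weights, k=num_particles)
    return log_z

def log_npdf(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

# Toy SSM: x_t ~ N(0.9 x_{t-1}, 1), y_t ~ N(x_t, 1), bootstrap proposal q = prior,
# so the transition density cancels and the weight is just the observation likelihood.
def propose(t, x_prev, rng):
    return rng.gauss(0.0 if x_prev is None else 0.9 * x_prev, 1.0)

def log_weight(t, x_prev, x, y):
    return log_npdf(y, x, 1.0)
```

In VSMC the proposal would instead be a parametric family, and its parameters would be adjusted by stochastic gradient ascent on the expected value of `smc_log_evidence`, tightening the bound toward the exact posterior.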
Bayesian Conditional Density Filtering
We propose a Conditional Density Filtering (C-DF) algorithm for efficient
online Bayesian inference. C-DF adapts MCMC sampling to the online setting,
sampling from approximations to conditional posterior distributions obtained by
propagating surrogate conditional sufficient statistics (a function of data and
parameter estimates) as new data arrive. These quantities eliminate the need to
store or process the entire dataset simultaneously and offer a number of
desirable features. Often, these include a reduction in memory requirements and
runtime and improved mixing, along with state-of-the-art parameter inference
and prediction. These improvements are demonstrated through several
illustrative examples including an application to high dimensional compressed
regression. Finally, we show that C-DF samples converge to the target posterior
distribution asymptotically as sampling proceeds and more data arrives.
Comment: 41 pages, 7 figures, 12 tables
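The core idea of propagating sufficient statistics instead of raw data can be seen in a conjugate special case. The class below is an illustrative sketch, not the authors' code: it tracks the posterior for a Gaussian mean using only a count and a running sum, which is the kind of surrogate statistic C-DF propagates more generally inside Gibbs-style conditional updates.

```python
class StreamingNormalMean:
    """Online posterior for the mean theta of N(theta, noise_var) data under
    a N(prior_mean, prior_var) prior. Only two sufficient statistics (count
    and running sum) are stored, so raw data never need to be retained."""

    def __init__(self, prior_mean=0.0, prior_var=1.0, noise_var=1.0):
        self.prior_mean = prior_mean
        self.prior_var = prior_var
        self.noise_var = noise_var
        self.n = 0        # sufficient statistic: number of observations
        self.total = 0.0  # sufficient statistic: sum of observations

    def update(self, y):
        """Absorb one new observation as it arrives."""
        self.n += 1
        self.total += y

    def posterior(self):
        """Standard conjugate Gaussian update, expressed via (n, total)."""
        precision = 1.0 / self.prior_var + self.n / self.noise_var
        mean = (self.prior_mean / self.prior_var
                + self.total / self.noise_var) / precision
        return mean, 1.0 / precision
```

For instance, after four observations equal to 1.0 under the default N(0, 1) prior and unit noise, the posterior is N(0.8, 0.2). C-DF extends this pattern to non-conjugate models by sampling from approximate conditionals built from such propagated statistics.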
Approximate Bayesian Computation in State Space Models
A new approach to inference in state space models is proposed, based on
approximate Bayesian computation (ABC). ABC avoids evaluation of the likelihood
function by matching observed summary statistics with statistics computed from
data simulated from the true process; exact inference being feasible only if
the statistics are sufficient. With finite sample sufficiency unattainable in
the state space setting, we seek asymptotic sufficiency via the maximum
likelihood estimator (MLE) of the parameters of an auxiliary model. We prove
that this auxiliary model-based approach achieves Bayesian consistency, and
that - in a precise limiting sense - the proximity to (asymptotic) sufficiency
yielded by the MLE is replicated by the score. In multiple parameter settings a
separate treatment of scalar parameters, based on integrated likelihood
techniques, is advocated as a way of avoiding the curse of dimensionality. Some
attention is given to a structure in which the state variable is driven by a
continuous time process, with exact inference typically infeasible in this case
as a result of intractable transitions. The ABC method is demonstrated using
the unscented Kalman filter as a fast and simple way of producing an
approximation in this setting, with a stochastic volatility model for financial
returns used for illustration.
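The basic ABC mechanism the paper builds on can be sketched with rejection sampling: keep parameter draws whose simulated summary statistic lands within a tolerance of the observed one. The paper's refinement is to use the MLE (or score) of an auxiliary model as the summary; the toy below just uses a sample mean, and all names and constants are illustrative assumptions.

```python
import random

def abc_rejection(observed_stat, sample_prior, simulate, summarize,
                  eps, n_draws, rng=None):
    """Rejection ABC: draw theta from the prior, simulate data, and accept
    theta when the simulated summary is within eps of the observed summary."""
    rng = rng or random.Random(0)
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior(rng)
        stat = summarize(simulate(theta, rng))
        if abs(stat - observed_stat) <= eps:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean of N(theta, 1) data from the sample-mean summary.
rng = random.Random(42)
data = [rng.gauss(2.0, 1.0) for _ in range(50)]
obs_stat = sum(data) / len(data)

posterior_draws = abc_rejection(
    observed_stat=obs_stat,
    sample_prior=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda theta, r: [r.gauss(theta, 1.0) for _ in range(50)],
    summarize=lambda xs: sum(xs) / len(xs),
    eps=0.3,
    n_draws=2000,
)
```

The quality of the resulting approximation hinges on how informative the summary is, which is exactly why the paper pursues asymptotic sufficiency through an auxiliary model rather than ad hoc statistics.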
Sequential Monte Carlo Methods for System Identification
One of the key challenges in identifying nonlinear and possibly non-Gaussian
state space models (SSMs) is the intractability of estimating the system state.
Sequential Monte Carlo (SMC) methods, such as the particle filter (introduced
more than two decades ago), provide numerical solutions to the nonlinear state
estimation problems arising in SSMs. When combined with additional
identification techniques, these algorithms provide solid solutions to the
nonlinear system identification problem. We describe two general strategies for
creating such combinations and discuss why SMC is a natural tool for
implementing these strategies.
Comment: In proceedings of the 17th IFAC Symposium on System Identification (SYSID). Added cover page.
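One such combination, running SMC inside an MCMC identification loop (particle marginal Metropolis-Hastings), can be sketched as follows. The toy model, names, and tuning constants are illustrative assumptions, not the paper's code.

```python
import math
import random

def pf_loglik(theta, ys, num_particles=200, rng=None):
    """Bootstrap particle filter estimate of log p(y_1:T | theta) for a toy
    SSM: x_t ~ N(theta * x_{t-1}, 1), y_t ~ N(x_t, 1), x_0 ~ N(0, 1)."""
    rng = rng or random.Random(0)
    particles = [rng.gauss(0.0, 1.0) for _ in range(num_particles)]
    loglik = 0.0
    for y in ys:
        # Propagate through the transition, then weight by the observation density.
        particles = [rng.gauss(theta * x, 1.0) for x in particles]
        logw = [-0.5 * (math.log(2 * math.pi) + (y - x) ** 2) for x in particles]
        m = max(logw)
        loglik += m + math.log(sum(math.exp(lw - m) for lw in logw) / num_particles)
        weights = [math.exp(lw - m) for lw in logw]
        particles = rng.choices(particles, weights=weights, k=num_particles)
    return loglik

def pmmh(ys, n_iters=200, step=0.2, rng=None):
    """Particle marginal Metropolis-Hastings: random-walk MH on theta with the
    intractable likelihood replaced by the particle filter estimate (flat
    prior on theta for brevity)."""
    rng = rng or random.Random(1)
    theta = 0.0
    loglik = pf_loglik(theta, ys, rng=random.Random(rng.randrange(2 ** 30)))
    chain = []
    for _ in range(n_iters):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = pf_loglik(prop, ys, rng=random.Random(rng.randrange(2 ** 30)))
        if math.log(rng.random()) < ll_prop - loglik:
            theta, loglik = prop, ll_prop
        chain.append(theta)
    return chain
```

Although the likelihood estimate is noisy, PMMH still targets the exact parameter posterior, which is what makes SMC such a natural plug-in for nonlinear system identification.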