Hamiltonian Monte Carlo Without Detailed Balance
We present a method for performing Hamiltonian Monte Carlo that largely
eliminates sample rejection for typical hyperparameters. In situations that
would normally lead to rejection, instead a longer trajectory is computed until
a new state is reached that can be accepted. This is achieved using Markov
chain transitions that satisfy the fixed point equation, but do not satisfy
detailed balance. The resulting algorithm significantly suppresses the random
walk behavior and wasted function evaluations that are typically the
consequence of update rejection. We demonstrate a greater than factor of two
improvement in mixing time on three test problems. We release the source code
as Python and MATLAB packages.
Comment: Accepted conference submission to ICML 2014, also featured in a special edition of JMLR. Subsequently updated to include an additional literature citation.
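The trajectory-extension idea can be sketched in a few lines. Note this is only an illustrative caricature: the paper's actual kernel modifies the Markov transition so that it satisfies the fixed-point equation without detailed balance, while the simple retry loop below, the standard-normal target, and parameters such as `max_extend` are all choices of this sketch, not the paper's algorithm.

```python
import numpy as np

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics."""
    p = p - 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_u(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)
    return q, p

def hmc_extend(q0, u, grad_u, eps=0.2, steps=10, n_iter=2000,
               max_extend=5, rng=None):
    """HMC where a would-be rejection triggers further integration
    instead of discarding the trajectory (illustrative heuristic only)."""
    rng = np.random.default_rng(0) if rng is None else rng
    q, samples = q0, []
    for _ in range(n_iter):
        p = rng.normal()
        h0 = u(q) + 0.5 * p ** 2          # initial Hamiltonian
        q_new, p_new = q, p
        for _ in range(max_extend):
            q_new, p_new = leapfrog(q_new, p_new, grad_u, eps, steps)
            h1 = u(q_new) + 0.5 * p_new ** 2
            if np.log(rng.uniform()) < h0 - h1:
                q = q_new                  # accept, possibly after extension
                break
        samples.append(q)
    return np.array(samples)

# Standard normal target: U(q) = q^2 / 2, so grad U(q) = q.
samples = hmc_extend(0.0, lambda q: 0.5 * q ** 2, lambda q: q)
print(samples.mean(), samples.std())
```

On this easy target almost every first proposal is accepted; the extension loop only matters when energy error would otherwise cause a rejection.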
Computing Densities: A Conditional Monte Carlo Estimator
We propose a generalized conditional Monte Carlo technique for computing densities in economic models. Global consistency and functional asymptotic normality are established under ergodicity assumptions on the simulated process. The asymptotic normality result allows us to characterize the asymptotic distribution of the error in density space, and implies faster convergence than nonparametric kernel density estimators. We show that our results nest several other well-known density estimators, and illustrate potential applications.
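The conditioning idea can be illustrated on a toy convolution. The model below (X = Y + Z with Gaussian Z) and every name in it are assumptions of this sketch, not the paper's economic model: because the conditional density of X given Y is known in closed form, averaging it over simulated draws of Y gives a density estimate without any kernel bandwidth.

```python
import math
import numpy as np

def conditional_density(x, y_draws, sigma):
    """Conditional Monte Carlo density estimate for X = Y + Z,
    Z ~ N(0, sigma^2): f(x) = E[phi_sigma(x - Y)], averaging the
    known conditional density over simulated draws of Y."""
    z = (x - y_draws) / sigma
    return np.mean(np.exp(-0.5 * z ** 2) / (sigma * math.sqrt(2 * math.pi)))

rng = np.random.default_rng(1)
y = rng.normal(size=100_000)               # Y ~ N(0, 1), so X ~ N(0, 2)
est = conditional_density(0.0, y, sigma=1.0)
true = 1.0 / math.sqrt(2 * math.pi * 2.0)  # exact N(0, 2) density at 0
print(est, true)
```

Unlike a kernel density estimator, the estimate is an ordinary sample mean of bounded quantities, which is what permits the root-n, functional convergence the abstract describes.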
The iterated auxiliary particle filter
We present an offline, iterated particle filter to facilitate statistical
inference in general state space hidden Markov models. Given a model and a
sequence of observations, the associated marginal likelihood L is central to
likelihood-based inference for unknown statistical parameters. We define a
class of "twisted" models: each member is specified by a sequence of positive
functions psi and has an associated psi-auxiliary particle filter that provides
unbiased estimates of L. We identify a sequence psi* that is optimal in the
sense that the psi*-auxiliary particle filter's estimate of L has zero
variance. In practical applications, psi* is unknown so the psi*-auxiliary
particle filter cannot straightforwardly be implemented. We use an iterative
scheme to approximate psi*, and demonstrate empirically that the resulting
iterated auxiliary particle filter significantly outperforms the bootstrap
particle filter in challenging settings. Applications include parameter
estimation using a particle Markov chain Monte Carlo algorithm.
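For context, a minimal bootstrap particle filter, the baseline the iterated psi-auxiliary filter is designed to beat, already delivers an unbiased estimate of L. The sketch below assumes a simple linear-Gaussian state-space model; the model and all parameter names are choices of this illustration, not the paper's.

```python
import numpy as np

def bootstrap_pf_loglik(obs, n_particles=500, sigma_x=1.0, sigma_y=1.0,
                        rho=0.9, rng=None):
    """Bootstrap particle filter for x_t = rho*x_{t-1} + N(0, sigma_x^2),
    y_t = x_t + N(0, sigma_y^2). Returns the log of an unbiased estimate
    of the marginal likelihood L; the psi*-twisted filter of the paper
    would drive the variance of this estimate to zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(0.0, sigma_x, n_particles)      # initial particle cloud
    loglik = 0.0
    for y in obs:
        # Weight by the observation density, accumulate log L-hat.
        w = np.exp(-0.5 * ((y - x) / sigma_y) ** 2) / (sigma_y * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())
        # Multinomial resampling, then propagate through the dynamics.
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = rho * x[idx] + rng.normal(0.0, sigma_x, n_particles)
    return loglik

# Simulate a short observation sequence from the same model and evaluate.
rng = np.random.default_rng(1)
state, ys = 0.0, []
for _ in range(50):
    state = 0.9 * state + rng.normal()
    ys.append(state + rng.normal())
loglik = bootstrap_pf_loglik(np.array(ys))
print(loglik)
```

The twisted filters in the paper replace this blind (bootstrap) proposal with psi-weighted transitions; iterating the approximation of psi* is what shrinks the variance of the likelihood estimate.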
Replica Conditional Sequential Monte Carlo
We propose a Markov chain Monte Carlo (MCMC) scheme to perform state
inference in non-linear non-Gaussian state-space models. Current
state-of-the-art methods to address this problem rely on particle MCMC
techniques and its variants, such as the iterated conditional Sequential Monte
Carlo (cSMC) scheme, which uses a Sequential Monte Carlo (SMC) type proposal
within MCMC. A deficiency of standard SMC proposals is that they only use
observations up to time t to propose states at time t, even when an entire
observation sequence is available. More sophisticated SMC proposals based on
lookahead techniques could be used, but they can be difficult to put into practice. We
propose here replica cSMC where we build SMC proposals for one replica using
information from the entire observation sequence by conditioning on the states
of the other replicas. This approach is easily parallelizable and we
demonstrate its excellent empirical performance when compared to the standard
iterated cSMC scheme at fixed computational complexity.
Comment: To appear in Proceedings of ICML '1
Recursive Monte Carlo filters: Algorithms and theoretical analysis
Recursive Monte Carlo filters, also called particle filters, are a powerful
tool to perform computations in general state space models. We discuss and
compare the accept-reject version with the more common sampling importance
resampling version of the algorithm. In particular, we show how auxiliary
variable methods and stratification can be used in the accept-reject version,
and we compare different resampling techniques. In a second part, we show laws
of large numbers and a central limit theorem for these Monte Carlo filters by
simple induction arguments that need only weak conditions. We also show that,
under stronger conditions, the required sample size is independent of the
length of the observed series.
Comment: Published at http://dx.doi.org/10.1214/009053605000000426 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
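The resampling comparison can be made concrete with two standard schemes. The specific weight vector below is this illustration's choice, not the paper's; the point is that systematic resampling stratifies a single uniform draw across particles, which typically reduces resampling variance relative to i.i.d. multinomial draws.

```python
import numpy as np

def multinomial_resample(w, rng):
    """I.i.d. draws from the weight distribution (highest variance)."""
    return rng.choice(len(w), size=len(w), p=w)

def systematic_resample(w, rng):
    """One uniform draw shifted by 1/N: stratified, lower-variance."""
    n = len(w)
    u = (rng.uniform() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(w), u)

rng = np.random.default_rng(0)
w = np.array([0.5, 0.3, 0.1, 0.1])
counts_m = np.bincount(multinomial_resample(w, rng), minlength=4)
counts_s = np.bincount(systematic_resample(w, rng), minlength=4)
print(counts_m, counts_s)
```

With these weights, systematic resampling is guaranteed to copy the heaviest particle (weight 0.5) at least twice, whereas multinomial resampling may miss it entirely; both schemes return exactly N offspring.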
Particle Efficient Importance Sampling
The efficient importance sampling (EIS) method is a general principle for the
numerical evaluation of high-dimensional integrals that uses the sequential
structure of target integrands to build variance minimising importance
samplers. Despite a number of successful applications in high dimensions, it is
well known that importance sampling strategies are subject to exponential
growth in variance as the dimension of the integration problem increases. We solve this
problem by recognising that the EIS framework has an offline sequential Monte
Carlo interpretation. The particle EIS method is based on non-standard
resampling weights that take into account the look-ahead construction of the
importance sampler. We apply the method for a range of univariate and bivariate
stochastic volatility specifications. We also develop a new application of the
EIS approach to state space models with Student's t state innovations. Our
results show that the particle EIS method strongly outperforms both the
standard EIS method and particle filters for likelihood evaluation in high
dimensions. Moreover, the ratio between the variances of the particle EIS and
particle filter methods remains stable as the time series dimension increases.
We illustrate the efficiency of the method for Bayesian inference using the
particle marginal Metropolis-Hastings and importance sampling squared
algorithms.
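The variance blow-up that motivates the particle step can be reproduced in a few lines. This is not the EIS sampler itself: the N(0, I) target and the deliberately mismatched Gaussian proposal are assumptions of this sketch, chosen only to expose how the relative variance of plain importance-sampling weights grows exponentially with dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s2 = 200_000, 1.5  # proposal N(0, 1.5*I) for a N(0, I) target

def rel_weight_var(dim):
    """Relative variance var(w)/mean(w)^2 of the importance weights
    w = p(x)/q(x); grows geometrically in dim for a mismatched proposal."""
    x = rng.normal(0.0, np.sqrt(s2), size=(n, dim))
    sq = (x ** 2).sum(axis=1)
    logw = -0.5 * sq + 0.5 * sq / s2 + 0.5 * dim * np.log(s2)
    w = np.exp(logw - logw.max())  # rescaling leaves the ratio unchanged
    return w.var() / w.mean() ** 2

v2, v30 = rel_weight_var(2), rel_weight_var(30)
print(v2, v30)
```

Resampling inside the sequential construction, as particle EIS does, is what keeps the corresponding variance stable as the time-series dimension grows instead of compounding it.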
Bayesian fan charts for U.K. inflation: forecasting and sources of uncertainty in an evolving monetary system
We estimate a Bayesian vector autoregression for the U.K. with drifting coefficients and stochastic volatilities. We use it to characterize posterior densities for several objects that are useful for designing and evaluating monetary policy, including local approximations to the mean, persistence, and volatility of inflation. We present diverse sources of uncertainty that impinge on the posterior predictive density for inflation, including model uncertainty, policy drift, structural shifts and other shocks. We use a recently developed minimum entropy method to bring outside information to bear on inflation forecasts. We compare our predictive densities with the Bank of England's fan charts.