16,747 research outputs found
Demonstration of Enhanced Monte Carlo Computation of the Fisher Information for Complex Problems
The Fisher information matrix summarizes the amount of information in a set
of data relative to the quantities of interest. There are many applications of
the information matrix in statistical modeling, system identification and
parameter estimation. This short paper reviews a feedback-based method and an
independent perturbation approach for computing the information matrix for
complex problems, where a closed form of the information matrix is not
achievable. We show through numerical examples how these methods improve the
accuracy of the estimate of the information matrix compared to the basic
resampling-based approach. Some relevant theory is summarized.
Riemann-Langevin Particle Filtering in Track-Before-Detect
Track-before-detect (TBD) is a powerful approach that consists in providing
the tracker with sensor measurements directly without pre-detection. Due to the
measurement model non-linearities, online state estimation in TBD is most
commonly solved via particle filtering. Existing particle filters for TBD do
not incorporate measurement information in their proposal distribution. The
Langevin Monte Carlo (LMC) is a sampling method whose proposal is able to
exploit all available knowledge of the posterior (that is, both prior and
measurement information). This letter synthesizes recent advances in LMC-based
filtering to describe the Riemann-Langevin particle filter and introduces its
novel application to TBD. The benefits of our approach are illustrated in a
challenging low-noise scenario.
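The key idea of Langevin proposals, exploiting the posterior gradient so that both prior and measurement information shape the move, can be illustrated with a plain (non-Riemannian) Metropolis-adjusted Langevin step. This is a generic sketch on a toy Gaussian target, not the letter's Riemann-Langevin particle filter.

```python
import numpy as np

def mala_step(theta, grad_log_post, log_post, eps, rng):
    """One Metropolis-adjusted Langevin (MALA) step: the proposal drifts
    along the posterior gradient before adding noise, and a
    Metropolis-Hastings correction keeps the target exact. The Riemannian
    variant additionally preconditions with a position-dependent metric."""
    drift = theta + 0.5 * eps**2 * grad_log_post(theta)
    prop = drift + eps * rng.standard_normal(theta.shape)
    # forward/backward proposal densities for the MH correction
    back = prop + 0.5 * eps**2 * grad_log_post(prop)
    log_q_fwd = -np.sum((prop - drift) ** 2) / (2 * eps**2)
    log_q_bwd = -np.sum((theta - back) ** 2) / (2 * eps**2)
    log_alpha = log_post(prop) - log_post(theta) + log_q_bwd - log_q_fwd
    if np.log(rng.random()) < log_alpha:
        return prop
    return theta

# Usage on a standard 2-D Gaussian target (illustrative only):
rng = np.random.default_rng(1)
log_post = lambda t: -0.5 * np.sum(t**2)
grad = lambda t: -t
theta = np.zeros(2)
samples = []
for _ in range(2000):
    theta = mala_step(theta, grad, log_post, 0.8, rng)
    samples.append(theta.copy())
samples = np.array(samples)
```

In a particle filter the same drifted kernel serves as the proposal distribution for each particle, which is how measurement information enters before resampling.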
Efficient Cosmological Parameter Estimation from Microwave Background Anisotropies
We revisit the issue of cosmological parameter estimation in light of current
and upcoming high-precision measurements of the cosmic microwave background
power spectrum. Physical quantities which determine the power spectrum are
reviewed, and their connection to familiar cosmological parameters is
explicated. We present a set of physical parameters, analytic functions of the
usual cosmological parameters, upon which the microwave background power
spectrum depends linearly (or with some other simple dependence) over a wide
range of parameter values. With such a set of parameters, microwave background
power spectra can be estimated with high accuracy and negligible computational
effort, vastly increasing the efficiency of cosmological parameter error
determination. The techniques presented here allow calculation of microwave
background power spectra far faster than comparably accurate direct
codes (after precomputing a handful of power spectra). We discuss various
issues of parameter estimation, including parameter degeneracies, numerical
precision, mapping between physical and cosmological parameters, and systematic
errors, and illustrate these considerations with an idealized model of the MAP
experiment.
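The computational trick, spectra that depend (nearly) linearly on a well-chosen physical parameter set, so a handful of precomputed spectra suffices, can be illustrated with a toy first-order emulator. The "spectrum" function below is a hypothetical stand-in for a real Boltzmann code, and the parameterization is ours, not the paper's.

```python
import numpy as np

def linear_spectrum_emulator(spectrum_fn, p0, dp):
    """Precompute a handful of spectra around a fiducial point p0, form
    central finite-difference derivatives, and then predict new spectra
    at negligible cost via a first-order expansion. The expansion is
    exact when the dependence on the parameters is linear."""
    c0 = spectrum_fn(p0)
    derivs = []
    for i in range(len(p0)):
        pp, pm = p0.copy(), p0.copy()
        pp[i] += dp[i]
        pm[i] -= dp[i]
        derivs.append((spectrum_fn(pp) - spectrum_fn(pm)) / (2 * dp[i]))
    derivs = np.array(derivs)

    def predict(p):
        return c0 + (np.asarray(p) - p0) @ derivs
    return predict

# Toy "spectrum" that is exactly linear in its two parameters:
ell = np.arange(2.0, 100.0)
toy = lambda p: p[0] / ell**2 + p[1] * np.ones_like(ell)
predict = linear_spectrum_emulator(toy, np.array([1.0, 0.5]), np.array([0.1, 0.1]))
```

After the initial precomputation, each prediction is a single matrix-vector product, which is what makes likelihood evaluations over a parameter grid cheap.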
Hamiltonian Monte Carlo Acceleration Using Surrogate Functions with Random Bases
For big data analysis, high computational cost for Bayesian methods often
limits their applications in practice. In recent years, there have been many
attempts to improve computational efficiency of Bayesian inference. Here we
propose an efficient and scalable computational technique for a
state-of-the-art Markov chain Monte Carlo (MCMC) method, namely Hamiltonian
Monte Carlo (HMC). The key idea is to explore and exploit the structure and
regularity in parameter space for the underlying probabilistic model to
construct an effective approximation of its geometric properties. To this end,
we build a surrogate function to approximate the target distribution using
properly chosen random bases and an efficient optimization process. The
resulting method provides a flexible, scalable, and efficient sampling
algorithm, which converges to the correct target distribution. We show that by
choosing the basis functions and optimization process differently, our method
can be related to other approaches for the construction of surrogate functions
such as generalized additive models or Gaussian process models. Experiments
based on simulated and real data show that our approach leads to substantially
more efficient sampling algorithms compared to existing state-of-the-art
methods.
Computing the Cramer-Rao bound of Markov random field parameters: Application to the Ising and the Potts models
This report considers the problem of computing the Cramer-Rao bound for the
parameters of a Markov random field. Computation of the exact bound is not
feasible for most fields of interest because their likelihoods are intractable
and have intractable derivatives. We show here how it is possible to formulate
the computation of the bound as a statistical inference problem that can be
solved approximately, but with arbitrarily high accuracy, by using a Monte
Carlo method. The proposed methodology is successfully applied to the Ising and
the Potts models.
Stochastic Gradient Hamiltonian Monte Carlo
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for
defining distant proposals with high acceptance probabilities in a
Metropolis-Hastings framework, enabling more efficient exploration of the state
space than standard random-walk proposals. The popularity of such methods has
grown significantly in recent years. However, a limitation of HMC methods is
the required gradient computation for simulation of the Hamiltonian dynamical
system-such computation is infeasible in problems involving a large sample size
or streaming data. Instead, we must rely on a noisy gradient estimate computed
from a subset of the data. In this paper, we explore the properties of such a
stochastic gradient HMC approach. Surprisingly, the natural implementation of
the stochastic approximation can be arbitrarily bad. To address this problem we
introduce a variant that uses second-order Langevin dynamics with a friction
term that counteracts the effects of the noisy gradient, maintaining the
desired target distribution as the invariant distribution. Results on simulated
data validate our theory. We also provide an application of our methods to a
classification task using neural networks and to online Bayesian matrix
factorization.
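The friction-corrected dynamics can be sketched in its SGD-with-momentum form: a friction term alpha*v damps the velocity, and injected noise with matched variance 2*alpha*eta keeps the target invariant despite the noisy gradient. This is a minimal sketch on a toy target, with the gradient-noise estimate B-hat taken as zero for simplicity; step sizes and the noisy-gradient model are our own choices.

```python
import numpy as np

def sghmc(stoch_grad_U, theta0, eta=0.01, alpha=0.1, n_steps=20000, seed=0):
    """Stochastic gradient HMC with friction: second-order Langevin
    dynamics in which friction counteracts the extra noise introduced
    by the stochastic (minibatch-style) gradient estimate."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    out = np.empty((n_steps, theta.size))
    for t in range(n_steps):
        noise = rng.normal(0.0, np.sqrt(2 * alpha * eta), theta.shape)
        v = (1 - alpha) * v - eta * stoch_grad_U(theta, rng) + noise
        theta = theta + v
        out[t] = theta
    return out

# Toy target U(theta) = theta^2 / 2 (a standard normal), with a noisy
# gradient standing in for a minibatch estimate:
noisy_grad = lambda th, rng: th + rng.normal(0.0, 0.5, th.shape)
samples = sghmc(noisy_grad, [0.0])
```

Dropping the friction term (alpha applied only to the noise) reproduces the naive stochastic-gradient HMC whose stationary distribution the paper shows can be arbitrarily far from the target.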
On optimality of kernels for approximate Bayesian computation using sequential Monte Carlo
Approximate Bayesian computation (ABC) has gained popularity over the past few years for the analysis of complex models arising in population genetics, epidemiology and systems biology. Sequential Monte Carlo (SMC) approaches have become workhorses in ABC. Here we discuss how to construct the perturbation kernels that are required in ABC SMC approaches in order to build a sequence of distributions that starts out from a suitably defined prior and converges towards the unknown posterior. We derive optimality criteria for different kernels, based on the Kullback-Leibler divergence between a distribution and the distribution of the perturbed particles. We show that for many complicated posterior distributions, locally adapted kernels tend to show the best performance. We find that the moderate added cost of adapting kernel functions is easily regained through higher acceptance rates. We demonstrate the computational efficiency gains in a range of toy examples which illustrate some of the challenges faced in real-world applications of ABC, before turning to two demanding parameter inference problems in molecular biology, which highlight the huge increases in efficiency that can be gained from the choice of optimal kernels. We conclude with a general discussion of the rational choice of perturbation kernels in ABC SMC settings.
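One flavour of locally adapted kernel perturbs each resampled particle with a Gaussian whose covariance is estimated from its nearest neighbours, so the move adapts to the local shape of the intermediate distribution. This is one of several local schemes of the kind the paper compares; the neighbour count and regularization below are our own illustrative choices.

```python
import numpy as np

def local_kernel_perturb(particles, weights, k=10, rng=None):
    """ABC SMC perturbation step with a locally adapted Gaussian kernel:
    resample particles by weight, then perturb each one using the
    empirical covariance of its k nearest neighbours."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = particles.shape
    # resample according to the current importance weights
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    out = np.empty_like(particles)
    for m, i in enumerate(idx):
        dist = np.linalg.norm(particles - particles[i], axis=1)
        nbrs = particles[np.argsort(dist)[:k]]
        cov = np.cov(nbrs.T) + 1e-9 * np.eye(d)   # local covariance
        out[m] = rng.multivariate_normal(particles[i], cov)
    return out

# Toy population: 2-D particles with equal weights
rng = np.random.default_rng(2)
parts = rng.normal(size=(200, 2))
w = np.ones(200)
new_parts = local_kernel_perturb(parts, w, k=15, rng=rng)
```

A globally adapted alternative would use one covariance for the whole population; the local version costs a nearest-neighbour search per particle, which is the moderate overhead the paper argues is repaid by higher acceptance rates.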