Perfect Sampling of the Master Equation for Gene Regulatory Networks
We present a Perfect Sampling algorithm that can be applied to the Master
Equation of Gene Regulatory Networks (GRNs). The method recasts Gillespie's
Stochastic Simulation Algorithm (SSA) in the light of Markov Chain Monte Carlo
methods and combines it with the Dominated Coupling From The Past (DCFTP)
algorithm to provide guaranteed sampling from the stationary distribution. We
show how the DCFTP-SSA can be generically applied to genetic networks with
feedback formed by the interconnection of linear enzymatic reactions and
nonlinear Monod- and Hill-type elements. We establish rigorous bounds on the
error and convergence of the DCFTP-SSA, as compared to the standard SSA,
through a set of increasingly complex examples. Once the building blocks for
GRNs have been introduced, the algorithm is applied to study properly averaged
dynamic properties of two experimentally relevant genetic networks: the toggle
switch, a two-dimensional bistable system, and the repressilator, a
six-dimensional genetic oscillator.
Comment: Minor rewriting; final version to be published in Biophysical Journal.
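The DCFTP coupling layer is involved, but the SSA it builds on is short. Below is a minimal sketch of Gillespie's SSA for a toy one-gene birth-death model (constant production, linear degradation); the model, rates, and names are illustrative, not taken from the paper, and the coupling-from-the-past machinery is omitted.

```python
import math
import random

def gillespie_ssa(x0, k, g, t_max, seed=0):
    """Gillespie's SSA for a toy birth-death gene model:
    protein produced at constant rate k, degraded at rate g * x.
    Returns the trajectory as a list of (time, copy_number) events."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_max:
        a_prod, a_deg = k, g * x          # reaction propensities
        a_tot = a_prod + a_deg
        if a_tot == 0.0:                  # no reaction can fire
            break
        # exponential waiting time to the next reaction
        t += -math.log(1.0 - rng.random()) / a_tot
        # pick which reaction fires, proportionally to its propensity
        if rng.random() * a_tot < a_prod:
            x += 1
        else:
            x -= 1
        traj.append((t, x))
    return traj

# stationary distribution is Poisson with mean k / g = 10
traj = gillespie_ssa(x0=0, k=5.0, g=0.5, t_max=200.0)
```

DCFTP wraps trajectories like this one between dominating upper and lower processes run from the past, which is what turns plain simulation into guaranteed stationary samples.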
Sequential Monte Carlo Methods for Protein Folding
We describe a class of growth algorithms for finding low energy states of
heteropolymers. These polymers form toy models for proteins, and the hope is
that similar methods will ultimately be useful for finding native states of
real proteins from heuristic or a priori determined force fields. Like
standard Markov chain Monte Carlo methods, these algorithms generate
Gibbs-Boltzmann distributions, but they do not obtain this distribution as
the stationary state of a suitably constructed Markov chain. Rather, they
are based on growing the polymer by
successively adding individual particles, guiding the growth towards
configurations with lower energies, and using "population control" to eliminate
bad configurations and increase the number of "good ones". This is not done via
a breadth-first implementation as in genetic algorithms, but depth-first via
recursive backtracking. As seen from various benchmark tests, the resulting
algorithms are extremely efficient for lattice models, and are still
competitive with other methods for simple off-lattice models.
Comment: 10 pages; published in NIC Symposium 2004, eds. D. Wolf et al. (NIC, Juelich, 2004).
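The best-known member of this class is PERM (the pruned-enriched Rosenbluth method). The sketch below grows 2D self-avoiding walks depth-first, with weight-based pruning and cloning; it is athermal (pure self-avoidance, no energies) for brevity, and the thresholds and names are illustrative rather than the paper's.

```python
import random
from collections import defaultdict

def perm(walk, weight, n_max, Z, rng, c_plus=2.0, c_minus=0.2):
    """One depth-first PERM tour: grow a 2D self-avoiding walk one
    monomer at a time, pruning low-weight chains and cloning
    ('enriching') high-weight ones via recursive backtracking.
    Z[n] = [weight sum, visits] at length n; Z[n][0] / (tours started)
    estimates the number of n-step self-avoiding walks."""
    n = len(walk) - 1                     # steps grown so far
    Z[n][0] += weight
    Z[n][1] += 1
    if n == n_max:
        return
    head = walk[-1]
    free = [(head[0] + dx, head[1] + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (head[0] + dx, head[1] + dy) not in walk]
    if not free:
        return                            # dead end: the chain dies
    w = weight * len(free)                # Rosenbluth weight update
    avg = Z[n + 1][0] / Z[n + 1][1] if Z[n + 1][1] else w
    if w > c_plus * avg:                  # enrich: two clones, half weight
        for _ in range(2):
            perm(walk + [rng.choice(free)], w / 2, n_max, Z, rng,
                 c_plus, c_minus)
    elif w < c_minus * avg:               # prune: kill half the time,
        if rng.random() < 0.5:            # double the survivor's weight
            perm(walk + [rng.choice(free)], 2 * w, n_max, Z, rng,
                 c_plus, c_minus)
    else:
        perm(walk + [rng.choice(free)], w, n_max, Z, rng, c_plus, c_minus)

rng = random.Random(1)
Z = defaultdict(lambda: [0.0, 0])
tours = 2000
for _ in range(tours):
    perm([(0, 0)], 1.0, 10, Z, rng)
c10 = Z[10][0] / tours    # estimate of the 10-step SAW count (exact: 44100)
```

Both pruning and cloning preserve the expected weight at every length, which is why the estimator stays unbiased; the energy-weighted version used for heteropolymers multiplies in Boltzmann factors at each growth step.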
Interacting Multiple Try Algorithms with Different Proposal Distributions
We propose a new class of interacting Markov chain Monte Carlo (MCMC)
algorithms designed to increase the efficiency of a modified multiple-try
Metropolis (MTM) algorithm. The extension with respect to the existing MCMC
literature is twofold. The proposed sampler extends the basic MTM algorithm by
allowing different proposal distributions in the multiple-try generation step.
We exploit the structure of the MTM algorithm with different proposal
distributions to naturally introduce an interacting MTM mechanism (IMTM) that
expands the class of population Monte Carlo methods. We show the validity of
the algorithm and discuss the choice of the selection weights and of the
different proposals. We provide numerical studies which show that the new
algorithm can perform better than the basic MTM algorithm and that the
interaction mechanism allows the IMTM to efficiently explore the state space.
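The multiple-try generation step with heterogeneous proposals can be sketched as follows. This is a minimal single-chain version (the interaction mechanism is omitted), using symmetric Gaussian kernels of different scales, for which the selection weight reduces to pi(y_j); all names and scales are illustrative.

```python
import math
import random

def mtm_step(x, log_pi, scales, rng):
    """One multiple-try Metropolis step in which each of the k tries
    uses its own symmetric Gaussian proposal scale.  With symmetric
    kernels and weight w_j(y, x) = pi(y), the generalized MH ratio
    below leaves pi invariant."""
    k = len(scales)
    # 1. draw one candidate from each proposal distribution
    ys = [x + rng.gauss(0.0, s) for s in scales]
    logw_y = [log_pi(y) for y in ys]
    # 2. select a candidate with probability proportional to pi(y_j)
    m = max(logw_y)
    j = rng.choices(range(k), weights=[math.exp(l - m) for l in logw_y])[0]
    y = ys[j]
    # 3. reference set: x_i* ~ T_i(. | y) for i != j, and x_j* = x
    refs = [x if i == j else y + rng.gauss(0.0, s)
            for i, s in enumerate(scales)]
    logw_r = [log_pi(r) for r in refs]
    # 4. accept with min(1, sum_j pi(y_j) / sum_j pi(x_j*))
    M = max(logw_y + logw_r)
    num = sum(math.exp(l - M) for l in logw_y)
    den = sum(math.exp(l - M) for l in logw_r)
    if rng.random() < min(1.0, num / den):
        return y
    return x

# sample a standard normal with three very different proposal scales
rng = random.Random(0)
chain, x = [], 0.0
for _ in range(20000):
    x = mtm_step(x, lambda t: -0.5 * t * t, (0.5, 2.0, 5.0), rng)
    chain.append(x)
```

Mixing small and large scales in one step is the point: the chain gets long-range tries for mode jumping and short-range tries for local refinement, without hand-tuning a single scale.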
Detection Strategies for Extreme Mass Ratio Inspirals
The capture of compact stellar remnants by galactic black holes provides a
unique laboratory for exploring the near horizon geometry of the Kerr
spacetime, or possible departures from general relativity if the central cores
prove not to be black holes. The gravitational radiation produced by these
Extreme Mass Ratio Inspirals (EMRIs) encodes a detailed map of the black hole
geometry, and the detection and characterization of these signals is a major
scientific goal for the LISA mission. The waveforms produced are very complex,
and the signals need to be coherently tracked for hundreds to thousands of
cycles to produce a detection, making EMRI signals one of the most challenging
data analysis problems in all of gravitational wave astronomy. Estimates for
the number of templates required to perform an exhaustive grid-based
matched-filter search for these signals are astronomically large, and far out
of reach of current computational resources. Here I describe an alternative
approach that employs a hybrid between Genetic Algorithms and Markov Chain
Monte Carlo techniques, along with several time saving techniques for computing
the likelihood function. This approach has proven effective at the blind
extraction of relatively weak EMRI signals from simulated LISA data sets.
Comment: 10 pages, 4 figures, Updated for LISA 8 Symposium Proceedings.
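The hybrid idea can be caricatured as a population of parameter points updated by Metropolis-style mutation moves and genetic-algorithm crossover moves. The sketch below is a heuristic search on a toy two-mode surface, not the author's pipeline: the crossover move breaks detailed balance, and every function, surface, and parameter here is an illustrative stand-in for the actual EMRI likelihood.

```python
import math
import random

def hybrid_search(log_like, dim, pop_size=20, n_gen=300,
                  step=0.3, p_cross=0.3, seed=0):
    """Toy hybrid of genetic-algorithm crossover and Metropolis-style
    mutation, used as a heuristic search for the maximum of a
    multimodal log-likelihood.  Crossover breaks detailed balance,
    so this is a search sketch, not a valid sampler."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(pop_size)]
    ll = [log_like(p) for p in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            if rng.random() < p_cross:
                # crossover: splice coordinates from a random partner
                j = rng.randrange(pop_size)
                cand = [pop[i][d] if rng.random() < 0.5 else pop[j][d]
                        for d in range(dim)]
            else:
                # mutation: local Gaussian move
                cand = [v + rng.gauss(0.0, step) for v in pop[i]]
            ll_c = log_like(cand)
            # Metropolis-style acceptance on the log-likelihood
            if ll_c >= ll[i] or rng.random() < math.exp(ll_c - ll[i]):
                pop[i], ll[i] = cand, ll_c
    best = max(range(pop_size), key=lambda i: ll[i])
    return pop[best], ll[best]

def two_mode(p):
    """Toy surface: a tall narrow mode at (2, 2), a weaker one at (-2, -2)."""
    d1 = sum((v - 2.0) ** 2 for v in p)
    d2 = sum((v + 2.0) ** 2 for v in p)
    return max(-d1 / 0.5, -2.0 - d2 / 0.5)

best, best_ll = hybrid_search(two_mode, dim=2)
```

Crossover lets information found by one population member (a good value of one parameter) propagate to the others, which is what makes the hybrid effective on likelihood surfaces with many secondary maxima.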
Accelerating Asymptotically Exact MCMC for Computationally Intensive Models via Local Approximations
We construct a new framework for accelerating Markov chain Monte Carlo in
posterior sampling problems where standard methods are limited by the
computational cost of the likelihood, or of numerical models embedded therein.
Our approach introduces local approximations of these models into the
Metropolis-Hastings kernel, borrowing ideas from deterministic approximation
theory, optimization, and experimental design. Previous efforts at integrating
approximate models into inference typically sacrifice either the sampler's
exactness or efficiency; our work seeks to address these limitations by
exploiting useful convergence characteristics of local approximations. We prove
the ergodicity of our approximate Markov chain, showing that it samples
asymptotically from the \emph{exact} posterior distribution of interest. We
describe variations of the algorithm that employ either local polynomial
approximations or local Gaussian process regressors. Our theoretical results
reinforce the key observation underlying this paper: when the likelihood has
some \emph{local} regularity, the number of model evaluations per MCMC step can
be greatly reduced without biasing the Monte Carlo average. Numerical
experiments demonstrate multiple order-of-magnitude reductions in the number of
forward model evaluations used in representative ODE and PDE inference
problems, with both synthetic and real data.
Comment: A major update of the theory and examples.
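The evaluation-saving idea can be illustrated with a crude caricature: Metropolis-Hastings in which the expensive log-likelihood is replaced by the nearest cached evaluation whenever one lies within a small radius, and the true model is run (and cached) otherwise. This sketch is far simpler than the paper's method, which fits local polynomial or Gaussian-process surrogates with adaptive refinement and carries ergodicity guarantees this code does not; all parameters and the toy target are illustrative.

```python
import math
import random

def approx_mh(log_like, n_steps, step=0.5, radius=0.05, seed=0):
    """Metropolis-Hastings with a nearest-neighbor cache standing in
    for a local approximation of an expensive log-likelihood.
    Returns (samples, number of true model evaluations)."""
    rng = random.Random(seed)
    cache = []                          # list of (theta, log_like(theta))

    def surrogate(x):
        if cache:
            t, v = min(cache, key=lambda p: abs(p[0] - x))
            if abs(t - x) < radius:
                return v                # reuse a nearby cached evaluation
        v = log_like(x)                 # expensive forward-model run
        cache.append((x, v))
        return v

    x = 0.0
    ll = surrogate(x)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        ll_y = surrogate(y)
        if math.log(1.0 - rng.random()) < ll_y - ll:
            x, ll = y, ll_y
        samples.append(x)
    return samples, len(cache)

calls = [0]
def expensive_ll(x):
    calls[0] += 1                       # stands in for a costly ODE/PDE solve
    return -0.5 * x * x

samples, n_cached = approx_mh(expensive_ll, n_steps=5000)
```

Because a new model run is triggered only when no cached point lies within `radius`, the number of evaluations is bounded by a covering of the region the chain actually visits, and stays far below the number of MCMC steps; the paper's local regressors exploit the same locality while also controlling the induced bias.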