Persistence in fluctuating environments
Understanding under what conditions interacting populations, whether they be
plants, animals, or viral particles, coexist is a question of theoretical and
practical importance in population biology. Both biotic interactions and
environmental fluctuations are key factors that can facilitate or disrupt
coexistence. To better understand the interplay between these deterministic
and stochastic forces, we develop a mathematical theory extending the nonlinear
theory of permanence for deterministic systems to stochastic difference and
differential equations. Our condition for coexistence requires that there be a
fixed set of weights, one per interacting population, such that the weighted
combination of the populations' invasion rates is positive for every (ergodic)
stationary distribution associated with a subcollection of the populations.
Here, an invasion rate corresponds to an average per-capita growth
rate along a stationary distribution. When this condition holds and there is
sufficient noise in the system, we show that the populations approach a unique
positive stationary distribution. Moreover, we show that our coexistence
criterion is robust to small perturbations of the model functions. Using this
theory, we illustrate that (i) environmental noise enhances or inhibits
coexistence in communities with rock-paper-scissors dynamics depending on
correlations between interspecific demographic rates, (ii) stochastic variation
in mortality rates has no effect on the coexistence criteria for discrete-time
Lotka-Volterra communities, and (iii) random forcing can promote genetic
diversity in the presence of exploitative interactions.
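The weighted invasion-rate criterion described above can be stated compactly. The following display is a hedged paraphrase in assumed notation (weights \(p_i\), invasion rates \(r_i(\mu)\)), not the paper's exact statement:

```latex
\exists\, p_1,\dots,p_n > 0 \ \text{such that} \quad
\sum_{i=1}^{n} p_i \, r_i(\mu) > 0
\quad \text{for every ergodic stationary distribution } \mu
\text{ supported by a strict subcollection of the populations,}
```

where \(r_i(\mu)\) denotes the average per-capita growth rate of population \(i\) along \(\mu\).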
Time series prediction via aggregation: an oracle bound including numerical cost
We address the problem of forecasting a time series satisfying the Causal
Bernoulli Shift model, using a parametric set of predictors. The aggregation
technique provides a predictor with well-established and quite satisfying
theoretical properties expressed by an oracle inequality for the prediction
risk. The numerical computation of the aggregated predictor usually relies on a
Markov chain Monte Carlo method whose convergence should be evaluated. In
particular, it is crucial to bound the number of simulations needed to achieve
a numerical precision of the same order as the prediction risk. In this
direction we present a fairly general result which can be seen as an oracle
inequality including the numerical cost of the predictor computation. The
numerical cost appears by letting the oracle inequality depend on the number of
simulations required in the Monte Carlo approximation. Some numerical
experiments are then carried out to support our findings.
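As a concrete illustration of the aggregation-plus-Monte-Carlo pipeline discussed above, the following sketch aggregates a finite family of AR(1)-type predictors with exponential (Gibbs) weights and then approximates the aggregate with a Metropolis chain, so that the number of simulations N plays the role of the numerical cost in the oracle bound. All specifics here (the AR(1) data, the candidate set, the temperature lam) are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: an AR(1) series (illustrative, not the paper's model).
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.normal(scale=0.5)

thetas = np.linspace(-0.9, 0.9, 19)            # candidate AR coefficients
preds = x[:-1, None] * thetas                  # one-step predictions, per candidate
risks = ((x[1:, None] - preds) ** 2).mean(axis=0)  # empirical prediction risks

lam = 5.0                                      # temperature (assumption)
w = np.exp(-lam * risks)
w /= w.sum()                                   # Gibbs / exponential weights

x_next_exact = (w * thetas).sum() * x[-1]      # exact aggregated prediction

# Monte Carlo approximation of the same aggregate via a random-walk
# Metropolis chain on the discrete candidate index set; N controls the
# numerical precision, mirroring the role of the simulation budget in
# the oracle bound.
N = 20_000
idx = 0
samples = np.empty(N, dtype=int)
for n in range(N):
    prop = (idx + rng.choice([-1, 1])) % len(thetas)   # symmetric proposal
    if rng.random() < min(1.0, w[prop] / w[idx]):      # Metropolis accept
        idx = prop
    samples[n] = idx
x_next_mc = thetas[samples].mean() * x[-1]
```

Increasing N tightens the gap between `x_next_mc` and `x_next_exact`, which is the trade-off the oracle inequality with numerical cost makes explicit.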
Interacting Multiple Try Algorithms with Different Proposal Distributions
We propose a new class of interacting Markov chain Monte Carlo (MCMC)
algorithms designed to increase the efficiency of a modified multiple-try
Metropolis (MTM) algorithm. The extension with respect to the existing MCMC
literature is twofold. The proposed sampler extends the basic MTM algorithm by
allowing different proposal distributions in the multiple-try generation step.
We exploit the structure of the MTM algorithm with different proposal
distributions to naturally introduce an interacting MTM mechanism (IMTM) that
expands the class of population Monte Carlo methods. We show the validity of
the algorithm and discuss the choice of the selection weights and of the
different proposals. We provide numerical studies which show that the new
algorithm can perform better than the basic MTM algorithm and that the
interaction mechanism allows the IMTM to efficiently explore the state space.
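The multiple-try mechanism with try-specific proposals can be sketched as follows. This is a minimal illustration with symmetric Gaussian random-walk proposals (so the importance weights reduce to the target density) on an assumed bimodal toy target, and it shows the basic MTM step with different proposals only, not the interacting (IMTM) extension:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Assumed toy target: equal mixture of N(-3, 1) and N(3, 1).
    return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)

# One Gaussian random-walk proposal per try, with different scales.
scales = np.array([0.5, 2.0, 8.0])

def mtm_step(x):
    k = len(scales)
    ys = x + scales * rng.normal(size=k)       # y_j ~ T_j(x, .)
    # Symmetric proposals: selection weights reduce to the target density.
    wy = np.exp(log_target(ys))
    j = rng.choice(k, p=wy / wy.sum())         # select one try
    y = ys[j]
    # Reference points x*_i ~ T_i(y, .) for i != j, with x*_j set to x.
    xs = y + scales * rng.normal(size=k)
    xs[j] = x
    wx = np.exp(log_target(xs))
    # Generalised Metropolis ratio over the two weight sums.
    if rng.random() < min(1.0, wy.sum() / wx.sum()):
        return y
    return x

x = 0.0
draws = []
for _ in range(5000):
    x = mtm_step(x)
    draws.append(x)
draws = np.asarray(draws)
```

The small-scale try refines locally while the large-scale try enables mode jumps, which is the intuition behind allowing heterogeneous proposals in the generation step.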
An Adaptive Interacting Wang-Landau Algorithm for Automatic Density Exploration
While statisticians are well accustomed to performing exploratory analysis in
the modeling stage of an analysis, conducting preliminary general-purpose
exploratory analysis in the Monte Carlo stage (or, more generally, the
model-fitting stage) is an area that we feel deserves much further attention.
Towards this aim, this paper proposes a
general-purpose algorithm for automatic density exploration. The proposed
exploration algorithm combines and expands upon components from various
adaptive Markov chain Monte Carlo methods, with the Wang-Landau algorithm at
its heart. Additionally, the algorithm is run on interacting parallel chains --
a feature that both decreases computational cost and stabilizes the algorithm,
improving its ability to explore the density. Performance is studied
in several applications. Through a Bayesian variable selection example, the
authors demonstrate the convergence gains obtained with interacting chains. The
ability of the algorithm's adaptive proposal to induce mode-jumping is
illustrated through a trimodal density and a Bayesian mixture modeling
application. Lastly, through a 2D Ising model, the authors demonstrate the
ability of the algorithm to overcome the high correlations encountered in
spatial models.
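To make the Wang-Landau component at the algorithm's heart concrete, here is a minimal sketch on an assumed one-dimensional trimodal target: the space is partitioned into bins, the chain targets the density divided by a running bias, and the bias is increased in whichever bin the chain occupies, which flattens visits across bins and forces mode-jumping. The target, the binning, and the simple deterministic step-size decay are illustrative simplifications (the full algorithm adapts its proposals, uses a flat-histogram schedule, and runs interacting parallel chains):

```python
import numpy as np

rng = np.random.default_rng(2)

def log_density(x):
    # Assumed trimodal target: modes at -5, 0, and 5.
    return np.logaddexp.reduce([-0.5 * (x - m) ** 2 for m in (-5.0, 0.0, 5.0)])

# Partition the space into bins; Wang-Landau biases the chain so that
# every bin is visited comparably often.
edges = np.linspace(-8, 8, 17)
log_theta = np.zeros(len(edges) - 1)   # running bias (log scale)
log_gamma = 1.0                        # adaptation step size

def bin_of(x):
    return np.clip(np.searchsorted(edges, x) - 1, 0, len(log_theta) - 1)

x = 0.0
visits = np.zeros_like(log_theta)
for it in range(50_000):
    y = x + rng.normal(scale=2.0)
    # Metropolis step targeting density / bias: occupied bins are penalised.
    a = (log_density(y) - log_theta[bin_of(y)]) - \
        (log_density(x) - log_theta[bin_of(x)])
    if np.log(rng.random()) < a:
        x = y
    b = bin_of(x)
    log_theta[b] += log_gamma          # raise the bias of the current bin
    log_theta -= log_theta.mean()      # keep the bias centred
    visits[b] += 1
    # Simplified deterministic decay of the adaptation step.
    log_gamma = max(log_gamma * 0.9999, 1e-3)
```

The growing bias on frequently visited bins is what pushes the chain out of a mode it would otherwise linger in, which is the mode-jumping behaviour the abstract describes.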
Accelerating MCMC Algorithms
Markov chain Monte Carlo algorithms are used to simulate from complex
statistical distributions by way of a local exploration of these distributions.
This local feature avoids heavy demands on understanding the nature of the
target, but it can also induce a lengthy exploration of this target, with the
required number of simulations growing with the dimension of the problem and
with the complexity of the data behind it. Several
techniques are available towards accelerating the convergence of these Monte
Carlo algorithms, either at the exploration level (as in tempering, Hamiltonian
Monte Carlo and partly deterministic methods) or at the exploitation level
(with Rao-Blackwellisation and scalable methods). This is a survey paper
submitted to WIREs Computational Statistics.
Towards optimal scaling of Metropolis-coupled Markov chain Monte Carlo
We consider optimal temperature spacings for Metropolis-coupled Markov chain Monte Carlo (MCMCMC) and Simulated Tempering algorithms. We prove that, under certain conditions, it is optimal (in terms of maximising the expected squared jumping distance) to space the temperatures so that the proportion of temperature swaps which are accepted is approximately 0.234. This generalises related work by physicists, and is consistent with previous work about optimal scaling of random-walk Metropolis algorithms.
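A minimal sketch of the quantity being tuned: run Metropolis-coupled chains on an assumed one-dimensional Gaussian target with a geometric inverse-temperature ladder, and monitor the fraction of accepted temperature swaps; the spacing ratio is the knob one would adjust until this fraction approaches 0.234. The ratio, target, and chain settings here are illustrative, and in one dimension the high-dimensional optimal-scaling regime does not literally apply:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x ** 2               # standard Gaussian (illustrative)

# Geometric inverse-temperature ladder: beta_i = ratio**(-i).
ratio = 3.0                            # spacing ratio (assumption, the tuning knob)
betas = ratio ** -np.arange(5)         # 1, 1/3, 1/9, ...
xs = np.zeros(len(betas))              # one state per tempered chain

accepted = swaps = 0
for it in range(20_000):
    # Within-chain random-walk Metropolis moves, scaled to the temperature.
    for i, b in enumerate(betas):
        y = xs[i] + rng.normal(scale=1.0 / np.sqrt(b))
        if np.log(rng.random()) < b * (log_target(y) - log_target(xs[i])):
            xs[i] = y
    # Propose swapping the states of a random adjacent temperature pair.
    i = rng.integers(len(betas) - 1)
    dlog = (betas[i] - betas[i + 1]) * (log_target(xs[i + 1]) - log_target(xs[i]))
    swaps += 1
    if np.log(rng.random()) < dlog:
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
        accepted += 1

swap_rate = accepted / swaps           # the acceptance fraction to be tuned
```

Widening the ratio lowers `swap_rate` and narrowing it raises it; the result summarised above says that, under the stated conditions, driving this fraction towards roughly 0.234 maximises the expected squared jumping distance.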