Effective Sample Size for Importance Sampling based on discrepancy measures
The Effective Sample Size (ESS) is an important measure of efficiency of
Monte Carlo methods such as Markov Chain Monte Carlo (MCMC) and Importance
Sampling (IS) techniques. In the IS context, an approximation
of the theoretical ESS definition is widely applied, involving the inverse of
the sum of the squares of the normalized importance weights. This formula,
$\widehat{ESS} = 1/\sum_{n=1}^N \bar{w}_n^2$, where $\bar{w}_n$ denotes the $n$-th
normalized weight, has become an essential piece within Sequential Monte Carlo
(SMC) methods, to assess the convenience of a resampling step. From another
perspective, this expression is related to the Euclidean
distance between the probability mass described by the normalized weights and
the discrete uniform probability mass function (pmf). In this work, we derive
other possible ESS functions based on different discrepancy measures between
these two pmfs. Several examples are provided involving, for instance, the
geometric mean of the weights, the discrete entropy (including the perplexity
measure, already proposed in the literature) and the Gini coefficient, among others.
We list five theoretical requirements which a generic ESS function should
satisfy, allowing us to classify different ESS measures. We also compare the
most promising ones by means of numerical simulations.
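The classical approximation and the perplexity-based alternative mentioned among the examples above can be sketched in a few lines (a minimal sketch; the function names are ours, and NumPy is assumed):

```python
import numpy as np

def ess_inverse_squares(weights):
    """Classical ESS approximation: the inverse of the sum of the
    squares of the normalized importance weights."""
    w = np.asarray(weights, dtype=float)
    w_bar = w / w.sum()                 # normalize the importance weights
    return 1.0 / np.sum(w_bar ** 2)

def ess_perplexity(weights):
    """Perplexity-based ESS: the exponential of the discrete (Shannon)
    entropy of the normalized weights. Assumes strictly positive weights."""
    w = np.asarray(weights, dtype=float)
    w_bar = w / w.sum()
    entropy = -np.sum(w_bar * np.log(w_bar))
    return np.exp(entropy)
```

Both measures return $N$ for uniform weights and approach 1 as the weights degenerate onto a single sample, which is the behavior a generic ESS function is expected to exhibit.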
Parallel Metropolis chains with cooperative adaptation
Monte Carlo methods, such as Markov chain Monte Carlo (MCMC) algorithms, have
become very popular in signal processing in recent years. In this work, we
introduce a novel MCMC scheme where parallel MCMC chains interact, adapting
cooperatively the parameters of their proposal functions. Furthermore, the
novel algorithm distributes the computational effort adaptively, rewarding the
chains that are providing better performance, possibly even stopping others.
These extinct chains can be reactivated if the algorithm deems it necessary.
Numerical simulations show the benefits of the novel scheme.
Orthogonal parallel MCMC methods for sampling and optimization
Monte Carlo (MC) methods are widely used for Bayesian inference and
optimization in statistics, signal processing and machine learning. A
well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms.
In order to foster better exploration of the state space, especially in
high-dimensional applications, several schemes employing multiple parallel MCMC
chains have been recently introduced. In this work, we describe a novel
parallel interacting MCMC scheme, called {\it orthogonal MCMC} (O-MCMC), where
a set of "vertical" parallel MCMC chains share information using some
"horizontal" MCMC techniques working on the entire population of current
states. More specifically, the vertical chains are led by random-walk
proposals, whereas the horizontal MCMC techniques employ independent proposals,
thus allowing an efficient combination of global exploration and local
approximation. The interaction is contained in these horizontal iterations.
Within the analysis of different implementations of O-MCMC, we also present
novel schemes to reduce the overall computational cost of parallel multiple-try
Metropolis (MTM) chains. Furthermore, a modified version of
O-MCMC for optimization is provided by considering parallel simulated annealing
(SA) algorithms. Numerical results show the advantages of the proposed sampling
scheme in terms of efficiency in the estimation, as well as robustness with
respect to the initial values and the choice of the parameters.
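The vertical/horizontal interplay described above can be illustrated with a toy one-dimensional sketch (our own simplified construction, not the paper's algorithm): parallel random-walk Metropolis chains, plus a periodic "horizontal" step that applies an independence sampler whose Gaussian proposal is fitted to the current population of states.

```python
import numpy as np

def o_mcmc_sketch(log_target, x0, n_iter=1000, epoch=10, sigma_v=0.5, rng=None):
    """Toy sketch of vertical/horizontal interaction: N parallel
    random-walk Metropolis chains ("vertical"), with a periodic
    "horizontal" step using an independent Gaussian proposal built
    from the population mean and spread (one simple choice of many)."""
    rng = rng or np.random.default_rng()
    x = np.array(x0, dtype=float)          # shape (N,): one state per chain
    logp = np.array([log_target(xi) for xi in x])
    for t in range(1, n_iter + 1):
        # Vertical step: independent random-walk Metropolis per chain.
        prop = x + sigma_v * rng.standard_normal(x.shape)
        logp_prop = np.array([log_target(p) for p in prop])
        accept = np.log(rng.random(x.shape)) < logp_prop - logp
        x[accept], logp[accept] = prop[accept], logp_prop[accept]
        if t % epoch == 0:
            # Horizontal step: independence sampler fitted to the population.
            mu, sd = x.mean(), x.std() + 1e-6
            prop = mu + sd * rng.standard_normal(x.shape)
            logp_prop = np.array([log_target(p) for p in prop])
            # Independence-sampler acceptance ratio (proposal densities
            # enter in reverse, unlike the symmetric random walk above).
            log_q_prop = -0.5 * ((prop - mu) / sd) ** 2
            log_q_curr = -0.5 * ((x - mu) / sd) ** 2
            accept = np.log(rng.random(x.shape)) < (
                logp_prop - logp + log_q_curr - log_q_prop)
            x[accept], logp[accept] = prop[accept], logp_prop[accept]
    return x
```

The random-walk steps provide local moves around each chain's state, while the population-fitted independent proposal allows occasional global jumps, mirroring the combination of local and global exploration described in the abstract.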
A Monte Carlo Approach to Measure the Robustness of Boolean Networks
Emergence of robustness in biological networks is a paramount feature of
evolving organisms, but a study of this property in vivo, for any level of
representation such as Genetic, Metabolic, or Neuronal Networks, is a very hard
challenge. In the case of Genetic Networks, mathematical models have been used
in this context to provide insights on their robustness, but even in relatively
simple formulations, such as Boolean Networks (BN), it might not be feasible to
compute some measures for large system sizes. We describe in this work a Monte
Carlo approach to calculate the size of the largest basin of attraction of a
BN, which is intrinsically associated with its robustness, and which can be used
regardless of the network size. We show the stability of our method through
finite-size analysis and validate it with a full search on small networks.
Comment: presented at the 1st International Workshop on Robustness and Stability of
Biological Systems and Computational Solutions (WRSBS).
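The Monte Carlo idea sketched in this abstract can be illustrated as follows (a minimal sketch under our own assumptions: a synchronous update rule passed in as a function, states represented as tuples of 0/1):

```python
import random

def find_attractor(update, state):
    """Iterate a synchronous Boolean-network update from `state` until a
    previously seen state recurs; return a canonical label for the
    attractor cycle (its lexicographically smallest state)."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = update(state)
    cycle = trajectory[seen[state]:]       # states on the attractor cycle
    return min(cycle)                      # canonical representative

def largest_basin_fraction(update, n_nodes, n_samples=5000, rng=random):
    """Monte Carlo estimate of the relative size of the largest basin:
    sample uniform random initial states, map each to its attractor,
    and return the largest observed attractor frequency."""
    counts = {}
    for _ in range(n_samples):
        s0 = tuple(rng.randint(0, 1) for _ in range(n_nodes))
        counts[find_attractor(update, s0)] = counts.get(find_attractor(update, s0), 0) + 1
    return max(counts.values()) / n_samples
```

Because only a fixed number of random initial states is simulated, the cost is controlled by `n_samples` rather than by the $2^n$ size of the state space, which is what makes the estimate usable regardless of the network size.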
Critical Cooperation Range to Improve Spatial Network Robustness
A robust worldwide air-transportation network (WAN) is one that minimizes the
number of stranded passengers under a sequence of airport closures. Building on
top of this realistic example, here we address how spatial network robustness
can profit from cooperation between local actors. We swap a series of links
within a certain distance, a cooperation range, while following typical
constraints of spatially embedded networks. We find that the network robustness
is only improved above a critical cooperation range. Such improvement can be
described in the framework of a continuum transition, where the critical
exponents depend on the spatial correlation of connected nodes. For the WAN we
show that, except for Australia, all continental networks fall into the same
universality class. Practical implications of this result are also discussed.
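One possible reading of the rewiring move described above can be sketched as follows (a hypothetical minimal version of our own devising: a degree-preserving double-edge swap that is accepted only if both new links stay within the cooperation range `r`):

```python
import math, random

def swap_within_range(pos, edges, r, rng=random):
    """Attempt one cooperative rewiring move: pick two links (a, b) and
    (c, d), and swap them into (a, d) and (c, b) only if both new links
    lie within the cooperation range r. `pos` maps node -> (x, y);
    `edges` is a set of frozensets, modified in place on success."""
    def dist(u, v):
        (x1, y1), (x2, y2) = pos[u], pos[v]
        return math.hypot(x1 - x2, y1 - y2)

    e1, e2 = rng.sample(sorted(map(tuple, edges)), 2)
    (a, b), (c, d) = e1, e2
    new1, new2 = frozenset((a, d)), frozenset((c, b))
    # Reject self-loops, duplicate links, and swaps beyond the range r.
    if len(new1) < 2 or len(new2) < 2 or new1 in edges or new2 in edges:
        return False
    if dist(a, d) > r or dist(c, b) > r:
        return False
    edges -= {frozenset(e1), frozenset(e2)}
    edges |= {new1, new2}
    return True
```

Since the swap preserves every node's degree, repeated moves explore networks under the typical constraints of spatially embedded graphs, with `r` acting as the cooperation range below which no rewiring is allowed.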