Markov chains and optimality of the Hamiltonian cycle
We consider the Hamiltonian cycle problem (HCP) embedded in a controlled Markov decision process. In this setting, HCP reduces to an optimization problem on a set of Markov chains corresponding to a given graph. We prove that Hamiltonian cycles are minimizers for the trace of the fundamental matrix on a set of all stochastic transition matrices. In case of doubly stochastic matrices with symmetric linear perturbation, we show that Hamiltonian cycles minimize a diagonal element of a fundamental matrix for all admissible values of the perturbation parameter. In contrast to the previous work on this topic, our arguments are primarily based on probabilistic rather than algebraic methods.
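The trace criterion can be checked numerically on a small example (a sketch, not the paper's probabilistic argument): for an ergodic chain with transition matrix P, stationary distribution pi, and Pi the matrix with every row equal to pi, the fundamental matrix is Z = (I - P + Pi)^(-1). On the complete graph, a Hamiltonian-cycle chain gives a smaller trace than, for instance, the uniform doubly stochastic matrix.

```python
import numpy as np

def fundamental_matrix(P):
    """Z = (I - P + Pi)^(-1), with Pi the matrix of stationary rows."""
    n = P.shape[0]
    w, v = np.linalg.eig(P.T)                      # left eigenvectors of P
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])   # eigenvector for eigenvalue 1
    pi = pi / pi.sum()                             # normalize to a distribution
    Pi = np.tile(pi, (n, 1))
    return np.linalg.inv(np.eye(n) - P + Pi)

n = 4
P_ham = np.roll(np.eye(n), 1, axis=1)   # Hamiltonian cycle 0 -> 1 -> 2 -> 3 -> 0
P_unif = np.full((n, n), 1.0 / n)       # uniform walk on the complete graph

print(np.trace(fundamental_matrix(P_ham)))   # 2.5 for n = 4
print(np.trace(fundamental_matrix(P_unif)))  # 4.0
```

For the 4-cycle, the eigenvalues of P are the 4th roots of unity, giving trace 1 + 1/(1-i) + 1/(1+i) + 1/2 = 2.5, below the value 4 attained by the uniform matrix, consistent with the minimization result stated above.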
A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters
Markov Chain Monte Carlo (MCMC) methods have become increasingly popular for estimating the posterior probability distribution of parameters in hydrologic models. However, MCMC methods require the a priori definition of a proposal or sampling distribution, which determines the explorative capabilities and efficiency of the sampler and therefore the statistical properties of the Markov Chain and its rate of convergence. In this paper we present an MCMC sampler entitled the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), which is well suited to infer the posterior distribution of hydrologic model parameters. The SCEM-UA algorithm is a modified version of the original SCE-UA global optimization algorithm developed by Duan et al. [1992]. The SCEM-UA algorithm operates by merging the strengths of the Metropolis algorithm, controlled random search, competitive evolution, and complex shuffling in order to continuously update the proposal distribution and evolve the sampler to the posterior target distribution. Three case studies demonstrate that the adaptive capability of the SCEM-UA algorithm significantly reduces the number of model simulations needed to infer the posterior distribution of the parameters when compared with the traditional Metropolis-Hastings samplers.
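For context on the fixed-proposal baseline the abstract contrasts with, here is a minimal random-walk Metropolis sampler with a hand-picked Gaussian proposal scale (an illustrative sketch with toy choices, not the SCEM-UA algorithm itself):

```python
import numpy as np

def metropolis(log_post, x0, n_samples, step=0.5, rng=None):
    """Random-walk Metropolis with a *fixed* proposal scale `step` --
    the a-priori choice that adaptive samplers like SCEM-UA avoid."""
    rng = rng or np.random.default_rng(0)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        prop = x + step * rng.standard_normal()   # fixed Gaussian proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy "posterior": standard normal log-density (up to a constant).
draws = metropolis(lambda x: -0.5 * x ** 2, x0=0.0, n_samples=20000)
print(draws.mean(), draws.std())   # near 0 and 1 for the toy target
```

If `step` is chosen badly, the chain either rejects almost everything or moves in tiny increments; SCEM-UA's contribution, per the abstract, is to adapt the proposal continuously from a shuffled population of chains rather than fixing it in advance.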
Distributed Random Access Algorithm: Scheduling and Congestion Control
This paper provides proofs of the rate stability, Harris recurrence, and
epsilon-optimality of CSMA algorithms where the backoff parameter of each node
is based on its backlog. These algorithms require only local information and
are easy to implement.
The setup is a network of wireless nodes with a fixed conflict graph that
identifies pairs of nodes whose simultaneous transmissions conflict. The paper
studies two algorithms. The first algorithm schedules transmissions to keep up
with given arrival rates of packets. The second algorithm controls the arrivals
in addition to the scheduling and attempts to maximize the sum of the utilities
of the flows of packets at the different nodes. For the first algorithm, the
paper proves rate stability for strictly feasible arrival rates and also Harris
recurrence of the queues. For the second algorithm, the paper proves the
epsilon-optimality. Both algorithms operate with strictly local information in
the case of decreasing step sizes, and operate with the additional information
of the number of nodes in the network in the case of constant step size.
Achievable Performance in Product-Form Networks
We characterize the achievable range of performance measures in product-form
networks where one or more system parameters can be freely set by a network
operator. Given a product-form network and a set of configurable parameters, we
identify which performance measures can be controlled and which target values
can be attained. We also discuss an online optimization algorithm, which allows
a network operator to set the system parameters so as to achieve target
performance metrics. In some cases, the algorithm can be implemented in a
distributed fashion, of which we give several examples. Finally, we give
conditions that guarantee convergence of the algorithm, under the assumption
that the target performance metrics are within the achievable range.
Comment: 50th Annual Allerton Conference on Communication, Control and Computing - 201
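A one-dimensional caricature of the online tuning idea (not the paper's algorithm): in an M/M/1 queue, the simplest product-form network, the mean queue length is L(mu) = rho/(1-rho) with rho = lam/mu, and a network operator can adjust the configurable parameter mu by simple feedback until a target mean queue length inside the achievable range is reached. All constants below are illustrative.

```python
lam, target_L = 1.0, 2.0   # arrival rate and target mean queue length
mu = 2.0                   # initial service rate (must exceed lam)

def mean_queue(mu):
    """Mean number in system for an M/M/1 queue: rho / (1 - rho)."""
    rho = lam / mu
    return rho / (1.0 - rho)

for _ in range(2000):
    # Decrease mu when the queue is too short, increase it when too long.
    mu += 0.01 * (mean_queue(mu) - target_L)
    mu = max(mu, 1.01 * lam)   # safety clamp: keep the queue stable

print(mu, mean_queue(mu))      # converges to mu = 1.5, where L = 2
```

Here the target L = 2 pins down rho = 2/3, i.e. mu = 1.5, and the iteration contracts toward that fixed point; convergence is guaranteed only because the target lies in the achievable range, mirroring the convergence condition stated in the abstract.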
Non-Reversible Parallel Tempering: a Scalable Highly Parallel MCMC Scheme
Parallel tempering (PT) methods are a popular class of Markov chain Monte
Carlo schemes used to sample complex high-dimensional probability
distributions. They rely on a collection of interacting auxiliary chains
targeting tempered versions of the target distribution to improve the
exploration of the state-space. We provide here a new perspective on these
highly parallel algorithms and their tuning by identifying and formalizing a
sharp divide in the behaviour and performance of reversible versus
non-reversible PT schemes. We show theoretically and empirically that a class
of non-reversible PT methods dominates its reversible counterparts and identify
distinct scaling limits for the non-reversible and reversible schemes, the
former being a piecewise-deterministic Markov process and the latter a
diffusion. These results are exploited to identify the optimal annealing
schedule for non-reversible PT and to develop an iterative scheme approximating
this schedule. We provide a wide range of numerical examples supporting our
theoretical and methodological contributions. The proposed methodology is
applicable to sample from a distribution π with a density L with respect
to a reference distribution π0 and compute the normalizing constant. A
typical use case is when π0 is a prior distribution, L a likelihood
function and π the corresponding posterior.
Comment: 74 pages, 30 figures. The method is implemented in an open source
probabilistic programming language available at
https://github.com/UBC-Stat-ML/blangSD
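The deterministic even-odd (DEO) swap pattern behind non-reversible PT can be sketched in a few lines (a toy illustration with an arbitrary target and schedule, not the paper's tuned annealing schedule): even-indexed pairs of chains propose swaps on even rounds and odd-indexed pairs on odd rounds, instead of choosing a random pair as in the reversible variant.

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(0.1, 1.0, 8)         # toy annealing schedule
log_target = lambda x: -0.5 * x ** 2     # toy target: standard normal

xs = np.zeros(len(betas))                # one state per tempered chain
samples = []
for t in range(20000):
    # Local exploration: one random-walk Metropolis step per tempered chain.
    for i, b in enumerate(betas):
        prop = xs[i] + rng.standard_normal()
        if np.log(rng.uniform()) < b * (log_target(prop) - log_target(xs[i])):
            xs[i] = prop
    # DEO swap round: deterministically alternate even and odd pairs.
    for i in range(t % 2, len(betas) - 1, 2):
        log_acc = (betas[i] - betas[i + 1]) * (
            log_target(xs[i + 1]) - log_target(xs[i]))
        if np.log(rng.uniform()) < log_acc:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    samples.append(xs[-1])               # the beta = 1 chain targets log_target

draws = np.array(samples[2000:])
print(draws.mean(), draws.std())         # near 0 and 1 for the toy target
```

The swap acceptance probability exp((beta_i - beta_{i+1})(log pi(x_{i+1}) - log pi(x_i))) is the standard tempered-swap ratio; only the deterministic pair selection differs from reversible PT, and it is that change which yields the piecewise-deterministic scaling limit described in the abstract.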