Strong Stationary Duality for Möbius Monotone Markov Chains: Unreliable Networks
For Markov chains with a partially ordered finite state space we show strong
stationary duality under the condition of Möbius monotonicity of the chain.
We relate Möbius monotonicity to other definitions of monotone chains. We give
examples of dual chains in this context which have transitions only upwards.
We illustrate the general theory by an analysis of nonsymmetric random walks
on the cube, with an application to networks of queues.
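The nonsymmetric random walk on the cube mentioned above can be illustrated with a toy simulation (a sketch under assumptions, not the paper's Möbius-monotone dual construction): an asynchronous walk on {0,1}^n where a uniformly chosen coordinate is redrawn with its own bias. For this simple variant the stationary distribution is the product of the coordinate Bernoulli laws, which the empirical frequencies should reproduce.

```python
import random
from collections import Counter

def cube_walk(p, steps, seed=0):
    """Asynchronous nonsymmetric random walk on {0,1}^n: at each step
    pick a coordinate i uniformly at random and redraw it as
    Bernoulli(p[i]).  For this toy variant the stationary law is the
    product measure prod_i Bernoulli(p[i])."""
    rng = random.Random(seed)
    n = len(p)
    x = [0] * n
    counts = Counter()
    for _ in range(steps):
        i = rng.randrange(n)
        x[i] = 1 if rng.random() < p[i] else 0
        counts[tuple(x)] += 1
    return counts

# empirical frequency of the all-ones state vs the product-form value
p = [0.7, 0.4, 0.9]
steps = 200_000
counts = cube_walk(p, steps)
freq = counts[(1, 1, 1)] / steps
exact = p[0] * p[1] * p[2]  # = 0.252
```

The function name `cube_walk` and the chosen biases are illustrative; the paper's chains (with queueing-network transitions) are more structured than this product-form example.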
Coalescence time and second largest eigenvalue modulus in the monotone reversible case
If T is the coalescence time of the Propp and Wilson [15] perfect simulation algorithm, the aim of this paper is to show that T depends on the second largest eigenvalue modulus of the transition matrix of the underlying Markov chain. This gives a relationship between the ordering based on the speed of convergence to stationarity in total variation distance and the ordering defined in terms of speed of coalescence in perfect simulation.
Key words and phrases: Peskun ordering, Covariance ordering, Efficiency ordering, MCMC, time-invariance estimating equations, asymptotic variance, continuous time Markov chains.
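The Propp-Wilson coalescence time T referred to above can be made concrete with a minimal sketch of monotone coupling from the past for a toy birth-death chain (an assumed example chain, not the one analyzed in the paper): chains started from the minimal and maximal states are driven by shared randomness, and the first time they meet yields an exact stationary sample.

```python
import random

def cftp_monotone(N, p, seed=0):
    """Monotone coupling from the past (Propp-Wilson) for a toy
    birth-death chain on {0, ..., N}: move up with prob p, down with
    prob 1 - p, sticking at the boundaries.  Chains from the minimal
    state 0 and maximal state N share the same randomness; when they
    coalesce, the common value is an exact draw from the stationary
    distribution.  Returns (sample, depth T at coalescence)."""
    rng = random.Random(seed)
    T = 1
    us = []  # us[t] drives the step at time -(t+1); reused across doublings
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, N
        # run both chains from time -T up to time 0 with fixed randomness
        for t in range(T - 1, -1, -1):
            step = 1 if us[t] < p else -1
            lo = min(max(lo + step, 0), N)
            hi = min(max(hi + step, 0), N)
        if lo == hi:
            return lo, T
        T *= 2  # not coalesced: restart further in the past
```

Coalescence here happens only through the boundary clamping, so how fast the gap between the two extreme chains closes is governed by the chain's mixing behavior, which is the link to the second largest eigenvalue modulus studied in the paper.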
On the Equivalence Between Deep NADE and Generative Stochastic Networks
Neural Autoregressive Distribution Estimators (NADEs) have recently been
shown as successful alternatives for modeling high dimensional multimodal
distributions. One issue associated with NADEs is that they rely on a
particular order of factorization for the modeled distribution. This issue has
been recently addressed by a variant of NADE called Orderless NADE and its
deeper version, Deep Orderless NADE. Orderless NADEs are trained based on a
criterion that stochastically maximizes the likelihood over all possible
orders of factorization. Unfortunately, ancestral sampling from deep NADE is very
factorizations. Unfortunately, ancestral sampling from deep NADE is very
expensive, corresponding to running through a neural net separately predicting
each of the visible variables given some others. This work makes a connection
between this criterion and the training criterion for Generative Stochastic
Networks (GSNs). It shows that training NADEs in this way also trains a GSN,
which defines a Markov chain associated with the NADE model. Based on this
connection, we show an alternative way to sample from a trained Orderless NADE
that allows trading off computation time against sample quality: a 3- to
10-fold speedup (taking into account the waste due to correlations between
consecutive samples of the chain) can be obtained without noticeably reducing
the quality of the samples. This is achieved using a novel sampling procedure
for GSNs called annealed GSN sampling, which, similar to tempering methods,
combines fast mixing (obtained thanks to steps at high noise levels) with
accurate samples (obtained thanks to steps at low noise levels).
Comment: ECML/PKDD 201
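The contrast between ancestral sampling and Markov-chain sampling that the abstract describes can be sketched on a toy joint distribution (an assumed two-variable tabulated model, not a trained NADE, and plain conditional resampling rather than the annealed GSN procedure): ancestral sampling draws each variable once in a fixed order, while the chain repeatedly resamples a randomly chosen variable given the others and leaves the same joint invariant.

```python
import random

# toy joint over (x1, x2) in {0,1}^2; an orderless model supplies
# conditionals for any variable given any subset -- here we tabulate them
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def ancestral(rng):
    """Ancestral sampling: draw x1 from its marginal, then x2 | x1."""
    p_x1 = joint[(1, 0)] + joint[(1, 1)]
    x1 = 1 if rng.random() < p_x1 else 0
    p_x2 = joint[(x1, 1)] / (joint[(x1, 0)] + joint[(x1, 1)])
    x2 = 1 if rng.random() < p_x2 else 0
    return (x1, x2)

def mc_step(state, rng):
    """Markov-chain alternative: resample one randomly chosen variable
    from its conditional given the other.  Each step is cheap, but
    consecutive samples are correlated -- the trade-off the paper
    quantifies."""
    x = list(state)
    i = rng.randrange(2)
    other = x[1 - i]
    if i == 0:
        num, den = joint[(1, other)], joint[(0, other)] + joint[(1, other)]
    else:
        num, den = joint[(other, 1)], joint[(other, 0)] + joint[(other, 1)]
    x[i] = 1 if rng.random() < num / den else 0
    return tuple(x)

rng = random.Random(0)
state = (0, 0)
n = 100_000
freq = {k: 0 for k in joint}
for _ in range(n):
    state = mc_step(state, rng)
    freq[state] += 1
```

Long runs of `mc_step` reproduce the same joint frequencies that `ancestral` targets; the annealed GSN sampler of the paper additionally schedules noise levels to mix faster than this plain chain.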
On Endogenous Random Consensus and Averaging Dynamics
Motivated by various random variations of the Hegselmann-Krause model for
opinion dynamics and gossip algorithms in an endogenously changing
environment, we propose a general framework for the study of endogenously
varying random averaging dynamics, i.e.\ averaging dynamics whose evolution
depends on history-dependent sources of randomness. We show that under
general assumptions on the averaging dynamics, such dynamics are convergent
almost surely. We also determine the limiting behavior of such dynamics and
show that they admit infinitely many time-varying Lyapunov functions.
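The deterministic Hegselmann-Krause model underlying the random variations above can be sketched in a few lines (a minimal baseline, assuming a scalar opinion space and a confidence radius `eps`; the paper's framework covers endogenously random generalizations of such dynamics): each agent synchronously moves to the average of the opinions within distance `eps` of its own, and opinions settle into clusters.

```python
def hk_step(x, eps):
    """One synchronous Hegselmann-Krause update: each agent moves to
    the average of all opinions within confidence radius eps."""
    return [
        sum(y for y in x if abs(y - xi) <= eps)
        / sum(1 for y in x if abs(y - xi) <= eps)
        for xi in x
    ]

def hk_run(x, eps, max_iter=200, tol=1e-9):
    """Iterate until opinions stop changing; the deterministic HK
    dynamics reaches a fixed configuration of clusters."""
    for _ in range(max_iter):
        nxt = hk_step(x, eps)
        if max(abs(a - b) for a, b in zip(nxt, x)) < tol:
            return nxt
        x = nxt
    return x

opinions = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
final = hk_run(opinions, eps=0.25)
# two clusters form, near 0.1 and near 0.9
```

With `eps=0.25` the two groups never see each other, so the dynamics converges to two separated consensus clusters; the averaging here is a convex combination at every step, which is the structure the paper's Lyapunov-function analysis exploits.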