Reduction of Markov chains with two-time-scale state transitions
In this paper, we consider a general class of two-time-scale Markov chains whose transition rate matrix depends on a parameter. We assume that some transition rates of the Markov chain tend to infinity in the limit of this parameter. We divide the state space of the Markov chain into a fast state space and a slow state space and define a reduced chain on the slow state space. Our main result is that, in this limit, the distribution of the original chain converges in total variation distance to that of the reduced chain, uniformly in time.
Comment: 30 pages, 3 figures; Stochastics: An International Journal of Probability and Stochastic Processes, 201
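As a minimal numerical sketch of this kind of reduction (not the paper's construction; the rates, the three-state chain, and the lumping rule below are illustrative assumptions), consider a continuous-time chain whose two fast states are lumped into a single slow state, with the exit rate averaged over the fast subsystem's quasi-stationary distribution:

```python
import numpy as np

def stationary(Q):
    """Stationary distribution of a CTMC generator Q: solve pi Q = 0, sum(pi) = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

a, b, r0, r1, s = 2.0, 1.0, 0.5, 1.5, 0.3   # illustrative rates

def full_generator(eps):
    # states 0 and 1 form the fast subspace (rates of order 1/eps between them);
    # state 2 is the slow state
    return np.array([
        [-(a / eps + r0),  a / eps,          r0],
        [b / eps,          -(b / eps + r1),  r1],
        [s / 2,            s / 2,            -s],
    ])

# Reduced chain on the slow space {F, 2}: the fast subspace is lumped into one
# state F, whose exit rate is averaged over the fast chain's quasi-stationary
# distribution (b, a) / (a + b).
lam = (b * r0 + a * r1) / (a + b)
Q_red = np.array([[-lam, lam],
                  [s,    -s]])
pi_red = stationary(Q_red)

tvs = []
for eps in (1e-1, 1e-3, 1e-5):
    pi = stationary(full_generator(eps))
    pi_lumped = np.array([pi[0] + pi[1], pi[2]])
    tvs.append(0.5 * np.abs(pi_lumped - pi_red).sum())
print(tvs)  # total variation distances shrink as eps -> 0
```

This only checks stationary distributions, a weaker statement than the uniform-in-time convergence of the full result, but it shows the lumped chain becoming exact as the time-scale separation grows.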
Asymptotic Expansions for Stationary Distributions of Perturbed Semi-Markov Processes
New algorithms for computing asymptotic expansions of stationary distributions of nonlinearly perturbed semi-Markov processes are presented. The algorithms are based on special techniques of sequential phase space reduction, which can be applied to processes with asymptotically coupled and uncoupled finite phase spaces.
Comment: 83 pages
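For intuition, here is a minimal sketch of a first-order expansion in the much simpler regularly perturbed discrete-time case (the paper treats nonlinear perturbations of semi-Markov processes via phase space reduction; the matrices below are illustrative assumptions). Expanding pi(eps) P(eps) = pi(eps) to first order gives pi1 (I - P0) = pi0 P1 with pi1 summing to zero:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a stochastic matrix P: pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

P0 = np.array([[0.5, 0.5, 0.0],
               [0.3, 0.4, 0.3],
               [0.0, 0.6, 0.4]])
# rows of P1 sum to zero, so P0 + eps * P1 stays stochastic for small eps
P1 = np.array([[-0.1,  0.0,  0.1],
               [ 0.1, -0.1,  0.0],
               [ 0.1,  0.0, -0.1]])

pi0 = stationary(P0)
# first-order coefficient: solve pi1 (I - P0) = pi0 P1 with pi1 . 1 = 0
A = np.vstack([(np.eye(3) - P0).T, np.ones(3)])
rhs = np.concatenate([P1.T @ pi0, [0.0]])
pi1 = np.linalg.lstsq(A, rhs, rcond=None)[0]

eps = 1e-4
pi_eps = stationary(P0 + eps * P1)
err = np.abs(pi_eps - (pi0 + eps * pi1)).max()   # remainder should be O(eps^2)
print(err)
```

The expansion error is of order eps squared, as expected for a first-order expansion; the paper's algorithms produce higher-order terms in the singular (asymptotically coupled) setting where this direct approach breaks down.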
Markov chains and optimality of the Hamiltonian cycle
We consider the Hamiltonian cycle problem (HCP) embedded in a controlled Markov decision process. In this setting, HCP reduces to an optimization problem over a set of Markov chains corresponding to a given graph. We prove that Hamiltonian cycles are minimizers of the trace of the fundamental matrix over the set of all stochastic transition matrices. In the case of doubly stochastic matrices with a symmetric linear perturbation, we show that Hamiltonian cycles minimize a diagonal element of the fundamental matrix for all admissible values of the perturbation parameter. In contrast to previous work on this topic, our arguments are primarily probabilistic rather than algebraic.
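The trace minimization can be checked numerically on a small example (the graph K4 and the uniform-random-walk comparison chain below are illustrative choices, not the paper's construction). The fundamental matrix of an ergodic chain is Z = (I - P + Pi)^{-1}, where Pi carries the stationary distribution in every row:

```python
import numpy as np

n = 4
# Hamiltonian cycle on K4, as a deterministic cyclic chain
P_hc = np.roll(np.eye(n), 1, axis=1)
# uniform random walk on K4: another stochastic matrix on the same graph
P_rw = (np.ones((n, n)) - np.eye(n)) / (n - 1)

def trace_fundamental(P):
    """Trace of the fundamental matrix Z = (I - P + Pi)^{-1},
    where every row of Pi is the stationary distribution of P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    return np.trace(Z)

t_hc = trace_fundamental(P_hc)   # equals (n + 1) / 2 = 2.5 for the 4-cycle
t_rw = trace_fundamental(P_rw)
print(t_hc, t_rw)  # the Hamiltonian cycle gives the smaller trace
```

For the n-cycle the eigenvalues of P are the n-th roots of unity, which gives trace Z = 1 + (n - 1)/2, while the uniform walk on K4 gives 3.25, consistent with the minimization claim.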
Landscapes of Non-gradient Dynamics Without Detailed Balance: Stable Limit Cycles and Multiple Attractors
The landscape is one of the key notions in the literature on biological processes and the physics of complex systems with both deterministic and stochastic dynamics. Large deviation theory (LDT) provides a possible mathematical basis for this intuition. In terms of Freidlin-Wentzell LDT, we discuss explicitly two issues in singularly perturbed stationary diffusion processes arising from nonlinear differential equations: (1) For a process whose corresponding ordinary differential equation has a stable limit cycle, the stationary solution exhibits a clear separation of scales: an exponential term and an algebraic prefactor. The large deviation rate function attains its minimum of zero on the entire stable limit cycle, while the leading term of the prefactor is inversely proportional to the velocity of the non-uniform periodic oscillation on the cycle. (2) For dynamics with multiple stable fixed points and saddles, there is in general a breakdown of detailed balance among the corresponding attractors. Two landscapes, a local and a global one, arise in LDT, and a Markov jump process with cycle flux emerges in the low-noise limit. The local landscape is pertinent to the transition rates between neighboring stable fixed points, while the global landscape defines a nonequilibrium steady state. The latter can have nondifferentiable points for a stationary dynamics with cycle flux. LDT, serving as the mathematical foundation for emergent landscapes, deserves further investigation.
Comment: 4 figures
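Issue (2), the emergence of a low-noise jump process with cycle flux, can be illustrated with a hand-built three-state example (the Arrhenius-type rates and barrier heights below are assumptions for illustration, not derived from any particular diffusion). When the barrier for jumping clockwise differs from the counter-clockwise one, detailed balance fails and a net cycle flux appears:

```python
import numpy as np

eps = 0.1  # noise strength
# barrier heights B[i, j] for the jump i -> j: clockwise barriers (1.0) are
# lower than counter-clockwise ones (2.0), so detailed balance must fail
B = np.array([[np.inf, 1.0,    2.0],
              [2.0,    np.inf, 1.0],
              [1.0,    2.0,    np.inf]])
Q = np.exp(-B / eps)              # Arrhenius-type jump rates exp(-B_ij / eps)
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

# stationary distribution: pi Q = 0, sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
rhs = np.zeros(4)
rhs[-1] = 1.0
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]

# net probability flux around the cycle 0 -> 1 -> 2 -> 0
flux = pi[0] * Q[0, 1] - pi[1] * Q[1, 0]
print(pi, flux)  # pi is uniform by cyclic symmetry, yet the cycle flux is nonzero
```

The stationary state here is a nonequilibrium steady state: every pairwise flux balance pi_i q_ij = pi_j q_ji is violated even though the distribution itself is stationary.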
Order of magnitude time-reversible Markov chains and characterization of clustering processes
We introduce the notion of order-of-magnitude reversibility (OM-reversibility) in Markov chains that are parametrized by a positive parameter ε. OM-reversibility is a weaker condition than reversibility and requires only knowledge of the order of magnitude of the transition probabilities. For an irreducible, OM-reversible Markov chain on a finite state space, we prove that the stationary distribution satisfies order-of-magnitude detailed balance (the analog of detailed balance in reversible Markov chains). The result characterizes the states with positive probability in the limit of the stationary distribution as ε → 0, which finds an important application in the case of singularly perturbed Markov chains that are reducible for ε = 0. We show that OM-reversibility occurs naturally in macroscopic systems involving many interacting particles. Clustering is a common phenomenon in biological systems, in which particles or molecules aggregate at one location. We give a simple condition on the transition probabilities in an interacting-particle Markov chain that characterizes clustering. We show that such clustering processes are OM-reversible, and we find explicitly the order of magnitude of the stationary distribution. Further, we show that the single-pole states, in which all particles are at a single vertex, are the only states with positive probability in the limit of the stationary distribution as the rate of diffusion goes to zero.
Comment: 22 pages, 3 figures
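A minimal sketch of extracting orders of magnitude of a stationary distribution numerically (the chain below is an illustrative birth-death example, not the paper's interacting-particle model): if pi_i(ε) behaves like ε^{k_i}, the exponent k_i can be estimated from pi at two small values of ε.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a stochastic matrix P: pi P = pi, sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

def P_eps(eps):
    # birth-death chain on {0, 1, 2}: each step "up" has probability eps,
    # so the chain is reducible at eps = 0 (state 0 becomes absorbing)
    return np.array([
        [1 - eps, eps,        0.0],
        [0.5,     0.5 - eps,  eps],
        [0.0,     1.0,        0.0],
    ])

e1, e2 = 1e-3, 1e-4
pi_e1 = stationary(P_eps(e1))
pi_e2 = stationary(P_eps(e2))
# estimated exponents k_i in pi_i ~ eps^{k_i}, from a log-log slope
k = np.log(pi_e1 / pi_e2) / np.log(e1 / e2)
print(np.round(k).astype(int))  # expected orders: [0, 1, 2]
```

Only state 0 keeps positive probability as ε → 0 (exponent 0), matching the characterization of limit states in the abstract; the other states vanish at rates ε and ε².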
Control of singularly perturbed hybrid stochastic systems
In this paper, we study a class of optimal stochastic control problems involving two different time scales. The fast mode of the system is represented by deterministic state equations, whereas the slow mode corresponds to a jump disturbance process. Under a fundamental "ergodicity" property for a class of "infinitesimal control systems" associated with the fast mode, we show that there exists a limit problem which provides a good approximation to the optimal control of the perturbed system. Both the finite-horizon and the infinite-horizon discounted cases are considered. We show how an approximate optimal control law can be constructed from the solution of the limit control problem. In the particular case where the infinitesimal control systems possess the so-called turnpike property, i.e., are characterized by the existence of global attractors, the limit control problem can be given an interpretation related to a decomposition approach.