149 research outputs found
Average optimality for continuous-time Markov decision processes in Polish spaces
This paper is devoted to studying the average optimality in continuous-time
Markov decision processes with fairly general state and action spaces. The
criterion to be maximized is the expected average reward. The transition rates of
underlying continuous-time jump Markov processes are allowed to be unbounded,
and the reward rates may have neither upper nor lower bounds. We first provide
two optimality inequalities with opposed directions, and also give suitable
conditions under which the existence of solutions to the two optimality
inequalities is ensured. Then, from the two optimality inequalities we prove
the existence of optimal (deterministic) stationary policies by using the
Dynkin formula. Moreover, we present a ``semimartingale characterization'' of
an optimal stationary policy. Finally, we use a generalized Potlatch process
with control to illustrate the difference between our conditions and those in
the previous literature, and then further apply our results to average optimal
control problems of generalized birth--death systems, upwardly skip-free
processes and two queueing systems. The approach developed in this paper is
slightly different from the ``optimality inequality approach'' widely used in
the previous literature.
Comment: Published at http://dx.doi.org/10.1214/105051606000000105 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
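The notion of an average-optimal (deterministic) stationary policy from the abstract above can be illustrated on a toy model. The sketch below is purely illustrative and not taken from the paper: a hypothetical two-state controlled queue where the action chooses the service rate, and each deterministic stationary policy is scored by its long-run expected average reward.

```python
# Hypothetical two-state controlled queue (states: 0 = empty, 1 = busy).
# The action chooses the service rate mu used in state 1; faster service
# earns the reward sooner but costs more. All rates, rewards, and the
# model itself are illustrative assumptions, not taken from the paper.

ARRIVAL = 1.0                          # transition rate 0 -> 1
ACTIONS = {"slow": 1.5, "fast": 4.0}   # service rate mu: 1 -> 0
REWARD_BUSY = 5.0                      # reward rate while serving
COST = {"slow": 1.0, "fast": 3.0}      # cost rate of the chosen action

def average_reward(mu, cost):
    """Long-run average reward of the stationary policy using rate mu.

    For a two-state CTMC with rates ARRIVAL and mu, the stationary
    distribution is pi1 = ARRIVAL / (ARRIVAL + mu), pi0 = 1 - pi1,
    so the average reward is pi1 * (REWARD_BUSY - cost).
    """
    pi1 = ARRIVAL / (ARRIVAL + mu)
    return pi1 * (REWARD_BUSY - cost)

# Enumerate the deterministic stationary policies and keep the best one.
best = max(ACTIONS, key=lambda a: average_reward(ACTIONS[a], COST[a]))
```

With these made-up numbers the slower, cheaper server wins, because the extra speed of the fast server does not pay for its cost rate; in general the optimal policy depends on the whole parameter set.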
Poisson's equation for discrete-time quasi-birth-and-death processes
We consider Poisson's equation for quasi-birth-and-death processes (QBDs) and
we exploit the special transition structure of QBDs to obtain its solutions in
two different forms. One is based on a decomposition through first passage
times to lower levels, the other is based on a recursive expression for the
deviation matrix.
We revisit the link between a solution of Poisson's equation and perturbation
analysis and we show that it applies to QBDs. We conclude with the PH/M/1 queue
as an illustrative example, and we measure the sensitivity of the expected
queue size to the initial value.
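For a finite chain, a solution of Poisson's equation can be computed directly, which makes the object in the abstract above concrete. The sketch below (the chain, its transition probabilities, and the reward vector are illustrative assumptions, not the paper's QBD setting) solves (I - P)u = g - eta*1 via the standard fundamental-matrix construction.

```python
import numpy as np

# A small discrete-time birth-death chain standing in for a QBD
# (sizes and probabilities are illustrative assumptions).
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])
g = np.array([0.0, 1.0, 2.0])           # per-step reward

# Stationary distribution: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
eta = pi @ g                            # long-run average of g

# Fundamental-matrix solution of Poisson's equation (I - P)u = g - eta*1:
# u = (I - P + Pi)^{-1} (g - eta*1), where Pi has every row equal to pi.
Pi = np.outer(np.ones(3), pi)
u = np.linalg.solve(np.eye(3) - P + Pi, g - eta)

# The residual of Poisson's equation vanishes (u is unique up to an
# additive constant; this particular solution satisfies pi @ u = 0).
assert np.allclose((np.eye(3) - P) @ u, g - eta)
```

The two solution forms discussed in the abstract (first-passage decomposition and the recursive deviation-matrix expression) exploit the QBD block structure instead of a dense solve like this one, which is what makes them viable on infinite level spaces.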
Ergodicity for the -type Markov Chain
Ergodicity is a fundamental issue for a stochastic process. In this paper, we
refine results on ergodicity for a general type of Markov chain to a specific
type, the -type Markov chain, which has many interesting and
important applications in various areas. It is of interest to obtain conditions
in terms of system parameters or the given information about the process, under
which the chain has various ergodic properties. Specifically, we provide
necessary and sufficient conditions for geometric, strong and polynomial
ergodicity, respectively.
Comment: 16 pages.
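On a finite state space the geometric ergodicity discussed above can be checked numerically, which gives a feel for the property. The sketch below is an illustrative assumption (a toy 3-state chain, not the structured chain of the paper): for an irreducible aperiodic finite chain, convergence to stationarity is geometric at a rate governed by the second-largest eigenvalue modulus (SLEM).

```python
import numpy as np

# Toy irreducible, aperiodic chain (entries are illustrative assumptions).
# Such a finite chain is always geometrically ergodic; the rate is the
# second-largest eigenvalue modulus (SLEM) of P.
P = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.7, 0.3]])

mods = sorted(abs(np.linalg.eigvals(P)), reverse=True)
slem = mods[1]                 # strictly below 1 for this chain

def tv(p, q):
    """Total-variation distance between two distributions."""
    return 0.5 * np.abs(p - q).sum()

# Distributions started in states 0 and 2 approach each other (and the
# stationary distribution) geometrically fast.
d10 = tv(np.linalg.matrix_power(P, 10)[0], np.linalg.matrix_power(P, 10)[2])
d20 = tv(np.linalg.matrix_power(P, 20)[0], np.linalg.matrix_power(P, 20)[2])
```

The interesting cases in the paper are infinite chains, where geometric ergodicity is not automatic and must be read off from system parameters, e.g. via drift conditions; the finite example only illustrates what the property asserts.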
Quasi-stationary distributions
This paper contains a survey of results related to quasi-stationary distributions, which arise in the setting of stochastic dynamical systems that eventually evanesce, and which may be useful in describing the long-term behaviour of such systems before evanescence. We are concerned mainly with continuous-time Markov chains over a finite or countably infinite state space, since these processes most often arise in applications, but will make reference to results for other processes where appropriate. In addition to giving a historical account of the subject, we review the most important results on the existence and identification of quasi-stationary distributions for general Markov chains, and give special attention to birth-death processes and related models. Results on the question of whether a quasi-stationary distribution, given its existence, is indeed a good descriptor of the long-term behaviour of a system before evanescence are reviewed as well. The paper is concluded with a summary of recent developments in numerical and approximation methods.
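For a finite absorbing chain, a quasi-stationary distribution can be computed explicitly, which makes the object surveyed above concrete: it is a normalized left Perron eigenvector of the transition matrix restricted to the transient states. The sketch below uses an illustrative absorbing birth-death chain (all numbers are assumptions for the example).

```python
import numpy as np

# Illustrative birth-death chain on {0,1,2,3}, absorbing at state 0.
# Q is the transition matrix restricted to the transient states {1,2,3};
# row 1 is deficient because state 1 is absorbed with probability 0.2.
Q = np.array([[0.4, 0.4, 0.0],
              [0.3, 0.3, 0.4],
              [0.0, 0.5, 0.5]])

# A QSD is a normalized left eigenvector of Q for its Perron eigenvalue.
evals, evecs = np.linalg.eig(Q.T)
k = np.argmax(np.real(evals))
rho = np.real(evals[k])        # per-step survival probability under the QSD
qsd = np.real(evecs[:, k])
qsd /= qsd.sum()

# Started from the QSD, the law conditioned on survival does not move:
# qsd @ Q = rho * qsd.
assert np.allclose(qsd @ Q, rho * qsd)
```

On countably infinite state spaces, existence and uniqueness of such eigenmeasures is exactly where the survey's subtleties live; the finite computation above is only the simplest instance.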
- …