Many Markov chains with a single absorbing state have a unique limiting conditional distribution (LCD) to which they converge, conditioned on non-absorption, regardless of the initial distribution. If this limiting conditional distribution is used as the initial distribution over the non-absorbing states, then the probability distribution of the process at time n, conditioned on non-absorption, is the same for all n > 0. Such an initial distribution is known as the quasi-stationary distribution (QSD); thus the LCD and the QSD are equal. These distributions exist in both the discrete-time and the continuous-time case.

In this thesis we consider finite Markov chains which have one absorbing state and for which the remaining states form a single communicating class; in addition, every state is aperiodic. These conditions ensure the existence of a unique LCD. We first consider continuous-time Markov chains in the context of survival analysis. We study the hazard rate, a function which measures the risk of instantaneous failure of a system at time t, conditioned on the system not having failed before t. It is well known that the QSD leads to a constant hazard rate, and that the hazard rate generated by any other initial distribution tends to that constant rate. Aalen, and Aalen and Gjessing, have claimed that it may be possible to predict the shape of the hazard rate of a phase-type distribution (the first-passage-time distribution generated by an atomic initial distribution) by comparing that initial distribution with the QSD. In Chapter 2 we examine these claims and demonstrate through several examples that the behaviour addressed by those conjectures is more complex than previously believed.

In Chapters 3 and 4 we consider discrete-time Markov chains in the context of imprecise probability. In many situations it may be unrealistic to assume that the transition matrix of a Markov chain can be determined exactly.
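The constant-hazard property of the QSD can be illustrated numerically. The following is a minimal sketch, not taken from the thesis: it uses a hypothetical three-state continuous-time chain (two transient states plus one absorbing state), computes the QSD as the dominant left eigenvector of the subgenerator restricted to the transient states, and checks that the hazard rate started from the QSD equals the asymptotic absorption rate. The matrix `Q` and all variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical subgenerator Q for the two transient states of a CTMC with
# one absorbing state; each row sums to minus that state's absorption rate.
Q = np.array([[-3.0, 1.0],
              [2.0, -3.0]])
absorption = -Q.sum(axis=1)      # rates into the absorbing state: [2, 1]

# The QSD is the left eigenvector of Q for the eigenvalue of largest real
# part (Perron-Frobenius applied to the sub-Markov semigroup).
eigvals, eigvecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)
theta = -eigvals[k].real         # asymptotic rate of absorption
nu = np.abs(eigvecs[:, k].real)
nu /= nu.sum()                   # normalise to a probability distribution

# Started from the QSD, the hazard rate is constant and equal to theta.
hazard_at_qsd = nu @ absorption
print(nu, theta, hazard_at_qsd)
```

For any other initial distribution the hazard rate varies with t but converges to the same constant `theta`, which is the behaviour underlying the conjectures examined in Chapter 2.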
It may be more plausible to determine upper and lower bounds on each element, or even closed sets of probability distributions to which the rows of the matrix may belong. Such methods have been discussed by Kozine and Utkin and by Skulj, and in each of these papers results were given concerning the long-term behaviour of such processes; none of them considered Markov chains with an absorbing state. In Chapter 3 we demonstrate that, under the assumption that the transition matrix cannot change from time step to time step, there exist imprecise generalisations of both the LCD and the QSD, and that these two generalisations are equal. In Chapter 4 we prove that this result holds even when the transition matrix is allowed to change from time step to time step. In each chapter, examples are presented demonstrating the convergence of such processes, and Chapter 4 includes a comparison between the two methods.
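The interval-bound model of an imprecise transition row can be sketched concretely. The function below is an illustrative assumption, not the thesis's method: it computes the upper expectation of a gamble f over one row of a transition matrix whose entries are only known to lie between given lower and upper bounds (summing to 1), by a standard greedy allocation over the interval credal set. The bounds `lo`, `hi` and the gamble `f` are made-up examples.

```python
import numpy as np

def upper_row_expectation(f, lo, hi):
    """Maximise p @ f over rows p with lo <= p <= hi and sum(p) == 1.
    Greedy: start every entry at its lower bound, then spend the
    remaining probability mass on the largest payoffs first."""
    p = lo.astype(float).copy()
    slack = 1.0 - p.sum()
    for j in np.argsort(-f):              # indices by decreasing payoff
        add = min(hi[j] - p[j], slack)
        p[j] += add
        slack -= add
    return p @ f, p

# Hypothetical bounds on one row of an imprecise transition matrix.
lo = np.array([0.1, 0.2, 0.3])
hi = np.array([0.5, 0.5, 0.6])
f  = np.array([1.0, 0.0, 2.0])            # gamble on the next state
val, p = upper_row_expectation(f, lo, hi)
print(val, p)                             # val = 1.4, p = [0.2, 0.2, 0.6]
```

Iterating such row-wise upper (and the analogous lower) expectations is how the long-term behaviour of interval-valued chains is typically analysed; the imprecise LCD and QSD of Chapters 3 and 4 generalise the precise notions to exactly this kind of set-valued model.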