Nearly reducible finite Markov chains: theory and algorithms

Abstract

Finite Markov chains are probabilistic network models that are commonly used as representations of dynamical processes in the physical sciences, biological sciences, economics, and elsewhere. Markov chains that appear in realistic modelling tasks are frequently observed to be nearly reducible, incorporating a mixture of fast and slow processes that leads to ill-conditioning of the underlying matrix of probabilities for transitions between states. Hence, the wealth of established theoretical results that makes Markov chains attractive and convenient models often cannot be used straightforwardly in practice, owing to numerical instability associated with the standard computational procedures to evaluate the expressions.

This work is concerned with the development of theory, algorithms, and simulation methods for the efficient and numerically stable analysis of finite Markov chains, with a primary focus on exact approaches that are robust and therefore applicable to nearly reducible networks. New methodologies are presented to determine representative paths, identify the dominant transition mechanisms for a particular process of interest, and analyse the local states that have a strong influence on the characteristics of the global dynamics. The novel approaches yield new insights into the behaviour of Markovian networks, addressing and overcoming numerical challenges.

The methodology is applied to example models that are relevant to current problems in chemical physics, including Markov chains representing a protein folding transition, and a configurational transition in an atomic cluster. Relevant classical theory of finite Markov chains and a description of existing robust algorithms for their numerical analysis are given in Chapter 1. The remainder of this thesis considers the problem of investigating a transition from an initial set of states in a Markovian network to an absorbing (target) macrostate.
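The ill-conditioning associated with near-reducibility can be seen in a minimal sketch. The three-state chain below is illustrative only (it is not one of the thesis examples): two transient states exchange rapidly, while escape to the absorbing target occurs with a small probability eps, so the linear system defining the mean first-passage times becomes nearly singular as eps shrinks.

```python
import numpy as np

# Illustrative 3-state chain: states 0 and 1 are transient, state 2 is
# the absorbing target. Exchange within the metastable pair {0, 1} is
# fast, while escape to the target is slow (probability eps per step),
# which is the hallmark of a nearly reducible chain.
eps = 1e-12
P = np.array([[0.5,       0.5, 0.0],
              [0.5 - eps, 0.5, eps],
              [0.0,       0.0, 1.0]])

# Mean first-passage times to the target solve (I - Q) t = 1, where Q is
# the transient sub-matrix. As eps -> 0 the system becomes ill-conditioned:
# cond(I - Q) grows like 1/eps.
Q = P[:2, :2]
A = np.eye(2) - Q
print(np.linalg.cond(A))            # of order 1/eps: nearly singular
t = np.linalg.solve(A, np.ones(2))  # exact answer is t[0] = 2 / eps
```

A naive linear solve loses roughly as many digits of accuracy as the condition number has orders of magnitude, which is why the exact state reduction algorithms developed in this thesis avoid such direct solves.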
A formal approach to determine a finite set of representative transition paths is proposed in Chapter 2, based on exact pathwise decomposition of the total productive flux. This analysis allows for the importance of competing dynamical processes to be rigorously quantified. A robust state reduction algorithm to compute the expectation of any path property for a transition between two endpoint states is also described in Chapter 2.

Chapter 3 reports further numerically stable state reduction algorithms to compute quantities that characterize the features of a transition at a statewise level of detail, allowing for identification of the local states that play a key role in modulating the slow dynamics. An expression is derived for the probability that a state is visited on a path that proceeds directly to the absorbing state without revisiting the initial state, which characterizes the dynamical relevance of an individual state to the overall transition process.

In Chapter 4, an unsupervised strategy is proposed to utilize a highly efficient simulation algorithm for sampling paths on a Markov chain. The framework employs a scalable community detection algorithm to obtain an initial clustering of the network into metastable sets of states, which is subsequently refined by a variational optimization procedure. The optimized clustering is then used as the basis for simulating trajectory segments that necessarily escape from the metastable macrostates.

The thesis is concluded with an overview of recent related advances that are beyond the scope of the current work (Chapter 5), and a discussion of potential applications where the novel methodology reported herein may be applied to perform insightful analyses that were previously intractable.

Funding: Cambridge Commonwealth, European and International Trust; Engineering and Physical Sciences Research Council
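State reduction methods of the kind described above are in the spirit of graph transformation: states are eliminated one at a time while the branching probabilities and mean waiting times of the remaining states are renormalized so that quantities such as the mean first-passage time are preserved exactly. The following is a minimal sketch of a single elimination step under a discrete-time, unit-waiting-time convention; the function name and the form shown here are illustrative, not the thesis's own implementation.

```python
import numpy as np

def eliminate_state(P, tau, x):
    """Remove state x from a discrete-time chain, renormalizing the
    branching probabilities and mean waiting times of the remaining
    states so that first-passage statistics are preserved.

    P   -- (n, n) transition probability matrix
    tau -- (n,) mean waiting times of the states
    x   -- index of the state to eliminate
    """
    n = P.shape[0]
    keep = [i for i in range(n) if i != x]
    denom = 1.0 - P[x, x]  # probability of leaving x per visit
    # Reroute all i -> x -> j pathways (with any number of x self-loops,
    # summed as the geometric series 1 / (1 - P[x, x])) directly to i -> j.
    P_new = P[np.ix_(keep, keep)] + np.outer(P[keep, x], P[x, keep]) / denom
    # Each remaining state inherits the expected time spent in x per visit.
    tau_new = tau[keep] + P[keep, x] * tau[x] / denom
    return P_new, tau_new
```

Iterating this step over all intermediate states leaves a chain containing only the endpoint states, from which the mean first-passage time can be read off as the renormalized waiting time of the source divided by its escape probability; because each step involves only additions, multiplications, and division by a probability of leaving a state, the procedure avoids the subtractive cancellation that destabilizes direct linear solves on nearly reducible chains.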
