Eigenvalue Bounds on Restrictions of Reversible Nearly Uncoupled Markov Chains
Abstract: In this paper we analyze decompositions of reversible nearly uncoupled Markov chains into rapidly mixing subchains. We state upper bounds on the second eigenvalue for restriction and stochastic complementation chains of reversible Markov chains, as well as a relation between them. We illustrate the obtained bounds analytically for bunkbed graphs, and furthermore apply them to restricted Markov chains that arise when analyzing the conformation dynamics of a small biomolecule.
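The two objects compared in the abstract can be made concrete on a toy chain. The sketch below (an illustration under assumed numbers, not an example from the paper) builds a small reversible, nearly uncoupled transition matrix, forms both the restriction chain (renormalized rows of a block) and the stochastic complement of that block, and reads off their second eigenvalues:

```python
import numpy as np

# Hypothetical 4-state reversible chain: blocks {0,1} and {2,3}
# are coupled only through a small epsilon (symmetric => reversible
# with respect to the uniform distribution).
eps = 0.01
P = np.array([
    [0.70 - eps, 0.30, eps, 0.0],
    [0.30, 0.70 - eps, 0.0, eps],
    [eps, 0.0, 0.60 - eps, 0.40],
    [0.0, eps, 0.40, 0.60 - eps],
])
assert np.allclose(P.sum(axis=1), 1.0)

A = [0, 1]          # block of interest
B = [2, 3]          # remaining states

# Restriction chain: renormalize the rows of the A-block.
P_AA = P[np.ix_(A, A)]
R = P_AA / P_AA.sum(axis=1, keepdims=True)

# Stochastic complement: S = P_AA + P_AB (I - P_BB)^{-1} P_BA.
P_AB, P_BA, P_BB = P[np.ix_(A, B)], P[np.ix_(B, A)], P[np.ix_(B, B)]
S = P_AA + P_AB @ np.linalg.solve(np.eye(len(B)) - P_BB, P_BA)

def second_eigenvalue(M):
    """Second-largest eigenvalue modulus of a stochastic matrix."""
    return np.sort(np.abs(np.linalg.eigvals(M)))[::-1][1]

print("lambda_2 restriction:", second_eigenvalue(R))
print("lambda_2 complement: ", second_eigenvalue(S))
```

Both coarse chains are stochastic, and the gap between their second eigenvalues is exactly the kind of quantity the paper's bounds relate.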
Stochastic Data Clustering
In 1961 Herbert Simon and Albert Ando published the theory behind the long-term behavior of a dynamical system that can be described by a nearly uncoupled matrix. Over the past fifty years this theory has been used in a variety of contexts, including queueing theory, brain organization, and ecology. In all these applications, the structure of the system is known, and the point of interest is the various stages the system passes through on its way to some long-term equilibrium.
This paper looks at the problem from the other direction. That is, we develop a technique for using the evolution of the system to tell us about its initial structure, and we use this technique to develop a new algorithm for data clustering.
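The Simon-Ando picture behind this abstract is easy to see numerically: in a nearly uncoupled chain, states within a block equilibrate long before the blocks exchange mass, so rows of the evolved matrix reveal the hidden structure. The sketch below (an illustrative stand-in with assumed numbers and a deliberately crude clustering criterion, not the paper's algorithm) recovers the two blocks from the evolution alone:

```python
import numpy as np

# Hypothetical nearly uncoupled chain: blocks {0,1} and {2,3}
# coupled only through a small epsilon.
eps = 1e-3
P = np.array([
    [0.5 - eps, 0.5, eps, 0.0],
    [0.5, 0.5 - eps, 0.0, eps],
    [eps, 0.0, 0.5 - eps, 0.5],
    [0.0, eps, 0.5, 0.5 - eps],
])

# Evolve to an intermediate time: rows of states in the same block
# become nearly identical long before global equilibrium is reached.
Pt = np.linalg.matrix_power(P, 20)

def cluster(rows, tol=1e-2):
    """Group rows whose evolved distributions already agree."""
    labels = [-1] * len(rows)
    next_label = 0
    for i, r in enumerate(rows):
        if labels[i] == -1:
            labels[i] = next_label
            for j in range(i + 1, len(rows)):
                if labels[j] == -1 and np.abs(r - rows[j]).max() < tol:
                    labels[j] = next_label
            next_label += 1
    return labels

print(cluster(Pt))   # → [0, 0, 1, 1]
```

After 20 steps the within-block dynamics have equilibrated while only about 2% of the mass has leaked between blocks, so the evolved rows cleanly separate {0,1} from {2,3}.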
Nearly reducible finite Markov chains: theory and algorithms
Finite Markov chains are probabilistic network models that are commonly used as representations of dynamical processes in the physical sciences, biological sciences, economics, and elsewhere. Markov chains that appear in realistic modelling tasks are frequently observed to be nearly reducible, incorporating a mixture of fast and slow processes that leads to ill-conditioning of the underlying matrix of probabilities for transitions between states. Hence, the wealth of established theoretical results that makes Markov chains attractive and convenient models often cannot be used straightforwardly in practice, owing to numerical instability associated with the standard computational procedures to evaluate the expressions. This work is concerned with the development of theory, algorithms, and simulation methods for the efficient and numerically stable analysis of finite Markov chains, with a primary focus on exact approaches that are robust and therefore applicable to nearly reducible networks. New methodologies are presented to determine representative paths, identify the dominant transition mechanisms for a particular process of interest, and analyze the local states that have a strong influence on the characteristics of the global dynamics. The novel approaches yield new insights into the behaviour of Markovian networks, addressing and overcoming numerical challenges. The methodology is applied to example models that are relevant to current problems in chemical physics, including Markov chains representing a protein folding transition, and a configurational transition in an atomic cluster.
Relevant classical theory of finite Markov chains and a description of existing robust algorithms for their numerical analysis are given in Chapter 1. The remainder of this thesis considers the problem of investigating a transition from an initial set of states in a Markovian network to an absorbing (target) macrostate.
A formal approach to determine a finite set of representative transition paths is proposed in Chapter 2, based on exact pathwise decomposition of the total productive flux. This analysis allows for the importance of competing dynamical processes to be rigorously quantified. A robust state reduction algorithm to compute the expectation of any path property for a transition between two endpoint states is also described in Chapter 2.
Chapter 3 reports further numerically stable state reduction algorithms to compute quantities that characterize the features of a transition at a statewise level of detail, allowing for identification of the local states that play a key role in modulating the slow dynamics. An expression is derived for the probability that a state is visited on a path that proceeds directly to the absorbing state without revisiting the initial state, which characterizes the dynamical relevance of an individual state to the overall transition process.
In Chapter 4, an unsupervised strategy is proposed to utilize a highly efficient simulation algorithm for sampling paths on a Markov chain. The framework employs a scalable community detection algorithm to obtain an initial clustering of the network into metastable sets of states, which is subsequently refined by a variational optimization procedure. The optimized clustering is then used as the basis for simulating trajectory segments that necessarily escape from the metastable macrostates.
The thesis is concluded with an overview of recent related advances that are beyond the scope of the current work (Chapter 5), and a discussion of potential applications where the novel methodology reported herein may be applied to perform insightful analyses that were previously intractable.
Cambridge Commonwealth, European and International Trust
Engineering and Physical Sciences Research Council
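The state reduction idea that runs through this thesis can be sketched on a toy discrete-time chain: intermediate states are eliminated one at a time, renormalizing the surviving transition probabilities and accumulating waiting times, so a mean first-passage time emerges without solving a potentially ill-conditioned linear system. The code below is a minimal illustration of this style of reduction with assumed numbers, not the thesis's implementation:

```python
import numpy as np

def gt_mfpt(P, source, target):
    """Mean first-passage time source -> target by iterative state
    elimination (a sketch in the spirit of graph transformation)."""
    P = P.astype(float).copy()
    n = P.shape[0]
    tau = np.ones(n)                       # one time unit per jump
    for x in range(n):
        if x in (source, target):
            continue
        fac = 1.0 / (1.0 - P[x, x])        # valid while P[x, x] < 1
        for i in range(n):
            if i == x:
                continue
            tau[i] += P[i, x] * tau[x] * fac
            for j in range(n):
                if j != x:
                    P[i, j] += P[i, x] * P[x, j] * fac
        P[:, x] = 0.0                      # state x is now eliminated
        P[x, :] = 0.0
    # Only source and target remain: the source is revisited a
    # geometric number of times, each visit costing tau[source].
    return tau[source] / (1.0 - P[source, source])

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
print(gt_mfpt(P, 0, 2))   # mean number of steps from state 0 to 2
```

For this 3-state example the result agrees with the direct linear-solve answer (5 steps), but the elimination route avoids subtracting nearly equal quantities, which is what makes this family of methods robust on nearly reducible chains.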
Bounding the Equilibrium Distribution of Markov Population Models
Arguing about the equilibrium distribution of continuous-time Markov chains can be vital for showing properties of the underlying systems. For example, in biological systems, bistability of a chemical reaction network can hint at its function as a biological switch. Unfortunately, the state space of these systems is infinite in most cases, preventing the use of traditional steady-state solution techniques. In this paper we develop a new approach to tackle this problem by first retrieving geometric bounds enclosing a major part of the steady-state probability mass, followed by a more detailed analysis revealing state-wise bounds.
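To make the difficulty concrete: even the simplest population model has a countably infinite state space, and any computation must first confine the probability mass to a finite region. The sketch below (a crude truncation of a hypothetical birth-death model, not the paper's geometric-bound method) solves a finite restriction and checks it against the known Poisson equilibrium:

```python
import numpy as np
from math import exp, factorial

# Hypothetical birth-death population model: production at rate lam,
# degradation at rate mu * n.  Its exact equilibrium is Poisson(lam/mu).
lam, mu, N = 2.0, 1.0, 30       # truncate the infinite state space at N

Q = np.zeros((N + 1, N + 1))    # CTMC generator on the truncated space
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam       # birth
    if n > 0:
        Q[n, n - 1] = mu * n    # death
    Q[n, n] = -Q[n].sum()

# Stationary distribution: pi Q = 0 with sum(pi) = 1, solved in
# least-squares form.
A = np.vstack([Q.T, np.ones(N + 1)])
b = np.zeros(N + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

exact = np.array([exp(-lam / mu) * (lam / mu) ** n / factorial(n)
                  for n in range(N + 1)])
print(np.abs(pi - exact).max())   # truncation error is negligible here
```

Here the mass beyond the cutoff is astronomically small, so naive truncation works; the point of the paper is precisely to certify such a region with rigorous bounds rather than choose it by guesswork.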
The Markov chain tree theorem and the state reduction algorithm in commutative semirings
We extend the Markov chain tree theorem to general commutative semirings, and we generalize the state reduction algorithm to commutative semifields. This leads to a new universal algorithm, whose prototype is the state reduction algorithm which computes the Markov chain tree vector of a stochastic matrix.
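The prototype algorithm mentioned here is, in the classical (real-number) semiring, the GTH state reduction scheme, and the tree theorem can be checked directly against it on a small example. The sketch below (with an assumed 3-state matrix) computes the stationary vector by state reduction and compares it with the spanning-tree characterization, i.e. the principal minors of I - P:

```python
import numpy as np

def gth(P):
    """Stationary distribution by GTH-style state reduction: censor
    states one at a time; subtraction-free, hence numerically stable."""
    P = P.astype(float).copy()
    n = P.shape[0]
    for k in range(n - 1, 0, -1):          # censor states n-1, ..., 1
        s = P[k, :k].sum()
        P[:k, k] /= s
        P[:k, :k] += np.outer(P[:k, k], P[k, :k])
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):                  # back-substitution
        pi[k] = pi[:k] @ P[:k, k]
    return pi / pi.sum()

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

# Markov chain tree theorem: pi_i is proportional to the total weight
# of spanning trees rooted at i, i.e. to the principal minor of I - P
# obtained by deleting row and column i.
minors = np.array([
    np.linalg.det(np.delete(np.delete(np.eye(3) - P, i, 0), i, 1))
    for i in range(3)
])
print(np.allclose(gth(P), minors / minors.sum()))   # → True
```

For this matrix both routes give pi = (6, 16, 7)/29; the paper's point is that the reduction recursion makes sense whenever the arithmetic lives in a commutative semifield, not just over the reals.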
On Markov State Models for Metastable Processes
We consider Markov processes on large state spaces and want to find low-dimensional, structure-preserving approximations of the process, in the sense that the longest timescales of the dynamics of the original process are reproduced well. Recent years have seen the advance of so-called Markov state models (MSMs) for processes on very large state spaces exhibiting metastable dynamics. It has been demonstrated that MSMs are especially useful for modelling the interesting slow dynamics of biomolecules (cf. Noe et al., PNAS (106) 2009) and materials. From the mathematical perspective, MSMs result from Galerkin projection of the transfer operator underlying the original process onto some low-dimensional subspace, which leads to an approximation of the dominant eigenvalues of the transfer operator and thus of the longest timescales of the original dynamics. Until now, most articles on MSMs have been based on full subdivisions of state space, i.e., Galerkin projections onto subspaces spanned by indicator functions. We show how to generalize MSMs to alternative low-dimensional subspaces with superior approximation properties, and how to analyse the approximation quality (dominant eigenvalues, propagation of functions) of the resulting MSMs. To this end, we give an overview of the construction of MSMs, the associated stochastic and functional-analytic background, and the algorithmic consequences. Furthermore, we illustrate the mathematical construction with numerical examples.
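The baseline construction this abstract generalizes, Galerkin projection onto indicator functions, is just stationary-weighted coarse-graining of the transition matrix. The sketch below (an assumed 4-state metastable chain, not an example from the paper) builds the 2-state MSM and checks that it reproduces the slow eigenvalue of the full chain:

```python
import numpy as np

# Hypothetical 4-state metastable chain with blocks {0, 1} and {2, 3}.
eps = 0.02
P = np.array([
    [0.6 - eps, 0.4, eps, 0.0],
    [0.4, 0.6 - eps, 0.0, eps],
    [eps, 0.0, 0.6 - eps, 0.4],
    [0.0, eps, 0.4, 0.6 - eps],
])
pi = np.full(4, 0.25)       # P is symmetric, so uniform is stationary

# Galerkin projection onto the indicator functions of the two blocks:
# Pc[I, J] = sum_{i in I} pi_i * P(i -> J) / pi(I).
blocks = [[0, 1], [2, 3]]
Pc = np.zeros((2, 2))
for I, bi in enumerate(blocks):
    wi = pi[bi] / pi[bi].sum()
    for J, bj in enumerate(blocks):
        Pc[I, J] = wi @ P[np.ix_(bi, bj)].sum(axis=1)

# Compare dominant spectra: the 2x2 MSM should capture the slow
# eigenvalue, and hence the longest implied timescale.
lam_full = np.sort(np.linalg.eigvalsh(P))[::-1]
lam_msm = np.sort(np.linalg.eigvalsh(Pc))[::-1]
print(lam_full[1], lam_msm[1])
```

In this symmetric toy example the slow eigenvector is exactly constant on the blocks, so the indicator-function projection reproduces the second eigenvalue exactly; the paper's richer subspaces are aimed at the realistic case where it is only approximately constant.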