Switching and diffusion models for gene regulation networks
We analyze a hierarchy of three regimes for modeling gene regulation. The most complete model is a continuous time, discrete state space, Markov jump process. An intermediate 'switch plus diffusion' model takes the form of a stochastic differential equation driven by an independent continuous time Markov switch. In the third 'switch plus ODE' model the switch remains but the diffusion is removed. The latter two models allow for multi-scale simulation where, for the sake of computational efficiency, system components are treated differently according to their abundance. The 'switch plus ODE' regime was proposed by Paszek (Modeling stochasticity in gene regulation: characterization in the terms of the underlying distribution function, Bulletin of Mathematical Biology, 2007), who analyzed the steady state behavior, showing that the mean was preserved but the variance only approximated that of the full model. Here, we show that the tools of stochastic calculus can be used to analyze first and second moments for all time. A technical issue to be addressed is that the state space for the discrete-valued switch is infinite. We show that the new 'switch plus diffusion' regime preserves the biologically relevant measures of mean and variance, whereas the 'switch plus ODE' model uniformly underestimates the variance in the protein level. We also show that, for biologically relevant parameters, the transient behavior can differ significantly from the steady state, justifying our time-dependent analysis. Extra computational results are also given for a protein dimerization model that is beyond the scope of the current analysis.
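The full Markov jump process regime described above can be simulated exactly with Gillespie's stochastic simulation algorithm. The sketch below is a minimal, self-contained example for a two-state (on/off) gene with protein production and first-order decay; the model structure and rate constants are illustrative choices for exposition, not values taken from the paper.

```python
import random

def gillespie_gene_expression(k_on=0.1, k_off=0.1, k_tx=10.0, k_deg=1.0,
                              t_end=50.0, seed=0):
    """Exact simulation of a toy gene-regulation Markov jump process.

    States: gene on/off plus a protein copy number. All rates are
    illustrative; the paper's models are richer than this sketch.
    """
    rng = random.Random(seed)
    t, gene_on, protein = 0.0, 0, 0
    while t < t_end:
        rates = [
            k_off if gene_on else k_on,   # gene toggles between on and off
            k_tx if gene_on else 0.0,     # protein produced while gene is on
            k_deg * protein,              # first-order protein degradation
        ]
        total = sum(rates)
        t += rng.expovariate(total)       # exponential waiting time
        u = rng.random() * total          # pick the next reaction
        if u < rates[0]:
            gene_on = 1 - gene_on
        elif u < rates[0] + rates[1]:
            protein += 1
        else:
            protein -= 1
    return protein

protein_count = gillespie_gene_expression()
```

The 'switch plus diffusion' and 'switch plus ODE' regimes would replace the discrete protein updates with an SDE or ODE driven by the same on/off switch.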
Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring a large integration step to impute over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multitype branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for hematopoiesis and transposable element evolution.
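For context, the classical baseline the abstract contrasts against can be illustrated on a small finite CTMC. The sketch below computes P(t) = exp(Qt) by uniformization, a standard stable variant of matrix exponentiation; it is exactly this kind of dense whole-matrix computation that becomes infeasible for large or countably infinite state spaces. The rate matrix in the usage example is an arbitrary illustrative three-state chain, unrelated to the paper's branching models.

```python
import numpy as np

def ctmc_transition_probs(Q, t, n_terms=200):
    """Transition matrix P(t) = exp(Qt) for a finite CTMC via uniformization.

    Q is a rate matrix (rows sum to zero, off-diagonals nonnegative).
    P(t) = sum_k exp(-qt) (qt)^k / k! * A^k with A = I + Q/q.
    """
    q = max(-np.diag(Q))                     # uniformization rate
    A = np.eye(len(Q)) + Q / q               # stochastic jump matrix
    P = np.zeros_like(A)
    term = np.exp(-q * t) * np.eye(len(Q))   # k = 0 term of the Poisson sum
    for k in range(n_terms):
        P += term
        term = term @ A * (q * t) / (k + 1)  # advance to the k+1 term
    return P

# Illustrative three-state rate matrix (not from the paper).
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 1.0, -1.0]])
P = ctmc_transition_probs(Q, 1.0)            # rows of P sum to 1
```

The generating function and compressed sensing machinery in the paper avoids forming any such dense matrix, recovering only the (assumed sparse) significant transition probabilities.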
An epidemic model for an evolving pathogen with strain-dependent immunity
Between pandemics, the influenza virus exhibits periods of incremental evolution via a process known as antigenic drift. This process gives rise to a sequence of strains of the pathogen that are continuously replaced by newer strains, preventing a build-up of immunity in the host population. In this paper, a parsimonious epidemic model is defined that attempts to capture the dynamics of evolving strains within a host population. The 'evolving strains' epidemic model has many properties that lie in between the Susceptible-Infected-Susceptible and the Susceptible-Infected-Removed epidemic models, due to the fact that individuals can only be infected by each strain once, but remain susceptible to reinfection by newly emerged strains. Coupling results are used to identify key properties, such as the time to extinction. A range of reproduction numbers are explored to characterise the model, including a novel quasi-stationary reproduction number that can be used to describe the re-emergence of the pathogen into a population with 'average' levels of strain immunity, analogous to the beginning of the winter peak in influenza. Finally, the quasi-stationary distribution of the evolving strains model is explored via simulation.
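The verbal description can be made concrete with a toy event-driven simulation, sketched below under simplifying assumptions of our own: every current infective transmits the newest strain, a new strain emerges at a constant rate mu, and each individual is susceptible only to strains strictly newer than the last one that infected them. The parameters are illustrative and the model is far cruder than the paper's.

```python
import random

def evolving_strains_sim(n=200, beta=2.0, gamma=1.0, mu=0.5,
                         t_end=20.0, seed=1):
    """Toy Gillespie simulation in the spirit of the evolving strains model.

    Returns (number of strain-emergence events, final number infected).
    All parameters are illustrative assumptions, not from the paper.
    """
    rng = random.Random(seed)
    strain = 0
    immune_to = [-1] * n           # newest strain each individual has had
    infected = {0}
    immune_to[0] = 0
    t = 0.0
    while t < t_end and infected:
        susceptibles = [i for i in range(n)
                        if i not in infected and immune_to[i] < strain]
        r_inf = beta * len(infected) * len(susceptibles) / n
        r_rec = gamma * len(infected)
        total = r_inf + r_rec + mu
        t += rng.expovariate(total)
        u = rng.random() * total
        if u < r_inf:                              # infection with newest strain
            j = rng.choice(susceptibles)
            infected.add(j)
            immune_to[j] = strain
        elif u < r_inf + r_rec:                    # recovery
            infected.discard(rng.choice(list(infected)))
        else:                                      # antigenic drift event
            strain += 1
    return strain, len(infected)
```

Because drift events reset susceptibility, recovered individuals re-enter the susceptible pool, giving the in-between SIS/SIR behaviour the abstract describes.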
Nearly reducible finite Markov chains: theory and algorithms
Finite Markov chains are probabilistic network models that are commonly used as representations of dynamical processes in the physical sciences, biological sciences, economics, and elsewhere. Markov chains that appear in realistic modelling tasks are frequently observed to be nearly reducible, incorporating a mixture of fast and slow processes that leads to ill-conditioning of the underlying matrix of probabilities for transitions between states. Hence, the wealth of established theoretical results that makes Markov chains attractive and convenient models often cannot be used straightforwardly in practice, owing to numerical instability associated with the standard computational procedures to evaluate the expressions. This work is concerned with the development of theory, algorithms, and simulation methods for the efficient and numerically stable analysis of finite Markov chains, with a primary focus on exact approaches that are robust and therefore applicable to nearly reducible networks. New methodologies are presented to determine representative paths, identify the dominant transition mechanisms for a particular process of interest, and analyze the local states that have a strong influence on the characteristics of the global dynamics. The novel approaches yield new insights into the behaviour of Markovian networks, addressing and overcoming numerical challenges. The methodology is applied to example models that are relevant to current problems in chemical physics, including Markov chains representing a protein folding transition, and a configurational transition in an atomic cluster.
Relevant classical theory of finite Markov chains and a description of existing robust algorithms for their numerical analysis are given in Chapter 1. The remainder of this thesis considers the problem of investigating a transition from an initial set of states in a Markovian network to an absorbing (target) macrostate.
A formal approach to determine a finite set of representative transition paths is proposed in Chapter 2, based on exact pathwise decomposition of the total productive flux. This analysis allows for the importance of competing dynamical processes to be rigorously quantified. A robust state reduction algorithm to compute the expectation of any path property for a transition between two endpoint states is also described in Chapter 2.
Chapter 3 reports further numerically stable state reduction algorithms to compute quantities that characterize the features of a transition at a statewise level of detail, allowing for identification of the local states that play a key role in modulating the slow dynamics. An expression is derived for the probability that a state is visited on a path that proceeds directly to the absorbing state without revisiting the initial state, which characterizes the dynamical relevance of an individual state to the overall transition process.
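As a point of reference for the first-passage quantities discussed in Chapters 2 and 3, a naive computation of mean first-passage times to an absorbing macrostate is sketched below via a direct linear solve. This is deliberately not the thesis's state-reduction methodology: for nearly reducible chains, a direct solve of this kind is exactly what becomes ill-conditioned, which is what motivates the numerically stable algorithms. The example chain is an arbitrary illustration.

```python
import numpy as np

def mean_first_passage(P, absorbing):
    """Mean first-passage times to an absorbing set of a discrete-time chain.

    Solves (I - P_TT) h = 1 over the transient states T, where P_TT is the
    transient-to-transient block of the transition matrix P. Naive approach;
    ill-conditioned when the chain is nearly reducible.
    """
    n = len(P)
    transient = [i for i in range(n) if i not in set(absorbing)]
    Ptt = P[np.ix_(transient, transient)]
    h = np.linalg.solve(np.eye(len(transient)) - Ptt,
                        np.ones(len(transient)))
    out = np.zeros(n)
    out[transient] = h             # absorbing states keep passage time zero
    return out

# Two-state example: from state 0, absorb in state 1 w.p. 1/2 per step.
P = np.array([[0.5, 0.5],
              [0.0, 1.0]])
h = mean_first_passage(P, [1])     # h[0] = 2 steps on average
```

State reduction methods instead eliminate states one at a time with renormalized transition probabilities, avoiding the subtraction of nearly equal quantities that destabilizes this direct solve.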
In Chapter 4, an unsupervised strategy is proposed to utilize a highly efficient simulation algorithm for sampling paths on a Markov chain. The framework employs a scalable community detection algorithm to obtain an initial clustering of the network into metastable sets of states, which is subsequently refined by a variational optimization procedure. The optimized clustering is then used as the basis for simulating trajectory segments that necessarily escape from the metastable macrostates.
The thesis is concluded with an overview of recent related advances that are beyond the scope of the current work (Chapter 5), and a discussion of potential applications where the novel methodology reported herein may be applied to perform insightful analyses that were previously intractable.
Cambridge Commonwealth, European and International Trust
Engineering and Physical Sciences Research Council
Bayesian Inference of Stochastic Degradation Models: A Likelihood-Free Approach
The structural integrity and system performance of large engineering systems are adversely affected by various forms of degradation mechanisms. Modeling of such mechanisms is accomplished by collecting degradation data from periodic in-service inspections of structures and components. Subsequently, the degradation prediction is transformed into system and component lifetimes that are necessary inputs into the risk-based life-cycle management of critical structures. Stochastic degradation models are widely applicable for predicting degradation growth in structural components. The statistical estimation of such models is often challenged by various uncertainties, such as inherent randomness of a degradation process, parameter uncertainty due to noise in measurements, coverage issues, probe signal loss, the limited resolution of the inspection probe, and small sample size.
The Bayesian inference method can be used to quantify the uncertainties of the model parameters. However, degradation data of engineering structures are often contaminated by a significant amount of inspection error introduced by various inspection tools. As a result, the likelihood function becomes analytically intractable and computationally expensive to a degree that any traditional likelihood-based Bayesian inference scheme (e.g., the Gibbs sampler, the Metropolis-Hastings sampler) becomes difficult to use in practice.
This study proposes a practical likelihood-free approach for parameter estimation based on the approximate Bayesian computation (ABC) method. ABC is a simulation-based approach that does not require an explicit formulation of the likelihood function. Three advanced computational algorithms, namely, ABC using Markov chain Monte Carlo (ABC-MCMC), ABC using Hamiltonian Monte Carlo (ABC-HMC), and ABC using subset simulation (ABC-SS), are developed and implemented for the parameter estimation task. In the context of degradation modeling, various implementation issues of these algorithms are discussed in detail.
To improve the mixing properties of ABC-MCMC, a new ABC algorithm is derived based on the HMC sampler, which uses Hamiltonian dynamics to simulate new samples from its seed samples. Its non-random-walk behavior helps to explore the target probability space more effectively and efficiently than the standard random-walk MCMC method. The convergence of the proposed ABC-HMC algorithm is proved by showing that it satisfies the detailed balance equation, and its efficacy is verified using a numerical example. Furthermore, a new sequential ABC algorithm is proposed to deal with highly diffused priors in a Bayesian inference problem. The proposed ABC algorithm is based on the subset simulation method and a modified HMC algorithm. With faster convergence, the new algorithm turns out to be a powerful method for sampling from a complex multi-modal target density, as shown by a numerical example. The applicability of the proposed algorithm is further extended by transforming it into a likelihood-free Bayesian model selection tool.
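The basic likelihood-free idea underlying all three algorithms can be sketched with plain ABC rejection sampling on a toy normal-mean problem, a deliberately simpler stand-in for ABC-MCMC, ABC-HMC, and ABC-SS. The prior, simulator, summary statistic, and tolerance below are all illustrative assumptions.

```python
import random

def abc_rejection(observed_mean, n_samples=500, eps=0.05, n_data=100, seed=2):
    """Likelihood-free ABC rejection sampler for the mean of a normal model.

    Draw theta from a flat prior, simulate a data set, and keep theta when
    the simulated summary statistic (the sample mean) lies within eps of
    the observed one. No likelihood is ever evaluated.
    """
    rng = random.Random(seed)
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.uniform(-2.0, 2.0)                 # flat prior draw
        sim = sum(rng.gauss(theta, 1.0) for _ in range(n_data)) / n_data
        if abs(sim - observed_mean) < eps:             # ABC accept step
            accepted.append(theta)
    return accepted
```

The MCMC, HMC, and subset simulation variants replace the blind prior draws with guided proposals, which is what makes them practical for the sharply concentrated or multi-modal posteriors discussed above.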
The proposed likelihood-free approach for Bayesian inference is applied to analyze practical data sets from Canadian nuclear power plants. The practical data consist of two types of degradation measurements: (1) wall thickness data of the feeder pipes that are affected by flow-accelerated corrosion (FAC) and (2) data from the steam generator tubes affected by pitting corrosion. Four popular stochastic degradation models are considered, namely, the random rate model, the gamma process model, the mixed-effects regression model, and the Poisson process model, for characterizing the degradation processes under study. In the modeling process, various inspection uncertainties, such as the sizing error, the coverage error, and the probability of detection (POD) error, are taken into account. The numerical results demonstrate that, in comparison to the likelihood-based approach, the proposed likelihood-free approach notably reduces computational time while accurately estimating the model parameters. This study finds that these intuitive and easy-to-implement likelihood-free algorithms are versatile tools for Bayesian inference of stochastic degradation models and a promising alternative to the traditional Bayesian estimation methods.
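Of the four models named above, the gamma process is perhaps the easiest to sketch: degradation increments over disjoint intervals are independent gamma random variables, so sample paths are monotonically increasing, which is why the model suits wall thinning and pit growth. A minimal simulation with illustrative parameter values (not fitted to any plant data) is given below.

```python
import random

def gamma_process_path(shape_rate=1.0, scale=0.5,
                       times=(1.0, 2.0, 3.0, 4.0, 5.0), seed=3):
    """Sample a stationary gamma-process degradation path at inspection times.

    Increment over (t_prev, t] is Gamma(shape_rate * (t - t_prev), scale),
    independent of earlier increments, so the path is nondecreasing.
    Parameter values are illustrative.
    """
    rng = random.Random(seed)
    x, t_prev, path = 0.0, 0.0, []
    for t in times:
        x += rng.gammavariate(shape_rate * (t - t_prev), scale)
        path.append(x)                 # cumulative degradation at time t
        t_prev = t
    return path

path = gamma_process_path()            # strictly increasing degradation levels
```

An ABC scheme such as the one described above would compare simulated paths like this, corrupted by a sizing-error model, against the inspection data.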