Imprecise Continuous-Time Markov Chains
Continuous-time Markov chains are mathematical models that are used to
describe the state-evolution of dynamical systems under stochastic uncertainty,
and have found widespread applications in various fields. In order to make
these models computationally tractable, they rely on a number of assumptions
that may not be realistic for the domain of application; in particular, the
ability to provide exact numerical parameter assessments, and the applicability
of time-homogeneity and the eponymous Markov property. In this work, we extend
these models to imprecise continuous-time Markov chains (ICTMC's), which are a
robust generalisation that relaxes these assumptions while remaining
computationally tractable.
More technically, an ICTMC is a set of "precise" continuous-time finite-state
stochastic processes, and rather than computing expected values of functions,
we seek to compute lower expectations, which are tight lower bounds on the
expectations that correspond to such a set of "precise" models. Note that, in
contrast to e.g. Bayesian methods, all the elements of such a set are treated
on equal grounds; we do not consider a distribution over this set.
The first part of this paper develops a formalism for describing
continuous-time finite-state stochastic processes that does not require the
aforementioned simplifying assumptions. Next, this formalism is used to
characterise ICTMC's and to investigate their properties. The concept of lower
expectation is then given an alternative operator-theoretic characterisation,
by means of a lower transition operator, and the properties of this operator
are investigated as well. Finally, we use this lower transition operator to
derive tractable algorithms (with polynomial runtime complexity w.r.t. the
maximum numerical error) for computing the lower expectation of functions that
depend on the state at any finite number of time points.
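To make the abstract's notion of a lower transition operator concrete, here is a minimal sketch (not the paper's algorithm) for a two-state chain whose transition rates are only known to lie in intervals. The interval bounds, the Euler discretisation of the operator, and the step count are all illustrative assumptions:

```python
import numpy as np

def lower_rate_apply(f, a_int, b_int):
    # (Q_lower f)(x) = min over all rate matrices Q in the interval set of (Q f)(x).
    # Row for state 0: [-a, a] with a in a_int; row for state 1: [b, -b] with b in b_int.
    a = a_int[0] if f[1] >= f[0] else a_int[1]
    b = b_int[0] if f[0] >= f[1] else b_int[1]
    return np.array([a * (f[1] - f[0]), b * (f[0] - f[1])])

def lower_expectation(f, t, a_int, b_int, n=10000):
    # Euler discretisation of the lower transition operator:
    # f <- f + (t/n) * (Q_lower f), applied n times.
    dt = t / n
    f = np.array(f, dtype=float)
    for _ in range(n):
        f = f + dt * lower_rate_apply(f, a_int, b_int)
    return f  # f[x] = lower expectation of the original f(X_t), given X_0 = x
```

With degenerate intervals this reduces to an ordinary CTMC expectation, and widening the intervals can only decrease the resulting lower bound, which matches the set-of-models interpretation above.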
Transient Reward Approximation for Continuous-Time Markov Chains
We are interested in the analysis of very large continuous-time Markov chains
(CTMCs) with many distinct rates. Such models arise naturally in the context of
reliability analysis, e.g., in computer network performability analysis, in the
study of power grids and of computer virus vulnerability, and in the study of crowd
dynamics. We use abstraction techniques together with novel algorithms for the
computation of bounds on the expected final and accumulated rewards in
continuous-time Markov decision processes (CTMDPs). These ingredients are
combined in a partly symbolic and partly explicit (symblicit) analysis
approach. In particular, we circumvent the use of multi-terminal decision
diagrams, because the latter do not work well if facing a large number of
different rates. We demonstrate the practical applicability and efficiency of
the approach on two case studies. Comment: Accepted for publication in IEEE Transactions on Reliability.
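For a small explicit CTMC, the accumulated-reward quantity this abstract bounds can be sketched directly (this is a toy numerical integration, not the paper's symblicit approach; the generator, reward vector, and step count below are illustrative):

```python
import numpy as np

def accumulated_reward(Q, r, pi0, t, n=20000):
    # Expected accumulated reward E[ integral_0^t r(X_s) ds ] for a CTMC with
    # generator Q (rows sum to 0): integrate the forward equation dpi/dt = pi Q
    # with Euler steps, summing the instantaneous expected reward pi . r.
    dt = t / n
    pi = np.array(pi0, dtype=float)
    total = 0.0
    for _ in range(n):
        total += dt * (pi @ r)       # left Riemann sum of pi(s) . r
        pi = pi + dt * (pi @ Q)      # Euler step of the forward Kolmogorov equation
    return total
```

Symbolic and abstraction-based methods replace this explicit state-vector iteration when the state space is too large to enumerate, which is the regime the abstract targets.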
Tunneling and Metastability of continuous time Markov chains
We propose a new definition of metastability of Markov processes on countable
state spaces. We obtain sufficient conditions for a sequence of processes to be
metastable. In the reversible case these conditions are expressed in terms of
the capacity and of the stationary measure of the metastable states.
Parallel algorithms for simulating continuous time Markov chains
We have previously shown that the mathematical technique of uniformization can serve as the basis of synchronization for the parallel simulation of continuous-time Markov chains. This paper reviews the basic method and compares five different methods based on uniformization, evaluating their strengths and weaknesses as a function of problem characteristics. The methods vary in their use of optimism, logical aggregation, communication management, and adaptivity. Performance evaluation is conducted on the Intel Touchstone Delta multiprocessor, using up to 256 processors.
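The uniformization technique the abstract builds on can be sketched as follows (a sequential textbook version for computing transient state probabilities, not the parallel simulation scheme itself; the uniformization constant and truncation tolerance are illustrative choices):

```python
import numpy as np
from math import exp

def transient_distribution(Q, pi0, t, tol=1e-10):
    # Uniformization: pick Lam >= max_i |Q[i,i]|, so P = I + Q/Lam is a
    # stochastic matrix, and pi(t) = sum_k Poisson(Lam*t; k) * pi0 P^k.
    # Suitable for modest Lam*t; large values need scaled Poisson weights.
    Lam = max(-Q[i, i] for i in range(Q.shape[0])) * 1.01
    P = np.eye(Q.shape[0]) + Q / Lam
    v = np.array(pi0, dtype=float)
    w = exp(-Lam * t)        # Poisson weight for k = 0
    pi_t = w * v
    acc, k = w, 1
    while acc < 1.0 - tol:   # truncate once the Poisson mass is exhausted
        v = v @ P
        w *= Lam * t / k
        pi_t += w * v
        acc += w
        k += 1
    return pi_t
```

In the parallel-simulation setting, the same Poisson/stochastic-matrix decomposition lets processors pre-sample potential event times independently, which is what makes it usable as a synchronization mechanism.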
Analysis of signalling pathways using continuous time Markov chains
We describe a quantitative modelling and analysis approach for signal transduction networks.
We illustrate the approach with an example, the RKIP-inhibited ERK pathway [CSK+03]. Our models are high-level descriptions of continuous time Markov chains: proteins are modelled by synchronous processes and reactions by transitions. Concentrations are modelled by discrete, abstract quantities. The main advantage of our approach is that, using a (continuous time) stochastic logic and the PRISM model checker, we can perform quantitative analysis such as "what is the probability that if a concentration reaches a certain level, it will remain at that level thereafter?" or "how does varying a given reaction rate affect that probability?" We also perform standard simulations and compare our results with a traditional ordinary differential equation model. An interesting result is that, for the example pathway, only a small number of discrete data values is required to render the simulations practically indistinguishable.
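The idea of modelling a concentration by a small number of abstract discrete levels can be sketched with a stochastic simulation of a single hypothetical species (a Gillespie-style toy, not the paper's PRISM model; the level count and rate constants are invented for illustration):

```python
import random

def gillespie_levels(N, k_up, k_down, t_end, seed=0):
    # CTMC on abstract concentration levels 0..N for one species:
    # the level rises at rate k_up*(N - x) and falls at rate k_down*x,
    # simulated with Gillespie's direct method.
    rng = random.Random(seed)
    x, t = 0, 0.0
    while True:
        up, down = k_up * (N - x), k_down * x
        total = up + down
        if total == 0:
            return x
        t += rng.expovariate(total)   # exponential holding time
        if t > t_end:
            return x
        x += 1 if rng.random() < up / total else -1
```

Averaging many such runs approaches the stationary mean N*k_up/(k_up + k_down), and the abstract's observation is that even a coarse level discretisation like this can reproduce the behaviour of a much finer-grained model.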