Qualitative Reachability for Open Interval Markov Chains
Interval Markov chains extend classical Markov chains with the possibility to
describe transition probabilities using intervals, rather than exact values.
While the standard formulation of interval Markov chains features closed
intervals, previous work has considered also open interval Markov chains, in
which the intervals can also be open or half-open. In this paper we focus on
qualitative reachability problems for open interval Markov chains, which
consider whether the optimal (maximum or minimum) probability with which a
certain set of states can be reached is equal to 0 or 1. We present
polynomial-time algorithms for these problems for both of the standard
semantics of interval Markov chains. Our methods do not rely on the closure of
open intervals, in contrast to previous approaches for open interval Markov
chains, and can characterise situations in which probability 0 or 1 can be
attained not exactly but arbitrarily closely.
Comment: Full version of a paper published at RP 201
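The qualitative questions above reduce, for a plain finite Markov chain, to graph analysis over the edges that can carry positive probability. A minimal Python sketch of the probability-positive and probability-one checks (the state space and edge map are made up, and the open-interval subtleties the paper handles are not modelled):

```python
from collections import deque

def reachable(succ, sources):
    """States graph-reachable from any state in `sources`."""
    seen = set(sources)
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def prob_positive(succ, s, targets):
    # Pr_s(reach T) > 0 iff T is graph-reachable from s.
    return bool(reachable(succ, [s]) & targets)

def prob_one(succ, s, targets):
    # Pr_s(reach T) = 1 iff no state reachable from s
    # has T unreachable from it.
    for u in reachable(succ, [s]):
        if not (reachable(succ, [u]) & targets):
            return False
    return True

# Underlying graph of a small (hypothetical) Markov chain: an edge
# s -> t whenever the transition probability can be positive.
succ = {0: [1, 2], 1: [1], 2: [3], 3: [3]}
T = {3}
print(prob_positive(succ, 0, T))  # True: 0 -> 2 -> 3
print(prob_one(succ, 0, T))       # False: state 1 can trap the run
print(prob_one(succ, 2, T))       # True
```

Both checks run in time linear in the number of edges per BFS, which is why polynomial-time algorithms are plausible even before the interval semantics are taken into account.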
A logic for reasoning about time and reliability
We present a logic for stating properties such as, "after a request
for service there is at least a 98% probability that the service will
be carried out within 2 seconds". The logic extends the temporal
logic CTL by Emerson, Clarke and Sistla with time and probabilities.
Formulas are interpreted over discrete-time Markov chains. We give
algorithms for checking that a given Markov chain satisfies a formula
in the logic. The algorithms require a polynomial number of arithmetic
operations, in the size of both the formula and the Markov chain. A
simple example is included to illustrate the algorithms.
Comment: This research report is a revised and extended version of a
paper that appeared under the title "A Framework for Reasoning about
Time and Reliability" in the Proceedings of the 10th IEEE Real-time
Systems Symposium, Santa Monica CA, December 1989. This work was
partially supported by the Swedish Board for Technical Development
(STU) as part of Esprit BRA Project SPEC, and by the Swedish
Telecommunication Administration.
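The time-bounded formulas this logic adds to CTL can be checked by one backward matrix-vector step per time unit. A small sketch in Python, assuming a discrete-time chain given as a successor-distribution map (the chain, state names, and the 0.98 bound are illustrative, not taken from the paper):

```python
def bounded_reach_prob(P, targets, k):
    """Probability of reaching `targets` within k steps, per state.
    P: dict state -> list of (successor, probability)."""
    states = list(P)
    x = {s: 1.0 if s in targets else 0.0 for s in states}
    for _ in range(k):
        x = {s: 1.0 if s in targets else
             sum(p * x[t] for t, p in P[s]) for s in states}
    return x

# Hypothetical two-state chain: a pending request is served each
# step with probability 0.9, otherwise it stays pending.
P = {
    "req":  [("done", 0.9), ("req", 0.1)],
    "done": [("done", 1.0)],
}
probs = bounded_reach_prob(P, {"done"}, k=2)
print(probs["req"])  # ~0.99, so a 0.98 bound within 2 steps holds
```

Each of the k iterations costs one pass over the transitions, matching the polynomial operation count claimed in the abstract.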
On the complexity of computing maximum entropy for Markovian Models
We investigate the complexity of computing entropy of various Markovian models including
Markov Chains (MCs), Interval Markov Chains (IMCs) and Markov Decision Processes (MDPs).
We consider both entropy and entropy rate for general MCs, and study two algorithmic questions,
i.e., entropy approximation problem and entropy threshold problem. The former asks for
an approximation of the entropy/entropy rate within a given precision, whereas the latter aims
to decide whether they exceed a given threshold. We give polynomial-time algorithms for the
approximation problem, and show that the threshold problem is in P^CH_3 (hence in PSPACE) and
in P assuming some number-theoretic conjectures. Furthermore, we study both questions for
IMCs and MDPs where we aim to maximise the entropy/entropy rate among an infinite family
of MCs associated with the given model. We give various conditional decidability results for
the threshold problem, and show that the approximation problem is solvable in polynomial time
via convex programming.
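For a fixed ergodic MC, the entropy rate in question is the stationary-weighted average of the per-row transition entropies. A minimal sketch, assuming ergodicity and using power iteration for the stationary distribution (the two-state chain is a made-up example):

```python
import math

def entropy_rate(P, iters=1000):
    """Entropy rate of an ergodic Markov chain:
    H = sum_s pi(s) * H(P(s, .)), with pi the stationary distribution.
    P: dict state -> dict successor -> probability."""
    states = list(P)
    # Stationary distribution by power iteration (assumes ergodicity).
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):
        nxt = {s: 0.0 for s in states}
        for s in states:
            for t, p in P[s].items():
                nxt[t] += pi[s] * p
        pi = nxt
    # Shannon entropy of each row, weighted by its stationary mass.
    return sum(pi[s] * sum(-p * math.log2(p)
                           for p in P[s].values() if p > 0)
               for s in states)

# Symmetric two-state chain: every row is a fair coin, so the
# entropy rate is exactly 1 bit per step.
P = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.5, 1: 0.5}}
print(entropy_rate(P))  # 1.0
```

The hard part studied in the abstract is not this fixed-chain computation but maximising the quantity over the infinite family of chains an IMC or MDP induces, which is where the convex-programming formulation comes in.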
Flow Faster: Efficient Decision Algorithms for Probabilistic Simulations
Strong and weak simulation relations have been proposed for Markov chains,
while strong simulation and strong probabilistic simulation relations have been
proposed for probabilistic automata. However, decision algorithms for strong
and weak simulation over Markov chains, and for strong simulation over
probabilistic automata are not efficient, which makes it as yet unclear whether
they can be used as effectively as their non-probabilistic counterparts. This
paper presents drastically improved algorithms to decide whether some
(discrete- or continuous-time) Markov chain strongly or weakly simulates
another, or whether a probabilistic automaton strongly simulates another. The
key innovation is the use of parametric maximum flow techniques to amortize
computations. We also present a novel algorithm for deciding strong
probabilistic simulation preorders on probabilistic automata, which has
polynomial complexity via a reduction to an LP problem. When extending the
algorithms for probabilistic automata to their continuous-time counterpart, we
retain the same complexity for both strong and strong probabilistic
simulations.
Comment: LMC
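The amortisation in the paper relies on parametric maximum flow, but the underlying one-step test is already a plain max-flow feasibility question: a distribution mu is simulated by nu up to a relation R iff a flow routed from mu to nu through R's edges saturates mu. A self-contained sketch with a basic Edmonds-Karp flow (the distributions and relation are hypothetical, probabilities are scaled to integers for exactness, and the paper's parametric reuse of flows is not shown):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow; cap: dict (u, v) -> integer capacity."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (a, b), c in cap.items():
                if a == u and b not in parent and c > 0:
                    parent[b] = u
                    q.append(b)
        if t not in parent:
            return flow
        # Find the bottleneck along the path and augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for (a, b) in path:
            cap[(a, b)] -= aug
            cap[(b, a)] = cap.get((b, a), 0) + aug
        flow += aug

def simulated_step(mu, nu, R):
    """mu is related to nu up to R iff the max flow from source to
    sink saturates mu (probabilities scaled by 100 to stay integral)."""
    cap = {}
    for u, p in mu.items():
        cap[("src", ("L", u))] = round(100 * p)
    for v, p in nu.items():
        cap[(("R", v), "snk")] = round(100 * p)
    for (u, v) in R:
        if u in mu and v in nu:
            cap[(("L", u), ("R", v))] = 100
    return max_flow(cap, "src", "snk") == 100

# Hypothetical successor distributions and pairwise relation:
mu = {"a": 0.5, "b": 0.5}
nu = {"x": 0.7, "y": 0.3}
R = {("a", "x"), ("b", "x"), ("b", "y")}
print(simulated_step(mu, nu, R))  # True
```

Repeating this check from scratch for every candidate pair is what makes naive decision procedures slow; the paper's contribution is to make successive flow computations share work.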
Hierarchical semi-Markov conditional random fields for recursive sequential data
Inspired by the hierarchical hidden Markov models (HHMM), we present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of embedded undirected Markov chains to model complex hierarchical, nested Markov processes. It is parameterised in a discriminative framework and has polynomial-time algorithms for learning and inference. Importantly, we develop efficient algorithms for learning and constrained inference in a partially-supervised setting, which is an important issue in practice where labels can only be obtained sparsely. We demonstrate the HSCRF in two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. We show that the HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
LNCS
Responsiveness—the requirement that every request to a system be eventually handled—is one of the fundamental liveness properties of a reactive system. Average response time is a quantitative measure for the responsiveness requirement used commonly in performance evaluation. We show how average response time can be computed on state-transition graphs, on Markov chains, and on game graphs. In all three cases, we give polynomial-time algorithms
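On a Markov chain, the average response time from a request state is the expected hitting time of the served states, the least fixed point of h(s) = 1 + sum_t P(s,t)·h(t) with h = 0 on targets. A sketch by value iteration (the two-state chain is illustrative; the polynomial-time algorithms mentioned above would instead solve the corresponding linear system exactly):

```python
def expected_hitting_time(P, targets, iters=200):
    """Expected number of steps to reach `targets`, by value iteration:
    h(s) = 0 on targets, h(s) = 1 + sum_t P(s,t) * h(t) otherwise.
    Assumes targets are reached with probability 1 from every state."""
    states = list(P)
    h = {s: 0.0 for s in states}
    for _ in range(iters):
        h = {s: 0.0 if s in targets else
             1.0 + sum(p * h[t] for t, p in P[s]) for s in states}
    return h

# Hypothetical chain: a request is served each step with
# probability 0.5, so the average response time is 1 / 0.5 = 2 steps.
chain = {
    "req":  [("done", 0.5), ("req", 0.5)],
    "done": [("done", 1.0)],
}
print(expected_hitting_time(chain, {"done"})["req"])  # ~2.0
```

The state-transition-graph and game-graph settings in the abstract replace this expectation with long-run averages and optimal strategies, but the Markov-chain case reduces to exactly this kind of linear computation.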