Model Checking Markov Chains with Actions and State Labels
In the past, logics of several kinds have been proposed for reasoning about discrete- or continuous-time Markov chains. Most of these logics rely on either state labels (atomic propositions) or on transition labels (actions). However, in several applications it is useful to reason about both state properties and action sequences. For this purpose, we introduce the logic asCSL, which provides powerful means to characterize execution paths of Markov chains with actions and state labels. asCSL can be regarded as an extension of the purely state-based logic CSL (continuous stochastic logic).
In asCSL, path properties are characterized by regular expressions over actions and state formulas. Thus, the truth value of path formulas depends not only on the available actions in a given time interval, but also on the validity of certain state formulas in intermediate states.
We compare the expressive power of CSL and asCSL and show that even the state-based fragment of asCSL is strictly more expressive than CSL if time intervals starting at zero are employed. Using an automaton-based technique, an asCSL formula and a Markov chain with actions and state labels are combined into a product Markov chain. For time intervals starting at zero we establish a reduction of the model checking problem for asCSL to CSL model checking on this product Markov chain. The usefulness of our approach is illustrated through an elaborate model of a scalable cellular communication system, for which several properties are formalized by means of asCSL formulas and checked using the new procedure.
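The automaton-based product construction can be illustrated with a toy sketch. All names and numbers below are invented for illustration (and time is discretised into steps, a simplification of the continuous-time setting): a small Markov chain with actions and state labels is combined with a deterministic automaton for an action-sequence property, and the probability of accepting paths is computed on the implicit product chain.

```python
# Hypothetical Markov chain with actions and state labels (not from the
# paper): each state maps to a list of (action, target, probability).
chain = {
    "s0": [("req", "s1", 0.7), ("idle", "s0", 0.3)],
    "s1": [("ack", "s2", 0.9), ("fail", "s0", 0.1)],
    "s2": [("idle", "s2", 1.0)],
}
labels = {"s0": set(), "s1": {"busy"}, "s2": {"done"}}

# Deterministic automaton for the path property "a req immediately
# followed by an ack": q0 -(req)-> q1 -(ack)-> acc; all other actions
# lead to a rejecting sink.
def dfa_step(q, action):
    if q == "q0" and action == "req":
        return "q1"
    if q == "q1" and action == "ack":
        return "acc"
    return "sink"

def prob_accepting(state, q, steps):
    """Probability that the automaton accepts within `steps`
    transitions, evaluated on the (implicit) product Markov chain."""
    if q == "acc":
        return 1.0
    if q == "sink" or steps == 0:
        return 0.0
    return sum(p * prob_accepting(t, dfa_step(q, a), steps - 1)
               for (a, t, p) in chain[state])

print(prob_accepting("s0", "q0", 3))  # 0.7 * 0.9 = 0.63
```

The product never needs to be built explicitly here: pairing the chain state with the automaton state during the recursion is exactly the product construction in miniature.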
Extending the Logic IM-SPDL with Impulse and State Rewards
This report presents the logic SDRL (Stochastic Dynamic Reward Logic), an extension of the stochastic logic IM-SPDL, which supports the specification of complex performance and dependability requirements. SDRL extends IM-SPDL with the possibility to express impulse- and state-reward measures.
The logic is interpreted over extended action-based Markov reward models (EMRMs), i.e. transition systems containing both immediate and Markovian transitions, where additionally the states and transitions can be enriched with rewards.
We define the syntax and semantics of the new logic and show that SDRL provides powerful means to specify path-based properties with timing and reward-based restrictions.
In general, paths can be characterised by regular expressions, also called programs, where the executability of a program may depend on the validity of test formulae. For model checking SDRL time- and reward-bounded path formulae, a deterministic program automaton is constructed from the requirement. Afterwards, the product transition system between this automaton and the EMRM is built and subsequently transformed into a continuous-time Markov reward model (MRM), on which numerical analysis is performed.
SPDL Model Checking via Property-Driven State Space Generation
In this report we describe how both the memory and time requirements for stochastic model checking of SPDL (stochastic propositional dynamic logic) formulae can be significantly reduced. SPDL is the stochastic extension of the multi-modal program logic PDL.
SPDL provides means to specify path-based properties with or without timing restrictions. Paths can be characterised by so-called programs, essentially regular expressions, whose executability can be made dependent on the validity of test formulae. For model checking SPDL path formulae it is necessary to build a product transition system (PTS) between the system model and the program automaton belonging to the path formula that is to be verified.
In many cases, this PTS can be drastically reduced during the model checking procedure, as the program restricts the number of potentially satisfying paths. Therefore, we propose an approach that directly generates the reduced PTS from a given SPA specification and an SPDL path formula.
The feasibility of this approach is demonstrated through a selection of case studies, which exhibit enormous state-space reductions at no increase in generation time.
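The idea of property-driven generation can be sketched as follows (a minimal, hypothetical example with rates omitted; the actual tool works on SPA specifications): product states whose automaton component becomes a rejecting sink are never generated, so only states relevant to the property are explored.

```python
from collections import deque

# Hypothetical system model: state -> list of (action, successor).
def system_moves(s):
    moves = {
        0: [("a", 1), ("b", 2)],
        1: [("c", 3)],
        2: [("a", 2)],
        3: [],
    }
    return moves[s]

# Program automaton for the (assumed) path formula "a; c":
# a transition to None means the path can no longer satisfy the program.
def automaton_step(q, action):
    table = {("q0", "a"): "q1", ("q1", "c"): "qF"}
    return table.get((q, action))  # None = rejecting sink

def generate_product(init_state, init_q):
    """Build only the reachable, non-sink part of the product
    transition system: sink successors are pruned immediately, which
    is what keeps the generated state space small."""
    seen = {(init_state, init_q)}
    frontier = deque(seen)
    edges = []
    while frontier:
        s, q = frontier.popleft()
        for action, t in system_moves(s):
            q2 = automaton_step(q, action)
            if q2 is None:          # property-driven pruning
                continue
            edges.append(((s, q), action, (t, q2)))
            if (t, q2) not in seen:
                seen.add((t, q2))
                frontier.append((t, q2))
    return seen, edges

states, edges = generate_product(0, "q0")
print(len(states))  # 3 states instead of the full 4 x 3 product
```

Here the branch through state 2 is never explored because no continuation of action "b" can satisfy the program, mirroring how the program restricts the set of potentially satisfying paths.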
Timed Comparisons of Semi-Markov Processes
Semi-Markov processes are Markovian processes in which the firing time of the
transitions is modelled by probabilistic distributions over positive reals
interpreted as the probability of firing a transition at a certain moment in
time. In this paper we consider the trace-based semantics of semi-Markov
processes, and investigate the question of how to compare two semi-Markov
processes with respect to their time-dependent behaviour. To this end, we
introduce the relation of being "faster than" between processes and study its
algorithmic complexity. Through a connection to probabilistic automata we
obtain hardness results showing in particular that this relation is
undecidable. However, we present an additive approximation algorithm for a
time-bounded variant of the faster-than problem over semi-Markov processes with
slow residence-time functions, and a coNP algorithm for the exact faster-than
problem over unambiguous semi-Markov processes.
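A time-bounded faster-than check can be illustrated with a minimal sketch, under the simplifying assumption that each process is a straight-line chain of k states with identical exponential residence times, so that its completion time is Erlang-distributed (this is an invented toy setting, not the paper's approximation algorithm):

```python
import math

def completion_prob(k, rate, t):
    """P(Erlang(k, rate) <= t): probability that a chain of k
    exponential sojourns with the given rate finishes within time t."""
    return 1.0 - math.exp(-rate * t) * sum(
        (rate * t) ** i / math.factorial(i) for i in range(k))

def faster_than(a, b, horizon, checks=100):
    """Approximate time-bounded faster-than: A is faster than B if its
    completion probability dominates B's at every sampled time bound up
    to `horizon`.  a and b are hypothetical (k, rate) descriptions."""
    return all(
        completion_prob(*a, t) >= completion_prob(*b, t) - 1e-9
        for t in (horizon * i / checks for i in range(1, checks + 1)))

# A 2-step process with rate 3 dominates one with rate 1 at every bound:
print(faster_than((2, 3.0), (2, 1.0), horizon=10.0))  # True
print(faster_than((2, 1.0), (2, 3.0), horizon=10.0))  # False
```

Sampling a finite grid of time bounds is what makes this only an approximation: the actual faster-than relation quantifies over all time bounds, which is the source of the hardness results in the paper.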
Stochastic abstraction of programs: towards performance-driven development
Distributed computer systems are becoming increasingly prevalent, thanks to modern
technology, and this leads to significant challenges for the software developers of these
systems. In particular, in order to provide a certain service level agreement with users,
the performance characteristics of the system are critical. However, developers today
typically consider performance only in the later stages of development, when it may be
too late to make major changes to the design. In this thesis, we propose a performance-driven
approach to development, based around tool support that allows developers
to use performance modelling techniques, while still working at the level of program
code.
There are two central themes to the thesis. The first is to automatically relate performance
models to program code. We define the Simple Imperative Remote Invocation
Language (SIRIL), and provide a probabilistic semantics that interprets a program
as a Markov chain. To make such an interpretation both computable and efficient,
we develop an abstract interpretation of the semantics, from which we can derive a
Performance Evaluation Process Algebra (PEPA) model of the system. This is based
around abstracting the domain of variables to truncated multivariate normal measures.
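The flavour of this abstraction can be sketched in one dimension (a hypothetical simplification; the thesis abstracts to truncated multivariate normal measures): a variable's distribution is summarised by the parameters of a truncated normal, and affine assignments are propagated directly on those parameters.

```python
# Hypothetical one-dimensional sketch of abstracting a program
# variable's distribution to a truncated normal measure.
class TruncNormal:
    def __init__(self, mu, sigma, lo, hi):
        # Parameters of the underlying normal, plus truncation bounds.
        self.mu, self.sigma, self.lo, self.hi = mu, sigma, lo, hi

    def affine(self, a, b):
        """Abstract effect of the assignment x := a*x + b: an affine
        map of a truncated normal is again a truncated normal with
        transformed parameters."""
        lo, hi = sorted((a * self.lo + b, a * self.hi + b))
        return TruncNormal(a * self.mu + b, abs(a) * self.sigma, lo, hi)

x = TruncNormal(mu=0.0, sigma=1.0, lo=-2.0, hi=2.0)
y = x.affine(2.0, 3.0)
print((y.mu, y.sigma, y.lo, y.hi))  # (3.0, 2.0, -1.0, 7.0)
```

Affine updates are the easy case, since they are exact on this domain; non-linear updates and branching on conditions are where an abstract interpretation must over-approximate.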
The second theme of the thesis is to analyse large performance models by means
of compositional abstraction. We use two abstraction techniques based on aggregation
of states — abstract Markov chains, and stochastic bounds — and apply both of
them compositionally to PEPA models. This allows us to model check properties in
the three-valued Continuous Stochastic Logic (CSL), on abstracted models. We have
implemented an extension to the Eclipse plug-in for PEPA, which provides a graphical
interface for specifying which states in the model to aggregate, and for performing the
model checking.
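Aggregation-based abstraction can be sketched as follows (with hypothetical numbers): grouping concrete states into blocks yields intervals of transition probabilities rather than single values, which is why a three-valued interpretation of properties becomes necessary on the abstract model.

```python
# Hypothetical concrete Markov chain: state -> {target: probability}.
concrete = {
    "u1": {"u1": 0.5, "v": 0.5},
    "u2": {"u2": 0.2, "v": 0.8},
    "v":  {"v": 1.0},
}
# Aggregation: concrete states grouped into abstract blocks.
blocks = {"U": {"u1", "u2"}, "V": {"v"}}

def abstract_intervals(concrete, blocks):
    """For each pair of blocks (A, B), return the [min, max] over A's
    member states of their total probability of jumping into B."""
    intervals = {}
    for A, members in blocks.items():
        for B, targets in blocks.items():
            mass = [sum(p for t, p in concrete[s].items() if t in targets)
                    for s in members]
            intervals[(A, B)] = (min(mass), max(mass))
    return intervals

iv = abstract_intervals(concrete, blocks)
print(iv[("U", "V")])  # (0.5, 0.8): aggregation loses precision here
```

Whenever an interval is wide, a property's truth value on the abstract model may come out as "unknown", the third value of three-valued CSL.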