Efficient Parallel Statistical Model Checking of Biochemical Networks
We consider the problem of verifying stochastic models of biochemical
networks against behavioral properties expressed in temporal logic. Exact
probabilistic verification approaches, such as CSL/PCTL model checking, are
undermined by a huge computational demand that rules them out for most real
case studies. Less demanding approaches, such as statistical model checking,
estimate the likelihood that a property is satisfied by sampling executions
of the stochastic model. We propose a methodology for efficiently estimating
the likelihood that an LTL property P holds of a stochastic model of a
biochemical network. As with other statistical verification techniques, the
methodology we propose uses a stochastic simulation algorithm to generate
execution samples; however, three key aspects improve its efficiency. First,
sample generation is driven by on-the-fly verification of P, which results in
optimal overall simulation time. Second, the confidence interval for the
probability that P holds is estimated with an efficient variant of the Wilson
method, which ensures faster convergence. Third, the whole methodology is
designed for parallel execution, and a prototype software tool has been
implemented that performs the sampling/verification process in parallel on an
HPC architecture.
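The Wilson-based interval estimation mentioned above can be illustrated with a minimal sketch. The abstract does not specify which variant of the Wilson method the tool uses; the code below is the standard Wilson score interval applied to Bernoulli verification outcomes (each sampled execution either satisfies P or does not), with all names illustrative:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for the probability that a sampled
    execution satisfies the property (z=1.96 ~ 95% confidence)."""
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(
        p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (max(0.0, center - half), min(1.0, center + half))

# Example: 820 of 1000 sampled executions satisfy P
lo, hi = wilson_interval(820, 1000)
```

Unlike the normal-approximation interval, the Wilson interval stays within [0, 1] and behaves well near probabilities close to 0 or 1, which is one reason it converges faster in practice.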
A Hierarchy of Scheduler Classes for Stochastic Automata
Stochastic automata are a formal compositional model for concurrent
stochastic timed systems, with general distributions and non-deterministic
choices. Measures of interest are defined over schedulers that resolve the
nondeterminism. In this paper we investigate the power of various theoretically
and practically motivated classes of schedulers, considering the classic
complete-information view and a restriction to non-prophetic schedulers. We
prove a hierarchy of scheduler classes w.r.t. unbounded probabilistic
reachability. We find that, unlike in Markovian formalisms, stochastic
automata distinguish most of these classes even in this basic setting.
Verification and strategy synthesis methods thus face a tradeoff between
powerful and efficient scheduler classes. Using lightweight scheduler
sampling, we explore this tradeoff and demonstrate a useful approximate
verification technique for stochastic automata.
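The idea behind lightweight scheduler sampling can be sketched as follows: a scheduler is identified by a single integer seed, and each nondeterministic choice is resolved by hashing the seed together with the current state, so one sampled scheduler is deterministic yet requires no explicit memory of its decisions. This is a simplified illustration, not the paper's implementation; the function names are assumptions:

```python
import hashlib

def resolve(scheduler_id, state, choices):
    """Resolve a nondeterministic choice deterministically:
    hash (scheduler id, state) and index into the available choices."""
    h = hashlib.sha256(f"{scheduler_id}|{state}".encode()).digest()
    return choices[int.from_bytes(h[:8], "big") % len(choices)]

def estimate(scheduler_id, simulate, runs=1000):
    """Estimate the reachability probability under one sampled
    scheduler by running `runs` simulations that call `resolve`."""
    return sum(simulate(scheduler_id) for _ in range(runs)) / runs
```

Sampling many scheduler ids and keeping the extremal estimates yields (unsound but useful) bounds on the best- and worst-case probabilities over the sampled scheduler class.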
On-the-fly Probabilistic Model Checking
Model checking approaches can be divided into two broad categories: global
approaches that determine the set of all states in a model M that satisfy a
temporal logic formula f, and local approaches in which, given a state s in M,
the procedure determines whether s satisfies f. When s is a term of a process
language, the model checking procedure can be executed "on-the-fly", driven by
the syntactical structure of s. For certain classes of systems, e.g. those
composed of many parallel components, the local approach is preferable because,
depending on the specific property, it may be sufficient to generate and
inspect only a relatively small part of the state space. We propose an
efficient, on-the-fly, PCTL model checking procedure that is parametric with
respect to the semantic interpretation of the language. The procedure handles
both bounded and unbounded until modalities. We prove the correctness of the
procedure and compare its efficiency against a global PCTL model checker on
representative applications.

Comment: In Proceedings ICE 2014, arXiv:1410.701
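The local, on-the-fly flavour of such a procedure can be illustrated for the bounded until modality on a discrete-time Markov chain: P_s(phi U<=k psi) is computed recursively from the start state, generating successor states only on demand, so unreachable parts of the state space are never built. This is a sketch under the standard PCTL semantics; the toy chain and function names are illustrative, not the paper's algorithm:

```python
# Toy DTMC given as a successor function: state -> [(next_state, prob)].
# Only states actually visited by the recursion are ever generated.
def successors(s):
    chain = {0: [(1, 0.5), (2, 0.5)], 1: [(1, 1.0)], 2: [(2, 1.0)]}
    return chain[s]

def check_bounded_until(s, k, phi, psi):
    """P_s(phi U<=k psi), computed on-the-fly from state s."""
    if psi(s):
        return 1.0
    if not phi(s) or k == 0:
        return 0.0
    return sum(p * check_bounded_until(t, k - 1, phi, psi)
               for t, p in successors(s))
```

For example, with psi holding exactly in state 1, the recursion from state 0 explores only the three reachable states and returns 0.5, since state 2 is absorbing.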
Certified Reinforcement Learning with Logic Guidance
This paper proposes the first model-free Reinforcement Learning (RL)
framework to synthesise policies for unknown, and continuous-state Markov
Decision Processes (MDPs), such that a given linear temporal property is
satisfied. We convert the given property into a Limit Deterministic Büchi
Automaton (LDBA), namely a finite-state machine expressing the property.
Exploiting the structure of the LDBA, we shape a synchronous reward function
on-the-fly, so that an RL algorithm can synthesise a policy resulting in traces
that probabilistically satisfy the linear temporal property. This probability
(certificate) is also calculated in parallel with policy learning when the
state space of the MDP is finite: as such, the RL algorithm produces a policy
that is certified with respect to the property. Under the assumption of finite
state space, theoretical guarantees are provided on the convergence of the RL
algorithm to an optimal policy, maximising the above probability. We also show
that our method produces "best available" control policies when the logical
property cannot be satisfied. In the general case of a continuous state space,
we propose a neural network architecture for RL and we empirically show that
the algorithm finds satisfying policies, if there exist such policies. The
performance of the proposed framework is evaluated via a set of numerical
examples and benchmarks, where we observe an improvement of one order of
magnitude in the number of iterations required for the policy synthesis,
compared to existing approaches whenever available.

Comment: This article draws from arXiv:1801.08099, arXiv:1809.0782
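The on-the-fly reward shaping described above can be sketched as follows: the learner acts on the product of the MDP state and the LDBA state, and receives a positive reward exactly when the automaton reaches an accepting state. This is a simplified illustration; the reward value, transition functions, and state encodings are placeholder assumptions, not the paper's construction:

```python
def shaped_reward(automaton_state, accepting_states, r=1.0):
    """Synchronous reward: positive only when the LDBA is in an
    accepting state, zero otherwise."""
    return r if automaton_state in accepting_states else 0.0

def step(mdp_step, ldba_delta, state, q, action, accepting_states):
    """One step on the product MDP x LDBA: the automaton reads the
    label of the next MDP state, and the reward is shaped on-the-fly
    from the resulting automaton state."""
    next_state, label = mdp_step(state, action)
    next_q = ldba_delta(q, label)
    return (next_state, next_q), shaped_reward(next_q, accepting_states)
```

A standard RL algorithm (e.g. Q-learning over the product states) can then maximise the expected accumulated reward, which under suitable discounting correlates with the probability of satisfying the temporal property.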
A modest approach to Markov automata
A duplicate of https://zenodo.org/record/5758839.
Reason: the submitter forgot to indicate the DOI before publishing, so another one was assigned automatically, which cannot be changed.