Tester versus Bug: A Generic Framework for Model-Based Testing via Games
We propose a generic game-based approach for test case generation. We set up a game between the tester and the System Under Test, in such a way that test cases correspond to game strategies, and the conformance relation ioco corresponds to alternating refinement. We show that different test assumptions from the literature can be easily incorporated by slightly varying the moves in the games and their outcomes. In this way, our framework allows a wide range of game-theoretic techniques to be deployed for model-based testing.
Comment: In Proceedings GandALF 2018, arXiv:1809.0241
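To make the game view concrete, here is a minimal Python sketch (not the paper's framework): a toy specification and two toy implementations are modelled as input-output transition systems, and a single play of the tester-versus-SUT game is run, with the tester choosing inputs and the implementation choosing outputs; the play fails when the implementation produces an output the specification does not allow. All state names, labels and models are hypothetical.

    import random

    # Labels starting with '?' are inputs (tester moves); labels starting with
    # '!' are outputs (SUT moves).  Each model maps a state to its transitions.
    spec = {
        's0': [('?coin', 's1')],
        's1': [('!coffee', 's0'), ('!tea', 's0')],
    }
    impl_good = {
        's0': [('?coin', 's1')],
        's1': [('!coffee', 's0')],       # a subset of the outputs the spec allows
    }
    impl_bad = {
        's0': [('?coin', 's1')],
        's1': [('!soup', 's0')],         # an output the spec does not allow
    }

    def outputs(model, state):
        return [(l, t) for (l, t) in model.get(state, []) if l.startswith('!')]

    def inputs(model, state):
        return [(l, t) for (l, t) in model.get(state, []) if l.startswith('?')]

    def play(spec, impl, max_moves=10, seed=0):
        """One play of the tester-versus-SUT game; returns 'pass' or 'fail'."""
        rng = random.Random(seed)
        s_spec, s_impl = 's0', 's0'
        for _ in range(max_moves):
            impl_out = outputs(impl, s_impl)
            if impl_out:                              # SUT move: produce an output
                label, s_impl = rng.choice(impl_out)
                allowed = dict(outputs(spec, s_spec))
                if label not in allowed:              # output forbidden by the spec
                    return 'fail'
                s_spec = allowed[label]
                continue
            ins = inputs(spec, s_spec)                # tester move: choose an input
            if not ins:                               # no move left: stop the test
                return 'pass'
            label, s_spec = rng.choice(ins)           # a real strategy would choose deliberately
            s_impl = dict(impl.get(s_impl, [])).get(label, s_impl)
        return 'pass'

    print(play(spec, impl_good))   # expected: pass
    print(play(spec, impl_bad))    # expected: fail

A tester strategy in this view is simply a rule for picking the next input given the play so far; here it is random, but any deliberate strategy slots into the same place.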
With a little help from your friends: semi-cooperative games via Joker moves
This paper coins the notion of Joker games, where Player 2 is not strictly adversarial: Player 1 gets help from Player 2 by playing a Joker. We formalize these games as cost games and study their theoretical properties. Finally, we illustrate their use in model-based testing.
Comment: Extended version with appendix
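As a rough illustration of the cost-game view, the following Python sketch (a hypothetical example, not the paper's construction) computes, for a tiny game graph, the minimal number of Jokers Player 1 needs to guarantee reaching a target state, using a simple fixpoint iteration: at Player 2 states, Player 2 moves adversarially unless Player 1 pays one Joker to dictate the move.

    import math

    # Hypothetical game graph: state -> (owner, list of successor states).
    game = {
        'a': ('P1', ['b']),
        'b': ('P2', ['a', 'goal']),   # an adversarial Player 2 loops back to 'a'
        'goal': ('P1', []),
    }

    def joker_values(game, target, rounds=50):
        """Minimal number of Jokers Player 1 needs, per state, to reach target."""
        val = {s: (0 if s == target else math.inf) for s in game}
        for _ in range(rounds):                                   # fixpoint iteration
            for s, (owner, succs) in game.items():
                if s == target or not succs:
                    continue
                if owner == 'P1':
                    val[s] = min(val[t] for t in succs)
                else:
                    no_joker = max(val[t] for t in succs)         # Player 2 adversarial
                    with_joker = 1 + min(val[t] for t in succs)   # Player 1 pays a Joker
                    val[s] = min(no_joker, with_joker)
        return val

    print(joker_values(game, 'goal'))
    # {'a': 1, 'b': 1, 'goal': 0}: one Joker suffices, spent at 'b' to force 'goal'.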
BFL: a Logic to Reason about Fault Trees
Safety-critical infrastructures must operate safely and reliably. Fault tree analysis is a widespread method used to assess risks in these systems: fault trees (FTs) are required - among others - by the Federal Aviation Authority, the Nuclear Regulatory Commission, in the ISO 26262 standard for autonomous driving and for software development in aerospace systems. Although popular both in industry and academia, FTs lack a systematic way to formulate powerful and understandable analysis queries. In this paper, we aim to fill this gap and introduce Boolean Fault tree Logic (BFL), a logic to reason about FTs. BFL is a simple, yet expressive logic that supports easier formulation of complex scenarios and specification of FT properties. Alongside BFL, we present model checking algorithms based on binary decision diagrams (BDDs) to analyse properties specified in BFL, as well as specification patterns and an algorithm to construct counterexamples. Finally, we propose a case-study application of BFL by analysing a COVID-19-related FT.
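To give a flavour of the kind of queries BFL targets, here is a minimal Python sketch that encodes a small hypothetical fault tree as a Boolean function over its basic events and checks two BFL-style queries by brute-force enumeration rather than the paper's BDD-based algorithms; the tree structure and event names are assumptions made for illustration.

    from itertools import product

    basic_events = ['pump_fails', 'valve_stuck', 'sensor_fault', 'power_loss']

    def top_event(s):
        """TOP = power_loss OR (pump_fails AND (valve_stuck OR sensor_fault))."""
        return s['power_loss'] or (s['pump_fails'] and (s['valve_stuck'] or s['sensor_fault']))

    def all_states():
        """Enumerate every assignment of failed/operational to the basic events."""
        for bits in product([False, True], repeat=len(basic_events)):
            yield dict(zip(basic_events, bits))

    def independent_given(event, given):
        """Query 1: given that 'given' has failed, is the top event independent of 'event'?"""
        for s in all_states():
            if not s[given]:
                continue
            flipped = dict(s, **{event: not s[event]})
            if top_event(s) != top_event(flipped):
                return False
        return True

    def single_points_of_failure():
        """Query 2: which single basic events alone already trigger the top event?"""
        return [e for e in basic_events
                if top_event({b: (b == e) for b in basic_events})]

    print(independent_given('sensor_fault', 'valve_stuck'))   # True
    print(single_points_of_failure())                         # ['power_loss']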
Maintenance of Smart Buildings using Fault Trees
Timely maintenance is an important means of increasing system dependability and life span. Fault Maintenance Trees (FMTs) are an innovative framework that incorporates both maintenance strategies and degradation models, and serves as a good planning platform for balancing total costs (operational and maintenance) with the dependability of a system. In this work, we apply the FMT formalism to a Smart Building application and propose a framework that efficiently encodes the FMT into Continuous Time Markov Chains. This allows us to obtain system dependability metrics such as system reliability and mean time to failure, as well as costs of maintenance and failures over time, for different maintenance policies. We illustrate the pertinence of our approach by evaluating various dependability metrics and maintenance strategies of a Heating, Ventilation and Air-Conditioning system.
Comment: arXiv admin note: substantial text overlap with arXiv:1801.0426
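The following Python sketch hints at the kind of quantities such a CTMC encoding yields, using a hypothetical two-phase degradation model in which periodic maintenance is approximated by an exponential repair rate; the rates and cost figures are illustrative assumptions, and the computation is a plain absorbing-CTMC analysis rather than the paper's FMT translation.

    import numpy as np

    lam1, lam2, mu = 0.10, 0.50, 1.0     # degrade, fail and repair rates (per year)
    cost_per_repair = 200.0              # hypothetical cost unit

    # Transient states: 0 = OK, 1 = degraded; state 2 (failed) is absorbing.
    # Q_tt is the CTMC generator restricted to the transient states.
    Q_tt = np.array([
        [-lam1, lam1],                   # OK -> degraded
        [mu, -(mu + lam2)],              # degraded -> OK (repair) or -> failed
    ])

    # Expected time spent in transient state j before absorption, starting in i,
    # is entry (i, j) of M = (-Q_tt)^{-1}.
    M = np.linalg.inv(-Q_tt)

    mttf = M[0].sum()                    # total expected time before failure
    expected_repairs = mu * M[0, 1]      # repairs complete at rate mu while degraded
    maintenance_cost = cost_per_repair * expected_repairs

    print(f"MTTF: {mttf:.1f} years")                                    # 32.0 years
    print(f"Expected repairs before failure: {expected_repairs:.1f}")   # 2.0
    print(f"Expected maintenance cost: {maintenance_cost:.0f}")         # 400

Changing the repair rate mu then shows the trade-off the abstract mentions: faster maintenance raises the maintenance cost per year of operation but pushes the MTTF up.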
Correct-by-construction reach-avoid control of partially observable linear stochastic systems
We study feedback controller synthesis for reach-avoid control of discrete-time, linear time-invariant (LTI) systems with Gaussian process and measurement noise. The problem is to compute a controller such that, with at least some required probability, the system reaches a desired goal state in finite time while avoiding unsafe states. Due to stochasticity and nonconvexity, this problem does not admit exact algorithmic or closed-form solutions in general. Our key contribution is a correct-by-construction controller synthesis scheme based on a finite-state abstraction of a Gaussian belief over the unmeasured state, obtained using a Kalman filter. We formalize this abstraction as a Markov decision process (MDP). To be robust against numerical imprecision in approximating transition probabilities, we use MDPs with intervals of transition probabilities. By construction, any policy on the abstraction can be refined into a piecewise linear feedback controller for the LTI system. We prove that the closed-loop LTI system under this controller satisfies the reach-avoid problem with at least the required probability. Numerical experiments show that our method is able to solve reach-avoid problems for systems with up to 6D state spaces, and with control input constraints that cannot be handled by methods such as rapidly-exploring random belief trees (RRBTs).
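As a small illustration of the belief the abstraction is built over, the following Python sketch runs a Kalman filter for a hypothetical 2D LTI system with Gaussian process and measurement noise and reports how much probability mass the resulting Gaussian belief places on an assumed goal interval; the system matrices, noise levels, input and goal region are illustrative assumptions, not the paper's benchmarks, and no abstraction or controller synthesis is performed here.

    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(0)

    A = np.array([[1.0, 0.1], [0.0, 1.0]])    # dynamics (position, velocity)
    B = np.array([[0.0], [0.1]])              # control input matrix
    C = np.array([[1.0, 0.0]])                # only the position is measured
    Q = 0.01 * np.eye(2)                      # process noise covariance
    R = np.array([[0.05]])                    # measurement noise covariance

    x = np.array([0.0, 0.0])                  # true (unmeasured) state
    mean, cov = np.zeros(2), np.eye(2)        # Gaussian belief N(mean, cov)

    for k in range(20):
        u = np.array([0.5])                   # a synthesized feedback law would go here
        # True system step (hidden from the controller).
        x = A @ x + (B @ u).ravel() + rng.multivariate_normal(np.zeros(2), Q)
        y = C @ x + rng.multivariate_normal(np.zeros(1), R)
        # Kalman filter: predict, then correct with the noisy measurement y.
        mean = A @ mean + (B @ u).ravel()
        cov = A @ cov @ A.T + Q
        K = cov @ C.T @ np.linalg.inv(C @ cov @ C.T + R)
        mean = mean + K @ (y - C @ mean)
        cov = (np.eye(2) - K @ C) @ cov

    # Probability the belief assigns to a hypothetical goal interval [1.5, 2.5]
    # for the position (the abstraction would partition this belief space).
    mu_pos, var_pos = mean[0], cov[0, 0]
    cdf = lambda z: 0.5 * (1 + erf((z - mu_pos) / sqrt(2 * var_pos)))
    print("belief mean:", mean.round(3))
    print("P(position in goal):", round(cdf(2.5) - cdf(1.5), 3))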
CTMCs with Imprecisely Timed Observations
Labeled continuous-time Markov chains (CTMCs) describe processes subject to random timing and partial observability. In applications such as runtime monitoring, we must incorporate past observations. The timing of these observations matters but may be uncertain. Thus, we consider a setting in which we are given a sequence of imprecisely timed labels called the evidence. The problem is to compute reachability probabilities, which we condition on this evidence. Our key contribution is a method that solves this problem by unfolding the CTMC states over all possible timings for the evidence. We formalize this unfolding as a Markov decision process (MDP) in which each timing for the evidence is reflected by a scheduler. This MDP has infinitely many states and actions in general, making a direct analysis infeasible. Thus, we abstract the continuous MDP into a finite interval MDP (iMDP) and develop an iterative refinement scheme to upper-bound conditional probabilities in the CTMC. We show the feasibility of our method on several numerical benchmarks and discuss key challenges to further enhance the performance.
Comment: Extended version (with appendix) of the paper accepted at TACAS 202
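The Python sketch below conveys the conditioning problem on a toy scale: a three-state labelled CTMC receives one observation whose timing is only known to lie in an interval, and the conditional probability of having failed by a later time is bracketed by sweeping the unknown timing over a grid. This grid sweep is a naive stand-in for the paper's iMDP abstraction and iterative refinement, and all states, rates and labels are hypothetical.

    import numpy as np
    from scipy.linalg import expm

    # States: 0 = idle (label A), 1 = busy (label B), 2 = failed (absorbing).
    Q = np.array([
        [-1.0,  1.0, 0.0],
        [ 0.5, -0.7, 0.2],
        [ 0.0,  0.0, 0.0],
    ])
    p0 = np.array([1.0, 0.0, 0.0])       # the chain starts in 'idle'
    T_query = 3.0                        # we ask for P(failed at time 3 | evidence)

    def conditional_failed(t_obs):
        """P(failed at T_query | label B observed at time t_obs).  With a single
        B-labelled state, the evidence pins the chain to state 1 at t_obs."""
        likelihood = (p0 @ expm(Q * t_obs))[1]     # P(evidence) for this timing
        assert likelihood > 0, "evidence impossible at this timing"
        return expm(Q * (T_query - t_obs))[1, 2]

    # Evidence: label B was seen at some unknown time in the interval [1.0, 2.0].
    grid = np.linspace(1.0, 2.0, 11)
    vals = [conditional_failed(t) for t in grid]
    print(f"conditional P(failed at t={T_query}) in [{min(vals):.3f}, {max(vals):.3f}]")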