
    From LTL to rLTL monitoring

    Runtime monitoring is commonly used to detect the violation of desired properties in safety-critical systems by observing run prefixes of the system. Bauer et al. introduced an influential framework for monitoring Linear Temporal Logic (LTL) properties, which is based on a three-valued semantics: the formula is already satisfied by the given prefix, it is already violated, or it is still undetermined, i.e., it can still be either satisfied or violated by some continuation. However, a wide range of formulas are not monitorable under this approach, meaning that every prefix is undetermined. In particular, Bauer et al. report that 44% of the formulas they consider in their experiments fall into this category. Recently, robust semantics for LTL were introduced to capture degrees of violation of universal properties. Here, we define robust semantics for run prefixes and show its potential in monitoring: every formula considered by Bauer et al. is monitorable under our approach. Furthermore, we show that properties expressed with the robust semantics can be monitored by deterministic automata.
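
    For reference, the three-valued semantics of Bauer et al. assigns a verdict to a finite prefix u and an LTL formula φ as follows (this is the standard LTL₃ definition, reproduced here only for orientation, not the robust prefix semantics introduced in the paper):

```latex
[u \models_3 \varphi] =
\begin{cases}
  \top & \text{if } u\sigma \models \varphi \text{ for every } \sigma \in \Sigma^{\omega},\\
  \bot & \text{if } u\sigma \not\models \varphi \text{ for every } \sigma \in \Sigma^{\omega},\\
  ?    & \text{otherwise.}
\end{cases}
```

    A formula is monitorable in this sense if every prefix can be extended to a prefix with a definitive verdict ⊤ or ⊥.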

    Runtime Verification Using a Temporal Description Logic Revisited

    Formulae of linear temporal logic (LTL) can be used to specify (wanted or unwanted) properties of a dynamical system. In model checking, the system’s behaviour is described by a transition system, and one needs to check whether all possible traces of this transition system satisfy the formula. In runtime verification, one observes the actual system behaviour, which at any point in time yields a finite prefix of a trace. The task is then to check whether all continuations of this prefix to a trace satisfy (violate) the formula. More precisely, one wants to construct a monitor, i.e., a finite automaton that receives the finite prefix as input and then gives the right answer based on the state currently reached. In this paper, we extend the known approaches to LTL runtime verification in two directions. First, instead of propositional LTL we use the more expressive temporal logic ALC-LTL, which can use axioms of the Description Logic (DL) ALC instead of propositional variables to describe properties of single states of the system. Second, instead of assuming that the observed system behaviour provides us with complete information about the states of the system, we assume that states are described in an incomplete way by ALC-knowledge bases. We show that monitors can effectively be constructed in this setting as well. The (double-exponential) size of the constructed monitors is in fact optimal, and not higher than in the propositional case. As an auxiliary result, we show how to construct Büchi automata for ALC-LTL formulae, which yields alternative proofs for the known upper bounds of deciding satisfiability in ALC-LTL.
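
    As a minimal illustration of such a monitor, the sketch below is a hand-built three-state Moore machine for the propositional formula a U b (not the ALC-LTL construction described in the paper); the verdict depends only on the state reached after reading the observed prefix.

```python
# A minimal three-valued monitor, hand-built for the propositional
# LTL formula "a U b" (illustrative only, not the ALC-LTL construction
# from the paper). Verdicts follow the LTL3 convention: TOP (already
# satisfied), BOTTOM (already violated), UNKNOWN (still undetermined).

from enum import Enum


class Verdict(Enum):
    TOP = "satisfied"
    BOTTOM = "violated"
    UNKNOWN = "undetermined"


class UntilMonitor:
    """Moore machine with three states; the verdict depends only on the
    state reached after reading the observed prefix."""

    def __init__(self):
        self.verdict = Verdict.UNKNOWN

    def step(self, a: bool, b: bool) -> Verdict:
        # Once a definitive verdict is reached it never changes.
        if self.verdict is Verdict.UNKNOWN:
            if b:
                self.verdict = Verdict.TOP      # "a U b" already satisfied
            elif not a:
                self.verdict = Verdict.BOTTOM   # neither a nor b: violated
        return self.verdict


if __name__ == "__main__":
    monitor = UntilMonitor()
    for a, b in [(True, False), (True, False), (False, True)]:
        print(monitor.step(a, b))   # UNKNOWN, UNKNOWN, TOP
```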

    Ain't No Stopping Us Monitoring Now

    Not all properties are monitorable. This is a well-known fact, and it means that there exist properties that cannot be fully verified at runtime. However, given a non-monitorable property, a monitor can still be synthesised, but it could end up in a state from which no verdict on the satisfaction (resp. violation) of the property will ever be concluded. For this reason, non-monitorable properties are usually discarded. In this paper, we carry out an in-depth analysis of monitorability and of how non-monitorable properties can still be partially verified. We present our theoretical results at a semantic level, without focusing on a specific formalism. Then, we show how our theory can be applied to achieve partial runtime verification of Linear Temporal Logic (LTL).
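
    A standard example of a non-monitorable property (not taken from the paper) is the liveness formula G F p, "infinitely often p": every finite prefix u can be extended both to a satisfying and to a violating trace, so the monitor's verdict stays undetermined forever.

```latex
u\,p^{\omega} \models \mathbf{G}\,\mathbf{F}\,p
\qquad\text{and}\qquad
u\,(\neg p)^{\omega} \not\models \mathbf{G}\,\mathbf{F}\,p
\qquad\text{for every finite prefix } u.
```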

    Model Checking Concurrent Programs with Nondeterminism and Randomization

    For concurrent probabilistic programs having process-level nondeterminism, it is often necessary to restrict the class of schedulers that resolve nondeterminism to obtain sound and precise model checking algorithms. In this paper, we introduce two classes of schedulers called view consistent and locally Markovian schedulers and consider the model checking problem of concurrent, probabilistic programs under these alternate semantics. Specifically, given a Büchi automaton Spec, a threshold x in [0,1], and a concurrent program P, the model checking problem asks if the measure of computations of P that satisfy Spec is at least x, under all view consistent (or locally Markovian) schedulers. We give precise complexity results for the model checking problem (for different classes of Büchi automata specifications) and contrast it with the complexity under the standard semantics that considers all schedulers.
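
    Spelled out (the notation below is introduced only for this summary), with C the class of view consistent or locally Markovian schedulers and μ^σ_P the probability measure that scheduler σ induces on the computations of P, the problem asks whether

```latex
\forall \sigma \in \mathcal{C}:\;
\mu_{P}^{\sigma}\bigl(\{\rho \mid \rho \in L(\mathit{Spec})\}\bigr) \;\geq\; x .
```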

    Synthesis Of Distributed Protocols From Scenarios And Specifications

    Distributed protocols, typically expressed as stateful agents communicating asynchronously over buffered communication channels, are difficult to design correctly. This difficulty has spurred decades of research in the area of automated model-checking algorithms. In turn, practical implementations of model-checking algorithms have enabled protocol developers to prove the correctness of such distributed protocols. However, model-checking techniques are only marginally useful during the actual development of such protocols, typically serving as a debugging aid once a reasonably complete version of the protocol has already been developed. The actual development process itself is often tedious and requires the designer to reason about complex interactions arising out of the concurrency and asynchrony inherent to such protocols. In this dissertation we describe program synthesis techniques which can be applied as an enabling technology to ease the task of developing such protocols. Specifically, the programmer provides a natural but incomplete description of the protocol in an intuitive representation, such as scenarios or an incomplete protocol. This description specifies the behavior of the protocol in the common cases. The programmer also specifies a set of high-level formal requirements that a correct protocol is expected to satisfy. These requirements can include safety requirements as well as liveness requirements in the form of Linear Temporal Logic (LTL) formulas. We describe techniques to synthesize a correct protocol which is consistent with the common-case behavior specified by the programmer and also satisfies the high-level safety and liveness requirements set forth by the programmer. We also describe techniques for program synthesis in general, which serve to enable the solutions to distributed protocol synthesis that this dissertation explores.
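
    The overall shape of such a synthesis procedure can be pictured as a guess-and-check loop; the sketch below uses hypothetical helper functions passed in as parameters and is only a schematic outline, not the dissertation's algorithm.

```python
# Schematic guess-and-check loop for completing a partial protocol.
# The helpers `enumerate_completions`, `consistent_with`, and `model_check`
# are hypothetical parameters standing in for an enumeration strategy,
# a scenario-conformance check, and an LTL model checker, respectively.

def synthesize(partial_protocol, scenarios, ltl_requirements,
               enumerate_completions, consistent_with, model_check):
    """Return a completion that reproduces every scenario and satisfies
    every LTL requirement, or None if the search space is exhausted."""
    for candidate in enumerate_completions(partial_protocol):
        # 1. The completion must agree with the common-case behavior
        #    described by the scenarios.
        if not all(consistent_with(candidate, s) for s in scenarios):
            continue
        # 2. It must also pass model checking of the safety and
        #    liveness requirements.
        if all(model_check(candidate, phi) for phi in ltl_requirements):
            return candidate
    return None
```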

    Automata-theoretic and bounded model checking for linear temporal logic

    In this work we study methods for model checking the temporal logic LTL. The focus is on the automata-theoretic approach to model checking and on bounded model checking. We begin by examining automata-theoretic methods to model check LTL safety properties. The model checking problem can be reduced to checking whether the language of a finite state automaton on finite words is empty. We describe an efficient algorithm for generating small finite state automata for so-called non-pathological safety properties. The presented implementation is the first tool able to decide whether a formula is non-pathological. The experimental results show that treating safety properties separately can benefit model checking at very little cost. In addition, we find supporting evidence for the view that minimising the automaton representing the property does not always lead to a small product state space: a deterministic property automaton can result in a smaller product state space even though it might have a larger number of states. Next we investigate modular analysis. Modular analysis is a state space reduction method for modular Petri nets. The method can be used to construct a reduced state space called the synchronisation graph. We devise an on-the-fly automata-theoretic method for model checking the behaviour of a modular Petri net from the synchronisation graph. The solution is based on reducing the model checking problem to an instance of verification with testers. We analyse the tester verification problem and present an efficient on-the-fly algorithm, the first complete solution to the tester verification problem, based on generalised nested depth-first search. We have also studied propositional encodings for bounded model checking LTL. A new simple linear-sized encoding is developed and experimentally evaluated. The implementation in the NuSMV2 model checker is competitive with previously presented encodings. We show how to generalise the LTL encoding to a more succinct logic: LTL with past operators. The generalised encoding compares favourably with previous encodings for LTL with past operators. Links between bounded model checking and the automata-theoretic approach are also explored.
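
    For orientation, the general shape of the standard bounded model checking reduction (not the specific linear-sized encoding developed in this work) asks whether the following propositional formula is satisfiable, i.e. whether the model M has a length-k witness for the negated property:

```latex
[\![ M, \neg\varphi ]\!]_k \;=\;
I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; [\![ \neg\varphi ]\!]_k
```

    where lasso-shaped witnesses needed for liveness properties are captured by additional loop constraints requiring T(s_k, s_l) for some l ≤ k, so that counterexamples of the form u v^ω can be represented with finitely many states.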

    Temporalised Description Logics for Monitoring Partially Observable Events

    Inevitably, it becomes more and more important to verify that the systems surrounding us have certain properties. This is indeed unavoidable for safety-critical systems such as power plants and intensive-care units. We use the term system in a broad sense: it may be man-made (e.g. a computer system) or natural (e.g. a patient in an intensive-care unit). Whereas in Model Checking it is assumed that one has complete knowledge about the functioning of the system, we consider an open-world scenario and assume that we can only observe the behaviour of the actual running system by sensors. Such an abstract sensor could sense e.g. the blood pressure of a patient or the air traffic observed by radar. The observed data are then preprocessed appropriately and stored in a fact base. Based on the data available in the fact base, situation-awareness tools are supposed to help the user detect certain situations that require intervention by an expert. Such situations could be that the heart rate of a patient is rather high while the blood pressure is low, or that a collision of two aeroplanes is about to happen. Moreover, the information in the fact base can be used by monitors to verify that the system has certain properties. It is not realistic, however, to assume that the sensors always yield a complete description of the current state of the observed system. Thus, it makes sense to assume that information that is not present in the fact base is unknown rather than false. Moreover, very often one has some knowledge about the functioning of the system. This background knowledge can be used to draw conclusions about the possible future behaviour of the system. Employing description logics (DLs) is one way to deal with these requirements. In this thesis, we tackle the sketched problem in three different contexts: (i) runtime verification using a temporalised DL, (ii) temporalised query entailment, and (iii) verification in DL-based action formalisms.
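
    To make the combination concrete, an illustrative ALC-LTL property (with hypothetical concept, role, and individual names) could state that whenever the observed patient has a high heart rate together with low blood pressure, an alarm is eventually raised:

```latex
\Box\Bigl(
  \bigl(\exists \mathsf{finding}.\mathsf{HighHeartRate} \sqcap
        \exists \mathsf{finding}.\mathsf{LowBloodPressure}\bigr)(\mathsf{patient1})
  \;\rightarrow\; \Diamond\,\mathsf{AlarmRaised}(\mathsf{patient1})
\Bigr)
```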