
    On the complexity of determinizing monitors

    We examine the determinization of monitors. We demonstrate that every monitor is equivalent to a deterministic one, which is at most doubly exponential in size with respect to the original monitor. When monitors are described as CCS-like processes, this doubly-exponential bound is optimal. When (deterministic) monitors are described as finite automata (as their LTS), they can be exponentially more succinct than their CCS process form.
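
    The underlying idea can be sketched with the classic subset construction. The sketch below assumes the monitor is given as a nondeterministic LTS over a finite alphabet; the function and parameter names are illustrative, and the paper's own construction works on CCS-like monitor descriptions, which is where the double-exponential bound arises.

        # Minimal sketch: subset construction on a monitor given as an LTS.
        # Each state of the result is a set of original states, so the output
        # has at most 2^n states for an n-state input.
        from itertools import chain

        def determinize(alphabet, delta, initial, verdict_states):
            """delta maps (state, action) to a set of successor states."""
            start = frozenset([initial])
            det_states, det_delta, worklist = {start}, {}, [start]
            while worklist:
                current = worklist.pop()
                for a in alphabet:
                    succ = frozenset(chain.from_iterable(
                        delta.get((q, a), ()) for q in current))
                    det_delta[(current, a)] = succ
                    if succ not in det_states:
                        det_states.add(succ)
                        worklist.append(succ)
            # a macro-state reaches a verdict if any of its members does
            det_verdicts = {s for s in det_states if s & verdict_states}
            return det_states, det_delta, start, det_verdicts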

    The Cost of Monitoring Alone

    We compare the succinctness of two monitoring systems for properties of infinite traces, namely parallel and regular monitors. Although a parallel monitor can be turned into an equivalent regular monitor, the cost of this transformation is a double-exponential blowup in the syntactic size of the monitors, and a triple-exponential blowup when the goal is a deterministic monitor. We show that these bounds are tight and that they also hold for translations between corresponding fragments of Hennessy-Milner logic with recursion over infinite traces.
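
    Stated as size bounds, with |m| denoting the syntactic size of a parallel monitor m, and reg(m) and det(m) standing (hypothetical names used only here) for an equivalent regular monitor and an equivalent deterministic regular monitor, the blowups above read:

        \[
          |\mathrm{reg}(m)| \in 2^{2^{\mathcal{O}(|m|)}}
          \qquad\text{and}\qquad
          |\mathrm{det}(m)| \in 2^{2^{2^{\mathcal{O}(|m|)}}},
        \]

    and both bounds are tight.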

    Monitoring for Silent Actions

    Silent actions are an essential mechanism for system modelling and specification. They are used to abstractly report the occurrence of computation steps without divulging their precise details, thereby enabling the description of important aspects such as the branching structure of a system. Yet, their use rarely features in specification logics used in runtime verification. We study monitorability aspects of a branching-time logic that employs silent actions, identifying which formulas are monitorable under a number of instrumentation setups. We also consider defective instrumentation setups that imprecisely report silent events, and establish monitorability results that tolerate these imperfections.
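
    As a deliberately simplified illustration (not the paper's instrumentation model; the event name "tau" and the default self-loop behaviour are assumptions made here), a monitor that tolerates unreported silent steps might look as follows:

        # Sketch: a monitor run over a trace that may or may not report
        # silent steps ("tau"). Silent steps with no explicit transition
        # leave the monitor state unchanged, so imprecise instrumentation
        # that drops some of them is tolerated.
        def monitor(trace, delta, state, verdicts):
            """delta maps (state, action) to a state; 'tau' entries are optional."""
            for action in trace:
                if action == "tau" and (state, "tau") not in delta:
                    continue  # unreported or irrelevant silent step: ignore it
                state = delta.get((state, action), state)
                if state in verdicts:
                    return verdicts[state]  # verdicts are irrevocable
            return "inconclusive"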

    The Best a Monitor Can Do

    Existing notions of monitorability for branching-time properties are fairly restrictive. This, in turn, impacts the ability to incorporate prior knowledge about the system under scrutiny, which itself corresponds to a branching-time property, into the runtime analysis. We propose a definition of optimal monitors that verify the best monitorable under- or over-approximation of a specification, regardless of its monitorability status. Optimal monitors can be obtained for arbitrary branching-time properties by synthesising a sound and complete monitor for their strongest monitorable consequence. We show that the strongest monitorable consequence of specifications expressed in Hennessy-Milner logic with recursion is itself expressible in this logic, and we present a procedure to find it. Our procedure enables prior knowledge to be optimally incorporated into runtime monitors.
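
    The central notion can be phrased as follows (a sketch consistent with the abstract; the paper's formal definition may differ in detail). A formula ψ is the strongest monitorable consequence of a specification φ when

        \[
          \varphi \Rightarrow \psi, \qquad \psi \ \text{is monitorable}, \qquad \text{and} \quad
          \forall \chi.\ \big(\chi \ \text{monitorable} \ \wedge\ \varphi \Rightarrow \chi\big)
          \ \text{implies}\ \psi \Rightarrow \chi .
        \]

    Synthesising a sound and complete monitor for ψ then yields an optimal monitor for φ.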

    Optimized Temporal Monitors for SystemC

    SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model can then be verified against the properties using runtime or formal verification techniques. In this paper we focus on the automated generation of runtime monitors from temporal properties, aiming to minimize runtime overhead rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
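
    Of the four issues above, alphabet minimization is perhaps the easiest to illustrate in isolation. The sketch below is not the paper's implementation (which generates C++ monitors for SystemC; all names here are illustrative): it merges letters that every monitor state treats identically, shrinking the transition table consulted at run time.

        # Sketch: group alphabet letters with identical transition profiles.
        # transitions maps (state, letter) to a successor state; a missing
        # entry is read as a self-loop.
        def merge_letters(transitions, states, letters):
            groups = {}
            for a in letters:
                profile = tuple(transitions.get((s, a), s) for s in sorted(states))
                groups.setdefault(profile, []).append(a)
            return list(groups.values())  # groups of interchangeable letters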

    A process calculus approach to correctness enforcement of PLCs

    We define a simple process calculus, based on Hennessy and Regan's Timed Process Language, for specifying networks of communicating programmable logic controllers (PLCs) enriched with monitors that enforce specification compliance. We define a synthesis algorithm that, given an uncorrupted PLC, returns a monitor enforcing the correctness of that PLC, even when it is injected with malware that may forge or drop actuator commands and inter-controller communications. We then strengthen the capabilities of our monitors by allowing the insertion of actions to mitigate malware activities. This yields deadlock-freedom monitoring: malware cannot drag monitored controllers into deadlock states.
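
    The two monitor capabilities described above, suppressing forged commands and inserting mitigating actions, can be sketched as follows (an illustrative sketch, not the paper's process calculus; all names are hypothetical):

        # Sketch: an enforcement monitor wrapping a PLC's output stream.
        # A forbidden command is suppressed; a withheld command (None) is
        # replaced by a safe default, which is what prevents deadlock.
        def enforce(plc_outputs, allowed, safe_default, delta, state):
            """allowed(state) -> permitted commands; delta(state, cmd) -> next state."""
            for cmd in plc_outputs:
                if cmd is None:
                    cmd = safe_default(state)   # insertion
                elif cmd not in allowed(state):
                    continue                    # suppression
                yield cmd
                state = delta(state, cmd)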

    Doctor of Philosophy

    A modern software system is a composition of parts that are themselves highly complex: operating systems, middleware, libraries, servers, and so on. In principle, compositionality of interfaces means that we can understand any given module independently of the internal workings of other parts. In practice, however, abstractions are leaky, and with every generation, modern software systems grow in complexity. Traditional ways of understanding failures, explaining anomalous executions, and analyzing performance are reaching their limits in the face of emergent behavior, unrepeatability, cross-component execution, software aging, and adversarial changes to the system at run time. Deterministic systems analysis has the potential to change the way we analyze and debug software systems. Recorded once, the execution of the system becomes an independent artifact, which can be analyzed offline. The availability of the complete system state, the guaranteed behavior of re-execution, and the absence of limitations on the run-time complexity of analysis collectively enable the deep, iterative, and automatic exploration of the dynamic properties of the system. This work creates a foundation for making deterministic replay a ubiquitous system analysis tool. It defines design and engineering principles for building fast and practical replay machines capable of capturing the complete execution of an entire operating system with an overhead of a few percent, on a realistic workload, and with minimal installation costs. To enable an intuitive interface for constructing replay analysis tools, this work implements a powerful virtual machine introspection layer that allows an analysis algorithm to be programmed against the state of the recorded system in the familiar terms of source-level variable and type names. To support performance analysis, the replay engine provides a faithful performance model of the original execution during replay.
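
    The record/replay principle described above can be illustrated in miniature (the dissertation records an entire operating system inside a virtual machine, not a Python function; this sketch only shows the idea of logging nondeterministic inputs once and feeding them back verbatim on replay):

        import random

        def run(workload, nondet_source, log=None, replay=None):
            """Record nondeterministic inputs, or feed back a previous log."""
            outputs, i = [], 0
            for step in workload:
                if replay is not None:
                    value = replay[i]; i += 1      # replay: reuse the logged value
                else:
                    value = nondet_source()        # record: draw a fresh value
                    if log is not None:
                        log.append(value)
                outputs.append(step(value))
            return outputs

        log = []
        workload = [lambda v: v * 2, lambda v: v + 1]
        first = run(workload, random.random, log=log)
        second = run(workload, None, replay=log)   # deterministic re-execution
        assert first == second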

    Foundations for Safety-Critical on-Demand Medical Systems

    In current medical practice, therapy is delivered in critical care environments (e.g., the ICU) by clinicians who manually coordinate sets of medical devices: the clinicians monitor patient vital signs and then reconfigure devices (e.g., infusion pumps) as needed. Unfortunately, the current state of practice is both burdensome on clinicians and error prone. Recently, clinicians have been speculating whether medical devices supporting "plug & play interoperability" would make it easier to automate current medical workflows and thereby reduce medical errors, reduce costs, and reduce the burden on overworked clinicians. This type of plug & play interoperability would allow clinicians to attach devices to a local network and then run software applications that create a new medical system "on-demand", automating clinical workflows by coordinating those devices via the network. Plug & play devices would let clinicians build new medical systems compositionally. Unfortunately, safety is not, in general, a compositional property. For example, two independently "safe" devices may interact in unsafe ways; indeed, even the definition of "safe" may differ between two device types. In this dissertation we propose a framework and define conditions that permit reasoning about the safety of plug & play medical systems. The framework includes a logical formalism that permits formal reasoning about the safety of many device combinations at once, as well as a platform that actively prevents unintended timing interactions between devices or applications sharing a resource such as a network or CPU. We describe the various pieces of the framework, report some experimental results, and show how the pieces work together to enable the safety assessment of plug & play medical systems via two case studies.

    A Foundation for Runtime Monitoring

    Runtime Verification is a lightweight technique that complements other verification methods in an effort to ensure software correctness. The technique poses novel questions to software engineers: it is not easy to identify which specifications are amenable to runtime monitoring, nor is it clear which monitors effect the required runtime analysis correctly. This exposition targets a foundational understanding of these questions. In particular, it considers an expressive specification logic (a syntactic variant of the modal µ-calculus) that is agnostic of the verification method used, together with an elemental framework providing an operational semantics for the runtime analysis performed by monitors. The correspondence between property satisfactions in the logic, on the one hand, and the verdicts reached by the monitors performing the analysis, on the other, is a central theme of the study. Such a correspondence underpins the concept of monitorability, used to identify the subsets of the logic that can be adequately monitored for via runtime verification. Another theme of the study is understanding what should be expected of a monitor in order for the verification process to be correct. We show how the monitor framework considered can serve as a basis on which various notions of monitor correctness may be defined and investigated.
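
    One standard way to phrase the correspondence mentioned above (a sketch; the paper's exact definitions may differ): writing acc(m, p) and rej(m, p) for the monitor m reaching an acceptance or a rejection verdict while observing the system p, a monitor m is sound for a formula φ when

        \[
          \mathrm{acc}(m, p) \implies p \models \varphi
          \qquad\text{and}\qquad
          \mathrm{rej}(m, p) \implies p \not\models \varphi ,
        \]

    and complete when the converse implications hold; monitorability of φ then asks for a monitor that is sound and, in a suitable sense, complete.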