
    Time-triggered Runtime Verification of Real-time Embedded Systems

    In safety-critical real-time embedded systems, correctness is of primary concern, as even small transient errors may lead to catastrophic consequences. Due to the limitations of well-established methods such as verification and testing, runtime verification has recently emerged as a complementary approach, where a monitor inspects the system to evaluate the specifications at run time. The goal of runtime verification is to monitor the behavior of a system to check its conformance to a set of desirable logical properties. The literature on runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked when a significant event occurs (e.g., a change in the value of some variable used by the properties). At invocation, the monitor evaluates the set of properties of the system that are affected by the occurrence of the event. This type of monitor invocation has two main runtime characteristics: (1) jittery runtime overhead, and (2) unpredictable monitor invocations. These characteristics result in transient overload situations and over-provisioning of resources in real-time embedded systems and hence may result in catastrophic outcomes in safety-critical systems. To circumvent these defects, this dissertation introduces a novel time-triggered monitoring approach, where the monitor takes samples from the system at a constant frequency in order to analyze the system's health. We describe the formal semantics of time-triggered monitoring and discuss how to optimize the sampling period using minimum auxiliary memory and path prediction techniques. Experiments on real-time embedded systems show that our approach introduces bounded overhead, predictable monitoring, and less over-provisioning, and that it effectively reduces the involvement of the monitor at run time while using negligible auxiliary memory. We further extend our time-triggered monitoring approach to component-based multi-core embedded systems by establishing an optimization technique that determines the invocation frequency of the monitors and the mapping of components to cores to minimize monitoring overhead. Lastly, we present RiTHM, a fully automated, open-source tool that provides time-triggered runtime verification specifically for real-time embedded systems developed in C.
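
    The core mechanism described above, sampling the program state at a fixed period instead of reacting to each event, can be illustrated with a minimal C sketch. The POSIX timing calls are standard, but all names (g_temperature, SAMPLE_PERIOD_NS, PROPERTY_MAX) are hypothetical; this is not RiTHM's implementation.

        /* Minimal sketch of a time-triggered monitor: wake up at a fixed
         * period, sample the program state, and evaluate one property.
         * All names are hypothetical; this is not the RiTHM API. */
        #include <stdio.h>
        #include <time.h>

        #define SAMPLE_PERIOD_NS 1000000L   /* 1 ms sampling period */
        #define PROPERTY_MAX     100        /* invariant: g_temperature <= 100 */

        volatile int g_temperature = 25;    /* state written by the monitored program */

        static void monitor_step(void)
        {
            int sample = g_temperature;     /* take one sample of the system state */
            if (sample > PROPERTY_MAX)
                fprintf(stderr, "property violated: temperature=%d\n", sample);
        }

        int main(void)
        {
            struct timespec next;
            clock_gettime(CLOCK_MONOTONIC, &next);
            for (;;) {
                monitor_step();
                /* advance the wake-up time by exactly one period */
                next.tv_nsec += SAMPLE_PERIOD_NS;
                if (next.tv_nsec >= 1000000000L) {
                    next.tv_nsec -= 1000000000L;
                    next.tv_sec += 1;
                }
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            }
        }

    Because the monitor runs at a constant frequency, its overhead is bounded and its invocations are predictable, which is the property the dissertation exploits.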

    Methods for Reducing Monitoring Overhead in Runtime Verification

    Runtime verification is a lightweight technique that serves to complement existing approaches, such as formal methods and testing, to ensure system correctness. In runtime verification, monitors are synthesized to check a system at run time against a set of properties the system is expected to satisfy. Runtime verification may be used to determine software faults before and after system deployment. The monitor(s) can be synthesized to notify, steer, and/or perform system recovery from detected software faults at run time. The research and proposed methods presented in this thesis aim to reduce the monitoring overhead of runtime verification in terms of memory and execution time by leveraging time-triggered techniques for monitoring system events. Traditionally, runtime verification frameworks employ event-triggered monitors, where the invocation of the monitor occurs after every system event. Because system events can be sporadic or bursty in nature, event-triggered monitoring behaviour is difficult to predict. Time-triggered monitors, on the other hand, periodically preempt the system and process its events, making monitoring behaviour predictable. However, accurate reconstruction of the software system's state is not guaranteed (i.e., state changes/events may be missed between samples). The first part of this thesis analyzes three heuristics that efficiently solve the NP-complete problem of minimizing the amount of memory required to store system state changes to guarantee accurate state reconstruction. The experimental results demonstrate that adopting near-optimal algorithms does not greatly change the memory consumption and execution time of monitored programs; hence, NP-completeness is likely not an obstacle for time-triggered runtime verification. The second part of this thesis introduces a novel runtime verification technique called hybrid runtime verification, which enables the monitor to toggle between event- and time-triggered modes of operation. The aim of this approach is to reduce the overall runtime monitoring overhead with respect to execution time. Minimizing the execution-time overhead by employing hybrid runtime verification is not in NP. An integer linear programming heuristic is formulated to determine near-optimal hybrid monitoring schemes. Experimental results show that the heuristic typically selects monitoring schemes that are equal to or better than naively selecting exclusively one mode of operation for monitoring.
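
    As a rough illustration of the mode-toggling idea (not the thesis' actual algorithm; names and thresholds below are hypothetical), a hybrid monitor can process events one at a time while the event rate stays low and fall back to periodic batch processing when events burst:

        /* Minimal sketch of hybrid monitoring: per-event checking while the
         * rate is low, buffered periodic checking during bursts. */
        #include <stdio.h>

        #define BURST_THRESHOLD 8           /* events per window before switching modes */
        #define BUFFER_SIZE     64

        enum mode { EVENT_TRIGGERED, TIME_TRIGGERED };

        static int buffer[BUFFER_SIZE];
        static int buffered = 0;
        static enum mode current = EVENT_TRIGGERED;

        static void check_property(int event)
        {
            if (event < 0)
                fprintf(stderr, "property violated by event %d\n", event);
        }

        /* Called by instrumentation whenever the program emits an event. */
        void on_event(int event, int events_in_window)
        {
            if (events_in_window > BURST_THRESHOLD)
                current = TIME_TRIGGERED;           /* bursty: stop preempting per event */

            if (current == EVENT_TRIGGERED)
                check_property(event);              /* immediate, per-event evaluation */
            else if (buffered < BUFFER_SIZE)
                buffer[buffered++] = event;         /* defer until the next monitor tick */
        }

        /* Called periodically by a timer while in time-triggered mode. */
        void on_timer_tick(int events_in_window)
        {
            for (int i = 0; i < buffered; i++)
                check_property(buffer[i]);
            buffered = 0;
            if (events_in_window <= BURST_THRESHOLD)
                current = EVENT_TRIGGERED;          /* quiet again: resume per-event mode */
        }

        int main(void)
        {
            on_event(3, 2);          /* low rate: checked immediately */
            for (int i = 0; i < 10; i++)
                on_event(-i, 12);    /* burst: events are buffered */
            on_timer_tick(1);        /* periodic tick drains the buffer */
            return 0;
        }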

    Data flow analysis from UML/MARTE models based on binary traces

    The design of increasingly complex embedded systems requires powerful solutions from the very beginning of the design process. Model-Based Design (MBD) and early simulation have proven to be capable technologies for performing initial design-space analysis to optimize system design. Traditional MBD methods and tools typically rely on fixed elements, which makes it difficult to evaluate different platform configurations, communication alternatives, or models of computation. Addressing these challenges requires flexible design technologies able to support, from a high-level abstract model, full design-space exploration, including system specification, binary generation, and performance evaluation. In this context, this paper proposes a UML/MARTE-based approach able to address the challenges mentioned above by improving design flexibility and evaluation capabilities, including automatic code generation, execution trace collection, and trace analysis from the initial UML models. The approach focuses on the definition and analysis of the paths data follow through the different application components, as a way to understand the behavior of the different design solutions. This work has been funded by the EU and the Spanish MICINN/AEI through the ECSEL Comp4Drones and the TEC2017-86722-C4-3-R PLATINO projects.
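
    The trace-analysis step, following each data item through the components it visits, can be sketched with a small C program. The binary record layout, file name, and token identifier below are assumptions made for illustration, not the paper's actual trace format:

        /* Minimal sketch of data-path reconstruction from a binary trace:
         * each record says "token T was observed at component C at time TS";
         * filtering by token yields the path that datum followed. */
        #include <stdio.h>

        struct trace_record {
            unsigned token;               /* identifier of the data item */
            unsigned component;           /* identifier of the application component */
            unsigned long long timestamp; /* observation time */
        };

        int main(void)
        {
            FILE *f = fopen("trace.bin", "rb");
            if (!f) { perror("trace.bin"); return 1; }

            struct trace_record rec;
            unsigned wanted_token = 42;   /* follow one data item through the system */
            printf("path of token %u:", wanted_token);
            while (fread(&rec, sizeof rec, 1, f) == 1) {
                if (rec.token == wanted_token)
                    printf(" -> component %u @ %llu", rec.component, rec.timestamp);
            }
            printf("\n");
            fclose(f);
            return 0;
        }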

    From Simulation to Runtime Verification and Back: Connecting Single-Run Verification Techniques

    Modern safety-critical systems, such as aircraft and spacecraft, crucially depend on rigorous verification, from design time to runtime. Simulation is a highly developed, time-honored design-time verification technique, whereas runtime verification is a much younger outgrowth of modern complex systems that both enable embedding analysis on-board and require mission-time verification, e.g., for flight certification. While the attributes of simulation are well-defined, the vocabulary of runtime verification is still being formed; both are active research areas needed to ensure safety and security. This invited paper explores the connections and differences between simulation and runtime verification and poses open research questions regarding how each might be used to advance past bottlenecks in the other. We unify their vocabulary, list their commonalities and contrasts, and examine how their artifacts may be connected to push the state of the art of what we can (safely) fly.

    Runtime Verification with Controllable Time Predictability and Memory Utilization

    The goal of runtime verification is to inspect the well-being of a system by employing a monitor during its execution. Such monitoring imposes cost in terms of resource utilization. Memory usage and predictability of monitor invocations are the key indicators of the quality of a monitoring solution, especially in the context of embedded systems. In this work, we propose a novel control-theoretic approach for coordinating time predictability and memory utilization in runtime monitoring of real-time embedded systems. In particular, we design a PID controller and four fuzzy controllers with different optimization control objectives. Our approach controls the frequency of monitor invocations by incorporating a bounded memory buffer that stores events which need to be monitored. The controllers attempt to improve time predictability, and maximize memory utilization, while ensuring the soundness of the monitor. Unlike existing approaches based on static analysis, our approach is scalable and well-suited for reactive systems that are required to react to stimuli from the environment in a timely fashion. Our experiments using two case studies (a laser beam stabilizer for aircraft tracking, and a Bluetooth mobile payment system) demonstrate the advantages of using controllers to achieve low variation in the frequency of monitor invocations, while maintaining maximum memory utilization in highly non-linear environments. In addition to this problem, the thesis presents a brief overview of our preceding work on runtime verification.
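
    To make the control loop concrete, the sketch below shows a PID controller in C that adjusts the monitor's sampling period from the occupancy of a bounded event buffer; the gains, setpoint, and bounds are illustrative values, not the ones designed in this work:

        /* Minimal PID sketch: drive buffer utilization toward a setpoint by
         * shortening the sampling period when the buffer fills and lengthening
         * it when the buffer drains. */
        #include <stdio.h>

        #define BUFFER_CAPACITY 256
        #define SETPOINT        0.80    /* target buffer utilization (80%) */

        static double kp = 50.0, ki = 5.0, kd = 10.0;   /* illustrative gains */
        static double integral = 0.0, prev_error = 0.0;

        /* Returns the next sampling period in microseconds. */
        double next_sampling_period(int buffered_events, double period_us)
        {
            double utilization = (double)buffered_events / BUFFER_CAPACITY;
            double error = utilization - SETPOINT;      /* >0: buffer too full */
            integral += error;
            double derivative = error - prev_error;
            prev_error = error;

            /* Positive error shortens the period so the monitor runs more often. */
            double adjustment = kp * error + ki * integral + kd * derivative;
            period_us -= adjustment;
            if (period_us < 100.0)    period_us = 100.0;     /* lower bound */
            if (period_us > 100000.0) period_us = 100000.0;  /* upper bound */
            return period_us;
        }

        int main(void)
        {
            double period = 10000.0;    /* start at 10 ms */
            int occupancy[] = { 240, 220, 200, 180, 160 };  /* simulated buffer readings */
            for (int i = 0; i < 5; i++) {
                period = next_sampling_period(occupancy[i], period);
                printf("occupancy=%d -> period=%.1f us\n", occupancy[i], period);
            }
            return 0;
        }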

    A Taxonomy for Classifying Runtime Verification Tools

    Over the last 15 years Runtime Verification (RV) has grown into a diverse and active field, which has stimulated the development of numerous theoretical frameworks and tools. Many of the tools are at first sight very different and challenging to compare. Yet, there are similarities. In this work, we classify RV tools within a high-level taxonomy of concepts. We first present this taxonomy and discuss the different dimensions. Then, we survey RV tools and classify them according to the taxonomy. This paper constitutes a snapshot of the current state of the art and enables a comparison of existing tools.

    RitHM: A Modular Software Framework for Runtime Monitoring Supporting Complete and Lossy Traces

    Runtime verification (RV) is an effective and automated method for specification-based offline testing as well as online monitoring of complex real-world systems. Firstly, a software framework for RV needs to exhibit certain design features to support usability, modifiability, and efficiency. While usability and modifiability are important for providing support for expressive logical formalisms, efficiency is required to reduce the extra overhead at run time. Secondly, most existing techniques assume the existence of a complete execution trace for RV. However, real-world systems often produce incomplete execution traces due to reasons such as network issues, logging failures, etc. A few verification techniques have recently emerged for performing verification of incomplete execution traces. While some of these techniques sacrifice soundness, others are too restrictive in their tolerance for incompleteness. To address the first problem, we introduce RitHM, a comprehensive framework that enables the development and integration of efficient verification techniques. RitHM's design takes into account various state-of-the-art techniques that have been developed to optimize RV with respect to the efficiency of monitors and the expressivity of logical formalisms. RitHM's design supports modifiability by allowing the reuse of efficient monitoring algorithms in the form of plugins, which can utilize heterogeneous back-ends. RitHM also supports extensions of logical formalisms through logic plugins, and it facilitates interoperability between implementations of monitoring algorithms, which allows different efficient algorithms to be used for monitoring different sub-parts of a specification. We evaluate RitHM's architecture, and those of a few other tools, using the Architecture Tradeoff Analysis Method (ATAM). We also report empirical results in which RitHM is used for monitoring real-world systems. The results underscore the importance of RitHM's various design features. To address the second problem, we identify a fragment of LTL specifications that can be soundly monitored in the presence of transient loss events in an execution trace. We present an offline algorithm that identifies whether an LTL formula is monitorable in the presence of a transient loss of events and constructs a loss-tolerant monitor depending on the monitorability of the formula. Our experimental results demonstrate that our method increases the applicability of RV for monitoring various real-world applications that produce lossy traces. The extra overhead caused by our constructed monitors is minimal, as demonstrated by the application of our method to commonly used patterns of LTL formulas.
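
    The plugin idea, one common interface that different monitoring algorithms implement so the framework can dispatch trace events to whichever plugin handles a given sub-formula, can be sketched in C as follows; the interface and the example invariant checker are illustrative and are not RitHM's actual plugin API:

        /* Minimal sketch of a plugin-style monitor interface. */
        #include <stdio.h>

        enum verdict { V_TRUE, V_FALSE, V_UNKNOWN };

        struct monitor_plugin {
            const char  *name;
            void         (*init)(void);
            void         (*step)(int event);     /* consume one trace event */
            enum verdict (*verdict)(void);       /* current three-valued verdict */
        };

        /* Example plugin: "every event must be non-negative" (an invariant checker). */
        static enum verdict inv_state = V_UNKNOWN;
        static void inv_init(void)            { inv_state = V_UNKNOWN; }
        static void inv_step(int event)       { if (event < 0) inv_state = V_FALSE; }
        static enum verdict inv_verdict(void) { return inv_state; }

        static struct monitor_plugin invariant_plugin = {
            "non-negative invariant", inv_init, inv_step, inv_verdict
        };

        int main(void)
        {
            struct monitor_plugin *m = &invariant_plugin;
            int trace[] = { 3, 7, -2, 5 };
            m->init();
            for (int i = 0; i < 4; i++)
                m->step(trace[i]);               /* framework feeds events to the plugin */
            printf("%s: verdict=%d\n", m->name, m->verdict());
            return 0;
        }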

    Quantitative and Approximate Monitoring

    In runtime verification, a monitor watches a trace of a system and, if possible, decides after observing each finite prefix whether or not the unknown infinite trace satisfies a given specification. We generalize the theory of runtime verification to monitors that attempt to estimate numerical values of quantitative trace properties (instead of attempting to conclude boolean values of trace specifications), such as maximal or average response time along a trace. Quantitative monitors are approximate: with every finite prefix, they can improve their estimate of the infinite trace's unknown property value. Consequently, quantitative monitors can be compared with regard to a precision-cost trade-off: better approximations of the property value require more monitor resources, such as states (in the case of finite-state monitors) or registers, and additional resources yield better approximations. We introduce a formal framework for quantitative and approximate monitoring, show how it conservatively generalizes the classical boolean setting for monitoring, and give several precision-cost trade-offs for monitors. For example, we prove that there are quantitative properties for which every additional register improves monitoring precision.
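
    A toy C sketch of such a quantitative monitor, keeping two registers and refining its estimates of the maximum and average response time after every finite prefix (the event encoding is an assumption made for illustration):

        /* Minimal sketch of a register-based quantitative monitor. */
        #include <stdio.h>

        static double max_response = 0.0;   /* register 1: running maximum */
        static double sum_response = 0.0;   /* register 2: running sum */
        static unsigned long count = 0;

        /* Called once per completed request with its measured response time. */
        void observe(double response_time)
        {
            if (response_time > max_response)
                max_response = response_time;
            sum_response += response_time;
            count++;
            /* After each finite prefix the monitor refines its estimates. */
            printf("after %lu events: max=%.2f avg=%.2f\n",
                   count, max_response, sum_response / count);
        }

        int main(void)
        {
            double trace[] = { 1.2, 0.8, 3.5, 2.0 };
            for (int i = 0; i < 4; i++)
                observe(trace[i]);
            return 0;
        }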

    Model-Driven Trace Diagnostics for Pattern-based Temporal Specifications

    Offline trace checking tools check whether a specification holds on a log of events recorded at run time; they yield a verification verdict (typically a boolean value) when the checking process ends. When the verdict is false, a software engineer needs to diagnose the property violations found in the trace in order to understand their cause and, if needed, decide on corrective actions to be performed on the system. However, a boolean verdict may not be informative enough to perform trace diagnostics, since it does not provide any useful information about the cause of the violation and because a property can be violated for multiple reasons. The goal of this paper is to provide a practical and scalable solution to the trace diagnostics problem in the setting of model-driven trace checking of temporal properties expressed in TemPsy, a pattern-based specification language. The main contributions of the paper are: a model-driven approach for trace diagnostics of pattern-based temporal properties expressed in TemPsy, which relies on the evaluation of OCL queries on an instance of a trace meta-model; the implementation of this trace diagnostics procedure in the TemPsy-Report tool; and the evaluation of the scalability of TemPsy-Report when used for the diagnostics of violations of real properties derived from a case study of our industrial partner. The results show that TemPsy-Report is able to collect diagnostic information from large traces (with one million events) in less than ten seconds; TemPsy-Report scales linearly with respect to the length of the trace and maintains approximately constant performance as the number of violations increases.
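
    The difference between a bare boolean verdict and diagnostic output can be illustrated with a small C checker for one temporal pattern, "every request is eventually followed by a response", which reports the position of each unanswered request instead of a single false verdict; the event encoding is assumed for illustration and is unrelated to TemPsy:

        /* Minimal sketch of trace diagnostics for a response pattern. */
        #include <stdio.h>

        enum event { REQUEST, RESPONSE, OTHER };

        int main(void)
        {
            enum event trace[] = { REQUEST, RESPONSE, OTHER, REQUEST, OTHER };
            int n = sizeof trace / sizeof trace[0];

            int violations = 0;
            for (int i = 0; i < n; i++) {
                if (trace[i] != REQUEST)
                    continue;
                int answered = 0;
                for (int j = i + 1; j < n; j++)
                    if (trace[j] == RESPONSE) { answered = 1; break; }
                if (!answered) {
                    /* Diagnostic information: where the violation occurs. */
                    printf("violation: request at position %d has no later response\n", i);
                    violations++;
                }
            }
            printf("%d violation(s) found in a trace of %d events\n", violations, n);
            return 0;
        }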