    COST Action IC 1402 ArVI: Runtime Verification Beyond Monitoring -- Activity Report of Working Group 1

    This report presents the activities of the first working group of the COST Action ArVI, Runtime Verification beyond Monitoring. The report aims to provide an overview of the core aspects of Runtime Verification, the field of research dedicated to the analysis of system executions. It is often seen as a discipline that studies how a system run satisfies or violates correctness properties. The report presents a taxonomy of Runtime Verification (RV) and the terminology associated with the main concepts of the field. It also develops the concept of instrumentation, the various ways to instrument systems, and the fundamental role of instrumentation in designing an RV framework. We also discuss how RV interacts with other verification techniques such as model checking, deductive verification, model learning, testing, and runtime assertion checking. Finally, we propose challenges in monitoring quantitative and statistical data beyond detecting property violations.
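
    As a purely illustrative companion to the notions of execution analysis and property monitoring discussed in this report, the sketch below shows a minimal runtime monitor in Python that checks a made-up safety property over a finite trace; the event names, the property, and the three-step deadline are assumptions for the example, not taken from the report.

```python
# Minimal runtime-monitoring sketch (illustrative only): check the invented
# safety property "after a 'fault' event, an 'alarm' event must occur within
# 3 steps" over a finite, recorded execution trace.

def monitor(trace, deadline=3):
    pending = None  # index of an unanswered 'fault', if any
    for i, event in enumerate(trace):
        if event == "fault" and pending is None:
            pending = i
        elif event == "alarm":
            pending = None  # the pending fault has been answered
        if pending is not None and i - pending >= deadline:
            return f"violation at step {i}: no alarm within {deadline} steps of the fault"
    return "no violation observed on this trace"

print(monitor(["ok", "fault", "ok", "alarm", "ok"]))      # property satisfied
print(monitor(["ok", "fault", "ok", "ok", "ok", "ok"]))   # property violated
```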

    Developing Multi-Agent Systems with Degrees of Neuro-Symbolic Integration [A Position Paper]

    In this short position paper, we highlight our ongoing work on verifiable heterogeneous multi-agent systems and, in particular, the complex (and often non-functional) issues that impact the choice of structure within each agent.

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but lack expertise in formal verification or modelling.
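
    To make properties of this kind concrete, the following is a small, self-contained sketch of bounded reachability analysis on a toy discrete-time Markov chain, the sort of computation a probabilistic model checker automates; the states, transition probabilities, step bound and threshold are invented for illustration and are not taken from the report.

```python
# Toy quantitative-verification sketch (illustrative numbers only): estimate the
# probability of reaching a failure state within a bounded number of steps in a
# small discrete-time Markov chain, then check a threshold property on it.

# States: 0 = both sensors working, 1 = one sensor failed, 2 = both failed
P = [
    [0.98, 0.019, 0.001],  # transitions from state 0
    [0.00, 0.970, 0.030],  # transitions from state 1
    [0.00, 0.000, 1.000],  # transitions from state 2 (absorbing)
]

def prob_reach_failure(steps, start=0, failure=2):
    """Probability of having reached the absorbing failure state within `steps` transitions."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return dist[failure]

# Check a property of the form "P(both sensors failed within 10 steps) < 0.05"
p = prob_reach_failure(10)
print(f"P(failure within 10 steps) = {p:.4f}; property holds: {p < 0.05}")
```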

    Formal modelling of data integration systems security policies

    Data Integration Systems (DIS) are concerned with integrating data from multiple data sources to resolve user queries. Typically, organisations providing data sources specify security policies that impose stringent requirements on the collection, processing, and disclosure of personal and sensitive data. If the security policies are not correctly enforced by the integration component of a DIS, the data is exposed to data leakage threats, e.g. unauthorised disclosure or secondary use of the data. SecureDIS is a framework that helps system designers to mitigate data leakage threats during the early phases of DIS development. SecureDIS provides designers with a set of informal guidelines written in natural language to specify and enforce security policies that capture confidentiality, privacy, and trust properties. In this paper, we apply a formal approach to model a DIS with the SecureDIS security policies and verify the correctness and consistency of the model. The model can be used as a basis to perform security policy analysis or to automatically generate Java code that enforces those policies within a DIS.
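
    As a hypothetical illustration of the kind of policy enforcement the SecureDIS guidelines describe informally, the sketch below filters query results in the integration component so that only attributes a role is authorised to see are disclosed; the policy structure, roles and attribute names are invented and do not come from the paper, which works with a formal model rather than code.

```python
# Hypothetical enforcement sketch: the integration component filters a merged
# record so that only attributes permitted by the (invented) disclosure policy
# reach the requester.

POLICY = {
    # (role, attribute) pairs the policy authorises for disclosure
    ("analyst", "age"),
    ("analyst", "diagnosis"),
    ("admin", "age"),
    ("admin", "diagnosis"),
    ("admin", "patient_name"),
}

def enforce(role, record):
    """Return only the fields of `record` that `role` may see; drop the rest."""
    return {attr: value for attr, value in record.items() if (role, attr) in POLICY}

record = {"patient_name": "J. Doe", "age": 54, "diagnosis": "hypertension"}
print(enforce("analyst", record))  # sensitive identifier withheld
print(enforce("admin", record))    # full record disclosed
```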

    From Simulation to Runtime Verification and Back: Connecting Single-Run Verification Techniques

    Modern safety-critical systems, such as aircraft and spacecraft, crucially depend on rigorous verification, from design time to runtime. Simulation is a highly developed, time-honored design-time verification technique, whereas runtime verification is a much younger outgrowth of modern complex systems, which both enable embedding analysis on board and require mission-time verification, e.g., for flight certification. While the attributes of simulation are well-defined, the vocabulary of runtime verification is still being formed; both are active research areas needed to ensure safety and security. This invited paper explores the connections and differences between simulation and runtime verification and poses open research questions regarding how each might be used to advance past bottlenecks in the other. We unify their vocabulary, list their commonalities and contrasts, and examine how their artifacts may be connected to push the state of the art of what we can (safely) fly.
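
    One way to picture the connection explored here is that the same property monitor can consume a trace produced by a design-time simulation and, unchanged, an event stream observed at runtime. The sketch below illustrates this with a toy simulator and an invented altitude property; none of the names or numbers come from the paper.

```python
# Illustrative sketch: one monitor, two sources of single runs (a design-time
# simulation and a runtime event stream). Property and simulator are invented.

import random

def altitude_monitor(stream, floor=100.0):
    """Flag the first step at which the observed altitude drops below `floor`."""
    for step, altitude in enumerate(stream):
        if altitude < floor:
            return f"violation at step {step}: altitude {altitude:.1f} < {floor}"
    return "property held on the observed run"

def simulate(steps=50, start=500.0, seed=0):
    """Toy design-time simulation yielding one execution (a noisy descent profile)."""
    rng = random.Random(seed)
    altitude = start
    for _ in range(steps):
        altitude -= rng.uniform(0.0, 15.0)
        yield altitude

# Design time: check the property against one simulated run ...
print(altitude_monitor(simulate()))
# ... at runtime, the identical monitor would instead consume sensor readings:
print(altitude_monitor(iter([500.0, 480.2, 95.0])))
```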

    Verifiable self-certifying autonomous systems

    Autonomous systems are increasingly being used in safety- and mission-critical domains, including aviation, manufacturing, healthcare and the automotive industry. Systems for such domains are often verified with respect to essential requirements set by a regulator, as part of a process called certification. In principle, autonomous systems can be deployed if they can be certified for use. However, certification is especially challenging because the condition of both the system and its environment will inevitably change, limiting the effective use of the system. In this paper we discuss the technological and regulatory background for such systems, and introduce an architectural framework that supports verifiably correct dynamic self-certification by the system, potentially allowing deployed systems to operate more safely and effectively.
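
    As a rough, hypothetical sketch of dynamic self-certification in the spirit of such an architecture, the snippet below re-evaluates a set of certification requirements against the system's current state and environment and would trigger a fall-back to a restricted mode when any of them fails; the requirement names and thresholds are invented and are not part of the framework described in the paper.

```python
# Hypothetical self-certification check: re-evaluate (invented) certification
# requirements against the current system/environment state and report failures.

REQUIREMENTS = {
    "sensor_redundancy": lambda state: state["working_sensors"] >= 2,
    "battery_margin":    lambda state: state["battery_pct"] >= 30,
    "within_geofence":   lambda state: state["inside_geofence"],
}

def self_certify(state):
    """Return the names of requirements that do not currently hold."""
    return [name for name, holds in REQUIREMENTS.items() if not holds(state)]

state = {"working_sensors": 2, "battery_pct": 24, "inside_geofence": True}
failed = self_certify(state)
if failed:
    print(f"certification no longer holds ({', '.join(failed)}): entering restricted mode")
else:
    print("all certification requirements hold: continuing nominal operation")
```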

    First International Summer School on Runtime Verification: as part of the ArVi COST Action 1402

    This paper briefly reports on the first international summer school on Runtime Verification: Branches of practical topics rooted in theory, co-organized and sponsored by COST Action IC1402 ArVi, which was held on September 23-25 in Madrid, Spain, as part of the 16th International Conference on Runtime Verification (RV 2016).