    A Survey of Challenges for Runtime Verification from Advanced Application Domains (Beyond Software)

    Runtime verification is an area of formal methods that studies the dynamic analysis of execution traces against formal specifications. Typically, the two main activities in runtime verification efforts are the process of creating monitors from specifications, and the algorithms for the evaluation of traces against the generated monitors. Other activities involve the instrumentation of the system to generate the trace and the communication between the system under analysis and the monitor. Most applications of runtime verification have focused on the dynamic analysis of software, even though there are many more potential applications to other computational devices and target systems. In this paper we present a collection of challenges for runtime verification extracted from concrete application domains, focusing on the difficulties that must be overcome to tackle these specific challenges. The computational models that characterize these domains require devising new techniques beyond the current state of the art in runtime verification.
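    The monitor-and-trace workflow the survey describes can be sketched minimally. The property below is hypothetical (not from the paper): "every 'open' event is eventually matched by a 'close'", encoded directly as a small stateful monitor that is evaluated over a finite trace.

    ```python
    # Minimal runtime-verification sketch: a monitor derived from a
    # (hypothetical) specification is run over a stream of events.

    def make_monitor():
        """Return a stateful monitor for: every 'open' has a matching 'close'."""
        state = {"pending": 0}  # opens not yet matched by a close

        def step(event):
            if event == "open":
                state["pending"] += 1
            elif event == "close" and state["pending"] > 0:
                state["pending"] -= 1
            # verdict so far: True means "no violation observed yet"
            return state["pending"] == 0

        return step

    def evaluate(trace):
        """Feed a finite trace to a fresh monitor; return the final verdict."""
        monitor = make_monitor()
        verdict = True
        for event in trace:
            verdict = monitor(event)
        return verdict
    ```

    For example, `evaluate(["open", "close"])` accepts, while a trace ending with an unmatched "open" is rejected.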

    STAN: analysis of data traces using an event-driven interval temporal logic

    The increasing integration of systems into people’s daily routines, especially smartphones, requires ensuring the correctness of their functionality and even some performance requirements. Sometimes, we can only observe the interaction of the system (e.g. the smartphone) with its environment at certain time points; that is, we only have access to the data traces produced by this interaction. This paper presents the tool STAn, which performs runtime verification on data traces that combine timestamped discrete events and sampled real-valued magnitudes. STAn uses the Spin model checker as the underlying execution engine, and analyzes traces against properties described in the so-called event-driven interval temporal logic (eLTL) by transforming each eLTL formula into a network of concurrent automata, written in Promela, that monitors the trace. We present two different transformations for online and offline monitoring, respectively. Then, Spin explores the state space of the automata network and the trace to return a verdict about the corresponding property. We apply the approach to analyze data traces obtained during mobile application testing in different network scenarios.
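    An offline-monitoring sketch in the spirit of the data traces STAn analyzes (this is not the tool's actual Promela encoding): a trace mixes timestamped discrete events with sampled real-valued magnitudes, and a hypothetical interval property says that between a 'start' and an 'end' event, every sampled 'rssi' value must stay above a threshold.

    ```python
    # Hypothetical interval property over a mixed event/sample trace:
    # "between 'start' and 'end', every 'rssi' sample stays above threshold".

    def check_interval(trace, threshold=-90.0):
        """trace: list of (timestamp, kind, value) records, where kind is
        'start', 'end', or 'rssi' (a sampled magnitude).
        Returns True iff the interval property holds on the whole trace."""
        inside = False
        for _t, kind, value in sorted(trace, key=lambda r: r[0]):
            if kind == "start":
                inside = True
            elif kind == "end":
                inside = False
            elif kind == "rssi" and inside and value < threshold:
                return False  # a sample violated the bound inside the interval
        return True
    ```

    In STAn the analogous check is compiled to a network of Promela automata explored by Spin; the sketch above only illustrates the shape of the traces and verdicts.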

    Multilevel Runtime Verification for Safety and Security Critical Cyber Physical Systems from a Model Based Engineering Perspective

    Advanced embedded system technology is one of the key driving forces behind the rapid growth of Cyber-Physical System (CPS) applications. A CPS consists of multiple coordinating and cooperating components, which are often software-intensive and interact with each other to accomplish unprecedented tasks. Such highly integrated CPSs have complex interaction failures, attack surfaces, and attack vectors that must be protected and secured against. This dissertation advances the state of the art by developing a multilevel runtime monitoring approach for safety and security critical CPSs, with monitors at each level of processing and integration. Given that computation and data processing vulnerabilities may exist at multiple levels in an embedded CPS, solutions deployed at the levels where the faults or vulnerabilities originate enable timely detection of anomalies. Further, the increasing functional and architectural complexity of critical CPSs has significant safety and security operational implications. These challenges are leading to a need for new methods in which there is a continuum between design-time assurance and runtime, or operational, assurance. Towards this end, this dissertation explores Model Based Engineering methods by which design assurance can be carried forward to the runtime domain, creating a shared responsibility for reducing the overall risk associated with the system in operation. Therefore, a synergistic combination of Verification & Validation at design time and runtime monitoring at multiple levels is beneficial in assuring the safety and security of critical CPSs. Furthermore, we realize our multilevel runtime monitor framework on hardware using a stream-based runtime verification language.
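    The multilevel idea can be sketched as one monitor per processing level, each checking properties appropriate to that level, so an anomaly is flagged at the level where it originates. The levels, checks and field names below are illustrative assumptions, not the dissertation's actual hardware framework or its stream-based language.

    ```python
    # Hypothetical three-level monitor stack for an embedded CPS:
    # each level gets its own check, and violations report their origin.

    def sensor_level(sample):
        # lowest level: raw reading is physically plausible
        return -40.0 <= sample["temp_c"] <= 125.0

    def fusion_level(sample):
        # integration level: redundant sensors agree within tolerance
        return abs(sample["temp_c"] - sample["temp_backup_c"]) <= 2.0

    def application_level(sample):
        # top level: control command stays within its allowed set
        return sample["actuator_cmd"] in ("idle", "heat", "cool")

    MONITORS = {"sensor": sensor_level,
                "fusion": fusion_level,
                "application": application_level}

    def monitor_all(sample):
        """Run every level's monitor; return the levels that flagged."""
        return [level for level, check in MONITORS.items() if not check(sample)]
    ```

    Reporting the originating level, rather than a single global verdict, is what supports the timely, localized detection the dissertation argues for.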

    Evaluating verification awareness as a method for assessing adaptation risk

    Self-integration requires a system to be self-aware and self-protecting of its functionality and communication processes to mitigate interference in accomplishing its goals. Incorporating self-protection into a framework for reasoning about compliance with critical requirements is a major challenge when the system’s operational environment may have uncertainties resulting in runtime changes. The reasoning should cover a range of impacts and tradeoffs so that the system can immediately address an issue, even if only partially or imperfectly. Assuming that critical requirements can be formally specified and embedded as part of system self-awareness, runtime verification often demands extensive on-board resources, suffers from state explosion, and offers minimal explanation of results. Model checking partially mitigates these issues by abstracting the system operations and architecture. However, validating the consistency of a model given a runtime change is generally performed external to the system and translated back to the operational environment, which can be inefficient. This paper focuses on codifying and embedding verification awareness into a system. Verification awareness is a type of self-awareness related to reasoning about compliance with critical properties at runtime when a system adaptation is needed. The premise is that an adaptation that interferes with a design-time proof process for requirement compliance increases the risk that the original proof process cannot be reused. The greater the risk of limiting proof process reuse, the higher the probability that the requirement would be violated by the adaptation. The application of Rice’s 1953 theorem to this domain indicates that determining whether a given adaptation inherently inhibits proof reuse is undecidable, motivating the heuristic, comparative approach based on proof meta-data that is part of our method.
To demonstrate our deployment of verification awareness, we predefine four adaptations that are all available to three distinct wearable simulations (stress, insulin delivery, and hearables). We capture meta-data from applying automated theorem proving to wearable requirements and assess the risk, among the four adaptations, of limiting proof process reuse for each of their requirements. The results show that the adaptations affect proof process reuse differently on each wearable. We evaluate our reasoning framework by embedding checkpoints on requirement compliance within the wearable code and logging the execution trace of each adaptation. The logs confirm that the adaptation selected by each wearable with the lowest risk of inhibiting proof process reuse for its requirements also causes the fewest requirement failures in execution.
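    The comparative selection the paper motivates can be sketched as follows. The meta-data fields and the risk formula here are illustrative assumptions (the paper's actual proof meta-data is not specified in this abstract): each candidate adaptation carries an estimate of how much it disturbs the design-time proof process, and the system picks the candidate with the lowest estimated risk.

    ```python
    # Hypothetical heuristic: risk of inhibiting proof reuse grows with the
    # number of lemmas an adaptation touches and the depth of its changes.

    def proof_reuse_risk(meta):
        """Comparative (not absolute) risk score for one adaptation."""
        return meta["lemmas_touched"] * meta["change_depth"]

    def select_adaptation(candidates):
        """Pick the candidate whose meta-data suggests the lowest risk."""
        return min(candidates, key=lambda c: proof_reuse_risk(c["meta"]))
    ```

    Because the underlying decision problem is undecidable (per the Rice's-theorem argument above), only such comparative scores over proof meta-data are available, never an exact yes/no answer.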

    Trajectory planning based on adaptive model predictive control: Study of the performance of an autonomous vehicle in critical highway scenarios

    Increasing automation in the automotive industry is an important contribution to overcoming many of the major societal challenges. However, testing and validating a highly autonomous vehicle is one of the biggest obstacles to the deployment of such vehicles, since they rely on data-driven and real-time sensors, actuators, complex algorithms, machine learning systems, and powerful processors to execute software, and they must be proven to be reliable and safe. For this reason, the verification, validation and testing (VVT) of autonomous vehicles is gaining interest and attention in the scientific community, and there have been a number of significant efforts in this field. VVT helps developers and testers to uncover hidden faults, increasing confidence in the systems' safety, security and functional behavior, and in the ability to integrate autonomous prototypes into existing road networks. Other stakeholders, such as higher management, public authorities and the public, are also crucial to completing the VVT process. As autonomous vehicles require hundreds of millions of kilometers of test driving on public roads before vehicle certification, simulations are playing a key role, since simulation tools can virtually test millions of real-life scenarios, increasing safety and reducing costs, time and the need for physical road tests. In this study, a literature review is conducted to classify approaches to VVT, and an existing simulation tool is used to implement an autonomous driving system. The system is characterized in terms of its performance in several critical highway scenarios.

    RV-Enabled Framework for Self-Adaptive Software

    Software systems keep increasing in scale and complexity, requiring ever more effort to design, build, test, and deploy. Systems are integrated from separately developed modules. Over the life of a system, individual modules may be updated, which may allow incompatibilities between modules to slip in. Consequently, many faults in a software system are discovered only after the system is built and deployed. Runtime verification (RV) is a collection of dynamic techniques for detecting faults in software systems. An executable monitor is constructed from a formally specified property of the system being checked (referred to as the target system) and is run over a stream of observations (events) to check whether the property is satisfied. Although existing tools can specify and monitor properties efficiently, it is still challenging to apply RV to large-scale real-world applications. From the perspective of monitoring requirements, we need a formalism that can describe both high- and low-level behaviors of the target system. The complexity of the target program also raises issues. For instance, it may contain a set of loosely coupled components that may be added or removed dynamically. Correspondingly, monitoring requirements are often defined over asynchronous observations carrying data whose domain scales up as the target system expands. How to conveniently specify these properties and generate monitors that can check them efficiently is a challenge. Beyond detecting faults, self-adaptive software is desirable for tolerating faults or unexpected environment changes during execution. By equipping monitors with reflexive adaptation actions, runtime enforcement (RE) can be used to improve the robustness of the system. However, there is little work on analyzing possible interference between the implementation of adaptation actions and the target program.
In this thesis, we present SMEDL, an RV framework using a specification language designed for high usability with respect to expressiveness, efficiency and flexible deployment. A property specification is composed of a set of communicating monitors described in the form of EFSMs (extended finite state machines). High-level properties can be straightforwardly transformed into SMEDL specifications, while actions can be specified in transitions to express low-level imperative behaviors. The deployment of monitors can be explicitly specified to support both centralized and distributed software. Based on a dynamically scalable monitor structure, we propose a novel method to efficiently check parametric properties that rely on the data that events carry. To tackle the challenges of monitoring timing properties in an asynchronous environment, we propose a conceptual monitor architecture that clearly separates the monitoring of time intervals from the rest of property checking. To support software adaptation, we extend the SMEDL framework to specify enforcement specifications, generate implementations and instrument them into the target system. Analysis of interference between the adaptation implementation and the target system can be performed statically based on Hoare logic. Instead of building a whole new proof for the target system globally, we present a method to generate local proof obligations for better scalability.
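    Parametric monitoring of the kind described above, with monitor instances created dynamically per parameter value carried by events, can be sketched as follows. The property and event shapes are illustrative, not SMEDL's actual specification syntax: one small EFSM instance per file checks that 'open' precedes any 'read' of that file.

    ```python
    # Sketch of parametric monitoring: one monitor instance is spawned per
    # parameter value (here, per file id) observed in the event stream.

    class FileMonitor:
        """Per-file EFSM: a 'read' before 'open' is a violation."""
        def __init__(self):
            self.opened = False
            self.violated = False

        def step(self, action):
            if action == "open":
                self.opened = True
            elif action == "read" and not self.opened:
                self.violated = True

    def run(trace):
        """trace: list of (action, file_id) events.
        Returns the set of file ids whose monitor reached a violation."""
        instances = {}  # monitor structure grows with the parameter domain
        for action, file_id in trace:
            instances.setdefault(file_id, FileMonitor()).step(action)
        return {fid for fid, m in instances.items() if m.violated}
    ```

    The instance map grows with the data domain, which is precisely why the thesis targets efficient checking as the parameter space scales with the target system.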