
    Simulation-based Validation for Autonomous Driving Systems

    Simulation is essential to validate autonomous driving systems. However, a simple simulation, even for an extremely high number of simulated miles or hours, is not sufficient. We need well-founded criteria showing that simulation does indeed cover a large fraction of the relevant real-world situations. In addition, the validation must concern not only incidents, but also the detection of any type of potentially dangerous situation, such as traffic violations. We investigate a rigorous simulation- and testing-based validation method for autonomous driving systems that integrates an existing industrial simulator and a formally defined testing environment. The environment includes a scenario generator that drives the simulation process and a monitor that checks at runtime the observed behavior of the system against a set of system properties to be validated. The validation method consists of extracting from the simulator a semantic model of the simulated system, including a metric graph, which is a mathematical model of the environment in which the vehicles of the system evolve. The monitor can verify properties formalized in a first-order linear temporal logic and provide diagnostics explaining their non-satisfaction. Instead of exploring the system behavior randomly, as many simulators do, we propose a method to systematically generate sets of scenarios that cover potentially risky situations, especially for different types of junctions where specific traffic rules must be respected. We show that the systematic exploration of risky situations has uncovered many flaws in the real simulator that would have been very difficult to discover by a random exploration process.
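
    As an illustration of the runtime-monitoring idea described in this abstract, the sketch below (not taken from the paper) checks an observed simulation trace, step by step, against a simple "always"-style property and returns a diagnostic when it is violated. The state fields (speed, at_junction, has_right_of_way) and the property itself are hypothetical.

```python
# Illustrative runtime-monitor sketch (hypothetical property and state fields):
# check, at every simulation step, that a vehicle inside a junction has the
# right of way, and report a diagnostic when the property is violated.

from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class State:
    step: int
    speed: float            # m/s
    at_junction: bool
    has_right_of_way: bool


def monitor_right_of_way(trace: Iterable[State]) -> Optional[str]:
    """Return None if the property holds on the observed trace, else a diagnostic."""
    for s in trace:
        # G (at_junction -> has_right_of_way), checked on the finite trace seen so far
        if s.at_junction and not s.has_right_of_way:
            return (f"violation at step {s.step}: vehicle in a junction "
                    f"at {s.speed:.1f} m/s without right of way")
    return None


if __name__ == "__main__":
    trace = [State(0, 8.0, False, True),
             State(1, 6.5, True, True),
             State(2, 7.0, True, False)]   # violating step
    print(monitor_right_of_way(trace))
```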

    Current and Future Challenges in Knowledge Representation and Reasoning

    Knowledge Representation and Reasoning is a central, longstanding, and active area of Artificial Intelligence. Over the years it has evolved significantly; more recently it has been challenged and complemented by research in areas such as machine learning and reasoning under uncertainty. In July 2022, a Dagstuhl Perspectives workshop was held on Knowledge Representation and Reasoning. The goal of the workshop was to describe the state of the art in the field, including its relation with other areas, its shortcomings and strengths, together with recommendations for future progress. We developed this manifesto based on the presentations, panels, working groups, and discussions that took place at the Dagstuhl workshop. It is a declaration of our views on Knowledge Representation: its origins, goals, milestones, and current foci; its relation to other disciplines, especially to Artificial Intelligence; and its challenges, along with key priorities for the next decade.

    Exploiting Process Algebras and BPM Techniques for Guaranteeing Success of Distributed Activities

    The communications and collaborations among activities, processes, or systems in general are the basis of the complex systems known as distributed systems. Given the increasing complexity of their structure, interactions, and functionalities, many research areas are interested in providing modelling techniques and verification capabilities to guarantee their correctness and the satisfaction of properties. In particular, the formal methods community provides robust verification techniques to prove system properties. However, most approaches rely on manually designed formal models, making the analysis process challenging because it requires an expert in the field. On the other hand, the BPM community provides a widely used graphical notation (i.e., BPMN) to design the internal behaviour and interactions of complex distributed systems, which can be enhanced with additional features (e.g., privacy technologies). Furthermore, BPM uses process mining techniques to automatically discover these models from the observation of events. However, verifying properties and expected behaviour, especially in collaborations, still needs a solid methodology. This thesis aims at exploiting the features of the formal methods and BPM communities to provide approaches that enable formal verification over distributed systems. In this context, we propose two approaches. The modelling-based approach starts from BPMN models and produces process algebra specifications to enable formal verification of system properties, including privacy-related ones. The process mining-based approach starts from log observations to automatically generate process algebra specifications to enable verification capabilities.
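
    To make the BPMN-to-process-algebra direction concrete, here is a toy sketch (not the thesis's actual translation rules): a sequence of BPMN tasks is rendered as a prefix-style process term, and an exclusive (XOR) gateway as a choice between the terms of its branches. The task names and the notation are purely illustrative.

```python
# Toy sketch of a BPMN-to-process-algebra style translation (not the thesis's
# actual rules): sequences become prefix terms, XOR gateways become choices.

def seq_to_term(tasks):
    """['Receive order', 'Ship'] -> 'receive_order . ship . 0'"""
    actions = [t.lower().replace(" ", "_") for t in tasks]
    return " . ".join(actions + ["0"])

def xor_to_term(branches):
    """Exclusive gateway: choice (+) between the terms of its branches."""
    return " + ".join(f"({seq_to_term(b)})" for b in branches)

if __name__ == "__main__":
    print(seq_to_term(["Receive order", "Check stock", "Ship"]))
    print(xor_to_term([["Approve", "Archive"], ["Reject", "Notify"]]))
```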

    Efficient Model Checking: The Power of Randomness


    A Last-Level Defense for Application Integrity and Confidentiality

    Our objective is to protect the integrity and confidentiality of applications operating in untrusted environments. Trusted Execution Environments (TEEs) are not a panacea. Hardware TEEs fail to protect applications against Sybil, Fork, and Rollback Attacks and, consequently, fail to preserve the consistency and integrity of applications. We introduce a novel system, LLD, that enforces the integrity and consistency of applications in a transparent and scalable fashion. Our solution augments TEEs with instantiation control and rollback protection. Instantiation control, enforced with TEE-supported leases, mitigates Sybil/Fork Attacks without incurring the high costs of solving crypto-puzzles. Our rollback detection mechanism does not need excessive replication, nor does it sacrifice durability. We show that implementing these functionalities in the LLD runtime automatically protects applications and services such as a popular DBMS.
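
    The sketch below is a simplified illustration of the two ideas named in the abstract, not LLD's actual protocol: instantiation control via a time-bounded lease (so at most one instance is considered valid at a time) and rollback detection via a trusted monotonic counter compared against the version recorded in persisted state. All names are hypothetical.

```python
# Illustrative sketch (not LLD's actual protocol) of lease-based instantiation
# control and counter-based rollback detection.

import time

class LeaseAuthority:
    """Hypothetical trusted service granting exclusive, expiring leases."""
    def __init__(self, duration_s: float):
        self.duration_s = duration_s
        self.holder = None
        self.expiry = 0.0

    def acquire(self, instance_id: str) -> bool:
        now = time.monotonic()
        if self.holder is None or now >= self.expiry:
            self.holder, self.expiry = instance_id, now + self.duration_s
        return self.holder == instance_id   # only one live instance at a time

def check_rollback(persisted_version: int, trusted_counter: int) -> None:
    """Reject persisted state older than the trusted monotonic counter."""
    if persisted_version < trusted_counter:
        raise RuntimeError("rollback detected: stale persisted state")

if __name__ == "__main__":
    lease = LeaseAuthority(duration_s=30.0)
    print(lease.acquire("enclave-A"))   # True
    print(lease.acquire("enclave-B"))   # False while A's lease is still valid
    check_rollback(persisted_version=7, trusted_counter=7)   # ok, no rollback
```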

    Outcome-Oriented Prescriptive Process Monitoring Based on Temporal Logic Patterns

    Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most of the existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide the users with very specific (sequences of) activities that have to be executed, without caring about the feasibility of these recommendations. To address this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system that recommends temporal relations between activities that have to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving more freedom to the user in deciding the interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns that are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return, as recommendations, the most salient temporal patterns to be satisfied to maximize the likelihood of a certain outcome for an input ongoing process execution. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
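
    A minimal sketch of the encoding idea described above (not the paper's implementation): each trace in an event log is described by whether it satisfies a few Declare-style LTLf patterns, and a classifier maps that encoding to the process outcome. The patterns, activity names, and outcome labels below are hypothetical.

```python
# Minimal sketch of LTLf-pattern features for outcome prediction (illustrative
# only): encode traces with pattern satisfaction flags, train a classifier,
# then inspect which patterns matter most for the outcome.

from sklearn.tree import DecisionTreeClassifier

def existence(trace, a):
    """Activity a occurs at least once in the trace."""
    return int(a in trace)

def response(trace, a, b):
    """Every occurrence of a is eventually followed by b."""
    return int(all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a))

PATTERNS = [("existence(check)", lambda t: existence(t, "check")),
            ("response(order, ship)", lambda t: response(t, "order", "ship"))]

def encode(trace):
    return [f(trace) for _, f in PATTERNS]

log = [["order", "check", "ship"], ["order", "cancel"], ["order", "check", "ship"]]
outcomes = [1, 0, 1]   # 1 = desired outcome of the case

clf = DecisionTreeClassifier().fit([encode(t) for t in log], outcomes)

# At runtime, the most salient patterns of the learned model can be reported
# back as recommendations for an ongoing case.
print(dict(zip([name for name, _ in PATTERNS], clf.feature_importances_)))
```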

    Runtime Monitoring DNN-Based Perception

    Deep neural networks (DNNs) are instrumental in realizing complex perception systems. As many of these applications are safety-critical by design, engineering rigor is required to ensure that the functional insufficiency of the DNN-based perception is not the source of harm. In addition to conventional static verification and testing techniques employed during the design phase, there is a need for runtime verification techniques that can detect critical events, diagnose issues, and even enforce requirements. This tutorial aims to provide readers with a glimpse of techniques proposed in the literature. We start with classical methods proposed in the machine learning community, then highlight a few techniques proposed by the formal methods community. While we can surely observe similarities in the design of monitors, how the decision boundaries are created varies between the two communities. We conclude by highlighting the need to rigorously design monitors, where data availability outside the operational domain plays an important role.
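
    As a simplified illustration of one formal-methods-style monitoring idea (not a specific tool from the tutorial): record, per predicted class, an interval box over feature activations seen during training, and flag runtime inputs whose activations fall outside the box of the predicted class. The data and shapes below are synthetic.

```python
# Sketch of a box-abstraction runtime monitor for DNN features (illustrative):
# training-time activations define per-class boxes; runtime activations outside
# the predicted class's box are flagged as out of distribution.

import numpy as np

class BoxMonitor:
    def __init__(self):
        self.boxes = {}   # class -> (min_vector, max_vector)

    def fit(self, features: np.ndarray, labels: np.ndarray) -> None:
        for c in np.unique(labels):
            f = features[labels == c]
            self.boxes[int(c)] = (f.min(axis=0), f.max(axis=0))

    def accepts(self, feature: np.ndarray, predicted_class: int) -> bool:
        lo, hi = self.boxes[predicted_class]
        return bool(np.all(feature >= lo) and np.all(feature <= hi))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 8))          # stand-in for layer activations
    labels = rng.integers(0, 3, size=100)      # stand-in for predicted classes
    mon = BoxMonitor()
    mon.fit(feats, labels)
    print(mon.accepts(feats[0], int(labels[0])))          # inside its class box
    print(mon.accepts(feats[0] + 10.0, int(labels[0])))   # flagged as outside
```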

    Measuring the impact of COVID-19 on hospital care pathways

    Care pathways in hospitals around the world reported significant disruption during the recent COVID-19 pandemic, but measuring the actual impact is more problematic. Process mining can be useful for hospital management to measure the conformance of real-life care to what might be considered normal operations. In this study, we aim to demonstrate that process mining can be used to investigate process changes associated with complex disruptive events. We studied perturbations to accident and emergency (A&E) and maternity pathways in a UK public hospital during the COVID-19 pandemic. Coincidentally, the hospital had implemented a Command Centre approach for patient-flow management, affording an opportunity to study both the planned improvement and the disruption due to the pandemic. Our study proposes and demonstrates a method for measuring and investigating the impact of such planned and unplanned disruptions affecting hospital care pathways. We found that during the pandemic, both A&E and maternity pathways had measurable reductions in the mean length of stay and a measurable drop in the percentage of pathways conforming to normative models. There were no distinctive patterns in the monthly mean values of length of stay or conformance throughout the phases of the installation of the hospital’s new Command Centre approach. Due to a deficit in the available A&E data, the findings for A&E pathways could not be interpreted.
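
    The sketch below is a simplified stand-in for the two measures mentioned in the abstract, not the study's actual pipeline: mean length of stay and the share of pathways conforming to a normative model, compared across periods. The events, normative model, and figures are hypothetical.

```python
# Simplified sketch of the two measures (illustrative only): mean length of
# stay and conformance rate against a toy normative A&E pathway.

from statistics import mean

NORMATIVE = ["arrival", "triage", "treatment", "discharge"]   # toy model

def conforms(pathway):
    """A pathway conforms if all its events are in the model and in order."""
    idx = [NORMATIVE.index(e) for e in pathway if e in NORMATIVE]
    return len(idx) == len(pathway) and idx == sorted(idx)

def summarise(cases):
    """cases: list of (pathway, length_of_stay_hours) pairs."""
    los = [hours for _, hours in cases]
    conf = [conforms(pathway) for pathway, _ in cases]
    return mean(los), sum(conf) / len(conf)

pre_pandemic = [(["arrival", "triage", "treatment", "discharge"], 6.0),
                (["arrival", "triage", "discharge"], 3.5)]
pandemic = [(["arrival", "treatment", "triage", "discharge"], 4.0),
            (["arrival", "triage", "discharge"], 2.5)]

print("pre-pandemic (mean LoS, conformance):", summarise(pre_pandemic))
print("pandemic     (mean LoS, conformance):", summarise(pandemic))
```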

    What, Indeed, is an Achievable Provable Guarantee for Learning-Enabled Safety Critical Systems

    Machine learning has made remarkable advancements, but confidently utilising learning-enabled components in safety-critical domains still poses challenges. Among these challenges, finding a rigorous, yet practical, way of achieving safety guarantees is one of the most prominent. In this paper, we first discuss the engineering and research challenges associated with the design and verification of such systems. Then, based on the observation that existing works cannot actually achieve provable guarantees, we promote a two-step verification method for the ultimate achievement of provable statistical guarantees.
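
    As a generic illustration of what a "provable statistical guarantee" can look like (this is a standard zero-failure sampling bound, not the paper's two-step method): after n independent test scenarios with no observed failures, one can claim the failure probability is below eps with confidence at least 1 - (1 - eps)^n, which fixes how many passing tests are needed for given eps and confidence.

```python
# Generic zero-failure statistical bound (illustrative, not the paper's method):
# smallest n such that observing n passing independent tests rules out a
# failure probability of eps or more, except with probability at most delta.

import math

def tests_needed(eps: float, delta: float) -> int:
    """Smallest n with (1 - eps)^n <= delta, assuming all n tests pass."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

if __name__ == "__main__":
    # e.g. claim "failure probability < 1e-3" with 99% confidence
    print(tests_needed(eps=1e-3, delta=0.01))   # about 4603 passing tests
```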