57 research outputs found

    A survey on compositional algorithms for verification and synthesis in supervisory control

    This survey gives an overview of current research on compositional algorithms for the verification and synthesis of modular systems modelled as interacting finite-state machines. Compositional algorithms operate by repeatedly simplifying individual components of a large system, replacing them by smaller so-called abstractions while preserving critical properties. In this way, the exponential growth of the state space can be limited, making it possible to analyse much larger systems than standard state-space exploration can handle. The paper gives an introduction to the principles underlying compositional methods, followed by a survey of algorithmic solutions from the recent literature that use these methods to analyse systems automatically. The focus is on applications in supervisory control of discrete event systems, particularly on methods that verify critical properties or synthesise controllable and nonblocking supervisors.
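    The survey itself is notation-agnostic, but the core loop it describes (simplify components, compose them, simplify again) is easy to sketch. The Python fragment below is a minimal illustration under strong assumptions: automata are plain dictionaries, marking and blocking are ignored, and the "abstraction" is the weakest correct simplification imaginable (merging states with identical outgoing transitions). The helper names compose, abstract and compositional_reduce are invented here; real compositional tools rely on much stronger property-preserving abstractions, such as observation or conflict equivalence, and additionally hide events local to the current subsystem before simplifying.

        def compose(a, b):
            """Synchronous composition of two FSMs given as dicts with keys
            "init", "alphabet" and "trans" ((state, event) -> state).
            Shared events synchronise, private events interleave."""
            alph = a["alphabet"] | b["alphabet"]
            init = (a["init"], b["init"])
            trans, seen, stack = {}, {init}, [init]
            while stack:
                sa, sb = stack.pop()
                for e in alph:
                    na = a["trans"].get((sa, e)) if e in a["alphabet"] else sa
                    nb = b["trans"].get((sb, e)) if e in b["alphabet"] else sb
                    if na is None or nb is None:
                        continue              # event disabled in one component
                    trans[((sa, sb), e)] = (na, nb)
                    if (na, nb) not in seen:
                        seen.add((na, nb))
                        stack.append((na, nb))
            return {"init": init, "alphabet": alph, "trans": trans}

        def abstract(fsm):
            """Merge states whose outgoing transitions are identical; a very
            weak but correct simplification (marking is not modelled here)."""
            outgoing = {}
            for (s, e), t in fsm["trans"].items():
                outgoing.setdefault(s, {})[e] = t
            rep, by_signature = {}, {}
            for s, edges in outgoing.items():
                signature = tuple(sorted(edges.items()))
                rep[s] = by_signature.setdefault(signature, s)
            new_trans = {(rep.get(s, s), e): rep.get(t, t)
                         for (s, e), t in fsm["trans"].items()}
            return {"init": rep.get(fsm["init"], fsm["init"]),
                    "alphabet": fsm["alphabet"], "trans": new_trans}

        def compositional_reduce(components):
            """Repeatedly simplify individual components and compose pairwise,
            so no intermediate result is ever the full synchronous product."""
            components = [abstract(c) for c in components]
            while len(components) > 1:
                a, b = components.pop(), components.pop()
                components.append(abstract(compose(a, b)))
            return components[0]

        # Toy usage: a machine and a buffer that synchronise on "done".
        machine = {"init": "idle", "alphabet": {"start", "done"},
                   "trans": {("idle", "start"): "busy", ("busy", "done"): "idle"}}
        buffer_ = {"init": "empty", "alphabet": {"done", "ship"},
                   "trans": {("empty", "done"): "full", ("full", "ship"): "empty"}}
        reduced = compositional_reduce([machine, buffer_])
        print(len(reduced["trans"]), "transitions in the reduced composition")

    Even this toy loop shows the structure the survey is concerned with: the full synchronous product is never built in one step, and every intermediate result is simplified before it is composed further.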

    Social convergence in times of spatial distancing: The Role of music during the COVID-19 Pandemic


    Formal Methods and Safety for Automated Vehicles: Modeling, Abstractions, and Synthesis of Tactical Planners

    One goal of developing automated road vehicles is to completely free people from driving tasks. Automated vehicles with no human driver must handle all traffic situations that human drivers are expected to handle, and possibly more. Although human drivers cause many traffic accidents, their accident and failure rates are still very low, and automated vehicles must match them.
    Tactical planners are responsible for making discrete decisions for the coming seconds or minutes. As with all subsystems in an automated vehicle, these planners need to be supported by a credible and convincing argument of their correctness. The planners interact with other road users in a feedback loop, so their correctness depends on their behavior in relation to other drivers and road users over time. One way to ascertain their correctness is to test the vehicles in real traffic, but to be sufficiently certain that a tactical planner is safe, it has to be tested for 255 million miles with no accidents.
    Formal methods can, in contrast to testing, mathematically prove that given requirements are fulfilled. Hence, these methods are a promising alternative for making credible arguments for tactical planners' correctness. The topic of this thesis is the use of formal methods in the automotive industry to design safe tactical planners. Of interest is both how automotive systems can be modeled in formal frameworks and how formal methods can be used practically within the automotive development process.
    The main findings of this thesis are that it is viable to formally express desired properties of tactical planners and to use formal methods to prove their correctness. However, the difficulty of anticipating and inspecting the interaction of several desired properties is found to be an obstacle. Model Checking, Reactive Synthesis, and Supervisory Control Theory have been used in the design and development process of tactical planners, and these methods have their benefits, depending on the application. To be feasible and useful, these methods need to operate on both a high and a low level of abstraction, and this thesis contributes an automatic abstraction method that bridges this divide.
    It is also found that artifacts from formal methods tools may be used to convincingly argue that a realization of a tactical planner is safe, and that such an argument puts formal requirements on the vehicle's other subsystems and its surroundings.
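    As a purely illustrative example of what "formally expressing a desired property of a tactical planner" can look like, the linear temporal logic formula below states a hypothetical requirement for a lane-change planner. The propositions (initiate_lane_change, gap_sufficient, lane_change_completed, lane_change_aborted) are invented for this sketch and are not taken from the thesis.

        \[
          \mathbf{G}\bigl(\mathit{initiate\_lane\_change} \rightarrow \mathit{gap\_sufficient}\bigr)
          \;\wedge\;
          \mathbf{G}\bigl(\mathit{initiate\_lane\_change} \rightarrow
                \mathbf{F}\,(\mathit{lane\_change\_completed} \vee \mathit{lane\_change\_aborted})\bigr)
        \]

    Read informally: a lane change is only ever initiated when the gap in the target lane is sufficient, and every initiated lane change eventually completes or is aborted. A model checker can verify such a property against a model of the planner, and a reactive-synthesis tool can derive a planner that is guaranteed to satisfy it.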

    Deriving behavioral specifications of industrial software components


    Model-based supervisory control synthesis of cyber-physical systems


    On Supervisor Synthesis via Active Automata Learning

    Our society's reliance on computer-controlled systems is rapidly growing. Such systems are found in various devices, ranging from simple light switches to safety-critical systems like autonomous vehicles. In the context of safety-critical systems, safety and correctness are of utmost importance; faults and errors could have catastrophic consequences. Thus, there is a need for rigorous methodologies that help provide guarantees of safety and correctness. Supervisor synthesis, the ability to mathematically synthesize a supervisor that ensures that the closed-loop system behaves in accordance with known requirements, can indeed help.
    This thesis introduces supervisor learning, an approach to help automate the learning of supervisors in the absence of plant models. Traditionally, supervisor synthesis makes use of plant models and specification models to obtain a supervisor. Industrial adoption of this method is limited due to, among other things, the difficulty of obtaining usable plant models; manually creating these plant models is an error-prone and time-consuming process. Supervisor learning therefore intends to improve the industrial adoption of supervisory control by automating the generation of supervisors in the absence of plant models.
    The idea is to learn a supervisor for the system under learning (SUL) by active interaction and experimentation. To this end, we present two algorithms, SupL* and MSL, that directly learn supervisors when provided with a simulator of the SUL and its corresponding specifications. SupL* is a language-based learner that learns one supervisor for the entire system. MSL, on the other hand, learns a modular supervisor, that is, several smaller supervisors, one for each specification. Additionally, a third algorithm, MPL, is introduced for learning a modular plant model.
    The approach is realized in the tool MIDES and has been used to learn supervisors in a virtual manufacturing setting for the Machine Buffer Machine example, as well as to learn a model of the Lateral State Manager, a sub-component of a self-driving car. These case studies show the feasibility and applicability of the proposed approach, in addition to helping identify future directions for research.
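    The abstract above only names the algorithms; as a rough, hypothetical illustration of the underlying idea of learning by active interaction with a simulator (and explicitly not the SupL*, MSL or MPL algorithms themselves), the sketch below explores the reachable states of a simulated plant and records which controllable events must be disabled to keep every specification satisfied. The simulator interface (reset, enabled, step), the spec_ok predicate, and the assumption that states are hashable are all inventions of this sketch.

        def learn_supervisor(sim, spec_ok, controllable):
            """Explore the simulator and collect the (state, event) pairs that
            the learned supervisor must disable.

            Assumed interface (not from the paper): sim.reset() returns the
            initial state, sim.enabled(state) lists the events the plant can
            execute, sim.step(state, event) returns the next state or None,
            and spec_ok(state) says whether all specifications are satisfied.
            """
            disabled = set()                    # supervisor control decisions
            initial = sim.reset()
            frontier, visited = [initial], {initial}
            while frontier:
                state = frontier.pop()
                for event in sim.enabled(state):
                    nxt = sim.step(state, event)
                    if nxt is None:
                        continue
                    if not spec_ok(nxt):
                        if event in controllable:
                            disabled.add((state, event))
                        # An uncontrollable violation would require disabling
                        # some earlier controllable event (backtracking); that
                        # essential part of supervisor synthesis is omitted.
                        continue
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append(nxt)
            return disabled

    A modular variant in the spirit of MSL would run one such learning loop per specification, over only the events relevant to that specification, yielding several small supervisors instead of one monolithic one.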

    IV International Scientific Congress "Society of Ambient Intelligence - 2021" (ISCSAI 2021). Kryvyi Rih, Ukraine, April 12-16, 2021

    IV International Scientific Congress “Society of Ambient Intelligence – 2021” (ISCSAI 2021). Kryvyi Rih, Ukraine, April 12-16, 2021 - proceedings.

    On Provably Correct Decision-Making for Automated Driving

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings several other benefits, such as the possibility of providing independent mobility for people who cannot and/or should not drive. Many different hardware and software components (e.g. sensing, decision-making, actuation, and control) interact to solve the autonomous driving task. Correctness of such automated driving systems is crucial, as incorrect behaviour may have catastrophic consequences. Autonomous vehicles operate in complex and dynamic environments, which requires decision-making and planning at different levels. The aim of the decision-making components in these systems is to make safe decisions at all times. Addressing the challenge of safety verification of these systems is crucial for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, which are techniques that use rigorous mathematical models to build hardware and software systems, can provide a mathematical proof of the correctness of the system. The focus of this thesis is to address some of the challenges in the safety verification of decision-making in automated driving systems. A central question here is how to establish formal verification as an efficient tool for automated driving software development.
    A key finding is the need for an integrated formal approach to prove correctness and to provide a complete safety argument. This thesis provides insights into how three different formal verification approaches, namely supervisory control theory, model checking, and deductive verification, differ in their application to automated driving, and identifies the challenges associated with each method. It identifies the need for more rigour in the requirement refinement process and presents one possible solution using a formal model-based safety analysis approach. To address challenges in the manual modelling process, a possible solution based on automatically learning formal models directly from code is proposed.

    Active Learning of Modular Plant Models

    Model-based techniques are increasingly being embraced by industry in development frameworks. While model-based approaches allow for offline verification and validation of the system, and have other advantages over existing methods, they come with their own challenges. One of these challenges is obtaining a model that describes the behavior of the system. In this paper we present the Modular Plant Learner (MPL), an algorithm that explores the state space and constructs a discrete model of a system. The MPL takes as input a hypothesis structure of the system, called the PSH, and uses this information to interact with a simulation of the system and construct a modular discrete-event model. Using an example, we show how the algorithm exploits the structural information provided by the PSH to explore the state space efficiently, mitigating the state-space explosion problem.
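    As a hypothetical illustration of how structural information can keep exploration modular (this is not the published MPL algorithm, and the PSH is approximated here as a plain mapping from module names to the state variables and events assumed to belong to them), the sketch below explores each module through its own projection of the simulator state, so no single search has to cover the full product state space.

        def explore_modules(sim, hypothesis):
            """Build one small discrete-event model per hypothesised module.

            Assumed interface (not from the paper): hypothesis maps a module
            name to {"vars": [...], "events": [...]}, sim.reset() returns the
            full state as a dict of variable values, and sim.step(state, event)
            returns the next full state or None if the event is not possible.
            """
            models = {}
            for module, info in hypothesis.items():
                keys = tuple(info["vars"])
                project = lambda s, k=keys: tuple(s[x] for x in k)
                init = sim.reset()
                seen = {project(init)}
                transitions = {}
                frontier = [init]
                while frontier:
                    state = frontier.pop()
                    for event in info["events"]:    # only this module's events
                        nxt = sim.step(state, event)
                        if nxt is None:
                            continue
                        transitions[(project(state), event)] = project(nxt)
                        # Exploration is keyed on the projected state, so any
                        # behaviour that depends on other modules' variables is
                        # deliberately abstracted away in this module's model.
                        if project(nxt) not in seen:
                            seen.add(project(nxt))
                            frontier.append(nxt)
                models[module] = {"states": seen, "transitions": transitions}
            return models

    Each module's model is built over its own, much smaller, projected state space, which is the sense in which structural information mitigates the state-space explosion problem.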