220 research outputs found

    Security of systems: modeling and analysis methodology

    The evaluation of a system's security requires a system description. The description determines the quality of the analysis and of the corresponding security solution. The thesis introduces a methodology for evaluating the security of systems. Starting from a simple model and iteratively refining it, the resulting model represents a view of the system under evaluation that is as complete as needed, while keeping the individual steps manageable. In real-world scenarios, the amount of available information commonly varies; the approach can deal with missing information on parts of the system. Finally, it leads to a model with different levels of abstraction for each subsystem.
After each atomic modeling step, an analysis can be executed to evaluate the security of the modeled system. The analysis determines the paths an attacker could take through the system. As a complex system yields a large number of paths, these can be prioritized for in-depth inspection. The methodology is intended for use at all stages of the system life cycle, and it is kept extensible to allow the inclusion of further information and concepts.
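The path analysis described above can be pictured as a graph search. The sketch below is a hypothetical illustration, not the thesis' actual method: components and their connections form a directed graph, each edge carries an assumed attack-effort score, and the enumerated attacker paths are ranked so the easiest ones are inspected first.

```python
# Hypothetical system model: components as nodes, directed edges with an
# assumed attack-effort score (lower = easier for the attacker).
edges = {
    ("internet", "webserver"): 1,
    ("internet", "vpn"): 3,
    ("webserver", "appserver"): 2,
    ("vpn", "appserver"): 1,
    ("appserver", "database"): 2,
}

def attack_paths(edges, source, target):
    """Enumerate all simple attacker paths from source to target,
    ranked by total effort (lowest first)."""
    graph = {}
    for (u, v), cost in edges.items():
        graph.setdefault(u, []).append((v, cost))
    found = []
    def dfs(node, path, effort):
        if node == target:
            found.append((effort, path))
            return
        for nxt, cost in graph.get(node, []):
            if nxt not in path:               # keep paths cycle-free
                dfs(nxt, path + [nxt], effort + cost)
    dfs(source, [source], 0)
    return sorted(found)

for effort, path in attack_paths(edges, "internet", "database"):
    print(effort, " -> ".join(path))
```

The easiest path (total effort 5, via the webserver) is listed before the harder VPN route, which is the kind of prioritisation a security analyst would use to decide where to look first.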

    A survey on compositional algorithms for verification and synthesis in supervisory control

    This survey gives an overview of current research on compositional algorithms for the verification and synthesis of modular systems modelled as interacting finite-state machines. Compositional algorithms operate by repeatedly simplifying individual components of a large system, replacing them by smaller so-called abstractions while preserving critical properties. In this way, the exponential growth of the state space can be limited, making it possible to analyse much bigger state spaces than is possible with standard state-space exploration. This paper gives an introduction to the principles underlying compositional methods, followed by a survey of algorithmic solutions from the recent literature that use compositional methods to analyse systems automatically. The focus is on applications in supervisory control of discrete event systems, particularly on methods that verify critical properties or synthesise controllable and nonblocking supervisors.
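As a toy illustration of one such abstraction step (a sketch under simplifying assumptions, not any of the surveyed algorithms): hiding a component's local events and determinising the result preserves the language over the shared synchronisation events while reducing the number of states.

```python
# Component model: states, deterministic transitions (state, event) -> state.
def abstract(states, trans, init, shared):
    """Hide non-shared events (treat as silent), then subset-construct."""
    # Silent-closure: states reachable via hidden events only.
    eps = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for (s, e), t in trans.items():
            if e not in shared:
                for q in eps:
                    if s in eps[q] and t not in eps[q]:
                        eps[q].add(t)
                        changed = True
    # Subset construction over the shared events.
    start = frozenset(eps[init])
    result, todo = {}, [start]
    while todo:
        cur = todo.pop()
        if cur in result:
            continue
        result[cur] = {}
        for e in shared:
            nxt = set()
            for s in cur:
                if (s, e) in trans:
                    nxt |= eps[trans[(s, e)]]
            if nxt:
                result[cur][e] = frozenset(nxt)
                todo.append(frozenset(nxt))
    return start, result

# Four states; "b" is local to the component, "a" and "c" are shared.
states = {0, 1, 2, 3}
trans = {(0, "a"): 1, (1, "b"): 2, (2, "c"): 3}
start, abstracted = abstract(states, trans, 0, {"a", "c"})
print(len(states), "->", len(abstracted), "states")   # 4 -> 3 states
```

The abstraction merges states 1 and 2, which are indistinguishable to the rest of the system once the local event "b" is hidden; composing such smaller abstractions is what keeps the overall state space manageable.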

    How Useful is Learning in Mitigating Mismatch Between Digital Twins and Physical Systems?

    In the control of complex systems, we observe two diametrical trends: model-based control derived from digital twins, and model-free control through AI. There are also attempts to bridge the gap between the two by incorporating learning-based AI algorithms into digital twins to mitigate mismatches between the digital twin model and the physical system. One of the most straightforward approaches to this is direct input adaptation. In this paper, we ask whether it is useful to employ a generic learning algorithm in such a setting, and our conclusion is "not very". We deem an algorithm more useful than another based on three aspects: 1) it requires fewer data samples to reach a desired minimal performance, 2) it achieves better performance for a reasonable number of data samples, and 3) it accumulates less regret. In our evaluation, we randomly sample problems from an industrially relevant geometry assurance context and measure the aforementioned performance indicators for 16 different algorithms. Our conclusion is that black-box optimization algorithms designed to leverage specific properties of the problem generally perform better than generic learning algorithms, once again confirming that "there is no free lunch".
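A minimal sketch of the kind of comparison described (illustrative only; the problem, algorithms, and metric are stand-ins for the paper's benchmark): a method exploiting known problem structure, here ternary search on a unimodal cost, accumulates far less regret than a generic random-search learner.

```python
import random

def f(x):
    """Unimodal cost with minimum f(0.3) = 0."""
    return (x - 0.3) ** 2

def random_search(budget, rng):
    """Generic learner stand-in: sample uniformly at random."""
    return [f(rng.random()) for _ in range(budget)]

def ternary_search(budget, lo=0.0, hi=1.0):
    """Structure-aware method: exploits unimodality of f."""
    samples = []
    for _ in range(budget // 2):                  # two evaluations per step
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        fm1, fm2 = f(m1), f(m2)
        samples += [fm1, fm2]
        if fm1 < fm2:
            hi = m2
        else:
            lo = m1
    return samples

# Cumulative regret = sum of f(x_t) - f(x*) over all evaluations (f(x*) = 0).
rng = random.Random(0)
regret_generic = sum(random_search(40, rng))
regret_structured = sum(ternary_search(40))
print(regret_structured < regret_generic)         # True
```

With the same evaluation budget, the structured method homes in on the optimum while the generic learner keeps paying for exploration, mirroring the paper's "no free lunch" conclusion.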

    Safety Proofs for Automated Driving using Formal Methods

    The introduction of driving automation in road vehicles can potentially reduce road traffic crashes and significantly improve road safety. Automation in road vehicles also brings other benefits, such as the possibility to provide independent mobility for people who cannot and/or should not drive. Correctness of such automated driving systems (ADSs) is crucial, as incorrect behaviour may have catastrophic consequences. Automated vehicles operate in complex and dynamic environments, which requires decision-making and control at different levels. The aim of such decision-making is for the vehicle to be safe at all times. Verifying the safety of these systems is crucial for the commercial deployment of full autonomy in vehicles. Testing for safety is expensive, impractical, and can never guarantee the absence of errors. In contrast, formal methods, techniques that use rigorous mathematical models to build hardware and software systems, can provide mathematical proofs of the correctness of those systems. The focus of this thesis is to address some of the challenges in the safety verification of decision and control systems for automated driving. A central question here is how to establish formal methods as an efficient approach to developing a safe ADS. A key finding is the need for an integrated formal approach to prove the correctness of an ADS. Several formal methods to model, specify, and verify ADSs are evaluated. Insights into how the evaluated methods differ in various aspects, and the challenges of the respective methods, are discussed. To help developers and safety experts design safe ADSs, the thesis presents modelling guidelines and methods to identify and address subtle modelling errors that might inadvertently result in proving a faulty design to be safe. To address challenges in the manual modelling process, a systematic approach to automatically obtain formal models from ADS software is presented and validated by a proof of concept.
Finally, a structured approach is shown for using the different formal artifacts to provide evidence for the safety argument of an ADS.

    Verification Techniques for xMAS


    Towards an infrastructure for preparation and control of intelligent automation systems

    In an attempt to handle some of the challenges of modern production, intelligent automation systems offer solutions that are flexible, adaptive, and collaborative. Contrary to traditional solutions, intelligent automation systems emerged only recently and thus lack the supporting tools and infrastructure that traditional systems nowadays take for granted. To support efficient development, commissioning, and control of such systems, this thesis summarizes various lessons learned during years of implementation. Based on what was learned, the thesis investigates key features of infrastructure for modern and flexible intelligent automation systems, as well as a number of important design solutions. For example, an important question is raised whether to decentralize the global state or to give complete access to the main controller. Moreover, in order to develop such systems, a framework for virtual preparation and commissioning is presented, with the main goal of supporting engineers. As traditional virtual commissioning solutions are not intended for preparing highly flexible, collaborative, and dynamic systems, this framework aims to provide some of the groundwork and to point to a direction for fast and integrated preparation and virtual commissioning of such systems. Finally, the thesis summarizes some of the investigations made on planning as satisfiability, in order to evaluate how different methods improve planning performance. Throughout the thesis, an industrial material-kitting use case exemplifies the presented perspectives, lessons learned, and frameworks.
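The planning-as-satisfiability idea mentioned above can be sketched as follows (a hypothetical miniature, not the thesis' encoding): a tiny kitting task is unrolled over a fixed horizon and a satisfying assignment of action variables is sought; here brute-force enumeration stands in for the SAT solver that real tools would use.

```python
from itertools import product

HORIZON = 2
ACTIONS = ["pick", "place"]

def simulate(plan):
    """Apply actions to the state; return None on a precondition violation."""
    picked, placed = False, False
    for act in plan:
        if act == "pick":
            if picked:                   # cannot pick twice
                return None
            picked = True
        elif act == "place":
            if not picked or placed:     # must pick first, place once
                return None
            placed = True
    return picked, placed

def solve():
    """Brute-force 'SAT solving': enumerate all action assignments."""
    for plan in product(ACTIONS, repeat=HORIZON):
        if simulate(plan) == (True, True):   # goal: part picked and placed
            return list(plan)
    return None

print(solve())   # ['pick', 'place']
```

In a real encoding, the preconditions, effects, and goal would become Boolean clauses over per-step variables and be handed to an off-the-shelf SAT solver, which is what makes the approach scale beyond toy horizons.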

    Determining the role of hydrogen in the future UK's private vehicle fleet using growth and Lotka-Volterra concepts.

    This research aimed to explore effective strategies for transitioning the UK's private vehicle fleet to hydrogen. The main barrier for hydrogen is the lack of refuelling infrastructure, which impacts the uptake of hydrogen-based vehicles. Current studies focus on the introduction of hydrogen alone with a pre-determined supply chain, or consider one part of the supply chain, such as storage. A computational modelling approach based on predator-prey concepts was used to represent the private vehicle market. The Lotka-Volterra model captures the dynamic behaviour of two or more competing species/technologies, and was used to simulate the introduction of alternative vehicle types and their impact on current vehicles. The behaviour of the predator-prey model was constrained to reflect the private vehicle fleet by developing a first-order growth model representing the growth of conventional vehicles over the last 50 years. By modelling the growth of conventional vehicles, the private vehicle fleet was considered holistically rather than through selected supply chains. This overcame the lack of data and insight for forecasting hydrogen and alternative fuels, while capturing the mutual interaction between multiple competing vehicle types. A key finding of this thesis is the demonstration that the modified Lotka-Volterra model is suitable for representing the dynamics of introducing new and multiple vehicle types into the current private vehicle fleet. The results indicated that the model simplified the current hydrogen infrastructure problem by reducing the number of factors and variables considered, offering a robust alternative modelling tool. This thesis suggests that it is unlikely that the entire private fleet will be displaced by hydrogen vehicles, and that the upper limit should be set at 50% of the market.
The optimum strategy for the UK is 80:20 in favour of non-fuel-cell hybrids and electric vehicles over hydrogen-based ones, focusing on a centralised network of stations. It is recommended that hydrogen refuelling stations (HRS) operate at a utilisation of at least 75%, increasing to maximum when necessary, to avoid under-utilisation. The main implication is that stakeholders can plan according to the best scenario from a holistic view to shape the future of the UK's private fleet.
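The competitive Lotka-Volterra dynamics underlying the model can be sketched numerically. The parameters below are illustrative only, not those fitted in the thesis; they are chosen so that hydrogen saturates near half the fleet, echoing the 50% upper limit.

```python
# Competing 'species': conventional vehicles x and hydrogen vehicles y,
# sharing a normalised fleet capacity K. Euler integration of the
# competitive Lotka-Volterra equations:
#   dx/dt = r1 * x * (1 - (x + a12*y) / K)
#   dy/dt = r2 * y * (1 - (y + a21*x) / K)

def simulate(x0, y0, r1=0.3, r2=0.4, a12=0.6, a21=0.7, K=1.0,
             dt=0.01, steps=20000):
    x, y = x0, y0
    for _ in range(steps):
        dx = r1 * x * (1 - (x + a12 * y) / K)
        dy = r2 * y * (1 - (y + a21 * x) / K)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Start from a fleet dominated by conventional vehicles.
x, y = simulate(x0=0.9, y0=0.01)
print(round(x, 2), round(y, 2))   # coexistence: y settles near half the fleet
```

Because both competition coefficients are below one, the two vehicle types reach a stable coexistence rather than full displacement, which is the qualitative behaviour the thesis uses to argue against a complete hydrogen takeover.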

    Formal Methods and Safety for Automated Vehicles: Modeling, Abstractions, and Synthesis of Tactical Planners

    One goal of developing automated road vehicles is to completely free people from driving tasks. Automated vehicles with no human driver must handle all traffic situations that human drivers are expected to handle, possibly more. Though human drivers cause many traffic crashes, they still have a very low accident and failure rate that automated vehicles must match. Tactical planners are responsible for making discrete decisions for the coming seconds or minutes. As with all subsystems in an automated vehicle, these planners need to be supported by a credible and convincing argument of their correctness. The planners interact with other road users in a feedback loop, so their correctness depends on their behavior in relation to other drivers and road users over time. One way to ascertain their correctness is to test the vehicles in real traffic. But to be sufficiently certain that a tactical planner is safe, it would have to be tested for 255 million miles with no accidents. Formal methods can, in contrast to testing, mathematically prove that given requirements are fulfilled. Hence, these methods are a promising alternative for making credible arguments for tactical planners' correctness. The topic of this thesis is the use of formal methods in the automotive industry to design safe tactical planners. What is interesting is both how automotive systems can be modeled in formal frameworks, and how formal methods can be used practically within the automotive development process. The main findings of this thesis are that it is viable to formally express desired properties of tactical planners, and to use formal methods to prove their correctness. However, the difficulty of anticipating and inspecting the interaction of several desired properties is found to be an obstacle.
Model Checking, Reactive Synthesis, and Supervisory Control Theory have been used in the design and development process of tactical planners, and these methods have their benefits, depending on the application. To be feasible and useful, these methods need to operate on both a high and a low level of abstraction, and this thesis contributes an automatic abstraction method that bridges this divide. It is also found that artifacts from formal methods tools may be used to convincingly argue that a realization of a tactical planner is safe, and that such an argument puts formal requirements on the vehicle's other subsystems and its surroundings.
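As a toy illustration of the supervisory-control flavour of such synthesis (a hypothetical sketch, not the thesis' planners): states from which an uncontrollable event can leave the safe set are removed by a fixpoint iteration, and a supervisor would then disable any controllable transitions into the removed states.

```python
def safe_states(states, trans, uncontrollable, forbidden):
    """Largest set of states from which no uncontrollable event
    can force the system out of the safe set."""
    safe = set(states) - set(forbidden)
    changed = True
    while changed:
        changed = False
        for (s, e), t in trans.items():
            if s in safe and e in uncontrollable and t not in safe:
                safe.discard(s)      # attacker of safety: nature, not us
                changed = True
    return safe

# Toy driving scenario (hypothetical states and events).
states = {"cruise", "overtake", "abort", "collision"}
trans = {
    ("cruise", "start_overtake"): "overtake",    # controllable
    ("overtake", "oncoming_car"): "collision",   # uncontrollable
    ("overtake", "finish"): "cruise",            # controllable
}
print(sorted(safe_states(states, trans, {"oncoming_car"}, {"collision"})))
```

Here `overtake` is removed because the uncontrollable `oncoming_car` event can lead to `collision`; a synthesised supervisor would therefore disable the controllable `start_overtake` event, keeping the closed-loop system within the safe set.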
