
    R2U2: Tool Overview

    R2U2 (Realizable, Responsive, Unobtrusive Unit) is an extensible framework for runtime System Health Management (SHM) of cyber-physical systems. R2U2 can be run in hardware (e.g., FPGAs) or software; can monitor hardware, software, or a combination of the two; and can analyze a range of different types of system requirements during runtime. An R2U2 requirement is specified using a hierarchical combination of building blocks: temporal formula runtime observers (in LTL or MTL), Bayesian networks, sensor filters, and Boolean testers. Importantly, the framework is extensible; it is designed to enable definitions of new building blocks in combination with the core structure. Originally deployed on Unmanned Aerial Systems (UAS), R2U2 is designed to run on a wide range of embedded platforms, from autonomous systems like rovers, satellites, and robots, to human-assistive ground systems and cockpits. R2U2 is named after the requirements it satisfies; while the exact requirements vary by platform and mission, the ability to formally reason about realizability, responsiveness, and unobtrusiveness is necessary for flight certifiability, safety-critical system assurance, and achievement of technology readiness levels for target systems. Realizability ensures that R2U2 is sufficiently expressive to encapsulate meaningful runtime requirements while maintaining adaptability to run on different platforms, transition between different mission stages, and update quickly between missions. Responsiveness entails continuously monitoring the system under test, real-time reasoning, reporting intermediate status, and as-early-as-possible requirements evaluations. Unobtrusiveness ensures compliance with the crucial properties of the target architecture: functionality, certifiability, timing, tolerances, cost, or other constraints.
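    A temporal-formula runtime observer of the kind R2U2 composes can be illustrated with a small hand-rolled monitor. The following sketch (not R2U2's actual implementation; the property and trace are assumptions for illustration) evaluates the bounded-response MTL property G(request -> F[0,3] grant), i.e. every request must be granted within 3 time steps:

```python
# Illustrative runtime observer for the MTL property
# G(request -> F[0,3] grant). This is a hand-rolled sketch,
# not R2U2 code; field names are hypothetical.

def monitor(trace, bound=3):
    """trace: list of dicts with boolean keys 'request' and 'grant'.
    Returns (step, satisfied) for each request whose verdict is
    definitive; requests still inside an open window stay pending."""
    verdicts = []
    for i, step in enumerate(trace):
        if step["request"]:
            window = trace[i:i + bound + 1]
            granted = any(s["grant"] for s in window)
            # a verdict is definitive once granted, or once the
            # full time window has elapsed without a grant
            if granted or len(window) == bound + 1:
                verdicts.append((i, granted))
    return verdicts

trace = [
    {"request": True,  "grant": False},
    {"request": False, "grant": True},   # granted within bound
    {"request": True,  "grant": False},
    {"request": False, "grant": False},
    {"request": False, "grant": False},
    {"request": False, "grant": False},  # no grant within bound
]
print(monitor(trace))  # [(0, True), (2, False)]
```

    Reporting intermediate, as-early-as-possible verdicts in this style is exactly the "responsiveness" property the abstract describes.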

    Architectural Design of a Safe Mission Manager for Unmanned Aircraft Systems

    [EN] Civil Aviation Authorities are elaborating a new regulatory framework for the safe operation of Unmanned Aircraft Systems (UAS). Current proposals are based on the analysis of the specific risks of the operation as well as on the definition of some risk mitigation measures. In order to achieve the target level of safety, we propose increasing the level of automation by providing the on-board system with Automated Contingency Management functions. The aim of the resulting Safe Mission Manager System is to autonomously adapt to contingency events while still achieving mission objectives through the degradation of mission performance. In this paper, we discuss some of the architectural issues in designing this system. The resulting architecture makes a conceptual differentiation between event monitoring, decision-making on a policy for dealing with contingencies, and the execution of the corresponding policy. We also discuss how to allocate the different Safe Mission Manager components to a partitioned, Integrated Modular Avionics architecture. Finally, determinism and predictability are key aspects in contingency management due to their overall impact on safety. For this reason, we model and verify the correctness of a contingency management policy using formal methods. This work was supported by the Spanish Regional Government "Generalitat Valenciana" under contract ACIF/2016/197. Usach Molina, H.; Vila Carbó, JA.; Torens, C.; Adolf, FM. (2018). Architectural Design of a Safe Mission Manager for Unmanned Aircraft Systems. Journal of Systems Architecture. 90:94-108. https://doi.org/10.1016/j.sysarc.2018.09.003

    Verifiable self-certifying autonomous systems

    Autonomous systems are increasingly being used in safety- and mission-critical domains, including aviation, manufacturing, healthcare and the automotive industry. Systems for such domains are often verified with respect to essential requirements set by a regulator, as part of a process called certification. In principle, autonomous systems can be deployed if they can be certified for use. However, certification is especially challenging because the condition of both the system and its environment will surely change, limiting the effective use of the system. In this paper we discuss the technological and regulatory background for such systems, and introduce an architectural framework that supports verifiably-correct dynamic self-certification by the system, potentially allowing deployed systems to operate more safely and effectively.

    Optimisation-based verification process of obstacle avoidance systems for unmanned vehicles

    This thesis deals with safety verification analysis of collision avoidance systems for unmanned vehicles. The safety of the vehicle is dependent on collision avoidance algorithms and associated control laws, and it must be proven that the collision avoidance algorithms and controllers function correctly in all nominal conditions, in various failure conditions, and in the presence of possible variations in the vehicle and operational environment. The currently widespread exhaustive-search-based approaches are not suitable for safety analysis of autonomous vehicles due to the large number of possible variations and the complexity of the algorithms and systems. To address this topic, a new optimisation-based verification method is developed to verify the safety of collision avoidance systems. The proposed method formulates the worst-case analysis problem arising in the verification of collision avoidance systems as an optimisation problem and employs optimisation algorithms to automatically search for the worst cases. The minimum distance to the obstacle during the collision avoidance manoeuvre is defined as the objective function of the optimisation problem, and a realistic simulation consisting of the detailed vehicle dynamics, the operational environment, the collision avoidance algorithm and low-level control laws is embedded in the optimisation process. This enables the verification process to take into account parameter variations in the vehicle, changes in the environment, uncertainties in sensors, and in particular the mismatch between the model used for developing the collision avoidance algorithms and the real vehicle. It is shown that the resulting simulation-based optimisation problem is non-convex and may have many local optima.
    To illustrate and investigate the proposed optimisation-based verification process, the potential field method and a decision-making collision avoidance method are chosen as candidate obstacle avoidance techniques for the verification study. Five benchmark case studies are investigated in this thesis: a static obstacle avoidance system for a simple unicycle robot, a moving obstacle avoidance system for a Pioneer 3DX robot, and a 6 Degrees of Freedom fixed-wing Unmanned Aerial Vehicle with static and moving collision avoidance algorithms. It is shown that although a local optimisation method for nonlinear optimisation is quite efficient, it is not able to find the most dangerous situation. Results in this thesis show that, among all the global optimisation methods investigated, the DIviding RECTangle method provides the most promising performance for verification of collision avoidance functions in terms of its guaranteed capability to search for worst-case scenarios.
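    The worst-case formulation above can be sketched in miniature: treat scenario parameters as optimisation variables and the simulated minimum separation as the objective to be minimised. The toy below uses a crude potential-field steering law and plain random search (the thesis uses the DIRECT global optimiser; all dynamics, gains and parameter ranges here are illustrative assumptions):

```python
import math, random

# Toy worst-case search for a potential-field avoidance law.
# Scenario parameters (obstacle lateral offset, initial heading)
# and all gains are hypothetical; the thesis uses DIRECT, not
# random search.

def min_separation(obs_y, heading):
    """Simulate a unicycle steering toward goal (10, 0) while a
    potential field pushes it away from an obstacle at (5, obs_y).
    Returns the minimum robot-obstacle distance over the run."""
    x, y, th = 0.0, 0.0, heading
    d_min = float("inf")
    for _ in range(200):
        dx, dy = 5.0 - x, obs_y - y
        d = math.hypot(dx, dy)
        d_min = min(d_min, d)
        att = math.atan2(0.0 - y, 10.0 - x)   # attraction to goal
        rep = math.atan2(-dy, -dx)            # repulsion from obstacle
        w = min(1.0, 1.0 / (d * d + 1e-6))    # repulsion weight
        th = (1 - w) * att + w * rep
        x += 0.1 * math.cos(th)
        y += 0.1 * math.sin(th)
    return d_min

# Random search over the scenario space for the smallest separation.
random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-0.5, 0.5))
           for _ in range(100)]
worst = min(samples, key=lambda p: min_separation(*p))
print(f"worst-case separation: {min_separation(*worst):.3f} m")
```

    The key design point carries over to the full method: the simulator is a black box inside the objective function, so vehicle dynamics, sensor uncertainty and model mismatch can all be varied without changing the search algorithm.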

    Using Formal Methods for Autonomous Systems: Five Recipes for Formal Verification

    Formal Methods are mathematically-based techniques for software design and engineering, which enable the unambiguous description of, and reasoning about, a system's behaviour. Autonomous systems use software to make decisions without human control, are often embedded in a robotic system, are often safety-critical, and are increasingly being introduced into everyday settings. Autonomous systems need robust development and verification methods, but formal methods practitioners are often asked: Why use Formal Methods for Autonomous Systems? To answer this question, this position paper describes five recipes for formally verifying aspects of an autonomous system, collected from the literature. The recipes are examples of how Formal Methods can be an effective tool for the development and verification of autonomous systems. During design, they enable unambiguous description of requirements; in development, formal specifications can be verified against requirements; software components may be synthesised from verified specifications; and behaviour can be monitored at runtime and compared to its original specification. Modern Formal Methods often include highly automated tool support, which enables exhaustive checking of a system's state space. This paper argues that Formal Methods are a powerful tool for the repertoire of development techniques for safe autonomous systems, alongside other robust software engineering techniques. Comment: Accepted at Journal of Risk and Reliability.

    Certifying an Autonomous System to Complete Tasks Currently Reserved for Qualified Pilots

    When naval certification officials issue a safety of flight clearance, they are certifying that when the vehicle is used by a qualified pilot, that pilot can safely accomplish the mission. The pilot is ultimately responsible for the vehicle. While the naval safety of flight clearance process is an engineering-based risk mitigation process, the qualification process for military pilots is largely a trust process. When a commanding officer designates a pilot as fully qualified, they are placing their trust in the pilot's decision-making abilities during off-nominal conditions. The advent of autonomous systems will shift this established paradigm, as there will no longer be a human in the loop who is responsible for the vehicle. Yet a method for certifying an autonomous vehicle to make decisions currently reserved for qualified pilots does not exist. We propose and exercise a methodology for certifying an autonomous system to complete tasks currently reserved for qualified pilots. First, we decompose the steps currently taken by qualified pilots into their basic requirements. We then develop a specification which defines the envelope where a system can exhibit autonomous behavior. Following a formal methods approach to analyzing the specification, we develop a protocol that software developers can use to ensure the vehicle will remain within the clearance envelope when operating autonomously. Second, we analyze flight test data of an autonomous system completing a task currently reserved for qualified pilots, focusing on legacy test and evaluation methods to determine suitability for obtaining a certification. We found that the system could complete the task under controlled conditions. However, when faced with conditions that were not anticipated (situations where a pilot uses their judgment), the vehicle was unable to complete the task. Third, we highlight an issue with the use of onboard sensors to build the situational awareness of an autonomous system.
    As those sensors degrade, a point exists where the situational awareness they provide is insufficient for sound aeronautical decisions. We demonstrate, through modeling and simulation, an objective measure of adequate situational awareness (a subjective end) for completing a task currently reserved for qualified pilots.
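    An objective gate on situational awareness of the kind described could look like the following minimal sketch: fuse the variances of independent position sensors and refuse the task once the fused uncertainty exceeds a task threshold. The fusion rule is standard inverse-variance weighting; the threshold and sensor values are illustrative assumptions, not from the thesis:

```python
# Toy objective gate on situational awareness: fuse independent
# position-sensor standard deviations and check the result against
# a (hypothetical) task threshold in metres.

def fused_stddev(stddevs):
    """Inverse-variance fusion of independent position estimates."""
    inv_var = sum(1.0 / (s * s) for s in stddevs)
    return (1.0 / inv_var) ** 0.5

def awareness_sufficient(stddevs, threshold_m=2.0):
    """True iff the fused uncertainty permits the task."""
    return fused_stddev(stddevs) <= threshold_m

print(awareness_sufficient([1.0, 1.5]))  # healthy sensors -> True
print(awareness_sufficient([4.0, 6.0]))  # degraded sensors -> False
```

    The point of such a gate is that the go/no-go decision becomes a measurable property of the sensor suite rather than a judgment call delegated to software.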

    Certification Considerations for Adaptive Systems

    Advanced capabilities planned for the next generation of aircraft, including those that will operate within the Next Generation Air Transportation System (NextGen), will necessarily include complex new algorithms and non-traditional software elements. These aircraft will likely incorporate adaptive control algorithms that will provide enhanced safety, autonomy, and robustness during adverse conditions. Unmanned aircraft will operate alongside manned aircraft in the National Airspace (NAS), with intelligent software performing the high-level decision-making functions normally performed by human pilots. Even human-piloted aircraft will necessarily include more autonomy. However, there are serious barriers to the deployment of new capabilities, especially for those based upon software including adaptive control (AC) and artificial intelligence (AI) algorithms. Current civil aviation certification processes are based on the idea that the correct behavior of a system must be completely specified and verified prior to operation. This report by Rockwell Collins and SIFT documents our comprehensive study of the state of the art in intelligent and adaptive algorithms for the civil aviation domain, categorizing the approaches used and identifying gaps and challenges associated with certification of each approach

    An investigation into hazard-centric analysis of complex autonomous systems

    This thesis proposes the hypothesis that a conventional, and essentially manual, HAZOP process can be improved with information obtained from model-based dynamic simulation, using a Monte Carlo approach to update a Bayesian belief model representing the expected relations between causes and effects, and thereby produce an enhanced HAZOP. The work considers how the expertise of a hazard and operability study team might be augmented with access to behavioural models, simulations and belief inference models. This incorporates models of dynamically complex system behaviour, considering where these might contribute to the expertise of a hazard and operability study team, and how they might bolster trust in the portrayal of system behaviour. Using a questionnaire containing behavioural outputs from a representative systems model, responses were collected from a group with relevant domain expertise. From this it is argued that the quality of analysis depends upon the experience and expertise of the participants, but that this might be artificially augmented using probabilistic data derived from a system dynamics model. Consequently, Monte Carlo simulations of an improved exemplar system dynamics model are used to condition a behavioural inference model and also to generate measures of emergence associated with the deviation parameter used in the study. A Bayesian approach to probability is adopted where particular events and combinations of circumstances are effectively unique or hypothetical, and perhaps irreproducible in practice. It is shown that a Bayesian model, representing beliefs expressed in a hazard and operability study and conditioned by the likely occurrence of flaw events causing specific deviant behaviour, as evidenced in the system's dynamic behaviour, can combine intuitive estimates based upon experience and expertise with quantitative statistical information representing plausible evidence of safety constraint violation.
    A further behavioural measure identifies potential emergent behaviour by way of a Lyapunov exponent. Together these improvements enhance the awareness of potential hazard cases.
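    The core conditioning step described above reduces to Bayes' rule: an expert prior on a flaw event is updated with a likelihood estimated from Monte Carlo simulation counts. A minimal sketch (the prior and the run counts are invented for illustration, not taken from the thesis):

```python
# Minimal Bayesian update of a HAZOP belief: an expert prior on a
# flaw event, conditioned on Monte Carlo evidence of a deviation.
# All numbers below are illustrative.

def posterior(prior, p_dev_given_flaw, p_dev_given_no_flaw):
    """P(flaw | deviation observed), by Bayes' rule."""
    num = p_dev_given_flaw * prior
    den = num + p_dev_given_no_flaw * (1.0 - prior)
    return num / den

# Likelihoods estimated from Monte Carlo runs of the system model:
# deviation seen in 180/200 flawed runs and 30/300 healthy runs.
p_flaw = posterior(prior=0.05,
                   p_dev_given_flaw=180 / 200,
                   p_dev_given_no_flaw=30 / 300)
print(round(p_flaw, 3))  # 0.321
```

    A 5% expert prior rises to roughly 32% once the simulated deviation evidence is taken into account, which is precisely the kind of quantitative reinforcement of intuitive estimates the thesis argues for.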

    Model Checking for Decision Making System of Long Endurance Unmanned Surface Vehicle

    This work aims to develop a model checking method to verify the decision making system of an Unmanned Surface Vehicle (USV) in a long-range surveillance mission. The scenario in this work was captured from a long endurance USV surveillance mission using C-Enduro, a USV manufactured by ASV Ltd. The C-Enduro USV may encounter multiple non-deterministic and concurrent problems, including lost communication signals, collision risk and malfunction. The vehicle is designed to utilise multiple energy sources: a solar panel, a wind turbine and a diesel generator. The energy state can be affected by the solar irradiance condition, wind condition, states of the diesel generator, sea current condition and states of the USV. In this research, the states and the interactive relations between environmental uncertainties, sensors, the USV energy system, and the USV and Ground Control Station (GCS) decision making systems are abstracted and modelled using Kripke models. The desirable properties to be verified are expressed using temporal logic statements, and finally the safety properties and the long endurance properties are verified using MCMAS, a model checker for multi-agent systems. The verification results are analyzed and show the feasibility of applying the model checking method to retrospectively verify the desirable properties of the USV decision making system. This method could assist researchers in identifying potential design errors of decision making systems in advance.
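    The essence of checking a safety property over a Kripke model is reachability analysis: search the state graph and confirm no bad state is reachable. The hand-rolled sketch below illustrates this on a toy USV decision model; the states, labels and transition relation are invented for illustration (the paper itself uses MCMAS, not ad hoc code):

```python
from collections import deque

# Explicit-state reachability check of a safety property over a toy
# Kripke model of a USV decision system. States are (mode, comms)
# pairs; the policy sends the USV to loiter on comms loss.

transitions = {
    ("transit", "comms_ok"):  {("transit", "comms_ok"), ("loiter", "comms_lost")},
    ("loiter", "comms_lost"): {("loiter", "comms_lost"), ("transit", "comms_ok")},
    ("loiter", "comms_ok"):   {("transit", "comms_ok")},
}

def check_safety(init, bad):
    """BFS over the state graph: True iff no bad state is reachable
    from init, i.e. the safety property AG !bad holds."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if bad(s):
            return False
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True

# Safety property: the USV never transits while comms are lost.
bad = lambda s: s == ("transit", "comms_lost")
print(check_safety(("transit", "comms_ok"), bad))  # True
```

    Dedicated checkers such as MCMAS do the same search symbolically and over temporal-epistemic logics, which is what makes exhaustive verification of much larger multi-agent decision models feasible.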