11 research outputs found

    Model-Driven Development of Safety Architectures

    Get PDF
    We describe the use of model-driven development for safety assurance of a pioneering NASA flight operation involving a fleet of small unmanned aircraft systems (sUAS) flying beyond visual line of sight. The central idea is to develop a safety architecture that provides the basis for risk assessment and visualization within a safety case, the formal justification of acceptable safety required by the aviation regulatory authority. A safety architecture is composed from a collection of bow tie diagrams (BTDs), a practical approach to managing safety risk that links identified hazards to appropriate mitigation measures. The safety justification for a given unmanned aircraft system (UAS) operation can have many related BTDs. In practice, however, each BTD is developed independently, which poses challenges for incremental development, for maintaining consistency across different safety artifacts when changes occur, and for extracting and presenting stakeholder-specific information relevant to decision making. We show how a safety architecture reconciles the various BTDs of a system, providing an overarching picture of system safety by treating them as views of a unified model. We also show how it enables model-driven development of BTDs, replete with validations, transformations, and a range of views. Our approach, which we have implemented in our toolset, AdvoCATE, is illustrated with a running example drawn from a real UAS safety case. The models and some of the innovations described here were instrumental in successfully obtaining regulatory flight approval.
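    As a concrete illustration of treating BTDs as views of a unified model, the sketch below represents a bow tie diagram as data and runs one cross-diagram validation. All class and field names are hypothetical and do not reflect AdvoCATE's actual metamodel.

        # Hypothetical BTD representation; a sketch, not AdvoCATE's metamodel.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class BowTie:
            hazard: str                     # identified hazard
            top_event: str                  # loss of control over the hazard
            threats: List[str]              # causes leading to the top event
            consequences: List[str]         # outcomes if the top event occurs
            prevention: List[str] = field(default_factory=list)  # barriers before the top event
            recovery: List[str] = field(default_factory=list)    # barriers after it

        def escalation_links(btds: List[BowTie]) -> List[str]:
            """Example validation over the unified model: find consequences in
            one BTD that appear as the hazard of another, so linked diagrams
            can be kept consistent when either one changes."""
            hazards = {b.hazard for b in btds}
            return [c for b in btds for c in b.consequences if c in hazards]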

    The Last Decade in Review: Tracing the Evolution of Safety Assurance Cases through a Comprehensive Bibliometric Analysis

    Full text link
    Safety assurance is of paramount importance across various domains, including automotive, aerospace, and nuclear energy, where the reliability and acceptability of mission-critical systems are imperative. This assurance is effectively realized through safety assurance cases, which allow the correctness of a system's capabilities to be verified, helping to prevent system failures. Such failures may result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss. Still, the emergence of complex technologies such as cyber-physical systems (CPSs), characterized by their heterogeneity, autonomy, machine-learning capabilities, and the uncertainty of their operational environments, poses significant challenges for safety assurance activities. Several papers have proposed solutions to tackle these challenges, but, to the best of our knowledge, no secondary study investigates the trends, patterns, and relationships characterizing the safety case scientific literature. This makes it difficult to have a holistic view of the safety case landscape and to identify the most promising future research directions. In this paper, we therefore rely on state-of-the-art bibliometric tools (e.g., VOSviewer) to conduct a bibliometric analysis that allows us to generate valuable insights, identify key authors and venues, and gain a bird's-eye view of the current state of research in the safety assurance area. By revealing knowledge gaps and highlighting potential avenues for future research, our analysis provides an essential foundation for researchers, corporate safety analysts, and regulators seeking to embrace or enhance safety practices that align with their specific needs and objectives.
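    To make the underlying computation concrete, the sketch below derives co-authorship links and venue tallies from a handful of invented paper records; tools such as VOSviewer build similar co-occurrence networks from exported citation databases. All records and field names here are illustrative assumptions.

        # Illustrative bibliometric counts over invented records.
        from collections import Counter
        from itertools import combinations

        papers = [
            {"authors": ["Author A", "Author B"], "venue": "SAFECOMP"},
            {"authors": ["Author A", "Author C"], "venue": "ICSE"},
        ]

        coauthorship = Counter()
        for p in papers:
            for pair in combinations(sorted(set(p["authors"])), 2):
                coauthorship[pair] += 1      # edge weights of a co-authorship network

        venues = Counter(p["venue"] for p in papers)  # key venues by output count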

    Assuring Ground-Based Detect and Avoid for UAS Operations

    Get PDF
    One of the goals of the Marginal Ice Zones Observations and Processes Experiment (MIZOPEX) NASA Earth science mission was to show the operational capabilities of Unmanned Aircraft Systems (UAS) when deployed on challenging missions in difficult environments. Given the extreme conditions of the Arctic environment where MIZOPEX measurements were required, the mission opted to use a radar to provide a ground-based detect-and-avoid (GBDAA) capability as an alternate means of compliance (AMOC) with the see-and-avoid federal aviation regulation. This paper describes how GBDAA safety assurance was provided by interpreting and applying the guidelines in the national policy for UAS operational approval. In particular, we describe how we formulated the appropriate safety goals, defined the processes and procedures for system safety, identified and assembled the relevant safety verification evidence, and created an operational safety case in compliance with Federal Aviation Administration (FAA) requirements. To the best of our knowledge, the safety case, which was ultimately approved by the FAA, is the first successful example of non-military UAS operations using GBDAA in the U.S. National Airspace System (NAS), and, therefore, the first non-military application of the safety case concept in this context.

    Automated identification and qualitative characterization of safety concerns reported in UAV software platforms

    Get PDF
    Unmanned Aerial Vehicles (UAVs) are now used in a variety of applications. Given the cyber-physical nature of UAVs, software defects in these systems can cause issues with safety-critical implications. An important aspect of the lifecycle of UAV software is to minimize the possibility of harming humans or damaging property through a continuous process of hazard identification and safety risk management. Specifically, safety-related concerns typically emerge during the operation of UAV systems and are reported by end-users and developers in the form of issue reports and pull requests. However, popular UAV systems receive tens or hundreds of reports of varying type and quality every day. To help developers promptly identify and triage safety-critical UAV issues, we (i) experiment with automated approaches (previously used for issue classification) for detecting safety-related matters in the titles and descriptions of issues and pull requests reported in UAV platforms, and (ii) propose a categorization of the main hazards and accidents discussed in such issues. Our results (i) show that shallow machine learning-based approaches can identify safety-related sentences with precision, recall, and F-measure values of about 80%; and (ii) provide a categorization and description of the relationships between safety-issue hazards and accidents.
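    The sketch below illustrates the kind of shallow machine-learning pipeline the study evaluates: TF-IDF features over issue and pull-request text feeding a linear classifier, scored with precision, recall, and F-measure. The toy data and model choice are assumptions for illustration, not the paper's actual dataset or configuration.

        # Illustrative safety-related sentence classifier (scikit-learn).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import precision_recall_fscore_support
        from sklearn.pipeline import make_pipeline

        train_texts = ["drone crashed after GPS signal loss",
                       "update README badges",
                       "motor failsafe not triggered on low battery",
                       "fix typo in docs"]
        train_labels = [1, 0, 1, 0]            # 1 = safety-related

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression())
        clf.fit(train_texts, train_labels)

        test_texts = ["propeller detached mid-flight", "bump dependency version"]
        precision, recall, f1, _ = precision_recall_fscore_support(
            [1, 0], clf.predict(test_texts), average="binary", zero_division=0)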

    Generation of model-based safety arguments from automatically allocated safety integrity levels

    Get PDF
    To certify safety-critical systems, assurance arguments linking evidence of safety to appropriate requirements must be constructed. However, modern safety-critical systems feature increasing complexity and integration, which render manual approaches impractical. This thesis addresses this problem by introducing a model-based method, with an exemplary application in the aerospace domain. Previous work has partially addressed this problem for slightly different applications, including verification-based, COTS, product-line, and process-based assurance. Each of these approaches is applicable to a specialised case and does not deliver a solution applicable to a generic system in a top-down process. This thesis argues that such a solution is feasible and can be achieved based on the automatic allocation of safety requirements onto a system's architecture. This automatic allocation is a recent development which combines model-based safety analysis and optimisation techniques. The proposed approach emphasises the use of model-based safety analysis, such as HiP-HOPS, to maximise the benefits to the system development lifecycle. The thesis investigates the background and earlier work regarding the construction of safety arguments, safety requirements allocation, and optimisation. A method for addressing the problem of optimal safety requirements allocation is first introduced, using the Tabu Search optimisation metaheuristic. The method delivers satisfactory results that are further exploited for the construction of safety arguments. Using the produced requirements allocation, an instantiation algorithm is applied to a standards-compliant, generic safety argument pattern to automatically construct an argument establishing the claim that a system's safety requirements have been met. This argument is hierarchically decomposed and shows how system and subsystem safety requirements are satisfied by architectures and analyses at low levels of decomposition. Evaluation on two abstract case studies demonstrates the feasibility and scalability of the method and indicates good performance of the proposed algorithms. Limitations and potential areas of further investigation are identified.
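    The sketch below shows a Tabu Search in the spirit of the allocation step described above: integrity levels are assigned to redundant components so that a decomposition constraint holds while allocation cost is minimised. The cost table, constraint, and move structure are simplified assumptions, not the thesis's actual HiP-HOPS-based formulation.

        # Illustrative SIL allocation via Tabu Search (assumed cost model).
        COST = [0, 10, 20, 40, 80]   # assumed cost of meeting SIL 0..4
        REQUIRED = 4                 # assumed rule: component SILs must sum to >= 4
        N = 3                        # redundant components sharing the requirement

        def cost(alloc):
            return sum(COST[s] for s in alloc)

        def neighbours(alloc):
            """Moves: raise or lower one component's SIL by one level."""
            for i in range(N):
                for d in (-1, 1):
                    s = alloc[i] + d
                    if 0 <= s <= 4:
                        yield (i, d), alloc[:i] + [s] + alloc[i + 1:]

        def tabu_search(iters=100, tenure=5):
            current = [4] * N        # start from a safe but costly allocation
            best, tabu = current, []
            for _ in range(iters):
                moves = [(m, a) for m, a in neighbours(current)
                         if sum(a) >= REQUIRED and m not in tabu]
                if not moves:
                    break
                (i, d), current = min(moves, key=lambda ma: cost(ma[1]))
                tabu = (tabu + [(i, -d)])[-tenure:]  # forbid undoing recent moves
                if cost(current) < cost(best):
                    best = current
            return best, cost(best)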

    Tool Support for Assurance Case Development

    Get PDF
    Argument-based assurance cases, often represented and organized using graphical argument structures, are increasingly being used in practice to provide assurance to stakeholders, e.g., regulatory authorities, that a system is acceptable for its intended use with respect to dependability and safety concerns. In general, comprehensive system-wide assurance arguments aggregate a substantial amount of diverse information, such as the results of safety analysis, requirements analysis, design, verification, and other engineering activities. Although a variety of assurance case tools exist, many desirable argument structure operations, such as hierarchical and modular abstraction, argument pattern instantiation, and the inclusion and extraction of richly structured information, have limited or no automation support. Consequently, a considerable amount of time and effort can be spent in creating, understanding, evaluating, and managing argument structures. Over the past three years, we have been developing a toolset for assurance case automation, AdvoCATE, at the NASA Ames Research Center, to close this automation gap. This paper describes how AdvoCATE is being engineered atop formal foundations for assurance case argument structures, to provide unique capabilities for: (a) automated creation and assembly of assurance arguments, (b) integration of formal methods into wider assurance arguments, (c) automated pattern instantiation, (d) hierarchical abstraction, (e) queries and views, and (f) verification of arguments. We (and our colleagues) have used AdvoCATE in real projects for safety and airworthiness assurance, in the context of both manned and unmanned aircraft systems.
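    The sketch below illustrates one of the listed capabilities, automated pattern instantiation: a pattern whose nodes contain placeholders is turned into a concrete argument fragment by substituting bindings. The node structure and placeholder syntax are simplified stand-ins, not AdvoCATE's notation.

        # Illustrative GSN-style pattern instantiation (not AdvoCATE's format).
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class Node:
            kind: str               # "Goal", "Strategy", or "Solution"
            text: str
            children: List["Node"] = field(default_factory=list)

        def instantiate(pattern: Node, bindings: Dict[str, str]) -> Node:
            """Recursively replace placeholders with concrete terms."""
            return Node(pattern.kind,
                        pattern.text.format(**bindings),
                        [instantiate(c, bindings) for c in pattern.children])

        pattern = Node("Goal", "{system} is acceptably safe", [
            Node("Strategy", "Argument over identified hazards of {system}", [
                Node("Goal", "Hazard {hazard} is adequately mitigated")])])

        argument = instantiate(pattern, {"system": "sUAS fleet", "hazard": "H1"})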

    A method for the development and certification of embedded systems software based on coloured Petri nets and assurance cases

    Get PDF
    Embedded systems are part of the general population's everyday life, from domestic to industrial and governmental environments. The use of embedded systems has grown as a result, for example, of the spread of wireless communication, of low-power and portable electronic devices, and of software embedded in electronic equipment. Embedded software can be designed as part of anything from simple embedded systems that control domestic equipment to safety-critical systems. The more complex an embedded system is, the more likely adverse situations are to occur, leading to financial risks, safety risks, and others. In safety-critical embedded systems (e.g., medical, avionics, and aerospace), failures may result in disasters and injuries to the population. Given this scenario, systems must be developed so that they are safe and effective, and conform to regulatory requirements. An important challenge arising from this situation is therefore to develop systems according to their requirements specification while making them reliable and certifiable. This work is situated in the context of safety-critical embedded systems. A method for developing and certifying the software embedded in such systems is proposed. The method is based on Coloured Petri Nets (CPN) and assurance cases represented with the Goal Structuring Notation (GSN). Concepts related to prescriptive (process standards) and goal-based (product features) certification processes are integrated during the development process. Moreover, the definition and traceability of regulatory and product-specific requirements, together with the verification of conformance to regulatory requirements, are carried out through assurance cases. Finally, a case study on an Electrocardiography (ECG) system configured as a cardiac monitor is presented. The case study serves as an implementation scenario and an experimental evaluation of the method.
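    To make the CPN side of the method concrete, the sketch below shows a single coloured Petri net transition: places hold typed tokens, and the transition consumes and produces tokens only when its guard holds. The places, token types, and thresholds are invented, loosely themed on the ECG case study.

        # Illustrative coloured Petri net transition (invented example).
        from collections import Counter

        places = {
            "samples": Counter({("lead_I", 140): 1}),  # tokens: (lead, heart rate)
            "alarms": Counter(),
        }

        def fire_check_rate(low=50, high=120):
            """Transition: consume a sample token and emit an alarm token
            when the guard (rate out of range) is satisfied."""
            for token, count in list(places["samples"].items()):
                lead, rate = token
                if count > 0 and not (low <= rate <= high):  # transition guard
                    places["samples"][token] -= 1            # consume input token
                    places["alarms"][(lead, rate)] += 1      # produce output token
                    return True
            return False                                     # not enabled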

    Safety Analysis Concept and Methodology for EDDI development (Initial Version)

    Get PDF
    Executive Summary: This deliverable describes the proposed safety analysis concept and accompanying methodology to be defined in the SESAME project. Three overarching challenges to the development of safe and secure multi-robot systems are identified: complexity, intelligence, and autonomy. In each case, we review state-of-the-art techniques that can be used to address them and explain how we intend to integrate them as part of the key SESAME safety and security concept, the EDDI. The challenge of complexity is largely addressed by means of compositional model-based safety analysis techniques that break the complexity down into more manageable parts. This applies both to scale, by modelling systems hierarchically and embedding local failure logic at the component level, and to tasks, where different safety-related tasks (including not just analysis but also requirements allocation and assurance case generation) can be handled by the same set of models. All of this can be combined with the existing DDI concept to create models, EDDIs, that store all of the information necessary to support a gamut of design-time safety processes. Against the challenge of intelligence, we propose a trio of techniques: SafeML and Uncertainty Wrappers for estimating the confidence of a given classification, which can be used as a form of reliability measure, and SMILE for explainability purposes. By enabling us to measure and explain the reliability of ML decision making, these techniques let us integrate ML behaviour into a wider system safety model, e.g. as one input into a fault tree or Bayesian network. In addition to providing valuable feedback during training, testing, and verification, this allows the EDDI to perform runtime safety monitoring of ML components. The EDDI itself is therefore our primary solution to the twin challenges of autonomy and openness. Using the ConSert approach as a foundation, EDDIs can operate cooperatively as part of a distributed system, issuing and receiving guarantees on the basis of their internal executable safety models to collectively achieve tasks in a safe and secure manner. Finally, a simple methodology is defined to show how the relevant techniques can be applied as part of the EDDI concept throughout the safety development lifecycle.
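    The sketch below illustrates the compositional idea: each component carries local failure logic, and the system-level top event is composed from those fragments, in the style of HiP-HOPS-like model-based safety analysis. Component names, failure expressions, and the brute-force enumeration are simplifying assumptions, not SESAME's actual EDDI models.

        # Illustrative compositional failure logic (not the EDDI format).
        from itertools import product

        # Local failure logic: each component's output failure as a
        # function of its input failures and internal faults.
        def sensor(fault): return fault
        def voter(a, b): return a and b          # both redundant channels must fail
        def actuator(cmd_loss, fault): return cmd_loss or fault

        def top_event(s1, s2, act):
            """Compose component fragments into the system-level failure."""
            cmd_loss = voter(sensor(s1), sensor(s2))
            return actuator(cmd_loss, act)

        # Brute-force the basic-event combinations that cause the top
        # event (cut sets, unminimised).
        events = ("s1", "s2", "act")
        cut_sets = [dict(zip(events, vals))
                    for vals in product([False, True], repeat=3)
                    if top_event(*vals)]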

    The Integration of Explanation-Based Learning and Fuzzy Control in the Context of Software Assurance as Applied to Modular Avionics

    Get PDF
    A Modular Power Management System (MPMS) is an energy management system intended for highly modular applications, able to adapt intelligently to changing hardware. The literature on Integrated Modular Avionics (IMA) has not previously addressed the implications for software operating within this architecture, namely the adaptation of control laws to changing hardware. This work proposes approaches to address this issue. Control laws may require adaptation to overcome hardware degradation or to accommodate system upgrades. There is also growing interest in the ability to change the hardware configuration of UASs (Unmanned Aerial Systems) between missions, to better fit the characteristics of each one. Hardware changes in the aviation industry come with an additional caveat: for a software system to be used in aviation, it must be certified as part of a platform, and this certification process has no clear guidelines for adaptive systems. Adapting to a changing platform, as well as addressing the necessary certification effort, motivated the development of the MPMS. The aim of the work is twofold: firstly, to modify existing control strategies for new hardware, which is achieved with generalisation and transfer learning; secondly, to reduce the workload involved in maintaining a safety argument for an adaptive controller. Three areas of work are used to demonstrate the satisfaction of this aim. Explanation-Based Learning (EBL) is proposed for the derivation of new control laws. The EBL domain theory embodies general control strategies, which are specialised to form fuzzy rules. A method for translating explanation structures into fuzzy rules is presented. The generation of specific rules from a general control strategy is one way to adapt to controlling a modular platform. A fuzzy controller executes the rules derived by EBL. This maintains fast rule execution as well as the separation of strategy and application. The ability of EBL to generate rules that are useful when executed by a fuzzy controller is demonstrated experimentally: a domain theory for controlling throttle output is used to generate fuzzy rules, which have a positive impact on energy consumption in simulated flight. EBL is proposed for rule derivation because it focuses on generalisation. Generalisations can apply knowledge from one situation, or one hardware configuration, to another, which can be preferable to re-deriving similar control laws. Furthermore, EBL can be augmented with analogical reasoning when it reaches an impasse. An algorithm that integrates analogy into EBL has been developed as part of this work. The inclusion of analogical reasoning facilitates transfer learning, which furthers the flexibility of the MPMS in adapting to new hardware. The adaptive capability of the MPMS is demonstrated by application to multiple simulated platforms. EBL produces explanation structures, and augmenting these with a safety-specific domain theory can produce skeletal safety cases; a technique to achieve this has been developed, and example structures are generated for previously derived fuzzy rules. Generating safety cases from explanation structures can form the basis for an adaptive safety argument.
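    The sketch below illustrates the execution side: a fuzzy controller evaluates rules of the kind EBL could derive (e.g. "if power demand is high then throttle is high"), using triangular membership functions and weighted-average defuzzification. The fuzzy sets, rule base, and variable names are illustrative assumptions, not the MPMS's actual rules.

        # Illustrative fuzzy throttle controller (invented rule base).
        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        # Fuzzy sets over normalised power demand in [0, 1].
        demand_sets = {"low": (-0.5, 0.0, 0.5), "high": (0.5, 1.0, 1.5)}
        # Rule consequents: crisp throttle output per rule.
        rules = {"low": 0.2, "high": 0.9}

        def throttle(demand):
            """Weighted average of rule consequents by firing strength."""
            firing = {n: tri(demand, *demand_sets[n]) for n in rules}
            total = sum(firing.values())
            return 0.0 if total == 0.0 else (
                sum(firing[n] * rules[n] for n in rules) / total)

        print(throttle(0.3))   # mostly the "low" rule fires -> 0.2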