
    How Do Practitioners Perceive Assurance Cases in Safety-Critical Software Systems?

    Safety-critical software systems are those whose failure or malfunction could result in casualties and/or serious financial loss. In such systems, safety assurance cases (SACs) are an emerging approach that adopts a proactive strategy to produce structured safety justifications and arguments. While SACs are recommended in many software-intensive safety-critical domains, the lack of knowledge regarding practitioners' perspectives on using SACs hinders effective adoption of this approach. To gain such knowledge, we interviewed nine practitioners and safety experts who focus on safety-critical software systems. In general, our participants found the SAC approach beneficial for communicating safety arguments and managing safety issues in a multidisciplinary setting. The challenges they faced when using SACs were primarily associated with (1) a lack of tool support, (2) insufficient process integration, and (3) scarcity of experienced personnel. To overcome those challenges, our participants suggested tactics that focused on creating direct safety arguments. Process and organizational adjustments are also needed to streamline SAC analysis and creation. Finally, our participants emphasized the importance of knowledge sharing about SACs across software-intensive safety-critical domains.
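
    To make the idea of a structured safety justification concrete, here is a minimal sketch (not taken from the study) of how a SAC's claim-argument-evidence structure might be represented, loosely following Goal Structuring Notation; all class names and the example case are illustrative assumptions.

```python
# A minimal sketch of a safety assurance case (SAC) as a
# claim-argument-evidence tree, loosely following Goal Structuring
# Notation (GSN). Names and the example are illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    id: str
    text: str
    children: List["Node"] = field(default_factory=list)


@dataclass
class Goal(Node):      # a safety claim to be supported
    pass


@dataclass
class Strategy(Node):  # how a goal is decomposed into sub-goals
    pass


@dataclass
class Evidence(Node):  # a leaf solution, e.g. a test report
    pass


def undeveloped_goals(node: Node) -> List[Node]:
    """Return goals that have no supporting argument or evidence yet."""
    found = []
    if isinstance(node, Goal) and not node.children:
        found.append(node)
    for child in node.children:
        found.extend(undeveloped_goals(child))
    return found


case = Goal("G1", "The braking software is acceptably safe", [
    Strategy("S1", "Argue over each identified hazard", [
        Goal("G2", "Hazard H1 (late braking) is mitigated", [
            Evidence("E1", "Timing test report TR-42"),
        ]),
        Goal("G3", "Hazard H2 (spurious braking) is mitigated"),
    ]),
])
print([g.id for g in undeveloped_goals(case)])  # ['G3']
```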

    Integration of Safety Analysis in Model-Driven Software Development

    Safety-critical software requires integrating verification techniques into software development methods. Software architectures must guarantee that developed systems will meet safety requirements, and safety analyses are frequently used in the assessment. Safety engineers and software architects must reach a common understanding of an architecture that is optimal from both perspectives. Currently, the two groups of engineers apply different modelling techniques and languages: safety analysis models and software modelling languages. The proposed solutions seek to integrate both domains by coupling their languages, a sound example of using language engineering to improve efficiency in a software-related domain. A model-driven development approach and a platform-independent language are used to bridge the gap between safety analyses (failure mode, effects and criticality analysis, and fault tree analysis) and software development languages (e.g., the Unified Modelling Language). Language abstract syntaxes (metamodels), profiles, language mappings (model transformations), and language refinements support the direct application of safety analysis to software architectures for the verification of safety requirements. Model consistency and the possibility of automation are among the benefits.
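
    As a rough illustration of the kind of language mapping described above (not the paper's actual metamodels), the sketch below transforms a simple component model annotated with FMECA-style failure modes into a fault tree fragment; all names are assumptions.

```python
# An illustrative model transformation: a UML-like component model
# (source metamodel) is mapped to a fault tree (target metamodel),
# so each component failure mode becomes a basic event under an OR
# gate for the system-level failure. Names are invented.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Component:                      # software-side metamodel
    name: str
    failure_modes: List[str] = field(default_factory=list)


@dataclass
class FaultTreeNode:                  # safety-side metamodel
    label: str
    gate: str = "BASIC"               # "OR", "AND", or "BASIC"
    children: List["FaultTreeNode"] = field(default_factory=list)


def to_fault_tree(components: List[Component], top_event: str) -> FaultTreeNode:
    """Transform: any single component failure causes the top event (OR)."""
    basic = [
        FaultTreeNode(f"{c.name}: {fm}")
        for c in components
        for fm in c.failure_modes
    ]
    return FaultTreeNode(top_event, gate="OR", children=basic)


arch = [
    Component("SensorDriver", ["no output", "stale data"]),
    Component("ControlLaw", ["wrong command"]),
]
tree = to_fault_tree(arch, "Loss of braking function")
for leaf in tree.children:
    print(leaf.label)
```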

    Two techniques for software safety analysis

    Currently, many safety-critical systems are being built. Safety-critical systems are those software systems where a single failure or hazard may cause catastrophic consequences. Therefore, safety is a property which must be satisfied for safety-critical systems. This research develops techniques to address two areas of software safety analysis in which structured methodologies have been lacking. The first contribution of the paper is to define a top-down, tree-based analysis technique, the Fault Contribution Tree Analysis (FCTA), that operates on the results of a product-family domain analysis. This paper then describes a method by which the FCTA of a product family can serve as a reusable asset in the building of new members of the family. Specifically, we describe both the construction of the fault contribution tree for a product family (domain engineering) and the reuse of the appropriately pruned fault contribution tree for the analysis of a new member of the product family (application engineering). The second contribution of the paper is to develop an analysis process which combines the different perspectives of system decomposition with hazard analysis methods to identify the safety-related scenarios. The derived safety-related scenarios are the detailed instantiations of system safety requirements that serve as input to future software architectural evaluation. The paper illustrates the two techniques with examples from applications to two product families in Chapter One and to a safety-critical system in Chapter Two.
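
    A minimal sketch of the reuse step described above, assuming a feature-annotated tree: subtrees tied to features absent from a new family member are pruned during application engineering. The node structure and example are illustrative, not the paper's notation.

```python
# An illustrative sketch of reusing a product-family fault
# contribution tree: subtrees tagged with features the new family
# member does not include are pruned away. Field names are assumed.
from dataclasses import dataclass, field
from typing import List, Optional, Set


@dataclass
class FCTNode:
    description: str
    feature: Optional[str] = None          # feature this fault belongs to
    children: List["FCTNode"] = field(default_factory=list)


def prune(node: FCTNode, selected: Set[str]) -> Optional[FCTNode]:
    """Keep a node only if it is feature-independent or its feature is selected."""
    if node.feature is not None and node.feature not in selected:
        return None                         # the whole subtree drops out
    kept = [c for c in (prune(ch, selected) for ch in node.children) if c]
    return FCTNode(node.description, node.feature, kept)


family_tree = FCTNode("Unintended door release", None, [
    FCTNode("Manual override fault", feature="manual_override"),
    FCTNode("Remote unlock fault", feature="remote_unlock", children=[
        FCTNode("Radio link spoofed", feature="remote_unlock"),
    ]),
])

# Application engineering: a product without the remote-unlock feature.
member_tree = prune(family_tree, {"manual_override"})
print([c.description for c in member_tree.children])  # ['Manual override fault']
```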

    A Review of Formal Methods applied to Machine Learning

    We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems. Formal methods can provide rigorous correctness guarantees on hardware and software systems. Thanks to the availability of mature tools, their use is well established in industry, in particular to check safety-critical applications as they undergo a stringent certification process. As machine learning is becoming more popular, machine-learned components are now considered for inclusion in critical systems. This raises the question of their safety and their verification. Yet, established formal methods are limited to classic, i.e., non-machine-learned, software. Applying formal methods to verify systems that include machine learning has only been considered recently and poses novel challenges in soundness, precision, and scalability. We first recall established formal methods and their current use in an exemplar safety-critical field, avionic software, with a focus on abstract-interpretation-based techniques as they provide a high level of scalability. This provides a gold standard and sets high expectations for machine learning verification. We then provide a comprehensive and detailed review of the formal methods developed so far for machine learning, highlighting their strengths and limitations. The large majority of them verify trained neural networks and employ either SMT, optimization, or abstract interpretation techniques. We also discuss methods for support vector machines and decision tree ensembles, as well as methods targeting training and data preparation, which are critical but often neglected aspects of machine learning. Finally, we offer perspectives for future research directions towards the formal verification of machine learning systems.
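
    To illustrate the abstract-interpretation flavour of neural network verification discussed above, here is a minimal sketch of sound interval bound propagation through one affine layer and a ReLU; the weights and input region are invented, and real tools use much tighter abstract domains.

```python
# A minimal sketch of abstract interpretation for neural network
# verification: interval bounds are soundly propagated through an
# affine layer x -> W @ x + b, then a ReLU. Weights are made up.
import numpy as np


def affine_bounds(lo, hi, W, b):
    """Sound interval propagation through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi


def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)


# Input region: each of the two inputs lies in [-1, 1].
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
W = np.array([[0.5, -0.3], [0.2, 0.8]])
b = np.array([0.1, -0.2])

lo, hi = relu_bounds(*affine_bounds(lo, hi, W, b))
print(lo, hi)  # sound (possibly loose) output bounds for the whole region
```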

    Architecture-driven fault-based testing for software safety

    Ankara: Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2014. Thesis (Master's), Bilkent University, 2014. Includes bibliographical references (leaves 159-166). A safety-critical system is defined as a system in which the malfunctioning of software could result in death, injury, or damage to the environment. To mitigate these serious risks, the architecture of safety-critical systems needs to be carefully designed and analyzed. A common practice for modeling software architecture is the adoption of architectural perspectives and software architecture viewpoint approaches. Existing approaches tend to be general purpose and do not explicitly focus on the safety concern in particular. To provide complementary and dedicated support for designing safety-critical systems, we propose a safety perspective and an architecture framework approach for software safety. Once safety-critical systems are designed, it is important to analyze them for fitness before implementation, installation, and operation. Here, it is important to ensure that potential faults can be identified and cost-effective solutions are provided to avoid or recover from failures. In this context, one of the most important issues is to investigate the effectiveness of the safety tactics applied to safety-critical systems. Since safety-critical systems are complex, testing them is challenging, and it is very hard to define proper test suites for them. Several fault-based software testing approaches exist that aim to analyze the quality of test suites. Unfortunately, these approaches tend to be general purpose; they neither directly consider the safety concern nor take the applied safety tactics into account. We propose a fault-based testing approach for analyzing test suites using safety tactic and fault knowledge. Gürbüz, Havva Gülay. M.S.
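
    As an illustrative sketch of fault-based testing directed at a safety tactic (not the thesis's actual method), the code below mutates a 2-out-of-3 majority voter, a common fault-masking tactic, and judges a test suite by whether it kills the mutants; the tactic, mutants, and tests are assumed for illustration.

```python
# Fault-based (mutation-style) testing of a safety tactic: each
# mutant simulates a fault in the 2-out-of-3 voter, and the test
# suite is good if every mutant produces a wrong result on some test.

def voter(a, b, c):
    """2-out-of-3 majority vote over replicated channel outputs."""
    return a if a == b or a == c else b

# Hand-written mutants simulating faults in the tactic itself.
MUTANTS = {
    "always_first": lambda a, b, c: a,                      # ignores redundancy
    "needs_all":    lambda a, b, c: a if a == b == c else None,
}

TEST_SUITE = [
    ((1, 1, 1), 1),   # all channels agree
    ((1, 0, 1), 1),   # second channel faulty, must be out-voted
    ((0, 1, 1), 1),   # first channel faulty, must be out-voted
]

def kills(mutant):
    return any(mutant(*inp) != expected for inp, expected in TEST_SUITE)

for name, mutant in MUTANTS.items():
    print(name, "killed" if kills(mutant) else "SURVIVED")
```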

    The organisational precursors to human automation interaction issues in safety-critical domains: the case of an automated alarm system from the air traffic management domain

    Much has been written about the side effects of automation in complex safety-critical domains, such as air traffic management, aviation, nuclear power generation, and healthcare. Here, human factors and safety researchers have long acknowledged that the potential of automation to increase cost-effectiveness, quality of service, and safety is accompanied by undesired side effects or issues in human automation interaction (HAI). Such HAI issues may introduce the potential for increased confusion, uncertainty, and frustration amongst sharp-end operators, i.e. the users of automation. These conditions may result in operators refusing to use the automation, in an impaired ability of operators to control the hazardous processes for which they are responsible, and in new, unintended paths to safety failure. The present thesis develops a qualitative framework of the organisational precursors to HAI issues (OPHAII) that can be found in safety-critical domains. Organisational precursors denote those organisational and managerial conditions that, although distant in time and space from the operational environment, may actually influence the quality of HAI found there. Such precursors have been extensively investigated by organisational safety (OS) scholars in relation to the occurrence of accidents and disasters, although not HAI issues. Thus, the framework's development is motivated by the intent to explore the theoretical gap lying at the intersection between the OS area and the current perspectives on the problem: the human-computer interaction (HCI) and system lifecycle ones. While considering HAI issues as a design problem or as a failure in human factors integration and/or safety assurance, respectively, both perspectives in fact ignore the organisational roots of the problem. The OPHAII framework was incrementally developed through three qualitative studies: two successive historical case studies coupled with a third, corroboratory expert study. The first two studies explored the organisational precursors to a known HAI issue: the nuisance alert problem in an automated alarm system from the air traffic management domain. In particular, the first case study retrospectively investigated the organisational response to the nuisance alert problem in the context of the alarm's implementation and improvement in the US between 1977 and 2006. The second case study has a more contemporary focus and examined the organisational response to the same problem within two European Air Navigation Service Providers between 1990 and 2010. The first two studies produced a preliminary version of the framework. The third study corroborated and refined this version by subjecting it to criticism from a panel of 11 subject matter experts. The resulting framework identifies three classes of organisational precursors: (1) the organisational assumptions driving automation adoption and improvement; (2) the availability of specific organisational capabilities for handling HAI issues; and (3) the control of implementation quality at the boundary between the service provider and the software manufacturer. These precursors advance current understanding of the organisational factors involved in the (successful and problematic) handling of HAI issues within safety-critical service provider organisations. Its dimensions support the view that HAI issues can be seen as an organisational phenomenon: an organisational problem that can be the target of analysis and improvements complementary to those identified by the HCI and system lifecycle perspectives.

    Future cities and autonomous vehicles: analysis of the barriers to full adoption

    The inevitable upcoming technology of autonomous vehicles (AVs) will affect our cities and several aspects of our lives. The widespread adoption of AVs depends on overcoming the distinct barriers that currently prevent it. This paper presents a critical review of recent debates about AVs and analyses the key barriers to their full adoption. The study employed a mixed research methodology on a selected database of recently published research works. The outcomes of this review organize the barriers into two main categories: (1) user/government perspectives, which include (i) users' acceptance and behaviour, (ii) safety, and (iii) legislation; and (2) information and communication technologies (ICT), which include (i) computer software and hardware, (ii) V2X communication systems, and (iii) accurate positioning and mapping. Furthermore, a framework of the barriers and their relations to the AV system architecture is suggested to support future research and technology development.

    On Using Blockchains for Safety-Critical Systems

    Innovation in the world of today is mainly driven by software. Companies need to continuously rejuvenate their product portfolios with new features to stay ahead of their competitors. For example, recent trends explore the application of blockchains to domains other than finance. This paper analyzes the state of the art for safety-critical systems as found in modern vehicles like self-driving cars, smart energy systems, and home automation, focusing on specific challenges where key ideas behind blockchains might be applicable. Next, potential benefits unlocked by applying such ideas are presented and discussed for the respective usage scenarios. Finally, a research agenda is outlined to summarize the remaining challenges for successfully applying blockchains to safety-critical cyber-physical systems.
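
    One key idea behind blockchains that could carry over to such systems is a tamper-evident, hash-chained event log. The following minimal sketch (an assumption for illustration, with no consensus layer) shows the mechanism:

```python
# A hash-chained, tamper-evident event log: each record commits to
# the previous record's hash, so any retroactive modification breaks
# verification. Field names are illustrative.
import hashlib
import json


def append(chain, event: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "event": event}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append({**body, "hash": digest})


def verify(chain) -> bool:
    """Recompute every link; any modified record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": block["prev"], "event": block["event"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = digest
    return True


log = []
append(log, {"sensor": "lidar", "status": "degraded"})
append(log, {"actuator": "brake", "command": "engage"})
print(verify(log))                  # True
log[0]["event"]["status"] = "ok"    # tamper with history
print(verify(log))                  # False
```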

    WCET Computation of Safety-Critical Avionics Programs: Challenges, Achievements and Perspectives

    Time-critical avionics software products must compute their outputs in due time. If they do not, the safety of the avionics systems to which they belong might be affected. Consequently, the Worst-Case Execution Time (WCET) of the tasks of such programs must be computed safely, i.e., it must not be under-estimated. Since computing the exact WCET of a task in a real-sized software product is not possible (undecidability), a "safe WCET" means an over-estimated WCET. Here we have an industrial issue, in the sense that over-estimating the WCET too much leads to a waste of CPU power. Hence, the computation of a safe and precise WCET is the big challenge. Solutions to that problem cannot rely only on the technique for computing the WCET: both hardware and software must be designed to be as deterministic as possible. For its flight controls software products, Airbus has always applied these principles, but since the A380, the use of more complex processors has required moving from a technique based on measurements to a new one based on static analysis by abstract interpretation. Another kind of avionics application is the so-called high-performance avionics software product, which is significantly less affected by rare delays in the computation of its outputs. In this case, the need for a "safe WCET" is less strong, opening the door to other ways of computing it. In this context, the aim of the talk is to present the challenge of computing WCET in Airbus's industrial context, the achievements in this field, and some trends and perspectives.
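
    As a toy sketch of how a safe (over-approximated) WCET bound can be computed structurally, the code below bounds a program modelled as sequences, branches, and loops with static iteration bounds; industrial tools based on abstract interpretation also model caches and pipelines, and all numbers here are invented.

```python
# Structural WCET computation: the bound of a sequence is the sum of
# its parts, of a branch the max over its arms, and of a loop the
# static iteration bound times the body's bound. Cycle counts are toy.

def wcet(node) -> int:
    kind = node[0]
    if kind == "block":                 # ("block", cycles)
        return node[1]
    if kind == "seq":                   # ("seq", child1, child2, ...)
        return sum(wcet(child) for child in node[1:])
    if kind == "branch":                # ("branch", then_node, else_node)
        return max(wcet(node[1]), wcet(node[2]))
    if kind == "loop":                  # ("loop", max_iterations, body)
        return node[1] * wcet(node[2])  # requires a static loop bound
    raise ValueError(kind)

program = ("seq",
    ("block", 10),
    ("loop", 8, ("branch", ("block", 5), ("block", 7))),
    ("block", 3),
)
print(wcet(program))  # 10 + 8 * max(5, 7) + 3 = 69
```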
