
    FRAM for systemic accident analysis: a matrix representation of functional resonance

    Due to the inherent complexity of today's Air Traffic Management (ATM) system, standard methods that treat an event as a linear sequence of failures may be inappropriate. Adopting a systemic perspective, the Functional Resonance Analysis Method (FRAM), originally developed by Hollnagel, helps identify non-linear combinations of events and their interrelationships. This paper aims to strengthen FRAM-based accident analyses by introducing the Resilience Analysis Matrix (RAM), a user-friendly tool that supports the analyst and reduces the complexity of FRAM's representation. The RAM offers a two-dimensional representation that systematically highlights connections among couplings, and thus even highly connected groups of couplings. As an illustrative case study, the paper develops a systemic accident analysis of the runway incursion that occurred in February 1991 at LAX airport, involving SkyWest Flight 5569 and USAir Flight 1493. FRAM proves to be a powerful method for characterising the variability of the operational scenario, identifying the dynamic couplings that played a critical role during the event and helping to discuss the systemic effects of variability at different levels of analysis.
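    The matrix representation described above can be sketched as a simple adjacency structure over FRAM functions, from which highly connected groups of couplings fall out as reachability components. The function names and couplings below are invented for illustration and are not taken from the paper's LAX analysis.

    ```python
    # Hypothetical RAM-style coupling matrix: couplings[i][j] = 1 means an
    # output of function i feeds an aspect of function j. Names are illustrative.
    functions = ["ATC clearance", "Crew readback", "Runway monitoring"]
    couplings = [
        [0, 1, 1],
        [1, 0, 0],
        [0, 1, 0],
    ]

    def connected_groups(matrix):
        """Group functions linked by couplings (undirected reachability)."""
        n = len(matrix)
        seen, groups = set(), []
        for start in range(n):
            if start in seen:
                continue
            stack, group = [start], []
            while stack:
                f = stack.pop()
                if f in seen:
                    continue
                seen.add(f)
                group.append(f)
                for g in range(n):
                    if matrix[f][g] or matrix[g][f]:
                        stack.append(g)
            groups.append(sorted(group))
        return groups

    # All three illustrative functions end up in one coupled group.
    print(connected_groups(couplings))
    ```

    On a real FRAM model the matrix rows and columns would index couplings between function aspects rather than whole functions, but the same scan exposes the highly connected groups the RAM is meant to make visible.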

    Reasoning about the Reliability of Diverse Two-Channel Systems in which One Channel is "Possibly Perfect"

    This paper considers the problem of reasoning about the reliability of fault-tolerant systems with two "channels" (i.e., components), of which one, A, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of "perfection." We begin with the case where either channel can bring the system to a safe state. We show that, conditional on knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA·pB. That is, there is conditional independence between the events "A fails" and "B is imperfect." The second step of the reasoning involves epistemic uncertainty about (pA, pB), and we show that, under quite plausible assumptions, a conservative bound on the system pfd can be constructed from point estimates for just three parameters. We discuss the feasibility of establishing credible estimates for these parameters. We extend our analysis from faults of omission to those of commission, and then combine these to yield an analysis for monitored architectures of a kind proposed for aircraft.
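    The aleatory step of the argument above reduces to a one-line product. The sketch below is a minimal numeric illustration of that conservative bound, assuming pA and pB are known point values (the abstract's second, epistemic step is not modelled here); the example numbers are invented.

    ```python
    # Conservative bound from the abstract: with either channel able to bring
    # the system to a safe state, P(system fails on a demand) <= pA * pB,
    # where pA = P(channel A fails on a demand) and pB = P(channel B is imperfect).

    def system_pfd_bound(pA: float, pB: float) -> float:
        """Conservative bound on the system probability of failure per demand."""
        return pA * pB

    # Illustrative values: a 1e-4 reliability claim on A and a 1e-3 chance
    # that the simple channel B is imperfect.
    print(system_pfd_bound(1e-4, 1e-3))
    ```

    The product form is what makes the "possibly perfect" framing useful in practice: pB is a judgement about imperfection of a very simple channel, which may be easier to defend than a second small failure probability.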

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give the context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
    Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Making intelligent systems team players: Overview for designers

    This report is a guide and companion to NASA Technical Memorandum 104738, 'Making Intelligent Systems Team Players,' Volumes 1 and 2. The first two volumes of the Technical Memorandum provide comprehensive guidance to designers of intelligent systems for real-time fault management of space systems, with the objective of achieving more effective human interaction. This report provides an analysis of the material discussed in the Technical Memorandum. It clarifies what it means for an intelligent system to be a team player, and how such systems are designed. It identifies significant intelligent system design problems and their impacts on reliability and usability, and makes recommendations for the situations where common design practice is not effective in solving these problems. In this report, we summarize the main points of the Technical Memorandum and identify where to look for further information.

    Information for the user in design of intelligent systems

    Recommendations are made for improving intelligent system reliability and usability through the use of information requirements in system development. Information requirements define the task-relevant messages exchanged between the intelligent system and the user via the user interface; they therefore affect the design of both the intelligent system and its user interface. Many of the difficulties users have in interacting with intelligent systems are caused by information problems, which arise in two ways: (1) the system does not provide the right information to support domain tasks; and (2) the design does not recognize that using an intelligent system introduces new user supervisory tasks that require new types of information. These problems are especially prevalent in intelligent systems used for real-time space operations, where data problems and unexpected situations are common. Information problems can be solved by deriving information requirements from a description of user tasks. Using information requirements embeds human-computer interaction design into intelligent system prototyping, resulting in intelligent systems that are more robust and easier to use.
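    The derivation step described above (user tasks → required information → coverage check against the interface) can be sketched mechanically. The task and message names below are invented for illustration; they are not drawn from the report.

    ```python
    # Hypothetical mapping from user supervisory tasks to the information
    # each task requires from the intelligent system.
    task_information_needs = {
        "monitor fault diagnosis": ["current hypothesis", "supporting sensor data"],
        "override automation": ["system mode", "manual control procedure"],
    }

    # Messages the current user-interface design actually provides.
    ui_messages = {"current hypothesis", "system mode", "manual control procedure"}

    def missing_information(needs, provided):
        """Return, per task, the required messages the interface omits."""
        return {
            task: [m for m in msgs if m not in provided]
            for task, msgs in needs.items()
            if any(m not in provided for m in msgs)
        }

    # Reveals that fault-diagnosis monitoring lacks supporting sensor data.
    print(missing_information(task_information_needs, ui_messages))
    ```

    Running this kind of coverage check during prototyping is one way the task-derived requirements can drive both the intelligent system and its interface, as the abstract argues.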

    Learning from major accidents to improve system design

    © 2015 Elsevier Ltd. Despite massive developments in new technologies, materials and industrial systems, notably supported by advanced structural and risk control assessments, recent major accidents are challenging the practicality and effectiveness of risk control measures designed to improve reliability and reduce the likelihood of losses. Contemporary investigations of accidents that occurred in high-technology systems have highlighted the connection between human-related issues and major events that led to catastrophic consequences. Consequently, understanding human behavioural characteristics, interlaced with current technological aspects and the organisational context, is of paramount importance for the safety and reliability field. First, significant drawbacks in human performance data collection are minimised by the development of a novel industrial accidents dataset, the Multi-attribute Technological Accidents Dataset (MATA-D), which groups 238 major accidents from different industrial backgrounds and classifies them under a common framework (the Contextual Control Model, used as the basis for the Cognitive Reliability and Error Analysis Method). The collection and detailed interpretation of these accidents provide a rich data source, enabling the use of integrated information to generate input for design improvement schemes. Implications for improving the robustness of system design, and for tackling the surrounding factors and tendencies that could lead to the manifestation of human errors, are then addressed.
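    Classifying many accidents under one framework makes simple cross-accident tabulations possible, which is what turns a collection of reports into design input. The sketch below shows that idea only in outline; the accident records and factor labels are invented and do not reproduce MATA-D's actual classification scheme.

    ```python
    from collections import Counter

    # Invented records: each accident tagged with contributing-factor
    # categories under a shared (hypothetical) classification scheme.
    accidents = [
        {"name": "Accident A", "factors": ["inadequate procedure", "design failure"]},
        {"name": "Accident B", "factors": ["communication failure", "design failure"]},
        {"name": "Accident C", "factors": ["inadequate procedure"]},
    ]

    # Tabulate how often each factor recurs across the collection; recurring
    # factors are candidates for targeted design improvements.
    factor_counts = Counter(f for a in accidents for f in a["factors"])
    print(factor_counts)
    ```

    With a common framework, the same tabulation works whether the 238 accidents come from aviation, process industry or any other background, which is the point of classifying them together.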

    Safety Engineering with COTS components

    Safety-critical systems are becoming more widespread, complex and reliant on software. Increasingly they are engineered from Commercial Off The Shelf (COTS) components to alleviate spiralling costs and development time, often in the context of complex supply chains. A parallel increase in concern for safety has resulted in a variety of safety standards, with a growing consensus that a safety life cycle is needed which is fully integrated with the design and development life cycle, to ensure that safety has appropriate influence on design decisions as system development progresses. In this article we explore the application of an integrated approach to safety engineering in which assurance drives the engineering process. The paper reports on the outcome of a case study on a live industrial project, with a view to evaluating: its suitability for application in a real-world safety engineering setting; its benefits and limitations in counteracting some of the difficulties of safety engineering with COTS components across supply chains; and its effectiveness in generating evidence which can contribute directly to the construction of safety cases.