8 research outputs found

    An abnormal situation modeling method to assist operators in safety-critical systems

    © 2014 Elsevier Ltd. One of the main causes of accidents in safety-critical systems is human error. Handling abnormal situations is a highly complex and mentally taxing activity, so operators need cognitive support to reduce their workload, stress, and the consequent error rate. Of the various cognitive activities, a correct understanding of the situation, i.e. situation awareness (SA), is a crucial factor in improving performance and reducing errors. Despite the importance of SA for decision-making in time- and safety-critical situations, the difficulty of modeling and assessing SA means that very few methods have been developed so far. This study confronts that challenge and develops an innovative abnormal situation modeling (ASM) method that exploits the capabilities of risk indicators, Bayesian networks, and fuzzy logic systems: the risk indicators are used to identify abnormal situations, Bayesian networks are used to model them, and a fuzzy logic system is developed to assess them. The ASM method can be used in the development of situation assessment decision support systems that underlie the achievement of SA. The performance of the ASM method is tested through a real case study at a chemical plant.
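
    The abstract does not spell out the model, so the following is only a minimal sketch of the general idea it describes: a risk indicator observation is propagated through a (here, two-node) Bayesian network, and the resulting posterior is graded by a fuzzy logic stage. The function names, probabilities, and membership breakpoints are illustrative assumptions, not values from the paper.

        # Minimal sketch, not the authors' ASM model: a single risk indicator feeds a
        # two-node Bayesian network, and the posterior is graded by fuzzy sets.
        # All probabilities and membership breakpoints are illustrative assumptions.

        def posterior_abnormal(indicator_high: bool,
                               prior=0.05,            # assumed P(abnormal situation)
                               p_high_given_abn=0.90, # assumed P(indicator high | abnormal)
                               p_high_given_ok=0.10): # assumed P(indicator high | normal)
            """Bayes' rule for P(abnormal | indicator observation)."""
            if indicator_high:
                num = p_high_given_abn * prior
                den = num + p_high_given_ok * (1 - prior)
            else:
                num = (1 - p_high_given_abn) * prior
                den = num + (1 - p_high_given_ok) * (1 - prior)
            return num / den

        def fuzzy_risk(p):
            """Grade the posterior with complementary 'low'/'medium'/'high' fuzzy sets."""
            low = max(0.0, min(1.0, (0.4 - p) / 0.4))
            high = max(0.0, min(1.0, (p - 0.6) / 0.4))
            medium = max(0.0, 1.0 - low - high)
            return {"low": low, "medium": medium, "high": high}

        if __name__ == "__main__":
            p = posterior_abnormal(indicator_high=True)
            print(f"P(abnormal | high indicator) = {p:.3f}")   # about 0.321
            print("fuzzy risk grades:", fuzzy_risk(p))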

    Handling uncertainty in cloud resource management using fuzzy Bayesian networks

    © 2015 IEEE. The success of cloud services depends critically on the effective management of virtualized resources. This paper aims to design and implement a decision support method that handles uncertainties in resource management from the cloud provider's perspective, copes with the underlying complexity, automates resource provisioning, and controls client-perceived quality of service. The paper includes a probabilistic decision-making module that relies upon a fuzzy Bayesian network to determine the current status of a cloud infrastructure, including its physical and virtual machines, and to predict its near-future state, which helps the hypervisor migrate or expand VMs to reduce execution time and meet quality-of-service requirements. First, the resource management framework is presented. Second, the decision-making module is developed. Lastly, a series of experiments is run to investigate the performance of the proposed module. The experiments demonstrate the efficiency of the module prototype.
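
    The fuzzy Bayesian network itself is not given in the abstract; the sketch below only illustrates the flow it describes, with utilization metrics fuzzified into soft evidence, a crude Bayes update of the probability that a host is overloaded, and a threshold policy choosing a provisioning action. Every number and name (fuzzify_utilization, p_overloaded, decide) is an assumption made for illustration.

        # Illustrative sketch only: fuzzified CPU/memory utilization acts as soft
        # evidence in a crude Bayes update of P(host overloaded), and a simple
        # threshold policy picks a provisioning action. All numbers are assumed.

        def fuzzify_utilization(u):
            """Membership in the fuzzy set 'high utilization' (ramp from 0.6 to 0.9)."""
            return max(0.0, min(1.0, (u - 0.6) / 0.3))

        def p_overloaded(cpu, mem, prior=0.2, lr_high=4.0, lr_low=0.5):
            """Blend likelihood ratios by fuzzy membership and update the prior odds."""
            odds = prior / (1 - prior)
            for mu in (fuzzify_utilization(cpu), fuzzify_utilization(mem)):
                odds *= mu * lr_high + (1 - mu) * lr_low   # soft evidence
            return odds / (1 + odds)

        def decide(cpu, mem):
            """Map the overload probability to a coarse resource-management action."""
            p = p_overloaded(cpu, mem)
            if p > 0.7:
                return p, "migrate or scale out VMs"
            if p > 0.4:
                return p, "monitor closely"
            return p, "no action"

        if __name__ == "__main__":
            print(decide(cpu=0.92, mem=0.85))   # about (0.77, 'migrate or scale out VMs')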

    HAZOP: Our Primary Guide in the Land of Process Risks: How can we improve it and do more with its results?

    All risk management starts with determining what can happen, so reliable predictive analysis is key. We therefore perform process hazard analysis, which should result in scenario identification and definition; its inputs are material/substance properties together with process conditions and possible deviations and mishaps. Over the years HAZOP has been the most important tool for identifying potential process risks by systematically considering deviations in observables, determining possible causes and consequences, and, where necessary, suggesting improvements. The drawbacks of HAZOP are well known: it is effort-intensive while the results are used only once, the exercise must be repeated at several stages of process build-up, and once the process is operational it must be re-conducted periodically. There have been many past attempts to semi-automate the HAZOP procedure to ease the effort of conducting it, but lately promising new developments have also enabled the results to be used for operational fault diagnosis. This paper reviews the directions in which improved automation of HAZOP is progressing and how the results, besides supporting risk analysis and the design of preventive and protective measures, can also be used during operations for early warning of upcoming abnormal process situations.
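
    As a hedged illustration of the combinatorial core that HAZOP semi-automation tools mechanize (not a description of any specific tool reviewed in the paper), the sketch below crosses process parameters with guidewords to enumerate candidate deviations for screening; the parameter and guideword lists are generic examples.

        # Sketch of deviation enumeration, the repetitive part of HAZOP that
        # automation targets: cross parameters with guidewords, skip pairs that
        # make no physical sense, and hand the rest to cause/consequence analysis.
        from itertools import product

        PARAMETERS = ["flow", "pressure", "temperature", "level"]
        GUIDEWORDS = ["no", "more", "less", "reverse", "other than"]

        def candidate_deviations(parameters=PARAMETERS, guidewords=GUIDEWORDS):
            """Yield 'guideword parameter' deviations, skipping nonsensical pairs."""
            nonsensical = {("temperature", "reverse"), ("level", "reverse")}
            for param, guide in product(parameters, guidewords):
                if (param, guide) not in nonsensical:
                    yield f"{guide} {param}"

        if __name__ == "__main__":
            for deviation in candidate_deviations():
                print(deviation)   # e.g. "no flow", "more pressure", ...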

    The role of situation awareness in accidents of large-scale technological systems

    © 2015 The Institution of Chemical Engineers. Published by Elsevier B.V. All rights reserved. In the last two decades, several serious accidents at large-scale technological systems that have had grave consequences, such as that at Bhopal, have primarily been attributed to human error. However, further investigations have revealed that humans are not the primary cause of these accidents, but have inherited the problems and difficulties of working with complex systems created by engineers. Operators have to comprehend malfunctions in real time, respond quickly, and make rapid decisions to return operational units to normal conditions; under these circumstances, the mental workload of operators rises sharply, and a mental workload that is too high increases the rate of error. Therefore, cognitive human features such as situation awareness (SA), one of the most important prerequisites for decision-making, should be considered and analyzed appropriately. This paper applies the SA Error Taxonomy methodology to analyze the role of SA in three different accidents: (1) a runaway chemical reaction at Institute, West Virginia, that killed two employees, injured eight people, and required the evacuation of more than 40,000 residents adjacent to the facility; (2) the ignition of a vapor cloud at Bellwood, Illinois, that killed one person, injured two employees, and caused significant business interruption; and (3) an explosion at Ontario, California, that injured four workers and caused extensive damage to the facility. In addition, the paper presents requirements for cognitive operator support system development and for operator training under abnormal situations to promote operators' SA in the process industry.

    How to Treat Expert Judgment? With certainty it contains uncertainty!

    To be acceptably safe, one must identify the risks one is exposed to. It is uncertain whether a threat will really materialize, and determining the size and probability of the risk is itself full of uncertainty. When performing an analysis and preparing for decision-making under uncertainty, failure rate data, information on consequence severity, a probability value, or even knowledge of whether an event can occur at all is quite frequently lacking. In those cases, the only way to proceed is to resort to expert judgment. Even when historical data are available, one may want to know whether those data still hold in the current situation, and an expert can be asked about their reliability. Expert elicitation, however, comes with an uncertainty of its own that depends on the expert's reliability, which becomes very visible when two or more experts give different or even conflicting answers. This is not a new problem, and very bright minds have thought about how to tackle it, but so far the topic has not been given much attention in process safety and risk assessment. The paper is a review and presents various approaches with detailed explanations and examples.
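
    The abstract does not name a particular aggregation scheme, so the sketch below shows one common approach, a performance-weighted linear opinion pool in the spirit of Cooke's classical model, applied to assumed expert estimates of a failure probability; the weights and estimates are made up for illustration.

        # Hedged sketch: combine expert point estimates with calibration-based
        # weights (assumed here) and report the spread as a signal of disagreement.

        def pooled_estimate(estimates, weights):
            """Normalized weighted average of expert point estimates."""
            total = sum(weights)
            return sum(w * e for w, e in zip(weights, estimates)) / total

        def pooled_spread(estimates, weights):
            """Weighted standard deviation, a crude measure of expert disagreement."""
            mean = pooled_estimate(estimates, weights)
            total = sum(weights)
            var = sum(w * (e - mean) ** 2 for w, e in zip(weights, estimates)) / total
            return var ** 0.5

        if __name__ == "__main__":
            estimates = [1e-4, 5e-4, 2e-3]   # three experts' failure-probability estimates
            weights = [0.5, 0.3, 0.2]        # assumed calibration-based weights
            print(f"pooled P(failure) = {pooled_estimate(estimates, weights):.2e}")
            print(f"disagreement (sd) = {pooled_spread(estimates, weights):.2e}")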

    A human-system interface risk assessment method based on mental models

    © 2015 Elsevier Ltd. In many safety-critical systems, operators' situation awareness must be maintained at a high level to ensure the safety of operations. Today, in many such systems, operators have to rely on the principles and design of human-system interfaces (HSIs) to observe and comprehend an overwhelming amount of process data. Poor HSIs may thus cause serious consequences, such as occupational accidents and diseases including stress, and they have therefore been considered an emerging risk. Despite this importance, very few methods have yet been developed to assess the risk of HSIs. This paper presents a new risk assessment method that relies upon operators' mental models, a human reliability analysis (HRA) event tree, and the situation awareness global assessment technique (SAGAT) to produce a risk profile for the intended HSI. In the proposed method, the operator's understanding (i.e. mental model) of possible abnormal situations in the intended plant is modeled using the capabilities of Bayesian networks. The situation models are combined with the HRA event tree, which allows operator responses to be incorporated in the assessment. Probe questions in line with SAGAT are then administered during simulated scenarios in a virtual environment to gather operator responses. Finally, the proposed method determines a risk level for the HSI by assigning the operator responses to the developed situational networks. The performance of the proposed method is investigated through a case study at a chemical plant.
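
    The scoring that turns SAGAT responses into a risk level is not given in the abstract; the sketch below is only one plausible reading of the pipeline, with probe responses grouped by SA level, turned into per-level error rates, folded into an event-tree-style failure probability, and binned into a qualitative HSI risk level. Thresholds and example responses are assumptions.

        # Hedged sketch: per-SA-level probe error rates feed an event-tree-style
        # success chain (perceive, comprehend, project), and the resulting failure
        # probability is binned into a qualitative HSI risk level.

        def level_error_rates(responses):
            """responses: {sa_level: list of bools, True = correct probe answer}."""
            return {lvl: 1 - sum(ans) / len(ans) for lvl, ans in responses.items()}

        def hsi_risk(responses):
            """Combine SA levels 1-3 as sequential branches of an event tree."""
            errors = level_error_rates(responses)
            p_success = 1.0
            for lvl in (1, 2, 3):
                p_success *= 1 - errors.get(lvl, 0.0)
            p_failure = 1 - p_success
            if p_failure < 0.2:
                return p_failure, "low"
            if p_failure < 0.5:
                return p_failure, "medium"
            return p_failure, "high"

        if __name__ == "__main__":
            probes = {1: [True, True, True, False],    # perception probes
                      2: [True, False, True, True],    # comprehension probes
                      3: [True, True, False, False]}   # projection probes
            print(hsi_risk(probes))   # about (0.72, 'high')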

    A safety-critical decision support system evaluation using situation awareness and workload measures

    © 2016 Elsevier Ltd. To ensure the safety of operations in safety-critical systems, operators' situation awareness (SA) must be maintained at a high level. A situation awareness support system (SASS) has therefore been developed to handle uncertain situations [1]. This paper aims to systematically evaluate the enhancement of SA provided by SASS using a multi-perspective approach consisting of two SA metrics, SAGAT and SART, and one workload metric, NASA-TLX. The first two metrics provide direct objective and subjective measurements of SA, while the third estimates operator workload. The approach is applied in a safety-critical environment, a residue treater located at a chemical plant in which a poor human-system interface reduced the operators' SA and caused one of the worst accidents in US history. A counterbalanced within-subjects experiment is performed using a virtual environment interface with and without the support of SASS. The results indicate that SASS improves operators' SA, with specific benefits for SA levels 2 and 3. In addition, SASS is found to reduce operator workload, although further investigations in different environments with larger numbers of participants are suggested.
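
    For readers unfamiliar with the two questionnaire instruments named above, the sketch below shows their standard scoring: the weighted NASA-TLX workload score (six subscales rated 0-100, pairwise-comparison weights summing to 15) and the 3-D SART score, SA = Understanding - (Demand - Supply). The ratings in the example are invented; they are not data from the study.

        # Scoring sketch with made-up example ratings (not results from the paper).

        TLX_SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

        def nasa_tlx(ratings, weights):
            """Weighted NASA-TLX workload score on a 0-100 scale."""
            assert sum(weights.values()) == 15, "pairwise weights must sum to 15"
            return sum(ratings[s] * weights[s] for s in TLX_SCALES) / 15

        def sart(demand, supply, understanding):
            """3-D SART score: higher means better reported situation awareness."""
            return understanding - (demand - supply)

        if __name__ == "__main__":
            ratings = {"mental": 70, "physical": 20, "temporal": 60,
                       "performance": 40, "effort": 65, "frustration": 50}
            weights = {"mental": 5, "physical": 1, "temporal": 3,
                       "performance": 2, "effort": 3, "frustration": 1}
            print(f"NASA-TLX workload = {nasa_tlx(ratings, weights):.1f}")   # about 58.3
            print(f"SART SA score     = {sart(demand=5, supply=4, understanding=6)}")   # 5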

    Process hazard analysis, hazard identification and scenario definition: are the conventional tools sufficient, or should and can we do much better?

    Hazard identification is the first and most crucial step in any risk assessment. Since the late 1960s it has been done in a systematic manner using hazard and operability studies (HAZOP) and failure mode and effect analysis (FMEA). In the area of process safety these methods have been successful to the extent that they have gained global recognition, yet numerous significant challenges remain when using them. These relate to the quality of human imagination in eliciting failure events and their causal pathways, the breadth and depth of the outcomes, application across operational modes, the repetitive nature of the methods, and the substantial effort expended in performing this important step within risk management practice. The present article summarizes the attempts, and the actual successes, made over the last 30 years to deal with many of these challenges. It analyzes what a full systems approach would require and describes promising developments in that direction. It gives two examples of how applying experience and historical data with Bayesian networks, HAZOP, and FMEA can help address issues in operational risk management.
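
    The article's two worked examples are not reproduced here; the sketch below only illustrates the general idea of feeding historical operating data into FMEA-style prioritization, with a Beta-Binomial update revising a failure mode's occurrence estimate and the revised estimate re-entering the usual risk priority number. Prior parameters, counts, and scores are assumptions.

        # Hedged sketch: update a failure mode's occurrence probability from plant
        # history (Beta-Binomial), map it onto a coarse 1-10 FMEA occurrence scale,
        # and recompute the risk priority number (RPN). All inputs are assumed.

        def updated_occurrence(prior_alpha, prior_beta, failures, demands):
            """Posterior mean failure probability after observing operating history."""
            return (prior_alpha + failures) / (prior_alpha + prior_beta + demands)

        def occurrence_score(p):
            """Map a failure probability onto a coarse 1-10 FMEA occurrence scale."""
            thresholds = [1e-6, 1e-5, 1e-4, 5e-4, 1e-3, 5e-3, 1e-2, 5e-2, 1e-1]
            return 1 + sum(p > t for t in thresholds)

        def risk_priority_number(severity, occurrence, detectability):
            """Classic FMEA prioritization: severity x occurrence x detectability."""
            return severity * occurrence * detectability

        if __name__ == "__main__":
            p = updated_occurrence(prior_alpha=1, prior_beta=999, failures=3, demands=2000)
            rpn = risk_priority_number(severity=8, occurrence=occurrence_score(p), detectability=4)
            print(f"posterior occurrence = {p:.4f}, RPN = {rpn}")   # about 0.0013, 192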