Using Information from Operating Experience to Inform Human Reliability Analysis
This paper reports on work sponsored by the U.S. NRC and performed by the INEEL to develop a technical basis for extracting information from operating-experience sources for use in human reliability analysis (HRA). The objectives of this work are to: (1) develop a method for conducting risk-informed event analysis of human performance information that stems from operating experience at nuclear power plants, and for compiling and documenting the results in a structured manner; (2) provide information from these analyses for use in risk-informed and performance-based regulatory activities; and (3) create methods for information extraction, and a repository for this information, that likewise support HRA methods and their applications.
Lessons Learned on Benchmarking from the International Human Reliability Analysis Empirical Study
The International Human Reliability Analysis (HRA) Empirical Study is a benchmark that compares the predictions of HRA methods against the performance of nuclear power plant crews in a control room simulator. Several aspects of the present study distinguish it from previous HRA benchmarks, most notably its emphasis on a method-to-data comparison instead of a method-to-method comparison. This paper reviews seven lessons learned about HRA benchmarking from conducting the study: (1) the dual purposes of the study afforded by joining another HRA study; (2) the importance of comparing not only quantitative but also qualitative aspects of HRA; (3) consideration of both negative and positive drivers on crew performance; (4) a relatively large sample size of crews; (5) the use of multiple methods and scenarios to provide a well-rounded view of HRA performance; (6) the importance of clearly defined human failure events; and (7) the use of a common comparison language to “translate” the results of different HRA methods. These seven lessons highlight how the present study can serve as a useful template for future benchmarking studies.
Issues in Benchmarking Human Reliability Analysis Methods: A Literature Review
There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences among the methods, including their scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. To account for as many effects as possible in the design of this benchmarking study, a literature review was conducted of past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.
Capturing Control Room Simulator Data with the HERA Database
The Human Event Repository and Analysis (HERA) system has been developed as a tool for classifying and recording human performance data extracted from primary data sources. This paper reviews the process of extracting data from simulator studies for use in HERA. Simulator studies pose unique data collection challenges, in both the types and the quality of data measures, but such studies are ideally suited to gathering operator performance data, including the full spectrum of performance shaping factors used in a HERA analysis. This paper offers suggestions for obtaining relevant human performance data from a control room simulator study and for entering those data in a format suitable for HERA.
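As an illustration of the kind of structured entry the abstract describes, the sketch below shows one possible record shape for a human event observed in a simulator run. It is a hypothetical simplification, not the actual HERA schema: the class name, field names, and PSF labels are all assumptions made for the example.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical record for one human event observed in a control room
# simulator run. Field names are illustrative, not the actual HERA schema.
@dataclass
class SimulatorHumanEvent:
    crew_id: str
    scenario: str
    event_description: str
    success: bool
    # Performance shaping factor (PSF) ratings, e.g. {"time_pressure": "high"}
    psfs: dict = field(default_factory=dict)

def to_record(event: SimulatorHumanEvent) -> dict:
    """Flatten an event into a plain dict suitable for repository entry."""
    record = asdict(event)
    record["n_psfs"] = len(event.psfs)  # convenience field for later queries
    return record

example = SimulatorHumanEvent(
    crew_id="crew-01",
    scenario="SGTR",  # illustrative scenario label, not from the paper
    event_description="Failure to isolate the faulted steam generator in time",
    success=False,
    psfs={"time_pressure": "high", "procedures": "nominal"},
)
print(to_record(example)["n_psfs"])  # → 2
```

The point of the sketch is only that simulator observations must be normalized into a fixed set of fields (event, outcome, PSF ratings) before they can be compared across crews and scenarios.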
Summary of Information Presented at an NRC-Sponsored Low-Power Shutdown Public Workshop, April 27, 1999, Rockville, Maryland
This report summarizes a public workshop held on April 27, 1999, in Rockville, Maryland. The workshop was conducted as part of the US Nuclear Regulatory Commission's (NRC) efforts to further develop its understanding of the risks associated with low-power and shutdown operations at US nuclear power plants. A sufficient understanding of such risks is required to support decision-making for risk-informed regulation, in particular Regulatory Guide 1.174, and the development of a consensus standard. During the workshop, the NRC staff discussed and requested feedback from the public (including representatives of the nuclear industry, state governments, consultants, private industry, and the media) on the risks associated with low-power and shutdown operations.
A Mid-Layer Model for Human Reliability Analysis: Understanding the Cognitive Causes of Human Failure Events
The Office of Nuclear Regulatory Research (RES) is sponsoring work in response to a Staff Requirements Memorandum (SRM) directing an effort to establish a single human reliability analysis (HRA) method for the agency, or guidance for the use of multiple methods. As part of this effort, a comprehensive HRA qualitative approach is being developed. This paper presents a draft of the method's middle layer, the part of the qualitative analysis phase that links failure mechanisms to performance shaping factors. Starting with a Crew Response Tree (CRT) in which human failure events have been identified, analysts identify potential failure mechanisms using the mid-layer model. The mid-layer model presented in this paper traces the identification of failure mechanisms using the Information-Diagnosis/Decision-Action (IDA) model and cognitive models from the psychological literature. Each failure mechanism is grouped according to a phase of IDA, and under each phase the cognitive models help identify the performance shaping factors relevant to that mechanism. The use of IDA and cognitive models can be traced through fault trees, which provide a detailed complement to the CRT.
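The grouping the abstract describes, failure mechanisms organized under IDA phases with each mechanism linked to candidate performance shaping factors, can be sketched as a simple nested mapping. This is a minimal illustration of the structure only; the mechanism names and PSF labels below are hypothetical placeholders, not the actual content of the mid-layer model.

```python
# Hypothetical sketch of a mid-layer mapping: IDA phase -> failure
# mechanism -> candidate performance shaping factors (PSFs).
# All entries are illustrative, not taken from the paper.
MID_LAYER = {
    "Information": {
        "cue_not_perceived": ["HSI design", "workload"],
        "cue_misread": ["HSI design", "stress"],
    },
    "Diagnosis/Decision": {
        "wrong_situation_model": ["training", "procedures"],
        "premature_decision": ["time_pressure", "experience"],
    },
    "Action": {
        "wrong_control_selected": ["HSI design", "ergonomics"],
        "action_omitted": ["workload", "procedures"],
    },
}

def psfs_for(mechanism: str) -> list:
    """Return the PSFs linked to a failure mechanism, searching all IDA phases."""
    for mechanisms in MID_LAYER.values():
        if mechanism in mechanisms:
            return mechanisms[mechanism]
    return []  # unknown mechanism: no linked PSFs

print(psfs_for("cue_misread"))  # → ['HSI design', 'stress']
```

Organizing the model this way makes the traceability the paper emphasizes mechanical: given a failure mechanism from the fault tree, an analyst can look up which PSFs the qualitative analysis should examine.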
An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment.
Since the Reactor Safety Study in the early 1970s, human reliability analysis (HRA) has evolved toward a better ability to account for the factors and conditions that can lead humans to take unsafe actions, and thereby to provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have shaped the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.