Fuzzy Human Reliability Analysis: Applications and Contributions Review
The applications and contributions of fuzzy set theory to human reliability analysis (HRA) are reassessed. The main contribution of fuzzy mathematics lies in its ability to represent vague information. Many HRA authors have contributed new models by introducing fuzzy quantification methodologies, while others have drawn on fuzzy techniques or methodologies to quantify existing models. Fuzzy contributions improve HRA in five main aspects: (1) uncertainty treatment, (2) expert judgment data treatment, (3) fuzzy fault trees, (4) performance shaping factors, and (5) human behaviour models. Finally, recent fuzzy applications and new trends in fuzzy HRA are discussed.
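The vagueness representation mentioned above can be made concrete with a minimal sketch: a triangular membership function for a linguistic HRA judgment. The cut-points and the "high stress" label below are illustrative assumptions, not values from any cited model.

```python
# Minimal sketch: representing a vague HRA judgment such as "operator stress
# is HIGH" with a triangular fuzzy set. The cut-points (0.4, 0.7, 1.0) are
# illustrative assumptions, not values from any published HRA model.

def triangular(a: float, b: float, c: float):
    """Return a membership function for the triangular fuzzy number (a, b, c)."""
    def mu(x: float) -> float:
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge
    return mu

high_stress = triangular(0.4, 0.7, 1.0)
print(high_stress(0.7))   # peak of the fuzzy set -> 1.0
print(high_stress(0.55))  # partial membership -> 0.5
```

Unlike a crisp threshold, the degree of membership varies continuously, which is exactly how fuzzy HRA models encode vague expert statements.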
Using Expert Models in Human Reliability Analysis - A Dependence Assessment Method Based on Fuzzy Logic
In human reliability analysis (HRA), dependence analysis refers to assessing how the failure of an operator to perform one task influences the failure probabilities of subsequent tasks. A commonly used approach is the technique for human error rate prediction (THERP). The assessment of the dependence level in THERP is a highly subjective judgment based on general rules for the influence of five main factors. A frequently used alternative extends the THERP model with decision trees. Such trees should increase the repeatability of the assessments, but they simplify the relationships among the factors and the dependence level, and the basis for these simplifications and the resulting tree is difficult to trace. The aim of this work is a method for dependence assessment in HRA that captures the rules used by experts to assess dependence levels and incorporates this knowledge into an algorithm and software tool to be used by HRA analysts. A fuzzy expert system (FES) underlies the method. The method and the associated expert elicitation process are demonstrated with a working model. The expert rules are elicited systematically and converted into a traceable, explicit, and computable model. Anchor situations are provided as guidance for the HRA analyst's judgment of the input factors. The expert model and the FES-based dependence assessment method make the expert rules accessible to the analyst in a usable and repeatable way, with an explicit and traceable basis.
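The FES idea can be sketched as a tiny fuzzy rule base: membership functions over input factors, min for fuzzy AND, and selection of the most activated dependence level. The two factors, cut-points, and rules below are invented for illustration; the rules in the actual method are elicited from experts.

```python
# Illustrative fuzzy-expert-system sketch: two input factors ("closeness in
# time" and "task similarity", scaled to [0, 1]) are mapped to a THERP-style
# dependence level. Membership shapes and rules are invented for illustration.

def low(x: float) -> float:
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def high(x: float) -> float:
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def assess_dependence(time_closeness: float, similarity: float) -> str:
    # min implements fuzzy AND; max aggregates rules firing for the same level.
    activation = {
        "high dependence": min(high(time_closeness), high(similarity)),
        "moderate dependence": max(min(high(time_closeness), low(similarity)),
                                   min(low(time_closeness), high(similarity))),
        "low dependence": min(low(time_closeness), low(similarity)),
    }
    # Defuzzify by picking the most strongly activated level.
    return max(activation, key=activation.get)

print(assess_dependence(0.9, 0.8))  # -> high dependence
print(assess_dependence(0.1, 0.2))  # -> low dependence
```

The point of the FES is that such rules are explicit and computable, so two analysts feeding in the same factor judgments get the same dependence level.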
A Systems Approach to Assessing, Interpreting and Applying Human Error Mishap Data to Mitigate Risk of Future Incidents in a Space Exploration Ground Processing Operations Environment
Research results have shown that more than half of aviation, aerospace, and aeronautics mishaps/incidents are attributed to human error. Although many existing incident reporting systems have been beneficial for identifying engineering failures, most are not designed around a theoretical framework of human error and thus fail to address the core issues and causes of mishaps. It is therefore imperative to develop a human error assessment framework to identify these causes. This research focused on identifying the causes of human error and the leading contributors to historical Launch Vehicle Ground Processing Operations mishaps, based on past mishaps, near mishaps, and close calls. Three hypotheses were discussed. The first addressed the impact that Human Factors Analysis and Classification System (HFACS) contributing factors (unsafe acts of operators, preconditions for unsafe acts, unsafe supervision, and/or organizational influences) have on human error events (i.e., mishaps, close calls, incidents, or accidents) in NASA Ground Processing Operations. The second focused on determining whether the HFACS framework conceptual model could be proven a viable analysis and classification system to help classify both latent and active underlying contributors and causes of human error in ground processing operations. The third focused on determining whether a model developed using the Human Error Assessment and Reduction Technique (HEART) could be used as a tool to help determine the probability of human error occurrence in ground processing operations. A model to analyze and classify contributing factors to mishaps or incidents and to generate predicted Human Error Probabilities (HEPs) of future occurrence was developed using the HEART and HFACS tools. The research methodology was applied retrospectively to six Ground Processing Operations (GPO) scenarios and 30 years of launch-vehicle-related mishap data.
Surveys were used to obtain Subject Matter Experts' (SMEs') subjective assessments of the impact Error Producing Conditions (EPCs) had on specific tasks. A logistic binary regression model was generated that identified the four most significant contributing HFACS human error factors; it provides predicted probabilities of future mishap occurrence when these contributing factors are present. The results showed that the HEART and HFACS methods, when modified, can be used as an analysis tool to identify contributing factors, assess their impact on human error events, and predict the potential probability of future human error occurrence. The methodology and framework were validated through consistency checks and comparison with other related research, and a methodology is provided for other space operations and similar complex operations to follow. Future research should broaden the scope to explore and identify other existing models of human error management systems to integrate into complex space systems beyond what was conducted in this research.
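The HEART quantification step this research builds on scales a generic task's nominal error probability by each applicable Error Producing Condition. A minimal sketch of the standard HEART calculation follows; the nominal probability, EPC effects, and assessed proportions below are illustrative numbers, not values from this study.

```python
# Standard HEART calculation: the nominal human error probability (GEP) of a
# generic task type is multiplied, for each applicable EPC, by
# (max_effect - 1) * assessed_proportion + 1, where the assessed proportion
# of affect would come from SME surveys. Example numbers are illustrative.

def heart_hep(gep: float, epcs: list[tuple[float, float]]) -> float:
    """epcs: list of (max_effect, assessed_proportion) pairs."""
    hep = gep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Nominal GEP 0.003 with two EPCs: time shortage (x11, 40% affect)
# and operator inexperience (x3, 25% affect).
hep = heart_hep(0.003, [(11, 0.4), (3, 0.25)])
print(hep)  # ≈ 0.0225
```

Each multiplier interpolates between 1 (EPC irrelevant) and the EPC's maximum effect (EPC fully present), which is what the SME proportion-of-affect judgments control.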
A possibilistic approach to latent structure analysis for symmetric fuzzy data.
In many situations the available amount of data is huge and can be intractable. When the data set is single-valued, latent structure models are recognized techniques that provide a useful compression of the information. In this paper, an extension of latent structure analysis to deal with fuzzy data is proposed, obtained by considering a regression model between observed and unobserved (latent) fuzzy variables. Our extension follows the possibilistic approach, widely used in both the clustering and regression frameworks. Here, the possibilistic approach leads to formulating latent structure analysis for fuzzy data as an optimization problem: specifically, a non-linear programming problem in which the fuzziness of the model is minimized is introduced. To show how the model works, the results of two applications are given.

Keywords: latent structure analysis, symmetric fuzzy data set, possibilistic approach.
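The paper's model minimizes fuzziness via non-linear programming; as a self-contained illustration of the possibilistic idea it builds on (memberships need not sum to one, so atypical points can receive low membership everywhere), here is a Krishnapuram-Keller-style possibilistic estimate of a single latent center. The data, `eta`, and the one-component setup are assumptions for illustration, not the paper's actual model.

```python
# Toy possibilistic estimation: memberships u_i = 1 / (1 + d_i^2 / eta) are
# not forced to sum to one across components, so an outlier can have a tiny
# membership and barely influence the latent center. Data and eta are invented.

def possibilistic_center(data, eta=1.0, iters=50):
    center = sum(data) / len(data)  # start from the crisp mean
    for _ in range(iters):
        u = [1.0 / (1.0 + (x - center) ** 2 / eta) for x in data]
        center = sum(ui * x for ui, x in zip(u, data)) / sum(u)
    return center, u

data = [1.0, 1.2, 0.9, 1.1, 8.0]   # 8.0 is an outlier
center, memberships = possibilistic_center(data)
print(round(center, 2))            # pulled toward the cluster; crisp mean is 2.44
print(memberships[-1])             # the outlier's membership is tiny
```

This robustness to atypical observations is a key practical motivation for possibilistic (rather than probabilistic) formulations.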
Human Errors Assessment for Board Man in a Control Room of Petrochemical Industrial Companies using the extended CREAM
Background and Aims: One of the important characteristics of modern industries is the precise control of most key components of the industry through central control rooms. An error committed by control room personnel can therefore be catastrophic. The present study was conducted with the aim of identifying and evaluating human errors in the control room of a petrochemical industry.

Materials and Methods: This descriptive-analytic case study was conducted in a control room of one of the petrochemical industries. The job tasks in the main control room were first analyzed using hierarchical task analysis. Then, using the extended CREAM method, human errors were identified and the probable operator control modes and probable cognitive errors for the job tasks were determined and evaluated. All ethical principles were observed and the relevant permits were obtained during this study.

Results: The results showed that the control mode for the board man's tasks was strategic in 88% of cases, while the remaining 12% were of the instant type. Based on the results of the extended CREAM method, of the total errors identified, execution errors (55%), interpretation errors (20%), planning errors (14.9%), and observation errors (10.1%) were, respectively, the most frequent.

Conclusion: Given the role of the most important performance shaping factors in the execution of the board man's tasks, revising and redesigning the shift-work schedule and optimizing the communication system were among the most important recommendations of this study.
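The control modes reported above come from CREAM's contextual control model, in which each mode carries a generic interval for the probability of action failure; the intervals below are the published basic-CREAM values. A minimal lookup:

```python
# Basic CREAM associates each contextual control mode with a generic action
# failure probability interval (Hollnagel, 1998). The study found 88% of the
# board man's tasks in the strategic mode, which carries the lowest interval.

CONTROL_MODE_HEP = {   # mode -> (lower bound, upper bound) of failure probability
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,  0.1),
    "opportunistic": (1e-2,  0.5),
    "scrambled":     (0.1,   1.0),
}

def hep_interval(mode: str) -> tuple[float, float]:
    return CONTROL_MODE_HEP[mode.lower()]

lo, hi = hep_interval("strategic")
print(lo, hi)  # the most reliable control mode
```

Extended CREAM then refines this screening-level interval into cognitive error probabilities per task step, as done for the execution, interpretation, planning, and observation errors above.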
Development of a Team Human Reliability Tool (ROCCI)
Human Reliability Assessments (HRA) have been developed so that designers and users can understand how likely a human is to make an error when using a product or system in the workplace; this is termed the reliability of the product. Approximately twenty-six techniques exist to assess the reliability of an individual human in a process. However, often a team of people interacts within a system, not just one individual, so a new generation of HRAs is needed to assess the effects of teamwork on reliability.
This EPSRC CASE studentship, supported by BAE Systems, develops a prototype that enables a designer to quantify and answer the question: "If I allocate this team to execute that task in System X, how likely is it that they will succeed?"
This prototype assumes that a process can be defined in the form of a flow diagram and that roles can be allocated to execute it. Using one of those twenty-six techniques, individual reliabilities can then be calculated. These are modulated by considering how the team interaction affects the three core elements of Trust, Communication, and Decision-Making Power Distance, creating an "interactive reliability" factor for each individual in the team. These individual reliability factors are combined according to the team architecture for the process to determine the overall team reliability factor.
The methods of development include stakeholder interviews, the evolution of a requirements specification, sensitivity analysis, and a stakeholder review of the tool. These analyses produced a model of team interaction, the requirements for the new tool, and the statements and algorithms to be used in the new tool: ROCCI.
This technique is useful in the early stages of the design process. The successful prototype can be extended into applications for operations and used to assess and adapt products and systems that involve teams.
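The modulation-and-combination scheme described above can be sketched as follows, under assumptions of our own: the interaction modifier is a naive average of the three team scores, and the architectures are plain serial/parallel success logic. ROCCI's actual modulation rules and combination formulas are not given in this abstract, so these are illustrative placeholders.

```python
# Sketch of team reliability combination: each member's individual reliability
# r (from an existing HRA technique) is modulated by an interaction factor
# derived from Trust, Communication and Decision-Making Power Distance scores
# (all in [0, 1]); the results are combined per the team architecture.
# The averaging modifier and the two architectures are illustrative assumptions.

def interactive_reliability(r: float, trust: float, comms: float, power: float) -> float:
    modifier = (trust + comms + power) / 3.0   # naive average of the three scores
    return r * modifier

def team_reliability(rs: list[float], architecture: str) -> float:
    if architecture == "serial":       # every member must succeed
        p = 1.0
        for r in rs:
            p *= r
        return p
    if architecture == "parallel":     # one member succeeding suffices
        q = 1.0
        for r in rs:
            q *= (1.0 - r)
        return 1.0 - q
    raise ValueError(f"unknown architecture: {architecture}")

rs = [interactive_reliability(0.95, 0.9, 0.8, 1.0),
      interactive_reliability(0.90, 0.9, 0.8, 1.0)]
print(team_reliability(rs, "serial"))    # both must succeed: lower than either
print(team_reliability(rs, "parallel"))  # redundancy raises team reliability
```

The contrast between the two architectures shows why the team structure, not just the individual reliabilities, drives the overall figure.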
Proceedings of the 23rd Workshop Computational Intelligence, Dortmund, 5-6 December 2013
This proceedings volume contains the contributions to the 23rd Workshop Computational Intelligence of Technical Committee 5.14 of the VDI/VDE-Gesellschaft für Mess- und Automatisierungstechnik (GMA), held in Dortmund on 5-6 December 2013. The focus is on methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data mining techniques.
Particle Physics Reference Library
This second open access volume of the handbook series deals with detectors, large experimental facilities, and data handling, for both accelerator- and non-accelerator-based experiments. It also covers applications in medicine and the life sciences. A joint CERN-Springer initiative, the "Particle Physics Reference Library" provides revised and updated contributions based on previously published material in the well-known Landolt-Boernstein series on particle physics, accelerators, and detectors (volumes 21A, B1, B2, C), which took stock of the field approximately one decade ago. Central to this new initiative is publication under full open access.
Book of Proceedings. XXXV Annual Congress of the Sociedad Española de Ingeniería Biomédica
596 p. CASEIB2017 is once again the national reference forum for the scientific exchange of knowledge and experience and the promotion of R&D&I in biomedical engineering: a meeting point for scientists, industry professionals, biomedical engineers, and clinical professionals interested in the latest developments in research, education, and the industrial and clinical application of biomedical engineering.
In this edition, more than 160 papers of high scientific quality will be presented in relevant areas of biomedical engineering, such as signal and image processing, biomedical instrumentation, telemedicine, modelling of biomedical systems, intelligent systems and sensors, robotics, surgical planning and simulation, biophotonics, and biomaterials.
Worth highlighting are the sessions devoted to the competition for the José María Ferrero Corral Award and the competition session for undergraduate biomedical engineering students, which aim to encourage the participation of young students and researchers.