
    A data-informed model of performance shaping factors for use in human reliability analysis

    Many Human Reliability Analysis (HRA) models use Performance Shaping Factors (PSFs) to incorporate human elements into system safety analysis and to calculate the Human Error Probability (HEP). Current HRA methods rely on different sets of PSFs that range from a few to over 50 PSFs, with varying degrees of interdependency among the PSFs. This interdependency is observed in almost every set of PSFs, yet few HRA methods offer a way to account for dependency among PSFs. The methods that do address interdependencies generally do so by varying different multipliers in linear or log-linear formulas. These relationships could be more accurately represented in a causal model of PSF interdependencies. This dissertation introduces a methodology to produce a Bayesian Belief Network (BBN) of interactions among PSFs. The dissertation also presents a set of fundamental guidelines for the creation of a PSF set, a hierarchy of PSFs developed specifically for causal modeling, and a set of models developed using currently available data. The models, methodology, and PSF set were developed using nuclear power plant data available from two sources: information collected by the University of Maryland for the Information-Decision-Action model [1] and data from the Human Events Repository and Analysis (HERA) database [2], currently under development by the United States Nuclear Regulatory Commission. Creation of the methodology, the PSF hierarchy, and the models was an iterative process that incorporated information from available data, current HRA methods, and expert workshops. The fundamental guidelines are the result of insights gathered during the process of developing the methodology; these guidelines were applied to the final PSF hierarchy. The PSF hierarchy reduces overlap among the PSFs so that patterns of dependency observed in the data can be attributed to PSF interdependencies instead of overlapping definitions.
It includes multiple levels of generic PSFs that can be expanded or collapsed for different applications. The model development methodology employs correlation and factor analysis to systematically collapse the PSF hierarchy and form the model structure. Factor analysis is also used to identify Error Contexts (ECs), specific PSF combinations that together produce an increased probability of human error (versus the net effect of the PSFs acting alone). Three models were created to demonstrate how the methodology can be used to provide different types of data-informed insights. By employing Bayes' Theorem, the resulting model can be used to replace linear calculations for HEPs used in Probabilistic Risk Assessment. When additional data becomes available, the methodology can be used to produce updated causal models to further refine HEP values.
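As a toy illustration of the Bayes'-Theorem-based HEP calculation the abstract describes, the sketch below marginalizes over two binary PSFs and then conditions on observed evidence. The PSF names and all probability values are invented for demonstration; this is not the dissertation's model.

```python
# Illustrative only: a two-PSF network for a human error probability (HEP).
# All probabilities are hypothetical, not from the dissertation's data.

p_stress = 0.2    # P(high stress)           -- prior for PSF 1
p_training = 0.1  # P(inadequate training)   -- prior for PSF 2

# Conditional error probability per PSF combination. The (True, True) entry
# is super-additive, mimicking an Error Context (EC) where PSFs interact.
p_error = {
    (True, True): 0.30,
    (True, False): 0.05,
    (False, True): 0.08,
    (False, False): 0.01,
}

def hep():
    """Marginal HEP: sum over PSF states weighted by their prior probabilities."""
    total = 0.0
    for stress in (True, False):
        for training in (True, False):
            w = (p_stress if stress else 1 - p_stress) * \
                (p_training if training else 1 - p_training)
            total += w * p_error[(stress, training)]
    return total

def hep_given_stress():
    """HEP after observing high stress: condition the network on that evidence."""
    return (p_training * p_error[(True, True)]
            + (1 - p_training) * p_error[(True, False)])

print(hep())               # marginal HEP over all PSF states
print(hep_given_stress())  # updated HEP once stress is observed
```

Conditioning on evidence raises the HEP well beyond what a single linear multiplier on the marginal value would suggest, which is the kind of interdependency a causal model captures.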

    3D hand tracking.

    The hand is often considered one of the most natural and intuitive interaction modalities for human-to-human interaction. In human-computer interaction (HCI), proper 3D hand tracking is the first step in developing a more intuitive HCI system which can be used in applications such as gesture recognition, virtual object manipulation and gaming. However, accurate 3D hand tracking remains a challenging problem due to the hand's deformation, appearance similarity, high inter-finger occlusion and complex articulated motion. Further, 3D hand tracking is also interesting from a theoretical point of view as it deals with three major areas of computer vision: segmentation (of the hand), detection (of hand parts), and tracking (of the hand). This thesis proposes a region-based skin color detection technique, a model-based technique and an appearance-based technique for 3D hand tracking to bring human-computer interaction applications one step closer. All techniques are briefly described below. Skin color provides a powerful cue for complex computer vision applications. Although skin color detection has been an active research area for decades, the mainstream technology is based on individual pixels. This thesis presents a new region-based technique for skin color detection which outperforms the current state-of-the-art pixel-based skin color detection technique on the popular Compaq dataset (Jones & Rehg 2002). The proposed technique achieves a 91.17% true positive rate with a 13.12% false negative rate on the Compaq dataset, tested over approximately 14,000 web images. Hand tracking is not a trivial task as it requires tracking the 27 degrees of freedom of the hand. Hand deformation, self-occlusion, appearance similarity and irregular motion are major problems that make 3D hand tracking a very challenging task. This thesis proposes a model-based 3D hand tracking technique, which is improved by using a proposed depth-foreground-background feature, a palm deformation module and a context cue.
However, the major problem of model-based techniques is that they are computationally expensive. This can be overcome by discriminative techniques as described below. Discriminative techniques (for example, random forests) are good for hand part detection; however, they fail due to sensor noise and high inter-finger occlusion. Additionally, these techniques have difficulties in modelling kinematic or temporal constraints. Although model-based descriptive (for example, Markov Random Field) or generative (for example, Hidden Markov Model) techniques utilize kinematic and temporal constraints well, they are computationally expensive and hardly recover from tracking failure. This thesis presents a unified framework for 3D hand tracking, using the best of both methodologies, which outperforms the current state-of-the-art 3D hand tracking techniques. The proposed 3D hand tracking techniques in this thesis can be used to extract accurate hand movement features and enable complex human-machine interaction such as gaming and virtual object manipulation.
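For context, the per-pixel baseline that region-based skin detection improves upon can be sketched as below. This is a generic illustration using widely published YCbCr chrominance bounds from the skin-detection literature; it is not the thesis's proposed technique and the thresholds are not taken from it.

```python
# Sketch of the classic pixel-based skin-color baseline. The Cb/Cr bounds are
# commonly cited in the literature; they are assumptions here, not the thesis's.

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin_pixel(r, g, b):
    """Classify a single pixel: skin if its chrominance falls in the skin cluster."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin_pixel(220, 170, 140))  # a typical skin tone
print(is_skin_pixel(30, 90, 200))    # a blue background pixel
```

Because each pixel is judged in isolation, skin-colored backgrounds and noisy pixels are easily misclassified, which is exactly the weakness a region-based method addresses by pooling evidence over neighborhoods.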

    Encouraging professional skepticism in the industry specialization era: a dual-process model and an experimental test

    I develop a framework that elucidates how the primary target of auditors' professional skepticism, audit evidence or their own judgment and decision making, interacts with other factors to affect auditors' professional judgments. As an initial test of the framework, I conduct an experiment that examines how the target of auditors' skepticism and industry specialization jointly affect auditors' judgments. When working inside their specialization, auditors make more automatic, intuitive judgments. Automaticity naturally manifests for industry specialists as a result of industry experience, social norms to appear knowledgeable and decisive, and their own expectations to proficiently interpret audit evidence. Priming industry specialists to be skeptical of audit evidence, therefore, has little influence on their judgments. In contrast, priming such auditors to be skeptical of their otherwise automated, intuitive judgment and decision making substantially alters their decision processing. They begin to question what they do and do not know, in an epistemological sense, and, as a result, elevate their overall concern about material misstatements due to well-concealed fraud. This pattern of results is consistent with my framework's predictions and suggests that specialization is more about improving the interpretation and assimilation of domain evidence than about enhancing reflective, self-critical thinking. It also suggests it would be beneficial to identify other factors that promote industry specialists' skepticism towards their judgment and decision making to make them more circumspect about the possibility of management fraud (cf. Bell, Peecher, and Solomon 2005).

    Managerial practices that promote voice and taking charge among frontline workers

    Process-improvement ideas often come from frontline workers who speak up by voicing concerns about problems and by taking charge to resolve them. We hypothesize that organization-wide process-improvement campaigns encourage both forms of speaking up, especially voicing concern. We also hypothesize that the effectiveness of such campaigns depends on the prior responsiveness of line managers. We test our hypotheses in the healthcare setting, in which problems are frequent. We use data on nearly 7,500 reported incidents extracted from an incident-reporting system that is similar to those used by many organizations to encourage employees to communicate about operational problems. We find that process-improvement campaigns prompt employees to speak up and that campaigns increase the frequency of voicing concern to a greater extent than they increase taking charge. We also find that campaigns are particularly effective in eliciting taking charge among employees whose managers have been relatively unresponsive to previous instances of speaking up. Our results therefore indicate that organization-wide campaigns can encourage voicing concerns and taking charge, two important forms of speaking up. These results can enable managers to solicit ideas from frontline workers that lead to performance improvement.

    Fourth Conference on Artificial Intelligence for Space Applications

    Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, in telemetry monitoring and data collection, in design and systems integration, and in planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.

    Leveraging Evolutionary Changes for Software Process Quality

    Real-world software applications must constantly evolve to remain relevant. This evolution occurs when developing new applications or adapting existing ones to meet new requirements, make corrections, or incorporate future functionality. Traditional methods of software quality control involve software quality models and continuous code inspection tools. These measures focus on directly assessing the quality of the software. However, there is a strong correlation and causation between the quality of the development process and the resulting software product. Therefore, improving the development process indirectly improves the software product, too. To achieve this, effective learning from past processes is necessary, often embraced through post-mortem organizational learning. While qualitative evaluation of large artifacts is common, smaller quantitative changes captured by application lifecycle management are often overlooked. In addition to software metrics, these smaller changes can reveal complex phenomena related to project culture and management. Leveraging these changes can help detect and address such complex issues. Software evolution was previously measured by the size of changes, but the lack of consensus on a reliable and versatile quantification method prevents its use as a dependable metric. Different size classifications fail to reliably describe the nature of evolution. While application lifecycle management data is rich, identifying which artifacts can model detrimental managerial practices remains uncertain. Approaches such as simulation modeling, discrete event simulation, or Bayesian networks have only limited ability to exploit continuous-time process models of such phenomena. Even worse, the accessibility and mechanistic insight into such gray- or black-box models are typically very low. To address these challenges, we suggest leveraging objectively [...]
    Comment: Ph.D. thesis without appended papers, 102 pages.

    Distributed Knowledge Modeling and Integration of Model-Based Beliefs into the Clinical Decision-Making Process

    Complex medical decision-making is becoming increasingly difficult due to the steadily growing amount of information that must be taken into account. This is largely driven by the availability of ever more precise diagnostic methods for characterizing patients (e.g., genetic or molecular factors), accompanied by the development of novel treatment strategies and agents together with the associated evidence from clinical trials and guidelines. This situation confronts treating physicians with new challenges in considering all relevant factors in the context of clinical decision-making. Modern IT systems can make a substantial contribution to supporting clinical experts. This assistance ranges from applications that preprocess data to reduce the associated complexity up to the system-supported evaluation of all necessary patient data for therapeutic decision support. These functions are made possible by the formal representation of medical expert knowledge in a complex knowledge base that mirrors the cognitive processes of decision-making. Accordingly, the process of machine-readable knowledge representation is subject to heightened requirements regarding the validity and significance of the information it contains. The first two chapters of this work lay out important methodological foundations for the structured representation of knowledge and its use in clinical decision support. These core topics are further compared against existing approaches in a state-of-the-art review to highlight the novel character of the proposed solutions.
    As the innovative core, the work first presents the design and implementation of a novel approach for fusing fragmented knowledge components on the formal basis of Bayesian networks. For this purpose, a novel data structure based on the JSON Graph Format was developed. By devising qualified methods for handling the formal criteria of a Bayesian network, the work further demonstrates solutions that enable an automatic fusion process through a purpose-built algorithm. A prototypical, functional platform for the structured and assisted integration of knowledge, and for generating valid Bayesian networks as the result of the fusion, was implemented using a blockchain data store and evaluated in a user study according to ISONORM 9241/110-S. Building on this technological platform, two standalone decision support systems are then presented that address relevant use cases in ENT (head and neck) oncology: a system for the personalized assessment of clinical laboratory values in the context of radiochemotherapy, and a dashboard for more effective information communication within the tumor board. Both concepts were first checked for relevance in an initial user study to ensure a user-centered implementation.
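The structural side of the fragment-fusion idea described above can be sketched as follows. This is a minimal illustration assuming a JSON-Graph-like dict representation (a node list plus a directed edge list); it shows only the union of two fragments with a DAG validity check, not the dissertation's actual fusion algorithm or its handling of conditional probability tables. The fragment names are hypothetical.

```python
# Minimal sketch: fuse two Bayesian-network fragments held in a
# JSON-Graph-like structure, rejecting fusions that break the DAG property.

def has_cycle(nodes, edges):
    """Kahn's algorithm: the graph is acyclic iff every node can be scheduled."""
    indeg = {n: 0 for n in nodes}
    for _, dst in edges:
        indeg[dst] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for src, dst in edges:
            if src == n:
                indeg[dst] -= 1
                if indeg[dst] == 0:
                    queue.append(dst)
    return seen != len(nodes)

def merge_fragments(a, b):
    """Union the nodes and directed edges of two fragments; keep the result a DAG."""
    nodes = sorted(set(a["nodes"]) | set(b["nodes"]))
    edges = sorted(set(map(tuple, a["edges"])) | set(map(tuple, b["edges"])))
    if has_cycle(nodes, edges):
        raise ValueError("fusion would violate the DAG property")
    return {"nodes": nodes, "edges": edges}

# Two hypothetical fragments sharing the node "Therapy" as their fusion point.
frag1 = {"nodes": ["TumorStage", "Therapy"], "edges": [("TumorStage", "Therapy")]}
frag2 = {"nodes": ["LabValue", "Therapy"], "edges": [("LabValue", "Therapy")]}
print(merge_fragments(frag1, frag2))
```

A full fusion algorithm would additionally have to reconcile the conditional probability tables of shared nodes, which is where most of the real difficulty lies.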
    Given this work's central focus on clinical decision support, both critical and optimistic aspects of the associated practical solutions are discussed at numerous points.
    Contents:
    1 Introduction
      1.1 Motivation and Clinical Setting
      1.2 Objectives
      1.3 Thesis Outline
    2 State of the Art
      2.1 Medical Knowledge Modeling
      2.2 Knowledge Fusion
      2.3 Clinical Decision Support Systems
      2.4 Clinical Information Access
    3 Fundamentals
      3.1 Evidence-Based Medicine
        3.1.1 Literature-Based Evidence
        3.1.2 Practice-Based Evidence
        3.1.3 Patient-Directed Evidence
      3.2 Knowledge Representation Formats
        3.2.1 Logic-Based Representation
        3.2.2 Procedural Representation
        3.2.3 Network or Graph-Based Representation
      3.3 Knowledge-Based Clinical Decision Support
      3.4 Conditional Probability and Bayesian Networks
      3.5 Clinical Reasoning
        3.5.1 Deterministic Reasoning
        3.5.2 Probabilistic Reasoning
      3.6 Knowledge Fusion of Bayesian Networks
    4 Block-Based Collaborative Knowledge Modeling
      4.1 Data Model
        4.1.1 Belief Structure
        4.1.2 Conditional Probabilities
        4.1.3 Metadata
      4.2 Constraint-Based Automatic Knowledge Fusion
        4.2.1 Fusion of the Bayesian Network Structures
        4.2.2 Fusion of the Conditional Probability Tables
      4.3 Blockchain-Based Belief Storage and Retrieval
        4.3.1 Blockchain Characteristics
        4.3.2 Relevance for Belief Management
    5 Selected CDS Applications for Clinical Practice
      5.1 Distributed Knowledge Modeling Platform
        5.1.1 Requirement Analysis
        5.1.2 System Architecture
        5.1.3 System Evaluation
        5.1.4 Limitations of the Proposed Solution
      5.2 Personalization of Laboratory Findings
        5.2.1 Requirement Analysis
        5.2.2 System Architecture
        5.2.3 Limitations of the Proposed Solution
      5.3 Dashboard for Collaborative Decision-Making in the Tumor Board
        5.3.1 Requirement Analysis
        5.3.2 System Architecture
        5.3.3 Limitations of the Proposed Solution
    6 Discussion
      6.1 Goal Achievements
      6.2 Contributions and Conclusion
    7 Bibliography