454 research outputs found

    Use of evidential reasoning for eliciting Bayesian subjective probabilities in human reliability analysis: A maritime case

    Modelling the interdependencies among the factors influencing human error (e.g. the common performance conditions (CPCs) in the Cognitive Reliability and Error Analysis Method (CREAM)) motivates the use of Bayesian Networks (BNs) in Human Reliability Analysis (HRA). However, subjective probability elicitation for a BN is often a daunting and complex task. Creating conditional probability values for each variable in a BN requires a high degree of knowledge and engineering effort, often from a group of domain experts. This paper presents a novel hybrid approach that incorporates evidential reasoning (ER) into BNs to facilitate HRA under incomplete data. The kernel of this approach is to develop the best and worst possible conditional subjective probabilities of the nodes representing the factors influencing HRA when using BNs for human error probability (HEP) estimation. The proposed hybrid approach is demonstrated by using CREAM to estimate HEP in the maritime domain. The findings from the hybrid ER-BN model can effectively facilitate HEP analysis in particular and decision-making under uncertainty in general.
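The core idea of best/worst-case conditional probabilities can be sketched in a few lines. This is a minimal illustration of interval bounds propagated through a two-node network, not the paper's model; the node names and all numbers are hypothetical.

```python
# Illustrative sketch: propagating best- and worst-case conditional
# probabilities through a tiny two-node BN (CPC state -> error).
# All names and numbers are hypothetical, not taken from the paper.

# Prior over a single CPC state: P(CPC = adequate)
p_adequate = 0.7

# Incomplete elicitation yields interval CPT entries:
# (lower, upper) bounds on P(error | CPC state)
p_error_given = {
    "adequate":   (0.01, 0.05),
    "inadequate": (0.10, 0.30),
}

# Marginalise P(error) separately at each bound
lower = (p_adequate * p_error_given["adequate"][0]
         + (1 - p_adequate) * p_error_given["inadequate"][0])
upper = (p_adequate * p_error_given["adequate"][1]
         + (1 - p_adequate) * p_error_given["inadequate"][1])

print(f"P(error) lies in [{lower:.3f}, {upper:.3f}]")
```

The interval, rather than a single point value, is what lets the analysis proceed when experts cannot commit to exact conditional probabilities.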

    Subjective Causality and Counterfactuals in the Social Sciences

    The article explores the role that subjective evidence of causality and associated counterfactuals and counterpotentials might play in the social sciences where comparative cases are scarce. This scarcity rules out statistical inference based upon frequencies and usually invites in-depth ethnographic studies. Thus, if causality is to be preserved in such situations, a conception of ethnographic causal inference is required. Ethnographic causality inverts the standard statistical concept of causal explanation in observational studies, whereby comparison and generalization, across a sample of cases, are both necessary prerequisites for any causal inference. Ethnographic causality allows, in contrast, for causal explanation prior to any subsequent comparison or generalization

    The objective Bayesian conceptualisation of proof and reference class problems

    The objective Bayesian view of proof (or logical probability, or evidential support) is explained and defended: that the relation of evidence to hypothesis (in legal trials, science etc) is a strictly logical one, comparable to deductive logic. This view is distinguished from the thesis, which had some popularity in law in the 1980s, that legal evidence ought to be evaluated using numerical probabilities and formulas. While numbers are not always useful, a central role is played in uncertain reasoning by the ‘proportional syllogism’, or argument from frequencies, such as ‘nearly all aeroplane flights arrive safely, so my flight is very likely to arrive safely’. Such arguments raise the ‘problem of the reference class’, arising from the fact that an individual case may be a member of many different classes in which frequencies differ. For example, if 15 per cent of swans are black and 60 per cent of fauna in the zoo is black, what should I think about the likelihood of a swan in the zoo being black? The nature of the problem is explained, and legal cases where it arises are given. It is explained how recent work in data mining on the relevance of features for prediction provides a solution to the reference class problem
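The zoo-swan example above can be made concrete: the same individual belongs to two reference classes whose frequencies disagree, and the frequency data for the intersection class (swans in the zoo) is exactly what is missing. A tiny sketch, with the abstract's own figures:

```python
# The reference class problem from the abstract: one individual,
# two reference classes, two conflicting frequency-based estimates.

freq = {
    "swans": 0.15,      # 15 per cent of swans are black
    "zoo_fauna": 0.60,  # 60 per cent of fauna in the zoo is black
}

# A swan in the zoo is a member of both classes
individual_classes = ["swans", "zoo_fauna"]
estimates = {c: freq[c] for c in individual_classes}

# The proportional syllogism gives a different answer per class;
# without a frequency for the narrower class "swans in the zoo",
# neither estimate is privileged.
print(estimates)
```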

    How to Treat Expert Judgment? With certainty it contains uncertainty!

    To be acceptably safe one must identify the risks one is exposed to. Whether a threat will actually materialise is uncertain, but determining the size and probability of the risk is also full of uncertainty. When performing an analysis and preparing for decision making under uncertainty, failure rate data, information on consequence severity, on a probability value, or even on whether an event can occur at all is quite frequently lacking. In those cases, the only way to proceed is to revert to expert judgment. Even when historical data are available, an expert can be asked whether these data still hold in the current situation. In any case, expert elicitation comes with an uncertainty that depends on the expert's reliability, which becomes very visible when two or more experts give different or even conflicting answers. This is not a new problem, and very bright minds have thought about how to tackle it. So far, however, the topic has not been given much attention in process safety and risk assessment. The paper has a review character and presents various approaches with detailed explanation and examples.
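One of the simplest approaches to combining differing expert answers is a performance-weighted linear opinion pool; the sketch below is a generic illustration of that idea, not a method endorsed by the paper, and the experts, weights, and failure-probability estimates are all hypothetical.

```python
# Minimal sketch of a reliability-weighted linear opinion pool.
# Experts whose judgments are considered more reliable get more weight.
# All names and numbers here are hypothetical.

experts = {
    # name: (reliability weight, estimated failure probability)
    "expert_A": (0.5, 1e-3),
    "expert_B": (0.3, 5e-3),
    "expert_C": (0.2, 2e-2),
}

total_w = sum(w for w, _ in experts.values())
pooled = sum(w * p for w, p in experts.values()) / total_w

print(f"pooled failure probability: {pooled:.4f}")
```

More elaborate schemes (e.g. calibrating weights against seed questions with known answers) follow the same pattern: estimate reliability first, then aggregate.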

    The rational continued influence of misinformation

    Misinformation has become an increasingly topical field of research. Studies on the ‘Continued Influence Effect’ (CIE) show that misinformation continues to influence reasoning despite subsequent retraction. Current explanatory theories of the CIE tacitly assume continued reliance on misinformation is the consequence of a biased process. In the present work, we show why this perspective may be erroneous. Using a Bayesian formalism, we conceptualize the CIE as a scenario involving contradictory testimonies and incorporate the previously overlooked factors of the temporal dependence (misinformation precedes its retraction) between, and the perceived reliability of, misinforming and retracting sources. When considering such factors, we show the CIE to have normative backing. We demonstrate that, on aggregate, lay reasoners (N = 101) intuitively endorse the necessary assumptions that demarcate CIE as a rational process, still exhibit the standard effect, and appropriately penalize the reliability of contradicting sources. Individual-level analyses revealed that although many participants endorsed assumptions for a rational CIE, very few were able to execute the complex model update that the Bayesian model entails. In sum, we provide a novel illustration of the pervasive influence of misinformation as the consequence of a rational process
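The kind of testimony model the abstract describes can be sketched with two Bayesian updates: one for the misinforming assertion, one for the later retraction, each weighted by its source's perceived reliability. The reliability values below are illustrative, not the paper's parameters.

```python
# Hedged sketch of a Bayesian testimony model: a claim H is asserted
# by one source and later retracted by another, each with its own
# perceived reliability. All numbers are illustrative.

prior = 0.5          # P(H) before any testimony
r_misinform = 0.8    # perceived reliability of the asserting source
r_retract = 0.6      # perceived reliability of the retracting source

def update(p_h, p_report_given_h, p_report_given_not_h):
    """One step of Bayes' rule for a binary hypothesis."""
    num = p_report_given_h * p_h
    return num / (num + p_report_given_not_h * (1 - p_h))

# Source 1 asserts H: P(assert | H) = r, P(assert | ~H) = 1 - r
p = update(prior, r_misinform, 1 - r_misinform)
# Source 2 denies H:  P(deny | H) = 1 - r, P(deny | ~H) = r
p = update(p, 1 - r_retract, r_retract)

print(f"posterior P(H) after assertion and retraction: {p:.3f}")
# Because the retracting source is perceived as less reliable here,
# the posterior stays above the prior: continued influence of the
# misinformation emerges from a perfectly Bayesian update.
```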

    Does consistency predict accuracy of beliefs?: Economists surveyed about PSA

    Subjective beliefs and behavior regarding the Prostate Specific Antigen (PSA) test for prostate cancer were surveyed among attendees of the 2006 meeting of the American Economic Association. Logical inconsistency was measured in percentage deviations from a restriction imposed by Bayes’ Rule on pairs of conditional beliefs. Economists with inconsistent beliefs tended to be more accurate than average, and consistent Bayesians were substantially less accurate. Within a loss function framework, we look for and cannot find evidence that inconsistent beliefs cause economic losses. Subjective beliefs about cancer risks do not predict PSA testing decisions, but social influences do. Keywords: logical consistency, predictive accuracy, elicitation, non-Bayesian, ecological rationality
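The restriction in question is the identity P(A|B)·P(B) = P(B|A)·P(A), so a pair of elicited conditional beliefs can be scored by its deviation from that equality. The exact deviation formula the paper uses is not stated in the abstract; the sketch below uses a symmetric percentage deviation, with made-up numbers.

```python
# Sketch of a Bayes' Rule consistency check on elicited beliefs:
# Bayes' Rule requires P(A|B) * P(B) == P(B|A) * P(A).
# The deviation measure and all numbers are illustrative assumptions.

p_a = 0.10          # e.g. elicited P(cancer)
p_b = 0.20          # e.g. elicited P(positive PSA test)
p_a_given_b = 0.30  # elicited P(cancer | positive)
p_b_given_a = 0.40  # elicited P(positive | cancer)

lhs = p_a_given_b * p_b   # 0.06
rhs = p_b_given_a * p_a   # 0.04

# symmetric percentage deviation from the Bayes' Rule identity
pct_deviation = abs(lhs - rhs) / ((lhs + rhs) / 2) * 100

print(f"inconsistency: {pct_deviation:.1f}% deviation from Bayes' Rule")
```

A perfectly coherent (Bayesian) respondent scores 0%; the paper's finding is that a larger score did not imply lower accuracy.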

    Confirmation, Decision, and Evidential Probability

    Henry Kyburg’s theory of Evidential Probability offers a neglected tool for approaching problems in confirmation theory and decision theory. I use Evidential Probability to examine some persistent problems within these areas of the philosophy of science. Formal tools in general and probability theory in particular have great promise for conceptual analysis in confirmation theory and decision theory, but they face many challenges. In each chapter, I apply Evidential Probability to a specific issue in confirmation theory or decision theory. In Chapter 1, I challenge the notion that Bayesian probability offers the best basis for a probabilistic theory of evidence. In Chapter 2, I criticise the conventional measures of quantities of evidence that use the degree of imprecision of imprecise probabilities. In Chapter 3, I develop an alternative to orthodox utility-maximizing decision theory using Kyburg’s system. In Chapter 4, I confront the orthodox notion that Nelson Goodman’s New Riddle of Induction makes purely formal theories of induction untenable. Finally, in Chapter 5, I defend probabilistic theories of inductive reasoning against John D. Norton’s recent collection of criticisms. My aim is the development of fresh perspectives on classic problems and contemporary debates. I both defend and exemplify a formal approach to the philosophy of science. I argue that Evidential Probability has great potential for clarifying our concepts of evidence and rationality

    Application of a CREAM based framework to assess human reliability in emergency response to engine room fires on ships

    For a human reliability assessment in the maritime domain, the main question is how to correctly understand, in a practical manner, the human factors at play in a maritime situation. This paper introduces a new approach based on the Cognitive Reliability and Error Analysis Method (CREAM). The key to the method is to provide a framework for evaluating specific scenarios associated with maritime human errors and for assessing the context in which human actions take place. The output of the context assessment is then applied in the procedure assessment as model input, so that the effect of context is reflected. The proposed approach can be divided into two parts: context assessment and modelling of human error quantification. A fuzzy multiple-attribute group decision-making method, Bayesian networks, and evidential reasoning are employed to enhance the reliability of human error quantification. The fuzzy conclusion of the context assessment is used as the model input in the CREAM basic method and as weighting factors in the CREAM extended method, respectively, to account for human failure probability that varies with external conditions. This paper is expected to contribute to the improvement of safety by identifying frequently occurring human errors during maritime operations, thereby minimising human failures.
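The weighting-factor mechanism mentioned for CREAM's extended method is commonly a multiplicative adjustment of a nominal cognitive failure probability (CFP). The sketch below assumes that form; the nominal value and the CPC weights are hypothetical, not taken from CREAM's tables or from the paper.

```python
# Minimal sketch of a multiplicative CFP adjustment in the style of
# CREAM's extended method: a nominal cognitive failure probability is
# scaled by weighting factors derived from the CPC (context) assessment.
# The nominal CFP and all weights below are hypothetical.

nominal_cfp = 3.0e-3   # nominal CFP for some cognitive function

# Hypothetical CPC weighting factors
# (> 1 degrades reliability, < 1 improves it, 1 is neutral)
cpc_weights = {
    "adequacy_of_training": 0.8,
    "time_of_day": 1.2,
    "crew_collaboration": 1.0,
}

adjusted_cfp = nominal_cfp
for w in cpc_weights.values():
    adjusted_cfp *= w

print(f"adjusted CFP: {adjusted_cfp:.2e}")
```

In the paper's framework, the fuzzy context assessment would supply these weights instead of the fixed values used here.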