
    Effects of question formats on causal judgments and model evaluation

    Evaluation of causal reasoning models depends on how well subjects' causal beliefs are assessed, and elicitation of those beliefs is shaped by the experimental questions put to subjects. We examined the impact of question formats commonly used in causal reasoning research on participants' responses. The results of our experiment (Study 1) demonstrate that both the mean and the homogeneity of responses can be substantially influenced by the type of question (structure induction versus strength estimation versus prediction). Study 2A demonstrates that subjects' responses to a question requiring them to predict an effect given a candidate cause can be significantly lower and more heterogeneous than their responses to a question asking them to diagnose a cause given an effect. Study 2B suggests that diagnostic reasoning can benefit strongly from cues in the question relating to the temporal precedence of the cause. Finally, we evaluated 16 variations of recent computational models and found that model fit was substantially influenced by the type of question. Our results show that future research in causal reasoning should place a high priority on disentangling the effects of question formats from the effects of experimental manipulations, because that will enable comparisons between models of causal reasoning uncontaminated by method artifacts.
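    To make the strength-estimation side of this concrete, below is a minimal Python sketch of two classical causal-strength indices, ΔP (Jenkins & Ward, 1965) and Cheng's (1997) causal power, computed from a 2x2 contingency table. The abstract does not specify which 16 model variants were fitted, so these two standard indices, the function names, and the example counts are illustrative assumptions, not the paper's models.

    ```python
    # Sketch of two classical causal-strength indices that strength-estimation
    # questions are typically assumed to elicit. The specific models fitted in
    # the paper are not given in the abstract; the definitions below are the
    # standard ones (Jenkins & Ward, 1965; Cheng, 1997).

    def delta_p(a: int, b: int, c: int, d: int) -> float:
        """ΔP = P(e|c) - P(e|~c) from a 2x2 contingency table:
        a = cause present, effect present;  b = cause present, effect absent;
        c = cause absent,  effect present;  d = cause absent,  effect absent."""
        return a / (a + b) - c / (c + d)

    def causal_power(a: int, b: int, c: int, d: int) -> float:
        """Cheng's (1997) generative causal power: ΔP / (1 - P(e|~c))."""
        return delta_p(a, b, c, d) / (1.0 - c / (c + d))

    # Hypothetical contingency data: the effect occurs on 16 of 20 trials with
    # the candidate cause present, versus 4 of 20 trials without it.
    print(delta_p(16, 4, 4, 16))       # 0.6
    print(causal_power(16, 4, 4, 16))  # 0.75
    ```

    The point of the paper is precisely that the number a subject reports for such an index can shift with the question format, so a model-fitting pipeline built on these estimates inherits that artifact.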

    When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

    Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide a first look into the characteristics of the discovered CE of adversarially perturbed images generated by gradient-based methods (code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg). Experimental results show that CE is a competitive and robust index for understanding DNNs when compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it possesses distinct characteristics in the presence of adversarial perturbations.
    Comment: The camera-ready version changed the title; "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks" is the official v3 title in the IEEE proceedings, so please use it in formal references. Accepted at IEEE ICIP 2019. PyTorch code is available at https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg.
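    Since the released code is in PyTorch, here is a minimal PyTorch sketch of the intervention-as-masking idea: zero out a pixel region (a crude approximation of do(region = baseline)) and measure the shift in the model's prediction. The paper's actual CE computation also involves latent-space features and adversarial perturbations and is not specified in this abstract; the function name, the region argument, and the zero baseline are illustrative assumptions.

    ```python
    # Sketch of intervention-style causal-effect probing on pixels:
    # compare the model's class probability before and after masking a region.
    # This approximates the paper's masking intervention only; it omits the
    # latent-space and adversarial-perturbation components described above.
    import torch

    def masked_prediction_shift(model: torch.nn.Module,
                                image: torch.Tensor,        # shape (1, C, H, W)
                                region: tuple[slice, slice],
                                target_class: int) -> float:
        """Return p(target | image) - p(target | image with region zeroed)."""
        model.eval()
        with torch.no_grad():
            p_orig = torch.softmax(model(image), dim=1)[0, target_class]
            masked = image.clone()
            masked[..., region[0], region[1]] = 0.0  # intervene: zero the region
            p_masked = torch.softmax(model(masked), dim=1)[0, target_class]
        return (p_orig - p_masked).item()
    ```

    A large positive shift suggests the masked pixels carry weight for the prediction, which is the same spirit in which the paper compares its CE index against CAM-style saliency.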

    Norm Based Causal Reasoning in Textual Corpus

    Truth-based entailments are not sufficient for a good comprehension of natural language (NL): they cannot derive the implicit information necessary to understand a text. Norm-based entailments, on the other hand, can reach this goal. This idea was behind the development of Frames (Minsky 75) and Scripts (Schank 77, Schank 79) in the 70's, but those theories are not sufficiently formalized, and their adaptation to new situations is far from obvious. In this paper, we present a reasoning system that uses norms in a causal reasoning process to find the cause of an accident from a text describing it.

    The Search for Invariance: Repeated Positive Testing Serves the Goals of Causal Learning

    Positive testing is characteristic of exploratory behavior, yet it seems to be at odds with the aim of information seeking. After all, repeated demonstrations of one's current hypothesis often produce the same evidence and fail to distinguish it from potential alternatives. Research on the development of scientific reasoning and on adult rule learning has both documented and attempted to explain this behavior. The current chapter reviews this prior work and introduces a novel theoretical account, the Search for Invariance (SI) hypothesis, which suggests that producing multiple positive examples serves the goals of causal learning. This hypothesis draws on the interventionist framework of causal reasoning, which holds that causal learners are concerned with the invariance of candidate hypotheses. In a probabilistic and interdependent causal world, our primary goal is to determine whether, and in what contexts, our causal hypotheses provide accurate foundations for inference and intervention, not to disconfirm their alternatives. By recognizing the central role of invariance in causal learning, the phenomenon of positive testing may be reinterpreted as a rational information-seeking strategy.