
    Computation of context as a cognitive tool

    In the field of cognitive science, as well as in Artificial Intelligence (AI), the role of context has been investigated in many forms and for many purposes. It is clear in both areas that consideration of contextual information is important. However, the significance of context has not been emphasized in the Bayesian networks literature. We suggest that consideration of context is necessary for acquiring knowledge about a situation and for refining current representational models that are potentially erroneous due to hidden independencies in the data. In this thesis, we make several contributions towards the automation of contextual consideration by discovering useful contexts from probability distributions. We show how context-specific independencies in Bayesian networks, and discovery algorithms traditionally used for efficient probabilistic inference, can contribute to the identification of contexts and in turn provide insight into otherwise puzzling situations. Consideration of context can also help clarify otherwise counterintuitive puzzles, such as instances of Simpson's paradox. In the social sciences, attribution theory is context-sensitive; we suggest a method to distinguish between dispositional causes and situational factors by means of contextual models. Finally, we address the work of Cheng and Novick on causal attribution by human adults. Their probabilistic contrast model makes use of contextual information, called focal sets, which must be determined by a human expert. We suggest a method for discovering complete focal sets from probability distributions, without the human expert.
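    Simpson's paradox, mentioned in the abstract above, is easy to demonstrate numerically. The sketch below uses the well-known kidney-stone treatment counts (illustrative data only, not from the thesis): treatment A outperforms B within each context (stone size), yet loses in the aggregate, because the context variable confounds the pooled comparison.

```python
# Classic kidney-stone counts illustrating Simpson's paradox.
# Each cell maps (treatment, context) -> (successes, trials).
data = {
    ("A", "small"): (81, 87),
    ("A", "large"): (192, 263),
    ("B", "small"): (234, 270),
    ("B", "large"): (55, 80),
}

def rate(treatment, context=None):
    """Success rate, optionally restricted to one context."""
    cells = [(s, n) for (t, c), (s, n) in data.items()
             if t == treatment and (context is None or c == context)]
    successes = sum(s for s, n in cells)
    trials = sum(n for s, n in cells)
    return successes / trials

for ctx in ("small", "large"):
    assert rate("A", ctx) > rate("B", ctx)   # A wins in every context...
assert rate("A") < rate("B")                 # ...yet loses overall
```

    Conditioning on the context dissolves the apparent contradiction, which is the sense in which discovered contexts can "clarify otherwise counterintuitive puzzles".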

    Exploring Causal Influences

    Recent data mining techniques exploit patterns of statistical independence in multivariate data to make conjectures about cause/effect relationships. These relationships can be used to construct causal graphs, which are sometimes represented by weighted node-link diagrams, with nodes representing variables and combinations of weighted links and/or nodes showing the strength of causal relationships. We present an interactive visualization for causal graphs (ICGs), inspired in part by the Influence Explorer. The key principles of this visualization are as follows: variables are represented with vertical bars attached to nodes in a graph, and direct manipulation is achieved by sliding a variable value up and down, which reveals causality by producing instantaneous change in causally and/or probabilistically linked variables. This direct manipulation technique gives users the impression they are causally influencing the variables linked to the one they are manipulating. In this context, we demonstrate the subtle distinction between seeing and setting of variable values, and in an extended example, show how this visualization can help a user understand the relationships in a large variable set and, with some intuitions about the domain and a few basic concepts, quickly detect bugs in causal models constructed from these data mining techniques.
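    The "setting" interaction described above can be sketched in a few lines. This is a minimal illustration of instantaneous propagation through a linear weighted DAG, with hypothetical variable names and edge strengths; it is not the authors' ICG implementation, and a real causal model would propagate probabilistic rather than linear effects.

```python
# Hypothetical causal graph: each edge (parent, child) has a strength.
weights = {
    ("rain", "wet_grass"): 0.9,
    ("sprinkler", "wet_grass"): 0.6,
    ("wet_grass", "slippery"): 0.8,
}
order = ["rain", "sprinkler", "wet_grass", "slippery"]  # topological order

def set_variable(values, var, new_value):
    """Intervene on `var` (setting, not merely seeing), then recompute
    every downstream variable from its parents in topological order."""
    out = dict(values)
    out[var] = new_value
    for node in order[order.index(var) + 1:]:
        parents = [(p, w) for (p, c), w in weights.items() if c == node]
        if parents:
            out[node] = sum(out[p] * w for p, w in parents)
    return out

state = {v: 0.0 for v in order}
state = set_variable(state, "rain", 1.0)
# wet_grass becomes 0.9, and slippery becomes 0.8 * 0.9 = 0.72
```

    Sliding "rain" to 1.0 instantly moves "wet_grass" and "slippery", which is what gives the user the impression of causally influencing the linked variables; "seeing" (observing a value without intervening) would instead update beliefs about the variable's causes as well.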

    Discovering Hidden Dispositions and Situational Factors in Causal Relations by Means of Contextual Independencies

    Correspondent inferences in attribution theory deal with assigning causes to behaviour based on true dispositions rather than situational factors. In this paper, we investigate how knowledge representation tools in Artificial Intelligence (AI), such as Bayesian networks (BNs), can help represent such situations and distinguish between the types of clues used in assessing the behaviour (dispositional or situational). We also demonstrate how a discovery algorithm for contextual independencies can provide the information needed to separate a seemingly erroneous causal model (considering dispositions and situations together) into two more accurate models, one for dispositions and one for situations.
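    The contextual independencies driving this separation can be illustrated with a toy check. The sketch below (hypothetical conditional probability table, not the paper's algorithm) tests whether X is independent of Y within a given context C=c, i.e. whether P(X | Y, C=c) is the same for every value of Y; such context-specific independencies are what license splitting one model into per-context models.

```python
# Hypothetical CPT: P(X=1 | Y=y, C=c), keyed by (y, c).
cpt = {
    (0, 0): 0.3, (1, 0): 0.3,   # in context C=0, Y is irrelevant to X
    (0, 1): 0.2, (1, 1): 0.7,   # in context C=1, Y matters
}

def independent_in_context(cpt, c, tol=1e-9):
    """True if P(X | Y, C=c) does not vary with Y, i.e. X is
    independent of Y in the specific context C=c."""
    probs = [p for (y, ctx), p in cpt.items() if ctx == c]
    return max(probs) - min(probs) <= tol

assert independent_in_context(cpt, 0)      # X independent of Y when C=0
assert not independent_in_context(cpt, 1)  # but dependent when C=1
```

    Here C=0 might stand for a situational context in which the observed behaviour carries no dispositional information, while C=1 is a context in which it does, so the single CPT is better represented as two context-specific models.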