
    Learning causal models that make correct manipulation predictions with time series data

    One of the fundamental purposes of causal models is using them to predict the effects of manipulating various components of a system. It has been argued by Dash (2005, 2003) that the Do operator will fail when applied to an equilibrium model, unless the underlying dynamic system obeys what he calls Equilibration-Manipulation Commutability (EMC). Unfortunately, this fact renders most existing causal discovery algorithms unreliable for reasoning about manipulations. Motivated by this caveat, in this paper we present a novel approach to causal discovery of dynamic models from time series. The approach uses a representation of dynamic causal models motivated by Iwasaki and Simon (1994), which asserts that all “causation across time” occurs because a variable’s derivative has been affected instantaneously. We present an algorithm that exploits this representation within a constraint-based learning framework by numerically calculating derivatives and learning instantaneous relationships. We argue that, due to numerical errors in higher-order derivatives, care must be taken when learning causal structure, but we show that the Iwasaki-Simon representation reduces the search space considerably, allowing us to forgo calculating many high-order derivatives. In order for our algorithm to discover the dynamic model, the time scale of the data must be much finer than that of any temporal process of the system. Finally, we show that our approach can correctly recover the structure of a fairly complex dynamic system, and can predict the effect of manipulations accurately when a manipulation does not cause an instability. To our knowledge, this is the first causal discovery algorithm that has been demonstrated to correctly predict the effects of manipulations for a system that does not obey the EMC condition.
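
    The key data-preparation step described in the abstract, replacing a search over lagged relations with instantaneous relations among variables and their numerically estimated derivatives, can be illustrated with a small sketch. The code below is not the authors' algorithm; the function names (`estimate_derivatives`, `augment_with_derivatives`, `partial_correlation`) are assumptions, and a simple partial-correlation test stands in for the full constraint-based learner.

```python
import numpy as np

def estimate_derivatives(x, dt):
    """Central finite-difference estimate of dx/dt for a 1-D time series.

    Numerical error grows with derivative order, which is one reason the
    abstract stresses avoiding many high-order derivatives.
    """
    return np.gradient(x, dt)

def augment_with_derivatives(data, dt):
    """Stack each variable with its first derivative.

    `data` is an (n_samples, n_vars) array sampled much faster than any
    temporal process of the system, as the abstract requires.
    """
    derivs = np.column_stack([estimate_derivatives(data[:, j], dt)
                              for j in range(data.shape[1])])
    return np.hstack([data, derivs])

def partial_correlation(i, j, cond, cov):
    """Partial correlation of variables i and j given the set `cond`,
    read off the precision matrix restricted to {i, j} plus `cond`."""
    idx = [i, j] + list(cond)
    prec = np.linalg.pinv(cov[np.ix_(idx, idx)])
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Hypothetical usage: test whether x and dy/dt are (conditionally) dependent,
# the kind of instantaneous test a constraint-based learner would run on the
# augmented data.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)              # fine sampling grid
x = np.sin(t) + 0.01 * rng.standard_normal(t.size)
y = np.cumsum(x) * (t[1] - t[0])              # dy/dt is approximately x
data = np.column_stack([x, y])
aug = augment_with_derivatives(data, t[1] - t[0])   # columns: x, y, dx/dt, dy/dt
cov = np.cov(aug, rowvar=False)
print(partial_correlation(0, 3, [], cov))     # x vs dy/dt: close to 1
```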

    The Pragmatic Turn in Explainable Artificial Intelligence (XAI)

    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a prior grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will lack a well-defined goal. Aside from providing a clearer objective for XAI, focusing on understanding also allows us to relax the factivity condition on explanation, which is impossible to fulfill in many machine learning models, and to focus instead on the pragmatic conditions that determine the best fit between a model and the methods and devices deployed to understand it. After an examination of the different types of understanding discussed in the philosophical and psychological literature, I conclude that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability. This conclusion is partly based on the shortcomings of the purely functionalist approach to post hoc interpretability that seems to be predominant in most recent literature.
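
    As a purely illustrative instance of the approximation models the abstract refers to (not an example from the paper), the sketch below fits a shallow decision-tree surrogate to the predictions of a black-box classifier; the surrogate trades factivity for understandability, since it approximates the model rather than reproducing it.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# An opaque model standing in for any black box a stakeholder must understand.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels:
# it is an interpretative approximation of the model's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = surrogate.score(X, black_box.predict(X))   # agreement with the black box
print(f"surrogate fidelity to the black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```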

    Causal Induction from Continuous Event Streams: Evidence for Delay-Induced Attribution Shifts

    Contemporary theories of Human Causal Induction assume that causal knowledge is inferred from observable contingencies. While this assumption is well supported by empirical results, it fails to consider an important problem-solving aspect of causal induction in real time: in the absence of well-structured learning trials, it is not clear whether the effect of interest occurred because of the cause under investigation, or of its own accord. Attributing the effect to either the cause of interest or alternative background causes is an important precursor to induction. We present a new paradigm based on the presentation of continuous event streams, and use it to test the Attribution-Shift Hypothesis (Shanks & Dickinson, 1987), according to which temporal delays sever the attributional link between cause and effect. Delays generally impaired attribution to the candidate, and increased attribution to the constant background of alternative causes. In line with earlier research (Buehner & May, 2002, 2003, 2004), prior knowledge and experience mediated this effect. Pre-exposure to a causally ineffective background context was found to facilitate the discovery of delayed causal relationships by reducing the tendency for attributional shifts to occur. However, longer exposure to a delayed causal relationship did not improve discovery. This complex pattern of results is problematic for associative learning theories, but supports the Attribution-Shift Hypothesis.

    Types of cost in inductive concept learning

    Inductive concept learning is the task of learning to assign cases to a discrete set of classes. In real-world applications of concept learning, there are many different types of cost involved. The majority of the machine learning literature ignores all types of cost (unless accuracy is interpreted as a type of cost measure). A few papers have investigated the cost of misclassification errors. Very few papers have examined the many other types of cost. In this paper, we attempt to create a taxonomy of the different types of cost that are involved in inductive concept learning. This taxonomy may help to organize the literature on cost-sensitive learning. We hope that it will inspire researchers to investigate all types of cost in inductive concept learning in more depth.
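
    To make the most-studied case, misclassification cost, concrete (this worked example is ours, not the paper's), the sketch below uses an asymmetric cost matrix and predicts the class that minimizes expected cost rather than the most probable class.

```python
import numpy as np

# cost[i, j] is the cost of predicting class j when the true class is i:
# here a false negative (predict 0, truth 1) is ten times worse than a
# false positive, and correct decisions are free.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])

def min_expected_cost_decisions(probs, cost):
    """Pick, per example, the class with the lowest expected cost.

    `probs` is an (n_examples, n_classes) array of class probabilities;
    the expected cost of predicting class j is sum_i probs[:, i] * cost[i, j].
    """
    per_class_cost = probs @ cost          # (n_examples, n_classes)
    return per_class_cost.argmin(axis=1)

probs = np.array([[0.95, 0.05],   # confident enough to predict class 0
                  [0.70, 0.30]])  # class 0 is more probable, but 0.3 * 10 > 0.7 * 1
print(min_expected_cost_decisions(probs, cost))   # -> [0 1]
```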