    A time series causal model

    Cause-effect relations are central to economic analysis, and uncovering empirical cause-effect relations is one of the main research activities of empirical economics. In this paper we develop a time series causal model to explore causal relations among economic time series. The model is grounded in the theory of inferred causation, a probabilistic and graph-theoretic approach to causality equipped with automated learning algorithms. Applying our model, we are able to infer cause-effect relations implied by the observed time series data. The empirically inferred causal relations can then be used to test economic theoretical hypotheses, to provide evidence for the formulation of theoretical hypotheses, and to carry out policy analysis. Time series causal models are closely related to the popular vector autoregressive (VAR) models in time series analysis: they can be viewed as restricted structural VAR models identified by the inferred causal relations.
    Keywords: Inferred Causation, Automated Learning, VAR, Granger Causality, Wage-Price Spiral
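Since the abstract ties its causal model to VAR identification, a minimal Granger-style check can sketch the kind of lag-based cause-effect inference involved. This is an illustrative sketch only; the function names and the simulated data are assumptions, not taken from the paper.

```python
import numpy as np

def lagged_design(predictors, lags):
    """Stack lagged values of each predictor series into a design matrix."""
    n = len(predictors[0])
    rows = []
    for t in range(lags, n):
        row = [1.0]  # intercept
        for s in predictors:
            row.extend(s[t - lags:t])
        rows.append(row)
    return np.array(rows)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

def granger_f(cause, effect, lags=2):
    """F-statistic testing whether lags of `cause` improve prediction of `effect`."""
    y = effect[lags:]
    X_restricted = lagged_design([effect], lags)
    X_full = lagged_design([effect, cause], lags)
    rss_r, rss_f = rss(X_restricted, y), rss(X_full, y)
    q = lags                           # number of added regressors
    dof = len(y) - X_full.shape[1]     # residual degrees of freedom
    return ((rss_r - rss_f) / q) / (rss_f / dof)

# Toy data: x drives y with one lag, but not vice versa.
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(x, y)   # large: lags of x help predict y
f_yx = granger_f(y, x)   # small: lags of y do not help predict x
```

Note that Granger causality is only one ingredient here; the paper's inferred-causation framework goes further by learning a causal graph, not just pairwise lag dependence.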

    A causal model of radiating stellar collapse

    We find a simple exact model of radiating stellar collapse, with a shear-free and non-accelerating interior matched to a Vaidya exterior. The heat flux is subject to causal thermodynamics, leading to a self-consistent determination of the temperature $T$. We solve for $T$ exactly when the mean collision time $\tau_c$ is constant, and perturbatively in the more realistic case of variable $\tau_c$. Causal thermodynamics predicts temperature behaviour that can differ significantly from the predictions of non-causal theory. In particular, the causal theory gives a higher central temperature and a greater temperature gradient.
    Comment: LaTeX [ioplppt style], 9 pages; to appear in Class. Quantum Grav.
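For background, the "causal thermodynamics" referred to here replaces Fourier's law with a relaxational (Maxwell-Cattaneo / Israel-Stewart type) transport equation for the heat flux; schematically, with relaxation time $\tau_c$:

```latex
\tau_c \, h_a{}^{b} \dot{q}_b + q_a = -\kappa \left( \nabla_a T + T \dot{u}_a \right)
```

Here $q_a$ is the heat flux, $\kappa$ the thermal conductivity, $u^a$ the fluid four-velocity, and $h_{ab}$ the projector orthogonal to it. This is the standard truncated causal transport equation, given as general background; the exact form used in the paper may include further terms.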

    A causal model for a closed universe

    We study a closed model of a universe filled with viscous fluid and quintessence matter components. The dynamical equations imply that the universe might look like an accelerating flat Friedmann-Robertson-Walker (FRW) universe at low redshift. We consider dissipative processes which obey causal thermodynamics, and we account for the entropy production via causal dissipative inflation.
    Comment: 9 pages. Accepted for publication in IJMP.

    A quantum causal discovery algorithm

    Finding a causal model for a set of classical variables is now a well-established task, but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally-ordered non-signaling sets. It detects whether all relevant common causes are included in the process, which we label Markovian, or alternatively whether some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm provides a first step towards more general methods for quantum causal discovery.
    Comment: 11 pages, 10 figures; revised to match published version.

    CausaLM: Causal Model Explanation Through Counterfactual Language Models

    Understanding predictions made by deep neural networks is notoriously difficult, but also crucial to their dissemination. Like all ML-based methods, they are only as good as their training data, and can also capture unwanted biases. While there are tools that can help understand whether such biases exist, they do not distinguish between correlation and causation, and might be ill-suited for text-based models and for reasoning about high-level language concepts. A key problem in estimating the causal effect of a concept of interest on a given model is that this estimation requires the generation of counterfactual examples, which is challenging with existing generation technology. To bridge that gap, we propose CausaLM, a framework for producing causal model explanations using counterfactual language representation models. Our approach is based on fine-tuning deep contextualized embedding models with auxiliary adversarial tasks derived from the causal graph of the problem. Concretely, we show that by carefully choosing auxiliary adversarial pre-training tasks, language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest, and be used to estimate its true causal effect on model performance. A byproduct of our method is a language representation model that is unaffected by the tested concept, which can be useful in mitigating unwanted bias ingrained in the data.
    Comment: Our code and data are available at: https://amirfeder.github.io/CausaLM/ Under review for the Computational Linguistics journal.
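A toy sketch of the kind of estimate the abstract describes: compare a classifier's outputs on original representations against counterfactual representations from which a concept has been removed. All names here, and the crude zero-out ablation, are illustrative assumptions; CausaLM itself obtains counterfactual representations via adversarial fine-tuning of BERT, not by zeroing features.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def concept_effect(w, h_original, h_counterfactual):
    """Average absolute change in the classifier's class probabilities
    when the concept is removed from the representation."""
    p_orig = softmax(h_original @ w)
    p_cf = softmax(h_counterfactual @ w)
    return float(np.abs(p_orig - p_cf).mean())

rng = np.random.default_rng(0)
h = rng.normal(size=(100, 8))   # stand-in for contextual embeddings
w = rng.normal(size=(8, 2))     # stand-in for a linear classifier head

# Crude counterfactual: zero the dimensions assumed to encode the concept.
h_cf = h.copy()
h_cf[:, :2] = 0.0

effect = concept_effect(w, h, h_cf)           # positive: concept shifts outputs
null_effect = concept_effect(w, h, h.copy())  # zero: identical representations
```

The point of the counterfactual-representation approach is precisely that such a clean ablation is unavailable for real language concepts, which is why the paper trains a representation that is adversarially purged of the concept instead.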