
    Detecting and Explaining Causes From Text For a Time Series Event

    Explaining the underlying causes or effects of events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching for cause and effect relationships between the time series and textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text, such as N-grams, topics, sentiments, and their composition. Generating the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage, we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analyses show empirical evidence that our method successfully extracts meaningful causal relationships between time series and textual features and generates appropriate explanations between them.
    Comment: Accepted at EMNLP 201
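As an illustration of the pairwise Granger test this abstract builds on (not the authors' text-feature pipeline), the following is a minimal sketch: a series x Granger-causes y if adding lagged values of x to an autoregressive model of y significantly reduces the residual sum of squares, measured by an F statistic.

```python
import numpy as np

def granger_f_test(x, y, lags=2):
    """F-statistic for 'x Granger-causes y': compare an AR model of y on
    its own lags (restricted) with one that also includes lags of x
    (unrestricted)."""
    n = len(y)
    Y = y[lags:]
    # Lagged regressors: column k holds the series shifted by k steps.
    own = np.column_stack([y[lags - k:n - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k:n - k] for k in range(1, lags + 1)])
    Z_restricted = np.column_stack([np.ones(len(Y)), own])
    Z_unrestricted = np.column_stack([np.ones(len(Y)), own, cross])

    def rss(Z):
        beta, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        resid = Y - Z @ beta
        return resid @ resid

    rss_r, rss_u = rss(Z_restricted), rss(Z_unrestricted)
    df_den = len(Y) - Z_unrestricted.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_den)
```

A large F value for `granger_f_test(x, y)` together with a small one for the reverse direction is the asymmetry the paper exploits when ranking candidate textual causes of a time series.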

    Causality re-established

    Causality never gained the status of a "law" or "principle" in physics. Some recent literature has even popularized the false idea that causality is a notion that should be banned from theory. Such a misconception relies on an alleged universality of the reversibility of the laws of physics, based either on the determinism of classical theory or on the multiverse interpretation of quantum theory, in both cases motivated by mere interpretational requirements for realism of the theory. Here I will show that a properly defined, unambiguous notion of causality is a theorem of quantum theory, which is also a falsifiable proposition of the theory. Such a causality notion appeared in the literature within the framework of operational probabilistic theories. It is a genuinely theoretical notion, corresponding to establishing a definite partial order among events, in the same way as we do by using the future causal cone in Minkowski space. The causality notion is logically completely independent of the misidentified concept of "determinism" and, being a consequence of quantum theory, is ubiquitous in physics. In addition, as classical theory can be regarded as a restriction of quantum theory, causality holds also in the classical case, although the determinism of the theory trivializes it. I then conclude by arguing that causality naturally establishes an arrow of time. This implies that the scenario of the "Block Universe" and the connected "Past Hypothesis" are incompatible with causality, and thus with quantum theory: both are doomed to remain mere interpretations and, as such, not falsifiable, similar to the hypothesis of "super-determinism". This article is part of a discussion meeting issue, "Foundations of quantum mechanics and their impact on contemporary society".
    Comment: Presented at the Royal Society of London, on 11/12/2017, at the conference "Foundations of quantum mechanics and their impact on contemporary society". To appear in Philosophical Transactions of the Royal Society

    Bell's local causality is a d-separation criterion

    This paper aims to motivate Bell's notion of local causality by means of Bayesian networks. In a locally causal theory, any superluminal correlation should be screened off by atomic events localized in any so-called \textit{shielder-off region} in the past of one of the correlating events. In a Bayesian network, any correlation between non-descendant random variables is screened off by any so-called \textit{d-separating set} of variables. We will argue that the shielder-off regions in the definition of local causality conform in a well-defined sense to the d-separating sets in Bayesian networks.
    Comment: 13 pages, 8 figures
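To make the d-separation criterion the abstract refers to concrete, here is a self-contained sketch (not from the paper) using the standard moralization test: X and Y are d-separated by Z in a DAG iff Z separates them in the moralized ancestral graph of X, Y, and Z.

```python
from collections import deque

def d_separated(dag, x, y, z):
    """d-separation of x and y given set z in a DAG, where `dag` maps
    each node to its list of parents. Uses the moralization criterion."""
    z = set(z)
    # 1. Ancestral subgraph: keep only ancestors of {x, y} and z.
    relevant, frontier = set(), deque([x, y, *z])
    while frontier:
        n = frontier.popleft()
        if n in relevant:
            continue
        relevant.add(n)
        frontier.extend(dag.get(n, []))
    # 2. Moralize: undirected parent-child edges, plus edges between
    #    co-parents of a common child ("marrying" the parents).
    adj = {n: set() for n in relevant}
    for n in relevant:
        parents = list(dag.get(n, []))
        for p in parents:
            adj[n].add(p); adj[p].add(n)
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                adj[p].add(q); adj[q].add(p)
    # 3. d-separated iff removing z disconnects x from y in the moral graph.
    seen, frontier = set(), deque([x])
    while frontier:
        n = frontier.popleft()
        if n in seen or n in z:
            continue
        seen.add(n)
        if n == y:
            return False
        frontier.extend(adj[n])
    return True
```

On the collider A → C ← B, for example, A and B are d-separated by the empty set but not by {C}: conditioning on the common effect opens the path, which is exactly the screening-off behaviour the paper maps onto shielder-off regions.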

    Learning Hybrid Process Models From Events: Process Discovery Without Faking Confidence

    Process discovery techniques return process models that are either formal (precisely describing the possible behaviors) or informal (merely a "picture" not allowing for any form of formal reasoning). Formal models are able to classify traces (i.e., sequences of events) as fitting or non-fitting. Most process mining approaches described in the literature produce such models. This is in stark contrast with the over 25 available commercial process mining tools, which only discover informal process models that remain deliberately vague about the precise set of possible traces. There are two main reasons why vendors resort to such models: scalability and simplicity. In this paper, we propose to combine the best of both worlds: discovering hybrid process models that have formal and informal elements. As a proof of concept we present a discovery technique based on hybrid Petri nets. These models allow for formal reasoning, but also reveal information that cannot be captured in mainstream formal models. A novel discovery algorithm returning hybrid Petri nets has been implemented in ProM and has been applied to several real-life event logs. The results clearly demonstrate the advantages of remaining "vague" when there is not enough "evidence" in the data or standard modeling constructs do not "fit". Moreover, the approach is scalable enough to be incorporated in industrial-strength process mining tools.
    Comment: 25 pages, 12 figures
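The abstract's notion of a formal model classifying traces as fitting or non-fitting can be illustrated with token replay on an ordinary Petri net (a plain net, not the paper's hybrid variant): a trace fits iff every transition is enabled when it fires and the replay ends in the expected final marking.

```python
def replay(trace, transitions, initial, final):
    """Token replay on a Petri net. `transitions` maps a label to a pair
    (input_places, output_places); `initial`/`final` are markings given
    as dicts from place to token count."""
    marking = dict(initial)  # copy so the initial marking is not mutated
    for label in trace:
        inputs, outputs = transitions[label]
        # A transition is enabled only if every input place holds a token.
        if any(marking.get(p, 0) < 1 for p in inputs):
            return False
        for p in inputs:
            marking[p] -= 1
        for p in outputs:
            marking[p] = marking.get(p, 0) + 1
    # Fitting also requires reaching exactly the final marking.
    return {p: n for p, n in marking.items() if n} == final
```

For the two-step net start -[a]-> p1 -[b]-> end, the trace ⟨a, b⟩ fits, while ⟨b, a⟩ (wrong order) and ⟨a⟩ (incomplete) do not; informal commercial models, as the paper notes, deliberately avoid making this classification.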

    A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data

    A great improvement to the insight into brain function that we can get from fMRI data can come from effective connectivity analysis, in which the flow of information between even remote brain regions is inferred from the parameters of a predictive dynamical model. As opposed to biologically inspired models, some techniques, such as Granger causality (GC), are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach can suffer from two main limitations when applied to BOLD fMRI data: the confounding effect of the hemodynamic response function (HRF) and conditioning on a large number of variables in the presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI, on the other hand, the absence of explicit inputs makes this task more difficult, unless one relies on some specific prior physiological hypothesis. In order to overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for the BOLD-fMRI signal. Turning to the second limitation, fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting. Furthermore, conceptual issues arise in the presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Combining these two improvements, we compare the differences between BOLD and deconvolved-BOLD effective networks and draw some conclusions.
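The paper's technique is blind (the HRF is estimated from the data itself); as a simplified, non-blind illustration of why deconvolving the HRF matters before running GC, here is a Wiener deconvolution sketch with an assumed toy HRF kernel, which recovers a neural-like signal from its HRF-convolved BOLD counterpart:

```python
import numpy as np

def wiener_deconvolve(bold, hrf, noise_level=0.01):
    """Estimate the underlying signal from a BOLD series given a known
    HRF kernel, via regularized (Wiener-style) spectral division."""
    n = len(bold)
    H = np.fft.rfft(hrf, n)
    B = np.fft.rfft(bold, n)
    # H* / (|H|^2 + noise) avoids dividing by near-zero frequencies,
    # where naive inversion would amplify noise without bound.
    S = B * np.conj(H) / (np.abs(H) ** 2 + noise_level)
    return np.fft.irfft(S, n)
```

With a noiseless spike train convolved with a gamma-shaped kernel, the deconvolved estimate peaks at the original event times; it is this HRF-corrected signal (in the paper, obtained blindly) on which effective connectivity is then computed.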