
    Estimation of interventional effects of features on prediction

    The interpretability of prediction mechanisms with respect to the underlying prediction problem is often unclear. While several studies have focused on developing prediction models with meaningful parameters, the causal relationships between the predictors and the actual prediction have not been considered. Here, we connect the underlying causal structure of a data generation process and the causal structure of a prediction mechanism. To achieve this, we propose a framework that identifies the feature with the greatest causal influence on the prediction and estimates the causal intervention on a feature that is necessary to obtain a desired prediction. The general concept of the framework has no restrictions regarding data linearity; however, we focus on an implementation for linear data here. The framework's applicability is evaluated using artificial data and demonstrated using real-world data.
    Comment: To appear in Proc. IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2017).
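
    Below is a minimal sketch of the core idea: estimating how an intervention on one feature changes a model's prediction by propagating the intervention through an assumed linear causal structure. The structural equations, coefficients, and the chain x1 -> x2 -> y are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical linear data-generating process: x1 -> x2 -> y
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
y = 2.0 * x2 + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(np.column_stack([x1, x2]), y)

def mean_prediction_under_do_x1(v, n_samples=10_000):
    """Estimate E[f(X) | do(x1 = v)]: fix x1 and resample its descendants."""
    x1_do = np.full(n_samples, v)
    x2_do = 0.8 * x1_do + rng.normal(scale=0.5, size=n_samples)  # x2 reacts to x1
    return model.predict(np.column_stack([x1_do, x2_do])).mean()

# Interventional effect on the prediction of shifting x1 from 0 to 1
print(mean_prediction_under_do_x1(1.0) - mean_prediction_under_do_x1(0.0))
```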

    Unsupervised Dimensionality Reduction for Transfer Learning

    Blöbaum P, Schulz A, Hammer B. Unsupervised Dimensionality Reduction for Transfer Learning. In: Verleysen M, ed. Proceedings, 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Louvain-la-Neuve: Ciaco; 2015: 507-512.
    We investigate the suitability of unsupervised dimensionality reduction (DR) for transfer learning in the context of different representations of the source and target domain. Essentially, unsupervised DR establishes a link between the source and target domains by representing the data in a common latent space. We consider two settings: a linear DR of source and target data which establishes correspondences of the data and a corresponding transfer, and its combination with a non-linear DR which allows adaptation to more complex data characterised by a global non-linear structure.
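
    A minimal sketch of the shared-latent-space idea, using PCA as the linear DR. The placeholder data and the classifier are illustrative, and the paper's method additionally establishes correspondences between the two projections, which this sketch omits.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical domains with different feature representations (20-d vs. 30-d)
X_src = rng.normal(size=(500, 20))
y_src = (X_src[:, 0] > 0).astype(int)   # labels exist only in the source domain
X_tgt = rng.normal(size=(300, 30))      # unlabeled target data

# Project each domain into a common k-dimensional latent space
k = 5
Z_src = PCA(n_components=k).fit_transform(X_src)
Z_tgt = PCA(n_components=k).fit_transform(X_tgt)

# Train on the source latent representation, transfer to the target one
clf = LogisticRegression().fit(Z_src, y_src)
y_tgt_pred = clf.predict(Z_tgt)
print(y_tgt_pred[:10])
```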

    Analysis of cause-effect inference by comparing regression errors

    We address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions. Under the assumption of independence between the function relating cause and effect, the conditional noise distribution, and the distribution of the cause, we show that the errors are smaller in the causal direction if both variables are equally scaled and the causal relation is close to deterministic. Based on this, we provide an easily applicable algorithm that only requires a regression in both possible causal directions and a comparison of the errors. The performance of the algorithm is compared with various related causal inference methods on different artificial and real-world data sets.
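
    The algorithm is simple enough to sketch directly. The polynomial regression, its degree, and the min-max scaling below are illustrative choices, not necessarily those of the paper; the essential step is comparing the mean squared errors of the two fits.

```python
import numpy as np

def infer_direction_by_errors(x, y, degree=3):
    """Return the direction with the smaller least-squares regression error."""
    # Put both variables on an equal scale, as the identifiability result assumes
    x = (x - x.min()) / (x.max() - x.min())
    y = (y - y.min()) / (y.max() - y.min())
    # Least-squares polynomial fits in both possible causal directions
    err_xy = np.mean((y - np.polyval(np.polyfit(x, y, degree), x)) ** 2)
    err_yx = np.mean((x - np.polyval(np.polyfit(y, x, degree), y)) ** 2)
    return "x -> y" if err_xy < err_yx else "y -> x"

# Toy example: x causes y through a non-invertible, near-deterministic relation
rng = np.random.default_rng(2)
x = rng.uniform(size=2000)
y = np.sin(2.0 * x) + 0.05 * rng.normal(size=2000)
print(infer_direction_by_errors(x, y))  # expected: "x -> y"
```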

    Dendritic cell-specific deletion of β-catenin results in fewer regulatory T-cells without exacerbating autoimmune collagen-induced arthritis

    Dendritic cells (DCs) are professional antigen-presenting cells that have the dual ability to stimulate immunity and maintain tolerance. However, the signalling pathways mediating tolerogenic DC function in vivo remain largely unknown. The β-catenin pathway has been suggested to promote a regulatory DC phenotype. The aim of this study was to unravel the role of β-catenin signalling in controlling DC function in the autoimmune collagen-induced arthritis (CIA) model. Deletion of β-catenin specifically in DCs was achieved by crossing conditional knockout mice with a CD11c-Cre transgene.

    Toward Falsifying Causal Graphs Using a Permutation-Based Test

    Understanding the causal relationships among the variables of a system is paramount to explaining and controlling its behaviour. Inferring the causal graph from observational data without interventions, however, requires strong assumptions that are not always realistic. Even for domain experts, it can be challenging to express the causal graph. Therefore, metrics that quantitatively assess the goodness of a causal graph provide helpful checks before using it in downstream tasks. Existing metrics provide an absolute number of inconsistencies between the graph and the observed data, and without a baseline, practitioners are left to answer the hard question of how many such inconsistencies are acceptable or expected. Here, we propose a novel consistency metric by constructing a surrogate baseline through node permutations. By comparing the number of inconsistencies with those on the surrogate baseline, we derive an interpretable metric that captures whether the DAG fits significantly better than random. Evaluating on both simulated and real data sets from various domains, including biology and cloud monitoring, we demonstrate that the true DAG is not falsified by our metric, whereas the wrong graphs given by a hypothetical user are likely to be falsified.
    Comment: 23 pages, 9 figures.
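
    The permutation baseline itself is easy to sketch. The graph encoding (a dict of parent lists) and the `count_inconsistencies` callback below are placeholders; the paper's metric counts violations of the independences implied by the DAG, which any suitable conditional-independence test can supply.

```python
import numpy as np

def permutation_falsification_test(graph, data, count_inconsistencies,
                                   n_permutations=100, seed=0):
    """Compare a DAG's inconsistencies with the data against node-permuted DAGs.

    graph: dict mapping each node to the list of its parents.
    count_inconsistencies: callable(graph, data) -> number of violations.
    """
    rng = np.random.default_rng(seed)
    nodes = list(graph)
    observed = count_inconsistencies(graph, data)
    baseline = []
    for _ in range(n_permutations):
        # Relabel the nodes at random: same structure, random variable assignment
        relabel = dict(zip(nodes, rng.permutation(nodes)))
        permuted = {relabel[v]: [relabel[p] for p in ps] for v, ps in graph.items()}
        baseline.append(count_inconsistencies(permuted, data))
    # Fraction of random relabelings that fit the data at least as well
    p_value = float(np.mean([b <= observed for b in baseline]))
    return observed, p_value  # small p-value: the DAG beats the random baseline
```

    To the best of my knowledge, an implementation of this test by the authors is available in the DoWhy library (dowhy.gcm.falsify); the sketch above only illustrates the permutation-baseline construction.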

    Identifiability of Cause and Effect using Regularized Regression

    We consider the problem of telling apart cause from effect between two univariate continuous-valued random variables X and Y. In general, it is impossible to make definite statements about causality without making assumptions about the underlying model; one of the most important aspects of causal inference is hence to determine under which assumptions we are able to do so. In this paper, we show under which general conditions we can identify cause from effect by simply choosing the direction with the best regression score. We define a general framework of identifiable regression-based scoring functions and show how to instantiate it in practice using regression splines. Compared to existing methods that either give strong guarantees but are hardly applicable in practice, or provide no guarantees but do work well in practice, our instantiation combines the best of both worlds: it gives guarantees, while empirical evaluation on synthetic and real-world data shows that it performs at least as well as the state of the art.
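
    A hedged sketch of the decision rule with a spline-based score. The cross-validated negative MSE used here is a stand-in for the paper's identifiable regularized scores, and all modelling choices (knots, degree, ridge penalty) are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def spline_score(a, b):
    """Cross-validated fit quality of regressing b on a with regularized splines."""
    model = make_pipeline(SplineTransformer(degree=3, n_knots=10), RidgeCV())
    return cross_val_score(model, a.reshape(-1, 1), b, cv=5,
                           scoring="neg_mean_squared_error").mean()

def infer_direction(x, y):
    # Standardize so the scores in the two directions are comparable
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return "X -> Y" if spline_score(x, y) > spline_score(y, x) else "Y -> X"

# Toy example: a non-invertible mechanism makes the anticausal fit poor
rng = np.random.default_rng(3)
x = rng.normal(size=1500)
y = x ** 2 + 0.2 * rng.normal(size=1500)
print(infer_direction(x, y))  # expected: "X -> Y"
```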

    Local truncation error of low-order fractional variational integrators

    We study the local truncation error of the so-called fractional variational integrators, recently developed based on previous work by Riewe and Cresson. These integrators are obtained through two main elements: the enlargement of the usual mechanical Lagrangian state space by the introduction of the fractional derivatives of the dynamical curves, and a discrete restricted variational principle, in the spirit of discrete mechanics and variational integrators. The fractional variational integrators are designed for modelling fractional dissipative systems, which, in particular cases, reduce to mechanical systems with linear damping. All of these elements are introduced in the paper. In addition, as an original result, we prove (Sect. 3, Theorem 2) the order of the local truncation error of the fractional variational integrators with respect to the dynamics of mechanical systems with linear damping.
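
    As a reminder of the quantity being analysed, under the usual convention for a one-step integrator (the paper's precise definition may differ in its details):

```latex
% Local truncation error of a one-step map \Phi_h on an exact trajectory q(t):
\[
  \tau_k(h) \;=\; \bigl\| \, q(t_{k+1}) - \Phi_h\bigl(q(t_k)\bigr) \bigr\|,
  \qquad t_{k+1} = t_k + h,
\]
% and the integrator has local order r when \tau_k(h) = O(h^{r+1}) as h -> 0.
```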