MERLiN: Mixture Effect Recovery in Linear Networks
Causal inference concerns the identification of cause-effect relationships
between variables, e.g. establishing whether a stimulus affects activity in a
certain brain region. The observed variables themselves often do not constitute
meaningful causal variables, however, and linear combinations need to be
considered. In electroencephalographic studies, for example, one is not
interested in establishing cause-effect relationships between electrode signals
(the observed variables), but rather between cortical signals (the causal
variables) which can be recovered as linear combinations of electrode signals.
We introduce MERLiN (Mixture Effect Recovery in Linear Networks), a family of
causal inference algorithms that implement a novel means of constructing causal
variables from non-causal variables. We demonstrate through application to EEG
data how the basic MERLiN algorithm can be extended for application to
different (neuroimaging) data modalities. Given an observed linear mixture, the
algorithms can recover a causal variable that is a linear effect of another
given variable. That is, MERLiN allows us to recover a cortical signal that is
affected by activity in a certain brain region, while not being a direct effect
of the stimulus. The Python/Matlab implementation for all presented algorithms
is available at https://github.com/sweichwald/MERLiN
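To make the recovery problem concrete, here is a minimal sketch of the setting described above, assuming a toy forward model. The variable names, the mixing matrix, and the least-squares fit are illustrative only and are not the MERLiN algorithm itself, which must find the weights without access to the hidden cortical signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Causal chain: stimulus S -> cortical signal C1 -> cortical signal C2.
S = rng.standard_normal(n)
C1 = 0.8 * S + 0.3 * rng.standard_normal(n)
C2 = 0.7 * C1 + 0.3 * rng.standard_normal(n)   # effect of C1, not a direct effect of S
C3 = rng.standard_normal(n)                     # unrelated background activity

# Electrodes observe only an (unknown) linear mixture of the cortical signals.
A = rng.standard_normal((5, 3))                 # hypothetical forward model, 5 electrodes
X = A @ np.vstack([C1, C2, C3])                 # observed 5 x n electrode signals

# MERLiN's goal is to find weights w such that w @ X is an effect of C1, using
# only X, the stimulus S and (a recovery of) C1.  As an oracle sanity check
# only, we fit w by least squares against the true C2 -- something the actual
# algorithm must achieve *without* ever seeing C2.
w, *_ = np.linalg.lstsq(X.T, C2, rcond=None)
print("corr(w @ X, C2) =", round(np.corrcoef(w @ X, C2)[0, 1], 3))
```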
Unfair Utilities and First Steps Towards Improving Them
Many fairness criteria constrain the policy or choice of predictors. In this
work, we propose a different framework for thinking about fairness: Instead of
constraining the policy or choice of predictors, we consider which utility a
policy is optimizing for. We define value of information fairness and propose
to not use utilities that do not satisfy this criterion. We describe how to
modify a utility to satisfy this fairness criterion and discuss the
consequences this might have on the corresponding optimal policies.
Personalized Brain-Computer Interface Models for Motor Rehabilitation
We propose to fuse two currently separate research lines on novel therapies
for stroke rehabilitation: brain-computer interface (BCI) training and
transcranial electrical stimulation (TES). Specifically, we show that BCI
technology can be used to learn personalized decoding models that relate the
global configuration of brain rhythms in individual subjects (as measured by
EEG) to their motor performance during 3D reaching movements. We demonstrate
that our models capture substantial across-subject heterogeneity, and argue
that this heterogeneity is a likely cause of limited effect sizes observed in
TES for enhancing motor performance. We conclude by discussing how our
personalized models can be used to derive optimal TES parameters, e.g.,
stimulation site and frequency, for individual patients.
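As a rough illustration of what a personalized decoding model of this kind might look like, the sketch below fits a per-subject ridge regression from EEG band-power features to a trial-wise motor-performance score. The feature representation, the model class, and all names are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def decode_subject(band_power, performance):
    """Fit a per-subject linear model from EEG band-power features
    (trials x features) to a motor-performance score per trial."""
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    # Cross-validated R^2 as a rough measure of how much of the subject's
    # trial-to-trial performance the global EEG configuration explains.
    score = cross_val_score(model, band_power, performance,
                            cv=5, scoring="r2").mean()
    return model.fit(band_power, performance), score

# Toy data standing in for one subject's recordings:
# 120 reaching trials, 64 band-power features per trial.
X = rng.standard_normal((120, 64))
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.5 * rng.standard_normal(120)

model, r2 = decode_subject(X, y)
print(f"cross-validated R^2 for this subject: {r2:.2f}")
```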
Causal Consistency of Structural Equation Models
Complex systems can be modelled at various levels of detail. Ideally, causal
models of the same system should be consistent with one another in the sense
that they agree in their predictions of the effects of interventions. We
formalise this notion of consistency in the case of Structural Equation Models
(SEMs) by introducing exact transformations between SEMs. This provides a
general language to consider, for instance, the different levels of description
in the following three scenarios: (a) models with large numbers of variables
versus models in which the `irrelevant' or unobservable variables have been
marginalised out; (b) micro-level models versus macro-level models in which the
macro-variables are aggregate features of the micro-variables; (c) dynamical
time series models versus models of their stationary behaviour. Our analysis
stresses the importance of well specified interventions in the causal modelling
process and sheds light on the interpretation of cyclic SEMs.
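A toy instance of scenario (b) may help: two micro-variables are aggregated into one macro-variable, and both models agree on the effect of corresponding interventions. The specific SEMs and the aggregation map below are illustrative assumptions, not an example taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

def micro_do(x1, x2):
    """Micro-level SEM: X1, X2 exogenous; Z := X1 + X2 + noise.
    Return samples of Z under the intervention do(X1=x1, X2=x2)."""
    return x1 + x2 + 0.1 * rng.standard_normal(n)

def macro_do(y):
    """Macro-level SEM over the aggregate Y := X1 + X2; Z := Y + noise.
    Return samples of Z under the intervention do(Y=y)."""
    return y + 0.1 * rng.standard_normal(n)

# Under the aggregation map tau(x1, x2) = x1 + x2, the micro intervention
# do(X1=1, X2=2) corresponds to the macro intervention do(Y=3); consistency
# means both models predict the same effect on Z.
print("micro:", micro_do(1.0, 2.0).mean())   # approx. 3.0
print("macro:", macro_do(3.0).mean())        # approx. 3.0
```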
Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise
We introduce coroICA, confounding-robust independent component analysis, a
novel ICA algorithm which decomposes linearly mixed multivariate observations
into independent components that are corrupted (and rendered dependent) by
hidden group-wise stationary confounding. It extends the ordinary ICA model in
a theoretically sound and explicit way to incorporate group-wise (or
environment-wise) confounding. We show that our proposed general noise model
allows one to perform ICA in settings where other noisy ICA procedures fail.
Additionally, it can be used for applications with grouped data by adjusting
for different stationary noise within each group. Our proposed noise model has
a natural relation to causality and we explain how it can be applied in the
context of causal inference. In addition to our theoretical framework, we
provide an efficient estimation procedure and prove identifiability of the
unmixing matrix under mild assumptions. Finally, we illustrate the performance
and robustness of our method on simulated data, provide audible and visual
examples, and demonstrate the applicability to real-world scenarios by
experiments on publicly available Antarctic ice core data as well as two EEG
data sets. We provide a scikit-learn compatible pip-installable Python package
coroICA as well as R and Matlab implementations, accompanied by documentation
at https://sweichwald.de/coroICA/
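For intuition, the following sketch only generates data following the noise model described above (independent non-Gaussian sources corrupted by group-wise stationary confounding, then linearly mixed) and runs ordinary FastICA for contrast; it does not use or mimic the coroICA package's API.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_per_group, n_groups, d = 2000, 3, 4

segments = []
for g in range(n_groups):
    # Independent, non-Gaussian source signals.
    S = rng.laplace(size=(n_per_group, d))
    # Group-wise stationary confounding: within each group the confounder has
    # a fixed covariance, but that covariance changes from group to group.
    cov = np.diag(rng.uniform(0.5, 2.0, size=d))
    H = rng.multivariate_normal(np.zeros(d), cov, size=n_per_group)
    segments.append(S + H)

A = rng.standard_normal((d, d))            # unknown mixing matrix
X = np.concatenate(segments) @ A.T         # observed mixtures, all groups stacked

# Ordinary ICA ignores the group structure and treats the confounded mixtures
# as if they were noise-free; coroICA instead exploits the group-wise
# stationarity of the confounding to adjust for it.
ica = FastICA(n_components=d, random_state=0)
S_hat = ica.fit_transform(X)
print("recovered components shape:", S_hat.shape)
```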
Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data
Causal inference concerns the identification of cause-effect relationships between variables. However, often only linear combinations of variables constitute meaningful causal variables. For example, recovering the signal of a cortical source from electroencephalography requires a well-tuned combination of signals recorded at multiple electrodes. We recently introduced the MERLiN (Mixture Effect Recovery in Linear Networks) algorithm that is able to recover, from an observed linear mixture, a causal variable that is a linear effect of another given variable. Here we relax the assumption of this cause-effect relationship being linear and present an extended algorithm that can pick up non-linear cause-effect relationships. Thus, the main contribution is an algorithm (and ready-to-use code) that has broader applicability and allows for a richer model class. Furthermore, a comparative analysis indicates that the assumption of linear cause-effect relationships is not restrictive in analysing electroencephalographic data.
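The need for a non-linear extension can be illustrated with a toy example: a purely quadratic effect has near-zero linear correlation with its cause but is detected by a non-linear dependence measure. The HSIC estimator below is a generic stand-in for whichever dependence criterion the extended algorithm actually uses; all names and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate with Gaussian kernels -- a generic non-linear
    dependence measure, used here purely for illustration."""
    n = len(x)
    K = np.exp(-np.subtract.outer(x, x) ** 2 / (2 * sigma ** 2))
    L = np.exp(-np.subtract.outer(y, y) ** 2 / (2 * sigma ** 2))
    H = np.eye(n) - 1.0 / n                     # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2

n = 500
C2 = rng.standard_normal(n)
C3 = C2 ** 2 + 0.1 * rng.standard_normal(n)     # purely non-linear effect of C2

print("linear correlation:", round(float(np.corrcoef(C2, C3)[0, 1]), 3))  # near 0
print("HSIC:             ", hsic(C2, C3))
print("HSIC (permuted):  ", hsic(C2, rng.permutation(C3)))  # baseline, near 0
```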
Simple Sorting Criteria Help Find the Causal Order in Additive Noise Models
Additive Noise Models (ANM) encode a popular functional assumption that
enables learning causal structure from observational data. Due to a lack of
real-world data meeting the assumptions, synthetic ANM data are often used to
evaluate causal discovery algorithms. Reisach et al. (2021) show that, for
common simulation parameters, a variable ordering by increasing variance is
closely aligned with a causal order and introduce var-sortability to quantify
the alignment. Here, we show that not only variance, but also the fraction of a
variable's variance explained by all others, as captured by the coefficient of
determination R^2, tends to increase along the causal order. Simple baseline
algorithms can use R^2-sortability to match the performance of established
methods. Since R^2-sortability is invariant under data rescaling, these
algorithms perform equally well on standardized or rescaled data, addressing a
key limitation of algorithms exploiting var-sortability. We characterize and
empirically assess R^2-sortability for different simulation parameters. We
show that all simulation parameters can affect R^2-sortability and must be
chosen deliberately to control the difficulty of the causal discovery task and
the real-world plausibility of the simulated data. We provide an implementation
of the sortability measures and sortability-based algorithms in our library
CausalDisco (https://github.com/CausalDisco/CausalDisco).
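The quantity at the centre of this abstract is straightforward to compute from scratch. The sketch below simulates a small linear additive noise model along a known causal order and reports each variable's R^2 when regressed on all others; it is a from-scratch illustration, not the CausalDisco implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n, d = 5000, 5

# Linear ANM along the causal order X0 -> X1 -> ... -> X4 (a chain DAG).
X = np.zeros((n, d))
for j in range(d):
    parents = X[:, :j]
    weights = rng.uniform(0.5, 2.0, size=j)
    X[:, j] = parents @ weights + rng.standard_normal(n)

def r2_given_rest(X, j):
    """Coefficient of determination of X_j regressed on all other variables."""
    others = np.delete(X, j, axis=1)
    return LinearRegression().fit(others, X[:, j]).score(others, X[:, j])

r2 = [r2_given_rest(X, j) for j in range(d)]
print("R^2 per variable (true causal order):", np.round(r2, 2))
# In such simulations, downstream variables tend to have higher R^2, so an
# ordering by increasing R^2 tends to be close to the true causal order --
# the R^2-sortability phenomenon described above.
print("order by increasing R^2:", np.argsort(r2))
```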