Beta Power May Mediate the Effect of Gamma-TACS on Motor Performance
Transcranial alternating current stimulation (tACS) is becoming an important
method in the field of motor rehabilitation because of its ability to
non-invasively influence ongoing brain oscillations at arbitrary frequencies.
However, substantial variations in its effect across individuals are reported,
making tACS a currently unreliable treatment tool. One reason for this
variability is the lack of knowledge about the exact way tACS entrains and
interacts with ongoing brain oscillations. The present crossover stimulation
study on 20 healthy subjects contributes to the understanding of
cross-frequency effects of gamma (70 Hz) tACS over the contralateral motor
cortex by providing empirical evidence consistent with a role of low-beta
(12-20 Hz) and high-beta (20-30 Hz) power as a mediator of the effect of
gamma-tACS on motor performance.
Comment: 7 pages, 5 figures, in Proceedings of the IEEE Engineering in
Medicine and Biology Conference, July 2019 (IEEE license notice)
Bounding probabilities of causation through the causal marginal problem
Probabilities of Causation play a fundamental role in decision making in law,
health care and public policy. Nevertheless, their point identification is
challenging, requiring strong assumptions such as monotonicity. In the absence
of such assumptions, existing work requires multiple observations of datasets
that contain the same treatment and outcome variables, in order to establish
bounds on these probabilities. However, in many clinical trials and public
policy evaluation cases, there exist independent datasets that examine the
effect of a different treatment each on the same outcome variable. Here, we
outline how to significantly tighten existing bounds on the probabilities of
causation, by imposing counterfactual consistency between SCMs constructed from
such independent datasets (the 'causal marginal problem'). Next, we describe a
new information-theoretic approach to the falsification of counterfactual
probabilities, using conditional mutual information to quantify counterfactual
influence. The latter generalises to arbitrary discrete variables and numbers
of treatments, and renders the causal marginal problem more interpretable.
Since the question of what counts as 'tight enough' is left to the user, we
provide an additional method of inference for when the bounds are
unsatisfactory: a maximum-entropy-based method that defines a metric on the
space of plausible SCMs and proposes the entropy-maximising SCM for inferring
counterfactuals in the absence of further information.
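As context for the bounds being tightened, the classical Tian-Pearl bounds on the probability of necessity and sufficiency (PNS) can be computed from experimental interventional probabilities alone; the sketch below shows these baseline bounds (function name and example numbers are illustrative, not from the paper).

```python
def pns_bounds(p_y_do_x1, p_y_do_x0):
    """Tian-Pearl bounds on PNS = P(y_x, y'_x') from experimental
    data alone: P(y | do(X=1)) and P(y | do(X=0))."""
    lower = max(0.0, p_y_do_x1 - p_y_do_x0)
    upper = min(p_y_do_x1, 1.0 - p_y_do_x0)
    return lower, upper

# Example: treatment raises the outcome rate from 0.3 to 0.7,
# so PNS lies roughly in [0.4, 0.7] without further assumptions.
lo, hi = pns_bounds(0.7, 0.3)
```

The causal marginal problem described above then narrows such intervals further by requiring counterfactual consistency across SCMs fitted to independent datasets sharing the outcome variable.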
Toward Falsifying Causal Graphs Using a Permutation-Based Test
Understanding the causal relationships among the variables of a system is
paramount to explain and control its behaviour. Inferring the causal graph from
observational data without interventions, however, requires strong
assumptions that are not always realistic. Even for domain experts, it can be
challenging to specify the causal graph. Therefore, metrics that quantitatively
assess the goodness of a causal graph provide helpful checks before using it in
downstream tasks. Existing metrics provide an absolute number of
inconsistencies between the graph and the observed data, and without a
baseline, practitioners are left to answer the hard question of how many such
inconsistencies are acceptable or expected. Here, we propose a novel
consistency metric by constructing a surrogate baseline through node
permutations. By comparing the number of inconsistencies with those on the
surrogate baseline, we derive an interpretable metric that captures whether the
DAG fits significantly better than random. Evaluating on both simulated and
real data sets from various domains, including biology and cloud monitoring, we
demonstrate that the true DAG is not falsified by our metric, whereas the wrong
graphs given by a hypothetical user are likely to be falsified.
Comment: 23 pages, 9 figures
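A minimal sketch of the permutation idea on a three-node chain, assuming a linear-Gaussian model and a hypothetical partial-correlation threshold; the inconsistency count and the test itself are illustrative stand-ins, not the paper's implementation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulate a linear-Gaussian chain X0 -> X1 -> X2.
x0 = rng.normal(size=n)
x1 = 2.0 * x0 + rng.normal(size=n)
x2 = -1.5 * x1 + rng.normal(size=n)
data = np.column_stack([x0, x1, x2])

def partial_corr(data, i, j, k):
    """Correlation of X_i and X_j after regressing out X_k."""
    def resid(y, z):
        z = np.column_stack([np.ones(len(z)), z])
        beta, *_ = np.linalg.lstsq(z, y, rcond=None)
        return y - z @ beta
    ri = resid(data[:, i], data[:, k])
    rj = resid(data[:, j], data[:, k])
    return np.corrcoef(ri, rj)[0, 1]

def inconsistencies(data, order, thresh=0.1):
    """For the chain order[0] -> order[1] -> order[2], the only implied
    conditional independence is order[0] _|_ order[2] | order[1]; count
    it as violated when the partial correlation exceeds the threshold."""
    a, b, c = order
    return int(abs(partial_corr(data, a, c, b)) > thresh)

candidate = (0, 1, 2)  # the candidate (here: true) causal ordering
n_viol = inconsistencies(data, candidate)

# Surrogate baseline: recount inconsistencies under node permutations.
baseline = [inconsistencies(data, p)
            for p in itertools.permutations(range(3))]
p_value = np.mean([v <= n_viol for v in baseline])
```

The candidate graph survives when its inconsistency count sits at the low end of the permutation baseline; a graph that fits no better than a random relabelling of its nodes would be falsified.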
Assumption violations in causal discovery and the robustness of score matching
When domain knowledge is limited and experimentation is restricted by ethical,
financial, or time constraints, practitioners turn to observational causal
discovery methods to recover the causal structure, exploiting the statistical
properties of their data. Because causal discovery without further assumptions
is an ill-posed problem, each algorithm comes with its own set of usually
untestable assumptions, some of which are hard to meet in real datasets.
Motivated by these considerations, this paper extensively benchmarks the
empirical performance of recent causal discovery methods on observational
i.i.d. data generated under different background conditions, allowing for
violations of the critical assumptions required by each selected approach. Our
experimental findings show that score matching-based methods achieve
surprisingly good false positive and false negative rates on the inferred
graph in these challenging scenarios, and we provide theoretical insights into
their performance. This work is also the first effort to benchmark the
stability of causal discovery algorithms with respect to the values of their
hyperparameters. Finally, we hope this paper will set a new standard for the
evaluation of causal discovery methods, serving as an accessible entry point
for practitioners interested in the field and highlighting the empirical
implications of different algorithm choices.