Learning Discriminative Bayesian Networks from High-dimensional Continuous Neuroimaging Data
Due to their causal semantics, Bayesian networks (BNs) have been widely employed
to discover underlying data relationships in exploratory studies, such as
brain research. Despite their success in modeling the probability distribution of
variables, BNs are inherently generative models, which are not necessarily
discriminative. As a result, subtle but critical network changes that are of
investigative value across populations may be overlooked. In this paper, we
propose to improve the discriminative power of BN models for continuous
variables from two different perspectives, yielding two general
discriminative learning frameworks for Gaussian Bayesian networks (GBNs). In the
first framework, we employ the Fisher kernel to bridge the generative models of
GBNs and the discriminative classifiers of SVMs, and convert GBN parameter
learning into Fisher kernel learning by minimizing a generalization error bound
of SVMs. In the second framework, we employ the max-margin criterion and build
it directly upon GBN models to explicitly optimize the classification
performance of the GBNs. The advantages and disadvantages of the two frameworks
are discussed and experimentally compared. Both demonstrate strong
power in learning discriminative parameters of GBNs for neuroimaging-based
brain network analysis while maintaining reasonable representation
capacity. The contributions of this paper also include a new Directed Acyclic
Graph (DAG) constraint with a theoretical guarantee to ensure the graph validity
of the GBN.
Comment: 16 pages and 5 figures for the article (excluding appendix)
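To make the first framework concrete, here is a minimal sketch (not the authors' code; the simple per-variable Gaussian model and all names are illustrative stand-ins for a full GBN) of how Fisher-score features derived from a generative model can feed an SVM through a precomputed Fisher kernel.

```python
# Hypothetical sketch: Fisher-kernel features from a Gaussian model + SVM.
# This is NOT the paper's GBN implementation; it only illustrates the idea of
# bridging a generative model and a discriminative classifier via the Fisher kernel.
import numpy as np
from sklearn.svm import SVC

def fisher_scores(X, mu, sigma2):
    """Per-sample gradients of the Gaussian log-likelihood w.r.t. (mu, sigma^2)."""
    d_mu = (X - mu) / sigma2                           # d log p / d mu
    d_s2 = ((X - mu) ** 2 - sigma2) / (2 * sigma2**2)  # d log p / d sigma^2
    return np.hstack([d_mu, d_s2])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Fit the generative model (here: independent per-variable Gaussians).
mu, sigma2 = X.mean(axis=0), X.var(axis=0) + 1e-6

# Fisher kernel = inner product of Fisher scores; train an SVM on it.
Phi = fisher_scores(X, mu, sigma2)
K = Phi @ Phi.T
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```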
Active Learning for Undirected Graphical Model Selection
This paper studies graphical model selection, i.e., the problem of estimating
a graph of statistical relationships among a collection of random variables.
Conventional graphical model selection algorithms are passive, i.e., they
require all the measurements to have been collected before processing begins.
We propose an active learning algorithm that uses junction tree representations
to adapt future measurements based on the information gathered from prior
measurements. We prove that, under certain conditions, our active learning
algorithm requires fewer scalar measurements than any passive algorithm to
reliably estimate a graph. A range of numerical results validates our theory and
demonstrates the benefits of active learning.
Comment: AISTATS 201
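For intuition about the active-versus-passive contrast, the sketch below is a hypothetical illustration only: it does not use the paper's junction tree machinery, and simply keeps measuring only the variables whose pairwise edge decisions remain uncertain, instead of re-measuring everything.

```python
# Hypothetical sketch of active graph selection (NOT the paper's junction-tree
# algorithm): each round measures only the variables involved in edge decisions
# that are still uncertain, rather than collecting full passive measurements.
import numpy as np

rng = np.random.default_rng(0)
p, thresh = 6, 0.2

def sample(n, variables):
    """Stand-in oracle: draw n joint samples, return only the requested coordinates."""
    Z = rng.normal(size=(n, p))
    Z[:, 1] += 0.8 * Z[:, 0]              # plant one true dependency: 0 -- 1
    return Z[:, variables]

active = list(range(p))                    # first round: measure every variable
corr = np.zeros((p, p))
for _ in range(3):
    X = sample(50, active)
    C = np.corrcoef(X, rowvar=False)
    for a, i in enumerate(active):
        for b, j in enumerate(active):
            corr[i, j] = C[a, b]
    # An edge decision |corr| vs. thresh is "uncertain" near the threshold;
    # next round, measure only variables involved in uncertain decisions.
    margin = np.abs(np.abs(corr) - thresh)
    np.fill_diagonal(margin, np.inf)
    uncertain = sorted(i for i in range(p) if margin[i].min() < 0.1)
    if len(uncertain) >= 2:
        active = uncertain

edges = [(i, j) for i in range(p) for j in range(i + 1, p) if abs(corr[i, j]) > thresh]
print("estimated edges:", edges)
```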
Discovering Dynamic Causal Space for DAG Structure Learning
Discovering causal structure from purely observational data (i.e., causal
discovery), aiming to identify causal relationships among variables, is a
fundamental task in machine learning. The recent advent of differentiable
score-based DAG learners is a crucial enabler, reframing the combinatorial
optimization problem as a differentiable optimization with a DAG constraint
over the space of directed graphs. Despite their great success, these
cutting-edge DAG learners evaluate directed graph candidates with score
functions that are independent of DAG-ness, and thus fail to account for graph
structure. As a result, measuring data fitness alone, regardless of DAG-ness,
inevitably leads to suboptimal DAGs and model vulnerabilities. To this end, we
propose a dynamic causal space for DAG structure learning, coined CASPER, which
integrates the graph structure into the score function as a new measure in the
causal space, so as to faithfully reflect the causal distance between the
estimated and ground-truth DAGs. CASPER revises the learning process and
enhances DAG structure learning via adaptive attention to DAG-ness. Grounded in
empirical visualization, CASPER, as a space, satisfies a series of desirable
properties, such as structure awareness and noise robustness. Extensive
experiments on both synthetic and real-world datasets clearly validate the
superiority of our CASPER over the state-of-the-art causal discovery methods in
terms of accuracy and robustness.
Comment: Accepted by KDD 2023. Our code is available at
https://github.com/liuff19/CASPE
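As an illustration of scoring candidate graphs with an explicit DAG-ness term, the sketch below combines a least-squares data-fitness term with the standard NOTEARS acyclicity function h(W) = tr(e^{W∘W}) - d; CASPER's actual causal-distance measure differs, so treat this purely as background on structure-aware scoring.

```python
# Hypothetical sketch: a score that mixes data fitness with a DAG-ness term.
# CASPER's causal-distance measure differs; this only illustrates the general
# idea of making the score aware of how "DAG-like" a candidate graph is.
import numpy as np
from scipy.linalg import expm

def fitness(W, X):
    """Least-squares reconstruction error of a linear SEM X ~ X @ W."""
    return 0.5 * np.mean((X - X @ W) ** 2)

def dagness(W):
    """NOTEARS term h(W) = tr(exp(W*W)) - d; zero iff the weighted graph is acyclic."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def score(W, X, lam=1.0):
    # Lower is better: good data fit AND close to the set of DAGs.
    return fitness(W, X) + lam * dagness(W)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
W_cyclic = np.array([[0., .8, 0.], [0., 0., .8], [.8, 0., 0.]])  # 3-cycle
W_dag    = np.array([[0., .8, 0.], [0., 0., .8], [0., 0., 0.]])  # chain
print("cyclic graph score:", score(W_cyclic, X))
print("DAG score:        ", score(W_dag, X))
```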
D'ya like DAGs? A Survey on Structure Learning and Causal Discovery
Causal reasoning is a crucial part of science and human intelligence. In
order to discover causal relationships from data, we need structure discovery
methods. We provide a review of background theory and a survey of methods for
structure discovery. We primarily focus on modern, continuous optimization
methods, and provide references to further resources such as benchmark datasets
and software packages. Finally, we discuss the assumptive leap required to take
us from structure to causality.
Comment: 35 page