Reasoning about Independence in Probabilistic Models of Relational Data
We extend the theory of d-separation to cases in which data instances are not
independent and identically distributed. We show that applying the rules of
d-separation directly to the structure of probabilistic models of relational
data inaccurately infers conditional independence. We introduce relational
d-separation, a theory for deriving conditional independence facts from
relational models. We provide a new representation, the abstract ground graph,
that enables a sound, complete, and computationally efficient method for
answering d-separation queries about relational models, and we present
empirical results that demonstrate effectiveness.
Comment: 61 pages, substantial revisions to formalisms, theory, and related work
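The d-separation queries the abstract builds on can be illustrated on an ordinary (non-relational) DAG. Below is a minimal sketch using the classic ancestral-moralization reduction; it is not the paper's relational method or its abstract ground graph, and all node names are illustrative:

```python
from collections import defaultdict

def d_separated(edges, xs, ys, zs):
    """Check whether node sets xs and ys are d-separated given zs in a DAG.

    Classic reduction: restrict to the ancestral subgraph of xs | ys | zs,
    moralize it, delete zs, and test plain graph separation.
    `edges` is a list of (parent, child) pairs.
    """
    xs, ys, zs = set(xs), set(ys), set(zs)
    parents = defaultdict(set)
    for u, v in edges:
        parents[v].add(u)

    # 1. Ancestral subgraph of the query nodes.
    keep, stack = set(), list(xs | ys | zs)
    while stack:
        n = stack.pop()
        if n not in keep:
            keep.add(n)
            stack.extend(parents[n])

    # 2. Moralize: link co-parents of each node, then drop edge directions.
    adj = defaultdict(set)
    for v in keep:
        ps = parents[v] & keep
        for p in ps:
            adj[p].add(v)
            adj[v].add(p)
        for p in ps:
            for q in ps:
                if p != q:
                    adj[p].add(q)

    # 3. Delete zs and test reachability from xs to ys.
    stack, seen = list(xs - zs), set()
    while stack:
        n = stack.pop()
        if n in ys:
            return False  # undirected path found => not d-separated
        if n not in seen and n not in zs:
            seen.add(n)
            stack.extend(adj[n] - zs)
    return True
```

For example, in the chain A -> B -> C, conditioning on B separates A from C, while in the collider A -> C <- B, conditioning on C connects A and B.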
Inferring dynamic genetic networks with low order independencies
In this paper, we propose a novel inference method for dynamic genetic
networks that can handle a number of time measurements n much smaller than
the number of genes p. The approach is based on the concept of the low-order
conditional dependence graph, which we extend here to the case of
Dynamic Bayesian Networks. Most of our results are based on the theory of
graphical models associated with the Directed Acyclic Graphs (DAGs). In this
way, we define a minimal DAG G which describes exactly the full order
conditional dependencies given the past of the process. Then, to address the
large p, small n estimation case, we propose to approximate DAG G by
considering low order conditional independencies. We introduce partial qth
order conditional dependence DAGs G(q) and analyze their probabilistic
properties. In general, DAGs G(q) differ from DAG G but still reflect relevant
dependence facts for sparse networks such as genetic networks. By using this
approximation, we set out a non-Bayesian inference method and demonstrate the
effectiveness of this approach on both simulated and real data. The
inference procedure is implemented in the R package 'G1DBN', freely
available from the CRAN archive.
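The low-order idea can be sketched as follows: an edge i -> j survives in the first-order graph G(1) only if the dependence of gene j's next value on gene i's current value persists given every single other gene. This toy score uses partial correlations as a stand-in (the actual G1DBN estimator is different and regression-based); all variable names are illustrative:

```python
import numpy as np

def first_order_dependence(X):
    """Toy first-order conditional dependence scores for a dynamic network.

    X is an (n, p) time-series matrix (rows = time points, cols = genes).
    For each candidate edge i -> j we take the *minimum* absolute partial
    correlation of X_i(t) with X_j(t+1) given each single other gene X_k(t):
    the edge is kept in G(1) only if dependence holds for every first-order
    conditioning set, mirroring the low-order DAG approximation.
    """
    past, future = X[:-1], X[1:]
    p = X.shape[1]
    # correlations among past genes and between past and future genes
    r_pp = np.corrcoef(past, rowvar=False)
    r_pf = np.array([[np.corrcoef(past[:, i], future[:, j])[0, 1]
                      for j in range(p)] for i in range(p)])
    score = np.ones((p, p))
    for i in range(p):
        for j in range(p):
            for k in range(p):
                if k in (i, j):
                    continue
                num = r_pf[i, j] - r_pp[i, k] * r_pf[k, j]
                den = np.sqrt((1 - r_pp[i, k] ** 2) * (1 - r_pf[k, j] ** 2))
                pc = abs(num / den) if den > 0 else 0.0
                score[i, j] = min(score[i, j], pc)
    return score  # large score[i, j] suggests an edge i -> j in G(1)
```

On simulated data where gene 0 drives gene 1 and gene 2 is independent noise, score[0, 1] is large while score[2, 1] stays near zero.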
Massively-Parallel Feature Selection for Big Data
We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for
feature selection (FS) in Big Data settings (high dimensionality and/or sample
size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix
both in terms of rows (samples, training examples) as well as columns
(features). By employing the concept of p-values of conditional independence
tests and meta-analysis techniques, PFBP manages to rely only on computations
local to a partition while minimizing communication costs. Then, it employs
powerful and safe (asymptotically sound) heuristics to make early, approximate
decisions, such as Early Dropping of features from consideration in subsequent
iterations, Early Stopping of consideration of features within the same
iteration, or Early Return of the winner in each iteration. PFBP provides
asymptotic guarantees of optimality for data distributions faithfully
representable by a causal network (Bayesian network or maximal ancestral
graph). Our empirical analysis confirms a super-linear speedup of the algorithm
with increasing sample size, linear scalability with respect to the number of
features and processing cores, while dominating other competitive algorithms
in its class.