On the Representation of Causal Background Knowledge and its Applications in Causal Inference
Causal background knowledge about the existence or the absence of causal
edges and paths is frequently encountered in observational studies. The shared
directed edges and links of a subclass of Markov equivalent DAGs refined by
background knowledge can be represented by a causal maximally partially
directed acyclic graph (MPDAG). In this paper, we first provide a sound and
complete graphical characterization of causal MPDAGs and give a minimal
representation of a causal MPDAG. Then, we introduce a novel representation
called direct causal clause (DCC) to represent all types of causal background
knowledge in a unified form. Using DCCs, we study the consistency and
equivalency of causal background knowledge and show that any causal background
knowledge set can be equivalently decomposed into a causal MPDAG plus a minimal
residual set of DCCs. Polynomial-time algorithms are also provided for checking
consistency and equivalency, and for finding the decomposed MPDAG and residual
DCCs. Finally, with causal background knowledge, we prove a necessary and
sufficient condition for identifying causal effects and, surprisingly, find that the
identifiability of causal effects only depends on the decomposed MPDAG. We also
develop a local IDA-type algorithm to estimate the possible values of an
unidentifiable effect. Simulations suggest that causal background knowledge can
significantly improve the identifiability of causal effects.
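
To make the clause representation concrete, here is a toy Python sketch under an assumed encoding (the names dccs, split_dccs, and consistent are hypothetical, and the paper's actual algorithms operate on MPDAGs rather than bare edge sets; this is an illustration, not the authors' implementation):

    # Assumed encoding: a DCC is a pair (x, S), read as "x is a direct
    # cause of at least one vertex in S"; singleton clauses fix an edge.
    dccs = [
        ("a", frozenset({"b"})),       # singleton clause: forces the edge a -> b
        ("b", frozenset({"c", "d"})),  # b directly causes c or d (or both)
    ]

    def split_dccs(dccs):
        """Split clauses into definite edges (singletons) and residual clauses."""
        edges, residual = set(), []
        for x, S in dccs:
            if len(S) == 1:
                edges.add((x, next(iter(S))))
            else:
                residual.append((x, S))
        return edges, residual

    def consistent(dccs, forbidden):
        """Consistent here means every clause keeps at least one allowed edge."""
        return all(any((x, s) not in forbidden for s in S) for x, S in dccs)

    edges, residual = split_dccs(dccs)
    print(edges)     # {('a', 'b')}: becomes a directed edge of the MPDAG
    print(residual)  # the non-singleton clause remains as a residual DCC
    print(consistent(dccs, {("b", "c"), ("b", "d")}))  # False: clause unsatisfiable

In this simplified picture, singleton clauses play the role of knowledge absorbed into the causal MPDAG, while genuinely disjunctive clauses are carried along as the residual set; the polynomial-time decomposition described in the abstract is of course more involved.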
Low Rank Directed Acyclic Graphs and Causal Structure Learning
Despite several important advances in recent years, learning causal
structures represented by directed acyclic graphs (DAGs) remains a challenging
task in high dimensional settings when the graphs to be learned are not sparse.
In particular, the recent formulation of structure learning as a continuous
optimization problem proved to have considerable advantages over the
traditional combinatorial formulation, but the performance of the resulting
algorithms is still wanting when the target graph is relatively large and
dense. In this paper, we propose a novel approach that mitigates this problem by
exploiting a low rank assumption regarding the (weighted) adjacency matrix of a
DAG causal model. We establish several useful results relating interpretable
graphical conditions to the low rank assumption, and show how to adapt existing
methods for causal structure learning to take advantage of this assumption. We
also provide empirical evidence for the utility of our low rank algorithms,
especially on graphs that are not sparse. Not only do they outperform
state-of-the-art algorithms when the low rank condition is satisfied, but their
performance on randomly generated scale-free graphs is also very competitive,
even though the true ranks may not be as low as assumed.
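
To make the low rank parameterization concrete, here is a toy Python sketch (assumptions: a NOTEARS-style least-squares score with the trace-exponential acyclicity penalty, plain gradient descent, and synthetic data; it illustrates the W = U V^T idea, not the paper's actual algorithm):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    d, r, n = 10, 2, 500

    # Hypothetical ground truth: DAG weights from a rank-r product, scaled
    # down so that forward sampling stays well-conditioned.
    W_true = 0.3 * np.triu(rng.normal(size=(d, r)) @ rng.normal(size=(r, d)), 1)
    X = rng.normal(size=(n, d))            # exogenous noise
    for j in range(d):                     # forward-sample the linear SEM
        X[:, j] += X @ W_true[:, j]

    # Low rank parameterization: W = U V^T with U, V of shape (d, r).
    U = rng.normal(scale=0.1, size=(d, r))
    V = rng.normal(scale=0.1, size=(d, r))
    lam, step = 5.0, 0.01

    for _ in range(2000):
        W = U @ V.T
        R = X - X @ W                      # residuals of the least-squares fit
        E = expm(W * W)                    # acyclicity: h(W) = tr(exp(W o W)) - d
        grad_W = -(X.T @ R) / n + lam * (E.T * 2 * W)
        gU, gV = grad_W @ V, grad_W.T @ U  # chain rule through W = U V^T
        U -= step * gU
        V -= step * gV

    print("acyclicity penalty:", np.trace(expm((U @ V.T) ** 2)) - d)

Optimizing over the d*r entries of U and V rather than the d^2 entries of W is what makes the low rank assumption pay off as d grows while r stays small.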