Inferring Regulatory Networks by Combining Perturbation Screens and Steady State Gene Expression Profiles
Reconstructing transcriptional regulatory networks is an important task in
functional genomics. Data obtained from experiments that perturb genes by
knockouts or RNA interference contain useful information for addressing this
reconstruction problem. However, such data can be limited in size and/or are
expensive to acquire. On the other hand, observational data of the organism in
steady state (e.g. wild-type) are more readily available, but their
informational content is inadequate for the task at hand. We develop a
computational approach to appropriately utilize both data sources for
estimating a regulatory network. The proposed approach is based on a three-step
algorithm that takes both perturbation screens and steady state gene expression
data as input and estimates the underlying directed, possibly cyclic, network. In the
first step, the algorithm determines causal orderings of the genes that are
consistent with the perturbation data, by combining an exhaustive search method
with a fast heuristic that in turn couples a Monte Carlo technique with a fast
search algorithm. In the second step, for each obtained causal ordering, a
regulatory network is estimated using a penalized likelihood based method,
while in the third step a consensus network is constructed from the highest
scored ones. Extensive computational experiments show that the algorithm
performs well in reconstructing the underlying network and clearly outperforms
competing approaches that rely only on a single data source. Further, it is
established that the algorithm produces a consistent estimate of the regulatory
network.
Comment: 24 pages, 4 figures, 6 tables
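A minimal sketch of the three-step pipeline in Python, under toy assumptions:
the knockout-effect matrix perturb, the helper names, and the lasso penalty
(via scikit-learn) are illustrative rather than the authors' implementation,
and each per-ordering fit below is acyclic even though the paper's estimand
may contain cycles.

    from itertools import permutations
    import numpy as np
    from sklearn.linear_model import Lasso

    def consistent_orderings(perturb, genes):
        # Step 1 (toy exhaustive variant): keep orderings in which every
        # observed knockout effect i -> j places gene i before gene j.
        for order in permutations(genes):
            pos = {g: k for k, g in enumerate(order)}
            if all(pos[i] < pos[j]
                   for i in genes for j in genes if perturb[i][j]):
                yield order

    def fit_network(X, order, alpha=0.1):
        # Step 2: regress each gene on its predecessors in the ordering
        # with an L1 penalty; nonzero coefficients become candidate edges.
        p = X.shape[1]
        B, score = np.zeros((p, p)), 0.0
        for k, j in enumerate(order):
            parents = list(order[:k])
            if not parents:
                continue
            model = Lasso(alpha=alpha).fit(X[:, parents], X[:, j])
            B[parents, j] = model.coef_
            score -= np.sum((model.predict(X[:, parents]) - X[:, j]) ** 2)
        return B, score

    def consensus_network(X, perturb, genes, top=5, thresh=0.5):
        # Step 3: average the edge patterns of the highest-scoring
        # networks and keep edges that appear in a majority of them.
        fits = sorted((fit_network(X, o)
                       for o in consistent_orderings(perturb, genes)),
                      key=lambda bs: bs[1], reverse=True)[:top]
        votes = np.mean([np.abs(B) > 1e-8 for B, _ in fits], axis=0)
        return votes >= thresh

The exhaustive search over permutations is only viable for a handful of
genes; the paper replaces it with the Monte Carlo heuristic described above.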
RIDDLE: Race and ethnicity Imputation from Disease history with Deep LEarning
Anonymized electronic medical records are an increasingly popular source of
research data. However, these datasets often lack race and ethnicity
information. This creates problems for researchers modeling human disease, as
race and ethnicity are powerful confounders for many health exposures and
treatment outcomes; race and ethnicity are closely linked to
population-specific genetic variation. We showed that deep neural networks
generate more accurate estimates for missing racial and ethnic information than
competing methods (e.g., logistic regression, random forest). RIDDLE yielded
significantly better classification performance across all metrics
considered: accuracy, cross-entropy loss (error), and area under the
receiver operating characteristic curve. We made specific
efforts to interpret the trained neural network models to identify, quantify,
and visualize medical features which are predictive of race and ethnicity. We
used these characterizations of informative features to perform a systematic
comparison of differential disease patterns by race and ethnicity. The fact
that clinical histories are informative for imputing race and ethnicity could
reflect (1) a skewed distribution of blue- and white-collar professions across
racial and ethnic groups, (2) uneven accessibility and subjective importance of
prophylactic health, (3) possible variation in lifestyle, such as dietary
habits, and (4) differences in background genetic variation which predispose to
diseases.
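As a rough illustration of the classification task (not the RIDDLE software
itself; the synthetic disease-indicator data, class count, and network shape
here are invented for this sketch):

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, log_loss

    rng = np.random.default_rng(0)
    n, n_codes, n_classes = 5000, 200, 4       # synthetic stand-in data
    X = rng.random((n, n_codes)) < 0.05        # sparse disease indicators
    w = rng.normal(size=(n_codes, n_classes))
    y = (X @ w + rng.gumbel(size=(n, n_classes))).argmax(axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    models = [("mlp", MLPClassifier(hidden_layer_sizes=(256, 128),
                                    max_iter=300, random_state=0)),
              ("logreg", LogisticRegression(max_iter=1000))]
    for name, clf in models:
        clf.fit(X_tr, y_tr)
        proba = clf.predict_proba(X_te)
        print(name, accuracy_score(y_te, proba.argmax(axis=1)),
              log_loss(y_te, proba, labels=clf.classes_))

On data with genuinely nonlinear feature interactions, the multilayer
network would be expected to beat the linear baseline, which is the
comparison the paper reports.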
Advances in Learning Bayesian Networks of Bounded Treewidth
This work presents novel algorithms for learning Bayesian network structures
with bounded treewidth. Both exact and approximate methods are developed. The
exact method combines mixed-integer linear programming formulations for
structure learning and treewidth computation. The approximate method consists
in uniformly sampling k-trees (maximal graphs of treewidth k), and
subsequently selecting, exactly or approximately, the best structure whose
moral graph is a subgraph of that k-tree. Some properties of these methods
are discussed and proven. The approaches are empirically compared to each other
and to a state-of-the-art method for learning bounded treewidth structures on a
collection of public data sets with up to 100 variables. The experiments show
that our exact algorithm outperforms the state of the art, and that the
approximate approach is fairly accurate.
Comment: 23 pages, 2 figures, 3 tables
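To make the sampled object concrete, here is a hedged sketch that grows a
random k-tree by the standard incremental construction. This is not the
uniform sampler the paper's guarantees require (the paper relies on a
bijective encoding for that); it only illustrates what a k-tree is.

    import random

    def random_k_tree(n, k, seed=0):
        # A k-tree starts from a (k+1)-clique; every new vertex is
        # attached to some existing k-clique, keeping treewidth exactly k.
        rnd = random.Random(seed)
        adj = {v: set() for v in range(n)}
        root = list(range(k + 1))
        for u in root:                      # initial (k+1)-clique
            for v in root:
                if u != v:
                    adj[u].add(v)
        cliques = [frozenset(root[:i] + root[i + 1:])
                   for i in range(k + 1)]
        for v in range(k + 1, n):
            c = rnd.choice(cliques)         # attach v to a random k-clique
            for u in c:
                adj[u].add(v)
                adj[v].add(u)
            members = list(c) + [v]         # every k-subset of c + {v}
            for i in range(len(members)):   # is a new k-clique
                cliques.append(frozenset(members[:i] + members[i + 1:]))
        return adj

A learned structure is then accepted only if its moral graph embeds into
the sampled k-tree, which certifies the treewidth bound.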
Layered Label Propagation: A MultiResolution Coordinate-Free Ordering for Compressing Social Networks
We continue the line of research on graph compression started with WebGraph,
but we move our focus to the compression of social networks in a proper sense
(e.g., LiveJournal): the approaches that have been used for a long time to
compress web graphs rely on a specific ordering of the nodes (lexicographical
URL ordering) whose extension to general social networks is not trivial. In
this paper, we propose a solution that mixes clusterings and orders, and devise
a new algorithm, called Layered Label Propagation, that builds on previous work
on scalable clustering and can be used to reorder very large graphs (billions
of nodes). Our implementation uses overdecomposition to perform aggressively on
multi-core architectures, making it possible to reorder graphs of more than
600 million nodes in a few hours. Experiments performed on a wide array of web
graphs and social networks show that combining the order produced by the
proposed algorithm with the WebGraph compression framework provides a major
increase in compression with respect to all currently known techniques, both on
web graphs and on social networks. These improvements make it possible to
analyse significantly larger graphs in main memory.
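A hedged, sequential sketch of the core update (the real implementation is
multi-core, runs several values of the resolution parameter gamma in layers,
and builds on a specific label-propagation variant; the function and
parameter names below are illustrative):

    def propagate(graph, gamma, iters=10):
        # graph: dict node -> iterable of neighbours.
        label = {v: v for v in graph}        # every node starts alone
        volume = {v: 1 for v in graph}       # nodes currently per label
        for _ in range(iters):
            for v in graph:
                counts = {}
                for u in graph[v]:
                    counts[label[u]] = counts.get(label[u], 0) + 1
                if not counts:
                    continue
                # Score: neighbours carrying the label, penalised by the
                # label's volume elsewhere (gamma sets the resolution).
                best = max(counts, key=lambda l:
                           counts[l] - gamma * (volume[l] - counts[l]))
                if best != label[v]:
                    volume[label[v]] -= 1
                    volume[best] += 1
                    label[v] = best
        return label

The node permutation fed to the compressor is then obtained by sorting
nodes by final label (and, in the full algorithm, by combining the
orderings produced at several values of gamma, one per layer).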
PReaCH: A Fast Lightweight Reachability Index using Pruning and Contraction Hierarchies
We develop the data structure PReaCH (for Pruned Reachability Contraction
Hierarchies) which supports reachability queries in a directed graph, i.e., it
supports queries that ask whether two nodes in the graph are connected by a
directed path. PReaCH adapts the contraction hierarchy speedup techniques for
shortest path queries to the reachability setting. The resulting approach is
surprisingly simple and guarantees linear space and near linear preprocessing
time. Orthogonally to that, we improve existing pruning techniques for the
search by gathering more information from a single DFS-traversal of the graph.
PReaCH indices significantly outperform previous data structures with
comparable preprocessing cost. Methods with faster queries need significantly
more preprocessing time, in particular for the most difficult instances.
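One of the pruning ideas can be sketched as follows, under simplifying
assumptions: the input is a DAG given as an adjacency-list dict, whereas the
real PReaCH first contracts strongly connected components, prunes with DFS
pre/post intervals as well, and adds the contraction-hierarchy search.

    from collections import deque

    def topological_levels(graph):
        # level[v] = length of the longest path ending at v; if u reaches
        # w then level[u] < level[w], giving a cheap negative filter.
        indeg = {v: 0 for v in graph}
        for v in graph:
            for w in graph[v]:
                indeg[w] += 1
        q = deque(v for v in graph if indeg[v] == 0)
        level = {v: 0 for v in graph}
        while q:
            v = q.popleft()
            for w in graph[v]:
                level[w] = max(level[w], level[v] + 1)
                indeg[w] -= 1
                if indeg[w] == 0:
                    q.append(w)
        return level

    def reaches(graph, level, s, t):
        if s == t:
            return True
        if level[s] >= level[t]:
            return False              # s cannot possibly reach t
        stack, seen = [s], {s}
        while stack:
            v = stack.pop()
            if v == t:
                return True
            for w in graph[v]:
                # prune nodes whose level already rules out reaching t
                if w not in seen and level[w] <= level[t]:
                    seen.add(w)
                    stack.append(w)
        return False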
Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning
Finding tight bounds on the optimal solution is a critical element of
practical solution methods for discrete optimization problems. In the last
decade, decision diagrams (DDs) have brought a new perspective on obtaining
upper and lower bounds that can be significantly better than classical bounding
mechanisms, such as linear relaxations. It is well known that the quality of
the bounds achieved through this flexible bounding method is highly reliant on
the ordering of variables chosen for building the diagram, and finding an
ordering that optimizes standard metrics is an NP-hard problem. In this paper,
we propose an innovative and generic approach based on deep reinforcement
learning for obtaining an ordering for tightening the bounds obtained with
relaxed and restricted DDs. We apply the approach to both the Maximum
Independent Set Problem and the Maximum Cut Problem. Experimental results on
synthetic instances show that the deep reinforcement learning approach, by
achieving tighter objective function bounds, generally outperforms ordering
methods commonly used in the literature when the distribution of instances is
known. To the best of the authors' knowledge, this is the first paper to apply
machine learning to directly improve relaxation bounds obtained by
general-purpose bounding mechanisms for combinatorial optimization problems.
Comment: Accepted and presented at AAAI'19
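A hedged sketch of the bounding mechanism for the Maximum Independent Set
Problem, with the variable ordering taken as an input (learning that
ordering is the paper's contribution, not shown here). States are sets of
still-eligible vertices; merging low-value states by union is one standard
way to bound the width while keeping the upper bound valid.

    def misp_upper_bound(graph, order, max_width=8):
        # graph: dict vertex -> set of neighbours; order: all vertices.
        layer = {frozenset(graph): 0}    # eligible-vertex set -> value
        for v in order:
            nxt = {}
            for state, val in layer.items():
                out = state - {v}                           # exclude v
                nxt[out] = max(nxt.get(out, -1), val)
                if v in state:                              # include v
                    inc = state - {v} - graph[v]
                    nxt[inc] = max(nxt.get(inc, -1), val + 1)
            if len(nxt) > max_width:
                # Relaxation: merge the lowest-value states into one by
                # taking the union of their eligible sets.
                items = sorted(nxt.items(), key=lambda kv: kv[1],
                               reverse=True)
                keep, rest = items[:max_width - 1], items[max_width - 1:]
                union = frozenset().union(*(s for s, _ in rest))
                nxt = dict(keep)
                nxt[union] = max([nxt.get(union, -1)]
                                 + [val for _, val in rest])
            layer = nxt
        return max(layer.values())       # an upper bound on the optimum

An agent in the paper's spirit would be trained to choose order so that
the returned bound is as small, i.e. as tight, as possible.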
Sparse Linear Identifiable Multivariate Modeling
In this paper we consider sparse and identifiable linear latent variable
(factor) and linear Bayesian network models for parsimonious analysis of
multivariate data. We propose a computationally efficient method for joint
parameter and model inference, and model comparison. It consists of a fully
Bayesian hierarchy for sparse models using slab and spike priors (two-component
delta-function and continuous mixtures), non-Gaussian latent factors and a
stochastic search over the ordering of the variables. The framework, which we
call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and
benchmarked on artificial and real biological data sets. SLIM is closest in
spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in
inference, Bayesian network structure learning and model comparison.
Experimentally, SLIM performs equally well or better than LiNGAM with
comparable computational complexity. We attribute this mainly to the stochastic
search strategy used, and to parsimony (sparsity and identifiability), which is
an explicit part of the model. We propose two extensions to the basic i.i.d.
linear framework: non-linear dependence on observed variables, called SNIM
(Sparse Non-linear Identifiable Multivariate modeling) and allowing for
correlations between latent variables, called CSLIM (Correlated SLIM), for
temporal and/or spatial data. The source code and scripts are available from
http://cogsys.imm.dtu.dk/slim/.
Comment: 45 pages, 17 figures
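A hedged sketch of the model class being searched over (parameter names
invented for illustration; this is not the SLIM code at the URL above):
sparse weights drawn from a spike-and-slab prior plus non-Gaussian noise,
the combination that makes the variable ordering identifiable in the
LiNGAM sense.

    import numpy as np

    def sample_slim_like(p=5, n=1000, spike=0.7, slab_scale=1.0, seed=0):
        # Linear Bayesian network x = B x + e, so x = (I - B)^{-1} e.
        rng = np.random.default_rng(seed)
        order = rng.permutation(p)              # a causal ordering
        B = np.zeros((p, p))
        for i in range(p):
            for j in range(i):                  # edges respect the order
                if rng.random() > spike:        # slab: keep the edge
                    B[order[i], order[j]] = rng.normal(0, slab_scale)
        e = rng.laplace(size=(n, p))            # non-Gaussian noise
        X = e @ np.linalg.inv(np.eye(p) - B).T
        return X, B, order

Inference then runs this generative story in reverse: a stochastic search
over orderings, with the spike component switching individual edges off to
enforce sparsity.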