Inferring gene regression networks with model trees
Background: Novel strategies are required to handle the huge amount of data produced by microarray
technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between
genes, building the so-called gene co-expression networks. These are typically generated using correlation statistics
as pairwise similarity measures. Correlation-based methods are useful for determining whether two
genes have a strong global similarity, but they do not detect local similarities.
Results: We propose model trees as a method to identify gene interaction networks. Whereas correlation-based
methods analyze each pair of genes, our approach generates a single regression tree for each gene from the
remaining genes. Finally, a graph is built from all the relationships between output and input genes, retaining
only the gene pairs that are statistically significant; to this end we apply a statistical procedure to
control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two
well-known data sets: a Saccharomyces cerevisiae data set and an E. coli data set. First, the biological coherence of the results is
tested. Second, the E. coli transcriptional network (in the RegulonDB database) is used as a control to compare the
results to those of a correlation-based method. This experiment shows that REGNET detects true gene
associations more accurately than Pearson and Spearman zeroth- and first-order correlation-based methods.
Conclusions: REGNET generates gene association networks from gene expression data, and differs from
correlation-based methods in that the relationships between one gene and the others are calculated simultaneously.
Model trees are useful techniques for estimating the numerical values of target genes with linear regression
functions. They are often more precise than a single linear regression model because they fit different
linear regressions to separate areas of the search space, which favors inferring localized similarities over a more global
similarity. Furthermore, experimental results show the good performance of REGNET.
Funding: Ministerio de Ciencia e Innovación TIN2011-68084-C02-00 and PCI2006-A7-0575; Junta de Andalucía P07-TIC-02611 and TIC-20
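The per-gene regression idea can be sketched with scikit-learn's plain regression trees standing in for REGNET's model trees (scikit-learn has no M5-style model trees), and with a fixed importance cutoff standing in for the paper's FDR-controlling procedure. The synthetic data, tree depth, and cutoff below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                          # 100 samples x 5 genes (synthetic)
X[:, 1] = 0.8 * X[:, 0] + 0.1 * rng.normal(size=100)   # gene 1 strongly driven by gene 0

n_genes = X.shape[1]
adj = np.zeros((n_genes, n_genes))

# one regression tree per target gene, built from all remaining genes
for target in range(n_genes):
    inputs = np.delete(np.arange(n_genes), target)
    tree = DecisionTreeRegressor(max_depth=3, random_state=0)
    tree.fit(X[:, inputs], X[:, target])
    # importance of each input gene for predicting the target gene
    adj[inputs, target] = tree.feature_importances_

# illustrative cutoff; REGNET instead applies an FDR-controlling procedure here
edges = adj > 0.5
```

The driven pair (gene 0, gene 1) should stand out in `edges`, while the diagonal stays empty because each target gene is excluded from its own input set.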
Validating module network learning algorithms using simulated data
In recent years, several authors have used probabilistic graphical models to
learn expression modules and their regulatory programs from gene expression
data. Here, we demonstrate the use of the synthetic data generator SynTReN for
the purpose of testing and comparing module network learning algorithms. We
introduce a software package for learning module networks, called LeMoNe, which
incorporates a novel strategy for learning regulatory programs. Novelties
include the use of a bottom-up Bayesian hierarchical clustering to construct
the regulatory programs, and the use of a conditional entropy measure to assign
regulators to the regulation program nodes. Using SynTReN data, we test the
performance of LeMoNe in a completely controlled situation and assess the
effect of the methodological changes we made with respect to an existing
software package, namely Genomica. Additionally, we assess the effect of
various parameters, such as the size of the data set and the amount of noise,
on the inference performance. Overall, application of Genomica and LeMoNe to
simulated data sets gave comparable results. However, LeMoNe offers some
advantages, one of them being that the learning process is considerably faster
for larger data sets. Additionally, we show that the location of the regulators
in the LeMoNe regulation programs and their conditional entropy may be used to
prioritize regulators for functional validation, and that the combination of
the bottom-up clustering strategy with the conditional entropy-based assignment
of regulators improves the handling of missing or hidden regulators.
Comment: 13 pages, 6 figures + 2 pages, 2 figures supplementary information
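The conditional-entropy idea behind assigning regulators to regulation program nodes can be illustrated with a toy sketch: a regulator whose (discretized) state tracks a module split yields low conditional entropy of the split, while an unrelated regulator yields high entropy. This is not LeMoNe's actual code; the binary discretization and the synthetic data are assumptions made for illustration:

```python
import numpy as np

def conditional_entropy(split, reg_high):
    """H(split | regulator state) in bits, for binary vectors."""
    H = 0.0
    n = len(split)
    for state in (0, 1):
        mask = reg_high == state
        if mask.sum() == 0:
            continue
        p = split[mask].mean()          # P(split = 1 | regulator state)
        h = 0.0
        for q in (p, 1 - p):
            if q > 0:
                h -= q * np.log2(q)
        H += mask.sum() / n * h         # weight by P(regulator state)
    return H

rng = np.random.default_rng(1)
split = rng.integers(0, 2, 200)                              # side of a module split per sample
good_reg = (split ^ (rng.random(200) < 0.05)).astype(int)    # tracks the split, 5% noise
bad_reg = rng.integers(0, 2, 200)                            # unrelated regulator

scores = {"good": conditional_entropy(split, good_reg),
          "bad": conditional_entropy(split, bad_reg)}
# the candidate with the lowest conditional entropy is assigned to the split node
```

The informative regulator scores well below one bit, while the unrelated one stays near the one-bit maximum for a balanced binary split.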
Reconstructing dynamical networks via feature ranking
Empirical data on real complex systems are becoming increasingly available.
Parallel to this is the need for new methods of reconstructing (inferring) the
topology of networks from time-resolved observations of their node-dynamics.
The methods based on physical insights often rely on strong assumptions about
the properties and dynamics of the scrutinized network. Here, we use the
insights from machine learning to design a new method of network reconstruction
that essentially makes no such assumptions. Specifically, we interpret the
available trajectories (data) as features, and use two independent feature
ranking approaches -- Random forest and RReliefF -- to rank the importance of
each node for predicting the value of each other node, which yields the
reconstructed adjacency matrix. We show that our method is fairly robust to
coupling strength, system size, trajectory length and noise. We also find that
the reconstruction quality strongly depends on the dynamical regime.
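A minimal sketch of the feature-ranking reconstruction, using scikit-learn's random forest as one of the two rankers (RReliefF is omitted) and assuming, purely for illustration, that each node's next value is predicted from the other nodes' current values on a synthetic three-node chain:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# synthetic time series on a directed chain: node 0 -> node 1 -> node 2
rng = np.random.default_rng(0)
T, n = 500, 3
x = np.zeros((T, n))
x[0] = rng.normal(size=n)
for t in range(1, T):
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.normal(scale=0.1)
    x[t, 1] = 0.8 * x[t - 1, 0] + rng.normal(scale=0.1)
    x[t, 2] = 0.8 * x[t - 1, 1] + rng.normal(scale=0.1)

# rank every node's importance for predicting each other node
A = np.zeros((n, n))
for j in range(n):
    others = [i for i in range(n) if i != j]
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    rf.fit(x[:-1][:, others], x[1:, j])     # predict node j's next value
    A[others, j] = rf.feature_importances_   # column j = inferred drivers of j
```

Thresholding `A` yields the reconstructed adjacency matrix; here the true edges 0→1 and 1→2 dominate their columns.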
Variable selection for BART: An application to gene regulation
We consider the task of discovering gene regulatory networks, which are
defined as sets of genes and the corresponding transcription factors which
regulate their expression levels. This can be viewed as a variable selection
problem, potentially with high dimensionality. Variable selection is especially
challenging in high-dimensional settings, where it is difficult to detect
subtle individual effects and interactions between predictors. Bayesian
Additive Regression Trees [BART, Ann. Appl. Stat. 4 (2010) 266-298] provides a
novel nonparametric alternative to parametric regression approaches, such as
the lasso or stepwise regression, especially when the number of relevant
predictors is sparse relative to the total number of available predictors and
the fundamental relationships are nonlinear. We develop a principled
permutation-based inferential approach for determining when the effect of a
selected predictor is likely to be real. Going further, we adapt the BART
procedure to incorporate informed prior information about variable importance.
We present simulations demonstrating that our method compares favorably to
existing parametric and nonparametric procedures in a variety of data settings.
To demonstrate the potential of our approach in a biological context, we apply
it to the task of inferring the gene regulatory network in yeast (Saccharomyces
cerevisiae). We find that our BART-based procedure is best able to recover the
subset of covariates with the largest signal compared to other variable
selection methods. The methods developed in this work are readily available in
the R package bartMachine.
Comment: Published at http://dx.doi.org/10.1214/14-AOAS755 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
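The permutation-based inference step can be sketched as follows, with gradient boosting standing in for BART (which has no scikit-learn implementation) and a null distribution built from the maximum variable importance under permuted responses. The simulated data, replicate count, and cutoff are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor  # stand-in for BART

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=n)  # only x0, x3 matter

def importances(X, y):
    model = GradientBoostingRegressor(n_estimators=100, random_state=0).fit(X, y)
    return model.feature_importances_

obs = importances(X, y)

# null distribution: permuting y destroys every X-y association,
# so the max importance under permutation calibrates a selection threshold
null_max = [importances(X, rng.permutation(y)).max() for _ in range(20)]
threshold = np.quantile(null_max, 0.95)

selected = np.flatnonzero(obs > threshold)   # predictors whose effect is likely real
```

Only the two signal predictors should clear the permutation threshold; the noise predictors' importances fall inside the null range.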
ABC random forests for Bayesian parameter inference
This preprint has been reviewed and recommended by Peer Community In
Evolutionary Biology (http://dx.doi.org/10.24072/pci.evolbiol.100036).
Approximate Bayesian computation (ABC) has grown into a standard methodology
that manages Bayesian inference for models associated with intractable
likelihood functions. Most ABC implementations require the preliminary
selection of a vector of informative statistics summarizing raw data.
Furthermore, in almost all existing implementations, the tolerance level that
separates acceptance from rejection of simulated parameter values needs to be
calibrated. We propose to conduct likelihood-free Bayesian inferences about
parameters with no prior selection of the relevant components of the summary
statistics and bypassing the derivation of the associated tolerance level. The
approach relies on the random forest methodology of Breiman (2001) applied in a
(nonparametric) regression setting. We advocate the derivation of a new random
forest for each component of the parameter vector of interest. Compared
with earlier ABC solutions, this method offers significant gains in
robustness to the choice of the summary statistics, does not depend on any
tolerance level, and provides a good trade-off between point-estimator
precision and credible-interval estimation for a given computing
time. We illustrate the performance of our methodological proposal and compare
it with earlier ABC methods on a Normal toy example and a population genetics
example dealing with human population evolution. All methods designed here have
been incorporated in the R package abcrf (version 1.7) available on CRAN.
Comment: Main text: 24 pages, 6 figures; Supplementary Information: 14 pages, 5 figures
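The ABC-RF recipe (draw parameters from the prior, simulate data, record summary statistics, then grow one random forest per parameter component with no tolerance level) can be sketched on a Normal toy example. The prior, summary choices, and sample sizes below are illustrative assumptions, not the settings of the abcrf package:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def summaries(x):
    # a deliberately redundant summary vector; ABC-RF is robust to uninformative entries
    return np.array([x.mean(), np.median(x), x.std(), x.min(), x.max()])

# reference table: theta drawn from the prior, data simulated, summaries recorded
n_sim = 2000
theta = rng.uniform(-5, 5, n_sim)                    # flat prior on the Normal mean
S = np.array([summaries(rng.normal(t, 1, 50)) for t in theta])

# one random forest per parameter component (here, a single component)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(S, theta)

# prediction at the observed summaries replaces the accept/reject tolerance step
observed = rng.normal(2.0, 1, 50)                    # pseudo-observed data, true mean 2
estimate = rf.predict(summaries(observed).reshape(1, -1))[0]
```

Because the forest learns which summaries matter, no preliminary selection of summary components or tolerance calibration is needed; the prediction lands close to the true mean of 2.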
Getting started in probabilistic graphical models
Probabilistic graphical models (PGMs) have become a popular tool for
computational analysis of biological data in a variety of domains. But, what
exactly are they and how do they work? How can we use PGMs to discover patterns
that are biologically relevant? And to what extent can PGMs help us formulate
new hypotheses that are testable at the bench? This note sketches out some
answers and illustrates the main ideas behind the statistical approach to
biological pattern discovery.
Comment: 12 pages, 1 figure