Parameter inference in mechanistic models of cellular regulation and signalling pathways using gradient matching
A challenging problem in systems biology is parameter inference in mechanistic models of signalling pathways. In the present article, we investigate an approach based on gradient matching and nonparametric Bayesian modelling with Gaussian processes. We evaluate the method on two biological systems, related to the regulation of PIF4/5 in Arabidopsis thaliana and to the JAK/STAT signal transduction pathway.
Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms
Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set to label a set of unlabeled points, which are given to the learner prior to learning. Although transduction seems at the outset to be an easier task than induction, there have not been many provably useful algorithms for transduction. Moreover, the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were presented by Vapnik more than twenty years ago. One of Vapnik's basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail. While tight, this bound is given only implicitly, via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of Vapnik's transductive bound. This characterization is obtained using concentration inequalities for the tails of sums of random variables obtained by sampling without replacement. We then derive error bounds for compression schemes such as (transductive) support vector machines and for transduction algorithms based on clustering. The main observation used for deriving these new error bounds and algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used to construct useful data-dependent prior distributions over the hypothesis space.
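The hypergeometric tail that underlies Vapnik's transductive bound can be sketched concretely: if a hypothesis makes r errors on the full set of N = m + u points, and the m training points are drawn uniformly without replacement, then the number of errors landing in the training set is hypergeometrically distributed. A minimal sketch of that exact tail computation (the implicit quantity the abstract says is replaced by an explicit bound; the specific numbers below are illustrative, not from the paper):

```python
from math import comb

def hypergeom_pmf(j, N, r, m):
    """P(exactly j of the r 'error' points fall in a uniformly drawn
    training set of size m, out of N = m + u points in total)."""
    if j < 0 or j > min(r, m) or m - j > N - r:
        return 0.0
    return comb(r, j) * comb(N - r, m - j) / comb(N, m)

def hypergeom_tail(k, N, r, m):
    """P(at most k errors appear in the training set): the exact
    hypergeometric tail behind Vapnik's transductive bound."""
    return sum(hypergeom_pmf(j, N, r, m) for j in range(k + 1))

# Illustrative setting: 30 labeled + 70 unlabeled points, 10 total errors.
N, m, r = 100, 30, 10
p = hypergeom_tail(2, N, r, m)
print(p)
```

The exact tail is cheap to evaluate but gives no closed-form dependence on m, u, and r, which is why an explicit (if looser) characterization is useful.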
Bayesian causal inference of cell signal transduction from proteomics experiments
Cell signal transduction describes how a cell senses and processes signals from the environment using networks of interacting proteins. In computational systems biology, investigators apply machine learning methods for causal inference to develop causal Bayesian network models of signal transduction from experimental data. Directed edges in the network represent causal regulatory relationships, and the model can be used to predict the effects of interventions on signal transduction. Causal inference approaches applied to proteomics experiments use statistical associations between observed signaling protein concentrations to infer a causal Bayesian network model, but there is no experimental and analysis framework for applying these methods in this experimental context.
The goal of this dissertation is to provide a Bayesian experimental design and modeling framework for causal inference of signal transduction. We evaluate how different high-throughput experimental settings affect the performance of algorithms that detect conditional dependence relationships between proteins. We present a Bayesian active learning approach for designing intervention experiments that reveal the direction of causal influence between proteins. Finally, we present a Bayesian model for inferring the parameters of the conditional probability density functions in a causal Bayesian network. The parameters are directly interpretable as a function of the rate constants in the biochemical reactions between interacting proteins.
The work pays special attention to the analysis of single-cell snapshot data such as mass cytometry, where each cell is a multivariate cell-level replicate of signal transduction at a single time point. We also address the role of large-scale bulk experiments, such as mass-spectrometry-based proteomics, and of small-scale time-course experiments in causal inference.
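The conditional-dependence detection described above can be illustrated with a toy partial-correlation test, a common building block of causal structure learning (an illustrative stand-in, not the Bayesian procedure developed in the dissertation). For a causal chain X → Y → Z, X and Z are marginally correlated but become approximately independent once Y is conditioned on, which is how structure-learning algorithms distinguish direct from indirect regulation:

```python
import math
import random

random.seed(0)

def pearson(a, b):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Simulate a causal chain X -> Y -> Z with additive Gaussian noise.
n = 2000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [x + 0.5 * random.gauss(0, 1) for x in X]
Z = [y + 0.5 * random.gauss(0, 1) for y in Y]

r_xz = pearson(X, Z)
r_xy = pearson(X, Y)
r_yz = pearson(Y, Z)

# Partial correlation of X and Z given Y: near zero iff X is independent
# of Z given Y, which is exactly what the chain structure predicts.
r_xz_given_y = (r_xz - r_xy * r_yz) / math.sqrt((1 - r_xy**2) * (1 - r_yz**2))
print(round(r_xz, 2), round(r_xz_given_y, 2))
```

An intervention on Y (setting it independently of X) would additionally break the X–Y association, which is the kind of evidence the active-learning design above seeks in order to orient edges.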
Computational inference in systems biology
Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem. The computational costs associated with repeatedly solving the ODEs are often high. Aimed at reducing this cost, new concepts using gradient matching have been proposed. This paper combines current adaptive gradient matching approaches, using Gaussian processes, with a parallel tempering scheme, and conducts a comparative evaluation with current methods used for parameter inference in ODEs.
Approximate parameter inference in systems biology using gradient matching: a comparative evaluation
Background: A challenging problem in current systems biology is that of parameter inference in biological pathways expressed as coupled ordinary differential equations (ODEs). Conventional methods that repeatedly solve the ODEs numerically have large associated computational costs. Aimed at reducing this cost, new concepts using gradient matching have been proposed, which bypass the need for numerical integration. This paper presents a recently established adaptive gradient matching approach, using Gaussian processes, combined with a parallel tempering scheme, and conducts a comparative evaluation with current state-of-the-art methods used for parameter inference in ODEs. Among these contemporary methods is a technique based on reproducing kernel Hilbert spaces (RKHS), which has previously shown promising results for parameter estimation, but only under lax experimental settings. We examine a range of scenarios to test the robustness of this method. We also change the approach of inferring the penalty parameter from AIC to cross-validation to improve the stability of the method.
Methodology: The methodology for the recently proposed adaptive gradient matching method using Gaussian processes, upon which we build our new method, is provided. Details of a competing method using reproducing kernel Hilbert spaces are also described.
Results: We conduct a comparative analysis of the methods described in this paper, using two benchmark ODE systems. The analyses are repeated under different experimental settings to observe the sensitivity of the techniques.
Conclusions: Our study reveals that for known noise variance, our proposed method based on Gaussian processes and parallel tempering achieves the best overall performance. When the noise variance is unknown, the RKHS method proves to be more robust.
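The gradient-matching idea recurring in the abstracts above can be sketched in a minimal form: rather than repeatedly integrating the ODE, estimate the state derivatives directly from the data and fit the parameters by matching those slopes to the ODE right-hand side. The sketch below uses central differences as a crude stand-in for the Gaussian-process interpolant, and a logistic-growth model chosen for illustration, not one of the benchmark systems from the papers:

```python
import math

def x_exact(t, theta=1.0, x0=0.1):
    """Exact solution of the logistic ODE dx/dt = theta * x * (1 - x),
    used here to generate synthetic noise-free observations."""
    return 1.0 / (1.0 + ((1.0 - x0) / x0) * math.exp(-theta * t))

dt = 0.01
ts = [i * dt for i in range(1, 500)]
xs = [x_exact(t) for t in ts]

# Gradient matching: estimate dx/dt from the observations via central
# differences (a stand-in for the GP interpolant), bypassing numerical
# integration of the ODE entirely.
slopes = [(xs[i + 1] - xs[i - 1]) / (2 * dt) for i in range(1, len(xs) - 1)]
rhs = [x * (1 - x) for x in xs[1:-1]]  # ODE right-hand side with theta factored out

# Least-squares match of slopes to theta * rhs gives a closed-form estimate.
theta_hat = sum(s * g for s, g in zip(slopes, rhs)) / sum(g * g for g in rhs)
print(round(theta_hat, 4))
```

With noisy data this naive slope estimate degrades quickly, which is precisely the motivation for the Gaussian-process smoothing, parallel tempering, and RKHS penalization compared in the abstracts above.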