Augmented Sparse Reconstruction of Protein Signaling Networks
The problem of reconstructing and identifying intracellular protein signaling
and biochemical networks is of critical importance in biology today. We sought
to develop a mathematical approach to this problem using, as a test case, one
of the most well-studied and clinically important signaling networks in biology
today, the epidermal growth factor receptor (EGFR) driven signaling cascade.
More specifically, we suggest a method, augmented sparse reconstruction, for
the identification of links among nodes of ordinary differential equation (ODE)
networks from a small set of trajectories with different initial conditions.
Our method builds a system of representation by using a collection of integrals
of all given trajectories and by attenuating blocks of terms in the
representation itself. The system of representation is then augmented with
random vectors, and minimization of the 1-norm is used to find sparse
representations for the dynamical interactions of each node. Augmentation by
random vectors is crucial, since sparsity alone cannot handle the large
errors-in-variables in the representation. Augmented sparse reconstruction
makes it possible to consider potentially very large spaces of models, and it
detects with high accuracy the few relevant links among nodes, even when
moderate noise is added to the measured trajectories. After showing the
performance of our method on a model of the EGFR protein network, we briefly
sketch potential future therapeutic applications of this approach.
Comment: 24 pages, 6 figures
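The core mechanism described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: it swaps in a generic ISTA (proximal-gradient) solver for the 1-norm minimization, the dictionary of trajectory integrals is stood in for by random features, and all sizes, seeds, and the threshold 0.1 are illustrative assumptions. The point is only to show how augmenting the dictionary with random columns still permits recovery of a sparse set of links.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_terms, n_random = 60, 20, 30

# Dictionary of candidate interaction terms; in the paper these would be
# integrals of the measured trajectories, here random features stand in.
Phi = rng.standard_normal((n_samples, n_terms))
# Augmentation: random columns intended to absorb errors-in-variables.
R = rng.standard_normal((n_samples, n_random))
A = np.hstack([Phi, R])

# Ground truth (illustrative): the dynamics of this node use only 3 terms.
x_true = np.zeros(n_terms)
x_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = Phi @ x_true + 0.05 * rng.standard_normal(n_samples)  # moderate noise

# ISTA (proximal gradient) for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 2.0
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
x = np.zeros(A.shape[1])
for _ in range(5000):
    z = x - A.T @ (A @ x - y) / L                           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

support = sorted(int(i) for i in np.flatnonzero(np.abs(x[:n_terms]) > 0.1))
print("recovered links:", support)  # should contain the true links 2, 7, 11
```

Only the first `n_terms` coefficients are inspected for the support: the random augmentation columns are there to soak up representation error, not to be interpreted as links.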
Reverse Engineering Time Discrete Finite Dynamical Systems: A Feasible Undertaking?
With the advent of high-throughput profiling methods, interest in reverse engineering the structure and dynamics of biochemical networks is high. Recently an algorithm for reverse engineering of biochemical networks was developed by Laubenbacher and Stigler. It is a top-down approach using time discrete dynamical systems. One of its key steps includes the choice of a term order, a technicality imposed by the use of Gröbner basis calculations. The aim of this paper is to identify minimal requirements on data sets to be used with this algorithm and to characterize optimal data sets. We found minimal requirements on a data set based on the number of terms displayed by the functions to be reverse engineered. Furthermore, we identified optimal data sets, which we characterized using a geometric property called "general position". Moreover, we developed a constructive method to generate optimal data sets, provided a codimensional condition is fulfilled. In addition, we present a generalization of their algorithm that does not depend on the choice of a term order. For this method we derived a formula for the probability of finding the correct model, provided the data set used is optimal. We analyzed the asymptotic behavior of the probability formula for a growing number of variables n (i.e. interacting chemicals). Unfortunately, this probability converges rapidly to zero as n grows. Therefore, even if an optimal data set is used and the restrictions in using term orders are overcome, the reverse engineering problem remains infeasible, unless prodigious amounts of data are available. Such large data sets are experimentally impossible to generate with today's technologies.
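The interpolation step underlying this kind of reverse engineering can be sketched on a toy example. The following is not the Laubenbacher-Stigler algorithm itself (no term orders or Gröbner bases appear); it only illustrates the basic idea that, over the finite field F_2, each coordinate update function is a polynomial whose coefficients can be solved for linearly from state-transition data. The two-variable system, the complete data set, and all names are hypothetical.

```python
import itertools

# Toy finite dynamical system over F_2 (hypothetical, for illustration):
# f1 = x1 XOR x2, f2 = x1 AND x2.
def step(state):
    x1, x2 = state
    return (x1 ^ x2, x1 & x2)

# A complete data set: every state together with its successor.
states = list(itertools.product([0, 1], repeat=2))
data = [(s, step(s)) for s in states]

# Monomial basis over F_2 for two variables: 1, x1, x2, x1*x2.
def monomials(state):
    x1, x2 = state
    return [1, x1, x2, x1 * x2]

def solve_gf2(rows, rhs):
    """Gauss-Jordan elimination over GF(2); returns one solution vector."""
    M = [row[:] + [r] for row, r in zip(rows, rhs)]
    n_cols = len(M[0]) - 1
    pivots, r = [], 0
    for c in range(n_cols):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    x = [0] * n_cols
    for row, c in zip(M, pivots):
        x[c] = row[-1]
    return x

M = [monomials(s) for s, _ in data]
recovered = []
for j in range(2):                 # reconstruct each coordinate function
    b = [nxt[j] for _, nxt in data]
    recovered.append(solve_gf2(M, b))
print(recovered)  # [[0, 1, 1, 0], [0, 0, 0, 1]]: f1 = x1 + x2, f2 = x1*x2
```

With complete data the 4x4 monomial matrix is invertible over GF(2), so the solution is unique; the paper's concern is precisely what happens when far fewer transitions are available and many polynomials fit the data.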
Active Sampling-based Binary Verification of Dynamical Systems
Nonlinear, adaptive, or otherwise complex control techniques are increasingly
relied upon to ensure the safety of systems operating in uncertain
environments. However, the nonlinearity of the resulting closed-loop system
complicates verification that the system does in fact satisfy those
requirements at all possible operating conditions. While analytical proof-based
techniques and finite abstractions can be used to provably verify the
closed-loop system's response at different operating conditions, they often
produce conservative approximations due to restrictive assumptions and are
difficult to construct in many applications. In contrast, popular statistical
verification techniques relax the restrictions and instead rely upon
simulations to construct statistical or probabilistic guarantees. This work
presents a data-driven statistical verification procedure that instead
constructs statistical learning models from simulated training data to separate
the set of possible perturbations into "safe" and "unsafe" subsets. Binary
evaluations of closed-loop system requirement satisfaction at various
realizations of the uncertainties are obtained through temporal logic
robustness metrics, which are then used to construct predictive models of
requirement satisfaction over the full set of possible uncertainties. As the
accuracy of these predictive statistical models is inherently coupled to the
quality of the training data, an active learning algorithm selects additional
sample points in order to maximize the expected change in the data-driven model
and thus, indirectly, minimize the prediction error. Various case studies
demonstrate the closed-loop verification procedure and highlight improvements
in prediction error over both existing analytical and statistical verification
techniques.
Comment: 23 pages
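The sampling loop described in the abstract can be sketched in a few lines of numpy. This is a hedged stand-in, not the paper's procedure: the statistical learning model is a hand-rolled logistic regression rather than whatever model the authors use, the "simulation" is a hypothetical oracle that labels a perturbation safe when it lies inside a disk, and the uncertainty-sampling rule (query the unlabeled point whose predicted probability is closest to 0.5) is one common active-learning heuristic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical oracle: the requirement holds (label 1) iff the perturbation
# lies inside a disk of radius 0.6.  In the paper, each label would come from
# a closed-loop simulation scored by a temporal-logic robustness metric.
def oracle(p):
    return 1.0 if np.dot(p, p) < 0.6 ** 2 else 0.0

def features(P):
    # Quadratic feature so a linear classifier can express the disk boundary.
    return np.column_stack([np.ones(len(P)), P, (P ** 2).sum(axis=1)])

def fit_logistic(X, y, iters=2000, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Candidate pool of perturbations and a small random initial training set.
pool = rng.uniform(-1, 1, size=(400, 2))
idx = list(rng.choice(len(pool), size=20, replace=False))
labels = {i: oracle(pool[i]) for i in idx}

for _ in range(40):  # active-learning loop
    w = fit_logistic(features(pool[idx]), np.array([labels[i] for i in idx]))
    probs = 1.0 / (1.0 + np.exp(-features(pool) @ w))
    # Uncertainty sampling: query the unlabeled point nearest the boundary.
    nxt = next(i for i in np.argsort(np.abs(probs - 0.5)) if i not in labels)
    labels[nxt] = oracle(pool[nxt])
    idx.append(nxt)

w = fit_logistic(features(pool[idx]), np.array([labels[i] for i in idx]))
truth = np.array([oracle(p) for p in pool])
acc = ((features(pool) @ w > 0).astype(float) == truth).mean()
print(f"pool accuracy after active sampling: {acc:.2f}")
```

The design point the abstract makes carries over even in this toy: because the queried samples concentrate near the predicted safe/unsafe boundary, the labeling budget is spent where it most changes the model, rather than uniformly over the perturbation set.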