Efficient parametric analysis of the chemical master equation through model order reduction
Background: Stochastic biochemical reaction networks are commonly modelled by
the chemical master equation, and can be simulated as first-order linear
differential equations through a finite state projection. Due to the very high
state space dimension of these equations, numerical simulations are
computationally expensive. This is a particular problem for analysis tasks
requiring repeated simulations for different parameter values. Such tasks are
computationally expensive to the point of infeasibility with the chemical
master equation. Results: In this article, we apply parametric model order
reduction techniques in order to construct accurate low-dimensional parametric
models of the chemical master equation. These surrogate models can be used in
various parametric analysis tasks such as identifiability analysis, parameter
estimation, or sensitivity analysis. As biological examples, we consider two
models for gene regulation networks, a bistable switch and a network displaying
stochastic oscillations. Conclusions: The results show that the parametric
model reduction yields efficient models of stochastic biochemical reaction
networks, and that these models can be useful for systems biology applications
involving parametric analysis problems such as parameter exploration,
optimization, estimation, or sensitivity analysis.
Comment: 23 pages, 8 figures, 2 tables
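The pipeline this abstract describes, a finite state projection giving dp/dt = A(θ)p followed by projection onto a low-dimensional subspace, can be sketched on a toy model. Everything below is an illustrative assumption: a birth-death gene expression system (not the paper's bistable switch) and snapshot-POD with Galerkin projection standing in for the parametric reduction techniques the paper develops.

```python
import numpy as np
from scipy.linalg import expm

def cme_generator(k, gamma, N):
    """FSP-truncated CME generator for a birth-death process on {0,...,N-1}:
    production at rate k, degradation at rate gamma*n, so dp/dt = A(k,gamma) p."""
    A = np.zeros((N, N))
    for n in range(N):
        if n + 1 < N:
            A[n + 1, n] += k          # birth n -> n+1
            A[n, n] -= k
        if n > 0:
            A[n - 1, n] += gamma * n  # death n -> n-1
            A[n, n] -= gamma * n
    return A

N = 100
p0 = np.zeros(N); p0[0] = 1.0

# Offline stage: snapshots of the full FSP solution on a training grid.
snaps = np.column_stack([expm(cme_generator(k, 1.0, N) * t) @ p0
                         for k in (5.0, 10.0, 15.0, 20.0)
                         for t in (1.0, 2.0, 5.0)])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
V = U[:, s > 1e-10]               # POD basis

# Online stage: Galerkin-projected generator for an unseen parameter value.
A = cme_generator(12.0, 1.0, N)
Ar = V.T @ A @ V                  # r x r reduced generator
p_full = expm(A * 5.0) @ p0
p_red = V @ (expm(Ar * 5.0) @ (V.T @ p0))
print(np.linalg.norm(p_full - p_red, 1))  # L1 error of the surrogate
```

The offline SVD cost is paid once; each new parameter value then requires only an r-dimensional matrix exponential instead of an N-dimensional one, which is what makes repeated-simulation tasks such as parameter sweeps affordable.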
Stochastic Representations of Ion Channel Kinetics and Exact Stochastic Simulation of Neuronal Dynamics
In this paper we provide two representations for stochastic ion channel
kinetics, and compare the performance of exact simulation with a commonly used
numerical approximation strategy. The first representation we present is a
random time change representation, popularized by Thomas Kurtz, with the second
being analogous to a "Gillespie" representation. Exact stochastic algorithms
are provided for the different representations, which are preferable to either
(a) fixed time step or (b) piecewise constant propensity algorithms, which
still appear in the literature. As examples, we provide versions of the exact
algorithms for the Morris-Lecar conductance based model, and detail the error
induced, both in a weak and a strong sense, by the use of approximate
algorithms on this model. We include ready-to-use implementations of the random
time change algorithm in both XPP and Matlab. Finally, through the
consideration of parametric sensitivity analysis, we show how the
representations presented here are useful in the development of further
computational methods. The general representations and simulation strategies
provided here are known in other parts of the sciences, but less so in the
present setting.
Comment: 39 pages, 6 figures, appendix with XPP and Matlab code
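At fixed voltage the channel propensities are constant between events, so a Gillespie-type simulation is exact rather than approximate. A minimal sketch for a hypothetical two-state channel population (a stand-in, not the Morris-Lecar model treated in the paper; all rates are invented):

```python
import random

def gillespie_two_state(n_channels, alpha, beta, t_end, seed=1):
    """Exact (Gillespie-type) simulation of n_channels independent two-state
    ion channels: closed -> open at rate alpha per closed channel,
    open -> closed at rate beta per open channel (voltage held fixed).
    Returns the number of open channels at time t_end."""
    rng = random.Random(seed)
    t, n_open = 0.0, 0
    while True:
        a1 = alpha * (n_channels - n_open)  # propensity: opening
        a2 = beta * n_open                  # propensity: closing
        a0 = a1 + a2
        t += rng.expovariate(a0)            # exponential waiting time
        if t > t_end:
            return n_open
        if rng.random() * a0 < a1:          # choose which reaction fired
            n_open += 1
        else:
            n_open -= 1

# Long-run open fraction should approach alpha / (alpha + beta) = 0.25.
samples = [gillespie_two_state(100, 1.0, 3.0, 20.0, seed=s) for s in range(150)]
print(sum(samples) / (150 * 100))
```

A fixed-time-step or piecewise-constant-propensity scheme would introduce discretization bias here; the event-driven simulation above samples the exact law of the jump process.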
An overview of the proper generalized decomposition with applications in computational rheology
We review the foundations and applications of the proper generalized decomposition (PGD), a powerful model reduction technique that computes a priori by means of successive enrichment a separated representation of the unknown field. The computational complexity of the PGD scales linearly with the dimension of the space wherein the model is defined, which is in marked contrast with the exponential scaling of standard grid-based methods. First introduced in the context of computational rheology by Ammar et al. [3] and [4], the PGD has since been further developed and applied in a variety of applications ranging from the solution of the Schrödinger equation of quantum mechanics to the analysis of laminate composites. In this paper, we illustrate the use of the PGD in four problem categories related to computational rheology: (i) the direct solution of the Fokker-Planck equation for complex fluids in configuration spaces of high dimension, (ii) the development of very efficient non-incremental algorithms for transient problems, (iii) the fully three-dimensional solution of problems defined in degenerate plate or shell-like domains often encountered in polymer processing or composites manufacturing, and finally (iv) the solution of multidimensional parametric models obtained by introducing various sources of problem variability as additional coordinates.
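The successive-enrichment idea behind the PGD, building u(x, y) ≈ Σ_i X_i(x) Y_i(y) one rank-one term at a time with each term found by an alternating fixed point, can be illustrated on a plain approximation problem. This is only the separated-representation skeleton under assumed toy data, not a PGD PDE solver; the target function, grid, and mode counts are arbitrary choices:

```python
import numpy as np

# Build a separated representation of a non-separable 2-D field by greedy
# rank-one enrichment with an alternating fixed point.
x = np.linspace(0.0, 1.0, 60)
y = np.linspace(0.0, 1.0, 60)
U = np.exp(-((x[:, None] - y[None, :]) ** 2) / 0.1)  # non-separable target

modes = []
R = U.copy()                       # current residual
for _ in range(5):                 # enrichment loop: one new mode per pass
    X = np.ones(len(x))            # initial guess for the new x-mode
    for _ in range(20):            # alternating fixed-point iterations
        Y = R.T @ X / (X @ X)      # best y-mode given X (least squares)
        X = R @ Y / (Y @ Y)        # best x-mode given Y (least squares)
    modes.append((X, Y))
    R = R - np.outer(X, Y)         # remove the captured mode

approx = sum(np.outer(X, Y) for X, Y in modes)
print(np.linalg.norm(U - approx) / np.linalg.norm(U))  # relative error
```

The storage is 5 × (60 + 60) numbers instead of 60 × 60; in d dimensions the same construction stores sums of d one-dimensional modes, which is the source of the linear-in-dimension scaling noted above.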
Targeting Conservation Investments in Heterogeneous Landscapes: A distance function approach and application to watershed management
To achieve a given level of an environmental amenity at least cost, decision-makers must integrate information about spatially variable biophysical and economic conditions. Although the biophysical attributes that contribute to supplying an environmental amenity are often known, the way in which these attributes interact to produce the amenity is often unknown. Given the difficulty in converting multiple attributes into a unidimensional physical measure of an environmental amenity (e.g., habitat quality), analyses in the academic literature tend to use a single biophysical attribute as a proxy for the environmental amenity (e.g., species richness). A narrow focus on a single attribute, however, fails to consider the full range of biophysical attributes that are critical to the supply of an environmental amenity. Drawing on the production efficiency literature, we introduce an alternative conservation targeting approach that relies on distance functions to cost-efficiently allocate conservation funds across a spatially heterogeneous landscape. An approach based on distance functions has the advantage of not requiring a parametric specification of the amenity function (or cost function), but rather only requiring that the decision-maker identify important biophysical and economic attributes. We apply the distance-function approach empirically to an increasingly common, but little studied, conservation initiative: conservation contracting for water quality objectives. The contract portfolios derived from the distance-function application have many desirable properties, including intuitive appeal, robust performance across plausible parametric amenity measures, and the generation of ranking measures that can be easily used by field practitioners in complex decision-making environments that cannot be completely modeled. Working Paper # 2002-01
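The distance-function idea can be sketched with a standard output-oriented DEA-style linear program: each parcel's biophysical attributes are treated as outputs and its cost as an input, and the radial distance to the best-practice frontier scores the parcel without any parametric amenity function. The parcel data below are invented for illustration, and this constant-returns formulation is only one plausible reading of the approach, not the paper's empirical specification:

```python
import numpy as np
from scipy.optimize import linprog

def output_distance(Y, X, o):
    """Output-oriented (Shephard-type) distance score for unit o under
    constant returns to scale: solve
        max phi  s.t.  sum_j lam_j Y[:, j] >= phi * Y[:, o],
                       sum_j lam_j X[:, j] <= X[:, o],  lam >= 0,
    and return 1/phi in (0, 1]; a score of 1 means o is on the frontier."""
    m, n = Y.shape                      # m attributes, n parcels
    p = X.shape[0]                      # number of cost (input) dimensions
    c = np.zeros(1 + n); c[0] = -1.0    # variables [phi, lam]; maximize phi
    A1 = np.hstack([Y[:, [o]], -Y])     # phi*Y_o - sum lam_j Y_j <= 0
    b1 = np.zeros(m)
    A2 = np.hstack([np.zeros((p, 1)), X])  # sum lam_j X_j <= X_o
    b2 = X[:, o]
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
                  bounds=[(0, None)] * (1 + n))
    return 1.0 / res.x[0]

# Hypothetical parcels: rows of Y are two biophysical attributes, X is cost.
Y = np.array([[4.0, 2.0, 1.0, 3.0],
              [1.0, 3.0, 1.0, 3.0]])
X = np.array([[2.0, 2.0, 2.0, 2.0]])
scores = [output_distance(Y, X, o) for o in range(Y.shape[1])]
print(scores)  # parcel 2 is dominated; parcels 0, 1, 3 sit on the frontier
```

Ranking parcels by such scores needs only the list of attributes the decision-maker deems relevant, which is exactly the advantage over a parametric amenity function claimed in the abstract.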
Stochastic focusing coupled with negative feedback enables robust regulation in biochemical reaction networks
Nature presents multiple intriguing examples of processes which proceed at
high precision and regularity. This remarkable stability is frequently counter
to modelers' experience with the inherent stochasticity of chemical reactions
in the regime of low copy numbers. Moreover, the effects of noise and
nonlinearities can lead to "counter-intuitive" behavior, as demonstrated for a
basic enzymatic reaction scheme that can display stochastic focusing (SF).
Under the assumption of rapid signal fluctuations, SF has been shown to convert
a graded response into a threshold mechanism, thus attenuating the detrimental
effects of signal noise. However, when the rapid fluctuation assumption is
violated, this gain in sensitivity is generally obtained at the cost of very
large product variance, and this unpredictable behavior may be one possible
explanation of why, more than a decade after its introduction, SF has still not
been observed in real biochemical systems.
In this work we explore the noise properties of a simple enzymatic reaction
mechanism with a small and fluctuating number of active enzymes that behaves as
a high-gain, noisy amplifier due to SF caused by slow enzyme fluctuations. We
then show that the inclusion of a plausible negative feedback mechanism turns
the system from a noisy signal detector to a strong homeostatic mechanism by
exchanging high gain with strong attenuation in output noise and robustness to
parameter variations. Moreover, we observe that the discrepancy between
deterministic and stochastic descriptions of stochastically focused systems in
the evolution of the means almost completely disappears, despite very low
molecule counts and the additional nonlinearity due to feedback.
The reaction mechanism considered here can provide a possible resolution to
the apparent conflict between intrinsic noise and high precision in critical
intracellular processes.
A literature survey of low-rank tensor approximation techniques
During the last years, low-rank tensor approximation has been established as
a new tool in scientific computing to address large-scale linear and
multilinear algebra problems, which would be intractable by classical
techniques. This survey attempts to give a literature overview of current
developments in this area, with an emphasis on function-related tensors.
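As a concrete instance of a function-related tensor, samples of f(x, y, z) = 1/(x + y + z) have rapidly decaying multilinear ranks and compress well under a truncated higher-order SVD, one of the basic Tucker-format techniques such surveys cover. The grid and target ranks below are arbitrary illustrative choices:

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Truncated higher-order SVD (Tucker approximation) of a 3-way tensor:
    per-mode truncated bases from the unfoldings, then core projection and
    reconstruction."""
    factors = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)  # mode unfolding
        U, _, _ = np.linalg.svd(M, full_matrices=False)
        factors.append(U[:, :r])                                # leading r basis vectors
    core = T
    for mode, U in enumerate(factors):                          # project onto bases
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(factors):                          # expand back
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

g = np.linspace(1.0, 2.0, 30)
T = 1.0 / (g[:, None, None] + g[None, :, None] + g[None, None, :])
A = hosvd_truncate(T, (4, 4, 4))
print(np.linalg.norm(T - A) / np.linalg.norm(T))  # small relative error
```

The compressed form stores a 4x4x4 core plus three 30x4 factors instead of 30^3 entries; the same mechanism, applied mode by mode, is what makes much larger function-related tensors tractable.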
Data-driven modelling of biological multi-scale processes
Biological processes involve a variety of spatial and temporal scales. A
holistic understanding of many biological processes therefore requires
multi-scale models which capture the relevant properties on all these scales.
In this manuscript we review mathematical modelling approaches used to describe
the individual spatial scales and how they are integrated into holistic models.
We discuss the relation between spatial and temporal scales and the implication
of that on multi-scale modelling. Based upon this overview over
state-of-the-art modelling approaches, we formulate key challenges in
mathematical and computational modelling of biological multi-scale and
multi-physics processes. In particular, we consider the availability of
analysis tools for multi-scale models and model-based multi-scale data
integration. We provide a compact review of methods for model-based data
integration and model-based hypothesis testing. Furthermore, novel approaches
and recent trends are discussed, including computation time reduction using
reduced order and surrogate models, which contribute to the solution of
inference problems. We conclude the manuscript by providing a few ideas for the
development of tailored multi-scale inference methods.
Comment: This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers).