Universally Sloppy Parameter Sensitivities in Systems Biology
Quantitative computational models play an increasingly important role in
modern biology. Such models typically involve many free parameters, and
assigning their values is often a substantial obstacle to model development.
Directly measuring \emph{in vivo} biochemical parameters is difficult, and
collectively fitting them to other data often yields large parameter
uncertainties. Nevertheless, in earlier work we showed in a
growth-factor-signaling model that collective fitting could yield
well-constrained predictions, even when it left individual parameters very
poorly constrained. We also showed that the model had a 'sloppy' spectrum of
parameter sensitivities, with eigenvalues roughly evenly distributed over many
decades. Here we use a collection of models from the literature to test whether
such sloppy spectra are common in systems biology. Strikingly, we find that
every model we examine has a sloppy spectrum of sensitivities. We also test
several consequences of this sloppiness for building predictive models. In
particular, sloppiness suggests that collective fits to even large amounts of
ideal time-series data will often leave many parameters poorly constrained.
Tests over our model collection are consistent with this suggestion. This
difficulty with collective fits may seem to argue for direct parameter
measurements, but sloppiness also implies that such measurements must be
formidably precise and complete to usefully constrain many model predictions.
We confirm this implication in our signaling model. Our results suggest that
sloppy sensitivity spectra are universal in systems biology models. The
prevalence of sloppiness highlights the power of collective fits and suggests
that modelers should focus on predictions rather than on parameters.
Comment: Submitted to PLoS Computational Biology. Supplementary Information available in "Other Formats" bundle. Discussion slightly revised to add historical context.
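The sloppy spectrum described above can be illustrated with a toy sketch (not the authors' growth-factor model): a sum-of-exponentials fit, whose sensitivity eigenvalues, the eigenvalues of J^T J for the Jacobian J of residuals with respect to log-parameters, spread over many decades. The model, decay rates, and time grid are all assumptions chosen for illustration.

```python
import numpy as np

def residuals(log_p, t, y_obs):
    """Toy model: y(t) = sum_i exp(-k_i t); parameters are log decay rates."""
    k = np.exp(log_p)
    return np.exp(-np.outer(t, k)).sum(axis=1) - y_obs

def sensitivity_eigenvalues(log_p, t, y_obs, h=1e-6):
    """Eigenvalues of J^T J, with J the finite-difference Jacobian of the
    residuals with respect to the log-parameters."""
    J = np.empty((len(t), len(log_p)))
    for i in range(len(log_p)):
        dp = np.zeros(len(log_p))
        dp[i] = h
        J[:, i] = (residuals(log_p + dp, t, y_obs)
                   - residuals(log_p - dp, t, y_obs)) / (2 * h)
    return np.sort(np.linalg.eigvalsh(J.T @ J))[::-1]

t = np.linspace(0, 5, 50)
log_p = np.log([1.0, 1.3, 1.7, 2.2])   # nearby decay rates create sloppy directions
y_obs = np.exp(-np.outer(t, np.exp(log_p))).sum(axis=1)
ev = sensitivity_eigenvalues(log_p, t, y_obs)
print("eigenvalues span %.1f decades" % np.log10(ev[0] / ev[-1]))
```

Even this four-parameter toy shows the characteristic pattern: a few stiff directions and progressively softer ones, so collective fits constrain combinations of parameters far better than individual ones.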
Computational and Theoretical Issues of Multiparameter Persistent Homology for Data Analysis
The basic goal of topological data analysis is to apply topology-based descriptors
to understand and describe the shape of data. In this context, homology is one of
the most relevant topological descriptors, well-appreciated for its discrete nature,
computability and dimension independence. A further development is provided
by persistent homology, which allows one to track homological features along a
one-parameter increasing sequence of spaces. Multiparameter persistent homology, also
called multipersistent homology, is an extension of the theory of persistent homology
motivated by the need to analyze data naturally described by several parameters,
such as vector-valued functions. Multipersistent homology presents several issues in
terms of feasibility of computations over real-sized data and theoretical challenges
in the evaluation of possible descriptors. The focus of this thesis is on the interplay
between persistent homology theory and discrete Morse theory. Discrete Morse
theory provides methods for reducing the computational cost of homology and persistent
homology by considering the discrete Morse complex generated by the discrete
Morse gradient in place of the original complex. The work of this thesis addresses
the problem of computing multipersistent homology, to make such a tool usable in real
application domains. This requires both computational optimizations towards the
applications to real-world data, and theoretical insights for finding and interpreting
suitable descriptors. Our computational contribution is a new
Morse-inspired and fully discrete preprocessing algorithm. We show the feasibility
of our preprocessing over real datasets, and evaluate the impact of the proposed
algorithm as a preprocessing for computing multipersistent homology. A theoretical
contribution of this thesis is a new notion of optimality for such
a preprocessing in the multiparameter context. We show that the proposed notion
generalizes an already known optimality notion from the one-parameter case. Under
this definition, we show that the algorithm we propose as a preprocessing is optimal
in low dimensional domains. In the last part of the thesis, we consider preliminary
applications of the proposed algorithm in the context of topology-based multivariate
visualization by tracking critical features generated by a discrete gradient field compatible
with the multiple scalar fields under study. We discuss similarities and differences
between such critical features and those obtained by state-of-the-art techniques in
topology-based multivariate data visualization.
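The one-parameter reduction that such Morse-based preprocessing accelerates can be sketched in a few lines. The code below is the standard column reduction of the boundary matrix over Z/2 (not the thesis's preprocessing algorithm), run on an assumed toy example: a filtration of a filled triangle.

```python
def low(col):
    """Row index of the lowest 1 in a mod-2 column, or None for a zero column."""
    return max(col) if col else None

def reduce_boundary(columns):
    """Standard persistence reduction over Z/2: each column is the set of row
    indices holding a 1. Mutates `columns` in place, as is common in sketches."""
    pairs = {}
    for j, col in enumerate(columns):
        while col and low(col) in pairs:
            col ^= columns[pairs[low(col)]]   # add an earlier reduced column mod 2
        if col:
            pairs[low(col)] = j               # simplex j kills the feature born at low(col)
    return sorted(pairs.items())

# Filtration of a filled triangle: vertices 0,1,2; edges 3=(0,1), 4=(1,2), 5=(2,0); face 6
columns = [set(), set(), set(),       # vertices have empty boundary
           {0, 1}, {1, 2}, {2, 0},    # edges
           {3, 4, 5}]                 # the triangle is bounded by the three edges
persistence_pairs = reduce_boundary(columns)
print(persistence_pairs)   # birth/death index pairs; vertex 0 survives as essential H0
```

Discrete Morse preprocessing aims to shrink `columns` to a much smaller Morse complex with the same persistent homology before this reduction is ever run, which is where the computational savings come from.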
Computing multiparameter persistent homology through a discrete Morse-based approach
Persistent homology allows for tracking topological features, like loops, holes and their higher-dimensional analogues, along a single-parameter family of nested shapes. Computing descriptors for complex data characterized by multiple parameters is becoming a major challenge in several applications, including physics, chemistry, medicine, and geography. Multiparameter persistent homology generalizes persistent homology to allow for the exploration and analysis of shapes endowed with multiple filtering functions. Still, computational constraints prevent multiparameter persistent homology from being a feasible tool for analyzing large data sets. We consider discrete Morse theory as a strategy to reduce the computation of multiparameter persistent homology by working on a reduced dataset. We propose a new preprocessing algorithm, well suited for parallel and distributed implementations, and we provide the first evaluation of the impact of such a preprocessing on the computation of multiparameter persistent homology.
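The multiple filtering functions mentioned above induce a multiparameter filtration. A minimal sketch of the grading step, for the one-critical case where each simplex enters at the componentwise maximum of its vertices' values (all values below are assumed for illustration):

```python
import numpy as np

def entrance_grades(simplices, vertex_values):
    """One-critical bifiltration: a simplex enters the filtration at the
    componentwise max of its vertices' values (two filtering functions here)."""
    return {s: tuple(float(x) for x in np.max([vertex_values[v] for v in s], axis=0))
            for s in simplices}

# Two scalar fields sampled on three vertices (values assumed for illustration)
vals = {0: (0.0, 1.0), 1: (1.0, 0.0), 2: (0.5, 0.5)}
simplices = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2), (0, 1, 2)]
grades = entrance_grades(simplices, vals)
print(grades[(0, 1)])   # an edge appears only once both of its vertices are present
```

Because the grades are only partially ordered, there is no single barcode; this combinatorial explosion of incomparable grades is precisely what makes multiparameter computations hard.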
Adaptive Detection of Instabilities: An Experimental Feasibility Study
We present an example of the practical implementation of a protocol for
experimental bifurcation detection based on on-line identification and feedback
control ideas. The idea is to couple the experiment with an on-line
computer-assisted identification/feedback protocol so that the closed-loop
system will converge to the open-loop bifurcation points. We demonstrate the
applicability of this instability detection method by real-time,
computer-assisted detection of period doubling bifurcations of an electronic
circuit; the circuit implements an analog realization of the Roessler system.
The method succeeds in locating the bifurcation points even in the presence of
modest experimental uncertainties, noise and limited resolution. The results
presented here include bifurcation detection experiments that rely on
measurements of a single state variable and delay-based phase space
reconstruction, as well as an example of tracing entire segments of a
codimension-1 bifurcation boundary in two-parameter space.
Comment: 29 pages, LaTeX 2.09, 10 figures in encapsulated PostScript format (eps), needs the psfig macro to include them. Submitted to Physica
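The closed-loop idea above, converging to the open-loop bifurcation point, can be mimicked in a toy setting. The sketch below substitutes the logistic map for the electronic Rössler circuit and Newton iteration for the feedback stabilization (Newton finds the fixed point even where it is unstable), then bisects on the fixed point's multiplier to locate the flip (period-doubling) bifurcation, which for the logistic map is known to occur at r = 3.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def fixed_point(r, x0=0.6, tol=1e-12):
    """Newton iteration for logistic(x) = x; converges even past the point
    where the fixed point loses stability (stand-in for feedback control)."""
    x = x0
    for _ in range(100):
        g = logistic(x, r) - x
        dg = r * (1.0 - 2.0 * x) - 1.0
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

def multiplier(r, h=1e-7):
    """Finite-difference map derivative at the fixed point; a flip
    (period-doubling) bifurcation occurs where this crosses -1."""
    x = fixed_point(r)
    return (logistic(x + h, r) - logistic(x - h, r)) / (2.0 * h)

def find_period_doubling(r_lo=2.5, r_hi=3.5, tol=1e-9):
    """Bisect on multiplier(r) + 1 to locate the parameter of the flip."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if multiplier(r_mid) > -1.0:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)

r_star = find_period_doubling()
print(round(r_star, 6))   # the logistic map's first period doubling sits at r = 3
```

The experimental protocol replaces the known map with an on-line identified model and the Newton step with feedback, but the logic is the same: stabilize the open-loop state, monitor its multiplier, and home in on the parameter where it crosses the unit circle.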
Calculating the Expected Value of Sample Information in Practice: Considerations from Three Case Studies
Investing efficiently in future research to improve policy decisions is an
important goal. Expected Value of Sample Information (EVSI) can be used to
select the specific design and sample size of a proposed study by assessing the
benefit of a range of different studies. Estimating EVSI with the standard
nested Monte Carlo algorithm has a notoriously high computational burden,
especially when using a complex decision model or when optimizing over study
sample sizes and designs. Therefore, a number of more efficient EVSI
approximation methods have been developed. However, these approximation methods
have not been compared and therefore their relative advantages and
disadvantages are not clear. A consortium of EVSI researchers, including the
developers of several approximation methods, compared four EVSI methods using
three previously published health economic models. The examples were chosen to
represent a range of real-world contexts, including situations with multiple
study outcomes, missing data, and data from an observational rather than a
randomized study. The computational speed and accuracy of each method were
compared, and the relative advantages and implementation challenges of the
methods were highlighted. In each example, the approximation methods took
minutes or hours to achieve reasonably accurate EVSI estimates, whereas the
traditional Monte Carlo method took weeks. Specific methods are particularly
suited to problems where we wish to compare multiple proposed sample sizes,
when the proposed sample size is large, or when the health economic model is
computationally expensive. All the evaluated methods gave estimates similar to
those given by traditional Monte Carlo, suggesting that EVSI can now be
efficiently computed with confidence in realistic examples.
Comment: 11 pages, 3 figures
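The nested Monte Carlo estimator discussed above can be sketched on a deliberately simple conjugate Gaussian decision model; all parameter values are assumptions for illustration, and real health economic models are far more complex (which is exactly why the inner loop becomes so expensive there).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decision model (all numbers illustrative): the incremental net benefit
# theta of a new treatment has prior theta ~ N(mu0, sd0^2); adopt it only if
# its expected value is positive.
mu0, sd0 = 0.5, 2.0
sigma = 4.0   # per-patient sampling standard deviation in the proposed study
n = 50        # proposed study sample size

def evsi(n_outer=200_000):
    """Nested Monte Carlo EVSI. The inner expectation is analytic here because
    the Gaussian model is conjugate, which keeps the nested part cheap."""
    theta = rng.normal(mu0, sd0, n_outer)            # outer loop: draw true effects
    xbar = rng.normal(theta, sigma / np.sqrt(n))     # simulate each study's mean
    w = sd0**2 / (sd0**2 + sigma**2 / n)             # posterior shrinkage weight
    post_mean = w * xbar + (1 - w) * mu0             # E[theta | simulated data]
    value_with_data = np.maximum(post_mean, 0.0).mean()   # decide after the study
    value_now = max(mu0, 0.0)                             # decide on the prior alone
    return value_with_data - value_now

evsi_value = evsi()
print("toy EVSI:", round(evsi_value, 3))
```

In a non-conjugate model the posterior mean would itself require an inner simulation per outer draw, which is the nesting that makes the standard estimator take weeks and motivates the approximation methods compared in the paper.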
Hybrid computer Monte-Carlo techniques
Hybrid analog-digital computer systems for Monte Carlo method application