Efficient Finite Difference Method for Computing Sensitivities of Biochemical Reactions
Sensitivity analysis of biochemical reactions aims at quantifying the
dependence of the reaction dynamics on the reaction rates. The computation of
the parameter sensitivities, however, poses many computational challenges when
taking stochastic noise into account. This paper proposes a new finite
difference method for efficiently computing sensitivities of biochemical
reactions. We employ propensity bounds of reactions to couple the simulation of
the nominal and perturbed processes. The exactness of the simulation is
preserved by applying a rejection-based mechanism. At each simulation step,
the nominal and perturbed processes under our coupling strategy are
synchronized and often jump together, increasing their positive correlation and
hence reducing the variance of the estimator. The distinctive feature of our
approach in comparison with existing coupling approaches is that it only needs
to maintain a single data structure storing propensity bounds of reactions
during the simulation of the nominal and perturbed processes. Our approach
allows computing sensitivities with respect to many reaction rates simultaneously.
Moreover, the data structure does not need to be updated frequently, which
reduces the computational cost. This feature is especially useful when
applied to large reaction networks. We benchmark our method on biological
reaction models to demonstrate its applicability and efficiency.
Comment: 29 pages with 6 figures, 2 tables
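The coupling idea in this abstract can be illustrated with a much simpler scheme than the rejection-based, propensity-bound coupling the paper actually proposes: finite differences with common random numbers, where the nominal and perturbed SSA paths are driven by the same stream of uniforms so that their difference has reduced variance. The birth-death model, parameter values, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch (NOT the paper's rejection-based coupling): common-random-numbers
# finite differences for dE[X(T)]/dk in an assumed birth-death network
#   0 -> X  with rate k      (production)
#   X -> 0  with rate g*X    (degradation)

def ssa(k, g, x0, T, draws):
    """Gillespie SSA driven by a pre-drawn stream of uniforms (shared between runs)."""
    x, t, i = x0, 0.0, 0
    while t < T and i + 1 < len(draws):
        a1, a2 = k, g * x                      # propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += -np.log(draws[i]) / a0            # time to next reaction
        if t >= T:
            break
        x += 1 if draws[i + 1] * a0 < a1 else -1   # choose which reaction fires
        i += 2
    return x

def fd_sensitivity_crn(k, g, x0, T, h=1e-2, n_samples=10_000, seed=0):
    """Finite-difference estimate of dE[X(T)]/dk using a shared random stream per sample."""
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_samples):
        u = rng.random(4000)                   # one shared stream for both paths
        diffs.append(ssa(k + h, g, x0, T, u) - ssa(k, g, x0, T, u))
    return np.mean(diffs) / h, np.std(diffs) / (h * np.sqrt(n_samples))

if __name__ == "__main__":
    est, stderr = fd_sensitivity_crn(k=10.0, g=1.0, x0=0, T=5.0)
    # For this model E[X(T)] = (k/g)(1 - exp(-g*T)), so dE/dk is about 0.993 at T=5.
    print(f"sensitivity ~ {est:.3f} +/- {stderr:.3f}")
```

Swapping the shared-stream coupling for the paper's rejection-based simulation would synchronize the two paths more tightly, but the estimator has the same shape: an average of scaled differences between coupled nominal and perturbed trajectories.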
How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories
Reconstruction of population histories is a central problem in population
genetics. Existing coalescent-based methods, like the seminal work of Li and
Durbin (Nature, 2011), attempt to solve this problem using sequence data but
have no rigorous guarantees. Determining the amount of data needed to correctly
reconstruct population histories is a major challenge. Using a variety of tools
from information theory, the theory of extremal polynomials, and approximation
theory, we prove new sharp information-theoretic lower bounds on the problem of
reconstructing population structure -- the history of multiple subpopulations
that merge, split and change sizes over time. Our lower bounds are exponential
in the number of subpopulations, even when reconstructing recent histories. We
demonstrate the sharpness of our lower bounds by providing algorithms for
distinguishing and learning population histories with matching dependence on
the number of subpopulations. Along the way and of independent interest, we
essentially determine the optimal number of samples needed to learn an
exponential mixture distribution information-theoretically, proving the upper
bound by analyzing natural (and efficient) algorithms for this problem.
Comment: 38 pages, Appeared in RECOMB 201
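For the mixture-learning step mentioned at the end of the abstract, a standard baseline (not the authors' information-theoretically optimal procedure) is expectation-maximization on a mixture of exponential distributions. The two-component setup, sample sizes, and names below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: EM for a two-component exponential mixture
#   p(x) = w1*l1*exp(-l1*x) + w2*l2*exp(-l2*x),
# the kind of distribution that arises when coalescence times are pooled across
# hidden subpopulation histories. All parameter choices here are illustrative.

def em_exponential_mixture(x, k=2, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)                                   # mixing weights
    lam = 1.0 / (x.mean() * rng.uniform(0.5, 1.5, size=k))    # rate parameters
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        dens = w * lam * np.exp(-np.outer(x, lam))            # shape (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and rates from responsibility-weighted data
        nk = resp.sum(axis=0)
        w = nk / len(x)
        lam = nk / (resp * x[:, None]).sum(axis=0)
    return w, lam

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.exponential(1.0, 5000),        # rate 1
                           rng.exponential(0.1, 5000)])       # rate 10
    print(em_exponential_mixture(data))
```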
Can one hear the shape of a population history?
Reconstructing past population size from present day genetic data is a major
goal of population genetics. Recent empirical studies infer population size
history using coalescent-based models applied to a small number of individuals.
Here we provide tight bounds on the amount of exact coalescence time data
needed to recover the population size history of a single, panmictic population
at a certain level of accuracy. In practice, coalescence times are estimated
from sequence data and so our lower bounds should be taken as rather
conservative.
Comment: 22 pages, 7 figures; v2 is significantly revised from v1
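As a toy illustration of how exact coalescence times constrain population size, the sketch below computes the maximum-likelihood estimate of a constant haploid population size: a pair of lineages coalesces at rate 1/N per generation, so coalescence times are exponential with mean N and the MLE is the sample mean. The constant-size assumption and the values used are illustrative and far simpler than the variable histories the paper analyzes; the point is only how the error shrinks with the number of coalescence times.

```python
import numpy as np

# Minimal sketch under simplifying assumptions (constant-size haploid population,
# exact pairwise coalescence times). Not the paper's analysis; N_true and the
# sample sizes below are illustrative.

def estimate_constant_N(coal_times):
    """MLE of a constant haploid population size from exact pairwise coalescence times."""
    t = np.asarray(coal_times, dtype=float)
    n_hat = t.mean()                            # MLE for the exponential mean
    stderr = t.std(ddof=1) / np.sqrt(len(t))    # roughly N / sqrt(n) for exponential data
    return n_hat, stderr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N_true = 10_000
    for n_times in (10, 100, 1000, 10_000):
        times = rng.exponential(scale=N_true, size=n_times)
        n_hat, se = estimate_constant_N(times)
        print(f"n={n_times:6d}  N_hat={n_hat:9.1f}  stderr={se:8.1f}")
```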