Effect of Dedifferentiation on Time to Mutation Acquisition in Stem Cell-Driven Cancers
Accumulating evidence suggests that many tumors have a hierarchical
organization, with the bulk of the tumor composed of relatively differentiated
short-lived progenitor cells that are maintained by a small population of
undifferentiated long-lived cancer stem cells. It is unclear, however, whether
cancer stem cells originate from normal stem cells or from dedifferentiated
progenitor cells. To address this, we mathematically modeled the effect of
dedifferentiation on carcinogenesis. We considered a hybrid
stochastic-deterministic model of mutation accumulation in both stem cells and
progenitors, including dedifferentiation of progenitor cells to a stem
cell-like state. We performed exact computer simulations of the emergence of
tumor subpopulations with two mutations, and we derived semi-analytical
estimates for the waiting time distribution to fixation. Our results suggest
that dedifferentiation may play an important role in carcinogenesis, depending
on how stem cell homeostasis is maintained. If the stem cell population size is
held strictly constant (due to all divisions being asymmetric), we found that
dedifferentiation acts like a positive selective force in the stem cell
population and thus speeds carcinogenesis. If the stem cell population size is
allowed to vary stochastically with density-dependent reproduction rates
(allowing both symmetric and asymmetric divisions), we found that
dedifferentiation beyond a critical threshold leads to exponential growth of
the stem cell population. Thus, dedifferentiation may play a crucial role, the
common modeling assumption of constant stem cell population size may not be
adequate, and further progress in understanding carcinogenesis demands a more
detailed mechanistic understanding of stem cell homeostasis.
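To make the dynamics concrete, the sketch below simulates a toy stem-cell pool with division, death, mutation at division, and a dedifferentiation influx, using a Gillespie-style algorithm. This is a minimal sketch, not the authors' hybrid stochastic-deterministic model; the rate parameters (b, d, mu, eta) and the form of the influx term are hypothetical placeholders.

    import random

    def simulate_stem_pool(b=1.0, d=1.0, mu=1e-4, eta=0.01, t_max=50.0, n0=100):
        """Gillespie-style simulation of wild-type (wt) and mutant (mut) stem
        cell counts under division, death, mutation at division, and a
        dedifferentiation influx of progenitors at per-cell rate eta."""
        t, wt, mut = 0.0, n0, 0
        while t < t_max and wt + mut > 0:
            rates = [b * wt,              # wild-type division
                     b * mut,             # mutant division
                     d * wt,              # wild-type death
                     d * mut,             # mutant death
                     eta * (wt + mut)]    # dedifferentiation influx (hypothetical form)
            total = sum(rates)
            t += random.expovariate(total)
            r = random.uniform(0.0, total)
            if r < rates[0]:                      # wild-type divides...
                if random.random() < mu:          # ...and a daughter mutates
                    mut += 1
                else:
                    wt += 1
            elif r < rates[0] + rates[1]:
                mut += 1
            elif r < rates[0] + rates[1] + rates[2]:
                wt -= 1
            elif r < total - rates[4]:
                mut -= 1
            else:
                wt += 1                           # dedifferentiated cell joins the pool
        return t, wt, mut

With b = d in this toy, the influx term alone drives exponential growth of the expected pool size at rate eta, loosely echoing the threshold behavior the abstract describes.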
Diffusion Approximations for Demographic Inference: DaDi
Models of demographic history (population sizes, migration rates, and divergence times) inferred from genetic data complement archeology and serve as null models in genome scans for selection. Most current inference methods are computationally limited to considering simple models or non-recombining data. We introduce a method based on a diffusion approximation to the joint frequency spectrum of genetic variation between populations. Our implementation, DaDi, can model up to three interacting populations and scales well to genome-wide data. We have applied DaDi to human data from Africa, Europe, and East Asia, building the most complex statistically well-characterized model of human migration out of Africa to date.
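For readers unfamiliar with the tool, a minimal workflow sketch follows, assuming the standard dadi Python API; the input file name, grid sizes, and parameter values here are illustrative placeholders.

    import dadi

    # Load an observed site frequency spectrum (the file name is a placeholder).
    fs = dadi.Spectrum.from_file("example.fs")

    # Built-in one-population model: an instantaneous size change (two epochs),
    # wrapped for extrapolation to an infinitely fine diffusion grid.
    func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics1D.two_epoch)

    pts = [40, 50, 60]       # grid sizes used for extrapolation
    params = [2.0, 0.1]      # (nu, T): relative size and duration, illustrative values

    # Expected spectrum under the model, then its composite log-likelihood.
    model = func_ex(params, fs.sample_sizes, pts)
    ll = dadi.Inference.ll_multinom(model, fs)
    print("log-likelihood:", ll)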
Inferring the Joint Demographic History of Multiple Populations from Multidimensional SNP Frequency Data
Demographic models built from genetic data play important roles in illuminating prehistorical events and serving as null models in genome scans for selection. We introduce an inference method based on the joint frequency spectrum of genetic variants within and between populations. For candidate models we numerically compute the expected spectrum using a diffusion approximation to the one-locus, two-allele Wright-Fisher process, involving up to three simultaneous populations. Our approach is a composite likelihood scheme, since linkage between neutral loci alters the variance but not the expectation of the frequency spectrum. We thus use bootstraps incorporating linkage to estimate uncertainties for parameters and significance values for hypothesis tests. Our method can also incorporate selection on single sites, predicting the joint distribution of selected alleles among populations experiencing a bevy of evolutionary forces, including expansions, contractions, migrations, and admixture. We model human expansion out of Africa and the settlement of the New World, using 5 Mb of noncoding DNA resequenced in 68 individuals from 4 populations (YRI, CHB, CEU, and MXL) by the Environmental Genome Project. We infer divergence between West African and Eurasian populations 140 thousand years ago (95% confidence interval: 40–270 kya). This is earlier than other genetic studies, in part because we incorporate migration. We estimate the European (CEU) and East Asian (CHB) divergence time to be 23 kya (95% c.i.: 17–43 kya), long after archeological evidence places modern humans in Europe. Finally, we estimate divergence between East Asians (CHB) and Mexican-Americans (MXL) of 22 kya (95% c.i.: 16.3–26.9 kya), and our analysis yields no evidence for subsequent migration. Furthermore, combining our demographic model with a previously estimated distribution of selective effects among newly arising amino acid mutations accurately predicts the frequency spectrum of nonsynonymous variants across three continental populations (YRI, CHB, CEU).
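The uncertainty estimates above rest on bootstraps that respect linkage: because linkage inflates the variance (but not the mean) of the spectrum, blocks of linked loci are resampled as units. The sketch below shows that generic idea; fit_model and spectrum_from_blocks are hypothetical stand-ins, not functions from the authors' pipeline.

    import numpy as np

    def bootstrap_cis(blocks, fit_model, spectrum_from_blocks,
                      n_boot=200, alpha=0.05, seed=0):
        """Resample genomic blocks with replacement, refit the demographic
        model, and report percentile confidence intervals per parameter."""
        rng = np.random.default_rng(seed)
        estimates = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(blocks), len(blocks))
            resampled = [blocks[i] for i in idx]
            fs = spectrum_from_blocks(resampled)   # rebuild the frequency spectrum
            estimates.append(fit_model(fs))        # refit demographic parameters
        estimates = np.asarray(estimates)
        lo = np.percentile(estimates, 100 * alpha / 2, axis=0)
        hi = np.percentile(estimates, 100 * (1 - alpha / 2), axis=0)
        return lo, hi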
Universally Sloppy Parameter Sensitivities in Systems Biology
Quantitative computational models play an increasingly important role in
modern biology. Such models typically involve many free parameters, and
assigning their values is often a substantial obstacle to model development.
Directly measuring in vivo biochemical parameters is difficult, and
collectively fitting them to other data often yields large parameter
uncertainties. Nevertheless, in earlier work we showed in a
growth-factor-signaling model that collective fitting could yield
well-constrained predictions, even when it left individual parameters very
poorly constrained. We also showed that the model had a 'sloppy' spectrum of
parameter sensitivities, with eigenvalues roughly evenly distributed over many
decades. Here we use a collection of models from the literature to test whether
such sloppy spectra are common in systems biology. Strikingly, we find that
every model we examine has a sloppy spectrum of sensitivities. We also test
several consequences of this sloppiness for building predictive models. In
particular, sloppiness suggests that collective fits to even large amounts of
ideal time-series data will often leave many parameters poorly constrained.
Tests over our model collection are consistent with this suggestion. This
difficulty with collective fits may seem to argue for direct parameter
measurements, but sloppiness also implies that such measurements must be
formidably precise and complete to usefully constrain many model predictions.
We confirm this implication in our signaling model. Our results suggest that
sloppy sensitivity spectra are universal in systems biology models. The
prevalence of sloppiness highlights the power of collective fits and suggests
that modelers should focus on predictions rather than on parameters.
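To make the notion of a sloppy spectrum concrete, the sketch below computes the eigenvalues of the Gauss-Newton Hessian J^T J of a least-squares cost for a toy sum-of-exponentials model, a classic sloppy example. This is illustrative only, not one of the paper's systems-biology models; the eigenvalues typically spread over many decades, which is the sloppy signature.

    import numpy as np

    def model(params, t):
        # Sum of decaying exponentials; log-parameters keep rates positive.
        rates = np.exp(params)
        return sum(np.exp(-k * t) for k in rates)

    def jacobian(params, t, h=1e-6):
        # Forward finite differences of the model output w.r.t. each parameter.
        base = model(params, t)
        J = np.empty((t.size, params.size))
        for i in range(params.size):
            p = params.copy()
            p[i] += h
            J[:, i] = (model(p, t) - base) / h
        return J

    t = np.linspace(0.0, 5.0, 50)
    params = np.log([1.0, 0.3, 0.1])        # three decay rates
    J = jacobian(params, t)
    print(np.linalg.eigvalsh(J.T @ J))      # eigenvalues spanning many decades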
RuleMonkey: software for stochastic simulation of rule-based models
Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified in model-specification languages such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The network implied by a set of rules is often very large, so generating it tends to be computationally expensive; moreover, the cost of many commonly used methods for simulating network dynamics grows with network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by the rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems.

Results: Here, we present a software tool called RuleMonkey, which implements a network-free method, similar to Gillespie's method, for simulating rule-based models. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection-free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods.

Conclusions: RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for the benchmark problems we examined. RuleMonkey is freely available as a stand-alone application at http://public.tgen.org/rulemonkey. It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing, and sharing rule-based models.
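To illustrate what "network-free" means in practice, the toy sketch below tracks molecules as individual agents and fires a single binding rule on randomly chosen matching instances, so no reaction network is ever enumerated. It conveys the flavor of such methods only; it is not RuleMonkey's actual algorithm, and the rate constant and population size are hypothetical.

    import random

    def network_free_dimerization(n_monomers=1000, k_bind=0.001, t_max=10.0):
        """Rule: A(free) + A(free) -> A.A, applied agent-by-agent.
        The propensity counts matching reactant pairs directly, so the
        full list of possible reactions is never constructed."""
        free = list(range(n_monomers))   # ids of unbound monomers
        dimers = []
        t = 0.0
        while t < t_max and len(free) >= 2:
            n = len(free)
            propensity = k_bind * n * (n - 1) / 2   # number of matching pairs
            t += random.expovariate(propensity)
            if t >= t_max:
                break
            # Fire the rule on a randomly chosen pair of matching agents.
            a = free.pop(random.randrange(len(free)))
            b = free.pop(random.randrange(len(free)))
            dimers.append((a, b))
        return free, dimers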