
    Effect of Dedifferentiation on Time to Mutation Acquisition in Stem Cell-Driven Cancers

    Accumulating evidence suggests that many tumors have a hierarchical organization, with the bulk of the tumor composed of relatively differentiated short-lived progenitor cells that are maintained by a small population of undifferentiated long-lived cancer stem cells. It is unclear, however, whether cancer stem cells originate from normal stem cells or from dedifferentiated progenitor cells. To address this, we mathematically modeled the effect of dedifferentiation on carcinogenesis. We considered a hybrid stochastic-deterministic model of mutation accumulation in both stem cells and progenitors, including dedifferentiation of progenitor cells to a stem cell-like state. We performed exact computer simulations of the emergence of tumor subpopulations with two mutations, and we derived semi-analytical estimates for the waiting time distribution to fixation. Our results suggest that dedifferentiation may play an important role in carcinogenesis, depending on how stem cell homeostasis is maintained. If the stem cell population size is held strictly constant (due to all divisions being asymmetric), we found that dedifferentiation acts like a positive selective force in the stem cell population and thus speeds carcinogenesis. If the stem cell population size is allowed to vary stochastically with density-dependent reproduction rates (allowing both symmetric and asymmetric divisions), we found that dedifferentiation beyond a critical threshold leads to exponential growth of the stem cell population. Thus, dedifferentiation may play a crucial role, the common modeling assumption of constant stem cell population size may not be adequate, and further progress in understanding carcinogenesis demands a more detailed mechanistic understanding of stem cell homeostasis.
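
    As a concrete illustration of the kind of model described above, the following sketch tracks stem (S) and progenitor (P) cell counts with a Gillespie-style event loop that includes a dedifferentiation event P -> S. The event set and all rate constants are illustrative assumptions chosen for a minimal example, not the paper's exact model; stem loss is included so that a dedifferentiation threshold exists.

    import random

    def simulate(t_max=20.0, s0=100, p0=500,
                 div=1.0,      # S -> S + P   asymmetric stem division
                 sloss=0.2,    # S -> P + P   symmetric differentiation (stem loss)
                 death=1.0,    # P -> 0       progenitor death
                 dediff=0.3):  # P -> S       dedifferentiation
        """Gillespie-style simulation of stem (S) and progenitor (P) counts."""
        rng = random.Random(0)
        t, S, P = 0.0, s0, p0
        while t < t_max and S + P > 0:
            rates = [div * S, sloss * S, death * P, dediff * P]
            total = sum(rates)
            if total == 0.0:
                break
            t += rng.expovariate(total)          # exponential time to next event
            r = rng.uniform(0.0, total)          # choose which event fires
            if r < rates[0]:
                P += 1                           # asymmetric division
            elif r < rates[0] + rates[1]:
                S -= 1; P += 2                   # symmetric differentiation
            elif r < rates[0] + rates[1] + rates[2]:
                P -= 1                           # progenitor death
            else:
                P -= 1; S += 1                   # dedifferentiation
        return S, P

    print(simulate())  # in this toy model, dediff above a threshold makes S grow exponentially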

    Diffusion Approximations for Demographic Inference: DaDi

    Models of demographic history (population sizes, migration rates, and divergence times) inferred from genetic data complement archeology and serve as null models in genome scans for selection. Most current inference methods are computationally limited to considering simple models or non-recombining data. We introduce a method based on a diffusion approximation to the joint frequency spectrum of genetic variation between populations. Our implementation, DaDi, can model up to three interacting populations and scales well to genome-wide data. We have applied DaDi to human data from Africa, Europe, and East Asia, building the most complex statistically well-characterized model of human migration out of Africa to date.
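
    For intuition about the quantity being modeled, the sketch below builds a single-population site frequency spectrum (SFS) by brute-force forward Wright-Fisher simulation; DaDi instead computes the expected spectrum by numerically solving the corresponding diffusion equation, which is what makes genome-wide, multi-population inference tractable. All population and sample sizes here are illustrative assumptions.

    import numpy as np

    def wf_sfs(N=200, n_sample=20, n_sites=5000, mu=1e-3, gens=2000, seed=0):
        """Return counts of sites with derived-allele sample count 1..n_sample-1."""
        rng = np.random.default_rng(seed)
        freq = np.zeros(n_sites)                     # derived-allele frequency per site
        for _ in range(gens):
            # recurrent mutation: a monomorphic-ancestral site gains one copy
            # (requires 2*N*mu < 1 for this simple per-generation coin flip)
            new = (freq == 0.0) & (rng.random(n_sites) < 2 * N * mu)
            freq[new] = 1.0 / (2 * N)
            # genetic drift: binomial resampling of 2N chromosomes at each site
            freq = rng.binomial(2 * N, freq) / (2 * N)
        # project each site onto a sample of n_sample chromosomes
        counts = rng.binomial(n_sample, freq)
        sfs = np.bincount(counts, minlength=n_sample + 1)
        return sfs[1:n_sample]                       # segregating classes only

    print(wf_sfs())

    Under neutrality the expected spectrum falls off roughly as 1/i in the derived-allele count i; demographic parameters are fit by comparing such model-predicted spectra to the observed joint spectrum across populations.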

    Universally Sloppy Parameter Sensitivities in Systems Biology

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
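
    A minimal sketch of the diagnostic behind these claims, under assumptions of our own choosing: for a toy sum-of-exponentials model (a standard illustration, not one of the paper's biological models), we build the sensitivity matrix J of model outputs with respect to log parameters by finite differences and inspect the eigenvalues of J^T J, the least-squares Hessian approximation. A "sloppy" model shows eigenvalues roughly evenly spread over many decades.

    import numpy as np

    def model(t, log_k):
        """y(t) = sum_i exp(-k_i * t), parameterized by log rate constants."""
        k = np.exp(log_k)
        return np.exp(-np.outer(t, k)).sum(axis=1)

    t = np.linspace(0.1, 5.0, 50)
    log_k = np.log([0.3, 1.0, 1.8, 3.0])      # illustrative rate constants

    # finite-difference sensitivities d y(t) / d log(k_i)
    eps = 1e-6
    J = np.empty((t.size, log_k.size))
    for i in range(log_k.size):
        dp = log_k.copy()
        dp[i] += eps
        J[:, i] = (model(t, dp) - model(t, log_k)) / eps

    eigvals = np.linalg.eigvalsh(J.T @ J)      # spectrum of the Hessian proxy
    print(eigvals / eigvals.max())             # ratios typically span several decades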

    RuleMonkey: software for stochastic simulation of rule-based models

    Background: The system-level dynamics of many molecular interactions, particularly protein-protein interactions, can be conveniently represented using reaction rules, which can be specified using model-specification languages, such as the BioNetGen language (BNGL). A set of rules implicitly defines a (bio)chemical reaction network. The reaction network implied by a set of rules is often very large, and as a result, generation of the network implied by rules tends to be computationally expensive. Moreover, the cost of many commonly used methods for simulating network dynamics is a function of network size. Together these factors have limited application of the rule-based modeling approach. Recently, several methods for simulating rule-based models have been developed that avoid the expensive step of network generation. The cost of these "network-free" simulation methods is independent of the number of reactions implied by rules. Software implementing such methods is now needed for the simulation and analysis of rule-based models of biochemical systems.

    Results: Here, we present a software tool called RuleMonkey, which implements a network-free method for simulation of rule-based models that is similar to Gillespie's method. The method is suitable for rule-based models that can be encoded in BNGL, including models with rules that have global application conditions, such as rules for intramolecular association reactions. In addition, the method is rejection free, unlike other network-free methods that introduce null events, i.e., steps in the simulation procedure that do not change the state of the reaction system being simulated. We verify that RuleMonkey produces correct simulation results, and we compare its performance against DYNSTOC, another BNGL-compliant tool for network-free simulation of rule-based models. We also compare RuleMonkey against problem-specific codes implementing network-free simulation methods.

    Conclusions: RuleMonkey enables the simulation of rule-based models for which the underlying reaction networks are large. It is typically faster than DYNSTOC for benchmark problems that we have examined. RuleMonkey is freely available as a stand-alone application (http://public.tgen.org/rulemonkey). It is also available as a simulation engine within GetBonNie, a web-based environment for building, analyzing and sharing rule-based models.
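
    For readers unfamiliar with the underlying algorithm, the sketch below shows the core event loop of Gillespie's direct method on a tiny fixed network (A + B <-> C); network-free tools such as RuleMonkey apply the same propensity-based event selection to rules and individual molecular objects rather than to an enumerated reaction network. Species and rate constants here are illustrative assumptions.

    import random

    def gillespie(t_max=10.0, k_bind=1e-3, k_unbind=0.1, seed=2):
        """Gillespie direct method for A + B -> C and C -> A + B."""
        rng = random.Random(seed)
        A, B, C = 1000, 800, 0
        t = 0.0
        while t < t_max:
            a1 = k_bind * A * B        # propensity of A + B -> C
            a2 = k_unbind * C          # propensity of C -> A + B
            a0 = a1 + a2
            if a0 == 0.0:
                break
            t += rng.expovariate(a0)   # exponentially distributed waiting time
            if rng.uniform(0.0, a0) < a1:
                A -= 1; B -= 1; C += 1
            else:
                A += 1; B += 1; C -= 1
        return A, B, C

    print(gillespie())

    The "rejection free" property highlighted in the abstract concerns the event-selection step: every selected event changes the system state, whereas some other network-free samplers draw candidate events that may be discarded as null events.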