
    Effect of Dedifferentiation on Time to Mutation Acquisition in Stem Cell-Driven Cancers

    Accumulating evidence suggests that many tumors have a hierarchical organization, with the bulk of the tumor composed of relatively differentiated short-lived progenitor cells that are maintained by a small population of undifferentiated long-lived cancer stem cells. It is unclear, however, whether cancer stem cells originate from normal stem cells or from dedifferentiated progenitor cells. To address this, we mathematically modeled the effect of dedifferentiation on carcinogenesis. We considered a hybrid stochastic-deterministic model of mutation accumulation in both stem cells and progenitors, including dedifferentiation of progenitor cells to a stem cell-like state. We performed exact computer simulations of the emergence of tumor subpopulations with two mutations, and we derived semi-analytical estimates for the waiting time distribution to fixation. Our results suggest that dedifferentiation may play an important role in carcinogenesis, depending on how stem cell homeostasis is maintained. If the stem cell population size is held strictly constant (due to all divisions being asymmetric), we found that dedifferentiation acts like a positive selective force in the stem cell population and thus speeds carcinogenesis. If the stem cell population size is allowed to vary stochastically with density-dependent reproduction rates (allowing both symmetric and asymmetric divisions), we found that dedifferentiation beyond a critical threshold leads to exponential growth of the stem cell population. Thus, dedifferentiation may play a crucial role; the common modeling assumption of a constant stem cell population size may not be adequate; and further progress in understanding carcinogenesis demands a more detailed mechanistic understanding of stem cell homeostasis.
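
    The threshold behaviour described above can be illustrated with a toy birth-death simulation. The following is a minimal sketch, not the paper's hybrid model: the division, differentiation, and dedifferentiation rates, the saturating form of density dependence, and the assumption that the progenitor pool is ten times the stem pool are all hypothetical, chosen only to exhibit a critical dedifferentiation rate (here delta = 0.05).

        import random

        def simulate(delta, n0=100, t_max=50.0, K=200.0, n_cap=50000):
            """Gillespie simulation of the stem cell count n.

            birth:  density-dependent symmetric division, per-capita rate K/(K+n)
            death:  differentiation out of the stem pool, per-capita rate 0.5
            dediff: progenitors (assumed ~10x the stem pool) re-entering the
                    stem state at per-progenitor rate delta
            """
            t, n = 0.0, n0
            while t < t_max and 0 < n < n_cap:
                birth = n * K / (K + n)
                death = 0.5 * n
                dediff = delta * 10 * n
                total = birth + death + dediff
                t += random.expovariate(total)
                if random.uniform(0.0, total) < birth + dediff:
                    n += 1
                else:
                    n -= 1
            return n

        # Below the critical rate the stem pool stays bounded; above it
        # (here delta > 0.05) it grows until it hits the cap.
        for delta in (0.0, 0.02, 0.10):
            print(f"delta={delta}: final n = {simulate(delta)}")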

    Diffusion Approximations for Demographic Inference: DaDi

    Models of demographic history (population sizes, migration rates, and divergence times) inferred from genetic data complement archeology and serve as null models in genome scans for selection. Most current inference methods are computationally limited to considering simple models or non-recombining data. We introduce a method based on a diffusion approximation to the joint frequency spectrum of genetic variation between populations. Our implementation, DaDi, can model up to three interacting populations and scales well to genome-wide data. We have applied DaDi to human data from Africa, Europe, and East Asia, building the most complex statistically well-characterized model of human migration out of Africa to date.
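
    The diffusion approach can be exercised through dadi's public Python API. The following is a minimal sketch, assuming the standard dadi package ("pip install dadi") and its built-in one-population two-epoch model; the sample size, grid sizes, and parameter values are illustrative, not taken from the paper.

        import dadi

        ns = [20]                    # haploid sample size
        pts_l = [40, 50, 60]         # grid sizes for extrapolation

        # Built-in model: instantaneous size change to nu, T time units ago.
        func = dadi.Demographics1D.two_epoch
        func_ex = dadi.Numerics.make_extrap_log_func(func)

        params = [2.0, 0.1]          # nu (relative size), T (in 2*N_ref generations)
        model_fs = func_ex(params, ns, pts_l)   # expected frequency spectrum

        # Multinomial composite log-likelihood against data; here the model
        # spectrum itself stands in for an observed spectrum.
        print(model_fs)
        print("log-likelihood:", dadi.Inference.ll_multinom(model_fs, model_fs))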

    Modelling and analysis of nonlinear multibody dynamic systems

    Modular self-reconfigurable robots are characterised by high versatility. While most robots are designed for a specific purpose, these robots are designed to be multi-talented and adaptive. This thesis uses a modern framework for the automatic model generation of such modular self-reconfigurable robots. Based on a two-step Newton-Euler approach combined with elements of screw theory, this framework provides an elegant way to calculate the equations of motion automatically in closed form. To simulate the kinematics and dynamics, the framework has been implemented in MATLAB. To verify both the implementation and the modelling, the equations of motion of two robot structures were calculated using the framework and compared to the equations obtained by a Lagrangian approach. Finally, it is shown that the linearisation and decoupling of the systems considered in this thesis can be achieved easily using feedback linearisation. For completeness, two ways to control these linearised systems are demonstrated.
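
    The feedback-linearisation step can be illustrated on the simplest possible plant. The following is a minimal Python sketch (not the thesis' MATLAB framework) of computed-torque control for a single damped pendulum, with hypothetical mass, damping, and gain values: the control law cancels the nonlinearity exactly, leaving a linear double integrator that a PD loop stabilises.

        import math

        # Pendulum dynamics: m*l^2*qdd + c*qd + m*g*l*sin(q) = tau
        m, l, c, g = 1.0, 0.5, 0.1, 9.81
        kp, kd = 25.0, 10.0
        q_des = math.pi / 4                  # desired angle

        q, qd, dt = 0.0, 0.0, 1e-3
        for step in range(5000):
            v = kp * (q_des - q) - kd * qd   # outer linear PD loop
            # Feedback linearisation: cancel damping and gravity, so qdd = v.
            tau = m * l**2 * v + c * qd + m * g * l * math.sin(q)
            qdd = (tau - c * qd - m * g * l * math.sin(q)) / (m * l**2)
            qd += qdd * dt                   # explicit Euler integration
            q += qd * dt

        print(f"final angle: {q:.4f} rad (target {q_des:.4f})")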

    Predictive Route Energy Calculation for an Electric Vehicle


    A divide-and-conquer approach to analyze underdetermined biochemical models

    Motivation: To obtain meaningful predictions from dynamic computational models, their uncertain parameter values need to be estimated from experimental data. Because the number of parameters is usually large compared to the available measurement data, these estimation problems are often underdetermined, meaning that the solution is a multidimensional space. In this case, the challenge is to nevertheless obtain a sound understanding of the system despite non-identifiable parameter values, e.g. by identifying those parameters that most sensitively determine the model's behavior. Results: Here, we present the so-called divide-and-conquer approach, a strategy to analyze underdetermined biochemical models. The approach draws on steady-state omics measurement data and exploits a decomposition of the global estimation problem into independent subproblems. The solutions to these subproblems are joined to form the complete space of global optima, which can then be easily analyzed. We derive the conditions under which the decomposition occurs, outline strategies to fulfill these conditions and, using an example model, illustrate how the approach uncovers the most important parameters and suggests targeted experiments without knowing the exact parameter values.
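
    The decomposition idea can be made concrete on a toy pathway. The following is a minimal sketch, not the paper's algorithm: for a linear chain S -> A -> B -> out with mass-action rates k1*S, k2*A, k3*B and measured steady-state concentrations (the values below are made up), each steady-state balance involves only two rate constants, so each balance is an independent subproblem whose solutions are then joined into the full, here one-dimensional, space of global optima.

        import sympy as sp

        k1, k2, k3 = sp.symbols("k1 k2 k3", positive=True)
        S, A, B = 2.0, 1.0, 0.5    # hypothetical measured steady-state levels

        # Subproblem 1: the balance around A involves only k1 and k2.
        sol_k2 = sp.solve(sp.Eq(k1 * S, k2 * A), k2)[0]
        # Subproblem 2: the balance around B involves only k2 and k3.
        sol_k3 = sp.solve(sp.Eq(k2 * A, k3 * B), k3)[0]

        # Join the sub-solutions: every parameter expressed along the single
        # free direction k1; only ratios of rate constants are identifiable.
        joined = {k2: sol_k2, k3: sol_k3.subs(k2, sol_k2)}
        print(joined)   # {k2: 2.0*k1, k3: 4.0*k1}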

    Sloppiness, Modeling, and Evolution in Biochemical Networks

    The wonderful complexity of living cells cannot be understood solely by studying one gene or protein at a time. Instead, we must consider their interactions and study the complex biochemical networks they function in. Quantitative computational models are important tools for understanding the dynamics of such biochemical networks, and we begin in Chapter 2 by showing that the sensitivities of such models to parameter changes are generically 'sloppy', with eigenvalues roughly evenly spaced over many decades. This sloppiness has practical consequences for the modeling process. In particular, we argue that if one's goal is to make experimentally testable predictions, sloppiness suggests that collectively fitting model parameters to system-level data will often be much more efficient than directly measuring them. In Chapter 3 we apply some of the lessons of sloppiness to a specific modeling project involving in vitro experiments on the activation of the heterotrimeric G protein transducin. We explore how well time-series activation experiments can constrain model parameters, and we show quantitatively that the T177A mutant of transducin exhibits a much slower rate of rhodopsin-mediated activation than the wild-type. All the preceding biochemical modeling work is performed using the SloppyCell modeling environment, and Chapter 4 briefly introduces SloppyCell and some of the analyses it implements. Additionally, the two appendices of this thesis contain preliminary user and developer documentation for SloppyCell. Modelers tweak network parameters with their computers, and nature tweaks such parameters through evolution. We study evolution in Chapter 5 using a version of Fisher's geometrical model with minimal pleiotropy, appropriate for the evolution of biochemical parameters. The model predicts a striking pattern of cusps in the distribution of fitness effects of fixed mutations, and using extreme value theory we show that the consequences of these cusps should be observable in feasible experiments. Finally, this thesis closes in Chapter 6 by briefly considering several topics: sloppiness in two non-biochemical models, two technical issues with building models, and the effect of sloppiness on evolution beyond the first fixed mutation.
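
    The evolutionary model of Chapter 5 can be caricatured in a few lines. The following is a minimal sketch, not the thesis code: a Fisher geometrical model with minimal pleiotropy (each mutation perturbs a single trait), Gaussian fitness, and Kimura's haploid fixation probability; the trait count, mutation scale, and population size are all hypothetical.

        import math, random

        n_traits, sigma, N = 10, 0.3, 10000
        z = [1.0] * n_traits                 # current trait values; optimum at 0

        def fitness(z):
            return math.exp(-0.5 * sum(x * x for x in z))

        def p_fix(s, N):
            """Kimura fixation probability for a haploid population."""
            if abs(s) < 1e-12:
                return 1.0 / N
            x = 2.0 * N * s
            if x < -50.0:                    # too deleterious to ever fix
                return 0.0
            return (1.0 - math.exp(-2.0 * s)) / (1.0 - math.exp(-x))

        w = fitness(z)
        fixed = []
        for _ in range(100000):
            i = random.randrange(n_traits)   # minimal pleiotropy: one trait hit
            z_new = list(z)
            z_new[i] += random.gauss(0.0, sigma)
            w_new = fitness(z_new)
            s = w_new / w - 1.0              # selection coefficient
            if random.random() < p_fix(s, N):
                z, w = z_new, w_new
                fixed.append(s)

        print(f"{len(fixed)} fixations; mean fixed s = "
              f"{sum(fixed) / max(len(fixed), 1):.4g}")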

    Universally Sloppy Parameter Sensitivities in Systems Biology

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a 'sloppy' spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
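
    A sloppy spectrum is easy to reproduce on a toy problem. The following is a minimal sketch using the classic sum-of-exponentials fitting model (illustrative, not one of the paper's systems-biology models): build the Gauss-Newton Hessian J^T J from the sensitivities of y(t) = sum_i exp(-k_i t) with respect to log k_i, and inspect its eigenvalues, which typically spread over many decades.

        import numpy as np

        ks = np.array([0.2, 1.0, 5.0, 25.0])   # rate constants (parameters)
        ts = np.linspace(0.1, 10.0, 50)        # measurement times

        # Sensitivity of y(t) to log k_i:  d y / d log k = -k * t * exp(-k * t)
        J = np.array([[-k * t * np.exp(-k * t) for k in ks] for t in ts])
        H = J.T @ J                             # Gauss-Newton Hessian of chi^2/2

        eigs = np.sort(np.linalg.eigvalsh(H))[::-1]
        print("eigenvalues:", eigs)
        print("spread (decades):", np.log10(eigs[0] / eigs[-1]))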