
    Evolution of Robustness to Noise and Mutation in Gene Expression Dynamics

    The phenotype of a biological system needs to be robust against mutation in order to be sustained across generations. At the same time, the phenotype of an individual needs to be robust against fluctuations of both internal and external origin that are encountered during growth and development. Is there a relationship between these two types of robustness, one acting within a single generation and the other over evolutionary time? Could stochasticity in gene expression be relevant to the evolution of either? Robustness can be defined by the sharpness of the phenotype distribution: the variance of the phenotype distribution due to genetic variation gives a measure of 'genetic robustness', while that of isogenic individuals gives a measure of 'developmental robustness'. Through simulations of a simple stochastic gene expression network that undergoes mutation and selection, we show that in order for the network to acquire both types of robustness, the phenotypic variance induced by mutations must be smaller than that observed in an isogenic population. As the latter originates from noise in gene expression, this means that genetic robustness evolves only when the noise strength in gene expression exceeds some threshold. In that case, both variances decrease over the evolutionary time course, indicating increasing robustness. The results reveal how the noise that cells encounter during growth and development shapes a network's robustness to stochasticity in gene expression, which in turn shapes its robustness to mutation. The condition for the evolution of robustness, as well as the relationship between genetic and developmental robustness, is derived through the variance of phenotypic fluctuations, which is measurable experimentally.
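    The two robustness measures defined above can be illustrated with a toy simulation. The network dynamics and parameters below are assumptions for illustration, not the paper's exact model: 'developmental' variance is estimated from isogenic replicates run with expression noise, and 'genetic' variance from single mutants run without noise.

```python
import random

N = 8  # number of genes (illustrative choice)

def phenotype(genotype, noise, rng, steps=20):
    """Iterate simple clipped-linear expression dynamics; return mean expression."""
    x = [0.5] * N
    for _ in range(steps):
        x = [max(0.0, min(1.0,
                 sum(genotype[i][j] * x[j] for j in range(N)) / N
                 + rng.gauss(0.0, noise) + 0.5))
             for i in range(N)]
    return sum(x) / N

rng = random.Random(0)
wild_type = [[rng.choice([-1, 0, 1]) for _ in range(N)] for _ in range(N)]

# 'Developmental' variance: isogenic individuals, noisy dynamics.
iso = [phenotype(wild_type, noise=0.1, rng=rng) for _ in range(200)]
mean_iso = sum(iso) / len(iso)
v_dev = sum((p - mean_iso) ** 2 for p in iso) / len(iso)

# 'Genetic' variance: one random mutation per individual, noiseless dynamics.
mut_ph = []
for _ in range(200):
    g = [row[:] for row in wild_type]
    i, j = rng.randrange(N), rng.randrange(N)
    g[i][j] = rng.choice([-1, 0, 1])
    mut_ph.append(phenotype(g, noise=0.0, rng=rng))
mean_mut = sum(mut_ph) / len(mut_ph)
v_gen = sum((p - mean_mut) ** 2 for p in mut_ph) / len(mut_ph)
```

    In the paper's terms, evolution of both kinds of robustness requires v_gen to stay below v_dev; a full reproduction would embed this measurement inside a mutation-selection loop.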

    Canalization effect in the coagulation cascade and the interindividual variability of oral anticoagulant response: a simulation study

    Background: Increasing the predictability and reducing the rate of side effects of oral anticoagulant treatment (OAT) requires further clarification of the cause of about 50% of the interindividual variability of OAT response that is currently unaccounted for. We explore numerically the hypothesis that the effect of the interindividual expression variability of coagulation proteins, which does not usually result in variability of the coagulation times in untreated subjects, is unmasked by OAT.

    Results: We developed a stochastic variant of the Hockin-Mann model of the tissue factor coagulation pathway, using literature data for the variability of coagulation protein levels in the blood of normal subjects. We simulated in vitro coagulation and estimated the Prothrombin Time and the INR across a model population. In a model of untreated subjects, a "canalization effect" can be observed, in that a coefficient of variation of up to 33% in each protein level results in a simulated INR of 1 with a clinically irrelevant dispersion of 0.12. When the mean and the standard deviation of vitamin K-dependent protein levels were reduced by 80%, corresponding to the usual warfarin treatment intensity, the simulated INR was 2.98 ± 0.48, a clinically relevant dispersion, corresponding to a reduction of the canalization effect. We then combined the stochastic Hockin-Mann model with our previously published model of population response to warfarin, which takes into account the genetic and phenotypic variability of warfarin pharmacokinetics and pharmacodynamics. We used the combined model to evaluate the effect of coagulation protein variability on the variability of the warfarin dose required to reach an INR target of 2.5. The dose variance was 30% lower when the coagulation protein variability was removed. The dose was mostly related to the pretreatment levels of factors VII and X and the tissue factor pathway inhibitor (TFPI).

    Conclusions: It may be worth exploring in experimental studies whether the pretreatment levels of coagulation proteins, in particular VII, X and TFPI, are predictors of the individual warfarin dose, even though, perhaps due to a canalization-type effect, their effect on the INR variance in untreated subjects appears low.
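    The canalization effect can be sketched with a crude Monte Carlo surrogate. This is not the Hockin-Mann ODE system: factor levels are drawn with a given coefficient of variation, combined into a toy activity proxy, and converted to an INR with the standard formula INR = (PT / MNPT)**ISI; the surrogate PT model, MNPT, and ISI values are assumptions for illustration.

```python
import random

rng = random.Random(42)
MNPT, ISI = 12.0, 1.0   # mean normal PT (s) and reagent sensitivity index (assumed)

def simulated_pt(factor_scale):
    """Toy surrogate: PT inversely related to effective factor activity."""
    return MNPT / factor_scale

def population_inr(mean_level=1.0, cv=0.33, n=5000):
    """Mean and SD of simulated INR across a population with variable factor levels."""
    inrs = []
    for _ in range(n):
        # Geometric mean of several factor levels as a crude activity proxy.
        levels = [max(1e-3, rng.gauss(mean_level, cv * mean_level)) for _ in range(4)]
        activity = (levels[0] * levels[1] * levels[2] * levels[3]) ** 0.25
        inrs.append((simulated_pt(activity) / MNPT) ** ISI)
    mu = sum(inrs) / n
    sd = (sum((x - mu) ** 2 for x in inrs) / n) ** 0.5
    return mu, sd

untreated = population_inr(mean_level=1.0)   # canalized: INR stays near 1
treated   = population_inr(mean_level=0.2)   # 80% reduction, as in the study
```

    Even in this caricature, the same 33% coefficient of variation produces a tight INR distribution in untreated subjects and a much wider one once factor levels are suppressed.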

    Sensitivity and specificity of blood-fluid levels for oral anticoagulant-associated intracerebral haemorrhage

    Intracerebral haemorrhage (ICH) is a life-threatening emergency, the incidence of which has increased in part due to an increase in the use of oral anticoagulants. A blood-fluid level within the haematoma, as revealed by computed tomography (CT), has been suggested as a marker for oral anticoagulant-associated ICH (OAC-ICH), but the diagnostic specificity and prognostic value of this finding remain unclear. In 855 patients with CT-confirmed acute ICH scanned within 48 h of symptom onset, we investigated the sensitivity and specificity of the presence of a CT-defined blood-fluid level (rated blinded to anticoagulant status) for identifying concomitant anticoagulant use. We also investigated the association of the presence of a blood-fluid level with six-month case fatality. Eighteen patients (2.1%) had a blood-fluid level identified on CT; of those, 15 (83.3%) were taking anticoagulants. The specificity of a blood-fluid level for OAC-ICH was 99.4%; the sensitivity was 4.2%. We could not detect an association between the presence of a blood-fluid level and an increased risk of death at six months (OR = 1.21, 95% CI 0.28–3.88, p = 0.769). The presence of a blood-fluid level should alert clinicians to the possibility of OAC-ICH, but the absence of one is not useful in excluding OAC-ICH.
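    The reported sensitivity and specificity follow from a 2x2 table. The counts not stated explicitly above (the number of anticoagulated patients, here ~357) are back-calculated from the reported percentages and are therefore approximate.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from 2x2 diagnostic-table counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

tp = 15             # fluid level present, on anticoagulants
fp = 3              # fluid level present, not anticoagulated
fn = 357 - tp       # anticoagulated without a fluid level (357 inferred)
tn = 855 - 357 - fp # remaining patients: no fluid level, no anticoagulant

sens, spec = sens_spec(tp, fn, fp, tn)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# → sensitivity=4.2%, specificity=99.4%
```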

    Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems

    A generic mechanism - networked buffering - is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one functional role within the system, and 2) agents are degenerate, i.e. there is partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings in which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects enhance the robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness in many biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.

    Most Networks in Wagner's Model Are Cycling

    In this paper we study a model of gene networks introduced by Andreas Wagner in the 1990s that has been used extensively to study the evolution of mutational robustness. We investigate a range of model features and parameters and evaluate the extent to which they influence the probability that a random gene network will produce a fixed-point steady-state expression pattern. There are many different types of models used in the literature (discrete/continuous, sparse/dense, small/large networks), and we attempt to put some order into this diversity, motivated by the fact that many properties are qualitatively the same across all the models. Our main result is that, in all models, random networks give rise to cyclic behavior more often than to fixed points. Although periodic orbits seem to dominate network dynamics, they are usually considered unstable and are not allowed to survive in previous evolutionary studies. Defining stability as the probability of fixed points, we show that the stability distribution of these networks is highly robust to changes in its parameters. We also find sparser networks to be more stable, which may help explain why they seem to be favored by evolution. We have unified several disconnected previous studies of this class of models under the framework of stability, in a way that had not been systematically explored before.
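    The core experiment can be sketched in a few lines. This is an assumed minimal variant of the discrete Wagner model (dense Gaussian interaction matrix, parallel update, rather than the paper's full range of model choices): iterate s(t+1) = sign(W s(t)) from a random initial state and classify the attractor as a fixed point or a cycle.

```python
import random

def classify(W, s, max_steps=300):
    """Return 'fixed' or 'cycle' for the deterministic trajectory starting at s."""
    seen = {}
    n = len(s)
    for t in range(max_steps):
        key = tuple(s)
        if key in seen:
            # Period 1 means the state maps to itself: a fixed point.
            return 'fixed' if t - seen[key] == 1 else 'cycle'
        seen[key] = t
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return 'cycle'  # did not revisit within the horizon

rng = random.Random(1)
N, trials = 8, 200  # with N=8 there are only 2**8 states, so revisits are guaranteed
outcomes = []
for _ in range(trials):
    W = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(N)]
    s0 = [rng.choice([-1, 1]) for _ in range(N)]
    outcomes.append(classify(W, s0))
frac_fixed = outcomes.count('fixed') / trials
```

    Sweeping frac_fixed over network size, connectivity, and discretization is essentially the paper's stability analysis.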

    Can we apply the Mendelian randomization methodology without considering epigenetic effects?

    Introduction: Instrumental variable (IV) methods have been used in econometrics for several decades, but have only recently been introduced into epidemiologic research frameworks. Similarly, Mendelian randomization studies, which use the IV methodology for analysis and inference in epidemiology, were introduced into the epidemiologist's toolbox only in the last decade.

    Analysis: Mendelian randomization studies using instrumental variables have the potential to avoid some of the limitations of observational epidemiology (confounding, reverse causality, regression dilution bias) when making causal inferences. Certain limitations of randomized controlled trials (RCTs), such as problems with generalizability, feasibility and ethics for some exposures, and high costs, also make the use of Mendelian randomization in observational studies attractive. Unlike conventional RCTs, Mendelian randomization studies can be conducted in a representative sample without imposing exclusion criteria or requiring volunteers to be amenable to random treatment allocation. Within the last decade, epigenetics has gained recognition as an independent field of study and appears to be a major direction for future research into the genetics of complex diseases. Although previous articles have addressed some of the limitations of Mendelian randomization (such as the lack of suitable genetic variants, unreliable associations, population stratification, linkage disequilibrium (LD), pleiotropy, developmental canalization, the need for large sample sizes, and some potential problems with binary outcomes), none has directly characterized the impact of epigenetics on Mendelian randomization. The possibility of epigenetic effects (non-Mendelian, heritable changes in gene expression not accompanied by alterations in DNA sequence) could alter the core instrumental variable assumptions of Mendelian randomization. This paper applies conceptual considerations, algebraic derivations and data simulations to question the appropriateness of Mendelian randomization methods when epigenetic modifications are present.

    Conclusion: Given an inheritance of gene expression from parents, Mendelian randomization studies need to assume not only a random distribution of alleles in the offspring but also a random distribution of epigenetic changes (e.g. in gene expression) at conception, in order for the core assumptions of the methodology to remain valid. As an increasing number of epidemiologists employ Mendelian randomization methods in their research, caution is needed in drawing conclusions from these studies if these assumptions are not met.
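    The IV logic that Mendelian randomization borrows can be illustrated with a short simulation (a generic sketch, not this paper's derivations): a genetic variant Z instruments the exposure X, an unobserved confounder U biases the naive regression of outcome Y on X, and the Wald ratio cov(Z,Y)/cov(Z,X) recovers the causal effect, here set to 0.5.

```python
import random

rng = random.Random(7)
n, beta_causal = 100_000, 0.5

Z = [rng.choice([0, 1, 2]) for _ in range(n)]   # genotype (allele count)
U = [rng.gauss(0, 1) for _ in range(n)]         # unmeasured confounder
X = [0.4 * z + u + rng.gauss(0, 1) for z, u in zip(Z, U)]
Y = [beta_causal * x + u + rng.gauss(0, 1) for x, u in zip(X, U)]

def cov(a, b):
    """Population covariance of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

beta_naive = cov(X, Y) / cov(X, X)   # confounded: overestimates the effect
beta_iv = cov(Z, Y) / cov(Z, X)      # Wald ratio: close to beta_causal
```

    The epigenetic objection raised above amounts to U (or part of it) being correlated with Z at conception, which breaks the independence assumption this estimator relies on.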

    Proportionality between variances in gene expression induced by noise and mutation: consequence of evolutionary robustness

    Background: Characterization of the robustness and plasticity of phenotypes is a basic issue in evolutionary and developmental biology. Robustness and plasticity concern the changeability of a biological system in response to external perturbations. The perturbations are either genetic, i.e., due to mutations in genes in the population, or epigenetic, i.e., due to noise during development or environmental variation. Thus, the variances of phenotypes due to genetic and epigenetic perturbations provide quantitative measures of such changeability during evolution and development, respectively.

    Results: Using numerical models simulating the evolutionary changes in a gene regulation network required to achieve a particular expression pattern, we first confirmed that gene expression dynamics robust to mutation evolve in the presence of a sufficient level of transcriptional noise. Under such conditions, the two types of variance in gene expression levels, i.e., those due to mutations to the gene regulation network and those due to noise in gene expression dynamics, were found to be proportional across a number of genes. The fraction of such genes with a common proportionality coefficient increased with the robustness of the evolved network. This proportionality was confirmed generally, also in the presence of environmental fluctuations and of sexual recombination in diploids, and was explained by an evolutionary robustness hypothesis, in which an evolved robust system suppresses the so-called error catastrophe - the destabilization of the single-peaked distribution of gene expression levels. Experimental evidence for the proportionality of the variances over genes is also discussed.

    Conclusions: The proportionality between the genetic and epigenetic variances of phenotypes implies a correlation between robustness (or plasticity) against genetic changes and against noise in development, and also suggests that phenotypic traits that are more variable epigenetically have a higher evolutionary potential.
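    Testing the proportionality claim reduces to fitting a common coefficient across genes. The per-gene variances below are toy numbers, not data from the paper; the fit is a least-squares slope through the origin, c = sum(x*y) / sum(x*x).

```python
# Per-gene variance under expression noise (epigenetic) and under mutation
# (genetic) -- illustrative values only.
v_noise = [0.010, 0.020, 0.040, 0.080, 0.160]
v_mut   = [0.005, 0.011, 0.019, 0.042, 0.078]

# Least-squares proportionality coefficient through the origin.
c = sum(x * y for x, y in zip(v_noise, v_mut)) / sum(x * x for x in v_noise)
residuals = [y - c * x for x, y in zip(v_noise, v_mut)]
```

    Small residuals relative to the fitted values indicate that a single coefficient describes all genes, which is the signature of proportionality the paper looks for.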

    Effects of Ploidy and Recombination on Evolution of Robustness in a Model of the Segment Polarity Network

    Many genetic networks are astonishingly robust to quantitative variation, allowing these networks to continue functioning in the face of mutation and environmental perturbation. However, the evolution of such robustness remains poorly understood for real genetic networks. Here we explore whether and how ploidy and recombination affect the evolution of robustness in a detailed computational model of the segment polarity network. We introduce a novel computational method that predicts the quantitative values of biochemical parameters from bit sequences representing genotype, allowing our model to bridge genotype to phenotype. Using this, we simulate 2,000 generations of evolution in a population of individuals under stabilizing and truncation selection, selecting for individuals that could sharpen the initial pattern of engrailed and wingless expression. Robustness was measured by simulating a mutation in the network and measuring the effect on the engrailed and wingless patterns; higher robustness corresponded to insensitivity of this pattern to perturbation. We compared robustness in diploid and haploid populations, with either asexual or sexual reproduction. In all cases, robustness increased, and the greatest increase was in diploid sexual populations; diploidy and sex synergized to evolve greater robustness than either acting alone. Diploidy conferred increased robustness by allowing most deleterious mutations to be rescued by a working allele. Sex (recombination) conferred a robustness advantage through "survival of the compatible": those alleles that can work with a wide variety of genetically diverse partners persist, and this selects for robust alleles.
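    The genotype-to-parameter idea described above can be sketched as follows (a hedged guess at the style of encoding; the paper's actual method may differ): each biochemical parameter is encoded by a fixed-width bit field, decoded as a binary fraction, and mapped onto a log-scaled range, so that mutation and recombination operate on bits while the dynamics see real-valued rate constants.

```python
def decode(bits, lo, hi):
    """Map a bit string to a value in [lo, hi] on a log scale."""
    frac = int(bits, 2) / (2 ** len(bits) - 1)
    return lo * (hi / lo) ** frac

# A hypothetical genotype of three 4-bit fields, each encoding one parameter.
genotype = "101000011111"
fields = [genotype[i:i + 4] for i in range(0, len(genotype), 4)]
params = [decode(b, lo=1e-3, hi=1e+1) for b in fields]  # e.g. rate constants
```

    A point mutation flips one bit of the genotype string, and recombination swaps substrings between two genotypes; both then propagate to the decoded parameters automatically.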