
    Evaluation of rate law approximations in bottom-up kinetic models of metabolism.

    Background: The mechanistic description of enzyme kinetics in a dynamic model of metabolism requires specifying the numerical values of a large number of kinetic parameters. This parameterization challenge is often addressed through simplifying approximations that yield reaction rate laws with fewer parameters. Whether such simplified models can reproduce the dynamic characteristics of the full system is an important question.

    Results: In this work, we compared the local transient response properties of dynamic models constructed using rate laws with varying levels of approximation. These approximate rate laws were: 1) a Michaelis-Menten rate law with measured enzyme parameters, 2) a Michaelis-Menten rate law with approximated parameters, using the convenience kinetics convention, 3) a thermodynamic rate law resulting from a metabolite saturation assumption, and 4) a pure chemical reaction mass action rate law that removes the role of the enzyme from the reaction kinetics. We used in vivo data for the human red blood cell to compare the effects of these rate law choices against the backdrop of physiological flux and concentration differences. We found that the Michaelis-Menten rate law with measured enzyme parameters yields an excellent approximation of the full system dynamics, while the other assumptions cause greater discrepancies in system dynamic behavior. However, iteratively replacing mechanistic rate laws with approximations resulted in a model that retains a high correlation with the true model behavior. Investigating this consistency, we determined that the order-of-magnitude differences among fluxes and concentrations in the network strongly influenced the network dynamics. We further identified reaction features, such as thermodynamic reversibility, high substrate concentration, and lack of allosteric regulation, that make certain reactions more suitable for rate law approximation.

    Conclusions: Overall, our work generally supports the use of approximate rate laws when building large-scale kinetic models, due to the key role that physiologically meaningful flux and concentration ranges play in determining network dynamics. However, we also showed that detailed mechanistic models provide a clear benefit in prediction accuracy when data are available. The work here should help guide future kinetic modeling efforts in the choice of rate law and parameterization approach.
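The contrast between the most and least detailed rate laws above can be sketched numerically. The following minimal illustration (not the authors' model; all parameter values are arbitrary) compares a Michaelis-Menten rate law against a pure mass-action law for a single reaction:

```python
import numpy as np

def michaelis_menten(s, vmax=1.0, km=0.5):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

def mass_action(s, k=2.0):
    """Pure mass-action rate law v = k * S (the enzyme's role removed).
    Here k = Vmax / Km so the two laws agree in the low-substrate limit."""
    return k * s

# Compare the two approximations across a range of substrate concentrations.
s = np.linspace(0.01, 5.0, 50)
v_mm = michaelis_menten(s)
v_ma = mass_action(s)

# At low S both laws are nearly linear and agree; at high S the MM law
# saturates at Vmax while mass action grows without bound.
```

This is the intuition behind the abstract's conclusion: reactions operating far below saturation tolerate simpler rate laws, while saturated or allosterically regulated reactions do not.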

    Regulatory perturbations of ribosome allocation in bacteria reshape the growth proteome with a trade-off in adaptation capacity

    Bacteria regulate their cellular resource allocation to enable fast growth and adaptation to a variety of environmental niches. We studied the ribosomal allocation, growth, and expression profiles of two sets of fast-growing mutants of Escherichia coli K-12 MG1655. Mutants with only three of the seven copies of the ribosomal RNA operons grew faster than the wild-type strain in minimal media and showed a phenotype similar to that of previously studied fast-growing rpoB mutants. Comparing these two different regulatory perturbations (rRNA promoters or rpoB mutations), we show how they reshape the proteome for growth with a concomitant fitness cost. The fast-growing mutants shared downregulation of hedging functions and upregulation of growth functions. They showed longer diauxic shifts and reduced activity of gluconeogenic promoters during glucose-acetate shifts, suggesting reduced availability of RNA polymerase for expressing the hedging proteome. These results show that the regulation of ribosomal allocation underlies the growth/hedging phenotypes obtained from laboratory evolution experiments.

    Filling Kinetic Gaps: Dynamic Modeling of Metabolism Where Detailed Kinetic Information Is Lacking

    Integrative analysis between dynamical modeling of metabolic networks and data obtained from high-throughput technologies represents a worthy effort toward a holistic understanding of the link between phenotype and dynamic response. Although the theoretical foundations of metabolic network modeling have been extensively treated elsewhere, the lack of kinetic information has limited the analysis in most cases. To overcome this constraint, we present and illustrate a new statistical approach with two purposes: to integrate high-throughput data and to survey the general dynamical mechanisms that emerge in a slightly perturbed metabolic network. This paper presents a statistical framework for studying how, and how fast, the metabolites in a perturbed metabolic network reach a steady state. Instead of requiring accurate kinetic information, the approach uses high-throughput metabolome data to define a feasible kinetic library, which forms the basis for identifying statistical and dynamical properties during relaxation. For the sake of illustration, we applied this approach to human red blood cell (hRBC) metabolism and evaluated its capacity to predict temporal phenomena. Remarkably, the main dynamical properties obtained from a detailed kinetic model of the hRBC were recovered by our statistical approach. Furthermore, robust properties in time scales and metabolite organization were identified, and we concluded that they are a consequence of the combined effects of redundancy and variability in metabolite participation. In summary, we present an approach that integrates high-throughput metabolome data to define the dynamic behavior of a slightly perturbed metabolic network where kinetic information is lacking. Requiring only metabolite concentrations at steady state, the method is potentially applicable to other genome-scale metabolic reconstructions. We therefore expect this approach to contribute significantly to exploring the relationship between dynamics and physiology in other metabolic reconstructions, particularly those for which kinetic information is practically absent. For instance, we envisage that this approach could be useful in genomic medicine or pharmacogenomics, where the estimation of time scales and the identification of metabolite organization may be crucial to characterize and identify (dys)functional states.
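The core idea — sample a kinetic library, then read off relaxation timescales — can be illustrated on a toy pathway. The sketch below (hypothetical; not the paper's hRBC model) linearizes a two-step pathway and extracts timescales from the Jacobian eigenvalues over sampled rate constants:

```python
import numpy as np

rng = np.random.default_rng(0)

def relaxation_timescales(k1, k2):
    """Timescales of a toy linear pathway (in) -> A -> B -> (out):
    dA/dt = f - k1*A, dB/dt = k1*A - k2*B.
    The Jacobian is [[-k1, 0], [k1, -k2]]; each timescale is -1/Re(lambda)."""
    jac = np.array([[-k1, 0.0], [k1, -k2]])
    eig = np.linalg.eigvals(jac)
    return np.sort(-1.0 / eig.real)   # ascending: fastest mode first

# Sample a "kinetic library": log-uniform rate constants, standing in for
# the feasibility constraints the paper derives from metabolome data.
samples = [relaxation_timescales(*10 ** rng.uniform(-1, 1, size=2))
           for _ in range(1000)]
fast, slow = np.array(samples).T

# The distributions of fast/slow timescales summarize how quickly the
# perturbed network relaxes across the sampled kinetic library.
```

For larger networks the same recipe applies: build the Jacobian at the measured steady state for each sampled parameter set and collect the eigenvalue-derived timescales.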

    Signatures of arithmetic simplicity in metabolic network architecture

    Metabolic networks perform some of the most fundamental functions in living cells, including energy transduction and building-block biosynthesis. While these are the best characterized networks in living systems, understanding their evolutionary history and complex wiring constitutes one of the most fascinating open questions in biology, intimately related to the enigma of life's origin itself. Is the evolution of metabolism subject to general principles, beyond the unpredictable accumulation of multiple historical accidents? Here we search for such principles by applying to an artificial chemical universe some of the methodologies developed for the study of genome-scale models of cellular metabolism. In particular, we use constraint-based metabolic flux models to exhaustively search for artificial chemistry pathways that can optimally perform an array of elementary metabolic functions. Despite the simplicity of the model employed, we find that the ensuing pathways display a surprisingly rich set of properties, including the existence of autocatalytic cycles and hierarchical modules, the appearance of universally preferable metabolites and reactions, and a logarithmic trend of pathway length as a function of input/output molecule size. Some of these properties can be derived analytically, borrowing methods previously used in cryptography. In addition, by mapping biochemical networks onto a simplified carbon-atom reaction backbone, we find that several of the properties predicted by the artificial chemistry model hold for real metabolic networks. These findings suggest that optimality principles and arithmetic simplicity might lie beneath some aspects of biochemical complexity.
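The logarithmic pathway-length trend can be reproduced in a minimal toy chemistry. The sketch below is an illustrative stand-in, not the paper's artificial chemistry: molecules are abstracted to carbon counts, and a pathway may grow a backbone by one carbon or condense it with a copy of itself, so the minimum pathway length found by breadth-first search scales roughly with log2 of the target size:

```python
from collections import deque

def min_pathway_length(target, start=1):
    """Minimum number of reactions to build a molecule of `target` carbons
    from `start` carbons in a toy chemistry with two reaction types:
    elongation (n -> n + 1) and condensation with a copy (n -> 2n)."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        size, steps = queue.popleft()
        if size == target:
            return steps
        for nxt in (size + 1, size * 2):
            if nxt <= target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None

# Pathway length grows roughly logarithmically with molecule size:
# reaching size n needs about log2(n) doublings plus a few +1 steps.
lengths = {n: min_pathway_length(n) for n in (2, 8, 64, 100)}
```

The analytic connection the abstract hints at is the same one behind binary exponentiation and addition chains in cryptography: the optimal step count is determined by the bit length and bit count of the target size.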

    Defining genes: a computational framework

    The precise elucidation of the gene concept has become the subject of intense discussion in light of results from several large, high-throughput surveys of transcriptomes and proteomes. In previous work, we proposed an approach for constructing gene concepts that combines genomic heritability with elements of function. Here, we introduce a definition of the gene within a computational framework of cellular interactions. The definition seeks to satisfy the practical requirements imposed by annotation, capture logical aspects of regulation, and encompass the evolutionary property of homology.

    Investigating the metabolic capabilities of Mycobacterium tuberculosis H37Rv using the in silico strain iNJ661 and proposing alternative drug targets

    Background: Mycobacterium tuberculosis continues to be a major pathogen in the third world, killing almost 2 million people a year by the most recent estimates. Even in industrialized countries, the emergence of multi-drug resistant (MDR) strains of tuberculosis signals the need to develop additional medications for treatment. Many of the drugs used to treat tuberculosis target metabolic enzymes. Genome-scale models can be used for analysis, discovery, and as hypothesis-generating tools to assist the rational drug development process. These models need to be able to assimilate and analyze data from large datasets.

    Results: We completed a bottom-up reconstruction of the metabolic network of Mycobacterium tuberculosis H37Rv. This functional in silico bacterium, iNJ661, contains 661 genes and 939 reactions and can produce many of the complex compounds characteristic of tuberculosis, such as mycolic acids and mycocerosates. We grew this bacterium in silico on various media, analyzed the model in the context of multiple high-throughput data sets, and finally analyzed the network in an 'unbiased' manner by calculating the Hard Coupled Reaction (HCR) sets: groups of reactions that are forced to operate in unison due to mass conservation and connectivity constraints.

    Conclusion: Although we observed growth rates comparable to experimental observations (doubling times ranging from about 12 to 24 hours) in different media, comparisons of gene essentiality with experimental data were less encouraging (generally about 55% agreement). The reasons for the often conflicting results were manifold, including gene expression variability under different conditions and incomplete biological knowledge. Some of the inconsistencies between in vitro and in silico, or in vivo and in silico, results highlight specific loci that are worth further experimental investigation. Finally, by considering the HCR sets in the context of known drug targets for tuberculosis treatment, we propose new alternative, but equivalent, drug targets.
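The idea behind HCR sets can be sketched on a toy stoichiometric matrix: two reactions are hard-coupled when every steady-state flux vector fixes their ratio, which is equivalent to their rows in a nullspace basis of S being proportional. The example below is illustrative only (in a real model, blocked reactions with all-zero nullspace rows would need to be excluded first):

```python
import numpy as np
from itertools import combinations

def hard_coupled_pairs(S, tol=1e-9):
    """Pairs of reactions forced into a fixed flux ratio at steady state
    (a toy stand-in for HCR-set computation). Reactions i and j are
    coupled iff rows i and j of a nullspace basis of S are proportional."""
    _, sv, vt = np.linalg.svd(S)
    rank = int(np.sum(sv > tol))
    N = vt[rank:].T                     # columns of N span the nullspace of S
    pairs = []
    for i, j in combinations(range(S.shape[1]), 2):
        # Rows proportional  <=>  the stacked 2-row matrix has rank <= 1.
        if np.linalg.matrix_rank(np.vstack([N[i], N[j]]), tol=tol) <= 1:
            pairs.append((i, j))
    return pairs

# Toy network over metabolites A, B with reactions
#   R0: -> A,   R1: A -> B,   R2: B -> ,   R3: A ->
# Mass balance forces v(R1) = v(R2), so R1 and R2 form a coupled set.
S = np.array([[1.0, -1.0,  0.0, -1.0],
              [0.0,  1.0, -1.0,  0.0]])
```

Transitively merging such pairs yields the reaction sets that must carry proportional flux, which is what makes them attractive joint drug targets.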

    A new computational method to split large biochemical networks into coherent subnets

    Background: Compared to more general networks, biochemical networks have some special features: while generally sparse, they contain a small number of highly connected metabolite nodes; and metabolite nodes can be divided into two classes, internal nodes with associated mass balance constraints and external nodes without. Based on these features, reclassifying selected internal nodes (separators) as external can be used to divide a large, complex metabolic network into simpler subnetworks. Selecting separators by node connectivity is common but affords little detailed control and tends to produce excessive fragmentation. The method proposed here (Netsplitter) allows the user to control separator selection. It combines local connection-degree partitioning with global connectivity derived from random walks on the network to produce a more even distribution of subnetwork sizes. Partitioning is performed progressively, and the interactive visual matrix presentation gives the user considerable control over the process, while incorporating special strategies to maintain network integrity and minimise the information loss due to partitioning.

    Results: Partitioning a genome-scale network of 1348 metabolites and 1468 reactions for Arabidopsis thaliana encapsulates 66% of the network into 10 medium-sized subnets. Applied to the flavonoid subnetwork extracted in this way, Netsplitter naturally separates it into four subnets with recognisable functionality, namely synthesis of lignin precursors, flavonoids, coumarin and benzenoids. A quantitative quality measure called efficacy is constructed and shows that the new method gives improved partitioning for several metabolic networks, including bacterial, plant and mammalian species.

    Conclusions: For the examples studied, the Netsplitter method is a considerable improvement on connection-degree partitioning, giving a better balance of subnet sizes with the removal of fewer mass balance constraints. In addition, the user can interactively control which metabolite nodes are selected for cutting, and when to stop further partitioning once the desired granularity has been reached. Finally, the blocking transformation at the heart of the procedure provides a powerful visual display of network structure that may be useful for exploration, independent of whether partitioning is required.
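The effect of reclassifying a separator metabolite can be sketched with a simple connected-components computation. The toy example below covers only the degree-based half of the idea (Netsplitter's random-walk refinement and interactive control are omitted): a highly connected currency metabolite glues two pathways into one component until it is made external:

```python
from collections import defaultdict

def components(adj, excluded=frozenset()):
    """Connected components of a metabolite graph, ignoring nodes that
    have been reclassified as external (the separators)."""
    seen, comps = set(excluded), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.add(node)
            stack.extend(n for n in adj[node] if n not in seen)
        comps.append(comp)
    return comps

# Toy network: 'atp' is a highly connected currency metabolite linking
# two otherwise independent pathways (a-b-c and x-y-z).
edges = [('a', 'b'), ('b', 'c'), ('x', 'y'), ('y', 'z'),
         ('atp', 'a'), ('atp', 'b'), ('atp', 'x'), ('atp', 'y')]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

whole = components(adj)                      # one component via 'atp'
split = components(adj, excluded={'atp'})    # two subnets after the cut
```

Cutting only at such hub metabolites is exactly the baseline Netsplitter improves on: pure degree-based selection tends to over-fragment, which is why the method adds the random-walk connectivity criterion.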

    CytoSolve: A Scalable Computational Method for Dynamic Integration of Multiple Molecular Pathway Models

    A grand challenge of computational systems biology is to create a molecular pathway model of the whole cell. Current approaches involve merging the source codes of smaller molecular pathway models to create a large monolithic model (computer program) that runs on a single computer. Such a monolithic model is difficult, if not impossible, to maintain given ongoing updates to the source codes of the smaller models. This paper describes a new system called CytoSolve that dynamically integrates computations of smaller models that can run in parallel across different machines, without the need to merge the source codes of the individual models. The approach is demonstrated on the classic Epidermal Growth Factor Receptor (EGFR) model of Kholodenko. The EGFR model is split into four smaller models, each distributed on a different machine. Results from the four smaller models are dynamically integrated to generate results identical to those of the monolithic EGFR model running on a single machine. The overhead for parallel and dynamic computation is approximately twice that of a monolithic model running on a single machine. The CytoSolve approach is scalable, since smaller models may reside on any computer worldwide, where the source code of each model can be independently maintained and updated.
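The dynamic-integration idea can be sketched as a mediator loop that sums the rate contributions of independent submodels at each timestep. The toy below is hypothetical (CytoSolve's actual mass-balance alignment and parallel communication machinery is more involved): a three-species reversible chain A <-> B <-> C is split into two submodels that never see each other's rate equations, yet are advanced jointly with explicit Euler:

```python
def submodel_ab(state, k1=1.0, k2=0.5):
    """Submodel 1 owns only A <-> B; returns its rate contributions."""
    flux = k1 * state['A'] - k2 * state['B']
    return {'A': -flux, 'B': flux}

def submodel_bc(state, k3=0.8, k4=0.3):
    """Submodel 2 owns only B <-> C."""
    flux = k3 * state['B'] - k4 * state['C']
    return {'B': -flux, 'C': flux}

def integrate(submodels, state, dt=0.001, steps=5000):
    """Mediator loop: each step, collect every submodel's contributions
    to the shared species vector, sum them, and advance (explicit Euler).
    The submodel calls are independent, so they could run in parallel."""
    state = dict(state)
    for _ in range(steps):
        total = {species: 0.0 for species in state}
        for model in submodels:
            for species, rate in model(state).items():
                total[species] += rate
        for species in state:
            state[species] += dt * total[species]
    return state

final = integrate([submodel_ab, submodel_bc], {'A': 1.0, 'B': 0.0, 'C': 0.0})
```

Because each submodel's contributions sum to zero, total mass is conserved by the mediator regardless of how the chain is partitioned, which is the property that lets the split computation reproduce the monolithic result.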