    Environmental statistics and optimal regulation

    Any organism is embedded in an environment that changes over time. The timescale and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels will all affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here applied specifically to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and the measurement apparatus, together with the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We find specifically that: (i) the relative convexity of enzyme expression cost and benefit determines whether thresholding or a graded response is fitter; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
    Comment: 21 pages, 7 figures
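The Bayesian decision rule in point (ii) can be illustrated with a minimal sketch: given a noisy measurement of a nutrient signal, the cell computes the posterior probability that the environment is nutrient-rich and expresses the enzyme only when the expected benefit of expression exceeds its cost. All parameter values below (prior, signal means, noise width, benefit, cost) are illustrative assumptions, not values from the paper.

```python
import math

def posterior_high(m, prior=0.5, mu_high=1.0, mu_low=0.0, sigma=0.5):
    """Posterior P(nutrient-rich | noisy measurement m), assuming Gaussian
    measurement noise of width sigma around each state's mean signal."""
    def g(x, mu):  # unnormalized Gaussian likelihood
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))
    num = prior * g(m, mu_high)
    return num / (num + (1 - prior) * g(m, mu_low))

def optimal_expression(m, benefit=2.0, cost=1.0, sigma=0.5):
    """Express the enzyme iff expected benefit exceeds production cost,
    i.e. a threshold on the posterior (a Bayesian decision rule)."""
    return 1 if posterior_high(m, sigma=sigma) * benefit > cost else 0
```

With these toy numbers the posterior threshold is cost/benefit = 0.5, so a measurement near the nutrient-rich mean triggers expression while one near the nutrient-poor mean does not.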

    Systems biology of energetic and atomic costs in the yeast transcriptome, proteome, and metabolome

    Proteins vary in their cost to the cell and natural selection may favour the use of proteins that are cheaper to produce. We develop a novel approach to estimate amino acid biosynthetic cost based on genome-scale metabolic models, and directly investigate the effects of biosynthetic cost on transcriptomic, proteomic and metabolomic data in _Saccharomyces cerevisiae_. We find that our systems approach to formulating biosynthetic cost produces a novel measure that explains similar levels of variation in gene expression compared with previously reported cost measures. Regardless of the measure used, the cost of amino acid synthesis is weakly associated with transcript and protein levels, independent of codon usage bias. In contrast, energetic costs explain a large proportion of variation in levels of free amino acids. In the economy of the yeast cell, there appears to be no single currency to compute the cost of amino acid synthesis, and thus a systems approach is necessary to uncover the full effects of amino acid biosynthetic cost in complex biological systems that vary with cellular and environmental conditions.
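The bookkeeping behind any per-protein cost measure is a weighted average over the protein's amino acid composition. A minimal sketch, using a handful of illustrative ATP-equivalent per-residue costs in the spirit of earlier energetic measures (the paper's contribution is to derive such costs from genome-scale metabolic models instead; the dictionary values here are assumptions for demonstration):

```python
# Illustrative per-residue energetic costs (~ATP equivalents); a real analysis
# would use costs computed from a genome-scale metabolic model.
COST = {"A": 11.7, "G": 11.7, "S": 11.2, "F": 52.0, "W": 74.3}

def protein_cost(seq):
    """Average biosynthetic cost per residue of a protein sequence."""
    return sum(COST[a] for a in seq) / len(seq)
```

Correlating such per-protein costs with expression levels is then a standard regression/correlation exercise over the proteome.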

    Mapping the landscape of metabolic goals of a cell

    Genome-scale flux balance models of metabolism provide testable predictions of all metabolic rates in an organism by assuming that the cell optimizes a metabolic goal known as the objective function. We introduce an efficient inverse flux balance analysis (invFBA) approach, based on linear programming duality, to characterize the space of possible objective functions compatible with measured fluxes. After testing our algorithm on simulated E. coli data and time-dependent S. oneidensis fluxes inferred from gene expression, we apply our inverse approach to flux measurements in long-term evolved E. coli strains, revealing objective functions that provide insight into metabolic adaptation trajectories.
    Funding: MURI W911NF-12-1-0390 - Army Research Office (US); 5R01GM089978-02, R01GM103502, and 5R01DE024468 - National Institutes of Health (US); IIS-1237022 and 1457695 - National Science Foundation (US); DE-SC0012627 - U.S. Department of Energy; HR0011-15-C-0091 - Defense Sciences Office, DARPA
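Forward flux balance analysis, the problem invFBA inverts, reduces to a linear program: maximize an assumed objective (here a biomass drain) subject to steady-state mass balance S v = 0 and flux bounds. The three-reaction toy network below is an illustrative assumption, with `scipy.optimize.linprog` standing in for a dedicated solver; invFBA instead starts from measured fluxes v and uses LP duality to characterize the objectives they could be optimal for.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry (rows: metabolites A, B; cols: uptake, A->B, biomass drain).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # nutrient uptake capped at 10

# FBA: maximize the biomass flux (linprog minimizes, hence the -1)
# subject to steady state S v = 0 and the flux bounds.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
fluxes = res.x  # all flux is routed through the single linear pathway
```

At the optimum the uptake bound is saturated, so every reaction carries a flux of 10.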

    A robust and efficient method for estimating enzyme complex abundance and metabolic flux from expression data

    A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is increased predictive capability and, in turn, improved understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model but does not provide an objective function for the estimation of fluxes, an important limitation in many modeling applications. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. Because such a method requires accurate measures of enzyme complex abundance, we first present a new algorithm for quantifying complex abundance. Our extensions to prior techniques include the capability to work with large models, significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation logic, the ability to handle very large enzyme complex rules that may incorporate multiple isoforms, and, depending on the model constraints, either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS and can be downloaded from https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available, and binaries are provided for Linux x86-64 systems. FALCON requires the COBRA Toolbox, also implemented in MATLAB.
    Comment: 30 pages, 12 figures, 4 tables
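A least-absolute-deviation objective of the kind FALCON uses can be written as a linear program by introducing auxiliary variables t >= |v - e|: minimize the total deviation of fluxes v from expression-derived abundances e while enforcing steady state. The toy network and the vector e below are illustrative assumptions, not FALCON's actual implementation:

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, 0.0],    # toy stoichiometry, metabolites A and B
              [0.0, 1.0, -1.0]])
e = np.array([8.0, 6.0, 7.0])      # hypothetical expression-derived abundances
n = len(e)
I = np.eye(n)

# Decision variables x = [v, t]; minimize sum(t) with t >= |v - e| (LAD),
# subject to steady state S v = 0.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.vstack([np.hstack([I, -I]),     #  v - t <= e
                  np.hstack([-I, -I])])   # -v - t <= -e
b_ub = np.concatenate([e, -e])
A_eq = np.hstack([S, np.zeros_like(S)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.zeros(2),
              bounds=[(None, None)] * n + [(0, None)] * n)
v = res.x[:n]
```

Steady state forces all three fluxes equal here, so the LAD fit picks the median of e (v = 7 throughout), illustrating the robustness to outliers that motivates an L1 objective.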

    Key Environmental Innovations

    This paper is based on empirical research on a taxonomy of technological environmental innovations. It draws on a databank with over 500 examples of new technologies (materials, products, processes and practices) that come with benign environmental effects. The approaches applied to interpreting the datasets are innovation life cycle analysis and product chain analysis. Main results include the following: 1. Innovations aimed merely at eco-efficiency in most cases do not represent significant contributions to improving the properties of the industrial metabolism. This can better be achieved by technologies that fulfill the criteria of eco-consistency (metabolic consistency), also called eco-effectiveness. 2. The ecological pressure of a technology is basically determined by its conceptual make-up and design. Most promising, therefore, are technologies in earlier rather than later stages of their life cycle (i.e. during R&D and customisation in growing numbers), because it is during the stages before the inflection point and maturity of a learning curve that technological environmental innovations can best contribute to improving the ecological consistency of the industrial metabolism while at the same time delivering their maximum increase in efficiency. 3. Moreover, environmental action needs to focus on early steps in the vertical manufacturing chain rather than on those at the end. Most of the ecological pressure of a technology is normally caused not end-of-chain in use or consumption, but in the more basic steps of the manufacturing chain (with the exception of products whose use consumes energy, e.g. vehicles and appliances). There are conclusions to be drawn for refocusing attention from downstream to upstream in life cycles and product chains, and for a shift of emphasis in environmental policy from regulation to innovation. Ambitious environmental standards, though, continue to be an important regulative precondition of ecologically benign technological innovation.
    Keywords: Technological innovation, Environmental innovation, Life cycle analysis, Sustainability strategies, Environmental policy

    A Substruction Approach to Assessing the Theoretical Validity of Measures

    Background: Validity is about the logic, meaningfulness, and evidence used to defend inferences made when interpreting results. Substruction is a heuristic or process that visually represents the hierarchical structure between theory and measures. Purpose: To describe substruction as a method for assessing the theoretical validity of research measures. Methods: Using Fawcett's Conceptual-Theoretical-Empirical Structure, an exemplar is presented of substruction from the Individual and Family Self-Management Theory to the Striving to be Strong study concepts and empirical measures. Results: Substruction tables display evidence supporting the theoretical validity of the instruments used in the study. Conclusion: A high degree of congruence between theory and measure is critical to support the validity of the theory and the attributions made about moderating, mediating, and causal relationships and intervention effects.

    Designing and interpreting 'multi-omic' experiments that may change our understanding of biology.

    Most biological mechanisms involve more than one type of biomolecule, and hence operate not solely at the level of the genome, transcriptome, proteome, metabolome or ionome. Datasets resulting from single-omic analysis are rapidly increasing in throughput and quality, rendering multi-omic studies feasible. These should offer a comprehensive, structured and interactive overview of a biological mechanism. However, combining single-omic datasets in a meaningful manner has so far proved challenging, and the discovery of new biological information lags behind expectations. One reason is that experiments conducted in different laboratories typically cannot be combined without restriction. Second, the interpretation of multi-omic datasets is inherently challenging, as the biological datasets are heterogeneous not only for technical but also for biological, chemical, and physical reasons. Here, multi-layer network theory and methods of artificial intelligence may help solve these problems. For the efficient application of machine learning, however, biological datasets need to become more systematic, more precise, and much larger. We conclude our review with basic guidelines for the successful set-up of a multi-omic experiment.

    Basic and applied uses of genome-scale metabolic network reconstructions of Escherichia coli.

    The genome-scale model (GEM) of metabolism in the bacterium Escherichia coli K-12 has been in development for over a decade and is now in wide use. GEM-enabled studies of E. coli have been primarily focused on six applications: (1) metabolic engineering, (2) model-driven discovery, (3) prediction of cellular phenotypes, (4) analysis of biological network properties, (5) studies of evolutionary processes, and (6) models of interspecies interactions. In this review, we provide an overview of these applications along with a critical assessment of their successes and limitations, and a perspective on likely future developments in the field. Taken together, the studies performed over the past decade have established a genome-scale mechanistic understanding of genotype–phenotype relationships in E. coli metabolism that forms the basis for similar efforts for other microbial species. Future challenges include the expansion of GEMs by integrating additional cellular processes beyond metabolism, the identification of key constraints based on emerging data types, and the development of computational methods able to handle such large-scale network models with sufficient accuracy.