Towards quantitative prediction of proteasomal digestion patterns of proteins
We discuss the problem of proteasomal degradation of proteins. Though
proteasomes are important for all aspects of cellular metabolism, some
details of the physical mechanism of the process remain unknown. We introduce a
stochastic model of proteasomal protein degradation which accounts, from first
principles, for protein translocation and the topology of the positioning of a
proteasome's cleavage centers. For this model we develop a mathematical
description based on a master equation, together with techniques for
reconstructing, from mass spectrometry data on digestion patterns, the cleavage
specificity inherent to proteins and the proteasomal translocation rates, which
are a property of the proteasome species. With these properties
determined, one can quantitatively predict digestion patterns for new
experimental set-ups. Additionally, we design an experimental set-up for a
synthetic polypeptide with a periodic sequence of amino acids, which enables
especially reliable determination of translocation rates.
Comment: 14 pages, 4 figures, submitted to J. Stat. Mech. (special issue for
proceedings of the 5th Intl. Conf. on Unsolved Problems on Noise and Fluctuations
in Physics, Biology & High Technology, Lyon, France, June 2-6, 2008).
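The competition between translocation and cleavage described in this abstract can be sketched as a toy stochastic simulation (all probabilities and the single-cleavage-center simplification are hypothetical, not the paper's actual master-equation model):

```python
import random

def digest(length, p_cleave, rng):
    """Toy digestion: as each peptide bond of a substrate of `length`
    residues is translocated past a single cleavage center, it is cut
    with probability p_cleave (a stand-in for the rate ratio
    k_cleave / (k_cleave + k_translocate)).  Returns fragment lengths."""
    fragments, start = [], 0
    for bond in range(1, length):        # bonds between adjacent residues
        if rng.random() < p_cleave:
            fragments.append(bond - start)
            start = bond
    fragments.append(length - start)     # C-terminal remainder
    return fragments

print(digest(30, 0.2, random.Random(0)))
```

Repeating such runs over many substrate copies yields a simulated digestion pattern whose fragment statistics could, in principle, be compared with mass-spectrometry data.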
Including metabolite concentrations into flux balance analysis: thermodynamic realizability as a constraint on flux distributions in metabolic networks
Background: In recent years, constrained optimization – usually referred to as flux balance analysis (FBA) – has become a widely applied method for computing stationary fluxes in large-scale metabolic networks. The striking advantage of FBA over kinetic modeling is that it essentially requires only knowledge of the stoichiometry of the network. On the other hand, FBA results are to a large degree hypothetical because the method relies on plausible but hardly provable optimality principles thought to govern metabolic flux distributions.
Results: To improve the reliability of FBA-based flux calculations we propose an additional side constraint which assures thermodynamic realizability, i.e. that the flux directions are consistent with the corresponding changes in Gibbs free energy. The latter depend on metabolite levels, for which plausible ranges can be inferred from experimental data. Computationally, our method results in a mixed-integer linear optimization problem with a quadratic scoring function. An optimal flux distribution together with a metabolite profile is determined which assures thermodynamic realizability with minimal deviations of metabolite levels from their expected values. We applied our approach to two exemplary metabolic networks of different complexity: the metabolic core network of erythrocytes (30 reactions) and the metabolic network iJR904 of Escherichia coli (931 reactions). Our calculations show that increasing network complexity entails increasing sensitivity of the predicted flux distributions to variations in standard Gibbs free energy changes and metabolite concentration ranges. We demonstrate the usefulness of our method for assessing critical concentrations of external metabolites that prevent attainment of a metabolic steady state.
Conclusion: Our method incorporates the thermodynamic link between flux directions and metabolite concentrations into a practical computational algorithm. The weakness of conventional FBA of relying on intuitive assumptions about the reversibility of biochemical reactions is thereby overcome. This enables the computation of reliable flux distributions even under extreme conditions of the network (e.g. enzyme inhibition, depletion of substrates, or accumulation of end products) where metabolite concentrations may be drastically altered.
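The core thermodynamic constraint discussed above – that a nonzero flux must run in the direction of negative Gibbs free energy change – can be illustrated with a minimal sketch (the ΔG°' value and concentrations below are made-up placeholders, not data from the paper):

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol K)

def reaction_dg(dg0, products, substrates, T=298.15):
    """ΔG = ΔG°' + RT ln( Π[products] / Π[substrates] ).
    Concentrations are treated as dimensionless activities."""
    q = math.prod(products) / math.prod(substrates)
    return dg0 + R * T * math.log(q)

def flux_direction_feasible(flux, dg):
    """Thermodynamic realizability for one reaction: a nonzero flux
    must be 'downhill', i.e. sign(flux) == -sign(dg)."""
    return flux == 0 or flux * dg < 0

# hypothetical reaction with assumed concentrations (relative units)
dg = reaction_dg(-16.7, products=[0.08, 1.0], substrates=[5.0, 3.0])
print(round(dg, 2), flux_direction_feasible(+1.0, dg))
```

In the paper's formulation this sign coupling is enforced network-wide via binary direction variables inside a mixed-integer linear program; the sketch only shows the per-reaction feasibility check.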
Partial inhibition and bilevel optimization in flux balance analysis
Motivation: Within flux balance analysis, the investigation of complex subtasks, such as finding the optimal perturbation of the network or an optimal combination of drugs, often requires setting up a bilevel optimization problem. To keep these nested optimization problems linear and convex, an ON/OFF description of the effect of the perturbation (i.e. a Boolean variable) is normally used. This restriction may not be realistic when one wants, for instance, to describe the partial inhibition of a reaction induced by a drug.
Results: In this paper we present a formulation of the bilevel optimization which overcomes the oversimplified ON/OFF modeling while preserving the linear nature of the problem. A case study is considered: the search for the best multi-drug treatment which modulates an objective reaction while perturbing the whole network minimally. The drug inhibition is described and modulated through a convex combination of a fixed number of Boolean variables. The results obtained from applying the algorithm to the core metabolism of E. coli highlight the possibility of finding a broader spectrum of drug combinations compared to simple ON/OFF modeling.
Conclusions: The method we have presented can treat partial inhibition inside a bilevel optimization without losing the linearity property, and with reasonable computational performance even on large metabolic networks. The finer-grained representation of the perturbation enlarges the repertoire of synergistic drug combinations for tasks such as selective perturbation of cellular metabolism. This may encourage the use of the approach for other cases in which more realistic modeling is required. © 2013 Facchetti and Altafini; licensee BioMed Central Ltd.
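The idea of discretizing partial inhibition onto a grid of levels (each level selected by a Boolean indicator, so the problem stays linear) can be illustrated with a brute-force toy version of the outer search; the two-drug "cell response" below is a made-up stand-in for the inner FBA problem, not the paper's model:

```python
from itertools import product

LEVELS = [k / 4 for k in range(5)]  # discretized inhibition grid {0, .25, .5, .75, 1}

def best_treatment(target_reduction):
    """Exhaustive toy bilevel search: per drug, the convex combination
    of Boolean indicators collapses to picking one grid level.  Find
    the inhibition pair (ga, gb) achieving the target reduction of a
    toy objective flux with minimal total perturbation ga + gb."""
    v_max = 10.0
    best = None
    for ga, gb in product(LEVELS, repeat=2):
        flux = v_max * (1 - ga) * (1 - gb)   # toy inner response
        reduction = 1 - flux / v_max
        if reduction >= target_reduction:
            perturbation = ga + gb           # outer objective
            if best is None or perturbation < best[0]:
                best = (perturbation, ga, gb)
    return best

print(best_treatment(0.5))
```

The actual method avoids this enumeration: the grid selection is encoded with Boolean variables inside a single mixed-integer linear program, which scales to genome-size networks.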
Metabolic adaptation of two in silico mutants of Mycobacterium tuberculosis during infection
ABSTRACT: Background: To date, Mycobacterium tuberculosis (Mtb) remains the deadliest intracellular pathogen. To
establish infection inside the granuloma, Mtb reprograms its metabolism to support both growth and survival,
keeping a balance between catabolism, anabolism and energy supply. Mtb genes that remain
essential across a wide range of nutritional conditions are deemed target candidates for tuberculosis (TB) treatment.
Constraint-based genome-scale modeling is considered a promising tool for evaluating genetic and nutritional perturbations of Mtb metabolic reprogramming. Nonetheless, few in silico assessments of the effect of nutritional conditions on Mtb's vulnerability and metabolic adaptation have been carried out.
Results: A genome-scale model (GEM) of Mtb, modified from the H37Rv iOSDD890, was used to explore the
metabolic reprogramming of two Mtb knockout mutants (pfkA- and icl-mutants), lacking key enzymes of central
carbon metabolism, while exposed to changing nutritional conditions (oxygen, and carbon and nitrogen sources).
A combination of shadow pricing, sensitivity analysis, and flux distribution patterns allowed us to identify
metabolic behaviors that agree with phenotypes reported in the literature. During hypoxia, at high
glucose consumption, the Mtb pfkA-mutant showed a detrimental growth effect derived from the accumulation of toxic sugar phosphate intermediates (glucose-6-phosphate and fructose-6-phosphate), along with an increase in carbon flux toward the reductive direction of the tricarboxylic acid (TCA) cycle. Furthermore, metabolic reprogramming of the icl-mutant (icl1 & icl2) showed the importance of the methylmalonyl pathway for the detoxification of propionyl-CoA during growth at high fatty acid consumption rates under aerobic conditions. At elevated levels of fatty acid uptake and hypoxia, we found a drop in TCA cycle intermediate accumulation that might create redox imbalance. Finally, findings regarding Mtb-mutant metabolic adaptation associated with
asparagine consumption and acetate, succinate and alanine production, were in agreement with literature reports.
Conclusions: This study demonstrates the potential of genome-scale modeling, flux balance analysis (FBA), phenotypic phase plane (PhPP) analysis and shadow pricing to generate valuable insights into Mtb metabolic reprogramming in the context of human granulomas.
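Shadow pricing, one of the analyses used above, has a simple interpretation: the marginal change of the optimal objective per unit of extra nutrient availability. A minimal sketch with a made-up growth function standing in for the FBA optimum (all coefficients hypothetical):

```python
def growth(glucose_uptake, oxygen_uptake):
    """Toy FBA-like optimum: biomass is limited either by carbon
    supply or by respiration capacity (coefficients are made up)."""
    return min(0.1 * glucose_uptake, 0.05 * oxygen_uptake)

def shadow_price(f, x, i, eps=1e-6):
    """Shadow price of nutrient i at operating point x, estimated by
    a one-sided finite difference on the optimal objective."""
    x_up = list(x)
    x_up[i] += eps
    return (f(*x_up) - f(*x)) / eps

x = (10.0, 30.0)  # glucose-limited regime: 0.1*10 = 1.0 < 0.05*30 = 1.5
print(shadow_price(growth, x, 0), shadow_price(growth, x, 1))
```

In the glucose-limited regime only glucose carries a nonzero shadow price; in a genome-scale model the solver reports these values as the dual variables of the uptake constraints rather than by finite differences.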
Computational Lipidology: Predicting Lipoprotein Density Profiles in Human Blood Plasma
Monitoring cholesterol levels is strongly recommended to identify patients at risk for myocardial infarction. However, clinical markers beyond "bad" and "good" cholesterol are needed to precisely predict individual lipid disorders. Our work contributes to this aim by bringing together experiment and theory. We developed a novel computer-based model of human plasma lipoprotein metabolism in order to simulate blood lipid levels at high resolution. Instead of focusing on a few conventionally used predefined lipoprotein density classes (LDL, HDL), we consider the entire protein and lipid composition spectrum of individual lipoprotein complexes. Subsequently, their distribution over density (which equals the lipoprotein profile) is calculated. As our main results, we (i) successfully reproduced clinically measured lipoprotein profiles of healthy subjects; (ii) assigned lipoproteins to narrow density classes, named high-resolution density sub-fractions (hrDS), revealing heterogeneous lipoprotein distributions within the major lipoprotein classes; and (iii) present model-based predictions of changes in the lipoprotein distribution elicited by disorders in underlying molecular processes. In its present state, the model offers a platform for many future applications aimed at understanding the reasons for inter-individual variability, identifying new sub-fractions of potential clinical relevance, and a patient-oriented diagnosis of the potential molecular causes of individual dyslipidemia.
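The mapping from particle composition to density that underlies such a profile can be sketched as mass over volume, with each component contributing volume according to its own density (the component densities and example compositions below are rough illustrative values, not the model's parameters):

```python
# assumed component densities in g/mL (ballpark literature values)
DENSITY = {"protein": 1.35, "phospholipid": 1.03,
           "cholesterol": 1.07, "triglyceride": 0.92}

def particle_density(mass_fractions):
    """Density of one lipoprotein particle: total mass divided by
    total volume, where volume = sum(mass_i / density_i)."""
    volume = sum(m / DENSITY[c] for c, m in mass_fractions.items())
    return sum(mass_fractions.values()) / volume

# hypothetical compositions: protein-rich (HDL-like) vs lipid-rich (LDL-like)
hdl_like = {"protein": 0.50, "phospholipid": 0.25,
            "cholesterol": 0.20, "triglyceride": 0.05}
ldl_like = {"protein": 0.22, "phospholipid": 0.22,
            "cholesterol": 0.46, "triglyceride": 0.10}
print(round(particle_density(hdl_like), 3), round(particle_density(ldl_like), 3))
```

Binning many such particles by computed density would produce a density profile; the protein-rich particle correctly comes out denser, matching the ordering of HDL above LDL.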
A kinetic model of vertebrate 20S proteasome accounting for the generation of major proteolytic fragments from oligomeric peptide substrates.
There is now convincing evidence that the proteasome contributes to the generation of most of the peptides presented by major histocompatibility complex class I molecules. Here we present a model-based kinetic analysis of fragment patterns generated by the 20S proteasome from oligomeric substrates 20 to 40 residues long. The model consists of ordinary first-order differential equations describing the time evolution of the average probabilities with which fragments can be generated from a given initial substrate. First-order rate laws are used to describe the cleavage of peptide bonds and the release of peptides from the interior of the proteasome to the external space. Numerical estimates for the 27 unknown model parameters are determined across a set of five different proteins with known cleavage patterns. Testing the validity of the model by a jackknife procedure, about 80% of the observed fragments can be correctly identified, whereas the abundance of false-positive classifications is below 10%. From our theoretical approach, it is inferred that double-cleavage fragments of length 7-13 are predominantly cut out in "C-N order", in that the C-terminus is generated first. This is due to striking differences in the further processing of the two fragments generated by the first cleavage. The upstream fragment exhibits a pronounced tendency to escape the second cleavage, as indicated by a large release rate and a monotone exponential decline of peptide-bond accessibility with increasing distance from the first scissile bond. In contrast, the release rate of the downstream fragment is about four orders of magnitude lower, and the accessibility of its peptide bonds shows a sharp peak at a distance of about nine residues from the first scissile bond.
This finding strongly supports the idea that generation of fragments with well-defined lengths is favored, in that temporary immobilization of the downstream fragment after the first cleavage renders it susceptible to a second cleavage.
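The release-versus-recleavage competition described above is a pair of first-order rate laws, which can be sketched by forward-Euler integration (the rate constants are invented for illustration; only their ratio, including the four-orders-of-magnitude gap in release rates, mirrors the abstract's claim):

```python
def integrate(k_cleave2, k_release, s0=1.0, dt=1e-3, t_end=50.0):
    """Forward-Euler integration of first-order kinetics for one
    fragment held inside the proteasome after the first cleavage:
    it is either released intact (k_release) or cut a second time
    (k_cleave2).  Returns (released_intact, doubly_cleaved)."""
    inside, released, recut = s0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        d_rel = k_release * inside * dt
        d_cut = k_cleave2 * inside * dt
        inside -= d_rel + d_cut
        released += d_rel
        recut += d_cut
        t += dt
    return released, recut

# hypothetical rates: upstream fragment released quickly, downstream
# fragment immobilized (release ~4 orders of magnitude slower)
up_rel, up_cut = integrate(k_cleave2=0.5, k_release=5.0)
dn_rel, dn_cut = integrate(k_cleave2=0.5, k_release=5.0e-4)
print(round(up_rel, 3), round(dn_cut, 3))
```

With these assumed rates the upstream fragment mostly escapes intact while the downstream fragment is almost always cleaved a second time, reproducing qualitatively the C-N-order mechanism the abstract describes.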
- …