
    The aggregation of cytochrome C may be linked to its flexibility during refolding

    Large-scale expression of biopharmaceutical proteins in cellular hosts results in the production of large insoluble aggregates. To generate functional product, these aggregates must be further processed by refolding with denaturant, a process that can itself result in aggregation. Using a model folding protein, cytochrome C, we show how an increase in the final denaturant concentration decreases the propensity of the protein to aggregate during refolding. Using polarised fluorescence anisotropy, we show that aggregation can be reduced by extending the period during which the protein remains flexible during refolding, mediated through the dilution ratio. This highlights the relationship between the flexibility of a protein and its propensity to aggregate. We attribute this behaviour to preferential urea-residue interactions being favoured over self-association between protein molecules.
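
    For context (not taken from the paper): in refolding by dilution, the dilution factor directly sets the residual denaturant concentration the protein refolds in. The snippet below is a minimal illustration with made-up values; the function name and convention for the dilution factor are assumptions.

        def final_denaturant_conc(stock_conc_M, dilution_factor):
            """dilution_factor = total volume after mixing / volume of denatured-protein stock."""
            return stock_conc_M / dilution_factor

        # e.g. protein solubilised in 8 M urea, diluted 10-fold -> refolds in 0.8 M urea
        print(final_denaturant_conc(8.0, 10.0))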

    Deep Learning Models on CPUs: A Methodology for Efficient Training

    GPUs have been favored for training deep learning models due to their highly parallelized architecture. As a result, most studies on training optimization focus on GPUs. There is often a trade-off, however, between cost and efficiency when choosing hardware for training. In particular, CPU servers would be attractive if training on CPUs were more efficient, as they incur lower hardware upgrade costs and make better use of existing infrastructure. This paper makes several contributions to research on training deep learning models on CPUs. First, it presents a method for optimizing the training of deep learning models on Intel CPUs and a toolkit called ProfileDNN, which we developed to improve performance profiling. Second, we describe a generic training optimization method that guides our workflow and explore several case studies in which we identified performance issues and then optimized the Intel Extension for PyTorch, resulting in an overall 2x training performance increase for the RetinaNet-ResNext50 model. Third, we show how to leverage the visualization capabilities of ProfileDNN, which enabled us to pinpoint bottlenecks and create a custom focal loss kernel that was two times faster than the official reference PyTorch implementation.
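
    As a hedged sketch of the kind of CPU-side training path the paper optimizes (the model, data, and hyperparameters below are placeholders rather than the paper's RetinaNet-ResNext50 setup), the Intel Extension for PyTorch is typically applied as follows:

        import torch
        import torchvision
        import intel_extension_for_pytorch as ipex

        model = torchvision.models.resnet50()                    # stand-in model
        criterion = torch.nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        model.train()

        # ipex.optimize applies CPU-oriented operator and memory-layout optimizations;
        # with dtype=torch.bfloat16 it also prepares the model for mixed-precision training.
        model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

        images = torch.randn(8, 3, 224, 224)                     # synthetic batch
        labels = torch.randint(0, 1000, (8,))

        optimizer.zero_grad()
        with torch.cpu.amp.autocast(dtype=torch.bfloat16):       # bfloat16 autocast on CPU
            loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()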

    Backbone and Side-Chain Contributions in Protein Denaturation by Urea

    Urea is a commonly used protein denaturant, and it is of great interest to determine its interactions with various protein groups to elucidate the molecular basis of its effect on protein stability. Using the Trp-cage miniprotein as a model system, we report what we believe to be the first computation of changes in the preferential interaction coefficient of the protein upon urea denaturation from molecular-dynamics simulations, and we examine the contributions from the backbone and the side-chain groups. The preferential interaction is obtained from reversible folding/unfolding replica-exchange molecular-dynamics simulations of Trp-cage in the presence of urea over a wide range of urea concentrations. The increase in preferential interaction upon unfolding is dominated by the side-chain contribution rather than the backbone. Similar trends are observed in simulations using two different force fields for the protein, Amber94 and Amber99sb. The magnitudes of the side-chain and backbone contributions differ between the two force fields, despite their identical protein-solvent interaction terms. The differences arise from the unfolded ensembles sampled, with Amber99sb favoring conformations with larger surface area and lower helical content. These results emphasize the importance of side-chain interactions with urea in protein denaturation and highlight the dependence of the computed driving forces on the unfolded ensemble sampled.
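
    For context, the preferential interaction coefficient is commonly estimated with a two-domain counting formula, Gamma = < n_urea(local) - (n_urea(bulk) / n_water(bulk)) * n_water(local) >, averaged over frames. The sketch below is a hedged illustration of that generic estimator (the cutoff, array layout, and function name are assumptions, not the authors' code); it expects precomputed per-frame minimum distances of each urea and water molecule to the protein. The change upon denaturation would then be the difference in this average between the unfolded and folded ensembles.

        import numpy as np

        def preferential_interaction(urea_dists, water_dists, cutoff=6.0):
            """Two-domain estimate of the preferential interaction coefficient.

            urea_dists, water_dists: arrays of shape (n_frames, n_molecules)
            giving each molecule's minimum distance (Angstrom) to the protein.
            cutoff: boundary (Angstrom) between the local and bulk domains.
            """
            gammas = []
            for d_u, d_w in zip(urea_dists, water_dists):
                n_u_local = np.sum(d_u < cutoff)      # urea molecules in the local domain
                n_w_local = np.sum(d_w < cutoff)      # water molecules in the local domain
                n_u_bulk = d_u.size - n_u_local
                n_w_bulk = d_w.size - n_w_local
                gammas.append(n_u_local - (n_u_bulk / n_w_bulk) * n_w_local)
            return np.mean(gammas)                    # frame-averaged Gamma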