    Intrinsically Disordered Regions May Lower the Hydration Free Energy in Proteins: A Case Study of Nudix Hydrolase in the Bacterium Deinococcus radiodurans

    The proteome of the radiation- and desiccation-resistant bacterium D. radiodurans features a group of proteins containing significant intrinsically disordered regions that are not present in non-extremophile homologues. Interestingly, this group includes a number of housekeeping and repair proteins such as DNA polymerase III, nudix hydrolase and rotamase. Here, we focus on a member of the nudix hydrolase family from D. radiodurans possessing low-complexity N- and C-terminal tails, which exhibit sequence signatures of intrinsic disorder and have unknown function. The enzyme catalyzes the hydrolysis of oxidatively damaged and mutagenic nucleotides, and it is thought to play an important role in D. radiodurans during the recovery phase after exposure to ionizing radiation or desiccation. We use molecular dynamics simulations to study the dynamics of the protein, and compute its hydration free energy using the GB/SA formalism. We show that the presence of the disordered tails significantly decreases the hydration free energy of the whole protein. We hypothesize that the tails increase the chances of the protein being located in the remaining water patches in the desiccated cell, where it is protected from the effects of desiccation and can function normally. We extrapolate this to other intrinsically disordered regions in proteins and propose a novel function for them: intrinsically disordered regions increase the “surface properties” of the folded domains they are attached to, making them more hydrophilic overall and potentially influencing, in this way, their localization and cellular activity.
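
    As a rough illustration of the GB/SA decomposition referred to above, the following minimal Python sketch implements a Still-style Generalized Born polar term plus a surface-area nonpolar term. This is a generic textbook form, not the paper's exact protocol; the Born radii, partial charges, and per-atom surface areas would come from the force field and simulation snapshots, and the parameter values below are common defaults chosen for illustration.

        import numpy as np

        def gb_polar_energy(q, pos, born_radii, eps_in=1.0, eps_out=78.5, ke=332.06):
            # Still-style Generalized Born polar solvation energy.
            # Units: charges in e, distances in Angstrom, energy in kcal/mol.
            # q, born_radii: length-N arrays; pos: (N, 3) coordinate array.
            tau = 1.0 / eps_in - 1.0 / eps_out
            e = 0.0
            for i in range(len(q)):
                for j in range(len(q)):
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    rr = born_radii[i] * born_radii[j]
                    # f_GB interpolates between the Coulomb distance (large r)
                    # and the Born self-energy limit (f_GB = R_i at r = 0).
                    f_gb = np.sqrt(r2 + rr * np.exp(-r2 / (4.0 * rr)))
                    e += q[i] * q[j] / f_gb
            return -0.5 * ke * tau * e

        def sa_nonpolar_energy(areas, gamma=0.0054):
            # Nonpolar term: surface tension (kcal/mol/A^2) times per-atom SASA.
            return gamma * np.sum(areas)

    In this picture, a more negative polar term for the full-length protein than for the folded core alone is the kind of tail-driven decrease in hydration free energy the abstract describes.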

    STEPS 4.0: Fast and memory-efficient molecular simulations of neurons at the nanoscale

    Recent advances in computational neuroscience have demonstrated the usefulness and importance of stochastic, spatial reaction-diffusion simulations. However, ever-increasing model complexity renders traditional serial solvers, as well as naive parallel implementations, inadequate. This paper introduces a new generation of the STochastic Engine for Pathway Simulation (STEPS) project (http://steps.sourceforge.net/), designated STEPS 4.0, and its core components, which have been designed for improved scalability, performance, and memory efficiency. STEPS 4.0 aims to enable novel scientific studies of macroscopic systems such as whole cells while capturing their nanoscale details. This class of models is out of reach for serial solvers due to the vast quantity of computation in such detailed models, and out of reach for naive parallel solvers due to the large memory footprint. Based on a distributed mesh solution, we introduce a new parallel stochastic reaction-diffusion solver and a deterministic membrane potential solver in STEPS 4.0. The distributed mesh, together with improved data layout and algorithm designs, significantly reduces the memory footprint of parallel simulations in STEPS 4.0. This enables massively parallel simulations on modern HPC clusters and overcomes the limitations of the previous parallel STEPS implementation. Current and future improvements to the solver are not sustainable without proper software engineering principles, so we also give an overview of how the STEPS codebase and the development environment have been updated to follow modern software development practices. We benchmark performance and memory footprint on three published models of differing complexity, from a simple spatial stochastic reaction-diffusion model to a more complex one coupled to a deterministic membrane potential solver to simulate the calcium burst activity of a Purkinje neuron. Simulation results of these models suggest that the new solution dramatically reduces per-core memory consumption, by more than a factor of 30, while maintaining similar or better performance and scalability.
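
    For context on the underlying method: solvers of this kind typically discretize space into mesh elements and treat diffusion between neighbouring elements as first-order jump events inside a Gillespie-style stochastic simulation, which is what makes distributing the mesh across ranks attractive. The sketch below is a generic one-dimensional toy version of that scheme in Python; it is not the STEPS API, and the voxel count, diffusion constant, and initial condition are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy reaction-diffusion master equation: one species hopping on a
        # 1-D chain of voxels, simulated with Gillespie's direct method.
        # A molecule in a voxel jumps to a neighbour with rate d = D / h**2.
        n_vox, D, h = 20, 1.0, 0.5
        d = D / h ** 2
        counts = np.zeros(n_vox, dtype=int)
        counts[n_vox // 2] = 100        # all molecules start in the middle
        t, t_end = 0.0, 1.0

        while t < t_end:
            left = d * counts[1:]       # propensities for jumps i -> i-1
            right = d * counts[:-1]     # propensities for jumps i -> i+1
            props = np.concatenate([left, right])
            total = props.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)        # time to next event
            k = rng.choice(props.size, p=props / total)
            if k < left.size:           # jump left: voxel k+1 -> voxel k
                counts[k + 1] -= 1
                counts[k] += 1
            else:                       # jump right: voxel k -> voxel k+1
                k -= left.size
                counts[k] -= 1
                counts[k + 1] += 1

    In a distributed-mesh solver, the voxels (here, a single array) are partitioned across ranks so that each rank stores only its local state, which is what drives down per-core memory consumption.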

    IT Lightning Talks: session #12

    Critical thinking and constructive communication are becoming essential skills not only for academics and scientists, but also for the general public. As we are inundated with news, being able to understand, interpret, and filter information becomes increasingly important. Furthermore, we are ever more often confronted with profound scientific, technical, economic, and ethical questions on which we have to base decisions: Should GMO plants be approved? Why should we care about encrypted communication? Is a globalized or an isolationist economy more advantageous to us? In the openlab HTCC collaboration, we have started a reading club around academic articles on scientific computing to practice precisely these skills and to gain more knowledge in this field. This is especially important for the doctoral students in our collaboration, allowing them to place their projects in a broader context and compare them to related work.

    An efficient low-rank Kalman filter for modern SIMD architectures

    The Kalman filter is a fundamental process in the reconstruction of particle collisions in high-energy physics detectors. At the LHCb detector at the Large Hadron Collider, this reconstruction happens at an average rate of 30 million times per second. Due to iterative enhancements in the detector's technology, together with the projected removal of the hardware filter, the rate of particles that will need to be processed in software in real time is expected to increase in the coming years by a factor of 40. In order to cope with the projected data rate, processing and filtering software must be adapted to take advantage of cutting-edge hardware technologies. We present Cross Kalman, a cross-architecture Kalman filter optimized for low-rank problems and SIMD architectures. We explore multi- and many-core architectures and compare their performance in single- and double-precision configurations. We show that, under the constraints of our mathematical formulation, we saturate the architectures under study. We validate our results and integrate our filter into the LHCb framework. Our work allows better use of the available resources at the LHCb experiment and enables us to evaluate other computing platforms for future hardware upgrades. Finally, we expect that the presented algorithm and data structures can be easily adapted to other applications of low-rank Kalman filters.
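
    To make the SIMD aspect concrete: the characteristic data-layout trick for this class of filter is to run many small, fixed-size fits in lockstep, one per vector lane, over a structure-of-arrays layout, so the update arithmetic is branch-free and elementwise. The Python sketch below illustrates that pattern with a scalar (rank-1) state per track; it is not the Cross Kalman implementation, and the track count, noise parameters, and random-walk motion model are invented for the example.

        import numpy as np

        # Structure-of-arrays layout: one scalar Kalman filter per "track",
        # all advanced in lockstep so the elementwise arithmetic maps
        # directly onto SIMD lanes (here via NumPy vectorization).
        n_tracks, n_steps = 10_000, 50
        rng = np.random.default_rng(1)

        x = np.zeros(n_tracks)          # state estimate per track
        P = np.ones(n_tracks)           # state variance per track
        q, r = 0.01, 0.25               # process / measurement noise variances

        truth = np.cumsum(rng.normal(0.0, np.sqrt(q), (n_steps, n_tracks)), axis=0)
        for z in truth + rng.normal(0.0, np.sqrt(r), (n_steps, n_tracks)):
            P = P + q                   # predict: random-walk motion model
            K = P / (P + r)             # gain, innovation, and covariance
            x = x + K * (z - x)         # updates are all branch-free and
            P = (1.0 - K) * P           # elementwise across the track batch

    A real track fit carries a small state vector and covariance matrix per node rather than scalars, but the same batched layout applies: fixed-size matrices stored field-by-field across tracks, so each SIMD instruction advances one algebraic step for a whole batch of fits.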
