37 research outputs found

    Developing improved MD codes for understanding processive cellulases

    Full text link
    "The mechanism of action of cellulose-degrading enzymes is illuminated through a multidisciplinary collaboration that uses molecular dynamics (MD) simulations and expands the capabilities of MD codes to allow simulations of enzymes and substrates on petascale computational facilities. There is a class of glycoside hydrolase enzymes called cellulases that are thought to decrystallize and processively depolymerize cellulose using biochemical processes that are largely not understood. Understanding the mechanisms involved and improving the efficiency of this hydrolysis process through computational models and protein engineering presents a compelling grand challenge. A detailed understanding of cellulose structure, dynamics and enzyme function at the molecular level is required to direct protein engineers to the right modifications or to understand if natural thermodynamic or kinetic limits are in play. Much can be learned about processivity by conducting carefully designed molecular dynamics (MD) simulations of the binding and catalytic domains of cellulases with various substrate configurations, solvation models and thermodynamic protocols. Most of these numerical experiments, however, will require significant modification of existing code and algorithms in order to efficiently use current (terascale) and future (petascale) hardware to the degree of parallelism necessary to simulate a system of the size proposed here. This work will develop MD codes that can efficiently use terascale and petascale systems, not just for simple classical MD simulations, but also for more advanced methods, including umbrella sampling with complex restraints and reaction coordinates, transition path sampling, steered molecular dynamics, and quantum mechanical/molecular mechanical simulations of systems the size of cellulose degrading enzymes acting on cellulose."http://deepblue.lib.umich.edu/bitstream/2027.42/64203/1/jpconf8_125_012049.pd

    Creating science-driven computer architecture: A new path to scientific leadership

    Full text link

    Computers and Liquid State Statistical Mechanics

    Full text link
    The advent of electronic computers has revolutionised the application of statistical mechanics to the liquid state. Computers have permitted, for example, the calculation of the phase diagram of water and ice, the folding of proteins, the behaviour of alkanes adsorbed in zeolites, the formation of liquid-crystal phases, and the process of nucleation. Computer simulations provide, on the one hand, new insights into the physical processes at work and, on the other, quantitative results of ever greater precision. Insights into physical processes facilitate the reductionist agenda of physics, whilst large-scale simulations bring out emergent features that are inherent (although far from obvious) in complex systems consisting of many bodies. It is safe to say that computer simulations are now an indispensable tool for both the theorist and the experimentalist, and their usefulness will only increase in the future. This chapter presents a selective review of some of the remarkable advances in condensed matter physics that could only have been achieved with the use of computers.
    Comment: 22 pages, 2 figures. Chapter for a book.
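    A minimal sketch of the kind of calculation such chapters survey: Metropolis Monte Carlo sampling of a small Lennard-Jones fluid in reduced units. The system size, temperature, and step size are illustrative assumptions, not values from the chapter.

```python
# Metropolis Monte Carlo for a small periodic Lennard-Jones fluid
# (reduced units throughout; parameters are toy values).
import numpy as np

rng = np.random.default_rng(1)
N, L, T = 64, 5.0, 1.0                      # particles, box length, temperature
pos = rng.uniform(0, L, size=(N, 3))

def pair_energy(r2):
    """LJ potential 4*(r^-12 - r^-6) given squared distances."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 ** 2 - inv6)

def particle_energy(i, p):
    """Energy of particle i at position p against all others."""
    d = pos - p
    d -= L * np.round(d / L)                # minimum-image convention
    r2 = np.sum(d * d, axis=1)
    r2[i] = np.inf                          # exclude self-interaction
    return np.sum(pair_energy(r2))

accepted, moves = 0, 200 * N
for move in range(moves):
    i = rng.integers(N)
    trial = (pos[i] + 0.1 * rng.normal(size=3)) % L
    dE = particle_energy(i, trial) - particle_energy(i, pos[i])
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis criterion
        pos[i] = trial
        accepted += 1
print(f"acceptance ratio: {accepted / moves:.2f}")
```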

    Progress Towards Petascale Applications in Biology: Status in 2006

    Get PDF
    Petascale computing is currently a common topic of discussion in the high performance computing community. Biological applications, particularly protein folding, are often given as examples of the need for petascale computing. There are at present biological applications that scale to execution rates of approximately 55 teraflops on a special-purpose supercomputer and 2.2 teraflops on a general-purpose supercomputer. In comparison, Qbox, a molecular dynamics code used to model metals, has achieved a performance of 207.3 teraflops. It may be useful to report operation rates and total calculations more consistently in discussions of biological applications, and to use total operations (integer and floating-point combined) rather than (or in addition to) floating-point operations as the unit of measure. Increased reporting of such metrics will enable better tracking of progress as the research community strives for the insights that petascale computing will enable.

    This research was supported in part by the Indiana Genomics Initiative and the Indiana Metabolomics and Cytomics Initiative. The Indiana Genomics Initiative of Indiana University and the Indiana Metabolomics and Cytomics Initiative of Indiana University are supported in part by Lilly Endowment, Inc. The authors also wish to thank IBM, Inc. for support via Shared University Research Grants and partnerships via IU’s relationship as an IBM Life Sciences Institute of Innovation. Indiana University also thanks the TeraGrid partners; IU’s participation in the TeraGrid is funded by National Science Foundation grant numbers 0338618, 0504075, and 0451237. The early development of this paper was supported by a Fulbright Senior Scholars award from the Council for International Exchange of Scholars (CIES) and the United States Department of State to Dr. Craig A. Stewart; Matthias Mueller and the Technische Universität Dresden were hosts. Many reviewers contributed to the improvement of the ideas expressed in this paper, and their contributions are gratefully acknowledged; Thom Dunning, Robert Germain, Chris Mueller, Jim Phillips, Richard Repasky, Ralph Roskies, and Allan Snavely are thanked particularly for their insights.
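    A minimal sketch of the reporting convention the authors advocate, assuming counted integer and floating-point operation totals are available; all numbers below are made-up placeholders, not figures from the paper.

```python
# Derive achieved rates from counted operations and wall time,
# reporting both the conventional flop rate and the combined
# total-operation rate (integer + floating point).
int_ops   = 1.8e15     # counted integer operations (placeholder)
float_ops = 1.2e15     # counted floating-point operations (placeholder)
wall_time = 3600.0     # elapsed seconds (placeholder)

flop_rate  = float_ops / wall_time
total_rate = (int_ops + float_ops) / wall_time
print(f"floating-point rate: {flop_rate / 1e12:.2f} Tflop/s")
print(f"total-operation rate: {total_rate / 1e12:.2f} Top/s")
```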

    Validating DOE's Office of Science "capability" computing needs.

    Full text link

    A Tuned and Scalable Fast Multipole Method as a Preeminent Algorithm for Exascale Systems

    Full text link
    Among the algorithms likely to play a major role in future exascale computing, the fast multipole method (FMM) appears as a rising star. Our recent work showed scaling of an FMM on GPU clusters, with problem sizes on the order of billions of unknowns. That work led to an extremely parallel FMM, scaling to thousands of GPUs or tens of thousands of CPUs. This paper reports on a campaign of performance tuning and scalability studies using multi-core CPUs on the Kraken supercomputer. All kernels in the FMM were parallelized using OpenMP, and a test using 10^7 particles randomly distributed in a cube showed 78% efficiency on 8 threads. Tuning the particle-to-particle kernel using SIMD instructions resulted in a 4x speed-up of the overall algorithm in single-core tests with 10^3 to 10^7 particles. Parallel scalability was studied in both the strong and weak senses. The strong scaling test used 10^8 particles and achieved 93% parallel efficiency on 2048 processes for the non-SIMD code and 54% for the SIMD-optimized code (which was still 2x faster). The weak scaling test used 10^6 particles per process and achieved 72% efficiency on 32,768 processes, with the largest calculation taking about 40 seconds to evaluate more than 32 billion unknowns. This work builds up evidence for our view that the FMM is poised to play a leading role in exascale computing, and we end the paper with a discussion of the features that make it a particularly favorable algorithm for the emerging heterogeneous and massively parallel architectural landscape.
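    For reference, the efficiency figures quoted above follow from the standard definitions of strong and weak scaling; the sketch below assumes those definitions, and the baseline timings are illustrative placeholders (only the process counts and the ~40-second figure appear in the abstract).

```python
# Parallel efficiency under the two standard scaling regimes.
def strong_scaling_efficiency(t_base, t_p, p_base, p):
    """Fixed total problem size: ideal time falls as 1/p."""
    return (t_base * p_base) / (t_p * p)

def weak_scaling_efficiency(t_base, t_p):
    """Problem size grows with p: ideal time stays constant."""
    return t_base / t_p

# Placeholder timings chosen to reproduce the quoted percentages:
# e.g. 2048 s on 1 process vs. 1.075 s on 2048 processes -> ~93%.
print(f"strong: {strong_scaling_efficiency(2048.0, 1.075, 1, 2048):.0%}")
# e.g. 29 s on 1 process vs. 40 s on 32,768 processes -> ~72%.
print(f"weak:   {weak_scaling_efficiency(29.0, 40.0):.0%}")
```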

    Towards a Unification of Supercomputing, Molecular Dynamics Simulation and Experimental Neutron and X-ray Scattering Techniques

    Get PDF
    Molecular dynamics simulation has become an essential tool for scientific discovery and investigation. The ability to follow every atomic coordinate at every time instant sets it apart from other methodologies, which can access experimental observables only as outcomes of the underlying atomic coordinates. Here, the utility of molecular dynamics is illustrated by investigating the structure and dynamics of fundamental models of cellulose fibers. To that end, a highly parallel code has been developed to compute static and dynamical scattering functions efficiently on modern supercomputing architectures. Using state-of-the-art supercomputing facilities, molecular dynamics codes, and parallelization strategies, this work also provides insight into the relationship between cellulose crystallinity and cellulose-lignin aggregation by performing multi-million-atom simulations. Finally, this work introduces concepts to augment the ability of molecular dynamics to interpret experimental observables with the help of Markov modeling, which allows for a convenient description of complex molecular motions as transitions between well-defined conformations. The work presented here suggests that molecular dynamics will continue to evolve and integrate with experimental techniques, such as neutron and X-ray scattering, and stochastic models, such as Markov modeling, to yield unmatched descriptions of molecular motions and interpretations of experimental data, facilitated by the growing computational power available to scientists.
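    A minimal sketch of the Markov-modeling step described above: estimating a row-stochastic transition matrix from a trajectory that has already been discretized into conformational states. The discretization, lag time, and toy data are assumptions for illustration; this is not the code from the work itself.

```python
# Estimate a Markov state model transition matrix by counting
# transitions at a chosen lag time and row-normalizing.
import numpy as np

def transition_matrix(dtraj, n_states, lag=1):
    """Count state-to-state transitions at the given lag, then normalize rows."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(dtraj[:-lag], dtraj[lag:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows > 0, rows, 1)   # avoid division by zero

# Toy discrete trajectory over 3 hypothetical conformational states.
rng = np.random.default_rng(2)
dtraj = rng.integers(0, 3, size=1000)
T = transition_matrix(dtraj, n_states=3, lag=1)
print(np.round(T, 2))                             # each row sums to 1
```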

    WTEC Panel Report on International Assessment of Research and Development in Simulation-Based Engineering and Science

    Full text link