13 research outputs found

    Algorithmische und Code-Optimierungen Molekulardynamiksimulationen für Verfahrenstechnik

    Get PDF
    The focus of this work lies on implementational improvements and, in particular, node-level performance optimization of the simulation software ls1-mardyn. Through data-structure improvements, SIMD vectorization and, especially, OpenMP parallelization, the world’s first simulation of 2×10^13 molecules at over 1 PFLOP/sec was enabled. To allow for long-range interactions, the Fast Multipole Method was introduced into ls1-mardyn. The algorithm was optimized for sequential, shared-memory, and distributed-memory execution on up to 32,768 MPI processes.
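    As a rough illustration of the kind of node-level kernel such optimizations target, the sketch below shows a Lennard-Jones force loop with OpenMP threading over cells and a SIMD-annotated inner loop. The structure-of-arrays layout, function names, and restriction to within-cell pairs are illustrative assumptions, not code from ls1-mardyn.

    // Illustrative sketch (not ls1-mardyn code): Lennard-Jones forces for the
    // particles inside each cell. OpenMP distributes cells over threads; the
    // inner loop is written so the compiler can SIMD-vectorize it. Neighbor-cell
    // pairs and the cutoff test are omitted for brevity.
    #include <cstddef>
    #include <vector>

    struct Cell {                        // structure-of-arrays particle storage
        std::vector<double> x, y, z;     // positions
        std::vector<double> fx, fy, fz;  // accumulated forces
    };

    void computeIntraCellForces(std::vector<Cell>& cells, double eps, double sigma) {
        const double s2 = sigma * sigma;
        #pragma omp parallel for schedule(dynamic)          // threads over cells
        for (std::size_t c = 0; c < cells.size(); ++c) {
            Cell& cell = cells[c];
            const std::size_t n = cell.x.size();
            for (std::size_t i = 0; i < n; ++i) {
                double fxi = 0.0, fyi = 0.0, fzi = 0.0;
                #pragma omp simd reduction(+:fxi,fyi,fzi)   // vectorized inner loop
                for (std::size_t j = 0; j < n; ++j) {
                    if (j == i) continue;
                    const double dx = cell.x[i] - cell.x[j];
                    const double dy = cell.y[i] - cell.y[j];
                    const double dz = cell.z[i] - cell.z[j];
                    const double r2 = dx * dx + dy * dy + dz * dz;
                    const double sr2 = s2 / r2;
                    const double sr6 = sr2 * sr2 * sr2;
                    const double f = 24.0 * eps * sr6 * (2.0 * sr6 - 1.0) / r2;  // LJ force / r
                    fxi += f * dx; fyi += f * dy; fzi += f * dz;
                }
                cell.fx[i] += fxi; cell.fy[i] += fyi; cell.fz[i] += fzi;
            }
        }
    }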

    Modelling solid/fluid interactions in hydrodynamic flows: a hybrid multiscale approach

    Get PDF
    With the advent of high performance computing (HPC), we can simulate nature at time and length scales that we could only dream of a few decades ago. Through the development of theory and numerical methods in the last fifty years, we have at our disposal a plethora of mathematical and computational tools for making powerful predictions about the world that surrounds us. These range from quantum methods like Density Functional Theory (DFT), through atomistic methods such as Molecular Dynamics (MD) and Monte Carlo (MC), up to more traditional macroscopic techniques based on the discretization of Partial Differential Equations (PDEs), like the Finite Element Method (FEM) and the Finite Volume Method (FVM), which are, respectively, the foundations of computational structural analysis and Computational Fluid Dynamics (CFD). Many modern scientific computing challenges in physics stem from appropriately combining two or more of these methods to tackle problems that could not be solved using any one of them alone. This is known as multi-scale modeling, which aims to achieve a trade-off between computational cost and accuracy by combining two or more physical models at different scales. In this work, a multi-scale domain decomposition technique based on coupling MD and CFD has been developed to make the study of slip and friction affordable, with atomistic detail, at length scales that are out of reach for fully atomistic methods alone. A software framework has been developed to facilitate the execution of this kind of simulation on HPC clusters. This has been made possible by employing the in-house-developed CPL_LIBRARY software library, which provides key functionality for implementing coupling through domain decomposition.
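    The coupling cycle described above can be pictured as the schematic control flow below. All types and function names are hypothetical placeholders chosen only to show the structure of a domain-decomposition coupling loop; they are not the CPL_LIBRARY interface.

    // Schematic of a hybrid MD-CFD coupling loop (placeholders, not CPL_LIBRARY).
    #include <cstddef>

    struct MDRegion  { void step(double /*dt*/) { /* advance atomistic near-wall region */ } };
    struct CFDRegion { void step(double /*dt*/) { /* advance continuum bulk flow */ } };

    // In the overlap region, averaged MD fields constrain the CFD boundary cells
    // and CFD fields constrain the MD boundary atoms (two-way coupling).
    void exchangeOverlap(MDRegion&, CFDRegion&) { /* placeholder */ }

    void runCoupled(MDRegion& md, CFDRegion& cfd,
                    std::size_t couplingSteps, std::size_t mdSubSteps,
                    double dtMD, double dtCFD) {
        for (std::size_t n = 0; n < couplingSteps; ++n) {
            for (std::size_t k = 0; k < mdSubSteps; ++k)
                md.step(dtMD);          // MD sub-cycles: its time step is much smaller
            cfd.step(dtCFD);            // one continuum step per coupling cycle
            exchangeOverlap(md, cfd);   // exchange constraints across the overlap
        }
    }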

    Coupling of particle simulation and lattice Boltzmann background flow on adaptive grids

    Get PDF
    The lattice-Boltzmann method as well as classical molecular dynamics are established and widely used methods for the simulation and study of soft matter. Molecular dynamics is a computer simulation technique on microscopic scales that solves the many-body kinetic equations of the particles involved. The lattice-Boltzmann method describes the hydrodynamic interactions of fluids, gases, or other soft matter on a coarser scale. Many applications, however, are multi-scale problems and require a coupling of both methods. A basic concept for short-ranged interactions in molecular dynamics is the linked-cells algorithm, which runs in O(N) for homogeneously distributed particles. Spatially adaptive methods for the lattice-Boltzmann scheme are used in order to reduce costly runtime and memory scaling in large-scale simulations. As the basis for this work, the highly flexible simulation software ESPResSo is used and extended. The adaptive lattice-Boltzmann scheme implemented in ESPResSo uses a domain decomposition with tree-based grids along the space-filling Morton curve, using the p4est software library. However, coupling the regular particle simulation with the adaptive lattice-Boltzmann method on highly parallel computer architectures is challenging and raises several problems. In this work, an approach for the domain decomposition of the linked-cells algorithm based on space-filling curves and the p4est library is presented. In general, the grids for molecular dynamics and fluid simulations are not identical. Thus, strategies to distribute differently refined grids across parallel processes are explained, including a parallel algorithm to construct the finest common tree using p4est. Furthermore, a method for interpolation and extrapolation on adaptively refined grids, which is needed for the viscous coupling of particles with the fluid, is discussed. The ESPResSo simulation software is augmented by the developed methods for particle-fluid coupling as well as the Morton-curve-based domain decomposition in a minimally invasive manner. The original ESPResSo implementation for regular particle and fluid simulations is used as a reference for the developed algorithms.
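    The ordering along the Morton curve that such decompositions rely on boils down to interleaving the bits of a cell's integer grid coordinates. The sketch below shows the standard magic-bits computation of a 3D Morton key; it is a generic illustration, not code from ESPResSo or p4est (which operate on tree octants rather than raw cell indices).

    // Illustrative sketch: 3D Morton (Z-order) key for an integer cell coordinate.
    // Cells that are close along the key tend to be close in space, which keeps
    // partitions of the space-filling curve geometrically compact.
    #include <cstdint>

    // Spread the lowest 21 bits of v so that two zero bits separate each data bit.
    static std::uint64_t spreadBits(std::uint64_t v) {
        v &= 0x1FFFFFULL;                            // keep 21 bits per coordinate
        v = (v | (v << 32)) & 0x001F00000000FFFFULL;
        v = (v | (v << 16)) & 0x001F0000FF0000FFULL;
        v = (v | (v << 8))  & 0x100F00F00F00F00FULL;
        v = (v | (v << 4))  & 0x10C30C30C30C30C3ULL;
        v = (v | (v << 2))  & 0x1249249249249249ULL;
        return v;
    }

    // Interleave x, y, z bitwise into a single 63-bit Morton key.
    std::uint64_t mortonKey3D(std::uint32_t x, std::uint32_t y, std::uint32_t z) {
        return spreadBits(x) | (spreadBits(y) << 1) | (spreadBits(z) << 2);
    }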

    Computational Approaches to Simulation and Analysis of Large Conformational Transitions in Proteins

    Get PDF
    In a typical living cell, millions to billions of proteins, nanomachines that fluctuate and cycle among many conformational states, convert available free energy into mechanochemical work. A fundamental goal of biophysics is to ascertain how 3D protein structures encode specific functions, such as catalyzing chemical reactions or transporting nutrients into a cell. Protein dynamics span femtosecond timescales (i.e., covalent bond oscillations) to large conformational transition timescales in, and beyond, the millisecond regime (e.g., glucose transport across a phospholipid bilayer). Actual transition events are fast but rare, occurring orders of magnitude faster than typical metastable equilibrium waiting times. Equilibrium molecular dynamics (EqMD) can capture atomistic detail and solute-solvent interactions, but even the microseconds of sampling attainable nowadays still fall orders of magnitude short of transition timescales, especially for large systems, rendering observations of such "rare events" difficult or effectively impossible. Advanced path-sampling methods exploit reduced physical models or biasing to produce plausible transitions while balancing accuracy and efficiency, but quantifying their accuracy relative to other numerical and experimental data has been challenging. Indeed, new horizons in elucidating protein function necessitate that present methodologies be revised to more seamlessly and quantitatively integrate a spectrum of methods, both numerical and experimental. In this dissertation, experimental and computational methods are put into perspective using the enzyme adenylate kinase (AdK) as an illustrative example. We introduce Path Similarity Analysis (PSA), an integrative computational framework developed to quantify transition path similarity. PSA not only reliably distinguished AdK transitions by the originating method, but also traced pathway differences between two methods back to charge-charge interactions (neglected by the stereochemical model, but not by the all-atom force field) in several conserved salt bridges. Cryo-electron microscopy maps of the transporter Bor1p are directly incorporated into EqMD simulations using MD flexible fitting to produce viable structural models and infer a plausible transport mechanism. Conforming to the theme of integration, a short compendium of an exploratory project, developing a hybrid atomistic-continuum method, is presented, including initial results, a novel fluctuating hydrodynamics model, and the corresponding numerical code.
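    Path metrics of the kind PSA builds on, such as a discrete Hausdorff distance between two transition paths, can be written compactly. The sketch below is a generic illustration under the assumption that each path is stored as a sequence of flattened coordinate frames; it is not the PSA implementation.

    // Minimal sketch: symmetric discrete Hausdorff distance between two paths,
    // each a sequence of conformations flattened to 3N-coordinate vectors.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <limits>
    #include <vector>

    using Frame = std::vector<double>;   // one conformation, flattened (3N values)
    using Path  = std::vector<Frame>;    // a transition path = sequence of frames

    static double frameDistance(const Frame& a, const Frame& b) {
        double s = 0.0;
        for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(s);             // Euclidean distance between conformations
    }

    // Directed distance: for every frame of A, find the nearest frame of B,
    // then report the worst such nearest-neighbor distance.
    static double directedHausdorff(const Path& A, const Path& B) {
        double worst = 0.0;
        for (const Frame& a : A) {
            double best = std::numeric_limits<double>::infinity();
            for (const Frame& b : B) best = std::min(best, frameDistance(a, b));
            worst = std::max(worst, best);
        }
        return worst;
    }

    double hausdorffDistance(const Path& A, const Path& B) {
        return std::max(directedHausdorff(A, B), directedHausdorff(B, A));
    }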

    EMSL Fiscal Year 2008 Annual Report

    Full text link