
    A GPU-accelerated Direct-sum Boundary Integral Poisson-Boltzmann Solver

    In this paper, we present a GPU-accelerated direct-sum boundary integral method to solve the linear Poisson-Boltzmann (PB) equation. In our method, a well-posed boundary integral formulation is used to ensure the fast convergence of Krylov subspace based linear algebraic solvers such as GMRES. The molecular surfaces are discretized with flat triangles and centroid collocation. To speed up our method, we take advantage of the parallel nature of the boundary integral formulation and parallelize the schemes within the CUDA shared memory architecture on the GPU. The schemes use only 11N + 6N_c size-of-double device memory for a biomolecule with N triangular surface elements and N_c partial charges. Numerical tests of these schemes show well-maintained accuracy and fast convergence. The GPU implementation using one GPU card (Nvidia Tesla M2070) achieves a 120-150X speed-up over the implementation using one CPU (Intel L5640, 2.27 GHz). With our approach, solving the PB equation on well-discretized molecular surfaces with up to 300,000 boundary elements takes less than about 10 minutes, so our approach is particularly suitable for fast electrostatics computations on small to medium biomolecules.
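
    To put the quoted memory footprint in perspective, here is a minimal back-of-the-envelope sketch of the 11N + 6N_c size-of-double estimate (Python; the function name and the example element/charge counts are illustrative assumptions, not part of the solver):

```python
# Illustrative estimate of the 11N + 6N_c size-of-double device-memory
# footprint quoted in the abstract; all names and inputs here are assumptions.
BYTES_PER_DOUBLE = 8

def device_memory_bytes(n_elements: int, n_charges: int) -> int:
    """GPU memory (bytes) for N triangular surface elements and N_c charges."""
    doubles = 11 * n_elements + 6 * n_charges
    return doubles * BYTES_PER_DOUBLE

# Example: 300,000 boundary elements and (hypothetically) 5,000 partial charges
print(f"~{device_memory_bytes(300_000, 5_000) / 1e6:.0f} MB of device memory")
```

    Even at the largest quoted problem size this footprint stays in the tens of megabytes, well within the memory of a single card such as the Tesla M2070.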

    Staggered Mesh Ewald: An Extension of the Smooth Particle-Mesh Ewald Method Adding Great Versatility

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation, and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations.
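
    The staggering idea itself is simple to illustrate: evaluate the same mesh-based quantity on a second mesh offset by half a grid spacing and average the two results, so that the part of the error that depends on where a particle sits relative to the mesh is damped. The toy 1D sketch below (plain NumPy, not the SPME reciprocal-space machinery; the test field, mesh spacing, and function names are assumptions) applies that averaging to a mesh-based derivative estimate:

```python
import numpy as np

def mesh_force(f, x, x0, h, n):
    """Derivative of f at points x, estimated as the slope of the linear
    interpolant built on a uniform mesh of n intervals starting at x0."""
    nodes = x0 + h * np.arange(n + 1)
    fvals = f(nodes)
    i = np.clip(((x - x0) // h).astype(int), 0, n - 1)
    return (fvals[i + 1] - fvals[i]) / h

def staggered_force(f, x, x0, h, n):
    """Average the mesh estimate with one from a mesh shifted by h/2."""
    return 0.5 * (mesh_force(f, x, x0, h, n) +
                  mesh_force(f, x, x0 - 0.5 * h, h, n + 1))

f  = lambda x: np.exp(-0.5 * (x - 5.0) ** 2)            # smooth stand-in potential
df = lambda x: -(x - 5.0) * f(x)                        # exact derivative ("force")
pts = np.random.default_rng(0).uniform(1.0, 9.0, 1000)  # off-mesh sample points
h, x0, n = 0.5, 0.0, 20
err_one = np.abs(mesh_force(f, pts, x0, h, n) - df(pts)).max()
err_two = np.abs(staggered_force(f, pts, x0, h, n) - df(pts)).max()
print(f"max error, one mesh: {err_one:.2e}; staggered pair: {err_two:.2e}")
```

    In this toy the averaged estimate roughly halves the worst-case error; the 1.2-orders-of-magnitude gain quoted above refers to the full StME force calculation, not to this sketch.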

    Integral-equation-based fast algorithms and graph-theoretic methods for large-scale simulations

    In this dissertation, we extend Greengard and Rokhlin's seminal work on the fast multipole method (FMM) in three aspects. First, we have implemented and released open-source new-version FMM solvers for the Laplace, Yukawa, and low-frequency Helmholtz equations to further broaden and facilitate the applications of the FMM in different scientific fields. Second, we propose a graph-theoretic parallelization scheme to map the FMM onto modern parallel computer architectures. In particular, we establish a critical path analysis, an exponential node-growth condition for concurrency-breadth, and a spatio-temporal graph partition strategy. Third, we introduce a new kernel-independent FMM based on Fourier series expansions and discuss how information can be collected, compressed, and transmitted through the tree structure for a wide class of kernels.
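
    As a small point of reference for the tree structure mentioned above, the sketch below (Python; the uniform box indexing and all names are assumptions, and the multipole/local expansions and their translations are omitted entirely) shows how particles are grouped into the levels of an octree of the kind an FMM traverses:

```python
import numpy as np
from collections import defaultdict

def build_octree_levels(points, depth):
    """Group points into uniform boxes, level by level (2^level boxes per axis).

    Only the spatial tree an FMM traverses is shown; multipole and local
    expansions, and the translations between them, are omitted.
    """
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    side = float((hi - lo).max()) or 1.0           # bounding cube edge length
    levels = []
    for level in range(depth + 1):
        nboxes = 2 ** level                        # boxes per axis at this level
        idx = np.minimum(((pts - lo) / side * nboxes).astype(int), nboxes - 1)
        boxes = defaultdict(list)
        for pid, key in enumerate(map(tuple, idx)):
            boxes[key].append(pid)                 # particle ids in each box
        levels.append(dict(boxes))
    return levels

rng = np.random.default_rng(1)
levels = build_octree_levels(rng.random((1000, 3)), depth=3)
for lvl, boxes in enumerate(levels):
    print(f"level {lvl}: {len(boxes)} occupied boxes")
```

    It is traversals over levels like these (upward and downward passes plus neighbour interactions) that the critical path analysis and graph partitioning mentioned above are concerned with.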

    Contributions algorithmiques pour les simulations complexes en physique des matériaux

    As computer performance keeps increasing, numerical simulations of physical phenomena become more and more complex. This complexity has two sources: on the one hand, more advanced physical models are introduced to represent the physics with higher fidelity; on the other hand, finer discretizations lead to very large-scale numerical problems. A further difficulty comes from the computers themselves, notably their hierarchical memories and hierarchical processing units. Algorithms, numerical schemes, and software therefore need to be redesigned to make them efficient and effective on these emerging architectures. The work presented in this document follows two main lines of research. The first concerns the development and parallel design of the fast multipole method to compute pairwise interactions quickly. The second concerns the coupling of models/methods and codes in materials physics at the atomistic scale, as well as the computational steering of these simulations.

    Large-scale parallelised boundary element method electrostatics for biomolecular simulation

    Large-scale biomolecular simulations require a model of particle interactions capable of incorporating the behaviour of large numbers of particles over relatively long timescales. If water is modelled as a continuous medium then the most important intermolecular forces between biomolecules can be modelled as long-range electrostatics governed by the Poisson-Boltzmann Equation (PBE). We present a linearised PBE solver called the "Boundary Element Electrostatics Program" (BEEP). BEEP is based on the Boundary Element Method (BEM), in combination with a recently developed O(N) Fast Multipole Method (FMM) algorithm which approximates the far-field integrals within the BEM, yielding a method which scales linearly with the number of particles. BEEP improves on existing methods by parallelising the underlying algorithms for use on modern cluster architectures, as well as taking advantage of recent progress in the field of GPGPU (General Purpose GPU) programming to exploit the highly parallel nature of graphics cards. We found the stability and numerical accuracy of the BEM/FMM method to be highly dependent on the choice of surface representation and integration method. For real proteins we demonstrate the critical level of surface detail required to produce converged electrostatic solvation energies, and introduce a curved surface representation based on Point-Normal G1-continuous triangles which we find generally improves numerical stability compared to a simpler surface constructed from planar triangles. Despite our improvements upon existing BEM methods, we find that it is not possible to directly integrate BEM surface solutions to obtain intermolecular electrostatic forces. It is, however, practicable to use the total electrostatic solvation energy calculated by BEEP to drive a Monte-Carlo simulation.
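
    The closing remark, that the total electrostatic solvation energy can drive a Monte-Carlo simulation even when surface solutions cannot be integrated into reliable forces, amounts to a standard Metropolis acceptance test on that energy. A minimal sketch (Python; the energy callable is a stand-in rather than BEEP's actual interface, and the proposal move is purely illustrative):

```python
import math
import random

def metropolis_step(config, energy_fn, propose_fn, beta, rng):
    """One Metropolis step driven only by a total energy, e.g. the
    electrostatic solvation energy returned by a BEM solver."""
    trial = propose_fn(config)
    dE = energy_fn(trial) - energy_fn(config)
    if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
        return trial, True       # accept the trial configuration
    return config, False         # keep the current configuration

# Toy usage: a 1D "configuration" with a quadratic stand-in energy.
rng = random.Random(0)
energy  = lambda x: 0.5 * x * x                  # placeholder for a solvation energy
propose = lambda x: x + rng.uniform(-0.5, 0.5)   # illustrative trial move
x, accepted = 0.0, 0
for _ in range(1000):
    x, ok = metropolis_step(x, energy, propose, beta=1.0, rng=rng)
    accepted += ok
print(f"acceptance rate: {accepted / 1000:.2f}")
```

    In a real setup the placeholder energy would be replaced by a call that rebuilds the molecular surfaces for the trial configuration and returns the BEM solvation energy, which is far more expensive than the acceptance test itself.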