1,735 research outputs found

    Reducing memory requirements for large size LBM simulations on GPUs

    The scientific community, in its never-ending pursuit of larger and more efficient computational resources, needs implementations that adapt efficiently to current parallel platforms. Graphics processing units (GPUs) are an appropriate platform to cover some of these demands, offering high performance at reduced cost and with efficient power consumption. However, the memory capacity of these devices is limited, so expensive memory transfers become necessary when dealing with big problems. Today, the lattice-Boltzmann method (LBM) has established itself as an efficient approach for Computational Fluid Dynamics simulations. Although the method is particularly amenable to efficient parallelization, it requires a considerable memory capacity, which causes a dramatic fall in performance when dealing with large simulations. In this work, we propose several strategies to minimize this memory demand, allowing us to execute bigger simulations on the same platform without additional memory transfers while keeping high performance. In particular, we present two new implementations, LBM-Ghost and LBM-Swap, which are analyzed in depth, presenting the pros and cons of each.
    This project was funded by the Spanish Ministry of Economy and Competitiveness (MINECO): BCAM Severo Ochoa accreditation SEV-2013-0323, MTM2013-40824, Computación de Altas Prestaciones VII TIN2015-65316-P; by the Basque Excellence Research Center (BERC 2014-2017) program of the Basque Government; and by the Departament d'Innovació, Universitats i Empresa de la Generalitat de Catalunya, under project MPEXPAR: Models de Programació i Entorns d'Execució Paral·lels (2014-SGR-1051). We also thank the computing facilities of the Extremadura Research Centre for Advanced Technologies (CETA-CIEMAT) and the NVIDIA GPU Research Center program for the resources provided, as well as the support of NVIDIA through the BSC/UPC NVIDIA GPU Center of Excellence.
    Peer reviewed. Postprint (author's final draft).
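    The abstract names LBM-Swap but does not describe its mechanics, so the following is a minimal, hypothetical sketch (our own names and structure, not the paper's code) of the well-known swap trick that lets LBM streaming run in place on a single lattice copy, halving memory compared with the usual two-lattice scheme. A 1D three-velocity (D1Q3) lattice is used for brevity.

    ```python
    import numpy as np

    # D1Q3: slot 0 has velocity c = -1, slot 1 is the rest population,
    # slot 2 has velocity c = +1. Slots 0 and 2 are opposites.

    def stream_two_lattice(f):
        """Reference streaming into a second array (doubles memory)."""
        g = np.empty_like(f)
        g[0] = np.roll(f[0], -1)   # c = -1 population moves left
        g[1] = f[1]                # rest population stays put
        g[2] = np.roll(f[2], +1)   # c = +1 population moves right
        return g

    def stream_swap(f):
        """In-place streaming via pairwise swaps (single lattice copy)."""
        n = f.shape[1]
        # 1) swap each population with its opposite within the same cell
        f[[0, 2]] = f[[2, 0]]
        # 2) walking the lattice in a fixed order, swap the reversed slot
        #    at x with the forward slot at the +1 neighbour; afterwards
        #    every population sits where plain streaming would put it
        for x in range(n):
            xp = (x + 1) % n       # periodic neighbour in the +1 direction
            f[0, x], f[2, xp] = f[2, xp], f[0, x]
        return f
    ```

    In a full code the local swap in step 1 is typically fused into the collision kernel; the point of the sketch is only that both routines produce identical lattices while `stream_swap` never allocates a second copy.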

    Parallel Excluded Volume Tempering for Polymer Melts

    We have developed a technique to accelerate the acquisition of effectively uncorrelated configurations for off-lattice models of dense polymer melts, making use of both parallel tempering and large-scale Monte Carlo moves. The method is based on simulating a set of systems in parallel, each with a slightly different repulsive core potential, such that a thermodynamic path from full excluded volume to an ideal gas of random walks is generated. While each system is run with standard stochastic dynamics, resulting in an NVT ensemble, we implement the parallel tempering through stochastic swaps between the configurations of adjacent potentials, and the large-scale Monte Carlo moves through attempted pivot and translation moves, which reach a realistic acceptance probability as the limit of the ideal gas of random walks is approached. Compared to pure stochastic dynamics, this results in increased efficiency even for a system of chains as short as N = 60 monomers, although at this chain length the large-scale Monte Carlo moves were ineffective. For even longer chains the speedup becomes substantial, as observed from preliminary data for N = 200.
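    The swap step described above can be sketched with the standard Metropolis exchange criterion for replicas that differ in their potential rather than in temperature. This is a hedged illustration under our own naming (the paper's actual acceptance rule and parameterization may differ): configurations of adjacent replicas i and j are exchanged with probability min(1, exp(-β ΔU)), where ΔU measures the cost of evaluating each configuration under the other replica's potential.

    ```python
    import math
    import random

    def swap_accept(u_ii, u_jj, u_ij, u_ji, beta=1.0):
        """Metropolis acceptance for exchanging the configurations of
        adjacent replicas i and j.

        u_ii = U_i(x_i): replica i's configuration under its own potential
        u_jj = U_j(x_j): replica j's configuration under its own potential
        u_ij = U_i(x_j): replica j's configuration under potential i
        u_ji = U_j(x_i): replica i's configuration under potential j
        """
        delta = beta * (u_ij + u_ji - u_ii - u_jj)
        # accept downhill swaps outright, uphill swaps with exp(-delta)
        return delta <= 0 or random.random() < math.exp(-delta)
    ```

    In a tempering loop one would attempt such swaps between neighbouring levels of the repulsive-core parameter, so that configurations decorrelated cheaply near the ideal-gas end of the path diffuse back toward the full-excluded-volume system.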

    Regularized lattice Boltzmann Multicomponent models for low Capillary and Reynolds microfluidics flows

    We present a regularized version of the color-gradient lattice Boltzmann (LB) scheme for the simulation of droplet formation in microfluidic devices of experimental relevance. The regularized version is shown to provide computationally efficient access to the Capillary number regimes relevant to droplet generation via microfluidic devices, such as flow focusers and the more recent microfluidic step emulsifiers.
    Comment: 9 pages, 5 figures
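    The regularization step itself is not spelled out in the abstract; the following is a hedged sketch of the standard regularization procedure for a D2Q9 lattice (our own code, not the paper's, which works within a color-gradient multicomponent scheme): the non-equilibrium part of each population is replaced by its projection onto the second-order Hermite contribution, discarding higher-order non-hydrodynamic content.

    ```python
    import numpy as np

    # D2Q9 lattice velocities and weights
    c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]])
    w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
    cs2 = 1.0 / 3.0  # squared lattice speed of sound

    def regularize(f, feq):
        """Replace f - feq by its projection onto the second-order
        Hermite basis Q_i,ab = c_ia c_ib - cs2 * delta_ab."""
        fneq = f - feq
        # second moment of the non-equilibrium part:
        # Pi_ab = sum_i fneq_i c_ia c_ib
        Pi = np.einsum('i,ia,ib->ab', fneq, c, c)
        Q = np.einsum('ia,ib->iab', c, c) - cs2 * np.eye(2)[None, :, :]
        # regularized non-equilibrium populations
        fneq_reg = w / (2 * cs2 ** 2) * np.einsum('iab,ab->i', Q, Pi)
        return feq + fneq_reg
    ```

    Because this is a projection, applying it twice gives the same result as applying it once, and populations already at equilibrium are left unchanged; filtering the higher-order modes in this way is what improves stability at the low Capillary and Reynolds numbers targeted by the paper.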