37,597 research outputs found

    Steering in computational science: mesoscale modelling and simulation

    This paper outlines the benefits of computational steering for high performance computing applications. Lattice-Boltzmann mesoscale fluid simulations of binary and ternary amphiphilic fluids in two and three dimensions are used to illustrate the substantial improvements which computational steering offers in terms of resource efficiency and time to discover new physics. We discuss details of our current steering implementations and describe their future outlook with the advent of computational grids. Comment: 40 pages, 11 figures. Accepted for publication in Contemporary Physics.
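    The steering idea is easy to picture in code. Below is a minimal, hypothetical sketch in Python (not the authors' implementation): the simulation loop polls a small control file between time steps, so a running job can have its parameters changed, or be told to stop, without restarting. All names (steer.json, coupling, dump_every) are illustrative assumptions.

    ```python
    # Hypothetical computational-steering loop: the running simulation
    # periodically checks a control file for parameter updates.
    import json
    import os

    STEER_FILE = "steer.json"  # assumed control file written by the user

    def poll_steering(params):
        """Merge any steering commands found on disk into the live parameters."""
        if os.path.exists(STEER_FILE):
            with open(STEER_FILE) as f:
                params.update(json.load(f))
            os.remove(STEER_FILE)  # consume the command so it applies once
        return params

    params = {"coupling": 0.1, "dump_every": 100, "stop": False}
    for step in range(1, 100_001):
        # ... advance the lattice-Boltzmann fields by one time step here ...
        if step % 10 == 0:                    # poll cheaply, not on every step
            params = poll_steering(params)
            if params["stop"]:
                break                         # steered shutdown
        if step % params["dump_every"] == 0:
            pass                              # write a checkpoint / visualisation frame
    ```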

    Towards a lightweight generic computational grid framework for biological research

    Background: An increasing number of scientific research projects require access to large-scale computational resources. This is particularly true in the biological field, whether to facilitate the analysis of large high-throughput data sets, or to perform large numbers of complex simulations – a characteristic of the emerging field of systems biology. Results: In this paper we present a lightweight generic framework for combining disparate computational resources at multiple sites (ranging from local computers and clusters to established national Grid services). A detailed guide describing how to set up the framework is available from the following URL: http://igrid-ext.cryst.bbk.ac.uk/portal_guide/. Conclusion: This approach is particularly (but not exclusively) appropriate for large-scale biology projects with multiple collaborators working at different national or international sites. The framework is relatively easy to set up, hides the complexity of Grid middleware from the user, and provides access to resources through a single, uniform interface. It has been developed as part of the European ImmunoGrid project.
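    As a rough illustration of the "single, uniform interface" idea, here is a hypothetical Python sketch: each resource type is wrapped in a backend with a common submit method, so the caller never needs to know whether a job ran locally or on a remote cluster. The class names and the SSH-based cluster backend are assumptions for illustration, not the project's actual design.

    ```python
    # Hypothetical uniform job interface over disparate resources.
    import subprocess
    from dataclasses import dataclass

    class LocalBackend:
        """Run the job directly on the local machine."""
        def submit(self, cmd: str) -> str:
            out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            return out.stdout

    @dataclass
    class ClusterBackend:
        """Hand the job to a remote machine over SSH (illustrative only)."""
        host: str
        def submit(self, cmd: str) -> str:
            out = subprocess.run(["ssh", self.host, cmd],
                                 capture_output=True, text=True)
            return out.stdout

    def run_anywhere(cmd: str, backends: list) -> str:
        """Single uniform entry point: try each resource until one succeeds."""
        for backend in backends:
            try:
                return backend.submit(cmd)
            except OSError:
                continue             # resource unavailable, try the next one
        raise RuntimeError("no resource accepted the job")

    # usage (hypothetical tool and host):
    # run_anywhere("blastp -query q.fa -db nr",
    #              [LocalBackend(), ClusterBackend("hpc.example.org")])
    ```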

    Multi-Architecture Monte-Carlo (MC) Simulation of Soft Coarse-Grained Polymeric Materials: SOft coarse grained Monte-carlo Acceleration (SOMA)

    Multi-component polymer systems are important for the development of new materials because of their ability to phase-separate or self-assemble into nano-structures. The Single-Chain-in-Mean-Field (SCMF) algorithm in conjunction with a soft, coarse-grained polymer model is an established technique to investigate these soft-matter systems. Here we present an implementation of this method: SOft coarse grained Monte-carlo Acceleration (SOMA). It is suitable to simulate large system sizes with up to billions of particles, yet versatile enough to study the properties of different kinds of molecular architectures and interactions. We achieve efficient simulations by employing accelerators such as GPUs, on workstations as well as on supercomputers. The implementation remains flexible and maintainable because it is written in a scientific programming language enhanced by OpenACC pragmas for the accelerators. We present implementation details and features of the program package, investigate the scalability of our implementation SOMA, and discuss two applications, which cover system sizes that are difficult to reach with other, common particle-based simulation methods.
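    To make the SCMF idea concrete, here is a schematic Python sketch (not SOMA itself, which is an accelerated production code): beads of a single chain are moved one at a time in a quenched mean field, with harmonic bonds and Metropolis acceptance. All parameter values and the random frozen field w are illustrative assumptions; minimum-image corrections across the periodic box are omitted for brevity.

    ```python
    # Schematic single-chain Monte-Carlo sweep in a frozen mean field.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N = 16.0, 32                  # box size, beads per chain (assumed)
    k_spring = 3.0                   # harmonic bond strength (assumed)
    grid = 16
    w = rng.normal(0.0, 0.1, (grid, grid, grid))  # frozen mean field, kT units

    def field_energy(r):
        """Energy of a bead at position r from the gridded mean field."""
        i, j, k = (np.floor(r / L * grid).astype(int)) % grid
        return w[i, j, k]

    def bond_energy(chain, n, r):
        """Harmonic bond energy of bead n at trial position r."""
        e = 0.0
        if n > 0:
            e += 0.5 * k_spring * np.sum((r - chain[n - 1]) ** 2)
        if n < N - 1:
            e += 0.5 * k_spring * np.sum((r - chain[n + 1]) ** 2)
        return e

    chain = np.cumsum(rng.normal(0, 0.5, (N, 3)), axis=0) % L
    for n in range(N):               # one sweep of single-bead trial moves
        old = chain[n].copy()
        new = (old + rng.uniform(-0.3, 0.3, 3)) % L
        dE = (bond_energy(chain, n, new) + field_energy(new)
              - bond_energy(chain, n, old) - field_energy(old))
        if dE < 0 or rng.random() < np.exp(-dE):
            chain[n] = new           # Metropolis acceptance
    ```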

    Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid flow in porous media and under shear

    Well designed lattice-Boltzmann codes exploit the essentially embarrassingly parallel features of the algorithm and so can be run with considerable efficiency on modern supercomputers. Such scalable codes permit us to simulate the behaviour of increasingly large quantities of complex condensed matter systems. In the present paper, we present some preliminary results on the large scale three-dimensional lattice-Boltzmann simulation of binary immiscible fluid flows through a porous medium derived from digitised x-ray microtomographic data of Bentheimer sandstone, and from the study of the same fluids under shear. Simulations on such scales can benefit considerably from the use of computational steering and we describe our implementation of steering within the lattice-Boltzmann code, called LB3D, making use of the RealityGrid steering library. Our large scale simulations benefit from the new concept of capability computing, designed to prioritise the execution of big jobs on major supercomputing resources. The advent of persistent computational grids promises to provide an optimal environment in which to deploy these mesoscale simulation methods, which can exploit the distributed nature of compute, visualisation and storage resources to reach scientific results rapidly; we discuss our work on the grid-enablement of lattice-Boltzmann methods in this context. Comment: 17 pages, 6 figures, accepted for publication in Phil. Trans. R. Soc. Lond.
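    For readers unfamiliar with the method, the following is a minimal single-phase D2Q9 lattice-Boltzmann sketch in Python, with a random solid mask standing in for the porous medium (LB3D itself is a parallel, multi-phase, three-dimensional code; the geometry, relaxation time, and body force here are toy assumptions). It shows the collide, bounce-back, stream cycle that such codes parallelise.

    ```python
    # Toy D2Q9 lattice-Boltzmann step with bounce-back on a solid mask.
    import numpy as np

    nx, ny, tau = 64, 32, 0.8
    c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
    wgt = np.array([4/9] + [1/9]*4 + [1/36]*4)        # D2Q9 weights
    opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]                 # opposite directions
    solid = np.random.default_rng(1).random((nx, ny)) < 0.2  # toy pore geometry
    f = np.ones((9, nx, ny)) * wgt[:, None, None]     # uniform initial state

    def equilibrium(rho, ux, uy):
        """Second-order equilibrium distribution for the BGK collision."""
        cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
        usq = ux**2 + uy**2
        return wgt[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

    for _ in range(100):
        rho = f.sum(axis=0)
        ux = (c[:, 0, None, None] * f).sum(axis=0) / rho + 1e-5  # tiny body force
        uy = (c[:, 1, None, None] * f).sum(axis=0) / rho
        f += -(f - equilibrium(rho, ux, uy)) / tau               # BGK collision
        f[:, solid] = f[opp][:, solid]                           # bounce-back
        for q in range(9):                                       # streaming
            f[q] = np.roll(np.roll(f[q], c[q, 0], axis=0), c[q, 1], axis=1)
    ```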

    IMP Science Gateway: from the Portal to the Hub of Virtual Experimental Labs in Materials Science

    "Science gateway" (SG) ideology means a user-friendly intuitive interface between scientists (or scientific communities) and different software components + various distributed computing infrastructures (DCIs) (like grids, clouds, clusters), where researchers can focus on their scientific goals and less on peculiarities of software/DCI. "IMP Science Gateway Portal" (http://scigate.imp.kiev.ua) for complex workflow management and integration of distributed computing resources (like clusters, service grids, desktop grids, clouds) is presented. It is created on the basis of WS-PGRADE and gUSE technologies, where WS-PGRADE is designed for science workflow operation and gUSE - for smooth integration of available resources for parallel and distributed computing in various heterogeneous distributed computing infrastructures (DCI). The typical scientific workflows with possible scenarios of its preparation and usage are presented. Several typical use cases for these science applications (scientific workflows) are considered for molecular dynamics (MD) simulations of complex behavior of various nanostructures (nanoindentation of graphene layers, defect system relaxation in metal nanocrystals, thermal stability of boron nitride nanotubes, etc.). The user experience is analyzed in the context of its practical applications for MD simulations in materials science, physics and nanotechnologies with available heterogeneous DCIs. In conclusion, the "science gateway" approach - workflow manager (like WS-PGRADE) + DCI resources manager (like gUSE)- gives opportunity to use the SG portal (like "IMP Science Gateway Portal") in a very promising way, namely, as a hub of various virtual experimental labs (different software components + various requirements to resources) in the context of its practical MD applications in materials science, physics, chemistry, biology, and nanotechnologies.Comment: 6 pages, 5 figures, 3 tables; 6th International Workshop on Science Gateways, IWSG-2014 (Dublin, Ireland, 3-5 June, 2014). arXiv admin note: substantial text overlap with arXiv:1404.545

    From Quantity to Quality: Massive Molecular Dynamics Simulation of Nanostructures under Plastic Deformation in Desktop and Service Grid Distributed Computing Infrastructure

    The distributed computing infrastructure (DCI) on the basis of BOINC and EDGeS-bridge technologies for high-performance distributed computing is used for porting the sequential molecular dynamics (MD) application to its parallel version for DCI with Desktop Grids (DGs) and Service Grids (SGs). The actual metrics of the working DG-SG DCI were measured; a normal distribution of host performances and signs of log-normal distributions of other characteristics (CPUs, RAM, and HDD per host) were found. The practical feasibility and high efficiency of MD simulations on the basis of the DG-SG DCI were demonstrated in an experiment with massive MD simulations for a large quantity of aluminum nanocrystals ($\sim 10^2$-$10^3$). Statistical analysis (Kolmogorov-Smirnov test, moment analysis, and bootstrapping analysis) of the defect density distribution over the ensemble of nanocrystals showed that a change of plastic deformation mode is followed by a qualitative change of the defect density distribution type over the ensemble of nanocrystals. Some limitations (fluctuating performance, unpredictable availability of resources, etc.) of the typical DG-SG DCI were outlined, and some advantages (high efficiency, high speedup, and low cost) were demonstrated. Deploying on the DG DCI allows one to obtain new scientific quality from the simulated quantity of numerous configurations by harnessing sufficient computational power to undertake MD simulations over a wider range of physical parameters (configurations) in a much shorter timeframe. Comment: 13 pages, 11 figures (http://journals.agh.edu.pl/csci/article/view/106
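    The statistical pipeline named above (a Kolmogorov-Smirnov test plus bootstrapping over an ensemble) can be sketched in a few lines of Python. The log-normal sample and all parameter values below are synthetic assumptions standing in for the paper's measured defect densities.

    ```python
    # Sketch of KS testing and bootstrapping on an ensemble of defect densities.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    defect_density = rng.lognormal(mean=-2.0, sigma=0.5, size=500)  # synthetic

    # KS test against a log-normal fitted to the sample
    shape, loc, scale = stats.lognorm.fit(defect_density, floc=0)
    ks = stats.kstest(defect_density, "lognorm", args=(shape, loc, scale))
    print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")

    # bootstrap 95% confidence interval for the ensemble mean
    means = [rng.choice(defect_density, size=defect_density.size).mean()
             for _ in range(2000)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    print(f"mean defect density: {defect_density.mean():.4f} "
          f"(95% CI {lo:.4f}-{hi:.4f})")
    ```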