
    A review of High Performance Computing foundations for scientists

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between the experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which would otherwise be inaccessible, helps to improve experiments, and provides new insights into the systems being analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, since it can help them to improve the performance of their theoretical models and simulations. This review covers some technical essentials to this end, and is devised as a complement for researchers whose education is focused on scientific issues rather than on technological aspects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much prior background. We sketch the way standard computers and supercomputers work, cover distributed computing, and discuss essential aspects to take into account when running scientific calculations on computers.
    Comment: 33 pages
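
    As a concrete illustration of the distributed-computing ideas such a review covers, the sketch below splits a simple numerical quadrature across processes with MPI and combines the partial sums on one rank. It is a minimal, generic example under assumed values (the integrand, the number of quadrature points, the cyclic work split); it is not code from the paper itself.

    /* Minimal sketch of distributed-memory parallelism with MPI: each rank
     * integrates part of 4/(1+x^2) on [0,1] and the partial sums are
     * combined on rank 0 to approximate pi. Illustrative only.
     * Build with e.g.: mpicc -O2 pi.c && mpirun -np 4 ./a.out */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const long n = 1000000;          /* quadrature points (assumed) */
        double h, local = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        h = 1.0 / (double)n;
        /* Each rank handles every size-th point: a simple cyclic work split. */
        for (long i = rank; i < n; i += size) {
            double x = h * ((double)i + 0.5);
            local += 4.0 / (1.0 + x * x);
        }
        local *= h;

        /* Combine the partial results on rank 0. */
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi ~= %.12f\n", pi);

        MPI_Finalize();
        return 0;
    }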

    Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid flow in porous media and under shear

    Well designed lattice-Boltzmann codes exploit the essentially embarrassingly parallel features of the algorithm and so can be run with considerable efficiency on modern supercomputers. Such scalable codes permit us to simulate the behaviour of increasingly large quantities of complex condensed matter systems. In the present paper, we present some preliminary results on the large scale three-dimensional lattice-Boltzmann simulation of binary immiscible fluid flows through a porous medium derived from digitised x-ray microtomographic data of Bentheimer sandstone, and from the study of the same fluids under shear. Simulations on such scales can benefit considerably from the use of computational steering and we describe our implementation of steering within the lattice-Boltzmann code, called LB3D, making use of the RealityGrid steering library. Our large scale simulations benefit from the new concept of capability computing, designed to prioritise the execution of big jobs on major supercomputing resources. The advent of persistent computational grids promises to provide an optimal environment in which to deploy these mesoscale simulation methods, which can exploit the distributed nature of compute, visualisation and storage resources to reach scientific results rapidly; we discuss our work on the grid-enablement of lattice-Boltzmann methods in this context.
    Comment: 17 pages, 6 figures, accepted for publication in Phil. Trans. R. Soc. Lond.
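
    The parallel pattern behind such lattice-Boltzmann codes can be sketched as a slab domain decomposition with a ghost-plane (halo) exchange between neighbouring ranks each timestep. The fragment below is a generic MPI illustration under assumed grid sizes and a single scalar per site; it is not code from LB3D or the RealityGrid steering library.

    /* Schematic halo exchange for a slab-decomposed lattice grid. Each rank
     * owns NZ interior planes of NX*NY sites plus one ghost plane on each
     * side; a real LB code would exchange all distribution functions. */
    #include <mpi.h>
    #include <stdlib.h>

    #define NX 64
    #define NY 64
    #define NZ 32                       /* interior planes per rank (assumed) */
    #define PLANE (NX * NY)

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Planes indexed 0..NZ+1; ghost planes at 0 and NZ+1. */
        double *f = calloc((size_t)PLANE * (NZ + 2), sizeof(double));
        int up   = (rank + 1) % size;            /* periodic neighbours */
        int down = (rank - 1 + size) % size;

        /* ... collision and streaming on interior planes 1..NZ would go here ... */

        /* Send top interior plane up, receive into bottom ghost plane. */
        MPI_Sendrecv(f + (size_t)NZ * PLANE, PLANE, MPI_DOUBLE, up,   0,
                     f,                      PLANE, MPI_DOUBLE, down, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Send bottom interior plane down, receive into top ghost plane. */
        MPI_Sendrecv(f + PLANE,                    PLANE, MPI_DOUBLE, down, 1,
                     f + (size_t)(NZ + 1) * PLANE, PLANE, MPI_DOUBLE, up,   1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        free(f);
        MPI_Finalize();
        return 0;
    }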

    Coupling of Length Scales and Atomistic Simulation of MEMS Resonators

    We present simulations of the dynamic and temperature dependent behavior of Micro-Electro-Mechanical Systems (MEMS) by utilizing recently developed parallel codes which enable a coupling of length scales. The novel techniques used in this simulation accurately model the behavior of the mechanical components of MEMS down to the atomic scale. We study the vibrational behavior of one class of MEMS devices: micron-scale resonators made of silicon and quartz. The algorithmic and computational avenue applied here represents a significant departure from the usual finite element approach based on continuum elastic theory. The approach is to use an atomistic simulation in regions of significantly anharmonic forces and large surface area to volume ratios or where internal friction due to defects is anticipated. Peripheral regions of MEMS which are well-described by continuum elastic theory are simulated using finite elements for efficiency. Thus, in central regions of the device, the motion of millions of individual atoms is simulated, while the relatively large peripheral regions are modeled with finite elements. The two techniques run concurrently and mesh seamlessly, passing information back and forth. This coupling of length scales gives a natural domain decomposition, so that the code runs on multiprocessor workstations and supercomputers. We present novel simulations of the vibrational behavior of micron-scale silicon and quartz oscillators. Our results are contrasted with the predictions of continuum elastic theory as a function of size, and the failure of the continuum techniques is clear in the limit of small sizes. We also extract the Q value for the resonators and study the corresponding dissipative processes.
    Comment: 10 pages, 10 figures, to be published in the proceedings of DTM '99; LaTeX with spie.sty, bibtex with spiebib.bst and psfi
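
    A hedged sketch of the concurrent-coupling structure described here (two solver groups advancing in lockstep and exchanging interface data each step) might look as follows. The function names md_step and fe_step, the handshake buffer size, and the even split of ranks between the atomistic core and the finite-element periphery are all placeholder assumptions, not the authors' implementation.

    /* Hypothetical coupling loop: low ranks advance the atomistic (MD) core,
     * the remaining ranks advance the continuum (FE) periphery, and paired
     * ranks swap interface data every step. Assumes an even number of ranks. */
    #include <mpi.h>
    #include <stdio.h>

    #define NHANDSHAKE 1024   /* assumed size of the interface data block */

    /* Placeholder stand-ins for the real solvers (no physics here). */
    static void md_step(double *out, const double *in)
    { (void)in; for (int i = 0; i < NHANDSHAKE; ++i) out[i] = 0.0; }
    static void fe_step(double *out, const double *in)
    { (void)in; for (int i = 0; i < NHANDSHAKE; ++i) out[i] = 0.0; }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size % 2 != 0) {
            if (rank == 0) fprintf(stderr, "run with an even number of ranks\n");
            MPI_Finalize();
            return 1;
        }

        /* Natural domain decomposition: half the ranks run MD, half run FE. */
        int is_md = rank < size / 2;
        int partner = is_md ? rank + size / 2 : rank - size / 2;
        double out[NHANDSHAKE] = {0}, in[NHANDSHAKE] = {0};

        for (int step = 0; step < 1000; ++step) {
            if (is_md) md_step(out, in);
            else       fe_step(out, in);
            /* Exchange interface displacements/forces with the partner rank. */
            MPI_Sendrecv(out, NHANDSHAKE, MPI_DOUBLE, partner, step,
                         in,  NHANDSHAKE, MPI_DOUBLE, partner, step,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }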

    Parallel computing and the generation of basic plasma data

    Comprehensive simulations of the processing plasmas used in semiconductor fabrication will depend on the availability of basic data for many microscopic processes that occur in the plasma and at the surface. Cross sections for electron collisions, a principal mechanism for producing reactive species in these plasmas, are among the most important such data; however, electron-collision cross sections are difficult to measure, and the available data are, at best, sketchy for the polyatomic feed gases of interest. While computational approaches to obtaining such data are thus potentially of significant value, studies of electron collisions with polyatomic gases at relevant energies are numerically intensive. In this article, we report on the progress we have made in exploiting large-scale distributed-memory parallel computers, consisting of hundreds of interconnected microprocessors, to generate electron-collision cross sections for gases of interest in plasma simulations.
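
    Because cross sections at different collision energies are mutually independent, this kind of calculation parallelises naturally over energy points. The sketch below shows that pattern with MPI under assumed values (the number of energy points, an evenly divisible block split, and a dummy cross_section routine); it does not reproduce the authors' scattering code.

    /* Each rank computes cross sections for a block of collision energies;
     * rank 0 gathers the full set. cross_section() is a stand-in for the
     * expensive electron-molecule scattering calculation. */
    #include <mpi.h>
    #include <stdio.h>

    #define NE 64     /* number of energy points (assumed divisible by rank count) */

    static double cross_section(double energy_eV)
    {
        return 1.0 / (1.0 + energy_eV);   /* dummy value, not real physics */
    }

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int per_rank = NE / size;                 /* assumes NE % size == 0 */
        double local[NE], all[NE];

        for (int i = 0; i < per_rank; ++i) {
            double energy = 0.5 * (rank * per_rank + i + 1);   /* eV grid (assumed) */
            local[i] = cross_section(energy);
        }

        /* Collect every rank's block, in rank order, on rank 0. */
        MPI_Gather(local, per_rank, MPI_DOUBLE, all, per_rank, MPI_DOUBLE,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            for (int i = 0; i < NE; ++i)
                printf("E = %5.1f eV  sigma = %g\n", 0.5 * (i + 1), all[i]);

        MPI_Finalize();
        return 0;
    }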
