Distributed N-body Simulation on the Grid Using Dedicated Hardware
We present performance measurements of direct gravitational N-body
simulation on the grid, with and without specialized (GRAPE-6) hardware. Our
inter-continental virtual organization consists of three sites, one in Tokyo,
one in Philadelphia and one in Amsterdam. We run simulations with up to 196608
particles for a variety of topologies. In many cases, high performance
simulations over the entire planet are dominated by network bandwidth rather
than latency. With this global grid of GRAPEs, our calculation time remains
dominated by communication over the entire range of N we could reach, which
was limited by the use of only three sites. Increasing the number of particles
would therefore yield a more efficient execution. Based on these timings we
construct and calibrate a
model to predict the performance of our simulation on any grid infrastructure
with or without GRAPE. We apply this model to predict the simulation
performance on the Netherlands DAS-3 wide area computer. Equipping the DAS-3
with GRAPE-6Af hardware would achieve break-even between calculation and
communication at a few million particles, resulting in a compute time of just
over ten hours for 1 N-body time unit. Key words: high-performance computing,
grid, N-body simulation, performance modelling
Comment: (in press) New Astronomy, 24 pages, 5 figures
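A back-of-the-envelope version of such a performance model can be sketched as
follows. The constants (cost per particle pair, per-site speed, per-particle
message size, bandwidth and latency) are illustrative assumptions, not the
calibrated values from the paper; the point is only the N^2 scaling of the
calculation against the roughly linear scaling of the communication.

    # Minimal sketch of a calculation-vs-communication model for direct N-body
    # on a grid of "sites" machines; all constants are illustrative assumptions.
    def step_time(n, sites=3, pair_flops=60.0, speed=4.0e12,
                  bytes_per_particle=64.0, bandwidth=1.0e7, latency=0.27):
        """Estimate the time of one force-calculation step, in seconds."""
        # Direct summation: O(N^2) interactions, work shared over the sites.
        t_calc = pair_flops * n * n / (speed * sites)
        # Each step, every site exchanges its particles with the other sites.
        t_comm = latency * (sites - 1) + bytes_per_particle * n / bandwidth
        return t_calc, t_comm

    if __name__ == "__main__":
        for n in (196_608, 1_000_000, 4_000_000):
            calc, comm = step_time(n)
            bound = "calc-bound" if calc > comm else "comm-bound"
            print(f"N={n:>9,d}  calc={calc:8.2f}s  comm={comm:8.2f}s  {bound}")

With constants of this order the crossover lands somewhere between one and a
few million particles, but it shifts with every assumed constant above.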
The Living Application: a Self-Organising System for Complex Grid Tasks
We present the living application, a method to autonomously manage
applications on the grid. During its execution on the grid, the living
application makes choices on the resources to use in order to complete its
tasks. These choices can be based on the internal state, or on autonomously
acquired knowledge from external sensors. By granting the living application
limited user capabilities, it is able to port itself from one resource
topology to another. The application performs these actions at
run-time without depending on users or external workflow tools. We demonstrate
this new concept in a special case of a living application: the living
simulation. Today, many simulations require a wide range of numerical solvers
and run most efficiently if specialized nodes are matched to the solvers. The
idea of the living simulation is that it decides itself which grid machines to
use based on the numerical solver currently in use. In this paper we apply the
living simulation to modelling the collision between two galaxies in a test
setup with two specialized computers. This simulation switches at run-time
between a GPU-enabled computer in the Netherlands and a GRAPE-enabled machine
that resides in the United States, using an oct-tree N-body code whenever it
runs in the Netherlands and a direct N-body solver in the United States.
Comment: 26 pages, 3 figures, accepted by IJHPC
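The decision at the heart of the living simulation (which machine to migrate
to, given the solver that is about to run) can be illustrated with a small
dispatch table. The machine names, attributes and selection rule below are
hypothetical placeholders, not the actual implementation.

    # Illustrative sketch of solver-driven resource selection; the resources
    # and their attributes are hypothetical, not the machines from the paper.
    from dataclasses import dataclass

    @dataclass
    class Resource:
        name: str
        location: str
        accelerator: str  # special-purpose hardware, e.g. "GPU" or "GRAPE"

    RESOURCES = [
        Resource("gpu-node.example.nl", "Netherlands", "GPU"),
        Resource("grape-node.example.us", "United States", "GRAPE"),
    ]

    # Tree codes map well onto GPUs, direct summation onto GRAPE boards.
    PREFERRED_ACCELERATOR = {"octtree": "GPU", "direct": "GRAPE"}

    def select_resource(solver: str) -> Resource:
        """Pick the machine whose special-purpose hardware matches the solver."""
        wanted = PREFERRED_ACCELERATOR[solver]
        return next(r for r in RESOURCES if r.accelerator == wanted)

    for solver in ("octtree", "direct"):
        r = select_resource(solver)
        print(f"{solver:8s} -> {r.name} ({r.location})")

In the actual application this choice is made at run-time by the simulation
itself, which then ports its state to the selected machine before continuing.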
ASCR/HEP Exascale Requirements Review Report
This draft report summarizes and details the findings, results, and
recommendations derived from the ASCR/HEP Exascale Requirements Review meeting
held in June 2015. The main conclusions are as follows. 1) Larger, more
capable computing and data facilities are needed to support HEP science goals
in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of
the demand on the 2025 timescale is at least two orders of magnitude larger --
and in some cases more -- than what is currently available. 2) The growth rate of data
produced by simulations is overwhelming the current ability, of both facilities
and researchers, to store and analyze it. Additional resources and new
techniques for data analysis are urgently needed. 3) Data rates and volumes
from HEP experimental facilities are also straining the ability to store and
analyze such large and complex datasets. Appropriately configured
leadership-class facilities can play a transformational role in enabling
scientific discovery from these datasets. 4) A close integration of HPC
simulation and data analysis will aid greatly in interpreting results from HEP
experiments. Such an integration will minimize data movement and facilitate
interdependent workflows. 5) Long-range planning between HEP and ASCR will be
required to meet HEP's research needs. To best use ASCR HPC resources, the
experimental HEP program needs a) an established long-term plan for access to
ASCR computational and data resources, b) an ability to map workflows onto HPC
resources, c) the ability for ASCR facilities to accommodate workflows run by
collaborations that can have thousands of individual members, d) a path for
transitioning codes to the next-generation HPC platforms that will be available
at ASCR facilities, and e) a workforce trained to develop and use simulations
and analysis in support of HEP scientific research on next-generation systems.
Comment: 77 pages, 13 Figures; draft report, subject to further revision
Simulating the universe on an intercontinental grid of supercomputers
Understanding the universe is hampered by the elusiveness of its most common
constituent, cold dark matter. Almost impossible to observe, dark matter can be
studied effectively by means of simulation, and there is probably no other
research field where simulation has led to so much progress in the last decade.
Cosmological N-body simulations are an essential tool for evolving density
perturbations in the nonlinear regime. Simulating the formation of large-scale
structures in the universe, however, is still a challenge due to the enormous
dynamic range in spatial and temporal coordinates, and due to the enormous
computer resources required. The dynamic range is generally dealt with by the
hybridization of numerical techniques. We deal with the computational
requirements by connecting two supercomputers via an optical network and making
them operate as a single machine. This is challenging, if only for the fact
that the supercomputers of our choice are separated by half the planet, as one
is located in Amsterdam and the other is in Tokyo. The co-scheduling of the two
computers and the 'gridification' of the code enable us to achieve a 90%
efficiency for this distributed intercontinental supercomputer.
Comment: Accepted for publication in IEEE Computer
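One simple way to read the quoted 90% is as the fraction of wall-clock time
spent on computation rather than on inter-site communication; the sketch below
only restates that reading, and the timings in it are made up for illustration.

    # Minimal sketch: efficiency of a two-site run, taken here as the fraction
    # of wall-clock time spent computing; the timings below are invented.
    def efficiency(t_calc: float, t_comm: float) -> float:
        """Fraction of the total step time spent on useful computation."""
        return t_calc / (t_calc + t_comm)

    # Hypothetical per-step timings (seconds) for an Amsterdam-Tokyo run.
    print(f"efficiency = {efficiency(t_calc=45.0, t_comm=5.0):.0%}")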
Survey and Analysis of Production Distributed Computing Infrastructures
This report has two objectives. First, we describe a set of the production
distributed infrastructures currently available, so that the reader has a basic
understanding of them. This includes explaining why each infrastructure was
created and made available and how it has succeeded and failed. The set is not
complete, but we believe it is representative.
Second, we describe the infrastructures in terms of their use, which is a
combination of how they were designed to be used and how users have found ways
to use them. Applications are often designed and created with specific
infrastructures in mind, with both an appreciation of the existing capabilities
provided by those infrastructures and an anticipation of their future
capabilities. Here, the infrastructures we discuss were often designed and
created with specific applications in mind, or at least specific types of
applications. The reader should understand how the interplay between the
infrastructure providers and the users leads to such usages, which we call
usage modalities. These usage modalities are really abstractions that exist
between the infrastructures and the applications; they influence the
infrastructures by representing the applications, and they influence the
applications by representing the infrastructures.