A Distributed Economics-based Infrastructure for Utility Computing
Existing attempts at utility computing revolve around two approaches. The
first consists of proprietary solutions involving renting time on dedicated
utility computing machines. The second requires the use of heavy, monolithic
applications that are difficult to deploy, maintain, and use.
We propose a distributed, community-oriented approach to utility computing.
Our approach provides an infrastructure built on Web Services in which modular
components are combined to create a seemingly simple, yet powerful system. The
community-oriented nature generates an economic environment which results in
fair transactions between consumers and providers of computing cycles while
simultaneously encouraging improvements in the infrastructure of the
computational grid itself.
Comment: 8 pages, 1 figure
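To make the trading idea concrete, here is a minimal sketch of an economics-based match between consumers and providers of computing cycles; the class names and midpoint pricing rule are illustrative assumptions, not the paper's actual protocol.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    cycles: int        # compute cycles offered
    price: float       # asking price per cycle

@dataclass
class Bid:
    consumer: str
    cycles: int        # compute cycles requested
    price: float       # maximum price per cycle

def match(bids, offers):
    """Greedy double-auction match: cheapest offers serve highest bids."""
    trades = []
    offers = sorted(offers, key=lambda o: o.price)
    for bid in sorted(bids, key=lambda b: b.price, reverse=True):
        for offer in offers:
            if offer.cycles >= bid.cycles and offer.price <= bid.price:
                # Clear at the midpoint so neither side captures all surplus.
                trades.append((bid.consumer, offer.provider, bid.cycles,
                               (bid.price + offer.price) / 2))
                offer.cycles -= bid.cycles
                break
    return trades

bids = [Bid("alice", 100, 0.05), Bid("bob", 50, 0.02)]
offers = [Offer("siteA", 500, 0.01)]
print(match(bids, offers))  # both bids clear against the cheap offer
```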
A Survey of Fault-Tolerance and Fault-Recovery Techniques in Parallel Systems
Supercomputing systems today often come in the form of large numbers of
commodity systems linked together into a computing cluster. These systems, like
any distributed system, can have large numbers of independent hardware
components cooperating or collaborating on a computation. Unfortunately, any of
this vast number of components can fail at any time, resulting in potentially
erroneous output. In order to improve the robustness of supercomputing
applications in the presence of failures, many techniques have been developed
to provide resilience to these kinds of system faults. This survey provides an
overview of these various fault-tolerance techniques.
Comment: 11 pages
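Checkpoint/restart is among the standard fault-recovery techniques such a survey covers; a minimal sketch follows, with atomic writes so a crash mid-write cannot corrupt the saved checkpoint. The file name, interval, and stubbed work functions are illustrative placeholders.

```python
import os
import pickle

CHECKPOINT = "state.ckpt"   # illustrative path

def init_state():
    return 0.0              # placeholder for real application state

def compute_step(state):
    return state + 1.0      # placeholder for the failure-prone work

def run(total_steps, checkpoint_every=100):
    # Resume from the last checkpoint if one exists, else start fresh.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            step, state = pickle.load(f)
    else:
        step, state = 0, init_state()

    while step < total_steps:
        state = compute_step(state)
        step += 1
        if step % checkpoint_every == 0:
            # Write to a temp file, then rename: os.replace is atomic,
            # so a failure here leaves the previous checkpoint intact.
            tmp = CHECKPOINT + ".tmp"
            with open(tmp, "wb") as f:
                pickle.dump((step, state), f)
            os.replace(tmp, CHECKPOINT)
    return state
```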
Hyperswitch communication network
The Hyperswitch Communication Network (HCN) is a large-scale parallel computer prototype being developed at JPL. Commercial versions of the HCN computer are planned. The HCN computer being designed is a message-passing multiple instruction multiple data (MIMD) computer, and offers many advantages over traditional uniprocessors and bus-based multiprocessors in price-performance ratio, reliability and availability, and manufacturing. The HCN operating system is designed as a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the competing factors of processing and communication performance, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state-of-the-art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.
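As a generic illustration of the message-passing MIMD paradigm the HCN implements, the mpi4py sketch below shows two ranks cooperating purely by exchanging messages; it uses standard MPI, not the HCN's own programming interface.

```python
# Generic message-passing illustration using mpi4py; this is not the
# HCN programming interface, only the MIMD message-passing paradigm.
# Run with: mpiexec -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Each rank runs its own instruction stream (the "MI" in MIMD)...
    comm.send({"work": list(range(8))}, dest=1, tag=0)
    result = comm.recv(source=1, tag=1)
    print("rank 0 received:", result)
elif rank == 1:
    # ...and ranks cooperate solely by exchanging messages.
    task = comm.recv(source=0, tag=0)
    comm.send(sum(task["work"]), dest=0, tag=1)
```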
Leveraging legacy codes to distributed problem solving environments: A web service approach
This paper describes techniques used to leverage high-performance legacy codes as CORBA components in a distributed problem-solving environment. It first briefly introduces the software architecture adopted by the environment. Then it presents a CORBA-oriented wrapper generator (COWG) which can be used to automatically wrap high-performance legacy codes as CORBA components. Two legacy codes have been wrapped with COWG. One is an MPI-based molecular dynamics simulation (MDS) code; the other is a finite-element-based computational fluid dynamics (CFD) code for simulating incompressible Navier-Stokes flows. Performance comparisons between runs of the MDS CORBA component and the original MDS legacy code on a cluster of workstations and on a parallel computer are also presented. Wrapped as CORBA components, these legacy codes can be reused in a distributed computing environment. The first case shows that high performance can be maintained with the wrapped MDS component. The second case shows that a Web user can submit a task to the wrapped CFD component through a Web page without knowing the exact implementation of the component. In this way, a user's desktop computing environment can be extended to a high-performance computing environment using a cluster of workstations or a parallel computer.
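The abstract does not show COWG's generated code; the following is a hypothetical sketch of the general wrapping pattern, exposing an unmodified legacy MPI executable behind a simple service-style method. The class, method, and executable names are assumptions for illustration, not COWG's output.

```python
import subprocess
import tempfile

class LegacyCodeWrapper:
    """Hypothetical sketch of the wrapper pattern: expose a legacy MPI
    executable behind a service-style method, hiding how the component
    is implemented from the caller."""

    def __init__(self, executable, nprocs=4):
        self.executable = executable
        self.nprocs = nprocs

    def run(self, input_text):
        # Stage the request into an input file the legacy code understands.
        with tempfile.NamedTemporaryFile("w", suffix=".in",
                                         delete=False) as f:
            f.write(input_text)
            input_path = f.name
        # Launch the unmodified legacy code under MPI and capture output.
        result = subprocess.run(
            ["mpiexec", "-n", str(self.nprocs), self.executable, input_path],
            capture_output=True, text=True, check=True)
        return result.stdout

# Usage: wrapper = LegacyCodeWrapper("./mds_solver")
#        output = wrapper.run(input_deck)
```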
Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies
Grid is an infrastructure that involves the integrated and collaborative use
of computers, networks, databases and scientific instruments owned and managed
by multiple organizations. Grid applications often involve large amounts of
data and/or computing resources that require secure resource sharing across
organizational boundaries. This makes Grid application management and
deployment a complex undertaking. Grid middleware provides users with seamless
computing ability and uniform access to resources in the heterogeneous Grid
environment. Several software toolkits and systems, most of them the results
of academic research projects, have been developed all over the world. This
chapter focuses on four of these middleware systems: UNICORE, Globus, Legion,
and Gridbus. It also presents our implementation of a resource broker for
UNICORE, as this functionality was not previously supported. A comparison of
these systems on the basis of their architecture, implementation model, and
several other features is included.
Comment: 19 pages, 10 figures
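As a hypothetical illustration of what a resource broker does, the sketch below matches a job to the least-loaded site that can satisfy its CPU request; the matching rule and field names are assumptions, not the chapter's UNICORE broker design.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    free_cpus: int
    load: float        # current utilization, 0.0 - 1.0

@dataclass
class Job:
    job_id: str
    cpus: int

def broker(job, resources):
    """Pick the least-loaded resource that can satisfy the CPU request."""
    candidates = [r for r in resources if r.free_cpus >= job.cpus]
    if not candidates:
        raise RuntimeError(f"no resource can run {job.job_id}")
    best = min(candidates, key=lambda r: r.load)
    best.free_cpus -= job.cpus
    return best.name

sites = [Resource("siteA", 64, 0.7), Resource("siteB", 16, 0.2)]
print(broker(Job("j1", cpus=8), sites))   # -> siteB (least loaded)
```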
nbodykit: an open-source, massively parallel toolkit for large-scale structure
We present nbodykit, an open-source, massively parallel Python toolkit for
analyzing large-scale structure (LSS) data. Using Python bindings of the
Message Passing Interface (MPI), we provide parallel implementations of many
commonly used algorithms in LSS. nbodykit is both an interactive and scalable
piece of scientific software, performing well in a supercomputing environment
while still taking advantage of the interactive tools provided by the Python
ecosystem. Existing functionality includes estimators of the power spectrum,
2- and 3-point correlation functions, a Friends-of-Friends grouping algorithm,
mock catalog creation via the halo occupation distribution technique, and
approximate N-body simulations via the FastPM scheme. The package also provides
a set of distributed data containers, insulated from the algorithms themselves,
that enable nbodykit to provide a unified treatment of both simulation and
observational data sets. nbodykit can be easily deployed in a high performance
computing environment, overcoming some of the traditional difficulties of using
Python on supercomputers. We provide performance benchmarks illustrating the
scalability of the software. The modular, component-based approach of nbodykit
allows researchers to easily build complex applications using its tools. The
package is extensively documented at http://nbodykit.readthedocs.io, which also
includes an interactive set of example recipes for new users to explore. As
open-source software, we hope nbodykit provides a common framework for the
community to use and develop in confronting the analysis challenges of future
LSS surveys.
Comment: 18 pages, 7 figures. Feedback very welcome. Code available at https://github.com/bccp/nbodykit; for documentation, see http://nbodykit.readthedocs.io
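The short example below, adapted from the usage patterns in the nbodykit documentation, measures a 1D power spectrum from a mock log-normal catalog; exact argument names may differ across nbodykit versions.

```python
# Power spectrum sketch in the style of the nbodykit documentation
# (http://nbodykit.readthedocs.io); argument names follow the documented
# API but may differ across versions.
from nbodykit.lab import cosmology, LogNormalCatalog, FFTPower

cosmo = cosmology.Planck15
Plin = cosmology.LinearPower(cosmo, redshift=0.55)

# A mock catalog drawn from a log-normal realization of the density field.
cat = LogNormalCatalog(Plin=Plin, nbar=3e-4, BoxSize=1000.0,
                       Nmesh=128, bias=2.0, seed=42)

# Paint the catalog to a mesh and measure the 1D power spectrum;
# the same script runs on a laptop or across many MPI ranks.
mesh = cat.to_mesh(compensated=True)
result = FFTPower(mesh, mode='1d', kmin=0.01)
Pk = result.power
print(Pk['k'], Pk['power'].real - Pk.attrs['shotnoise'])
```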