Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies
The Grid is an infrastructure that involves the integrated and collaborative use
of computers, networks, databases and scientific instruments owned and managed
by multiple organizations. Grid applications often involve large amounts of
data and/or computing resources that require secure resource sharing across
organizational boundaries. This makes Grid application management and
deployment a complex undertaking. Grid middlewares provide users with seamless
computing ability and uniform access to resources in the heterogeneous Grid
environment. Several software toolkits and systems, most of them the results of
academic research projects around the world, have been developed. This chapter
focuses on four of these middlewares -- UNICORE, Globus, Legion and Gridbus. It
also presents our implementation of a resource broker for UNICORE, as this
functionality was not supported natively. A comparison of these systems on the
basis of their architecture, implementation model and several other features is
included.
Comment: 19 pages, 10 figures
High-Performance Cloud Computing: A View of Scientific Applications
Scientific computing often requires the availability of a massive number of
computers for performing large scale experiments. Traditionally, these needs
have been addressed by using high-performance computing solutions and installed
facilities such as clusters and supercomputers, which are difficult to set up,
maintain, and operate. Cloud computing provides scientists with a completely
new model of utilizing the computing infrastructure. Compute resources, storage
resources, as well as applications, can be dynamically provisioned (and
integrated within the existing infrastructure) on a pay per use basis. These
resources can be released when they are no longer needed. Such services are often
offered within the context of a Service Level Agreement (SLA), which ensures the
desired Quality of Service (QoS). Aneka, an enterprise Cloud computing
solution, harnesses the power of compute resources by relying on private and
public Clouds and delivers to users the desired QoS. Its flexible and service
based infrastructure supports multiple programming paradigms that make Aneka
address a variety of different scenarios: from finance applications to
computational science. As examples of scientific computing in the Cloud, we
present a preliminary case study on using Aneka for the classification of gene
expression data and the execution of an fMRI brain imaging workflow.
Comment: 13 pages, 9 figures, conference paper
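The pay-per-use provisioning pattern described above can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Aneka's actual API: resources are acquired on demand to meet a deadline-style QoS target, then released, with cost accrued only for the time they were held.

```python
# Illustrative sketch (hypothetical API, not Aneka's): acquire cloud
# resources on demand under a QoS target, release them when no longer
# needed, and pay only for the time they were held.
import time

class ProvisionedNode:
    """A stand-in for a dynamically provisioned compute resource."""
    def __init__(self, node_id, cost_per_hour):
        self.node_id = node_id
        self.cost_per_hour = cost_per_hour
        self.acquired_at = time.monotonic()

    def release(self):
        # Pay-per-use: bill only for the time this node was held.
        hours = (time.monotonic() - self.acquired_at) / 3600.0
        return hours * self.cost_per_hour

def provision_for_deadline(tasks, task_seconds, deadline_seconds,
                           cost_per_hour=0.10):
    """Acquire just enough nodes to finish all tasks within the deadline."""
    # ceil(total work / deadline) nodes are needed
    needed = int(-(-tasks * task_seconds // deadline_seconds))
    return [ProvisionedNode(i, cost_per_hour) for i in range(needed)]

# 100 one-minute tasks under a 10-minute deadline -> 10 nodes.
nodes = provision_for_deadline(tasks=100, task_seconds=60, deadline_seconds=600)
bill = sum(node.release() for node in nodes)   # release when done
print(f"provisioned {len(nodes)} nodes, bill so far: ${bill:.4f}")
```

A real middleware would additionally monitor SLA compliance at runtime and grow or shrink the node pool, but the acquire/use/release cycle is the core of the cost model.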
IMP Science Gateway: from the Portal to the Hub of Virtual Experimental Labs in Materials Science
"Science gateway" (SG) ideology means a user-friendly intuitive interface
between scientists (or scientific communities) and different software
components + various distributed computing infrastructures (DCIs) (like grids,
clouds, clusters), where researchers can focus on their scientific goals and
less on peculiarities of software/DCI. "IMP Science Gateway Portal"
(http://scigate.imp.kiev.ua) for complex workflow management and integration of
distributed computing resources (like clusters, service grids, desktop grids,
clouds) is presented. It is created on the basis of WS-PGRADE and gUSE
technologies, where WS-PGRADE is designed for science workflow operation and
gUSE - for smooth integration of available resources for parallel and
distributed computing in various heterogeneous distributed computing
infrastructures (DCI). The typical scientific workflows with possible scenarios
of its preparation and usage are presented. Several typical use cases for these
science applications (scientific workflows) are considered for molecular
dynamics (MD) simulations of complex behavior of various nanostructures
(nanoindentation of graphene layers, defect system relaxation in metal
nanocrystals, thermal stability of boron nitride nanotubes, etc.). The user
experience is analyzed in the context of its practical applications for MD
simulations in materials science, physics and nanotechnologies with available
heterogeneous DCIs. In conclusion, the "science gateway" approach - workflow
manager (like WS-PGRADE) + DCI resources manager (like gUSE)- gives opportunity
to use the SG portal (like "IMP Science Gateway Portal") in a very promising
way, namely, as a hub of various virtual experimental labs (different software
components + various requirements to resources) in the context of its practical
MD applications in materials science, physics, chemistry, biology, and
nanotechnologies.Comment: 6 pages, 5 figures, 3 tables; 6th International Workshop on Science
Gateways, IWSG-2014 (Dublin, Ireland, 3-5 June, 2014). arXiv admin note:
substantial text overlap with arXiv:1404.545
CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.
SciTech News Volume 71, No. 2 (2017)
Columns and Reports: From the Editor 3
Division News: Science-Technology Division 5; Chemistry Division 8; Engineering Division 9; Aerospace Section of the Engineering Division 12; Architecture, Building Engineering, Construction and Design Section of the Engineering Division 14
Reviews: Sci-Tech Book News Reviews 16
Advertisements: IEEE
Iso-energy-efficiency: An approach to power-constrained parallel computation
Future large-scale high-performance supercomputer systems require high energy efficiency to achieve exaflop computational power and beyond. Despite the need to understand energy efficiency in high-performance systems, there are few techniques to evaluate energy efficiency at scale. In this paper, we propose a system-level iso-energy-efficiency model to analyze, evaluate and predict the energy-performance of data-intensive parallel applications with various execution patterns running on large-scale power-aware clusters. Our analytical model can help users explore the effects of machine- and application-dependent characteristics on system energy efficiency and isolate efficient ways to scale system parameters (e.g. processor count, CPU power/frequency, workload size and network bandwidth) to balance energy use and performance. We derive our iso-energy-efficiency model and apply it to the NAS Parallel Benchmarks on two power-aware clusters. Our results indicate that the model accurately predicts total system energy consumption within 5% error on average for parallel applications with various execution and communication patterns. We demonstrate effective use of the model for various application contexts and in scalability decision-making.
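The iso-efficiency idea above can be made concrete with a toy cost model. This is not the paper's model: the machine parameters (per-processor FLOP rate, compute and idle power, a synchronization cost that grows with processor count) are all hypothetical, chosen only to show the question the model answers -- how much the workload must grow with processor count to hold energy efficiency constant.

```python
# Illustrative toy model (not the paper's): energy efficiency taken as
# useful FLOPs per joule; all machine parameters below are hypothetical.

def energy_efficiency(workload, procs, flops_per_proc=1e9,
                      watts_compute=80.0, watts_idle=40.0,
                      sync_seconds_per_proc=0.01):
    """Useful FLOPs per joule for `workload` FLOPs on `procs` processors."""
    t_compute = workload / (procs * flops_per_proc)
    t_sync = sync_seconds_per_proc * procs        # toy synchronization cost
    energy = procs * (watts_compute * t_compute + watts_idle * t_sync)
    return workload / energy

# Strong scaling at fixed workload: efficiency degrades as procs grow,
# because the synchronization term grows while useful work per joule
# stays capped.
base = energy_efficiency(1e12, procs=4)
strong = energy_efficiency(1e12, procs=64)
assert strong < base

# Iso-energy-efficiency question: how much must the workload grow at 64
# processors to recover (within 1% of) the 4-processor efficiency? A
# simple search stands in for the paper's analytical derivation.
w = 1e12
while energy_efficiency(w, procs=64) < base * 0.99:
    w *= 1.1
print(f"workload must grow ~{w / 1e12:.1f}x to stay iso-efficient at 64 procs")
```

The same shape of question -- which parameter scalings preserve a target efficiency -- is what an analytical iso-energy-efficiency model answers in closed form instead of by search.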
High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)
Computing plays an essential role in all aspects of high energy physics. As
computational technology evolves rapidly in new directions, and data throughput
and volume continue to follow a steep trend-line, it is important for the HEP
community to develop an effective response to a series of expected challenges.
In order to help shape the desired response, the HEP Forum for Computational
Excellence (HEP-FCE) initiated a roadmap planning activity with two key
overlapping drivers -- 1) software effectiveness, and 2) infrastructure and
expertise advancement. The HEP-FCE formed three working groups, 1) Applications
Software, 2) Software Libraries and Tools, and 3) Systems (including systems
software), to provide an overview of the current status of HEP computing and to
present findings and opportunities for the desired HEP computational roadmap.
The final versions of the reports are combined in this document, and are
presented along with introductory material.
Comment: 72 pages