CamGrid: Experiences in constructing a university-wide, Condor-based grid at the University of Cambridge
Proceedings of the 2004 UK e-Science All Hands Meeting, 31st August - 3rd September, Nottingham, UK

In this article we describe recent work building a university-wide grid at the University of Cambridge based on the Condor middleware [1]. After taking into account stakeholder concerns (e.g. security policies) and technical problems (e.g. firewalls and private IP addresses), we settled on a solution based on two separate Condor environments. The first is a single large pool administered centrally by the University Computing Service (UCS); the second is a federated service of flocked Condor pools belonging to various departments, run over a Virtual Private Network (VPN). We report on the current status of this ongoing work.
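The flocked, multi-pool arrangement described in this abstract can be sketched with HTCondor's standard flocking configuration macros (FLOCK_TO and FLOCK_FROM); the host names below are invented placeholders, not the actual Cambridge setup.

```
# Hypothetical condor_config fragments (each line lives in a different
# pool's configuration, shown together here for illustration).

# On a departmental submit machine: jobs that cannot run locally
# may flock to the central pool.
FLOCK_TO = central-pool.example.ac.uk

# On the central pool's manager: accept flocked jobs from departments.
FLOCK_FROM = dept-a.example.ac.uk, dept-b.example.ac.uk
```

In a real deployment the flocked traffic would additionally need to traverse the VPN and firewall arrangements the abstract mentions.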
Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies
The Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems have been developed all over the world, most of which are the results of academic research projects. This chapter focuses on four of these middleware systems: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, since this functionality was not supported natively. A comparison of these systems on the basis of architecture, implementation model and several other features is included.

Comment: 19 pages, 10 figures
HotGrid: Graduated Access to Grid-based Science Gateways
We describe the idea of a Science Gateway, an application-specific task wrapped as a web service, and some examples of these that are being implemented on the US TeraGrid cyberinfrastructure. We also describe HotGrid, a means of providing simple, immediate access to the Grid through one of these gateways, which we hope will broaden the use of the Grid by drawing in a wide community of users. The secondary purpose of HotGrid is to acclimate a science community to the concepts of certificate use. Our system provides these weakly authenticated users with immediate power to use Grid resources for science, but without the dangerous power of running arbitrary code. We describe the implementation of these Science Gateways with the Clarens secure web server.
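The core Science Gateway idea in this abstract, a fixed, application-specific task exposed as a web service so that users invoke one computation rather than running arbitrary code, can be illustrated with a minimal WSGI sketch. The task, parameter names, and endpoint are invented for illustration; this is not the Clarens or TeraGrid API.

```python
import json
from urllib.parse import parse_qs

def wrapped_task(params):
    """The single application-specific task the gateway exposes
    (a toy computation standing in for a real science code)."""
    z = float(params.get("z", 0))
    return {"scale_factor": 1.0 / (1.0 + z)}

def gateway_app(environ, start_response):
    """WSGI app: users can only invoke the fixed task, never
    supply arbitrary code to execute."""
    query = parse_qs(environ.get("QUERY_STRING", ""))
    params = {k: v[0] for k, v in query.items()}
    body = json.dumps(wrapped_task(params)).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

A gateway like this gives weakly authenticated users useful compute while keeping the attack surface limited to the task's own parameters, which is the safety property the abstract emphasizes.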
DARE: A Reflective Platform Designed to Enable Agile Data-Driven Research on the Cloud
The DARE platform has been designed to help research developers deliver user-facing applications and solutions over diverse underlying e-infrastructures and data and computational contexts. The platform is Cloud-ready and relies on the exposure of APIs, which are suitable for raising the abstraction level and hiding complexity. At its core, the platform implements the cataloguing and execution of fine-grained, Python-based dispel4py workflows as services. Reflection is achieved via a logical knowledge base comprising multiple internal catalogues, registries and semantics, while supporting persistent and pervasive data provenance. This paper presents design and implementation aspects of the DARE platform and provides directions for future development.

Published: San Diego, CA, USA
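The fine-grained workflow model this abstract refers to, small processing elements composed into a streaming pipeline, can be sketched in plain Python. The class and function names below are invented for illustration and are not the dispel4py API.

```python
class ProcessingElement:
    """A hypothetical workflow node that transforms each item
    of a data stream (in the spirit of a fine-grained PE)."""
    def __init__(self, fn):
        self.fn = fn

    def process(self, stream):
        for item in stream:
            yield self.fn(item)

def compose(*pes):
    """Chain processing elements into a pipeline that consumes
    an input stream and returns the materialized results."""
    def run(stream):
        for pe in pes:
            stream = pe.process(stream)
        return list(stream)
    return run

# Example: square each input, then keep only even results.
pipeline = compose(
    ProcessingElement(lambda x: x * x),
    ProcessingElement(lambda x: x if x % 2 == 0 else None),
)
result = [r for r in pipeline(range(5)) if r is not None]
```

In a platform like the one described, such a graph of processing elements would be catalogued, executed as a service, and traced for provenance rather than run inline as here.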
A High Throughput Workflow Environment for Cosmological Simulations
The next generation of wide-area sky surveys offers the power to place extremely precise constraints on cosmological parameters and to test the source of cosmic acceleration. These observational programs will employ multiple techniques based on a variety of statistical signatures of galaxies and large-scale structure. These techniques have sources of systematic error that need to be understood at the percent level in order to fully leverage the power of next-generation catalogs. Simulations of large-scale structure provide the means to characterize these uncertainties. We are using XSEDE resources to produce multiple synthetic sky surveys of galaxies and large-scale structure in support of science analysis for the Dark Energy Survey. In order to scale up our production to the level of fifty 10^10-particle simulations, we are working to embed production control within the Apache Airavata workflow environment. We explain our methods and report how the workflow has reduced production time by 40% compared to manual management.

Comment: 8 pages, 5 figures. V2 corrects an error in a figure
Cyberinfrastructure, Science Gateways, Campus Bridging, and Cloud Computing
Computers accelerate our ability to achieve scientific breakthroughs. As technology evolves and new research needs come to light, the role of cyberinfrastructure as “knowledge” infrastructure continues to expand. This article defines and discusses cyberinfrastructure and the related topics of science gateways and campus bridging; identifies future challenges in cyberinfrastructure; and discusses challenges and opportunities related to the evolution of cyberinfrastructure, “big data” (data-centric, data-enabled, and data-intensive research and data analytics), and cloud computing.

This material is based upon work supported by the National Science Foundation under grants 0504075, 0451237, 0723054, 1062432, 0116050, 0521433, 0503697, and 1053575, and by several IBM Shared University Research grants and support provided by Lilly Endowment, Inc. for the Indiana University Pervasive Technology Institute. Any opinions, findings and conclusions or recommendations expressed herein are those of the authors and do not necessarily reflect the views of the supporting agencies.