15,033 research outputs found
Enhancing Workflow with a Semantic Description of Scientific Intent
Peer reviewed preprint
A Semantic Grid Oriented to E-Tourism
With the increasing complexity of tourism business models and tasks, there is a
clear need for a next-generation e-Tourism infrastructure that supports flexible
automation, integration, computation, storage, and collaboration. Several
enabling technologies, such as the semantic Web, Web services, agents, and grid
computing, have been applied in different e-Tourism applications, but there is
no unified framework able to integrate all of them. This paper therefore
presents a promising e-Tourism framework based on the emerging semantic grid,
in which a number of key design issues are discussed, including architecture,
ontology structure, semantic reconciliation, service and resource discovery,
role-based authorization, and intelligent agents. The paper finally describes
an implementation of the framework.
Comment: 12 pages, 7 Figure
Cosmological Simulations on a Grid of Computers
The work presented in this paper aims at restricting the input parameter
values of the semi-analytical model used in GALICS and MOMAF, so as to
determine which parameters most influence the results, e.g., star formation,
feedback and halo recycling efficiencies. Our approach is to proceed
empirically: we run many simulations and derive the correct ranges of values.
The computation time needed is so large that we need to run on a grid of
computers. Hence, we model GALICS and MOMAF execution time and output file
size, and run the simulations using a grid middleware: DIET. All the complexity
of accessing resources, scheduling simulations and managing data is harnessed
by DIET and hidden behind a web portal accessible to the users.
Comment: Accepted and Published in AIP Conference Proceedings 1241, 2010,
pages 816-82
A Taxonomy of Workflow Management Systems for Grid Computing
With the advent of Grid and application technologies, scientists and
engineers are building more and more complex applications to manage and process
large data sets, and execute scientific experiments on distributed resources.
Such application scenarios require means for composing and executing complex
workflows. Therefore, many efforts have been made towards the development of
workflow management systems for Grid computing. In this paper, we propose a
taxonomy that characterizes and classifies various approaches for building and
executing workflows on Grids. We also survey several representative Grid
workflow systems developed by various projects world-wide to demonstrate the
comprehensiveness of the taxonomy. The taxonomy not only highlights the design
and engineering similarities and differences of state-of-the-art Grid workflow
systems, but also identifies areas that need further research.
Comment: 29 pages, 15 figure
e-Social Science and Evidence-Based Policy Assessment : Challenges and Solutions
Peer reviewed preprint
Harnessing the Power of Many: Extensible Toolkit for Scalable Ensemble Applications
Many scientific problems require multiple distinct computational tasks to be
executed in order to achieve a desired solution. We introduce the Ensemble
Toolkit (EnTK) to address the challenges of scale, diversity and reliability
they pose. We describe the design and implementation of EnTK, characterize its
performance and integrate it with two distinct exemplar use cases: seismic
inversion and adaptive analog ensembles. We perform nine experiments,
characterizing EnTK overheads, strong and weak scalability, and the performance
of two use case implementations, at scale and on production infrastructures. We
show how EnTK meets the following general requirements: (i) implementing
dedicated abstractions to support the description and execution of ensemble
applications; (ii) support for execution on heterogeneous computing
infrastructures; (iii) efficient scalability up to O(10^4) tasks; and (iv)
fault tolerance. We discuss novel computational capabilities that EnTK enables
and the scientific advantages arising from them. We propose EnTK as an
important addition to the suite of tools in support of production scientific
computing.
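The abstract does not show EnTK's actual API, but the core ensemble pattern it describes, many independent tasks executed concurrently with basic fault tolerance, can be sketched with standard-library Python. All names here (`run_task`, `run_ensemble`, the retry count) are illustrative assumptions, not EnTK interfaces:

```python
import concurrent.futures

def run_task(task_id, max_retries=2):
    """Run one ensemble member, retrying on transient failure
    (a toy stand-in for EnTK-style fault tolerance)."""
    for attempt in range(max_retries + 1):
        try:
            # Placeholder for a real simulation or analysis step;
            # pretend the first attempt of every fourth task fails.
            if task_id % 4 == 0 and attempt == 0:
                raise RuntimeError("transient failure")
            return task_id, "done"
        except RuntimeError:
            continue
    return task_id, "failed"

def run_ensemble(n_tasks, max_workers=8):
    """Execute n_tasks independent members concurrently and collect statuses."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(run_task, range(n_tasks)))

results = run_ensemble(32)
print(sum(1 for s in results.values() if s == "done"), "of", len(results), "succeeded")
```

A real EnTK pipeline adds the pieces this sketch omits: dedicated abstractions for describing ensembles, scheduling onto heterogeneous production infrastructures, and scalability to O(10^4) tasks.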
PaPaS: A Portable, Lightweight, and Generic Framework for Parallel Parameter Studies
The current landscape of scientific research is widely based on modeling and
simulation, typically with complexity in the simulation's flow of execution and
parameterization properties. Execution flows are not necessarily
straightforward, since they may involve multiple processing tasks and iterations.
Furthermore, parameter and performance studies are common approaches used to
characterize a simulation, often requiring traversal of a large parameter
space. High-performance computers offer practical resources at the expense of
users handling the setup, submission, and management of jobs. This work
presents the design of PaPaS, a portable, lightweight, and generic workflow
framework for conducting parallel parameter and performance studies. Workflows
are defined using parameter files based on a keyword-value pair syntax, thus
relieving the user of the overhead of creating complex scripts to manage the
workflow. A parameter set consists of any combination of environment variables,
files, partial file contents, and command line arguments. PaPaS is being
developed in Python 3 with support for distributed parallelization using SSH,
batch systems, and C++ MPI. The PaPaS framework will run as user processes, and
can be used in single/multi-node and multi-tenant computing systems. An example
simulation using the BehaviorSpace tool from NetLogo and a matrix multiply
using OpenMP are presented as parameter and performance studies, respectively.
The results demonstrate that the PaPaS framework offers a simple method for
defining and managing parameter studies, while increasing resource utilization.
Comment: 8 pages, 6 figures, PEARC '18: Practice and Experience in Advanced
Research Computing, July 22--26, 2018, Pittsburgh, PA, US
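The PaPaS file format is not specified in the abstract, but the idea of a keyword-value parameter file expanding into a traversal of the full parameter space can be sketched as follows. The file syntax (comma-separated values per key) and all names are illustrative assumptions, not PaPaS's actual format:

```python
import itertools

def parse_param_file(text):
    """Parse keyword-value lines; comma-separated values define a sweep axis."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        params[key.strip()] = [v.strip() for v in value.split(",")]
    return params

def expand(params):
    """Yield one run configuration per point in the cartesian
    product of all parameter axes."""
    keys = list(params)
    for combo in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, combo))

spec = """
# hypothetical parameter file
population = 50, 100, 200
mutation_rate = 0.01, 0.05
"""
runs = list(expand(parse_param_file(spec)))
print(len(runs))  # 3 x 2 = 6 parameter combinations
```

Each resulting dict would then be mapped onto environment variables, file contents, or command-line arguments and dispatched to SSH, a batch system, or MPI ranks, which is the job-management burden PaPaS takes off the user.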
Querying Large Physics Data Sets Over an Information Grid
Optimising use of the Web (WWW) for LHC data analysis is a complex problem
and illustrates the challenges arising from the integration of, and computation
across, massive amounts of information distributed worldwide. Finding the right
piece of information can, at times, be extremely time-consuming, if not
impossible. So-called Grids have been proposed to facilitate LHC computing and
many groups have embarked on studies of data replication, data migration and
networking philosophies. Other aspects, such as the role of 'middleware' for
Grids, are emerging as areas requiring research. This paper argues the need for
appropriate middleware that enables users to resolve physics queries across
massive data sets. It identifies the role of meta-data for query resolution and
the importance of Information Grids for high-energy physics analysis rather
than just Computational or Data Grids. This paper identifies software that is
being implemented at CERN to enable the querying of very large collaborating
HEP data-sets, initially being employed for the construction of CMS detectors.
Comment: 4 pages, 3 figure