
    A Framework for Model-Driven Scientific Workflow Engineering

    So-called scientific workflows are an important means in data-intensive science for reliable and efficient scientific data processing in distributed computing infrastructures such as Grids. Scientific Workflow Management Systems (SWfMS) help scientists model and run scientific workflows, in which a domain-specific layer for workflow modeling by a scientist and a technical layer for automated workflow execution can be distinguished. Initially, many SWfMS were developed from scratch using custom workflow technologies and languages, without application of already existing and established business workflow technologies. Among the reasons were different life cycles for scientific and business workflows as well as incompatible interfaces and communication protocols of the respective execution infrastructures. Meanwhile, several business IT infrastructures have evolved into service-oriented architectures (SOAs), for which many Web service standards and technologies have been developed. The Web Services Business Process Execution Language (BPEL), for example, is a well-accepted standard for the implementation and execution of business workflows in SOAs. The SOA architecture pattern has been adopted in scientific IT infrastructures by so-called Service Grids, based on existing standards and technologies. Due to this development, BPEL is also suitable for the execution of scientific workflows at the technical layer, as has been elaborated in many publications and projects. However, BPEL is a workflow language for IT experts and is not originally suited for scientific workflow modeling by a scientist at the domain-specific layer. A domain-specific abstraction of BPEL is therefore required that can be specifically tailored to scientific workflow modeling, along with a corresponding mapping to the technical layer.

    These challenges of domain-specific abstraction and mapping are addressed in this thesis with the help of the Business Process Model and Notation (BPMN) standard and technologies from Model-Driven Software Development (MDSD). The MoDFlow approach for Model-Driven Scientific WorkFlow Engineering is presented to map domain-specific scientific workflow models via a BPMN-based intermediate layer to an executable workflow model. The intermediate layer is specified by MoDFlow.BPMN, a BPMN metamodel subset with custom extensions for the scientific domain. MoDFlow.BPMN2BPEL defines three consecutive transformation steps to map MoDFlow.BPMN to BPEL for workflow execution. Furthermore, the MoDFlow approach describes different methods to utilize and extend MoDFlow.BPMN and MoDFlow.BPMN2BPEL, with a focus on the definition of so-called domain-specific languages (DSLs) for modeling scientific workflows at the domain-specific layer. The MoDFlow framework is an implementation of the MoDFlow approach based on the Eclipse Modeling Framework (EMF). It is evaluated in three application scenarios, in which different utilization and extension mechanisms are examined. The first two application scenarios investigate the technical feasibility of the approach and support scientific workflows with parameter sweeps that are executed on a Grid infrastructure. The third application scenario was conducted in collaboration with the PubFlow project, which aims to create an infrastructure to model and execute data publication workflows. For this purpose, a textual DSL and a corresponding language infrastructure are defined, based on the Xtext framework, that support developers in creating data publication workflows. This scenario illustrates the practicability of the MoDFlow framework. PubFlow currently plans to implement an additional graphical DSL based on the BPMN notation and a corresponding workflow editor for scientists.
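
    The layered mapping the abstract describes (domain-specific model, BPMN-based intermediate layer, executable BPEL) can be illustrated with a minimal standalone sketch. All names below (DomainStep, BpmnTask, the transformation functions) are invented for illustration; the actual MoDFlow.BPMN and MoDFlow.BPMN2BPEL artifacts are EMF-based metamodels and transformations, not this Python code.

    # Illustrative sketch only; names are hypothetical, not MoDFlow's API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DomainStep:           # domain-specific layer: what a scientist models
        name: str
        tool: str               # e.g. a data-publication service to invoke

    @dataclass
    class BpmnTask:             # intermediate layer: BPMN-like service task
        id: str
        operation: str

    def domain_to_bpmn(steps: List[DomainStep]) -> List[BpmnTask]:
        """First mapping: abstract domain model -> BPMN metamodel subset."""
        return [BpmnTask(id=f"task_{i}", operation=s.tool)
                for i, s in enumerate(steps)]

    def bpmn_to_bpel(tasks: List[BpmnTask]) -> str:
        """Second mapping: BPMN subset -> executable BPEL <sequence> fragment."""
        invokes = "\n".join(
            f'  <invoke name="{t.id}" operation="{t.operation}"/>' for t in tasks
        )
        return f"<sequence>\n{invokes}\n</sequence>"

    workflow = [DomainStep("collect", "fetchObservations"),
                DomainStep("publish", "mintDOI")]
    print(bpmn_to_bpel(domain_to_bpmn(workflow)))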

    Supporting the Everyday Work of Scientists: Automating Scientific Workflows

    This paper describes an action research project that we undertook with National Research Council Canada (NRC) scientists. Based on discussions about their difficulties in using software to collect data and manage processes, we identified three requirements for increasing research productivity: ease of use for end-users; managing scientific workflows; and facilitating software interoperability. Based on these requirements, we developed a software framework, Sweet, to assist in the automation of scientific workflows.

    Throughout the iterative development process, and through a series of structured interviews, we evaluated how the framework was used in practice, and identified increases in productivity and effectiveness and their causes. While the framework provides resources for writing application wrappers, it was easier to code the applications' functionality directly into the framework using OSS components. Ease of use for the end-user and flexible and fully parameterized workflow representations were key elements of the framework's success.
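
    A hedged sketch of what a "fully parameterized workflow representation" of the kind credited above might look like follows. The Step and Workflow classes and the wrapped functions are hypothetical and are not Sweet's actual API; they only illustrate the design idea of per-run parameter overrides for end-users.

    # Hypothetical sketch of a parameterized workflow; not Sweet's actual API.
    from typing import Any, Callable, Dict, List, Optional

    class Step:
        def __init__(self, name: str, func: Callable[..., Any], **params: Any):
            self.name = name
            self.func = func
            self.params = params    # default parameters, all overridable

    class Workflow:
        def __init__(self, steps: List[Step]):
            self.steps = steps

        def run(self, overrides: Optional[Dict[str, Dict[str, Any]]] = None) -> None:
            # Every parameter can be overridden per run, so an end-user can
            # adjust behaviour without editing code: the "fully parameterized" idea.
            overrides = overrides or {}
            for step in self.steps:
                params = {**step.params, **overrides.get(step.name, {})}
                print(f"running {step.name} with {params}")
                step.func(**params)

    def acquire(rate_hz: int) -> None:
        pass  # stand-in for a wrapped data-collection application

    def analyze(window: int) -> None:
        pass  # stand-in for a wrapped analysis application

    wf = Workflow([Step("acquire", acquire, rate_hz=10),
                   Step("analyze", analyze, window=5)])
    wf.run({"acquire": {"rate_hz": 100}})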

    SlicerAstro: a 3-D interactive visual analytics tool for HI data

    SKA precursors are capable of detecting hundreds of galaxies in HI in a single 12-hour pointing. Deeper surveys will more easily probe faint HI structures, typically located in the vicinity of galaxies, such as tails, filaments, and extraplanar gas. Interactive visualization has proven fundamental for the exploration of such data, as it gives users immediate feedback when manipulating the data. We have developed SlicerAstro, a 3-D interactive viewer with new analysis capabilities, based on traditional 2-D input/output hardware. These capabilities enhance data inspection, allowing faster analysis of complex sources than with traditional tools. SlicerAstro is an open-source extension of 3DSlicer, a multi-platform open-source software package for visualization and medical image processing. We demonstrate the capabilities of the current stable binary release of SlicerAstro, which offers the following features: i) handling of FITS files and astronomical coordinate systems; ii) coupled 2-D/3-D visualization; iii) interactive filtering; iv) interactive 3-D masking; and v) interactive 3-D modeling. In addition, SlicerAstro has been designed with a strong, stable, and modular C++ core, and its classes are also accessible via Python scripting, allowing great flexibility for user-customized visualization and analysis tasks.
    Comment: 18 pages, 11 figures, accepted by Astronomy and Computing. SlicerAstro link: https://github.com/Punzo/SlicerAstro/wiki#get-slicerastr
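
    The interactive filtering and 3-D masking features operate on the intensity values of an HI data cube. The standalone sketch below illustrates the underlying idea (an intensity-threshold mask over a FITS cube, using astropy and numpy); it does not use SlicerAstro's actual classes, and the filename is a placeholder.

    # Standalone sketch of intensity-threshold masking on an HI cube;
    # SlicerAstro implements this interactively with its own C++/Python classes.
    import numpy as np
    from astropy.io import fits

    with fits.open("cube.fits") as hdul:      # placeholder filename
        cube = hdul[0].data                   # 3-D array: (velocity, dec, ra)

    sigma = np.nanstd(cube)                   # crude noise estimate
    mask = cube > 3.0 * sigma                 # keep voxels above 3-sigma
    print(f"{mask.sum()} of {mask.size} voxels above threshold")

    masked = np.where(mask, cube, np.nan)     # masked cube for visualization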

    The Dark Energy Survey Data Management System

    The Dark Energy Survey collaboration will study cosmic acceleration with a 5000 deg² grizY survey in the southern sky over 525 nights from 2011-2016. The DES data management (DESDM) system will be used to process and archive these data and the resulting science-ready data products. The DESDM system consists of an integrated archive, a processing framework, an ensemble of astronomy codes, and a data access framework. We are developing the DESDM system for operation in the high performance computing (HPC) environments at NCSA and Fermilab. Operating the DESDM system in an HPC environment offers both speed and flexibility. We will employ it for our regular nightly processing needs, and for more compute-intensive tasks such as large-scale image coaddition campaigns, extraction of weak lensing shear from the full survey dataset, and massive seasonal reprocessing of the DES data. Data products will be available to the Collaboration, and later to the public, through a virtual-observatory-compatible web portal. Our approach leverages investments in publicly available HPC systems, greatly reducing hardware and maintenance costs to the project, which must deploy and maintain only the storage, database platforms, and orchestration and web portal nodes that are specific to DESDM. In Fall 2007, we tested the current DESDM system on both simulated and real survey data. We used TeraGrid to process 10 simulated DES nights (3 TB of raw data), ingesting and calibrating approximately 250 million objects into the DES Archive database. We also used DESDM to process and calibrate over 50 nights of survey data acquired with the Mosaic2 camera. Comparison to truth tables in the case of the simulated data, and internal crosschecks in the case of the real data, indicate that astrometric and photometric data quality is excellent.
    Comment: To be published in the proceedings of the SPIE conference on Astronomical Instrumentation (held in Marseille in June 2008). This preprint is made available with the permission of SPIE. Further information, together with a preprint containing full-quality images, is available at http://desweb.cosmology.uiuc.edu/wik
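
    As a rough illustration of the kind of orchestration such a processing framework performs (each raw exposure run through ordered stages before ingestion into the archive), a hedged sketch follows. The stage names and functions are invented and do not correspond to the actual DESDM codes.

    # Invented sketch of a staged nightly pipeline; not the DESDM framework.
    from typing import Callable, List

    def detrend(exposure: str) -> str:
        return exposure + ".detrended"   # stand-in for instrument-signature removal

    def astrometry(exposure: str) -> str:
        return exposure + ".wcs"         # stand-in for astrometric calibration

    def photometry(exposure: str) -> str:
        return exposure + ".cat"         # stand-in for catalog extraction

    PIPELINE: List[Callable[[str], str]] = [detrend, astrometry, photometry]

    def process_night(exposures: List[str]) -> List[str]:
        """Run each exposure through every stage, in order, as one nightly job."""
        products = []
        for exp in exposures:
            for stage in PIPELINE:
                exp = stage(exp)
            products.append(exp)         # final product would be ingested
        return products

    print(process_night(["exp0001.fits", "exp0002.fits"]))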