
    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of the state of the art in Grid workflow systems, but also identifies the areas that need further research.
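    As a rough illustration of how such a taxonomy can be put to work, the sketch below encodes a few classification dimensions as a Python record and groups surveyed systems by one of them; the dimension names and example systems are illustrative assumptions, not the paper's exact categories.

        # Hypothetical encoding of a Grid-workflow taxonomy for comparing systems.
        # Dimension names and example entries are assumptions for illustration.
        from dataclasses import dataclass

        @dataclass
        class WorkflowSystemProfile:
            name: str
            structure: str        # e.g. DAG vs. non-DAG workflow structure
            specification: str    # e.g. abstract vs. concrete workflow model
            scheduling: str       # e.g. centralized, hierarchical, decentralized
            fault_tolerance: str  # e.g. task-level retry, workflow-level redundancy

        systems = [
            WorkflowSystemProfile("SystemA", "DAG", "abstract", "centralized", "task-level retry"),
            WorkflowSystemProfile("SystemB", "non-DAG", "concrete", "decentralized", "workflow-level redundancy"),
        ]

        # Grouping by one taxonomy dimension exposes design similarities at a glance.
        by_scheduling = {}
        for s in systems:
            by_scheduling.setdefault(s.scheduling, []).append(s.name)
        print(by_scheduling)  # {'centralized': ['SystemA'], 'decentralized': ['SystemB']}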

    Using the Business Process Execution Language for Managing Scientific Processes

    This paper describes the use of the Business Process Execution Language for Web Services (BPEL4WS/BPEL) for managing scientific workflows. This work is the result of our attempt to adopt a Service-Oriented Architecture in order to perform Web-services-based simulation of metal vapor lasers. Scientific workflows can be more demanding in their requirements than business processes. In the context of addressing these requirements, the features of the BPEL4WS specification are discussed; it is widely regarded as the de-facto standard for orchestrating Web services for business workflows. A typical use case, calculating the electric field potential and intensity distributions, is discussed as an example of building a BPEL process that performs a distributed simulation composed of loosely coupled services.
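    A BPEL process expresses such an orchestration in XML; the minimal Python sketch below mirrors the same idea for the use case above, invoking one service first and then two independent ones in parallel, as a BPEL sequence followed by a flow would. All endpoint URLs and payload fields are invented placeholders, not the paper's actual services.

        # Conceptual analogue of a BPEL orchestration in Python.
        # Service URLs and payload fields are hypothetical placeholders.
        import concurrent.futures
        import json
        import urllib.request

        def invoke(url, payload):
            req = urllib.request.Request(
                url, data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)

        def simulate(mesh):
            # <sequence>: first compute the potential distribution ...
            potential = invoke("http://example.org/laser/potential", {"mesh": mesh})
            # <flow>: ... then derive the intensity and build the report,
            # which are independent and can run concurrently.
            with concurrent.futures.ThreadPoolExecutor() as pool:
                intensity = pool.submit(invoke, "http://example.org/laser/intensity",
                                        {"potential": potential})
                report = pool.submit(invoke, "http://example.org/laser/report",
                                     {"potential": potential})
                return intensity.result(), report.result()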

    Change Support in Process-Aware Information Systems - A Pattern-Based Analysis

    In today's dynamic business world the economic success of an enterprise increasingly depends on its ability to react to changes in its environment in a quick and flexible way. Process-aware information systems (PAIS) offer promising perspectives in this respect and are increasingly employed for operationally supporting business processes. To provide effective business process support, flexible PAIS are needed which do not freeze existing business processes, but allow for loosely specified processes, which can be detailed during run-time. In addition, PAIS should enable authorized users to flexibly deviate from the predefined processes if required (e.g., by allowing them to dynamically add, delete, or move process activities) and to evolve business processes over time. At the same time PAIS must ensure consistency and robustness. The emergence of different process support paradigms and the lack of methods for comparing existing change approaches have made it difficult for PAIS engineers to choose an adequate technology. In this paper we suggest a set of change patterns and change support features to foster the systematic comparison of existing process management technology with respect to process change support. Based on these change patterns and features, we provide a detailed analysis and evaluation of selected systems from both academia and industry. The identified change patterns and change support features facilitate the comparison of change support frameworks, and consequently will support PAIS engineers in selecting the right technology for realizing flexible PAIS. In addition, this work can be used as a reference for implementing more flexible PAIS.
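    A minimal sketch of three of the adaptation patterns the paper refers to (serial insert, move, and delete of activities), applied to a deliberately simplified sequential process model; the model and the activity names are assumptions for illustration only.

        # Toy sequential process model supporting three run-time change patterns.
        # Real PAIS additionally check consistency of running instances.
        class ProcessModel:
            def __init__(self, activities):
                self.activities = list(activities)

            def insert_after(self, anchor, activity):
                # Serial insert: add a new activity directly after an anchor.
                self.activities.insert(self.activities.index(anchor) + 1, activity)

            def delete(self, activity):
                # Delete: remove an obsolete activity.
                self.activities.remove(activity)

            def move_after(self, activity, anchor):
                # Move: reorder an existing activity relative to an anchor.
                self.delete(activity)
                self.insert_after(anchor, activity)

        p = ProcessModel(["register", "examine", "bill"])
        p.insert_after("examine", "lab_test")   # ad-hoc insert at run-time
        p.move_after("lab_test", "register")    # reorder activities
        p.delete("bill")                        # drop an obsolete step
        print(p.activities)                     # ['register', 'lab_test', 'examine']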

    Inter-organizational fault management: Functional and organizational core aspects of management architectures

    Outsourcing -- successful, and sometimes painful -- has become one of the hottest topics in IT service management discussions over the past decade. IT services are outsourced to external service providers in order to reduce the effort and overhead of delivering these services within one's own organization. More recently, IT service providers themselves have also started either to outsource service parts or to deliver those services in a non-hierarchical cooperation with other providers. Splitting a service into several service parts is a non-trivial task, as they have to be implemented, operated, and maintained by different providers. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems, which reduce the quality of service, quickly and reliably. In this article we present the results of a thorough use-case-based requirements analysis for an architecture for inter-organizational fault management (ioFMA). Furthermore, a concept of the organizational and functional model of the ioFMA is given.
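    To make the fault-localization problem concrete, here is a small illustrative sketch (not the paper's ioFMA itself): a customer-visible fault is first diagnosed by the provider operating the affected service part and, if no local root cause is found, escalated across organizational boundaries along the service-part dependencies. All provider names and the dependency model are assumptions.

        # Toy model of cross-provider fault localization; names are invented.
        class Provider:
            def __init__(self, name, parts, known_faults=()):
                self.name = name
                self.parts = set(parts)
                self.known_faults = set(known_faults)  # parts with a local root cause

            def diagnose(self, part):
                return part in self.known_faults

        def locate_fault(part, providers, dependencies):
            # Ask the provider operating the faulty part first.
            owner = next(p for p in providers if part in p.parts)
            if owner.diagnose(part):
                return f"{owner.name} ({part})"
            # Escalate: check the parts this service depends on, which other
            # providers implement, operate, and maintain.
            for dep in dependencies.get(part, ()):
                for p in providers:
                    if dep in p.parts and p.diagnose(dep):
                        return f"{p.name} ({dep})"
            return "unresolved"

        providers = [
            Provider("NetCo", parts={"transport"}, known_faults={"transport"}),
            Provider("AppCo", parts={"portal"}),   # portal degraded, cause elsewhere
        ]
        print(locate_fault("portal", providers, {"portal": {"transport"}}))
        # -> 'NetCo (transport)'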

    A Dynamic Workflow Simulation Platform

    In numerical optimization algorithms, errors at the application level considerably affect the performance of execution on distributed infrastructures. Hours of execution can be lost merely due to bad parameter configurations. Though current grid workflow systems have facilitated the deployment of complex scientific applications on distributed environments, the error-handling mechanisms remain mostly those provided by the middleware. In this paper, we propose a collaborative platform for the execution of scientific experiments in which we integrate a new approach for treating application errors, using the dynamicity and exception-handling mechanisms of the YAWL workflow management system. Thus, application errors are correctly detected and appropriate handling procedures are triggered in order to save as much as possible of the work already executed.
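    The core idea, detecting application-level errors and triggering a declared handling procedure instead of discarding completed work, can be sketched as follows; the retry policy and the use of a Python exception to stand in for YAWL's exception-handling constructs are assumptions for illustration.

        # Sketch: tasks declare handlers that repair application-level errors,
        # so already-computed results in `results` are preserved on retry.
        def run_workflow(tasks, handlers, max_retries=2):
            results = {}
            for name, task in tasks:
                for attempt in range(max_retries + 1):
                    try:
                        results[name] = task(results)
                        break
                    except ValueError as err:  # application-level error
                        fix = handlers.get(name)
                        if fix is None or attempt == max_retries:
                            raise              # unrecoverable: abort, keep results
                        fix(results, err)      # e.g. repair parameters, then retry
            return results

        # Example: a task fails on a bad parameter; its handler repairs it.
        def simulate(results):
            if results.get("config") != "fixed":
                raise ValueError("bad parameter configuration")
            return 42

        def repair(results, err):
            results["config"] = "fixed"

        print(run_workflow([("simulate", simulate)], {"simulate": repair}))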

    A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Background: Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results: To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions: PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
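    The dataflow idea can be illustrated in plain Python (a conceptual sketch, not PaPy's actual API): user-written stages form a linear pipeline, input items flow through in batches, and each batch is mapped over a worker pool, so the batch size tunes the trade-off between parallelism and memory consumption.

        # Conceptual sketch of a batched, pooled map-style pipeline.
        from itertools import islice
        from multiprocessing import Pool

        def batches(iterable, size):
            it = iter(iterable)
            while chunk := list(islice(it, size)):
                yield chunk

        def run_pipeline(stages, items, batch_size=4, workers=2):
            # Each batch undergoes the successive functional transformations;
            # pool.map evaluates one stage over the whole batch in parallel.
            with Pool(workers) as pool:
                for batch in batches(items, batch_size):
                    for stage in stages:
                        batch = pool.map(stage, batch)
                    yield from batch

        if __name__ == "__main__":
            # Toy stages standing in for user-written components.
            print(list(run_pipeline([str.upper, str.strip], ["  acgt ", " ttga "])))
            # -> ['ACGT', 'TTGA']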

    Autonomic Execution of Computational Workflows

    This paper describes the application of an autonomic paradigm to manage the complexity of software systems such as computational workflows. To demonstrate our approach, the workflow and the services comprising it are treated as managed resources controlled by hierarchically organized autonomic managers. By applying service-oriented software engineering principles, in particular enterprise integration patterns, we have developed a scalable, agile, self-healing environment for the execution of dynamic, data-driven workflows which are capable of assuring scientific fidelity despite unavoidable faults and without human intervention.
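    A minimal sketch of the hierarchical monitor-analyze-plan-execute control loop such autonomic managers implement; the managed-resource interface and the recovery policy below are assumptions for illustration, not the paper's implementation.

        # Toy managed resource exposing a monitoring and recovery interface.
        class ManagedService:
            def __init__(self, name):
                self.name, self.failed = name, False
            def monitor(self):
                return {"healthy": not self.failed}
            def plan_recovery(self, state):
                return "restart"
            def execute(self, plan):
                self.failed = False

        class AutonomicManager:
            def __init__(self, resource, children=()):
                self.resource = resource
                self.children = list(children)   # managers form a hierarchy

            def control_loop(self):
                for child in self.children:      # delegate to lower-level managers
                    child.control_loop()
                state = self.resource.monitor()            # Monitor
                if not state["healthy"]:                   # Analyze
                    plan = self.resource.plan_recovery(state)  # Plan
                    self.resource.execute(plan)            # Execute, no human in the loop

        svc = ManagedService("transform-step")
        svc.failed = True
        mgr = AutonomicManager(svc, children=[AutonomicManager(ManagedService("sub-step"))])
        mgr.control_loop()
        print(svc.failed)  # False: the failed workflow step was recovered autonomically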

    A framework for the development and maintenance of adaptive, dynamic, context-aware information services

    This paper presents an agent-based methodological approach to design distributed service-oriented systems which can adapt their behaviour according to changes in the environment and in the user needs, even taking the initiative to make suggestions and proactive choices. The highly dynamic, regulated, complex nature of the distributed, interconnected services is tackled through a methodological framework composed of three interconnected levels. The framework relies on coordination and organisational techniques, as well as on semantically annotated Web services, to design, deploy and maintain a distributed system, using both a top-down and a bottom-up approach. We present results based on a real use case: interactive community displays with tourist information and services, dynamically personalised according to user context and preferences.
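    As a toy illustration of the personalisation step in that use case, the sketch below filters and ranks candidate services against the current user context; the context fields and the scoring rule are invented for illustration and are not the paper's framework.

        # Hypothetical context-driven ranking of tourist services.
        def rank_services(services, context):
            def score(svc):
                s = 2 if context["language"] in svc["languages"] else 0
                s += 1 if svc["category"] in context["interests"] else 0
                return s
            # Keep only relevant services, best match first.
            return sorted((s for s in services if score(s) > 0),
                          key=score, reverse=True)

        context = {"language": "en", "interests": {"museum", "food"}}
        services = [
            {"name": "city-museum", "category": "museum", "languages": {"en", "es"}},
            {"name": "opera-night", "category": "music", "languages": {"es"}},
        ]
        print([s["name"] for s in rank_services(services, context)])
        # -> ['city-museum']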