
    Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud

    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications but also complete business processes can be hosted on virtual cloud infrastructures. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
    Comment: Please cite as: S. Schulte, C. Janiesch, S. Venugopal, I. Weber, and P. Hoenisch (2015). Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud. Future Generation Computer Systems, Volume NN, Number N, NN-NN., http://dx.doi.org/10.1016/j.future.2014.09.00
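    The scheduling and resource-allocation concerns mentioned in this abstract can be pictured with a minimal sketch of an elasticity controller that leases or releases cloud resources based on the number of pending process steps. All names, thresholds, and capacity figures below are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of an elasticity decision for an elastic BPMS.
# Class names, thresholds, and capacities are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    vms: int           # virtual machines currently leased
    steps_per_vm: int  # process steps one VM can handle concurrently

def scaling_decision(pending_steps: int, pool: ResourcePool,
                     target_utilisation: float = 0.75) -> int:
    """Return how many VMs to acquire (>0) or release (<0)."""
    effective_capacity = int(pool.steps_per_vm * target_utilisation)
    required_vms = -(-pending_steps // effective_capacity)  # ceiling division
    return max(required_vms, 1) - pool.vms

pool = ResourcePool(vms=4, steps_per_vm=10)
print(scaling_decision(pending_steps=95, pool=pool))  # 10 -> lease ten more VMs
```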

    Workflow Concept of WS-PGRADE/gUSE


    Generating eScience Workflows from Statistical Analysis of Prior Data

    A number of workflow design tools have been developed specifically to enable easy graphical specification of workflows that ensure systematic scientific data capture, analysis, and precise provenance information. We believe that an important component missing from these existing workflow specification and enactment systems is integration with tools that enable prior detailed analysis of the existing data, and in particular statistical analysis. By thoroughly analyzing the existing relevant datasets first, it is possible to determine precisely where the existing data is sparse or insufficient and what further experimentation is required. Introducing statistical analysis to experimental design will reduce duplication and costs associated with fruitless experimentation and maximize opportunities for scientific breakthroughs. In this paper we describe a workflow specification system that we have developed for a particular eScience application (fuel cell optimization). Experimental workflow instances are generated as a result of detailed statistical analysis and interactive exploration of the existing datasets. This is carried out through a graphical data exploration interface that integrates the widely-used open source statistical analysis software package, R, as a web service.
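    As a toy illustration of statistically driven generation of experimental workflow instances, the sketch below bins an existing measurement series and proposes new experiments wherever prior data is sparse. The binning scheme, threshold, and variable names are assumptions for illustration only; the paper itself performs the analysis through R exposed as a web service.

```python
# Hedged sketch: find sparse regions of an existing dataset and emit candidate
# experiments (workflow instances). Bin width and threshold are illustrative.
from collections import Counter

def sparse_bins(samples, bin_width=10.0, min_count=5):
    """Return parameter bins with fewer than `min_count` prior measurements."""
    counts = Counter(int(x // bin_width) for x in samples)
    lo, hi = min(counts), max(counts)
    return [b * bin_width for b in range(lo, hi + 1) if counts.get(b, 0) < min_count]

existing_temperatures = [20, 21, 22, 35, 36, 37, 38, 39, 60, 61, 62, 63, 64, 65]
for t in sparse_bins(existing_temperatures):
    # Each sparse bin becomes a candidate workflow instance (a new experiment).
    print(f"schedule experiment at ~{t} C")
```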

    A programming system for process coordination in virtual organisations

    PhD thesis.
    Distributed business applications are increasingly being constructed by composing them from services provided by various online businesses. Typically, this leads to trading partners coming together to form virtual organisations (VOs). Each member of a VO maintains their autonomy, except with respect to their agreed goals. The structure of the Virtual Organisation may contain one dominant organisation that dictates the method of achieving the goals, or the members may be considered peers of equal importance. The goals of VOs can be defined by the shared global business processes they contain. To be able to execute these business processes, VOs require a flexible enactment model, as there may be no single ‘owner’ of the business process and therefore no natural place to enact the business processes. One solution is centralised enactment using a trusted third party, but in some cases this may not be acceptable (for instance because of security reasons). This thesis presents a programming system that allows centralised as well as distributed enactment, where each organisation enacts part of the business process. To achieve distributed enactment we must address the problem of specifying the business process in a manner that is amenable to distribution. The first contribution of this thesis is the presentation of the Task Model, a set of languages and notations for describing workflows that can be enacted in a centralised or decentralised manner. The business processes that we specify will coordinate the services that each organisation owns. The second contribution of this thesis is the presentation of a method of describing the observable behaviour of these services. The language we present, SSDL, provides a flexible and extensible way of describing the messaging behaviour of Web Services. We present a method for checking that a set of services described in SSDL are compatible with each other, and also that a workflow interacts with a service in the desired manner. The final contribution of this thesis is the presentation of an abstract architecture and prototype implementation of a decentralised workflow engine. The prototype is able to enact workflows described in the Task Model notation in either a centralised or decentralised scenario.
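    One way to picture the kind of compatibility check performed over SSDL-style descriptions is a simple duality test between the observable messaging behaviour of two parties: every message one side sends, the other must expect to receive, in the same order. The list-based protocol encoding and the function below are an illustrative assumption, not the thesis's actual formalism.

```python
# Hedged sketch of a duality-based compatibility check between two services'
# observable messaging behaviour. Encoding and names are illustrative only.
def compatible(protocol_a, protocol_b):
    """Protocols are ordered lists of ('send' | 'recv', message_name) pairs."""
    if len(protocol_a) != len(protocol_b):
        return False
    dual = {"send": "recv", "recv": "send"}
    return all(da == dual[db] and ma == mb
               for (da, ma), (db, mb) in zip(protocol_a, protocol_b))

buyer    = [("send", "Order"), ("recv", "Invoice"), ("send", "Payment")]
supplier = [("recv", "Order"), ("send", "Invoice"), ("recv", "Payment")]
print(compatible(buyer, supplier))  # True: each send is matched by a receive
```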

    A language-agnostic framework for the analysis of the syntactic structure of process fragments

    Process fragments are a cornerstone of process modeling in both Service Oriented Architecture and Business Process Management. The state of the art lacks shared, language-agnostic definitions of the basic concepts and properties of process fragments. This absence of a common foundation for research on process fragments hinders the comparison and reuse of results in the state of the art, and makes it impossible to reach agreed, intensional definitions of the different typologies of process fragments. The present work aims to fill this gap by providing a framework of language-agnostic definitions of properties of the syntactic structure of process fragments based on the mereotopology of discrete space. Alongside familiar mereological concepts like inclusion, overlap and disjointness, we cover fundamental concepts for process fragments like (dis)connection, self-connectedness, borders, interiors and exteriors. Besides providing a foundation for further research on process fragments, we discuss the immediate application of the concepts defined in this work in the scope of the change management of process models.
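    For orientation, the following are the standard mereotopological definitions of overlap, disjointness, and self-connectedness, written in common notation that may differ from the paper's exact formalisation. Here P(x,y) reads "x is part of y", C(x,y) "x is connected to y", and x + y denotes the mereological sum.

```latex
% Standard mereotopological definitions (common notation, not necessarily
% the paper's exact formalisation).
\begin{align*}
  O(x,y)  &\equiv \exists z\,\bigl(P(z,x) \land P(z,y)\bigr)
          && \text{(overlap)}\\
  D(x,y)  &\equiv \lnot O(x,y)
          && \text{(disjointness)}\\
  SC(x)   &\equiv \forall y\,\forall z\,\bigl(x = y + z \rightarrow C(y,z)\bigr)
          && \text{(self-connectedness)}
\end{align*}
```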

    Web Services Support for Dynamic Business Process Outsourcing

    Outsourcing of business processes is crucial for organizations to be effective, efficient and flexible. To meet fast-changing market conditions, dynamic outsourcing is required, in which business relationships are established and enacted on-the-fly in an adaptive, fine-grained way unrestricted by geographic distance. This requires automated means both for the establishment of outsourcing relationships and for the enactment of services performed in these relationships over electronic channels. Due to wide industry support and the underlying model of loose coupling of services, Web services are increasingly becoming the mechanism of choice to connect organizations across organizational boundaries. This paper analyzes to what extent Web services support the dynamic process outsourcing paradigm. We discuss contract-based dynamic business process outsourcing to define requirements and then introduce the Web services framework. Based on this, we investigate the match between the two. We observe that the Web services framework requires further support for cross-organizational business processes and mechanisms for contracting, QoS management and process-based transaction support, and we suggest ways to fill those gaps.

    On the construction of decentralised service-oriented orchestration systems

    Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: they are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic and leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system’s architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system, which concludes that decentralised orchestration provides scalability benefits over centralised orchestration and improves the overall performance of executing a service-oriented workflow.
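    The computation-placement idea can be sketched as a small cost-minimising assignment of workflow tasks to distributed engines, so that intermediate data need not flow through a central engine. The engine names, service names, and cost figures below are invented for illustration and do not come from the thesis.

```python
# Hedged sketch of computation placement: assign each task to the engine that
# is cheapest to reach from the service producing its input data.
# All identifiers and cost figures are illustrative assumptions.
transfer_cost = {            # engine -> service -> notional network cost
    "engine_eu": {"svc_genomics": 1, "svc_imaging": 8},
    "engine_us": {"svc_genomics": 9, "svc_imaging": 2},
}
tasks = {                    # task -> service whose output it consumes
    "align_reads": "svc_genomics",
    "segment_scan": "svc_imaging",
}

placement = {
    task: min(transfer_cost, key=lambda engine: transfer_cost[engine][svc])
    for task, svc in tasks.items()
}
print(placement)  # {'align_reads': 'engine_eu', 'segment_scan': 'engine_us'}
```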