
    Workflow completion patterns

    The most common correctness requirement for a (business) workflow is the completion requirement, imposing that, in some form, every case instance of the workflow reaches its final state. In this paper, we define three workflow completion patterns, called mandatory, optional and possible completion. These patterns are formalized in terms of the temporal logic CTL* to remove ambiguities, allow for easy comparison, and enable direct applicability. In contrast to existing methods, we do not look at the control flow in isolation but include some data information as well. In this way the analysis remains tractable but gains precision. Together with our previous work on data-flow (anti-)patterns, this paper is a significant step towards a unifying framework for complete workflow verification, using the well-developed, stable, adaptable, and effective model-checking approach.
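
    For illustration only, completion properties of this kind are typically written in CTL/CTL* along the following lines, assuming an atomic proposition final that holds exactly in the final state of a case; the paper's precise formalisations of the three named patterns may differ:

        AF\,\mathit{final}        % every execution eventually reaches the final state (completion is enforced)
        AG\,EF\,\mathit{final}    % from every reachable state, reaching the final state remains possible
        EF\,\mathit{final}        % at least one execution reaches the final state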

    Patterns-based Evaluation of Open Source BPM Systems: The Cases of jBPM, OpenWFE, and Enhydra Shark

    In keeping with the proliferation of free software development initiatives and the increased interest in the business process management domain, many open source workflow and business process management systems have appeared during the last few years and are now under active development. This upsurge gives rise to two important questions: what are the capabilities of these systems, and how do they compare to each other and to their closed source counterparts? In other words, what is the state of the art in the area? To gain insight into the area, we have conducted an in-depth analysis of three of the major open source workflow management systems - jBPM, OpenWFE and Enhydra Shark - the results of which are reported here. This analysis is based on the workflow patterns framework and provides a continuation of the series of evaluations performed using the same framework on closed source systems, business process modeling languages and web-service composition standards. The results from the evaluations of the three open source systems are compared with each other and also with the results from evaluations of three representative closed source systems - Staffware, WebSphere MQ and Oracle BPEL PM - documented in earlier works. The overall conclusion is that open source systems are targeted more toward developers than business analysts. They generally provide less support for the patterns than closed source systems, particularly with respect to the resource perspective, which describes the various ways in which work is distributed amongst business users and managed through to completion.

    Optimized Time Management for Declarative Workflows

    Declarative process models are increasingly used since they fit better with the nature of flexible process-aware information systems and the requirements of the stakeholders involved. When managing business processes, in addition, support for representing time and reasoning about it becomes crucial. Given a declarative process model, users may choose among different ways to execute it, i.e., there exist numerous possible enactment plans, each one presenting specific values for the given objective functions (e.g., overall completion time). This paper suggests a method for generating optimized enactment plans (e.g., plans minimizing overall completion time) from declarative process models with explicit temporal constraints. The latter cover a number of well-known workflow time patterns. The generated plans can be used for different purposes, such as providing personal schedules to users, facilitating early detection of critical situations, or predicting execution times for process activities. The proposed approach is applied to a range of test models of varying complexity. Although the optimization of process execution is a highly constrained problem, results indicate that our approach produces a satisfactory number of suitable solutions, i.e., solutions that are optimal in many cases.
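
    As a rough, self-contained illustration of the idea (not the paper's actual constraint-based method), the sketch below compares two hypothetical enactment plans permitted by a declarative model and keeps the one with the smaller overall completion time; the activities, durations and precedence constraints are assumptions made up for the example:

        # Hypothetical activity durations (hours) and alternative enactment plans,
        # each given as (activities, precedence constraints "x finishes before y starts").
        durations = {"A": 2, "B": 3, "C": 4, "D": 2, "E": 2}
        plans = {
            "plan_1": ({"A", "B", "D"}, {("A", "B"), ("B", "D")}),
            "plan_2": ({"A", "C", "E"}, {("A", "C"), ("A", "E")}),
        }

        def completion_time(activities, precedes):
            """Earliest-start schedule with unlimited resources: an activity starts
            once all its predecessors have finished; the overall completion time is
            the latest finish time."""
            finish = {}
            remaining = set(activities)
            while remaining:
                for act in sorted(remaining):
                    preds = [a for a, b in precedes if b == act]
                    if all(p in finish for p in preds):
                        start = max((finish[p] for p in preds), default=0)
                        finish[act] = start + durations[act]
                        remaining.remove(act)
                        break
            return max(finish.values())

        # Keep the plan with the minimal overall completion time.
        best = min(plans.items(), key=lambda kv: completion_time(*kv[1]))
        print("optimized enactment plan:", best[0],
              "completion time:", completion_time(*best[1]))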

    Translating standard process models to BPEL

    Standardisation of languages in the field of business process management has long been an elusive goal. Recently, though, consensus has built around one process implementation language, namely BPEL, and two fundamentally similar process modelling notations, namely UML Activity Diagrams (UML AD) and BPMN. This paper presents a technique for generating BPEL code from process models expressed in a core subset of BPMN and UML AD. This model-to-code translation is a necessary ingredient for the emergence of model-driven business process development environments based on these standards. The proposed translation has been implemented as an open source tool.
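
    By way of illustration only (the paper's translation algorithm handles a far richer subset than this), the sketch below maps a purely sequential, hypothetical process model to a simplified BPEL fragment using Python's standard XML library; partner links, port types and variables that a real translation would also generate are omitted:

        import xml.etree.ElementTree as ET

        # Hypothetical sequential process model: an ordered list of tasks,
        # each rendered as a BPEL <invoke> inside a <sequence>.
        model = ["ReceiveOrder", "CheckCredit", "ShipGoods"]

        def to_bpel(tasks, name="TranslatedProcess"):
            """Translate a purely sequential model into a (simplified) BPEL process."""
            process = ET.Element("process", {"name": name})
            sequence = ET.SubElement(process, "sequence")
            for task in tasks:
                # Real BPEL <invoke> elements also need partnerLink, portType, etc.
                ET.SubElement(sequence, "invoke", {"operation": task})
            return ET.tostring(process, encoding="unicode")

        print(to_bpel(model))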

    Workflow Partitioning and Deployment on the Cloud using Orchestra

    Orchestrating service-oriented workflows is typically based on a design model that routes both data and control through a single point - the centralised workflow engine. This causes scalability problems that include the unnecessary consumption of network bandwidth, high latency in transmitting data between the services, and performance bottlenecks. These problems are especially prominent when orchestrating workflows that are composed from services dispersed across distant geographical locations. This paper presents a novel workflow partitioning approach, which attempts to improve the scalability of orchestrating large-scale workflows. It permits the workflow computation to be moved towards the services providing the data in order to achieve optimal performance results. This is achieved by decomposing the workflow into smaller sub-workflows for parallel execution, and by determining the most appropriate network locations to which these sub-workflows are transmitted and subsequently executed. This paper demonstrates the efficiency of our approach using a set of experimental workflows that are orchestrated over Amazon EC2 and across several geographic network regions. Comment: To appear in Proceedings of the IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC 2014).
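
    As a rough sketch of the partitioning idea (the task names, services and regions below are assumptions, not taken from the paper), a workflow's tasks can be grouped by the network region of the service they invoke, so that each resulting sub-workflow can be shipped to, and orchestrated within, the region that hosts its data:

        from collections import defaultdict

        # Hypothetical workflow: each task names the service it invokes and the
        # network region (e.g. an EC2 region) where that service and its data live.
        workflow = [
            {"task": "fetch_readings", "service": "SensorStore", "region": "eu-west-1"},
            {"task": "clean_readings", "service": "Cleaner",     "region": "eu-west-1"},
            {"task": "aggregate",      "service": "Aggregator",  "region": "us-east-1"},
            {"task": "publish_report", "service": "Publisher",   "region": "us-east-1"},
        ]

        def partition_by_region(tasks):
            """Group tasks into sub-workflows, one per region, so each sub-workflow
            executes close to the services providing its data."""
            partitions = defaultdict(list)
            for t in tasks:
                partitions[t["region"]].append(t["task"])
            return dict(partitions)

        for region, sub_workflow in partition_by_region(workflow).items():
            print(f"deploy sub-workflow {sub_workflow} to an engine in {region}")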

    Distributed data mining in grid computing environments

    Computing-intensive data mining over inherently Internet-wide distributed data, referred to as Distributed Data Mining (DDM), calls for the support of a powerful Grid with an effective scheduling framework. DDM often shares the computing paradigm of local processing and global synthesizing. It involves every phase of Data Mining (DM) processes, which makes the workflow of DDM very complex; it can be modelled only by a Directed Acyclic Graph (DAG) with multiple data entries. Motivated by the need for a practical solution to the Grid scheduling problem for DDM workflows, this paper proposes a novel two-phase scheduling framework, comprising External Scheduling and Internal Scheduling, on a two-level Grid architecture (InterGrid, IntraGrid). Currently a DM IntraGrid, named DMGCE (Data Mining Grid Computing Environment), has been developed with a dynamic scheduling framework for competitive DAGs in a heterogeneous computing environment. This system is implemented in an established Multi-Agent System (MAS) environment, in which the reuse of existing DM algorithms is achieved by encapsulating them into agents. Practical classification problems from oil well logging analysis are used to measure the system performance. The detailed experiment procedure and result analysis are also discussed in this paper.
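
    As a much-simplified illustration of the internal (IntraGrid) scheduling step only (the actual DMGCE framework is agent-based and considerably more elaborate; the task costs and node speeds below are made-up numbers), a DAG of data-mining tasks can be list-scheduled onto heterogeneous nodes by giving each ready task to the node with the earliest estimated finish time:

        # Minimal list-scheduling sketch for a DAG on heterogeneous nodes.
        tasks = {"load": 2, "preprocess": 4, "mine_site_A": 6, "mine_site_B": 6, "synthesize": 3}
        edges = {("load", "preprocess"),
                 ("preprocess", "mine_site_A"), ("preprocess", "mine_site_B"),
                 ("mine_site_A", "synthesize"), ("mine_site_B", "synthesize")}
        node_speed = {"node1": 1.0, "node2": 2.0}   # relative processing speeds

        node_free = {n: 0.0 for n in node_speed}    # time each node becomes idle
        finish = {}
        while len(finish) < len(tasks):
            # Pick a task whose predecessors have all finished.
            ready = [t for t in tasks if t not in finish
                     and all(u in finish for u, v in edges if v == t)]
            task = ready[0]
            preds_done = max((finish[u] for u, v in edges if v == task), default=0.0)
            # Choose the node giving the earliest finish time for this task.
            best_node = min(node_speed,
                            key=lambda n: max(node_free[n], preds_done) + tasks[task] / node_speed[n])
            start = max(node_free[best_node], preds_done)
            finish[task] = start + tasks[task] / node_speed[best_node]
            node_free[best_node] = finish[task]
            print(f"{task} -> {best_node} (finishes at t={finish[task]:.1f})")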

    Cloud Process Execution Engine - Evaluation of the Core Concepts

    In this technical report we describe the Domain Specific Language (DSL) of the Workflow Execution Engine (WEE). Instead of interpreting an XML-based workflow description language like BPEL, the WEE uses a minimized but expressive set of statements that runs directly on top of a virtual machine that supports the Ruby language. Frameworks/virtual machines supporting this language include Java and .NET, and a standalone virtual machine also exists. Using a DSL gives us the advantage of maintaining a very compact code base of under 400 lines of code, as the host programming language implements concepts like parallelism, threads, and checking for syntactic correctness. The implementation just hooks into existing statements to keep track of the workflow and deliver information about the currently existing context variables and state to the environment that embeds the WEE.
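
    The WEE DSL itself is Ruby-based and is not reproduced here; purely as an analogy to the design idea described above, the sketch below shows in Python how an engine can hook into the host language's own constructs (function calls and threads) to track workflow state while leaving control flow and parallelism to the host language; the class and activity names are invented for the example:

        import threading

        class Engine:
            """Toy analogue, not the actual WEE: the host language provides control
            flow and parallelism, and the engine only hooks activity calls to
            record workflow state."""
            def __init__(self):
                self.log = []

            def activity(self, name, func, *args):
                # Hook around a plain function call to track workflow progress.
                self.log.append(("started", name))
                result = func(*args)
                self.log.append(("finished", name))
                return result

            def parallel(self, *branches):
                # Parallel branches map directly onto host-language threads.
                threads = [threading.Thread(target=b) for b in branches]
                for t in threads: t.start()
                for t in threads: t.join()

        engine = Engine()
        engine.activity("book_flight", lambda: "OK")
        engine.parallel(lambda: engine.activity("book_hotel", lambda: "OK"),
                        lambda: engine.activity("book_car", lambda: "OK"))
        print(engine.log)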