144 research outputs found

    BPM News - Part 3

    The BPM column of the EMISA Forum reports on current topics, projects, and events from the BPM field. This installment focuses on the standardization of process description languages and notations in general and on BPEL4WS (Business Process Execution Language for Web Services) in particular, for which Jan Mendling of the WirtschaftsuniversitĂ€t Wien contributes a "current keyword" piece. In addition, readers receive a summary of two workshops held in the first half of 2006 on the topics "Flexibility of Process-Oriented Information Systems" and "Collaborative Processes", as well as a BPM event calendar for the second half of 2006.

    A Classification of BPEL Extensions

    The Business Process Execution Language (BPEL) has emerged as the de facto standard for implementing business processes. The language is designed to be extensible so that additional valuable features can be included in a standardized manner. A number of BPEL extensions are available; however, they have been neither classified nor evaluated with respect to their compliance with the BPEL standard. This article fills this gap by providing a framework for classifying BPEL extensions, a classification of existing extensions, and a guideline for designing BPEL extensions.

    Supporting Quality of Service in Scientific Workflows

    While workflow management systems have been utilized in enterprises to support businesses for almost two decades, the use of workflows in scientific environments was fairly uncommon until recently. Nowadays, scientists use workflow systems to conduct scientific experiments, simulations, and distributed computations. However, most scientific workflow management systems have not been built on existing workflow technology; rather, they have been designed and developed from scratch. Due to the lack of generality of early scientific workflow systems, many domain-specific workflow systems have been developed. Generally speaking, these domain-specific approaches lack common acceptance and tool support and offer lower robustness than business workflow systems.
    In this thesis, the use of the industry standard BPEL, a workflow language for modeling business processes, is proposed for the modeling and execution of scientific workflows. Due to the widespread use of BPEL in enterprises, a number of stable and mature software products exist. The language is expressive (Turing-complete) and not restricted to specific applications. BPEL is well suited for the modeling of scientific workflows, but existing implementations of the standard lack important features that are necessary for the execution of scientific workflows. This work presents components that extend an existing implementation of the BPEL standard and eliminate the identified weaknesses. The components thus provide the technical basis for the use of BPEL in academia. The particular focus is on so-called non-functional (Quality of Service) requirements: scalability, reliability (fault tolerance), data security, and the cost of executing a workflow. From a technical perspective, the workflow system must be able to interface with the middleware systems that are commonly used by the scientific workflow community to allow access to heterogeneous, distributed resources (especially Grid and Cloud resources). The major components cover exactly these requirements:
    - Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.
    - Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
    - Cost-Aware, Data-Flow-Aware Scheduler: The majority of scientific workflow systems take only the performance and utilization of resources into account when scheduling workflow steps. The presented workflow system goes beyond that: by defining preference values for the weighting of cost against the anticipated workflow execution time, workflow users can influence the resource selection process. The developed multi-objective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.
    - Security Extensions: Because it supports various encryption, signature, and authentication mechanisms (e.g., the Grid Security Infrastructure), the workflow system guarantees data security during the transfer of workflow data.
    Furthermore, this work identifies the need to equip workflow developers with workflow modeling tools that can be used intuitively. The dissertation presents two modeling tools that support users with different needs. The first tool, DAVO (Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO offers extensibility and customizability for different application domains; these features are used in the implementation of the second tool, the SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
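
    The cost/time weighting performed by the scheduler can be illustrated with a small sketch. The class names, the linear scoring model, and the normalization below are assumptions for illustration only and do not reproduce the dissertation's actual multi-objective heuristic.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of a cost/time-weighted resource selection step.
// Resource, pickResource and the linear scoring model are illustrative
// assumptions, not the scheduler implemented in the thesis.
public class WeightedScheduler {

    record Resource(String name, double estimatedCost, double estimatedRuntime) {}

    /**
     * Selects the resource with the lowest weighted score, where user-defined
     * preference weights trade off monetary cost against expected execution time.
     * Costs and runtimes are normalized to [0,1] so that the weights are comparable.
     */
    static Resource pickResource(List<Resource> candidates, double costWeight, double timeWeight) {
        double maxCost = candidates.stream().mapToDouble(Resource::estimatedCost).max().orElse(1.0);
        double maxTime = candidates.stream().mapToDouble(Resource::estimatedRuntime).max().orElse(1.0);
        return candidates.stream()
                .min(Comparator.comparingDouble((Resource r) ->
                        costWeight * (r.estimatedCost() / maxCost)
                      + timeWeight * (r.estimatedRuntime() / maxTime)))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Resource> pool = List.of(
                new Resource("local-cluster", 0.0, 120.0),   // free but slow
                new Resource("cloud-small",   0.8,  60.0),
                new Resource("cloud-large",   3.5,  20.0));
        // A user who weights cost at 70% and time at 30% ends up on a cheaper resource.
        System.out.println(pickResource(pool, 0.7, 0.3).name());
    }
}
```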

    Conceptual modelling of adaptive web services based on high-level Petri nets

    Service technology, driven by the SOA architectural style and enabled by Web services, is rapidly gaining maturity and acceptance. Consequently, most private and corporate organizations worldwide are embracing this paradigm by publishing, requesting, and composing their businesses and applications in the form of (web) services. Nevertheless, to face harsh competition, such service-oriented cross-organizational applications are increasingly pressed to be highly composite, adaptive, knowledge-intensive, and very reliable. In contrast, Web service standards such as WSDL, WSBPEL, WS-CDL, and many others offer only static, manual, purely process-centric, and ad-hoc techniques for deploying such services. The main objective of this thesis is therefore to push the development of service-driven applications towards more reliability, dynamic adaptability, and knowledge-intensiveness. The thesis puts forward an innovative framework based on distributed high-level Petri nets and event-driven business rules. More precisely, we developed a new variant of the high-level Petri net formalism, called Service-based Petri nets (CSrv-Nets), which exhibits the following characteristics. Firstly, the framework is supported by a stepwise methodology that starts with diagrammatic UML class diagrams and business rules and leads to dynamically adaptive service specifications. Secondly, the framework soundly integrates behavioural event-driven business rules and stateful services, both at the type and the instance level and with an inherent distribution. Thirdly, the framework intrinsically permits validation through guided graphical animation. Fourthly, the framework explicitly separates orchestration, for modelling rule-intensive single services, from choreography, for coordinating several services through their governing interactive business rules. Fifthly, the framework is based on a two-level conceptualization: (1) the modelling of any rule-centric service with CSrv-Nets; (2) the smooth upgrading of this service model with an adaptability level that allows any rule-centric behaviour of the running business activities to be dynamically shifted up and down.
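
    As background to the formalism, the sketch below shows the classical place/transition token game that high-level nets such as CSrv-Nets refine with coloured tokens, guards, and business rules. All names are illustrative assumptions; the CSrv-Nets semantics itself is not reproduced here.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the classical Petri-net token game underlying high-level
// formalisms. Class and method names are illustrative; coloured tokens,
// guards and event-driven business rules are deliberately omitted.
public class TokenGame {

    /** A transition consumes one token from each input place and produces one on each output place. */
    record Transition(String name, List<String> inputs, List<String> outputs) {}

    static boolean isEnabled(Map<String, Integer> marking, Transition t) {
        return t.inputs().stream().allMatch(p -> marking.getOrDefault(p, 0) > 0);
    }

    static Map<String, Integer> fire(Map<String, Integer> marking, Transition t) {
        Map<String, Integer> next = new HashMap<>(marking);
        t.inputs().forEach(p -> next.merge(p, -1, Integer::sum));
        t.outputs().forEach(p -> next.merge(p, 1, Integer::sum));
        return next;
    }

    public static void main(String[] args) {
        Transition receiveOrder =
                new Transition("receiveOrder", List.of("requestPending"), List.of("orderReceived"));
        Map<String, Integer> marking = Map.of("requestPending", 1);
        if (isEnabled(marking, receiveOrder)) {
            // prints the successor marking: requestPending=0, orderReceived=1 (in map order)
            System.out.println(fire(marking, receiveOrder));
        }
    }
}
```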

    An XML-Based Streaming Concept for Business Process Execution

    Service-oriented environments are the central backbone of today's enterprise workflows. These workflows include traditional process types such as travel booking or order processing as well as data-intensive integration processes such as operational business intelligence and data analytics. For the latter process types, current execution semantics and concepts do not scale well in terms of performance and resource consumption. In this paper, we present a concept for data streaming in business processes that is inspired by the typical execution semantics of data management environments. To this end, we present a conceptual process and execution model that leverages the idea of stream-based service invocation for scalable and efficient process execution. Selected evaluation results show that it outperforms the execution model of current process engines.
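
    The contrast between materialized and stream-based activity execution can be sketched as follows. The activity names and the use of java.util.stream are illustrative assumptions; the paper defines its own process and execution model rather than this Java pipeline.

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Stream;

// Illustrative sketch of pipelined ("streaming") activity execution versus
// materializing the full message between activities. Activity names and the
// use of java.util.stream are assumptions for illustration only.
public class StreamingInvocation {

    // Two process activities modelled as per-item transformations.
    static final Function<String, String> enrich = item -> item + ":enriched";
    static final Function<String, String> transform = String::toUpperCase;

    public static void main(String[] args) {
        List<String> incomingItems = List.of("order-1", "order-2", "order-3");

        // Materialized execution: each activity consumes and produces the whole data set.
        List<String> afterEnrich = incomingItems.stream().map(enrich).toList();
        List<String> afterTransform = afterEnrich.stream().map(transform).toList();
        System.out.println(afterTransform);

        // Streamed execution: items flow through both activities one at a time,
        // so no intermediate collection holding the complete message is built.
        Stream<String> pipeline = incomingItems.stream().map(enrich).map(transform);
        pipeline.forEach(System.out::println);
    }
}
```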

    Towards computerizing intensive care sedation guidelines: design of a rule-based architecture for automated execution of clinical guidelines

    Background: Computerized ICUs rely on software services to convey the medical condition of their patients as well as assisting the staff in taking treatment decisions. Such services are useful for following clinical guidelines quickly and accurately. However, the development of services is often time-consuming and error-prone. Consequently, many care-related activities are still conducted based on manually constructed guidelines. These are often ambiguous, which leads to unnecessary variations in treatments and costs. The goal of this paper is to present a semi-automatic verification and translation framework capable of turning manually constructed diagrams into ready-to-use programs. This framework combines the strengths of the manual and service-oriented approaches while decreasing their disadvantages. The aim is to close the gap in communication between the IT and the medical domain, leading to a less time-consuming and error-prone development phase and a shorter clinical evaluation phase.
    Methods: A framework is proposed that semi-automatically translates a clinical guideline, expressed as an XML-based flow chart, into a Drools Rule Flow by employing semantic technologies such as ontologies and SWRL. An overview of the architecture is given and all technology choices are thoroughly motivated. Finally, it is shown how this framework can be integrated into a service-oriented architecture (SOA).
    Results: The applicability of the Drools Rule language for expressing clinical guidelines is evaluated by translating an example guideline, namely the sedation protocol used for the anaesthetization of patients, into a Drools Rule Flow and by executing and deploying this rule-based application as part of a SOA. The results show that the performance of Drools is comparable to other technologies such as Web Services and increases with the number of decision nodes present in the Rule Flow. Most delays are introduced by loading the Rule Flows.
    Conclusions: The framework is an effective solution for computerizing clinical guidelines, as it allows for quick development, evaluation, and human-readable visualization of the rules and has good performance. By monitoring the parameters of the patient to automatically detect exceptional situations and problems, and by notifying the medical staff of tasks that need to be performed, the computerized sedation guideline improves the execution of the guideline.
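
    The rule-based execution described in the Methods section can be sketched against the current Drools (KIE) API, which differs from the Drools 5 API that was current when the paper was written. The session name "sedationSession", the process id "sedationGuideline", and the Patient fact are invented for illustration; the sketch only shows the general pattern of inserting facts and starting a rule flow from a stateful session, and it assumes the Drools/KIE dependencies and a matching kmodule configuration on the classpath.

```java
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

// Hedged sketch of executing a rule flow with the Drools (KIE) API.
// Session name, process id and the Patient fact are hypothetical and are
// not taken from the paper's code base.
public class SedationGuidelineRunner {

    /** Minimal fact describing the monitored patient parameters. */
    public static class Patient {
        private final String id;
        private final double sedationScore;

        public Patient(String id, double sedationScore) {
            this.id = id;
            this.sedationScore = sedationScore;
        }
        public String getId() { return id; }
        public double getSedationScore() { return sedationScore; }
    }

    public static void main(String[] args) {
        KieServices kieServices = KieServices.Factory.get();
        KieContainer container = kieServices.getKieClasspathContainer();
        KieSession session = container.newKieSession("sedationSession");
        try {
            // Insert the monitored parameters as facts and start the translated rule flow.
            session.insert(new Patient("icu-bed-7", 3.5));
            session.startProcess("sedationGuideline");
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}
```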

    Cost-Based Optimization of Integration Flows

    Integration flows are increasingly used to specify and execute data-intensive integration tasks between heterogeneous systems and applications. There are many different application areas, such as real-time ETL and data synchronization between operational systems. Owing to increasing data volumes, highly distributed IT infrastructures, and high requirements for data consistency and up-to-date query results, many instances of integration flows are executed over time. Due to this high load and to blocking synchronous source systems, the performance of the central integration platform is crucial for the overall IT infrastructure. To meet these high performance requirements, we introduce the concept of cost-based optimization of imperative integration flows, which relies on incremental statistics maintenance and inter-instance plan re-optimization. As a foundation, we introduce the concept of periodical re-optimization, including novel cost-based optimization techniques that are tailor-made for integration flows. Furthermore, we refine periodical re-optimization into on-demand re-optimization in order to overcome the problems of many unnecessary re-optimization steps and of adaptation delays during which optimization opportunities are missed. This approach ensures low optimization overhead and fast workload adaptation.
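
    The step from periodical to on-demand re-optimization can be illustrated with a small sketch in which incrementally maintained statistics trigger re-optimization only when the observed cost drifts away from the optimizer's estimate. The threshold, the statistics model, and all names below are assumptions, not the algorithms defined in the paper.

```java
// Illustrative sketch of on-demand re-optimization: execution statistics are
// maintained incrementally, and a new plan is only computed when the observed
// cost drifts away from the cost estimate of the currently deployed plan.
// Threshold, statistics model and method names are illustrative assumptions.
public class OnDemandReoptimizer {

    private double estimatedPlanCost;     // cost predicted for the active plan
    private double observedCostAverage;   // incrementally maintained runtime statistic
    private long executedInstances;
    private final double driftThreshold;  // relative deviation that triggers re-optimization

    public OnDemandReoptimizer(double initialEstimate, double driftThreshold) {
        this.estimatedPlanCost = initialEstimate;
        this.driftThreshold = driftThreshold;
    }

    /** Called after every executed flow instance with its measured cost. */
    public void recordExecution(double measuredCost) {
        executedInstances++;
        // incremental (running) average instead of recomputing over all instances
        observedCostAverage += (measuredCost - observedCostAverage) / executedInstances;
        if (Math.abs(observedCostAverage - estimatedPlanCost) / estimatedPlanCost > driftThreshold) {
            reoptimize();
        }
    }

    private void reoptimize() {
        // Placeholder: a real implementation would re-enumerate plan alternatives
        // using the refreshed statistics and swap in the cheapest plan.
        estimatedPlanCost = observedCostAverage;
        System.out.println("re-optimization triggered after " + executedInstances + " instances");
    }

    public static void main(String[] args) {
        OnDemandReoptimizer optimizer = new OnDemandReoptimizer(100.0, 0.2);
        for (double cost : new double[] {95, 110, 160, 170, 180}) {
            optimizer.recordExecution(cost);
        }
    }
}
```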

    Combining SOA and BPM Technologies for Cross-System Process Automation

    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach consisting of four distinct steps, as well as specific action items to be performed in every step. The discussion also covers language and tool support and the challenges arising from the transformation.

    The 5th Conference of PhD Students in Computer Science

