    Process-aware web programming with Jolie

    We extend the Jolie programming language to capture the native modelling of process-aware web information systems, i.e., web information systems based upon the execution of business processes. Our main contribution is a unifying approach to programming distributed architectures on the web, capturing web servers, stateful process execution, and the composition of services via mediation. We discuss applications of this approach through a series of examples that cover, e.g., static content serving, multiparty sessions, and the evolution of web systems. Finally, we present a performance evaluation that includes a comparison of Jolie-based web systems to other frameworks and a measurement of their scalability.
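
    The following is a minimal, hypothetical Python sketch of the core idea described above: a web handler routes each request to a long-running, stateful business process instance identified by its session. It illustrates only the concept of stateful process execution behind a web endpoint; it is not Jolie syntax and not the paper's implementation, and all names (PROCESS_STEPS, ProcessInstance, handle_request) are invented.

```python
# Hypothetical sketch: stateful process execution behind a web endpoint.
# Not Jolie code; invented names, for illustration only.

from dataclasses import dataclass, field

# The steps of a toy order-handling business process, in execution order.
PROCESS_STEPS = ["receive_order", "confirm_payment", "ship", "close"]

@dataclass
class ProcessInstance:
    """One stateful process execution, identified by a session id."""
    session_id: str
    step_index: int = 0
    data: dict = field(default_factory=dict)

    def advance(self, payload: dict) -> str:
        """Run the current step, store its payload, and move to the next one."""
        step = PROCESS_STEPS[self.step_index]
        self.data[step] = payload
        self.step_index = min(self.step_index + 1, len(PROCESS_STEPS) - 1)
        return step

# Sessions stand in for the stateful sessions a process-aware server tracks.
sessions: dict[str, ProcessInstance] = {}

def handle_request(session_id: str, payload: dict) -> str:
    """A web handler: route the request to its session's process instance."""
    instance = sessions.setdefault(session_id, ProcessInstance(session_id))
    return instance.advance(payload)

if __name__ == "__main__":
    print(handle_request("s1", {"item": "book"}))  # receive_order
    print(handle_request("s1", {"card": "visa"}))  # confirm_payment (same session)
    print(handle_request("s2", {"item": "lamp"}))  # receive_order (new session)
```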

    Knowledge Components and Methods for Policy Propagation in Data Flows

    Data-oriented systems and applications are at the centre of current developments of the World Wide Web (WWW). On the Web of Data (WoD), information sources can be accessed and processed for many purposes. Users need to be aware of any licences or terms of use associated with the data sources they want to use. Conversely, publishers need support in assigning the appropriate policies alongside the data they distribute. In this work, we tackle the problem of policy propagation in data flows - an expression that refers to the way data is consumed, manipulated and produced within processes. We pose the question of what kind of components are required, and how they can be acquired, managed, and deployed, to support users in deciding what policies propagate to the output of a data-intensive system from the ones associated with its input. We consider three scenarios: applications of the Semantic Web, workflow reuse in Open Science, and the exploitation of urban data in City Data Hubs. Starting from the analysis of Semantic Web applications, we propose a data-centric approach to semantically describe processes as data flows: the Datanode ontology, which comprises a hierarchy of the possible relations between data objects. By means of Policy Propagation Rules, it is possible to link data flow steps and policies derivable from semantic descriptions of data licences. We show how these components can be designed, how they can be effectively managed, and how to reason efficiently with them. In a second phase, the developed components are verified using a Smart City Data Hub as a case study, for which we built an end-to-end solution for policy propagation. Finally, we evaluate our approach and report on a user study aimed at assessing both the quality and the value of the proposed solution.
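
    As a rough illustration of the idea of Policy Propagation Rules, the hypothetical Python sketch below pushes policies along a data flow described as relations between data objects, loosely in the spirit of the Datanode approach. The relation names, rule table, and policies are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of policy propagation over a data flow.
# Relations, rules, and policy names are invented, not from the thesis.

# Policy Propagation Rules: which data-flow relations carry a policy forward.
PROPAGATION_RULES = {
    ("copy", "attribution-required"): True,
    ("copy", "non-commercial"): True,
    ("aggregate", "attribution-required"): True,
    ("aggregate", "non-commercial"): False,  # assumed: aggregation lifts it
}

def propagate(flow, input_policies):
    """Push policies along a data flow.

    flow: list of (source, relation, target) triples describing the process.
    input_policies: dict mapping a data object to its set of policies.
    Returns the policies attached to every data object after propagation.
    """
    policies = {obj: set(ps) for obj, ps in input_policies.items()}
    for source, relation, target in flow:
        for policy in policies.get(source, set()):
            if PROPAGATION_RULES.get((relation, policy), False):
                policies.setdefault(target, set()).add(policy)
    return policies

if __name__ == "__main__":
    flow = [("dataset", "copy", "working_copy"),
            ("working_copy", "aggregate", "report")]
    result = propagate(flow, {"dataset": {"attribution-required",
                                          "non-commercial"}})
    print(result["report"])  # {'attribution-required'}
```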

    Process Based Unification for Multi-Model Software Process Improvement

    A number of differences exist among quality approaches, and there are various situations in which the use of multiple approaches is required, e.g., to strengthen a particular process with multiple quality approaches or to reach certified compliance with a number of standards. First, it has to be decided which approaches hold potential for the organization. In many cases one approach does not contain enough information for process implementation. Consequently, the organization may need to use several approaches, and it has to be decided how the chosen approaches can be used simultaneously. This area is called Multi-model Software Process Improvement (MSPI). The simultaneous use of multiple quality approaches is called the multi-model problem. In this dissertation we propose a solution for the multi-model problem, which we call the Process Based Unification (PBU) framework. The PBU framework consists of the PBU concept, a PBU process and the PBU result. The PBU concept is the mapping of quality approaches to a unified process; it is operationalized by the PBU process. The PBU result includes the resulting unified process and the mapping of quality approaches to the unified process.
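
    The toy Python sketch below illustrates, under invented names, what a PBU-style mapping might look like: practices from several quality approaches are mapped onto the activities of one unified process, so coverage and gaps can be read off the mapping. It is an assumption-laden illustration, not the dissertation's framework.

```python
# Hypothetical sketch: mapping quality approaches to a unified process.
# Approach, practice, and activity names are invented for illustration.

UNIFIED_PROCESS = ["plan", "develop", "verify", "release"]

# Mapping of each quality approach's practices to unified-process activities.
MAPPINGS = {
    "ApproachA": {"A.1 planning": "plan", "A.2 testing": "verify"},
    "ApproachB": {"B.req-mgmt": "plan", "B.build": "develop",
                  "B.audit": "verify"},
}

def coverage(approach: str) -> set[str]:
    """Unified-process activities covered by one approach's practices."""
    return set(MAPPINGS[approach].values())

def unmapped_activities() -> set[str]:
    """Activities no approach maps to: candidates needing extra guidance."""
    covered = set().union(*(coverage(a) for a in MAPPINGS))
    return set(UNIFIED_PROCESS) - covered

if __name__ == "__main__":
    print(coverage("ApproachA"))   # {'plan', 'verify'} (set order may vary)
    print(unmapped_activities())   # {'release'}
```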

    Choreographic Programming


    A method for developing Reference Enterprise Architectures

    Industrial change forces enterprises to constantly adjust their organizational structures in order to stay competitive. In this regard, research acknowledges the potential of Reference Enterprise Architectures (REAs). This thesis proposes REAM - a method for developing REAs. After contrasting organizations' needs with approaches available in the current knowledge base, this work identifies the absence of method support for REA development. Proposing REAM, the author aims to close this research gap and evaluates the method's utility by applying REAM in different naturalistic settings.

    Flexible evolutionary algorithms for mining structured process models


    What's next?: operational support for business process execution

    In the last decade, flexibility has become increasingly important in the area of business process management. Information systems that support the execution of business processes are required to work in a dynamic environment that imposes changing demands on process execution. In academia and industry, a variety of paradigms and implementations have been developed to support flexibility. While these approaches address industry demands for flexibility, they also confront the user with many choices between different alternatives. As a consequence, methods to support users in selecting the best alternative during execution have become essential. In this thesis we introduce a formal framework for providing support to users based on historical evidence available in the execution log of the process. The thesis focuses on support by means of (1) recommendations that provide the user with an ordered list of execution alternatives based on estimated utilities and (2) predictions that provide the user with general statistics for each execution alternative. Typically, estimations are not an average over all observations; they are based on observations for "similar" situations. The main question is what similarity means in the context of business process execution. We introduce abstractions on execution traces to capture similarity between traces in the log. A trace abstraction considers some characteristics of a trace rather than the exact trace; traces that have identical abstraction values are said to be similar. The challenge is to determine those abstractions (characteristics) that are good predictors for the parameter to be estimated in the recommendation or prediction. We analyse the dependency between the values of an abstraction and the mean of the parameter to be estimated by means of regression analysis, obtaining a set of abstractions that explain the parameter to be estimated. Dependencies not only play a role in providing predictions and recommendations to instances at run-time; they are also essential for simulating the effect of changes in the environment on the processes, both locally and globally. We use stochastic simulation models to simulate the effect of changes in the environment, in particular changed probability distributions caused by recommendations. The novelty of these models is that they include dependencies between abstraction values and simulation parameters, which are estimated from log data. We demonstrate that these models give better approximations of reality than traditional models. A framework for offering operational support has been implemented in the context of the process mining framework ProM.
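
    A minimal Python sketch of the log-based recommendation idea described above: estimates for each possible next activity are computed only from historical traces that are "similar" under a trace abstraction, here an invented set abstraction, with remaining trace length standing in for the utility. This is an illustrative reading of the approach under those assumptions, not the ProM implementation.

```python
# Hypothetical sketch: recommendations from "similar" historical traces.
# The abstraction, log, and utility (remaining length) are invented examples.

from collections import defaultdict

def set_abstraction(partial_trace):
    """Abstract a trace to the set of activities seen, ignoring order/counts."""
    return frozenset(partial_trace)

def recommend(log, partial_trace, abstraction=set_abstraction):
    """Rank candidate next activities by mean remaining time in similar traces.

    log: list of completed traces, each a list of activity names; one time
         unit per activity is assumed for simplicity.
    Returns (next_activity, estimated_remaining_time) pairs, best first.
    """
    current = abstraction(partial_trace)
    prefix_len = len(partial_trace)
    remaining = defaultdict(list)
    for trace in log:
        if len(trace) <= prefix_len:
            continue
        # Similar history: same abstraction value over the same prefix length.
        if abstraction(trace[:prefix_len]) == current:
            next_act = trace[prefix_len]
            remaining[next_act].append(len(trace) - prefix_len)
    estimates = {act: sum(ts) / len(ts) for act, ts in remaining.items()}
    return sorted(estimates.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    log = [["a", "b", "c", "d"],
           ["b", "a", "c"],
           ["a", "b", "d", "c", "e"]]
    # All three historical traces match the {a, b} abstraction over their
    # first two activities; 'c' and 'd' are ranked by mean remaining time.
    print(recommend(log, ["a", "b"]))  # [('c', 1.5), ('d', 3.0)]
```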

    Proceedings of VVSS2007 - verification and validation of software systems, 23rd March 2007, Eindhoven, The Netherlands

