
    Data Workflow - A Workflow Model for Continuous Data Processing

    Online or streaming data is becoming increasingly important for enterprise information systems, e.g. through the integration of sensor data and workflows. The continuous flow of data provided, e.g., by sensors requires new workflow models addressing the data perspective of these applications, since continuous data is potentially infinite while business process instances are always finite. In this paper a formal workflow model is proposed with data-driven coordination that makes the properties of continuous data processing explicit. These properties can be used to optimize data workflows, i.e., to reduce the computational power needed for processing the workflows in an engine by reusing intermediate processing results across several workflows.
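
    To make the reuse idea concrete, here is a minimal Python sketch (with hypothetical names; not the paper's formal model) of an engine that shares one intermediate operator, a sliding-window mean, between two data workflows reading the same sensor stream:

        # Hypothetical sketch: an engine that caches intermediate operators so
        # several continuous workflows reuse one computation over the stream.
        from collections import deque

        class SharedWindowMean:
            """Sliding-window mean over a potentially infinite stream."""
            def __init__(self, size):
                self.buf = deque(maxlen=size)
            def push(self, value):
                self.buf.append(value)
            def value(self):
                return sum(self.buf) / len(self.buf) if self.buf else 0.0

        class Engine:
            def __init__(self):
                self.cache = {}  # (operator, source, window) -> operator instance
            def window_mean(self, source, size):
                key = ("mean", source, size)
                if key not in self.cache:          # create once ...
                    self.cache[key] = SharedWindowMean(size)
                return self.cache[key]             # ... reuse everywhere

        engine = Engine()
        w1 = engine.window_mean("temp-sensor-1", 10)  # workflow 1's input
        w2 = engine.window_mean("temp-sensor-1", 10)  # workflow 2's input
        assert w1 is w2  # the intermediate result is computed only once
        w1.push(20.0)
        print(w2.value())  # 20.0 -- both workflows see the shared state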

    Graphing of E-Science Data with varying user requirements

    In our experience in the Swiss Experiment, experimental scientific data is often explored visually: starting from a global overview, users zoom in on interesting events. For huge data volumes, special data structures have to be introduced to provide fast and easy access to the data. Since it is hard to predict how users will work with the data, a generic approach requires self-adaptation of these data structures. In this paper we describe the underlying NP-hard problem and present several approaches, with varying properties, to address it. The approaches are illustrated with a small example and evaluated with a synthetic data set and user queries.
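
    As an illustration of the kind of special data structure involved (a generic sketch, not one of the paper's approaches), a multi-resolution pyramid of precomputed aggregates lets a viewer pick the finest level that still fits the display budget at any zoom level:

        # Hypothetical sketch: precomputed aggregates at halving resolutions.
        def build_pyramid(samples):
            """Each level halves the resolution by averaging adjacent pairs."""
            levels = [list(samples)]
            while len(levels[-1]) > 1:
                prev = levels[-1]
                levels.append([(prev[i] + prev[i + 1]) / 2
                               for i in range(0, len(prev) - 1, 2)])
            return levels

        def query(levels, lo, hi, max_points):
            """Pick the finest level whose point count for [lo, hi) fits the display."""
            for level, data in enumerate(levels):
                scale = 2 ** level
                pts = data[lo // scale:(hi + scale - 1) // scale]
                if len(pts) <= max_points:
                    return level, pts
            return len(levels) - 1, levels[-1]

        levels = build_pyramid(range(1024))
        level, pts = query(levels, 0, 1024, max_points=64)
        print(level, len(pts))  # 4 64 -- coarse data for a zoomed-out view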

    Piloting an Empirical Study on Measures for Workflow Similarity

    Service discovery for state-dependent services has to take workflow aspects into account. To increase the usability of service discovery, the result list of services should be ordered by the relevance of the services, yet means of ordering a list of workflows by their similarity to a query are missing. This paper presents a pilot of an empirical study on the influence of different measures on workflow similarity. It turns out that, although the results are preliminary, relations between different measures are indicated, and that a suitable similarity definition depends on the application scenario in which the service discovery is applied.
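
    For intuition about how different measures can disagree (illustrative only; these are not the paper's measure definitions), compare a node-based and an edge-based Jaccard similarity on the same pair of workflows:

        # Illustrative similarity measures (not the paper's definitions).
        def jaccard(a, b):
            """|a & b| / |a | b| for two sets."""
            return len(a & b) / len(a | b) if a | b else 1.0

        def node_similarity(wf_a, wf_b):
            # Compares activity labels only; ignores ordering.
            return jaccard(set(wf_a["activities"]), set(wf_b["activities"]))

        def edge_similarity(wf_a, wf_b):
            # Compares control-flow edges, so ordering matters.
            return jaccard(set(wf_a["edges"]), set(wf_b["edges"]))

        query = {"activities": {"receive", "check", "ship"},
                 "edges": {("receive", "check"), ("check", "ship")}}
        cand = {"activities": {"receive", "check", "bill", "ship"},
                "edges": {("receive", "check"), ("check", "bill"), ("bill", "ship")}}
        print(node_similarity(query, cand))  # 0.75 -- looks quite similar
        print(edge_similarity(query, cand))  # 0.25 -- ordering tells another story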

    A language for information commerce processes

    Automating information commerce requires languages to represent the typical information commerce processes. Existing languages and standards either cover only very specific types of business models or are too general to capture concisely the specific properties of information commerce processes. We introduce a language that is specifically designed for information commerce. It can be used directly to implement the processes and communication required in information commerce, and it covers existing business models known from standards proposals and from existing information commerce applications on the Internet. The language has a concise logical semantics. In this paper we present the language concepts and an implementation architecture.

    Observation Centric Sensor Data Model

    Management of sensor data requires metadata to understand the semantics of observations. While e-science researchers have high demands on metadata, they are selective about entering it. This paper argues for focusing on the essentials, i.e., the actual observations, described by location, time, owner, instrument, and measurement. The applicability of this approach is demonstrated in two very different case studies.
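
    A minimal sketch of such an observation-centric record (the five field names come from the abstract; the concrete schema and the sample values are assumptions):

        # Sketch of an observation record; schema details are assumptions.
        from dataclasses import dataclass
        from datetime import datetime

        @dataclass
        class Observation:
            location: tuple          # e.g. (latitude, longitude)
            time: datetime
            owner: str
            instrument: str
            measurement: float

        obs = Observation(location=(46.52, 6.57),
                          time=datetime(2011, 6, 1, 12, 0),
                          owner="SwissEx",          # hypothetical values
                          instrument="thermometer-03",
                          measurement=21.4)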

    Start Time and Duration Distribution Estimation in Semi-Structured Processes

    Semi-structured processes are business workflows whose execution is not completely controlled by a workflow engine, i.e., an implementation of a formal workflow model. Examples are workflows where actors interact with customers and report the result of the interaction in a process-aware information system. Building a performance model for resource management in these processes is difficult since the required information is only partially recorded. In this paper we propose a systematic approach for creating an event log that is suitable for available process mining tools. This event log is created by incrementally cleansing the data. The proposed approach is evaluated in an experiment.
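
    One possible cleansing step of this kind (a hypothetical sketch, not the paper's procedure): estimate a missing start time from the completion time of the same actor's previous task, then keep only rows complete enough for mining:

        # Hypothetical cleansing step, not the paper's exact procedure.
        def cleanse(records):
            """records: dicts with case, activity, actor, start (may be None), end."""
            last_end = {}  # actor -> end time of that actor's previous task
            log = []
            for rec in sorted(records, key=lambda r: r["end"]):
                if rec["start"] is None and rec["actor"] in last_end:
                    # Estimate the missing start from the actor's previous task.
                    rec = {**rec, "start": last_end[rec["actor"]]}
                if rec["start"] is not None:   # keep only mineable rows
                    log.append(rec)
                last_end[rec["actor"]] = rec["end"]
            return log

        raw = [{"case": 1, "activity": "call", "actor": "ann", "start": 9, "end": 10},
               {"case": 1, "activity": "mail", "actor": "ann", "start": None, "end": 12}]
        print(cleanse(raw))  # the second row gets start=10, inherited from 'call'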

    Towards Automatic Capturing of Manual Data Processing Provenance

    Often data processing is not implemented by a workflow system or an integration application but is performed manually by humans along the lines of a more or less specified procedure. Collecting provenance information during manual data processing cannot be automated, and manual collection of provenance information is error-prone and time-consuming. Therefore, we propose to infer provenance information from the read and write accesses of users. The derived provenance information is complete but has low precision. We therefore further propose introducing organizational guidelines in order to improve the precision of the inferred provenance information.
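
    The inference idea can be sketched as follows (hypothetical code, faithful to the abstract's over-approximation: every write is linked to everything the user has read so far, which is complete but imprecise):

        # Hypothetical sketch of the over-approximating inference.
        def infer_provenance(access_log):
            """access_log: time-ordered (user, op, item), op in {'read', 'write'}."""
            reads = {}       # user -> items read so far
            provenance = {}  # written item -> possible source items
            for user, op, item in access_log:
                if op == "read":
                    reads.setdefault(user, set()).add(item)
                else:  # write: link to every prior read by this user
                    provenance.setdefault(item, set()).update(reads.get(user, set()))
            return provenance

        log = [("ann", "read", "a.csv"), ("ann", "read", "b.csv"),
               ("ann", "write", "report.pdf")]
        print(infer_provenance(log))  # {'report.pdf': {'a.csv', 'b.csv'}}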

    On Formal Consistency between Value and Coordination Models

    In information systems (IS) engineering, different techniques for modeling inter-organizational collaborations are applied. In particular, value models estimate the profitability for the involved stakeholders, whereas coordination models are used to agree upon the inter-organizational processes before implementing them. In addition, during the execution of an inter-organizational collaboration, event logs are collected by the individual organizations, representing another view of the IS. The two models and the event log together represent the IS and should therefore be consistent, i.e., not contradict each other. Since the models are provided by different user groups at design time while the event log is collected at run time, consistency is not straightforward. Inconsistency occurs when models contain conflicting descriptions of the same information, i.e., there exists a conflicting overlap between the models. In this paper we introduce an abstraction of value models, coordination models, and event logs which allows ensuring and maintaining alignment between the models and the event log. We demonstrate its use by outlining a proof of an inconsistency-resolution result based on this abstraction. Thus, the introduced abstractions make it possible to explore formal inter-model relations based on consistency.
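
    The notion of conflicting overlap can be illustrated by abstracting each model or log to a set of assertions over (subject, attribute) keys (a sketch under assumed structure, not the paper's abstraction):

        # Sketch: abstract each view as assertions over (subject, attribute).
        def conflicts(view_a, view_b):
            """Return the conflicting overlap: shared keys with unequal values."""
            overlap = view_a.keys() & view_b.keys()
            return {k: (view_a[k], view_b[k]) for k in overlap
                    if view_a[k] != view_b[k]}

        value_model = {("delivery", "paid_by"): "customer"}
        event_log = {("delivery", "paid_by"): "supplier",
                     ("order", "status"): "done"}
        print(conflicts(value_model, event_log))
        # {('delivery', 'paid_by'): ('customer', 'supplier')} -> inconsistent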

    webXice: an Infrastructure for Information Commerce on the WWW

    Systems for information commerce on the WWW have to support flexible business models if they are to cover the wide range of requirements imposed by different types of information businesses. This leads to non-trivial functional and security requirements on both the provider and the consumer side, for which we introduce an architecture and a system implementation, webXice. We focus on the question of how participants with minimal technological requisites, i.e., solely a standard Web browser, can be enabled to participate in information commerce at the system level without sacrificing the functionality and security required of an autonomous participant in an information commerce scenario. In particular, we propose an implementation strategy to efficiently support persistent message logging for light-weight clients, which enables clients to collect and manage non-repudiable messages as proofs. We believe that the capability to support minimal system platforms is a necessary precondition for the widespread use of any information commerce infrastructure.
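
    A minimal sketch of persistent, tamper-evident message logging for a light-weight client (not webXice's actual protocol; real non-repudiation would additionally require the sender's digital signature on each entry):

        # Hypothetical sketch: an append-only, hash-chained message log.
        import hashlib, json

        class MessageLog:
            def __init__(self):
                self.entries = []
                self.head = "0" * 64  # hash of the empty chain

            def append(self, message):
                record = {"prev": self.head, "msg": message}
                digest = hashlib.sha256(
                    json.dumps(record, sort_keys=True).encode()).hexdigest()
                self.entries.append((record, digest))
                self.head = digest

            def verify(self):
                prev = "0" * 64
                for record, digest in self.entries:
                    data = json.dumps(record, sort_keys=True).encode()
                    if record["prev"] != prev or hashlib.sha256(data).hexdigest() != digest:
                        return False  # the log was altered after the fact
                    prev = digest
                return True

        log = MessageLog()
        log.append({"from": "provider", "type": "offer", "price": 10})
        log.append({"from": "consumer", "type": "accept"})
        assert log.verify()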

    What are the Problem Makers: Ranking Activities According to their Relevance for Process Changes

    Recently, a new generation of adaptive process management technology has emerged which enables dynamic changes of composite services and process models, respectively. This, in turn, results in a large number of process variants derived from the same process model but differing in structure due to the applied changes. Since such process variants are expensive to maintain, the process model should be evolved accordingly. In this context, we need to know which activities have been involved in process adaptations more often than others, so that we can focus on them when reconfiguring the process model. This paper provides two approaches for ranking activities according to their involvement in process adaptations. The first one ranks the activities precisely but is expensive to perform, since the algorithm's complexity is at the NP level. We therefore provide, as an alternative, an approximation ranking algorithm that runs in polynomial time. The performance of the approximation algorithm is evaluated and compared through a simulation of 3600 process models. Statistical significance tests indicate that the performance of the approximation ranking algorithm does not depend on the size of the process models, i.e., our algorithm scales.
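
    A polynomial-time ranking in the spirit described, shown as a hypothetical sketch (the paper's approximation algorithm is not reproduced here): rank activities by how often they occur in the change operations behind the variants:

        # Hypothetical frequency-based ranking, polynomial in the input size.
        from collections import Counter

        def rank_activities(change_logs):
            """change_logs: per variant, a list of (operation, activity) pairs."""
            counts = Counter(act for log in change_logs for _, act in log)
            return [act for act, _ in counts.most_common()]

        variants = [
            [("delete", "check_credit"), ("insert", "notify")],
            [("move", "check_credit")],
            [("delete", "check_credit"), ("insert", "archive")],
        ]
        print(rank_activities(variants))
        # ['check_credit', 'notify', 'archive'] (ties broken arbitrarily)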